Every time we start talking about measuring employee communication here at Davis & Company, the conversation invariably turns to one of our pet peeves: bad survey questions.

To be fair, survey questions are tricky. Writing effective questions requires pinpoint precision. You can't be flabby or effusive. Ambiguity is not your friend.

The good news is that the more you know what a bad question looks like, the better you'll get at writing good ones. Here are 5 mistakes to understand, and avoid:

It's not black and white
We're living in a nuanced world, and very rarely can the answer to a question be as black and white as yes or no. For example, a few years ago, I was asked to complete a survey about a magazine I sometimes read. One question went like this: “Do you find this publication valuable in doing your job?” with a yes/no response. I found myself wanting to answer, “It depends,” which was a clue that the surveyor should have used a five-point scale to give me a chance to answer more precisely.

And another thing . . .
A good question focuses on one topic. Once you add a second dimension by putting an “and” in the question, you've created a double question. Here are two examples: “Employees and supervisors are held accountable for quality.” And “Communication from leaders is meaningful and effective.” As a respondent, I might believe that employees are held accountable, but not supervisors (or vice versa). Ban the “and”; instead, break double questions into two.

Very, very vague
Do you ever find yourself taking a survey and wondering what a question is about? Usually, it's because what's being asked is too broad or general. For example, a question we see a lot is: “How effective is communication?” (on a scale of 1 to 5). “Effective” is a vague term, as is “communication.” The employee answering might think HR communication is great, but corporate messages not so good. The result: too much vagueness leads to inconclusive data.

Not enough choices
One of my favorite bad questions came from a survey my town sent me about Department of Public Works services. Here's how it read: “Indicate your preference for bagging leaves in the fall (versus putting them at the curb) OR taking household trash cans to the curb (versus rear yard pickup).” The possible answers: A. I would prefer to bag leaves; B. I would prefer to take my trash to the curb; C. No preference for one or the other. As a respondent, I didn't like any of the three answers. (A better strategy: Separate this question into at least two questions and let me indicate how strongly I feel about leaves in one question and trash in the next.)

Is this scale accurate?
When I take my car for service, my dealer sends me an electronic survey afterwards. The survey always uses a 10-point scale that goes from 1 (Unacceptable) to 10 (Exceptional). Right away, the 10-point scale is a problem for me: From the survey respondent's point of view, 10 presents a large number of choices, maybe too many.

But the real problem with the dealer's 10-point scale is that it's unevenly weighted toward the positive. The scale has 1 on the left and 10 on the right. Above it, not exactly aligned to the numbers, are the descriptive terms: Unacceptable (roughly above numbers 1 to 2), Average (above 3 to 5), Outstanding (above 6 to 8) and Exceptional (above 9 and 10). If you consider average to be neutral to positive, the numbers 5 through 10 are all affirmative. That means the dealer is not getting the straight story.

Want to learn more about measurement? Check out our resources.
