Published by Quirk's Media on July 5th, 2018
Communication is something everyone does every day, but it is not an inherently simple process. Miscommunication happens. People must have the skills to encode and decode messages. They must sort through emotions and noise that can distort those messages. Most importantly, as anyone who has found themselves embroiled in a semantic argument can tell you, they must trust they are using the same code. All questionnaires, whether qualitative or quantitative, subjective or objective, rest on these fundamentals of communication, and in self-report research, quality data depends on minimizing miscommunication between respondents and researchers.
When designing questionnaires, researchers have a notion of what they want to learn and must construct a question (or questions) to find it out. To do this, researchers encode their inquiry into symbols (i.e., language) in the form of a question and possible response options. Respondents must then decode those symbols to understand the message (i.e., the question). The message then travels back to the researchers through another set of encoding and decoding processes: respondents develop answers in their heads, encode those answers in the form of a response, and finally the researchers decode the symbols in the response to learn the answers they are looking for.
Closed-ended questions, particularly common in quantitative research, are assumed to minimize miscommunication because the researchers provide the code they are using and ask respondents to use the same code: “On a scale from 0 to 10, with 0 being ‘not at all’ and 10 being ‘definitely,’ how likely are you to purchase a new car in the next year?” If a respondent knows for sure they will not be buying a car, they pick zero. If they know they will definitely buy a car, they pick ten. If they are not 100% confident that the answer is “yes” or “no,” they have nine levels of uncertainty from which to choose.
What happens, then, if the response code includes a separate “not sure” option? Does “not sure” mean the same thing to the researchers as it does to the respondents? What if “not sure” is lumped together with “N/A”? Is that the response-option equivalent of a double-barreled question? In response to a question, a person could conceivably answer “not sure” for at least three different reasons.
The first reason to respond with a “not sure” is because the respondent simply does not have enough information to provide an answer. This is particularly relevant for concrete questions that ask for specific, factual information. I am not sure how many nurses use non-latex gloves; I have never paid attention or bothered to find out. When a researcher intends to include a response option which means the respondent is unaware or does not have enough information to satisfactorily answer the question, placing this option outside of the response scale makes sense (Figure 1). It is important in this case to separate those with knowledge or awareness from those without. They are qualitatively different types of respondents. Oftentimes, researchers may assume respondents will decode “not sure” to mean “I don’t have enough information to provide a meaningful answer”; however, this may not always be the case, and to avoid ambiguity it may be best to label the response “not aware” rather than “not sure”.
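As a sketch of this design (a hypothetical question structure for illustration, not a reproduction of Figure 1), the awareness option can be kept off-scale so that respondents who lack the information are never averaged in as if they held a neutral attitude:

```python
# Hypothetical question specification illustrating an off-scale awareness option.
# The substantive scale measures the attitude; the off-scale option screens out
# respondents who do not have enough information to answer meaningfully.
question = {
    "text": "How satisfied are you with the non-latex gloves used at your facility?",
    "scale": [
        "Very dissatisfied",
        "Dissatisfied",
        "Neither satisfied nor dissatisfied",
        "Satisfied",
        "Very satisfied",
    ],
    "off_scale": ["Not aware / no basis to answer"],
}

# Respondents choosing the off-scale option are excluded from scale statistics
# rather than being treated as a point on the attitude continuum.
substantive_points = len(question["scale"])
print(substantive_points)  # 5
```

Labeling the off-scale option “not aware” rather than “not sure,” as suggested above, makes the intended decoding explicit.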
The sentiment of “not sure” could also be a valid attitude. In more subjective measures of thoughts and feelings, being unsure about an answer is legitimate and meaningful. An attitude is an evaluation that varies in intensity and can often vary in direction (e.g. you can feel positively and/or negatively about something). Standard Likert-type scales measure attitudes by asking respondents to distinguish along a continuum of strong positive feelings (e.g. “I strongly agree”) to strong negative feelings (e.g. “I strongly disagree”). In these cases, “not sure” can represent two different types of attitudes: (1) a neutral feeling (e.g. I don’t feel strongly one way or the other) or (2) an ambivalent feeling (e.g. I hold both positive and negative feelings about this, so I am not sure how to respond). Neutral feelings ought to be captured along the response scale via the mid-point of the scale (Figure 2).
Although an argument may be made that the reason the person holds neutral feelings is because they do not have enough information, this should be assessed via a different question. The specific question asks how they feel, and “not sure” is legitimately how they feel. If you want to further understand why someone feels the way they do, follow-up questions can do this nicely and equally apply to positive, negative, and neutral feelings. Just as there are qualitative differences between people who feel neutral because they (a) have enough information and don’t care or (b) don’t have enough information and haven’t formed an opinion, there are qualitative differences between those who feel positive/negative because of (a) direct experiences or (b) indirect experiences based on what they have heard.
Ambivalent feelings are a bit more complicated and require more than one item to assess. Respondents may indicate they are “not sure” on a single-item scale because the emotional average of their competing feelings is neutral. If asked how much I like exercise, I may answer that I am not sure, but not because I feel neutral about it. On the one hand, I know that exercise will help me achieve desired health goals (a positive). On the other hand, I also know that exercise takes time, makes me sore, and is not particularly fun (negatives). If I am to sum up all those feelings into a single attitude, a “not sure” would be appropriate, but not equivalent to neutral. In these situations, a type of scale called a semantic differential would be appropriate to capture respondents’ feelings (Figure 3). This scale allows for the rating of a single object on several different bipolar adjectives, providing a more granular understanding of the feelings toward a specific object (Cozby & Bates, 2015). On this type of scale, I could rate exercise high on “healthy” and “boring”, rather than aggregating them to a single “not sure”.
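To make the averaging problem concrete, here is a small sketch (illustrative numbers only, not data from the article) of how a single-item rating collapses ambivalence into an apparent neutral, while a semantic differential keeps the competing feelings visible:

```python
# Illustrative only: ratings on a 1-7 scale, where 4 is the mid-point.

# An ambivalent respondent holds one strong positive and one strong negative feeling.
competing_feelings = [7, 1]  # e.g., "exercise is healthy" (7), "exercise is fun" (1)
single_item_score = sum(competing_feelings) / len(competing_feelings)

# A genuinely neutral respondent simply picks the mid-point.
neutral_score = 4.0

# On a single item the two respondents are indistinguishable.
print(single_item_score == neutral_score)  # True

# A semantic differential rates the same object on several bipolar adjective
# pairs, so the distinct feelings survive instead of averaging away.
semantic_differential = {
    "unhealthy-healthy": 7,
    "boring-fun": 1,
    "harmful-beneficial": 6,
}
```

The single-item score of 4.0 looks exactly like a neutral response, which is precisely the ambiguity the semantic differential resolves.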
Returning to the context of communication, it is important in close-ended questions that the researchers encode the option of “not sure” clearly, so that the respondents’ interpretation and use of this code matches the researchers’ intent. If researchers want an option for “not enough information”, it could be written that way so as not to confuse it with other possible “not sure” meanings. Furthermore, because “neutral” is a legitimate feeling, researchers should not hesitate to place “not sure” on the mid-point of a scale.
If there is a concern that too many respondents may fall back on “not sure” as a crutch, there are ways to minimize that. One solution is to use an even number of response options so there is no definitive mid-point and respondents are forced to lean either positively or negatively. Another solution is to use more response options in a scale, for instance presenting a scale from 1-7 or 1-9 rather than 1-5. Matell & Jacoby (1972) found that participants are less likely to rely on the mid-point of a scale when more response alternatives are available; increasing the number of points from 5 to 7 resulted in about a 10% drop in reliance on the mid-point.
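The first of these options can be sketched with a small hypothetical helper (not from the article): an even number of points removes the definitive mid-point and forces a lean, while an odd number retains one:

```python
def build_scale(n_points):
    """Return the scale points and the definitive mid-point, if one exists.

    An odd number of points has a single mid-point; an even number does not,
    which forces respondents to lean positive or negative.
    """
    points = list(range(1, n_points + 1))
    midpoint = points[n_points // 2] if n_points % 2 == 1 else None
    return points, midpoint

print(build_scale(6))  # ([1, 2, 3, 4, 5, 6], None) -- forced choice, no mid-point
print(build_scale(7))  # ([1, 2, 3, 4, 5, 6, 7], 4) -- mid-point available
```

Whether a forced-choice scale is appropriate depends on whether neutrality is a legitimate answer for the construct being measured, as discussed above.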
There are many other high-stakes considerations when designing questionnaires, such as making sure all business objectives are addressed, that the topics covered are useful for developing insights, and that all items will fit into the limited time window respondents are expecting. Amid all of this, it is easy to miss the potential for miscommunication when writing questions. After all, you know exactly what you mean, right? But do all the respondents? The “not sure” example discussed here is just one possible misalignment between researcher encoding and respondent decoding, but addressing it can go a long way toward improving the quality of the data collected.
Cozby, P.C. & Bates, S.C. (2015). Methods in behavioral research (12th ed.). New York: McGraw-Hill Education.
Matell, M.S. & Jacoby, J. (1972). Is there any optimal number of alternatives for Likert-scale items? Effects of testing time and scale properties. Journal of Applied Psychology, 56(6), 506-509.