The term questionnaire denotes a collection of items designed to measure one or more underlying constructs. Given that questionnaires are one of the most widely used research tools in the social sciences, it is not surprising that a large body of literature has developed around various design features in their use.
Researchers who use questionnaires must first decide what method of administration to employ. One approach is to use a self-administered questionnaire, such as the traditional paper-and-pencil-based booklet completed in a supervised setting or mailed to respondents and completed at their homes or workplaces. Recently, it has become popular to post self-administered questionnaires on Web sites that can be accessed via the Internet. Alternatively, questionnaires can be administered using interviewers to conduct telephone or face-to-face interviews. In choosing one of these methods, researchers should consider the attributes of the project, the possibility of social desirability effects, and the ease of administration.
Due to their cost-effectiveness and ease of administration, self-administered questionnaires (in either the traditional or Internet-based form) are popular among social scientists. Because of the sense of anonymity provided by self-administered measures, this method lessens the likelihood of social desirability effects. Thus, these types of questionnaires are especially useful when studying highly sensitive topics. Furthermore, self-administered measures are self-paced, which ensures that respondents have sufficient time to generate meaningful answers to the questions. Internet-based questionnaires tend to be especially cost-efficient, because expenses often associated with a research project (e.g., photocopying of materials and payment of research assistants) are minimized.
Despite such advantages, there are limitations associated with self-administered measures. If researchers mail their questionnaires, they may obtain very low response rates. Furthermore, individuals who take the time to complete the measure may not be representative of the intended sample. Self-administered questionnaires also may not be suitable for respondents with limited cognitive skills or topics that are complex and require extensive explanation.
One alternative to using self-administered questionnaires is to conduct telephone interviews. Telephone interviews are associated with substantially higher response rates than the use of self-administered questionnaires, which lessens the possibility that one's data will be compromised by nonresponse error. They also allow researchers to probe respondents' answers if they are initially unclear. Unfortunately, there are several drawbacks associated with this method of administration. These include substantially greater expense relative to self-administered questionnaires and increased vulnerability to social desirability effects.
Researchers may also administer their questionnaires via face-to-face interviews. Face-to-face interviews are ideal when one's sample consists of individuals with limited cognitive or verbal abilities, as this format allows researchers to explain more challenging items through the use of visual props (e.g., show cards). Like telephone interviews, face-to-face interviews allow researchers to clarify the meaning of ambiguous questions and probe respondents for clarification of their answers. However, face-to-face interviews are more costly and time-consuming than other methods of administration. They are also the most vulnerable to social desirability effects, potentially making them an inappropriate method of administration when highly sensitive topics are being studied.
One decision that researchers must make in designing the actual questions that make up a survey is whether to use closed-ended questions (which include response alternatives) or open-ended questions (which allow respondents to generate their own answers). Measures containing closed-ended questions may be easier to interpret and complete. However, such questions may fail to provide response options that accurately reflect the full range of respondents' views. While open-ended questions eliminate this problem, they pose other difficulties. To interpret and analyze responses to open-ended questions, researchers must engage in the costly and time-consuming process of developing a coding scheme (i.e., a way to categorize the answers) and training research assistants to use it.
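The coding process described above can be illustrated with a minimal sketch. The categories, keywords, and example answers below are invented for illustration; in practice, a coding scheme is developed from the data itself and applied by trained coders rather than by keyword matching.

```python
# Hypothetical illustration: a simple coding scheme that maps open-ended
# answers (here, about reasons for exercising) to analysis categories.
CODING_SCHEME = {
    "health": ["health", "doctor", "weight"],
    "social": ["friends", "team", "together"],
    "enjoyment": ["fun", "enjoy", "love"],
}

def code_response(answer: str) -> str:
    """Assign an open-ended answer to the first matching category."""
    text = answer.lower()
    for category, keywords in CODING_SCHEME.items():
        if any(word in text for word in keywords):
            return category
    return "other"  # uncodeable responses fall into a residual category

responses = [
    "My doctor told me to lose weight.",
    "I just love being outdoors.",
    "It's a way to see my friends.",
]
codes = [code_response(r) for r in responses]
print(codes)  # -> ['health', 'enjoyment', 'social']
```

Once answers have been coded into categories like these, they can be tallied and analyzed with the same techniques used for closed-ended data.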
There are several guidelines with respect to question wording that should be followed in generating effective items. Questions should be short and unambiguous, and certain types of items should be avoided. These include “double-barreled questions,” which force respondents to give a single answer to two separate issues. Consider the dilemma faced by the respondent who encounters the following question: “How positively do you feel about introducing new social programs, such as government-subsidized day care, to this state?” The respondent might generally harbor negative feelings about social programs but might view subsidized day care programs quite positively. In light of such conflicting views, this question would be difficult to answer.
It is also worth noting that questions that are substantively identical may yield different answers from respondents, depending on the specific wording used. Consider the following two questions:
- Is it appropriate for libraries to not be allowed to carry certain books?
- Is it appropriate for libraries to be forbidden to carry certain books?
Technically, the meaning inherent in these two questions is the same. However, people might respond to them very differently, because “forbidden” has stronger negative connotations than “not allowed.” Thus, subtle variations in wording can have a dramatic effect on responses.
If closed-ended questions are to be used, researchers must decide on an appropriate response format. When researchers wish respondents to indicate their relative preferences for a series of objects, they may choose to ask respondents to rank order them instead of rating each one. The main advantage of this approach is that it eliminates the problem of nondifferentiation between objects. Specifically, when evaluating numerous objects using a rating scale, respondents may be forced to assign the same rating to several of them (e.g., rating ten objects on a 5-point scale guarantees at least some ties). If respondents rank order the objects, nondifferentiation ceases to be a problem.
However, there are difficulties associated with this approach. Rank ordering generally does not allow for the possibility that respondents may feel the same way toward multiple objects, and it may compel them to report distinctions that do not exist. Furthermore, respondents may find the process of ranking a large number of objects burdensome. Rank-ordered data can also be difficult to analyze, as such data do not lend themselves to many statistical techniques commonly used by social scientists.
When researchers employ questions that entail choosing between alternatives, the order of the response options may influence answers. The nature of such order effects depends on the method of administration. If self-administered measures are being used, primacy effects (i.e., biases toward selecting one of the first options presented) may pose a threat to the validity of the data. However, if the questionnaire is being orally administered by an interviewer, recency effects (biases toward selecting one of the latter options) may occur. Researchers can safeguard against these issues by counterbalancing response options across respondents and testing for order effects.
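The counterbalancing safeguard mentioned above can be sketched as follows. The option labels and the simple even/odd assignment rule are invented for this illustration; the point is only that half the sample sees one order and half sees the reverse, so primacy or recency biases average out.

```python
# Hypothetical sketch: counterbalancing response-option order across
# respondents so that primacy effects (in self-administered forms) or
# recency effects (in oral administration) cancel across the sample.
OPTIONS = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]

def options_for(respondent_id: int) -> list:
    """Show half the sample the forward order, half the reversed order."""
    if respondent_id % 2 == 0:
        return list(OPTIONS)
    return list(reversed(OPTIONS))

# Recording which order each respondent saw permits a later test for
# order effects (e.g., comparing answer distributions across the groups).
presented = {rid: options_for(rid) for rid in range(4)}
```

Keeping a record of the presented order for each respondent is what makes the subsequent statistical test for order effects possible.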
If researchers employ scale-based measures, they must decide on the number of scale points to include. If too few points are used, the measure may be insensitive to variability in respondents' answers. On the other hand, too many scale points may make the differences between points difficult for respondents to interpret, thereby increasing random error. Generally speaking, the optimal length for unipolar scales (measures used to assess the extremity or amount of a construct) is 5 points, while the optimal length for bipolar scales (measures in which the end points reflect opposing responses) ranges from 5 to 7 points.
Researchers must also decide whether to label the points on their scales. Labeling scale points generally helps respondents to interpret them as intended. However, if the questionnaire is to be administered via telephone, respondents may have difficulty remembering verbal labels. In such instances, numeric labels are preferable.
Another decision is whether to include a midpoint in one's scale. Although midpoints are useful when there is a meaningful neutral position associated with a question, there are potential disadvantages to including one. Because the midpoint of a scale is often interpreted as reflecting neutrality, people who are not motivated to consider the items carefully may automatically gravitate toward the middle of the scale. Furthermore, the meaning of the midpoint may be somewhat ambiguous. Unless a researcher stipulates what the midpoint signifies, respondents can interpret it in several ways. A midpoint response to an item could potentially indicate ambivalence, neutrality, or indifference on the part of the individual completing the questionnaire. As a result, researchers who fail to label their scales clearly may find it impossible to ascertain what midpoint responses signify.
When people are queried about their attitudes, they may sometimes generate random, spur-of-the-moment responses. This is especially likely when respondents are asked about issues that they think about infrequently. One method of avoiding this problem is to incorporate nonresponse options into one's questions. This allows respondents to indicate that they are unsure of their opinions and alleviates the pressure to generate a substantive response instantaneously. The difficulty associated with this technique is that respondents who actually have opinions about an issue may simply select the nonresponse option if they are not motivated to consider the questions carefully. An alternative approach is to ask respondents to indicate how strongly they feel about their answers to each question (e.g., the certainty of their responses, how important they consider the issue to be). This method requires people to respond to each item, while allowing the researcher to gauge the strength of their answers.
The way in which researchers structure their questionnaires can have a profound impact on the types of responses that people provide. Researchers can either order questions by thematic content or randomize them. Often, social scientists have assumed that randomizing their items is more appropriate. However, studies have indicated that organizing items by thematic content makes it easier for respondents to process the content of the questions, thereby reducing random error.
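The two ordering strategies contrasted above can be sketched side by side. The item texts and theme labels below are invented for illustration.

```python
import random
from collections import defaultdict

# Hypothetical sketch: ordering questionnaire items thematically versus
# randomly. Items are (theme, question) pairs invented for illustration.
items = [
    ("work", "How satisfied are you with your job?"),
    ("health", "How often do you exercise?"),
    ("work", "How fairly do you feel you are paid?"),
    ("health", "How would you rate your diet?"),
]

def thematic_order(item_list):
    """Group items by theme, preserving within-theme order."""
    grouped = defaultdict(list)
    for theme, text in item_list:
        grouped[theme].append((theme, text))
    return [item for theme in grouped for item in grouped[theme]]

def randomized_order(item_list, seed=0):
    """Shuffle items without regard to theme."""
    rng = random.Random(seed)
    shuffled = list(item_list)
    rng.shuffle(shuffled)
    return shuffled
```

As the text notes, the thematic ordering tends to be easier for respondents to process, since related questions appear together rather than being scattered throughout the measure.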
Researchers should also minimize the potential for order effects in their questionnaires. When they complete questionnaire-based measures, people tend to adhere to conversational norms. More specifically, they adhere to the same set of implicit rules that keep regular social interactions running smoothly. For example, respondents avoid providing redundant information when they complete questionnaires. Thus, using questions that overlap in content (e.g., a question pertaining to how satisfied people are with their social lives, followed by a question pertaining to how satisfied people are with their lives in general) may prompt respondents to generate answers that are very different from the ones that they would have given if each question had been posed separately. On a related note, the initial questions in a measure may prime certain concepts or ideas, thereby influencing respondents' answers to subsequent items.
Several steps can be taken to circumvent the problems associated with question order effects. For instance, order effects can be minimized by counterbalancing items. Filler items (i.e., questions that do not pertain to the phenomena being studied) can also be used in order to “isolate” questions that could influence people's responses to subsequent items. Finally, sensitive, controversial questions should be situated at the end of a measure, so that they do not affect respondents' willingness to complete the other items.
In sum, if researchers exercise forethought in designing their questionnaires, the end product of their efforts will likely be a valid measure that is well suited to its purpose.
See also: Measurement; Measurement Error; Reliability Theory; Validity Theory