So far, we have talked about a number of different aspects of doing a survey.
In this chapter, we’re going to talk about how you carry out the survey. We’re not going to get into the nuts and bolts of doing a survey. There are lots of good books that will do this, and we’ll mention them in the annotated bibliography at the end of this chapter. Rather, we’re going to describe the steps that every researcher must go through in carrying out a survey.
Developing the Survey
Let’s assume that you want to do a survey of adults in your county to determine their perception of the quality of life. You know that there are certain areas that you want to explore, including perceptions of crime and the economy. You want to develop a survey that can be repeated on an annual or biannual basis to track how perceived quality of life varies over time. You’re aware of other quality-of-life surveys to which you would like to compare your survey results. What should you do to begin developing your survey?
Looking at Other Surveys
It’s often helpful to look at the types of questions that other researchers have used. Two places to search are Google (http://google.com) and Google Scholar (http://scholar.google.com). If you happen to be on a college campus that subscribes to the Roper Center for Public Opinion Research (http://www.ropercenter.cornell.edu), consider using iPOLL, which is a database of over 700,000 survey questions. You can search all of these tools by keyword. Entering the words quality and life will return all questions containing both words. Often what others have asked will give you ideas of what you might ask.
Focus Groups
Focus groups are another tool that you can use in developing your survey. A focus group is a small group of individuals from your study population who meet and discuss topics relevant to the survey.1 Typically, they are volunteers who are paid to take part in the focus group. For example, if your study deals with quality of life, you might explore with the focus group what they think quality of life means and which issues, such as crime and jobs, are critical to quality of life. A focus group gives you the opportunity to discuss the types of information you want to get from your survey with a group of people who are similar to those you will sample.
Cognitive Interviews
A cognitive interview is a survey administered to volunteers from your study population that asks them to “think out loud”2 as they answer the questions.3 Cognitive interviews give you the opportunity to try out the questions and discover how respondents interpret them and what they mean by their answers. Let’s say that one of the questions you want to ask in your survey is “What is the most pressing problem facing the community in which you live?” In a cognitive interview, you can ask respondents how they interpret this question. What does “most pressing problem” mean to them? And you can ask them to take you through their thought processes as they think through the question and formulate an answer.
Behavior Coding and Interviewer Debriefing
Another way to pretest a survey is to conduct a pilot study, where you administer the survey to a small sample of respondents. Respondent behavior can be coded to help you identify problem questions. Gordon Willis suggests that you look for questions in which the following events occurred—“(1) Interrupts question reading (2) Requests repeat of question reading (3) Requests clarification of question meaning (4) Provides qualified response indicating uncertainty (5) Provides an uncodeable response (6) Answers with Don’t Know/Refused.”4 Interviewers can also be debriefed about problems they encountered while administering the survey.5
Asking Experts to Review the Survey
When you have a draft of the survey completed, ask survey experts to review it and point out questions that might be confusing to respondents as well as other types of problems. Most colleges and universities will have someone who is trained in survey research and willing to review your draft.
Pretesting the Survey
When you think you are ready to try out your survey, select a small number (25–40) of respondents from your study population and have them take the survey using the same procedures you will use in the actual survey. In other words, if you are using a telephone survey, then do your pretest over the phone. If it’s a web survey, then your pretest should be over the web. You probably won’t be using these responses as part of your data since you are likely to make changes in the survey based on the pretest results.
Here are some of the things that you ought to look for in your pretest.6
Pretesting is an essential step in preparing your survey so it is ready for delivery to your sample. Here are some other suggestions for the pretest.
Administering the Survey—Using Probe Questions
Administering the survey depends in part on your mode of survey delivery. In Chapter 5 (Volume I), we talked about the four basic modes of survey delivery—face-to-face, mailed, telephone, and web—and mixed-mode surveys, which combine two or more of these delivery modes. You might want to go back and look at this chapter again and at some of the references mentioned in the annotated bibliography.
One of the most important tasks of survey administration is to clarify the answers of respondents through follow-up questions. These types of questions are referred to as probes. There are a number of different types of probes. For example, we could ask respondents to “tell us more” or what they meant by a particular answer. Patricia Gwartney suggests some other probes.9
Some questions are particularly likely to require a follow-up question in order to clarify what respondents tell us. Here are some examples.
Probing in Web Surveys
The way in which we probe depends in large part on the mode of survey delivery. Surveys that are interviewer-administered, such as face-to-face and telephone surveys, provide the interviewer with considerable control over the use of probe questions. Web surveys are not interviewer-administered, but technological advances give the researcher considerable control here as well.
There are some questions that you know will require a probe question. For example, if you ask someone their job title, you will need to follow that up with a question asking about the duties and activities of their job. If you ask people what they think is the most pressing problem facing their community, you might want to follow that up with a probe asking, “Why do you feel that way?” This type of probe can easily be built into any survey, including web surveys.
There are other types of probe questions that depend on what respondents tell you. Pamela Alreck and Robert Settle call these interactive or dynamic probes.12 For example, if respondents give you a one-word answer, such as “crime” or “drugs,” to the most-pressing-problem question, you would want to ask them to “tell me a little more about that.” That’s more difficult to carry out in a web survey unless you can identify the specific keywords for which you want to ask a probe question. In addition, you need to be using web survey software that allows you to use this type of probe question.
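In survey software, an interactive probe of this kind is typically implemented as a branching rule that fires when the respondent’s answer matches a condition. As a rough sketch of the logic (the function names, keyword list, and probe wording here are our own illustrations, not features of any particular survey package), a keyword-triggered probe might look like this:

```python
# Sketch of a keyword-triggered dynamic probe for a web survey.
# The keyword list and the one-word-answer rule are illustrative assumptions.
PROBE_KEYWORDS = {"crime", "drugs", "jobs", "traffic"}

def needs_probe(answer: str) -> bool:
    """Trigger a follow-up probe for one-word answers or known keywords."""
    words = answer.lower().strip().split()
    if len(words) <= 1:  # one-word answers such as "crime" get a probe
        return True
    return any(w.strip(".,!?") in PROBE_KEYWORDS for w in words)

def probe_text(answer: str):
    """Return the probe question to display, or None if no probe is needed."""
    if needs_probe(answer):
        return "Could you tell us a little more about that?"
    return None
```

A fuller answer such as a complete sentence about school funding would pass through without a probe, while a bare “crime” would trigger the follow-up.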
Probing in Mailed Surveys
Probing is more difficult in a mailed survey. Mailed surveys are not interactive. There is no contact between the interviewer and the respondent unless you provide the respondent with a telephone number or web address they can use to contact you. Consequently, all instructions and questions have to be written out in the survey. This limits you to probes that can be anticipated in advance. If you are asking about respondents’ occupations or jobs, you can include a probe question asking the respondents to tell you about their job’s duties and activities. If you are asking about attitudes or opinions on some issue, you can ask them to tell you why they feel that way. But there is no opportunity for following up on respondents’ specific answers. If they tell you that their race is Swedish, you can’t follow that up. You have to make your instructions clear and specific enough to make sure that the respondents know what you are asking.
Administering the Survey—Record Keeping
Another important part of survey administration is record keeping. It’s essential to keep good records regardless of the survey delivery mode. But the information that is available for your records will vary by the mode of survey delivery. In an interviewer-administered survey, you might have information about individuals you are unable to contact or who refuse to be interviewed. Each time you attempt to reach a potential respondent, a record must be kept of the result. These are often referred to as disposition codes. You should be sure to record the following information.
Patricia Gwartney has a detailed list of disposition codes for telephone interviews, which could be adapted for face-to-face surveys.13 You can also look at the disposition codes published by the American Association for Public Opinion Research.14
Often respondents are unable to do the interview at the time you reach them and the interview needs to be scheduled for a callback. This should be recorded on a callback form. You should attach a call record to each survey, which records each contact, the outcome, the date and time of the contact, the interviewer’s name, and when to call back along with any other information that the interviewer wants to convey to the next interviewer. If you are doing a phone survey and are using CATI software, the program will create this record for you.
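The call record described above can be sketched as a small data structure. The disposition codes below are illustrative placeholders only; a real study would adopt a published scheme such as Gwartney’s or AAPOR’s:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative disposition codes -- not the AAPOR or Gwartney codes themselves.
DISPOSITIONS = {
    "C": "completed interview",
    "CB": "callback scheduled",
    "R": "refusal",
    "NA": "no answer",
}

@dataclass
class ContactAttempt:
    when: datetime
    interviewer: str
    disposition: str            # one of the DISPOSITIONS keys
    callback_at: datetime = None
    notes: str = ""             # anything to pass along to the next interviewer

@dataclass
class CallRecord:
    respondent_id: str
    attempts: list = field(default_factory=list)

    def log(self, attempt: ContactAttempt) -> None:
        """Record one contact attempt, rejecting unknown disposition codes."""
        if attempt.disposition not in DISPOSITIONS:
            raise ValueError(f"unknown disposition: {attempt.disposition}")
        self.attempts.append(attempt)
```

CATI software maintains this kind of record automatically; the sketch simply makes explicit what information each contact attempt should capture.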
In a self-administered survey, you probably won’t have much information about nonrespondents. You may only know that they didn’t respond. However, sometimes respondents will contact you and indicate why they aren’t completing your survey. This could be because they have moved and aren’t part of your study population, because they don’t have the time or aren’t interested, or because they have concerns about survey confidentiality. Be sure to record this information. But at the very least, you need to be able to report the response rate15 for your survey.
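In its simplest form, the response rate is just the number of completed surveys divided by the number of eligible sample members. AAPOR publishes several more refined rates that handle partial interviews and cases of unknown eligibility; the sketch below shows only the simplest version, with made-up numbers:

```python
def response_rate(completed: int, eligible_sampled: int) -> float:
    """Simplest response rate: completed interviews divided by eligible
    sample members.  AAPOR's standard definitions give more refined
    formulas for partial interviews and unknown-eligibility cases."""
    if eligible_sampled <= 0:
        raise ValueError("need at least one eligible sample member")
    return completed / eligible_sampled
```

With 640 completed surveys out of 1,000 eligible sampled adults, this gives a response rate of 0.64, which you would report as 64 percent.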
Another reason that good record keeping is so important is that it provides a record of the way in which you carried out your survey. For example, when you create a data file, you make decisions about how to name your questions and how you record the responses to these questions. An example is a person’s age. You would probably name this question as age and record the person’s age as a number. But what will you do if a person refuses to answer this question? You might decide to use 98 for any person who is 98 years of age or older and use 99 for refusals. You should record this decision in a permanent file so that you will remember what you did when you come back to this data file after several years. Or you might give someone else permission to use your data sometime in the future, and he or she will need to know how you recorded age. There needs to be a permanent record of the way in which the survey was carried out to enable future use of this survey.
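One way to make that record permanent is a machine-readable codebook. The file layout below is our own illustration; the point is simply that coding decisions such as the 98/99 convention for age get written down somewhere durable, not kept in the analyst’s head:

```python
import json

# A minimal codebook entry recording the coding decisions described above.
# The JSON layout here is illustrative, not a standard.
codebook = {
    "age": {
        "description": "Respondent's age in years",
        "type": "integer",
        "special_codes": {
            98: "98 years of age or older",
            99: "refused to answer",
        },
    }
}

with open("codebook.json", "w") as f:
    json.dump(codebook, f, indent=2)
```

Years later, you (or a secondary analyst you have given the data to) can open `codebook.json` and see exactly what a recorded value of 99 meant.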
Administering the Survey—Linking to Other Information
For some types of surveys, there are other administrative or organizational data that might be available. For example, if your population is students at a university, the registrar will have information about the students. If your population is employees in a large organization, there is bound to be information on these employees, such as length of time at the organization and salary. You might be able to link your survey to these types of administrative or organizational data.
This, of course, raises a series of questions that you must consider and answer before linking to these types of data. Here are just a few of these questions.16
Processing the Data
Coding
If your survey includes open-ended questions, you will probably want to code the responses into categories. Let’s consider the question we have been using as an example: “What is the most pressing problem facing your community today?” Responses to this question could be coded into categories, such as the economy, crime, education, traffic and transportation, and so on. You will probably want to divide each of these categories into more specific categories, such as lack of jobs, violent crime, and property crime. Once you have developed the categories, have two or more people code the data independently so you can see if the coding done by different individuals is consistent.
Editing the Data
In addition to coding answers to open-ended questions, you will want to review all the answers. For example, let’s say that you’re doing a mailed survey and you ask an agree–disagree question with the following categories: strongly agree, agree, disagree, strongly disagree. What are you going to do if someone selects more than one answer? With other survey delivery modes, you have more control over the types of answers that respondents give so you would be able to avoid this type of problem. But you still need to edit the data to check for completeness and consistency. You may need to have a category for uncodable, and you will definitely need categories for people who say they don’t know or refuse to answer questions.
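An edit check like the one just described can be automated. In this sketch, the rule for handling multiple checked boxes and missing answers is our own assumption, not a standard; the valid categories come from the agree-disagree example above:

```python
# Illustrative edit check for an agree-disagree item on a mailed survey.
VALID = {"strongly agree", "agree", "disagree", "strongly disagree"}

def edit_response(selected):
    """Reduce a respondent's checked boxes to a single codable value.

    selected -- list of category labels the respondent marked.
    """
    answers = [s for s in selected if s in VALID]
    if len(answers) == 1:
        return answers[0]
    if not selected:
        return "no answer"
    return "uncodable"   # multiple boxes checked, or an invalid mark
```

Running every returned questionnaire through checks like this is what the editing step amounts to in practice: each response ends up either in a valid category or in an explicit missing-data category.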
Data Entry
There are several options for data entry. You could enter your data directly into a program, such as Excel, or into a statistical package, such as SPSS. If you are using CATI software or web survey software, such as Survey Monkey or Qualtrics, the data can be exported into a number of statistical packages, such as SPSS or SAS, or into an Excel file.
Data Analysis
Data analysis is beyond the scope of this book. There are many good books on statistical analysis, and we’ll mention some of them in the annotated bibliography at the end of this chapter.
Writing the Report
Writing reports will be one of the topics covered in Chapter 5, Volume II.
Making the Data Available to Other Social Scientists
It has become commonplace for researchers to make their survey data accessible to other researchers by placing their data in archives, such as the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan and the Roper Center for Public Opinion Research at Cornell University. Depending on the nature of your data, you may or may not choose to make your survey data publicly available.
Regardless of your decision to make your data available, it’s important for you to document how the data were collected. If your data are in a file that can be read by a statistical program, such as SPSS, SAS, Stata, or R, you need to document how that file was created. At some later point in time, you may want to reanalyze your data or give it to another researcher for further analysis.
You may want to look in the archives of the ICPSR or the Roper Center for data that you might be interested in accessing. Mary Vardigan and Peter Granda provide an excellent introduction to data archiving, documentation, and dissemination.17
Listening
In an interviewer-administered survey, it’s important for the interviewer to be a good listener. Raymond Gorden talks about active listening and suggests that interviewers ask themselves several questions as they are listening to the respondent.
Gorden also suggests a number of keys to being a good listener.19
Interviewer Training
In interviewer-administered surveys, interviewers need to be trained. It’s unreasonable to expect them to pick up what they need to know through on-the-job training. Here are some different training techniques. A good training program will combine several of these approaches.
Providing Documentation
You will need to provide documentation for interviewers to study and to have available for reference during interviews. These should include:
For some of these reasons, there’s an easy response. For example, if someone doesn’t have time to do it now or is sick, you should offer to call back at a more convenient time. If someone says they never do surveys, you should explain why this survey is important and worth their time. Don Dillman and Patricia Gwartney also have examples of handouts on how to handle refusals.21
Practice Interviews
Interviewers should have the opportunity to practice the interview before actually starting data collection. A good place to start is to have interviewers administer the survey to themselves. Have them read through the questions and think about how they would answer and what they might find confusing. Then interviewers could pair off and take turns interviewing each other. They could also interview friends and family.
Role playing is often a useful training device. Have experienced interviewers play the role of respondents, and simulate the types of problems interviewers might encounter. For example, problems often arise when asking questions about race and ethnicity. Respondents often give one-word answers to open-ended questions. These types of difficulties could be simulated in a practice session.
Another useful training tool is to have experienced interviewers work with new interviewers and coach them on how to handle difficult problems that arise. Experienced interviewers could listen to practice interviews and then discuss with the new interviewers how they might improve their interviewing technique. If it’s possible, record the practice interviews so you can review them and use them as teaching tools.
Survey Participation
As has been mentioned in Chapters 3 and 5 (Volume I), a major concern of survey researchers is the declining response rates that all modes of survey delivery have experienced during the last 35 to 40 years.22 This has been one of the factors that have led to the increased cost of doing surveys. But the concern is not just over cost. The concern is also that this will lead to increased nonresponse bias. Bias occurs when the people who do not respond to the survey are systematically different from those who do respond, and these differences are related to the questions we ask. Increasing response does not necessarily decrease bias. Jeffrey Rosen et al. note that increasing response rates among those who are underrepresented is what is necessary to reduce nonresponse bias.23
We discussed survey participation in Chapter 3 (Volume I), so we’re not going to repeat the discussion here. Rather we want to emphasize that declining response to surveys is a serious potential problem since it increases the possibility of nonresponse bias. Take a look at our discussion in Chapter 3 (Volume I) of various theories of survey participation and how you might increase response rates.
Robert Groves and Katherine McGonagle describe what they call a “theory-guided interviewer training protocol regarding survey participation.”24 It starts with listing the types of concerns that respondents have about participating in the survey and then organizing these concerns into a smaller set of “themes.” Training consists of:
For example, if the respondent says, “I’m really busy right now!” the interviewer might respond, “This will only take a few minutes of your time.” Basically what the interviewer is doing is tailoring his or her approach and response to the respondent’s concerns.26
Summary
Annotated Bibliography
There are a number of very good books on how to do various types of surveys. Here are some excellent sources.
Here are some good references on training interviewers.
These are excellent discussions of nonresponse, nonresponse bias, and increasing response.
Data analysis is beyond the scope of this book, but here are some excellent references.
This list is not meant to be exhaustive. It is only meant to give examples.