Acquiescence bias, 13
Administrative error, 11
data processing error, 11
interviewer cheating, 12
interviewer error, 11
sample selection error, 11
Association–Identification–Measurement (A.I.M.), 55
Attitude measurement, 53–61
affective component, 53
behavioral component, 53
behavioral differential scales, 59
category rating scale, 56
cognitive component, 53
constant sum sorting scale, 55
continuum attitude rating scale, 56
graphic rating scales, 59
in paired comparisons, 54
numerical scales, 59
ranking, 53
rating, 54
semantic differential, 58
simple attitude rating scale, 56
sorting, 55
Stapel scales, 59
summated rating scale, 56
Thurstone equal-appearing interval scale, 59
Auspices bias, 13
Behavioral differential scales, 59
Behavioral questions, 63
Bipolar scale, 51
Category rating scale, 56
Census, 98
Central tendency, 129
Closed-ended questions, 78
Cloud-based survey platforms, 40
Culture Amp, 40
Google Forms, 41
Qualtrics, 40
QuestionPro, 41
Snap Survey, 41
Survey Anyplace, 41
SurveyMonkey, 40
Cluster sampling, 109
Completion rate, 22
Computer-assisted telephone interviewing (CATI), 29
Constant sum sorting scale, 55
Content analysis, 42
Continuum attitude rating scale, 56
Convenience sampling, 102
Correlation coefficient, 138
Counter-biasing statement, 82
Cross-sectional surveys, 4
Culture Amp, 40
Data analysis
central tendency, 129
frequency distributions, 128
mean, 129
measures of dispersion, 130–131
median, 130
methodology, selecting, 133–138
proportion, 129
Data processing error, 11
Deliberate falsification, 12
Dependent variable, 126
Descriptive statistics, 127
Determinant-choice question, 62
Deviation scores, 132
Dichotomous-alternative questions, 78
Digital technology, 24
Dillman, Don, 27
Dispersion, calculating, 131
deviation scores, 132
range, 131
Distribution, 128
Double bias, 15
Double-barreled question, 82
Embedding surveys on a website, 16
Errors in surveys, 8–13. See also under online surveys
administrative error, 11. See also individual entry
systematic errors, 11
total survey error, 9
Executive summary, 151
Extremity bias, 13
Face-to-face interviews. See Personal or face-to-face interviews
Federal Communications Commission (FCC), 30
Feedback, 22
Fixed list sampling, 15
Fixed-alternative question, 61
Frequency count, 134
Frequency distributions, 128
Frequency-determination question, 63
Funnel technique, 89
Geographic flexibility, 26
Google Forms, 41
Hispanic market analysis, 163–173
background and objectives, 164–165
conclusions and recommendations, 170
executive summary, 164
implications for Houston, TX market, 163–173
sampling plan and survey creation, 166–170
Homogeneity, 116
Independent variable, 126
Inferential statistics, 127
Infinity Insurance case study, 2–18, 68, 122, 140–150
project flowchart, 5
Information-seeking tool, 16
Intercept sampling, 15
Interquartile range (IQR), 131
Interval scale, 48
Interviewer bias, 13
Interviewer cheating, 12
Interviewer error, 11
Item scaling, 47–65. See also Scales; Survey questions
attitude measurement, 53–61. See also individual entry
conceptual definitions, 52
operational definitions, 52
rules of measurement, 52
selecting measuring system, 52–61, 64–65
Judgment sampling, 103
Length of interview, 22
Loaded questions, 82
Longitudinal surveys, 4
Mail and self-administered questionnaires, 24–29
anonymity of respondent, 26
cost, 26
geographic flexibility, 26
increasing response rates to mail surveys, 27
interviewer’s absence, 26
length of mail questionnaire, 27
respondent convenience, 26
response rates, 27
standardized questions, 27
time commitment, 27
ways to increase, 28
cover letter, 28
follow-ups, 28
keying mail questionnaires, 28
monetary incentives, 28
preliminary notification, 28
self-administered questionnaires, 28
survey sponsorship, 28
Mean, 117, 129
Measures of dispersion, 130–131
Median, 130
Mode, 130
Monadic rating scale, 65
Multiple-choice questions, 78
Multistage area sampling, 109
Nominal scale, 48
Nonprobability sampling, 101, 102–106
comparison, 111
convenience sampling, 102
focus groups, 105
judgment sampling, 103
quota sampling, 104
snowball sampling, 104
Nonresponse error, 12
Numerical scales, 59
Online surveys
e-mail invite surveys, 15
embedding surveys on a website, 16
pop-up survey, 16
sampling, 15
fixed list, 15
intercept, 15
spam, 17
stakeholder bias, 17
unsolicited mail filters, 17
unverified respondents, 18
Open-ended response questions, 61, 77
Optical mark recognition (OMR) scanning technology, 24
Order bias, 89
Ordinal scale, 48
Periodicity, 107
Personal or face-to-face interviews, 20–24
anonymity of respondent, 23
completion rate, 22
disadvantages, 24
door-to-door interviews, 23
feedback, 22
high participation, 23
length of interview, 22
probing complex answers, 22
props and visual aids, 22
Pilot testing, 153
5-point scale, 50
7-point scale, 50
Population, 98
Population parameters, 127
Pop-up survey, 16
Primary sampling unit (PSU), 101
Probability sampling, 101, 107–110
appropriate sample design, 110
advance knowledge of the population, 110
degree of accuracy, 110
resources, 110
time, 110
cluster sampling, 109
comparison, 112
multistage area sampling, 109
proportional versus disproportional strata, 108–109
simple random sampling, 107
stratified sampling, 107
systematic sampling, 107
Probing complex answers, 22
Props and visual aids, 22
Qualtrics, 40
QuestionPro, 41
Questionnaire design, 67–92. See also Survey objectives; Survey questions, guidelines
Infinity Auto Insurance, administering a survey, 68–69
Quota sampling, 104
Random digit dialing (RDD), 29
Random sampling error, 9, 11, 113
Randomness, 107
Range, 131
Ratio scales, 48
Representative sample, 8
Respondent error, 12
nonresponse error, 12
response bias, 12. See also individual entry
Response bias, 12
acquiescence bias, 13
auspices bias, 13
deliberate falsification, 12
extremity bias, 13
interviewer bias, 13
social desirability bias, 13
unconscious misrepresentation, 12
Reverse scored items, 50
Sample selection error, 11
Sample statistics, 127
Sample survey, 4
Sampling, 93–123. See also Nonprobability sampling; Probability sampling
accurate and reliable results, 98
actual sampling units, selecting, 115–116
destruction of test units, 98
hard-to-identify groups, 101
minimizing response bias, 114–123
nonprobability versus probability sampling, 101
pilot situations, 102
pragmatic reasons for, 98
primary sampling unit (PSU), 101
random sampling error, 113
accuracy, 116
confidence level, 117
homogeneity, 116
mean, 117
proportion, 117
sampling frame, 100
secondary sampling units (SSU), 101
systematic (nonsampling) errors, 113–114
target population, defining, 99–100
tertiary sampling units (TSU), 101
Scales, 47
5-point scale, 50
7-point scale, 50
comparison, 60
interval scale, 48
nominal scale, 48
ordinal scale, 48
ratio scales, 48
types of, 48
Secondary sampling units (SSU), 101
Self-selected opinion polls, 15
Semantic differential, 58
Simple attitude rating scale, 56
Simple random sampling, 107
Simple-dichotomy questions, 62, 78
Skew, 130
Snap Survey, 41
Snowball sampling, 104
Social desirability bias, 13
Spam, 17
Split-ballot technique, 82
Stakeholder bias, 17
Standard deviation, 132
Standardized normal distribution, 132
Statistical significance, 138
Stratified sampling, 107
Summated rating scale, 56
Survey, 3
cross-sectional surveys, 4
design process, 5
distinguishing features, 3
longitudinal surveys, 4
questions, 3
sample survey, 4
types of, 4
Survey administration, 19–45. See also Mail and self-administered questionnaires; Personal or face-to-face interviews; Telephone questionnaires
analyzing results, 35
data management, 35
exporting results, 35
number of questions/responses/surveys, 35
security/password protection, 35
support, 36
Survey Anyplace, 41
Survey dashboard reporting, 42–45
Survey landscape analysis, 39–42. See also Cloud-based survey platforms
text analysis software, 42
SurveyMonkey, 40
Survey objectives, 72–75
designing questions, 75
identifying/measuring desired outcomes, 72–75
purposeful, 75
Survey questions, 61–63
behavioral questions, 63
determinant-choice question, 62
fixed-alternative question, 61
frequency-determination question, 63
graphic rating scale, 62
open-ended response question, 61
simple-dichotomy question, 62
Stapel scale, 63
Survey questions, guidelines, 77–92
best layout, 89
closed-ended questions, 78
ambiguity, avoiding, 82
complexity, 81
double-barreled question, 82
leading and loaded questions, 81
loaded questions, 82
making assumptions, avoiding, 84
split-ballot technique, 82
multiple-choice, 78
open-ended response questions, 77
pretesting and revising, 89–92
simple-dichotomy questions, 78
specific guidelines, 84
survey design checklist, 90–91
Survey report
background and objectives, 152
conclusions and recommendations, 155–156
cross-tabulate relevant pairs of questions, 155
doughnut chart, 149
executive summary, 151
graphs, 145
recommendations for effective presentations, 159–160
reporting software for survey platforms, 160–162
table of contents, 151
title page, 151
useful hints and phrases for report, 158
word-cloud visualization, 149
Survey research, 1–18. See also Infinity Insurance case study
deliverables in each phase, 9
errors in, 8–13. See also individual entry
Survey software and application features, 36–39
anonymity, 37
automatic reminder e-mails, 37
collaboration/accessibility, 38–39
compatibility, 37
customized page, 39
distribution, 36
embedding media, 38
integration with other services, 39
language, 39
logic features, 38
piping, 38
pop-up/exit surveys, 38
randomization, 36
response validation, 37
scheduling of e-mail campaigns, 37
survey templates, 36
survey/question library, management, 36
text analysis, 38
trend analysis, 38
Systematic (nonsampling) errors, 11, 113–114
Systematic sampling, 107
Table of contents, 151
Tally, 134
Technological software, 32–34. See also Survey software and application features
Telephone questionnaires, 29–32
computer-assisted telephone interviewing (CATI), 29
Telephone survey, 17
Tertiary sampling units (TSU), 101
Thurstone equal-appearing interval scale, 59
Title page, 151
Unconscious misrepresentation, 12
Unit, sampling, 100
Universe, 98
Unsolicited mail filters, 17
Unverified respondents, 18
Variable, 125
dependent, 126
independent, 126
Working population, 100