CHAPTER 4

Changing Technology and Survey Research

In 2011, Robert Groves, a former Director of the U.S. Census Bureau and one of the deans of U.S. survey research, highlighted the impacts of changing technology in a discussion of the state of survey research.1

at this moment in survey research, uncertainty reigns. Participation rates in household surveys are declining throughout the developed world. Surveys seeking high response rates are experiencing crippling cost inflation. Traditional sampling frames that have been serviceable for decades are fraying at the edges. Alternative sources of statistical information, from volunteer data collections, administrative records, and Internet-based data, are propelled by new technology. To be candid, these issues are not new to 2011, but have been building over the past 30 years or so. However, it is not uncommon to have thoughtful survey researchers discussing what lies in the future and whether key components of the basic paradigm of sample surveys might be subject to rethinking.2

The Force of Modern Computing

The birth and development of modern computers is closely intertwined with today’s survey techniques. A rapid development of general-use computing began in 1951, when the U.S. Census Bureau signed a contract for the first commercial computer in the United States. When UNIVAC—the Universal Automatic Computer—was dedicated a few months later, the New York Times called the machine “an eight-foot-tall mathematical genius” that could in one-sixth of a second “classify an average citizen as to sex, marital status, education, residence, age group, birthplace, employment, income and a dozen other classifications.” The UNIVAC was put to work to process parts of the 1950 census; then in 1954, it was employed to handle the entire economic census.3 By the mid-1960s, the world began to experience what might be called the age of the computer. In the early days, general-use computing was dominated by huge, expensive mainframe computers. These large machines were physically sequestered in secure, environmentally controlled facilities that were accessible only to a select number of individuals in government, large corporations, and universities, and thus survey research using computers was similarly limited to large institutions with large budgets. Within a few short years, however, the processing power and data storage capacity of computers began to increase at an exponential rate. In 1965 Gordon Moore, who would later cofound the chip maker Intel, predicted that the number of transistors that could be placed on an affordable chip, and with it overall processing power, would double roughly every two years. His prediction, now termed Moore’s Law, has held up for more than 50 years.4

In the nearly 60 years since, technology has provided an amazing ability to put more and more data-processing and storage hardware into smaller and smaller units at lower cost. To illustrate, consider that in 1956, the IBM RAMAC 305 (mainframe) had 5 MB of storage. The original 305 RAMAC computer system could be housed in a room of about 9 meters (30 ft) by 15 meters (50 ft); and its disk storage unit measured around 1.5 square meters (16 sq ft). Currie Munce, research vice president for Hitachi Global Storage Technologies (which had acquired IBM’s hard disk drive business), stated in a Wall Street Journal interview5 that the RAMAC unit weighed over a ton, had to be moved around with forklifts, and was delivered via large cargo airplanes.6 It was hardly portable! Today, you can buy a PC external disk drive that can hold 8 Terabytes (TB) of data for around $160.00,7 which is equal to about 8,000 Gigabytes (GB) or 8,000,000 Megabytes (MB) of information, more than one and one-half million times the data storage of the RAMAC 305. These leaps in technology have resulted in the shrinking of mainframes into desktops, desktops into laptops, and laptops into tablets and notebooks. It was this transformation to smaller, yet more powerful computers that gave rise to the development of desktop and laptop computing, which, in turn, led directly to the development of a computer-assisted paradigm of survey research.
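As a quick back-of-the-envelope check of that comparison, the short sketch below works through the arithmetic (a minimal illustration, assuming decimal units in which 1 TB = 1,000 GB = 1,000,000 MB):

    # Rough comparison of a modern consumer drive with the 1956 IBM RAMAC 305.
    # Assumes decimal units (1 TB = 1,000,000 MB); actual drive capacities vary slightly.

    ramac_storage_mb = 5                      # RAMAC 305: roughly 5 MB of disk storage
    modern_drive_mb = 8 * 1_000_000           # an 8 TB external drive, expressed in MB

    ratio = modern_drive_mb / ramac_storage_mb
    print(f"{ratio:,.0f} times the RAMAC's capacity")   # prints 1,600,000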

The Move to Wireless Telephones

As discussed in Chapter 5 (Volume I), this computer revolution popularized computer-assisted telephone interviewing (CATI) in the mid-1970s and computer-assisted personal interviewing (CAPI) in the 1980s and 1990s. By the 1990s, two other impacts of the ubiquitous march of technology were beginning to have a fundamental impact on the larger world, including survey research. The first of these impacts was the rapid miniaturization of memory chips with simultaneous expansion of capacity, which ushered in a tremendous increase in the use of cell phones and other portable “smart” devices. In 1990, there were roughly 20 cell-phone users per 1,000 persons in the United States; by 2005 that number had grown to 683, and in 2009 it exceeded 900. By the second half of 2017, a study from the Centers for Disease Control and Prevention’s National Center for Health Statistics reported that a majority of American homes had only wireless telephones.8

The linkages between advances in technology created rapid evolution in survey methodology. As computer technology moved from personal computers to smaller handheld devices, the CATI/CAPI survey followed. Today there are MCATI and MCAPI surveys (with the M designating the mobile device nature of these modalities), using platforms that have moved interviewing onto mobile devices. However, while the advances in technology enabling such platforms have provided greater elasticity in conducting interviews, they have introduced their own set of coverage, sampling, nonresponse, and measurement problems, as well as specific legal restrictions directed at mobile contact.9 For example, preliminary results from the most recently available July to December 2017 National Health Interview Survey (mentioned above) highlight the infusion of mobile communication technology into everyday life. The survey found that more than one-half (53.9 percent) of American homes did not have a landline telephone but did have at least one wireless telephone. Yet, a closer look at the demographics of these cell-only households reveals unevenness in wireless phone access among certain subgroups within the population on characteristics such as income, race/ethnicity, age, and geographic location.10

The National Center for Health Statistics study authors, Stephen Blumberg and Julian Luke, warn that these findings raise red flags, because as the number of adults who are cell only has grown, the potential for bias in landline surveys that do not include cell-phone interviews has also grown: “The potential for bias due to undercoverage remains a real threat to health surveys that do not include sufficient representation of households with only wireless telephones.”11 The researchers also indicated that this undercoverage problem is made worse by households that have landlines but nevertheless take most of their calls on cell phones: some people who live in households with landlines cannot be reached on those landlines because they rely on wireless telephones for all or almost all of their calls.12 Thus, to be methodologically sound, sampling frames for such mobile device interviewing now must be dual sampling frames, comprising both mobile phones and landline phones.
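A minimal sketch, in LaTeX notation, of one widely used way to combine estimates from such overlapping frames is the composite dual-frame estimator (a general textbook form, not any particular survey’s procedure):

\hat{Y} = \hat{Y}_{\text{landline only}} + \lambda\,\hat{Y}_{\text{overlap}}^{\text{landline}} + (1-\lambda)\,\hat{Y}_{\text{overlap}}^{\text{cell}} + \hat{Y}_{\text{cell only}}, \qquad 0 \le \lambda \le 1,

where the landline-only and cell-only domains are estimated from the single frame that covers them, and the overlap domain (people reachable through both frames) is estimated from each frame and blended with the compositing factor \lambda so that it is not counted twice.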

Internet Access and Survey Mobility

A second major impact was the rapid expansion of Internet access and use. The Pew Research Center has documented this rise in Internet adoption in a series of studies, finding that it grew from 14 percent of the U.S. adult population in 1996 to 89 percent some two decades later.13 A sharp growth of web-based survey research paralleled this rapid expansion of the Internet. As a result, the platforms for the selection of probability samples and survey data collection were rapidly adapted to the needs of many survey researchers. Today, the use of the web for data collection is occurring in all sectors of the survey industry: from the U.S. Census Bureau, which, for example, now allows respondents to take the American Community Survey14 on the Web, to political polling, where the cost and difficulty of targeting phone calls geographically is prompting more organizations to rely on online polls, to the marketing research community, which has moved virtually all consumer surveys to the Web.

Moreover, just as mobile phones ushered in MCATI and MCAPI survey interviewing, the rapid growth of web-based survey research was transformed by the rapid extension of Internet access through mobile technologies. The evolution in technology from simple cell phones to Internet-accessible smartphones brought online mobility to the everyday uses of the Internet for shopping, reading newspapers, participating in forums, completing surveys, communicating with friends and making new ones, filing tax returns, getting involved in politics, and purchasing things or looking for information before purchasing offline.15 By 2015, a Pew Research Center study16 found nearly two-thirds of Americans owned a smartphone. Such availability and familiarity make the web-enabled smartphone or mobile device a prime modality for a wide range of survey data collection.17 As Pinter et al. note, “Respondents in online surveys planned for a PC environment may rather use mobile devices. Further, mobile devices can be used independently in mobile internet-based surveys, in mobile ethnography, in mobile diary, in location-based research or in passive measurement.”18

At first glance, the increased availability of portable online access would appear to solve one problem of online survey data collection because it permits researchers to reach more potential respondents with survey applications, particularly populations that traditionally have had limited or no access to the Internet, that is, the 22 percent of Americans who are dependent on smartphones for their online access. This is a group of individuals the Pew Research Center terms “smartphone dependent.”19 Groups that rely on smartphones for online access at elevated rates include:

  • Younger adults—15 percent of Americans ages 18 to 29 are heavily dependent on a smartphone for online access.
  • Non-whites—12 percent of African Americans and 13 percent of Latinos are smartphone-dependent, compared with 4 percent of whites.
  • Those with low household incomes and levels of educational attainment:
    • Some 13 percent of Americans with an annual household income of less than $30,000 per year are smartphone-dependent.
    • Just 1 percent of Americans from households earning more than $75,000 per year rely on their smartphones to a similar degree for online access.20

However, a closer look at the details of the Pew Research Center’s study shows that mobile online access does not mean that the underrepresentation of individuals who might otherwise have limited or no access to traditional PC-accessed online platforms necessarily disappears. The Pew Center study also revealed that the connections to online resources that smartphones enable are often most tenuous for those users who rely on those connections the most. Users who are dependent on their smartphone are subject to sporadic loss of access due to a blend of economic and technical constraints.21 Thus, while the smartphone initially appears to open access to web-based surveys for those with limited access otherwise, there may be hidden undercoverage and nonresponse errors for those very groups.

Further, access is only one of the problems that have been identified with the use of mobile devices with web-based surveys.22 Although research is still lacking in this area,23 recent studies are now beginning to illustrate the breadth of these issues.

First, just as Internet access itself shows variation by age, racial/ethnic background, education, and economic status (the so-called “digital divide”), which can affect coverage and response error (see the Pew Center study above), research is beginning to show that the adoption of various mobile device platforms may also differ across demographic groups. For example, Christopher Antoun found that not only was mobile Internet use unevenly distributed across demographic groups, but the usage divide was also reflected in significant demographic differences between those who go online mostly with their phones and those who go online mostly with their computers.23

Second, response times are greater with mobile survey applications than with PCs. For example, in a well-controlled study of the differences in survey response times between mobile and PC-based respondents, Ioannis Andreadis found that smartphone users had longer response times. He proposes that the longer mobile response times may be due to respondents completing the survey outside the home, an environment that creates more distractions than those faced by desktop users, who complete the survey in a quieter room at home or in the office.24

Third, breakoff rates (the rates at which individuals stop responding to the survey before completion) in mobile web surveys are a key challenge for survey researchers. In the introduction to their meta-analysis of breakoff rates, Mavletova and Couper note studies showing breakoff rates for commercial mobile web surveys ranging from slightly over 40 percent to as high as 84 percent, while breakoff rates for PC-based surveys ranged from 17 to 24 percent.25 In their own meta-analysis of 14 studies of mobile surveys, they found breakoff rates ranging from roughly 1 percent to about 30 percent.26 The results of their meta-analysis led the researchers to conclude that optimizing web surveys for mobile devices is very important for minimizing breakoffs among mobile respondents. They also found that e-mail invitations, shorter surveys, prerecruitment, more reminders, a less complex design, and an opportunity to choose the preferred survey mode all decrease breakoff rates in mobile web surveys.27

Survey Panels

As mentioned in Chapter 4 (Volume I), another web-based survey option gaining popularity, particularly because of low response rates to traditional telephone surveys and the growing availability of mobile Internet access, is the survey panel. A survey panel is a sample of respondents who have agreed to take part in multiple surveys over time. Within online panel research, there are two distinct types: probability-based panels and opt-in or access panels.28

Probability-based Panels

With probability-based panels, a random sample is drawn from the population of interest, and the selected individuals are contacted and solicited to join the panel. While such a probability-based sample is considered the gold standard in terms of survey panels (as with other types of survey samples), it is not without problems. As might be inferred from the discussion of online access earlier, one crucial factor with probability-based panels is that those who do not have Internet access must be provided with it. If not, then the elimination of such individuals results in a biased sample that may not reflect the target population. As mentioned earlier in this chapter, the Pew Research Center estimates 89 percent of U.S. adults self-identify as Internet users. The fact that such an online-only survey panel would be excluding roughly one-in-ten adults might be considered unimportant, but the true share of the population excluded from a Web-only survey is actually larger than that estimate suggests. In addition to respondents who are not Internet users, the Pew Research Center identified other problems with web panel participation in connection with its American Trends Panel (ATP).29 Some respondents invited to participate in the Pew Research Center’s ATP either did not have or were not willing to provide an e-mail address in order to facilitate participation online. In fact, a little less than half of the typical mail sample of the Pew Center’s ATP consisted of Internet users who, for one reason or another, declined to participate in the ATP via the Web, and that share likely would have been higher if researchers had not stopped inviting these respondents to join the panel partway through recruitment. In total, the weighted share of panelists in the Pew study of the ATP who took surveys by mail was around 18 percent.30

Opt-in Panels

On the other hand, in opt-in or access panels, individuals volunteer to participate. If they do not have Internet access, they cannot be part of the panel. This fact, in turn, raises the issue of how representative of the target population different online panels truly are, in terms not just of socio-demographics but also of attitudinal variables.31 Because people volunteer to participate in opt-in panels, there is also a risk of professional respondents, that is, respondents who frequently participate in surveys and do so mainly for the incentives.32 Succinctly put, the key characteristic of opt-in panels is that the participant pool is not constructed with random selection. It is worth noting that the majority of online research is based on such nonprobability panels.33

An interesting trend fostered by the growth in opt-in online panels has been a corresponding growth in third-party vendors. These vendors vet potential participants based on different background characteristics and willingness to meet the participation requirements of the vendor. Such vendors, then, market these panels to companies, as well as governmental, nongovernmental, and academic entities. These panels, which can range in size from 100 to over 1,000,000, are usually recruited to match certain characteristics sought by the research sponsor. For example, American Consumer Opinion, a company that provides online panels, advertises for panel participants thusly:

You will never have to pay any money to be a member. Your participation in our surveys is the only “cost” of membership. Join our paid online survey panel and help evaluate new products, test new advertising, and tell companies what you think. Make your opinions count.34

Indeed, a quick Internet search will show dozens of online survey recruitment sites with catchy come-ons such as:

“Get free gift cards for taking polls, answering surveys and so much more!”35 and “Want to earn money taking online surveys? Here’s your chance. Always high payouts. Free to join. Get paid for your opinion. Over $236 million Awarded.”36

An example of a large opt-in survey panel is YouGov, a company that bills itself as a global public opinion and data company. It currently claims:

An online panel of over 6 million panellists [sic] across 38 countries covering the UK, USA, Europe, the Nordics, the Middle East and Asia Pacific. These represent all ages, socio-economic groups and other demographic types which allows us to create nationally representative online samples and access hard to reach groups, both consumer and professional. Our US panel has 2 million respondents.37

YouGov provides a variety of “incentives”; basically participants earn points for taking part in YouGov surveys, which they can turn into cash or vouchers.

Another popular online opt-in survey panel, called Audience, is operated by the SurveyMonkey company. SurveyMonkey’s Audience panel currently comprises 2.4 million people in the United States. These individuals are recruited from among those who take one of SurveyMonkey’s surveys. Individuals who volunteer from this pool of potential participants are incentivized by a fifty-cent contribution from the company to the participant’s preferred charity, which, according to SurveyMonkey, provides better representativeness: “We use charitable incentives—and ensure diversity and engagement—so you get trustworthy market insights,” and “. . . attracts people who value giving back and encourages thoughtful honest participation.”38 Exactly how such a selection process and incentive system might accomplish this is not explained.

Clearly the advantage to businesses of having such participant-ready survey panels is their immediate availability; the risk is whether the panel truly represents the target population. The rapid expansion of online survey vendors attests to the popularity (and likely profitability) of these approaches but also raises concerns about quality.39 Somewhat ironically, there are even companies that provide rankings of different survey panel opportunities for potential participants.40

To sum up, online panels offer five advantages:

  1. Perhaps the most familiar use of panels is to track change in attitudes or behaviors of the same individuals over time. Whereas independent samples can yield evidence about change, it is more difficult to estimate exactly how much change is occurring—and among whom it is occurring—without being able to track the same individuals at two or more points in time.
  2. Considerable information about the panelists can be accumulated over time. Because panelists may respond to multiple surveys on different topics, it is possible to build a much richer portrait of the respondents than is feasible in a single survey interview, which must be limited in length to prevent respondent fatigue.
  3. Additional identifying information about respondents (such as an address) is often obtained for panelists, and this information can be used to help match externally available data, such as voting history, to the respondents. The information necessary to make an accurate match is often somewhat sensitive and difficult to obtain from respondents in a one-time interview.
  4. Panels can provide a relatively efficient method of data collection compared with fresh samples because the participants have already agreed to take part in more surveys.
  5. It can be possible to survey members of a panel using different interviewing modes at different points in time. Contact information can be gathered from panelists (e.g., mailing addresses or e-mail addresses) and used to facilitate a different interview mode than the original one or to contact respondents in different ways to encourage participation.

On the other hand, survey panels have limitations:

  1. They can be expensive to create and maintain, requiring more extensive technical skill and oversight than a single-shot survey.
  2. Repeated questioning of the same individuals may yield different results from what we would obtain with independent or “fresh” samples. If the same questions are asked repeatedly, respondents may remember their answers and feel some pressure to be consistent over time.
  3. Survey panels comprise many different types of samples. A fundamental distinction is between panels built with probability samples and those built with nonprobability, or “opt-in,” samples. While probability panels are built on probability sampling, there has been an explosion of nonprobability or opt-in sample strategies. Using techniques such as weighting back to the population, some providers of opt-in panels claim they can achieve representative online samples (a minimal illustration of such weighting follows this list).
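As a rough illustration of the kind of adjustment these providers describe, the sketch below applies simple post-stratification weighting on a single demographic variable. It is a minimal example with hypothetical numbers, not any vendor’s actual procedure; real adjustments typically rake over several variables at once.

    import pandas as pd

    # Hypothetical opt-in panel: each row is a respondent, y is the survey answer
    panel = pd.DataFrame({
        "age_group": ["18-29", "18-29", "30-49", "50+", "50+", "50+"],
        "y":         [1, 0, 1, 0, 1, 1],   # e.g., 1 = approves of a policy
    })

    # Hypothetical known population shares (in practice taken from a census or the ACS)
    population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}

    # Weight = population share / sample share, so overrepresented groups count for less
    sample_share = panel["age_group"].value_counts(normalize=True)
    panel["weight"] = panel["age_group"].map(lambda g: population_share[g] / sample_share[g])

    # Weighted estimate of the population proportion answering 1
    weighted_estimate = (panel["y"] * panel["weight"]).sum() / panel["weight"].sum()
    print(round(weighted_estimate, 3))

Whether such weighting actually removes the selection biases of an opt-in panel depends on whether the weighting variables capture the ways volunteers differ from the target population, which is precisely the point of contention.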

For many, the Web survey has become the pathway to solve the problems of using traditional methods in a rapidly changing technological landscape. Yet in the haste to take advantage of this technology, scant attention has been paid to some of the fundamental elements of probability sampling. As Groves insightfully points out:

The Internet offers very low per-respondent costs relative to other modes; it offers the same within-instrument consistency checks that CATI and CAPI offer; it offers the promise of questions enhanced with video content; and it offers very, very fast turnaround of data records. When timeliness and cost advantages are so clear, the problems of the absence of a sampling frame are ignored by those parts of the profession whose users demand fast, cheap statistics.41

Big Data and Probability Surveys

The Internet and mobile technologies are producing large databases as information is routinely captured in written, audio, and video form. This constant and largely unseen capture of information about individuals is providing virtual warehouses of data about individuals throughout the world. Robert Groves captures the heart of such data accumulation thusly:

We’re entering a world where data will be the cheapest commodity around, simply because society has created systems that automatically track transactions of all sorts. For example, Internet search engines build data sets with every entry; Twitter generates tweet data continuously; traffic cameras digitally count cars; scanners record purchases; radio frequency identification (RFID) tags feed databases on the movement of packages and equipment; and Internet sites capture and store mouse clicks.42

Today the term Big Data has become increasingly used to describe the resultant amalgam of information that is available for access through companies, organizations, government agencies, and universities. More than 15 years ago, Doug Laney provided a discussion of the three major characteristics that distinguished Big Data from other forms of data collection.43 His precepts are now referred to as the Three Vs of Big Data. As characterized by the University of Wisconsin’s Data Science program, the Three Vs of Big Data include:

  1. Volume (high volume): The unprecedented explosion of data means that the digital universe will reach 180 zettabytes (180 followed by 21 zeroes) by 2025. Today, the challenge with data volume is not so much storage as it is how to identify relevant data within gigantic data sets and make good use of it.
  2. Velocity (high velocity): Data is generated at an ever-accelerating pace. Every minute, Google receives 3.8 million search queries. E-mail users send 156 million messages. Facebook users upload 243,000 photos. The challenge for data scientists is to find ways to collect, process, and make use of huge amounts of data as it comes in.
  3. Variety (high variety): Data comes in different forms. Structured data is that which can be organized neatly within the columns of a database. This type of data is relatively easy to enter, store, query, and analyze. Unstructured data is more difficult to sort and extract value from. Examples of unstructured data include e-mails; social media posts; word-processing documents; audio, video, and photo files; web pages; and more.44 (A brief illustration contrasting the two forms follows this list.)
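The following sketch, using made-up records, is one way to see the difference: the structured purchase record can be queried directly, while the unstructured post must be processed before it yields even a simple fact.

    import sqlite3

    # Structured data: fits neatly into database columns and is easy to query
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE purchases (customer_id INTEGER, item TEXT, amount REAL)")
    conn.execute("INSERT INTO purchases VALUES (101, 'coffee maker', 49.99)")
    total, = conn.execute("SELECT SUM(amount) FROM purchases").fetchone()
    print(total)                      # 49.99

    # Unstructured data: free text; extracting even a simple fact takes extra work
    post = "Just bought a coffee maker for about fifty bucks -- love it so far!"
    mentions_purchase = "bought" in post.lower()   # crude keyword match, easily fooled
    print(mentions_purchase)          # True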

The massive amount of Big Data collected through the Internet enterprise has offered great promise for survey research but also comes with distinct warnings. On the one hand, the ability to collect, store, and analyze so-called Big Data clearly offers opportunities to examine the relationships between variables (topics of interest) previously unavailable, and on a scale of populations rather than small samples. In so doing, many of the concerns about sampling and sampling error presumably fall away. At its extreme, Big Data provides the possibility of simple enumeration data, requiring nothing more complicated than addition, subtraction, multiplication, and division to summarize results. Some see the use of Big Data as a pathway to the elimination or at least great reduction of the need to do traditional probability sampling surveys at all!

However, several large issues do not bode well for a blanket abandonment of traditional surveys in favor of simple analysis of Big Data. One of the most important of these for survey researchers is that Big Data are often secondary data, intended for another primary use. As Lilli Japec et al. indicate, “This means that Big Data are typically related to some non-research purpose and then reused by researchers to make a social observation.”45 Japec et al. relate this to Sean Taylor’s distinction between “found vs. made” data. He argues that a key difference between Big Data approaches and other social science approaches is that the data are not being initially “made” through the intervention of some researcher.46 Japec et al. also highlight other problems related to the nature of such found data that are of concern to survey researchers, including the fact that there often are no informed consent policies surrounding their creation, leading to ethical concerns, and they raise statistical concerns with respect to the representative nature of the data.47

Despite the drawbacks, it is clear that Big Data is likely the 800-pound gorilla in the room of survey research. In the AAPOR Task Force report Big Data in Survey Research, Lilli Japec and her associates describe the nature of this new type of data as transformative.48 The traditional statistical paradigm, in which researchers formulate a hypothesis, identify a population frame, design a survey and a sampling technique, and then analyze the results,49 will give way to examining correlations between data elements in ways not possible before. Whatever methods become the standards for the incorporation of Big Data, it seems that much of the focus in the analytic process is moving away from concentrated statistical efforts after data collection is complete to approaches centered on collecting, organizing, and mining information. As Jules Berman puts it, “the fundamental challenge in every Big Data analysis project: collecting the data and setting it up for analysis. The analysis step itself is easy; preanalysis is the tricky part.”50

Today survey researchers are faced with many issues. Many of these are driven by rapidly changing technology that creates a moving target in the development of data collection approaches. Some of the solutions attempt to take advantage of the new methodologies within a framework of the existing sampling paradigm. The web has become an easily accessible and inexpensive tool for survey delivery, even though a large number of web applications use nonprobability sampling methods, such as certain survey panels, and therefore are suspect in terms of generalizing back to a larger population of interest. With these new technologies come problems that affect the representativeness of sampling when they are simply layered over designs created around different data collection methods. The creation of new platforms for survey delivery requires an examination of alternative approaches.51

Summary

The birth and development of modern computers is closely intertwined with today’s survey techniques.

  • In the early days, general-use computing was dominated by huge, expensive mainframe computers, which were physically sequestered in secure facilities that were accessible only to a select number of individuals.
  • Since the mid-1960s, computer technology has grown at an exponential rate, providing an amazing ability to put more and more data-processing and storage-capacity hardware into smaller and smaller units at lower cost.
    • Transformation in computer technology to these smaller, yet more powerful computers then gave rise to the development of desktop and laptop computing, which, in turn, led directly to the development of a computer-assisted paradigm of survey research.
    • The computer revolution popularized computer-assisted telephone interviewing (CATI) in the mid-1970s and computer-assisted personal interviewing (CAPI) in the 1980s and 1990s.

The rapid miniaturization of memory chips with simultaneous expansion of capacity ushered in a tremendous increase in the use of cell phones and other portable “smart” devices.

  • As computer technology moved from personal computers to smaller handheld devices, the CATI/CAPI survey followed. Today there are MCATI and MCAPI surveys (with the M designating the mobile device nature of these modalities), using platforms that have moved interviewing onto mobile devices.
  • While advances in technology enabling such platforms have provided greater elasticity in conducting interviews and conducting online surveys, they have introduced their own set of coverage, sampling, nonresponse, and measurement problems.
  • Availability and familiarity make the web-enabled smartphone or mobile device a prime modality for a wide range of survey data collection.
  • The survey panel as a web-based survey option is gaining popularity, particularly because of low response rates to traditional telephone surveys and the growing availability of mobile Internet access.
    • Within online panel research, there are two distinct types: probability-based panels and opt-in or access panels.
    • The majority of online research is based on such nonprobability panels.

Internet and mobile technologies are resulting in the capture and use of large databases of information. The term Big Data has become increasingly used to describe the resultant amalgam of information that is available for access through companies, organizations, government agencies, and universities.

  • Big Data problems identified as most important for survey researchers include: (1) Big Data are often secondary data, intended for another primary use (identified as found data); (2) often there are no informed consent policies surrounding their creation, leading to ethical concerns; and (3) there are statistical concerns with respect to the representative nature of the data.
  • Whatever the methods are that become the standards for the incorporation of Big Data, it seems that much of the focus in the analytic process will move away from concentrated statistical efforts after data collection is complete to approaches centered on collecting, organizing, and mining of information.

Annotated Bibliography

Survey Sampling and Technology

Some resources for the impacts of technology on sampling methodologies and modalities include:

  • See AAPOR’s (American Association for Public Opinion Research) 2014 report Mobile Technologies for Conducting, Augmenting and Potentially Replacing Surveys.52
  • Brick provides a good review of the forces now shaping survey sampling in his Public Opinion Quarterly article “The future of survey sampling,” 75, no. 5, pp. 872–888.53
  • Similarly, Courtney Kennedy, Kyley McGeeney, and Scott Keeter provide a 2016 discussion of the transformation of survey interviewing as landlines disappear in “The twilight of landline interviewing.” http://www.pewresearch.org/2016/08/01/the-twilight-of-landline-interviewing/54 (accessed October 2, 2018).

There are many research studies which have compared the results of survey administration using different survey platforms. Here are some examples of the different avenues of this research:

  • Melanie Revilla and Carlos Ochoa discuss differences in narrative questions and responses using PCs and smartphones in their article “Open narrative questions in PC and smartphones: Is the device playing a role?”55
  • For a good review and more global perspective, see Daniele Toninelli, Robert Pinter, and Pablo de Pedraza’s Mobile Research Methods: Opportunities and Challenges of Mobile Research Methodologies.56
  • Tom Wells, Justin Bailey, and Michael W. Link examine differences between smartphone and online computer surveys in “Comparison of smartphone and online computer survey administration.”57
  • Similarly Yeager et al. explore quality differences between RDD (telephone survey) and Internet surveys in “Comparing the accuracy of RDD telephone surveys and Internet surveys conducted with probability and non-probability samples.”58

Online Surveys

In the past decade or so, there has been much discussion regarding online surveys, particularly as they have migrated from PCs to mobile devices (which are rapidly becoming the predominant way to access the web).

  • Baumgardner et al., for example, examine the impacts of the Census Bureau’s move to an online version of the American Community Survey: The Effects of Adding an Internet Response Option to the American Community Survey.59
  • Mick Couper’s book, Designing Effective Web Surveys,60 provides a nice overview of the fundamental issues surrounding the design and implementation of online surveys.
  • Roger Tourangeau et al. similarly explore online surveys with additional insights gained since Couper’s earlier volume in their 2013 book, The Science of Web Surveys.61

Survey Panels

Online survey panels have become one of the most ubiquitous and often controversial elements of the transition of survey research to mobile electronic platforms.

  • Nearly a decade ago, the American Association for Public Opinion Research took on the topic of online panels, exploring the strengths and weaknesses of panel-type surveys and discussing how such panels would likely integrate into the larger realm of survey research. See AAPOR (American Association for Public Opinion Research). 2010. AAPOR Report on Online Panels. Also see Baker, R., S. J. Blumberg, M. P. Couper, et al. 2010. “AAPOR Report on Online Panels,” The Public Opinion Quarterly, 74, no. 4, pp. 711–781.62
  • Similarly, Callegaro et al. visited the online panel issue, focusing on data quality, which has been one of the major concerns of panel surveys, in their volume, Online Panel Research: A Data Quality Perspective.63
  • More data is being accumulated on panels as they migrate to mobile devices. A good example of recent comparative research is Peter Lugtig and Vera Toepoel’s 2016 article: “The use of PCs, smartphones, and tablets in a probability-based panel survey: Effects on survey measurement error.”64

Big Data and Probability Sampling

The availability of massive quantities of secondary data gleaned from the everyday use of the web has both excited and alarmed the survey research community.

  • In 2015, AAPOR (American Association for Public Opinion Research) undertook an extensive review of the impacts of Big Data on survey research, which can be found in the AAPOR Report on Big Data.65 Also see Lilli Japec et al.’s discussion, “Big Data in Survey Research: AAPOR Task Force Report.”
  • AAPOR (American Association for Public Opinion Research) also examined the larger topic of nonprobability sampling in 2013 in the Report of the AAPOR Task Force on Non-Probability Sampling.66
  • Berman provides a more global review on Big Data in Principles of Big Data: Preparing, Sharing, and Analyzing Complex Information.67 (Note: Some of his discussion is presented at a fairly high technical level.)
  • Kreuter’s edited book provides an examination of information coming available through online transactions as it relates to survey research: Improving Surveys with Paradata: Analytic Uses of Process Information.68
  • Lampe et al. presented an interesting paper dealing with the critical issue of trust in the application of Big Data to social research at the AAPOR meeting in 2014: “When are big data methods trustworthy for social measurement?”69
  • Finally, an intriguing look at the impacts of Big Data is provided by Mayer-Schonberger and Cukier in Big Data: A Revolution That Will Transform How We Live, Work, and Think (2013).70