CHAPTER 1

An Introduction to Audience Research

Audiences are the source of the media’s wealth and power. They pay directly for goods and services. And even when audiences choose “free” media, their attention is sold to advertisers for billions of dollars, euros, and yen. Beyond establishing the value of media products, audiences confer social significance on the media through their choices. The programs and websites that succeed in attracting large numbers of followers help set public agendas and shape the cultures in which we live.

However, audiences are elusive. They are dispersed over vast geographical areas, sometimes on a global scale. They are tucked away in homes and businesses, where they move fluidly from one “platform” to the next. For media providers to make sense of their audiences, let alone profit from them, they must be able to see them.

It is audience research that makes them visible. Without it, institutions cannot hope to manage public attention for good or ill. And without an understanding of audience research, media professionals are ill equipped to do their jobs. This research, especially ratings research, is the central focus of this book. In the following pages, we explore audience measurement systems across various countries and what we can learn from these data.

TYPES OF AUDIENCE RESEARCH

To put audience measurement in context, we begin by considering several broad categories of research. These categories are not unique to the study of audiences, nor will we deal with all of them in subsequent chapters. We review them here to provide an overview of research practices, to help readers identify the various motivations and methods of researchers, and to build a vocabulary for talking about the field.

Applied Versus Theoretical

Applied research, sometimes called action research, provides practical information that can guide decision making by describing some phenomenon of interest or by illuminating the consequences of a particular course of action. Applied research is typically concerned with an immediate problem or need, and rarely is there any pretense of offering generalizable explanations about how the world works. Nevertheless, this research can produce useful insights and sometimes forms the basis for more enduring theories about audience behavior.

In media industries, applied research dominates audience analysis. Examples from television include surveys that measure which advertisements are well remembered, which celebrities are well liked, and whether the social media “buzz” about a program suggests high levels of viewer engagement. These insights can affect production and programming decisions. Examples from the Internet include web-based experiments that test the effectiveness of various appeals or offers in getting visitors to click through to a purchase, which in turn can affect the sales of books or DVDs. Of course, both television and websites depend on ratings data to describe the size, composition, and behaviors of their audiences. These become the metrics used to place and evaluate advertising and, as such, are the essence of applied research.

A special type of applied research, sometimes treated as a separate category, is methodological research. This is, basically, research on research. As we explain in the chapters that follow, many audience research companies, like Nielsen or Arbitron, rose to prominence by developing new research methods. They are, after all, in the business of selling research products. Like any self-interested company, they engage in product testing and development to provide their clients with the data they need in a fast-changing media environment. Methodological audience research might include questions like, “How can we measure television viewing more accurately?” or “How should we recruit people into our panels?” or “How can we track users across media platforms?” Many of the answers to these methodological questions are discussed in our chapters on audience data.

Theoretical research tests more generalized explanations about how the world operates. If those explanations, or theories, are broad and well supported by evidence, they can be useful in many different settings. As Kurt Lewin, a pioneering social psychologist, said, “Nothing is as practical as a good theory” (Rogers, 1994, p. 321). Although theoretical research is sometimes conducted in industry, it is more common in academic settings. Examples include experiments designed to identify the effects of watching violence on television or the factors that determine which songs people download from websites. These studies typically go beyond the specific problems of individual organizations.

Neither applied nor theoretical research is reliably defined by the type of method used by the investigator. Surveys, experiments, in-depth interviews, content analyses, and other methods can all serve applied or theoretical purposes. To make matters even more complicated, a specific research project could conceivably serve either purpose depending on who is reading the study and the lessons they learn. This flexibility is probably a good thing, but it does mean that the boundary between applied and theoretical research is sometimes difficult to determine.

Quantitative Versus Qualitative

Industry researchers and academics alike often make a distinction between quantitative and qualitative research. A good deal of ambiguity surrounds the use of these terms. Strictly speaking, quantitative research reduces the object of study to numbers. This allows researchers to analyze large groups of people and to use statistics to manage the data. Qualitative research produces non-numeric summaries such as field notes or comments transcribed from an interview. While qualitative methods allow an investigator to dig deeply into a given topic, it is often hard to generalize the findings to larger populations. Ideally, the two approaches are used in tandem. Qualitative studies provide rich details and unexpected insights, and quantitative studies provide generalizability.

Unlike the differences between theoretical and applied research, qualitative and quantitative categories tend to be associated with particular research methods. Quantitative studies rely heavily on surveys, experiments, and content analyses. These methods identify variables of interest and assign numbers to people, or other units of analysis, based on those attributes. For example, a survey researcher might record people’s ages and keep track of their gender by assigning a “1” to males and a “2” to females. An experimenter might quantify physiological responses like heart rates or eye movements to identify response patterns. Similarly, someone studying political communication might record the number of times each politician is quoted in news reports during a presidential campaign, to identify reporting biases.

Qualitative methods such as group interviews or participant observation usually produce non-numeric results like transcripts or field notes. However, to make sense of these records, a bit of quantification can enter the picture. Investigators sometimes categorize and count (i.e., quantify) their observations. For example, an investigator might want to track the prevalence of ideas or phrases. Thus, the richness of open-ended comments and idiosyncratic behaviors is reduced and summarized in a way that looks like quantitative research.

The distinction between qualitative and quantitative becomes even murkier as the terms are used in industry. Many media professionals equate the term “quantitative research” with “audience ratings.” As we will see in the chapters that follow, ratings act as a kind of “currency” that drives media industry revenues. Any research that does not provide the hard numbers used to value audiences is rather casually referred to as qualitative research, which includes studies that address less routine audience characteristics such as lifestyles, values, opinions, and product preferences. While these data usually do not replace ratings as the currency used to buy and sell media, they are technically “quantitative” because they reduce the characteristics of interest to statistical summaries.

That said, there are many examples of true qualitative work in industry. Focus groups, which involve gathering a small group of people to talk about some topic of interest, are widely used. Krueger and Casey (2000) define this type of study as “a carefully planned discussion designed to obtain perceptions on a defined area of interest in a permissive, non-threatening environment” (p. 5). Focus groups are a popular way to assess radio station formats, news personalities, and program concepts. For example, Warner Bros. routinely tests television pilots using this technique. A skilled moderator probes to determine how prospective audience members react to the various program elements—what works and what does not. These insights can be used to inform decisions about character development, plot lines, and programming.

In the past three decades, another family of qualitative approaches, broadly termed audience ethnography, has gained in popularity. Some ethnographies are very much like focus groups. Some involve nonstructured, one-on-one interviews with media users. Others involve studying what people, like fans, are saying on social media sites. Still other ethnographies introduce observers into places of interest like households or fan conventions. In 2008, the Council for Research Excellence (CRE) funded a study in which trained observers followed people throughout an entire day to better understand how they used different media platforms like televisions, computers, and mobile devices. At the extreme end of the spectrum, ethnographers might immerse themselves in the site of study for months or even years. The best ethnographies can produce a depth of understanding that is hard to match with quantitative methods.

Micro Versus Macro

Audience research can operate at different “levels of analysis.” Social scientists often draw a distinction between micro- and macro-level research. Micro-level studies, like ethnographies, look at audiences from the inside out, by adopting the perspective of an individual audience member. Macro-level studies look at audiences from the outside in, to understand how they behave as large, complex systems. Like the other distinctions we have reviewed, telling the difference can be tricky because macro-level systems, like markets or social networks, are assembled by aggregating individual media users. Knowing when you have moved from one level to the next is not always obvious. Still, it is an important distinction to keep in mind.

Micro-level studies focus on individuals—their traits, predispositions, and media-related behaviors. They frame research questions on an intuitively appealing, human scale. It is natural for us to think about audiences in this way, because we all have experience as media users and, through introspection, can imagine what might explain someone else’s actions. Micro-level research often operates on the assumption that if we could only figure out what makes individual media users tick, then we will understand audience behavior. After all, audiences are just collections of people.

Focusing on individuals, though, causes researchers to turn a blind eye to factors that are not person specific. We have known for a long time that program-scheduling practices can affect program choices, sometimes overriding individual program preferences. And now, with the growth of social media, we see patterns of media consumption, like “herding,” that are not easily explained by individual traits. As Duncan Watts, a noted sociologist and researcher at Microsoft, observed, “You could know everything about individuals in a given population—their likes, dislikes, experiences, attitudes, beliefs, hopes, and dreams—and still not be able to predict much about their collective behavior” (2011, p. 79).

But most audience research, especially ratings research, is about collective behavior. Audience analysts usually want to make statements about what large numbers of people have done or will do. They generally do not care if Bob Smith in Cleveland sees a newscast, but they do care how many men aged 35 to 64 are watching. This interest in mass behavior, which is typical of macro-level research, turns out to be a blessing. Trying to explain or predict how any one person behaves, on a moment-to-moment, day-to-day basis, can be an exercise in frustration. But when you aggregate individual activities, the behavior of the mass is often quite predictable—and the business of selling audiences to advertisers is built on predictions.

This science of studying large populations has been called statistical thinking. It was developed in eighteenth-century Europe by, among others, insurance underwriters. Consider, for example, the problem of life insurance. Predicting when any one person will die is almost impossible, but if you aggregate large numbers, you can estimate how many people are likely to expire in the coming year. You need not predict the outcome of each individual case to predict an outcome across the entire population. In the same sense, we do not need to know what Bob Smith will do on a given evening to predict how many men his age will be watching television.
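The logic of statistical thinking can be illustrated with a small simulation. In the sketch below, the population size, viewing probability, and number of nights are hypothetical: each person’s choice on any one night is a coin flip we cannot predict, yet the nightly totals cluster tightly around the expected value.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

POPULATION = 10_000
P_WATCH = 0.30  # assumed chance that any one person watches on a given night

# Simulate several nights: each person independently watches or not.
nightly_totals = []
for night in range(5):
    watching = sum(1 for _ in range(POPULATION) if random.random() < P_WATCH)
    nightly_totals.append(watching)

# No individual outcome is predictable, but the aggregate is stable:
# every total lands close to the expected 3,000 viewers.
for night, total in enumerate(nightly_totals, 1):
    print(f"Night {night}: {total} of {POPULATION} watching")
```

The same reasoning underlies the insurance example: uncertainty at the individual level washes out in the aggregate, which is why mass audience behavior can be predicted and sold.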

When we focus on macro-level phenomena, media use becomes much more tractable. We can identify stable patterns of audience size and flow. We can develop mathematical equations, or models, that allow us to predict media use. Some have even gone so far as to posit “laws” of audience behavior. These laws, of course, do not bind each person to a code of conduct. Rather, they are statements that mass behavior is so predictable that it exhibits law-like tendencies. This kind of reasoning is typical of most commercial audience research, and it underlies many of the analytical techniques we discuss in later chapters.

There is no one right way to study audiences. Like qualitative and quantitative methods, each level of analysis has its virtues and limitations. And, as was true with our discussion of methods, using both micro- and macro-level approaches generally leads to a deeper understanding of media use.

Syndicated Versus Custom

The final distinction we will draw is between syndicated and custom research. Syndicated research offers a standardized product that is sold to multiple subscribers. Audience ratings reports, for instance, serve many users in a given market. Custom research is tailored to meet the needs of a single sponsor.

Syndicated research is common anywhere audiences are sold to advertisers, which, these days, is just about everywhere. Table 1.1 lists major suppliers of syndicated audience research around the world, as well as the kinds of products they sell. This list is representative rather than comprehensive. You will note it includes many large international companies that operate in dozens of countries. Many use sophisticated, sometimes expensive, techniques to electronically monitor digital media use. These data are bought by media industries and usually are not available to the general public in much detail. Several companies also provide comparative media reports that track advertising placement and how much it costs to reach listeners or viewers in various markets. A growing number of large media operators, like Google and Facebook, provide data about their own users. All in all, media industries are awash in numbers. And, as the digital media environment becomes more pervasive and complex, the ability to manage and interpret those numbers becomes increasingly important.

Syndicated research has several advantages relative to other kinds of research. Because the cost of a syndicated study is shared by many subscribers, each user pays just a portion of the total. The methods that syndicators use to collect data are generally well understood and sometimes subject to independent audits. They are further motivated to be objective because their reports often serve clients with competing interests. The semi-public nature of the documents makes it harder for any one entity to misrepresent the research, while the standardization of report formats facilitates routine uses of the data. Although they are imperfect, syndicated data, like audience ratings, often become the official numbers used to transact business.

Custom research is designed to meet the needs of a particular sponsor and might not be shared outside the sponsoring organization. These studies could be commissioned from specialists, like news and programming consultants, or conducted by an in-house research department. Many major market radio stations, for example, track public tastes in music through telephone surveys. Researchers call a sample of potential audience members and ask them to listen to the “hook,” or most memorable phrase, of several popular songs. Stations use this call-out research to adjust their “playlists.”

TABLE 1.1
Major Suppliers of Syndicated Audience Measurement Worldwide

Arbitron
www.arbitron.com
Best known as the supplier of radio ratings in the United States, Arbitron is an international marketing research firm measuring radio, TV, cable, and out-of-home media. Its “portable people meters” are used for audience measurement in North America, Europe, and Asia.

Audit Bureau of Circulations
http://www.accessabc.com/index.html

ABC verifies the circulation claims of print and interactive media. It audits website traffic and, in conjunction with Scarborough, offers reports on newspapers’ print and online readership.

comScore
http://www.comscore.com

This is an international research firm operating across 170 countries. It is well known for its Media Metrix reports, which measure Internet use by combining data from a 2-million-person global panel with website-based data.

CSM Media Research
http://www.csm.com.cn/

A joint venture between CTR Market Research and Kantar Media, CSM provides television and radio ratings in China and Hong Kong. It operates a large audience panel estimating the behavior of over 1 billion people.

GfK Group
http://www.gfk.com/group/index.en.html

A large marketing research company that provides media measurement in over 20 countries, it owns Telecontrol, which offers electronic TV audience measurement in several countries including Germany, France, and India. It also owns Mediamark Research (MRI), which publishes a national survey of U.S. consumers including product use, demographics, and general measures of print and electronic media use. MRI sells a service that “fuses” their data with Nielsen’s national television panel.

Hitwise
http://www.hitwise.com/us

Hitwise aggregates data from Internet service providers (ISPs) to provide a range of standard metrics about websites including page requests, visits, average visit length, search terms, and behavior. This approach yields large samples including 25 million people worldwide and 10 million in the United States. It is a part of Experian Marketing Services.

IBOPE
http://www.ibope.com.br

This Brazilian multinational marketing and opinion research firm operates IBOPE Media, which provides television audience measurement in 13 Latin American countries and Internet measurement in conjunction with Nielsen Online.

Ipsos
http://www.ipsos.com

Ipsos is a global marketing research firm that measures audience size and composition across media platforms. Among other things, they measure audiences for print media in 59 countries and radio in 24 countries.

Kantar Media
http://www.kantarmedia.com

Kantar Media includes what was once known as TNS media. It offers a range of TV, radio, and Internet audience measurement services in more than 50 countries.

Knowledge Networks, Inc.
www.knowledgenetworks.com

Knowledge Networks conducts both custom and syndicated reports, including MultiMedia Mentor, which surveys media use across eight different platforms and is based on a panel of over 50,000 people in the United States.

Marketing Evaluations Inc.
www.qscores.com

Best known for “Q Scores” that measure the public’s familiarity with and liking of TV programs, brands, and celebrities alive and dead, Marketing Evaluations also has a Social TV Monitor that measures viewer involvement with prime-time TV programs.

Mediametrie
http://www.mediametrie.com

A French audience measurement firm owned by the media and advertisers, it tracks radio, Internet, and cinema audiences and produces television ratings using peoplemeters.

Nielsen
http://nielsen.com/us/en.html

The largest marketing research firm in the world, Nielsen is best known as the provider of U.S. TV audience ratings in both national and local markets. Nielsen operates in some 100 countries, providing audience measures for television, radio, music, movies, books, DVDs, video games, mobile devices, and online activities including website visits and the “buzz” on social media platforms.

Rentrak
http://www.rentrak.com

An audience measurement and research company, Rentrak tracks movie box office numbers in over 25 countries, mobile media use, and television viewing using “set-top box” data.

Roy Morgan Research
http://www.roymorgan.com/company/index.cfm

An Australian market research and opinion polling firm, Roy Morgan Research surveys consumers in Australia and New Zealand about their lifestyles, product purchases, and media consumption habits.

Scarborough Research
www.scarborough.com

Scarborough Research provides local market reports in over 75 U.S. cities and measures demographics, shopping, lifestyle, and use of electronic, print, and out-of-home media. It is owned by Arbitron and Nielsen.

Simmons
http://www.experian.com

Simmons publishes a national survey of over 25,000 U.S. respondents with demographics, product use, and general measures of print and electronic media use. Data can be fused with Nielsen’s national TV ratings. It is a part of Experian Marketing Services.

Synovate
http://www.synovate.com

A large marketing research firm based in the Netherlands, Synovate publishes the European Media & Marketing Survey (EMS), which measures TV, print, and website use across 20 European countries and is useful in pan-European media campaigns. It also publishes PAX, which similarly surveys the Asia Pacific region and Latin America. Synovate is now part of Ipsos.

The Media Audit
http://www.themediaaudit.com
The Media Audit issues a variety of reports in over 80 U.S. markets. It conducts telephone surveys measuring audience levels and audience characteristics for radio stations, local TV news programs, cable TV viewing, daily newspapers, weekly and monthly publications, the Internet, local media websites, and outdoor media.

Another way to test the audience appeal of new programs is to use a program analyzer, a device that CBS developed in the late 1930s. Researchers bring respondents into an auditorium and ask them to listen to programs and then vote at regular intervals on what they like and dislike. This tradition of audience research lives on in a large CBS facility in Las Vegas called “Television City.” In addition to program analyzer studies, Television City conducts focus groups and gauges viewers’ reactions to programs by measuring eye movements and brain wave activity. Other large media corporations, like Time Warner, operate research labs that engage in similar kinds of activities.

Custom research can be very valuable to its sponsors, but often it goes no further. Those who conduct call-out or program-analyzer research would be loath to share the results with anyone outside their organizations. And if they did, the information might be regarded with some suspicion. Outsiders could have a hard time verifying the methods and might well assume that the sponsor has a self-serving motive for promoting the results.

Although most of the research conducted in colleges and universities is customized, it is generally referred to as original or primary research. When the results of academic studies are published in scholarly journals, they are reviewed by other experts in the field. This process provides some assurance that the authors used defensible research procedures. Occasionally, academic or university research centers are commissioned by industry to perform customized studies. A university affiliation may contribute to greater public credibility.

The attributes of both syndicated and customized research are sometimes combined in hybrid studies. Research syndicators will often produce standardized reports, but they still have vast stores of raw data that could be analyzed in ways that are of particular interest to a single client. It is common, these days, for syndicators to provide paying customers with online access to their databases so they can produce “customized” reports. Because they are based on existing data, these studies are called secondary analyses.

For example, many companies around the world measure television audiences using “peoplemeters.” These devices record who is watching television and what they are watching on a minute-by-minute basis. These are the data used to estimate program ratings. But a client might want to know how their audience moves from one program or channel to the next. With access to the peoplemeter database, it is not hard to conduct studies of “audience flow” that could provide that information. As we will see in chapter 8, this information can be useful to programmers when they make scheduling decisions. Similarly, companies that measure website audiences not only report the number of unique visitors to various sites; they often have online tools their clients can use to identify where those visitors come from and where they go when they leave the site.
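A simple audience-flow analysis of this kind amounts to counting channel-to-channel transitions in minute-by-minute tuning records. The sketch below uses a made-up three-household panel and invented program names; real peoplemeter databases are vastly larger, but the logic is the same.

```python
from collections import Counter

# Hypothetical minute-by-minute tuning records: one channel per minute, per household.
panel_records = {
    "hh01": ["NewsHour", "NewsHour", "Drama", "Drama"],
    "hh02": ["NewsHour", "Sitcom", "Sitcom", "Drama"],
    "hh03": ["Sports", "Sports", "Sports", "Drama"],
}

# Count transitions between consecutive minutes to trace how the audience flows.
flow = Counter()
for channels in panel_records.values():
    for prev, nxt in zip(channels, channels[1:]):
        if prev != nxt:  # only count actual channel changes
            flow[(prev, nxt)] += 1

for (src, dst), n in sorted(flow.items()):
    print(f"{src} -> {dst}: {n} household(s)")
```

Aggregated over thousands of households, such transition counts show a programmer which lead-in programs deliver audiences to which follow-on programs.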

Hybrid studies have a number of advantages. They are certainly in the syndicator’s interest since they can generate additional revenues while requiring very little additional expenditure. Clients may also find that they are less expensive than trying to conduct original custom research to answer the same questions. Moreover, because the results are based on syndicated data, they have the air of official, objective numbers.

For all these reasons, secondary analyses of existing data can be enormously valuable. But they must also be performed with caution and an understanding of what is sound research practice. Quite often, when data are sliced up in ways that were not intended at the time of collection, the slices become too small to be statistically reliable. We will have much more to say about the problems of sampling and sample sizes in chapter 3.
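The reliability problem with thin slices of data can be made concrete with the standard error of an estimated rating. The sketch below uses the familiar formula for the standard error of a proportion; the panel sizes and the 10 percent rating are hypothetical.

```python
import math

def rating_standard_error(p: float, n: int) -> float:
    """Standard error of a rating estimated as a proportion p from a sample of size n."""
    return math.sqrt(p * (1 - p) / n)

# A 10% rating estimated from the full panel versus ever-thinner demographic slices:
for n in (5_000, 500, 50):
    se = rating_standard_error(0.10, n)
    print(f"n={n}: 10% rating +/- {2 * 100 * se:.1f} points (approx. 95% interval)")
```

Cutting the sample from 5,000 to 50 inflates the margin of error roughly tenfold, which is why secondary analyses of narrow subgroups can produce numbers too unstable to support business decisions.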

RATINGS RESEARCH

The type of audience research that is at the heart of this book is ratings research. As such, it is worth saying a few words about what ratings research is and why it is so important.

What Is It?

Historically, the term ratings has been used as a kind of shorthand for a body of data on people’s exposure to electronic media. Strictly speaking, a rating is the percentage of the entire population that sees or hears something, and it is just one of many audience summaries that can be derived from those data. In the United States, the practice of reporting program ratings goes back to the 1930s, when radio needed to authenticate its audience to advertisers.
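Because a rating is simply exposure expressed as a percentage of the total population, the arithmetic is easy to show. The market size and audience figure below are hypothetical.

```python
def rating(viewers: int, population: int) -> float:
    """Return a rating: the exposed audience as a percentage of the entire population."""
    return 100.0 * viewers / population

# Hypothetical market: 500,000 TV households, 60,000 of them tuned to a program.
print(rating(60_000, 500_000))  # a household rating of 12.0
```

Note that the denominator is the entire population, not just those using the medium at the time; that narrower base yields a different summary, the audience share.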

Ratings research can be described using the categories we have just reviewed. It is always done for some applied purpose, like selling audiences to advertisers or making programming decisions. However, as we noted, the data that are generated for an applied purpose can also be used to test various theories of media use. Ratings research is always quantitative because its primary purpose is to describe what large populations are doing. For that same reason, it is almost always pitched at the macro level of analysis. When the data are disaggregated, though, it is possible to track individual users to learn things such as what media “repertoires” they use on a day-to-day basis. And ratings research is generally provided by an independent, “third party” syndicator, although that practice is beginning to loosen.

The rapid growth in digital media that are “served” over networks has affected the historic nature of ratings research in two ways. First, it has expanded the list of firms that collect and report audience information. Second, it has changed the kinds of data that are being collected.

Often, the companies that are gathering these novel sorts of data are not traditional research syndicators. For example, Facebook is a media outlet that makes money by selling advertising. At this writing, Facebook has in excess of 800 million users worldwide. Users interact with friends and discuss things, and in the process they divulge a lot of information about themselves. Facebook uses that data to sell highly targeted advertisements. Similarly, Google collects enormous amounts of information and sells advertising. These companies make some of their data publicly available with services like “Facebook Page Insights” and “Google Analytics.” Occasionally, they cooperate with traditional syndicators like Nielsen. Still other firms harvest what is available on the web and provide a variety of specialized audience measures. But to the extent that these data come from the media themselves, they are not attributable to the “objective” third parties that have traditionally produced audience ratings.

With these new sources of data, the very notion of what might constitute a rating could change. Since the early twentieth century, ratings research has measured exposure to media: first radio, then television, and now Internet. Of course, whether people see a program or visit a website is not the only thing an advertiser or programmer might want to know. For a long time, ratings users have also wondered whether audiences were “engaged” with what they saw. To address that question, social media like Facebook and Twitter are now monitored to track the amount and type of discussion about programs, products, and personalities. Indeed, the enormous amounts of data being collected by the servers that power digital networks raise the possibility that a whole new array of media ratings might be upon us. This has caused one commentator to predict a “post-exposure audience marketplace” that would offer ratings users a “basket of currencies” (Napoli, 2011, p. 149).

Unfortunately, the wealth of possibilities presents a problem. For any measure to work as a currency, people need to agree that it will be the coin-of-exchange. The new postexposure marketplace offers so many alternatives that agreement is often hard to find. For example, many believe engagement could be as valuable a metric as exposure. To build consensus about what exactly it means, the Advertising Research Foundation gathered industry experts and produced a white paper on the subject. After careful deliberations, they identified no fewer than 25 different definitions of “engagement” (Napoli, 2011)—and those definitions do not exhaust the possibilities. Without a shared understanding of what is to be measured and how those metrics are to be used, it is difficult for newer types of ratings to gain traction. While they can undoubtedly enrich our understanding of media use and inform marketing and programming decisions, we suspect they will complement, rather than replace, measures of exposure.

What, then, is the proper scope of a book on “ratings analysis”? Our approach continues to emphasize measures of exposure. We do this for three reasons. First, there are more data on exposure than ever before. Changing the channel on a digital set-top box, clicking on a web page, downloading a song, or streaming a video can all be construed as measures of exposure. While these are not simple, uniform behaviors, they are relatively straightforward compared with concepts like “engagement.” Because they are easy for people to understand, they form the basis of useful metrics. Second, inventive analyses of exposure, like noting how much time people spent on a web page or tracking their media choices over time, can often reveal their loyalties or levels of engagement. Third, measures of exposure are still the currency media industries use to transact business. For any media product or service to be successful, it must first attract an audience. Once you know who is out there, you can try to do something more with them, like sell them an idea or a product. But the process generally begins with documenting and understanding patterns of exposure. That is the central focus of ratings research.

Why Is It Important?

It should be apparent by now that ratings research is important to many people in the media industries. In the United States, ratings guide the allocation of some $150 billion in television advertising alone. Worldwide, that number is predicted to exceed $500 billion by 2015. Ratings research is also valuable to people who program stations and television networks, develop websites, assess the value of media properties, and craft public policy. In the chapters that follow, we will discuss how audience ratings are used to support all of these activities. But a simple list of the people who depend on audience data to do their jobs understates the larger social significance of ratings research. To understand why, we need to appreciate the world’s growing dependence on electronic media and how audience measurement shapes those systems.

We noted that ratings research began in the 1930s. Listening to radio broadcasts quickly became a popular pastime. In the United States, advertising provided the money needed to operate the industry. However, both broadcasters and advertisers needed ratings data to make that system work. European broadcasting began at about this time, although, initially, most European countries relied on government funding for radio and then television. The rest of the world has followed suit. Today, China has the world’s largest television audience, with well over 1 billion viewers. It is also the world’s third largest advertising market. India has gone from having five television channels in 1991 to more than 500 active channels (FICCI, 2011, p. 18). And with the introduction of new media platforms like the Internet and smartphones, everyone is consuming more digital media. The average American spends almost 5 hours a day watching television and another hour on the Internet. And half of Americans now watch at least some video online (Nielsen, 2011). Although a few media outlets are still state supported, most media are funded by some combination of advertising and direct consumer payments. But regardless of the source of funding, they all depend on market information to operate.

Academicians sometimes call the systems that produce these data “market information regimes.” According to sociologists Anand and Peterson, “Market information is the prime source by which producers in competitive fields make sense of their actions and those of consumers, rivals, and suppliers” (2000, p. 217). Ratings data are a prime example of such market information. They allow media institutions, public or commercial, to make sense of their audiences and act accordingly. Without such information, they are blind. But like all market information regimes, ratings research is never neutral. Although the best ratings suppliers conform to well-established research practices, they all make decisions about exactly what to measure and how data are to be gathered and reported. Those decisions have consequences, and they almost always operate to the advantage of some and the disadvantage of others. For example, in chapter 2, we will describe the controversy that erupted when Nielsen began replacing diaries with peoplemeters in local U.S. markets. Broadcasters were generally unhappy with the change and argued that it would make minority programming less viable.

That argument proved to be more a rhetorical strategy to delay the implementation of peoplemeters than a real problem with the research. But it illustrates that these arcane audience statistics can have consequences beyond their seemingly narrow purpose. Any change in how you produce audience ratings can have ripple effects throughout the system. The New York Times explained it this way:

Change the way you count, for instance, and you change where the advertising dollars go, which in turn determines what shows are made and what shows then are renewed. Change the way you count, and potentially you change the comparative value of entire genres (news versus sports, dramas versus comedies) as well as entire demographic segments (young versus old, men versus women, Hispanic versus black). Change the way you count, and you might revalue the worth of sitcom stars, news anchors and—when a single ratings point can mean millions of dollars—the revenue of local affiliates and networks alike. Counting differently can even alter the economics of entire industries, should advertisers … discover that radio or the Web is a better way to get people to know their brand or buy their products or even vote for their political candidates. Change the way you measure America’s cultural consumption, in other words, and you change America’s culture business. And maybe even the culture itself. (Gertner, 2005, p. 36)

America is not unique in this regard. As more and more countries use ratings research to understand and manage their own media systems, the ripple effects of audience measurement will be felt around the world.

Audience ratings loom large for virtually everyone with a stake in the operation of electronic media. They are the tools used by advertisers and broadcasters to buy and sell audiences. They are the report cards that lead programmers to cancel some shows and clone others. Ratings are road maps to our patterns of media consumption and, as such, might be of interest to anyone from an investment banker to a social scientist. They are the object of considerable fear and loathing, and they are certainly the subject of much confusion. We hope this book can end some of that confusion and lead to an improved understanding of audience research and the ways in which it can be used.

The rest of the book is divided into three parts. The first considers the audience data themselves by reviewing who collects them and the methods used to do so. The second provides a way to understand and analyze audience data, including a general framework for explaining audience behavior and a review of useful analytical techniques. And the final part examines the many applications of audience research and how different users, like advertisers and programmers, tend to look at the data.

RELATED READINGS

Balnaves, M., O’Regan, T., & Goldsmith, B. (2011). Rating the audience: The business of media. London, UK: Bloomsbury.

Beville, H. (1988). Audience ratings: Radio, television, cable (Rev. ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Easley, D., & Kleinberg, J. (2010). Networks, crowds, and markets: Reasoning about a highly connected world. Cambridge, UK: Cambridge University Press.

Ettema, J., & Whitney, C. (Eds.). (1994). Audiencemaking: How the media create the audience. Thousand Oaks, CA: Sage.

Gunter, B. (2000). Media research methods: Measuring audiences, reactions and impact. London, UK: Sage.

Krueger, R. A., & Casey, M. A. (2000). Focus groups: A practical guide for applied research (3rd ed.). Thousand Oaks, CA: Sage.

Lindlof, T. R., & Taylor, B. C. (2011). Qualitative communication research methods (3rd ed.). Thousand Oaks, CA: Sage.

Napoli, P. M. (2011). Audience evolution: New technologies and the transformation of media audiences. New York: Columbia University Press.

Webster, J., & Phalen, P. (1997). The mass audience: Rediscovering the dominant model. Mahwah, NJ: Lawrence Erlbaum Associates.

Wimmer, R., & Dominick, J. (2010). Mass media research: An introduction (9th ed.). Belmont, CA: Wadsworth.
