Chapter 14

The Human Advantage

Leveraging the Power of Predictive Analytics to Strategically Optimize Social Campaigns*

B.H.B. Honan; D. Richer    KRC Research, New York, NY, United States

Abstract

KRC Research has developed a series of proprietary social and digital media content analytics methods that combine human intelligence with advanced analytical tools. By randomly sampling social media content, coding the content for qualitative dimensions and format attributes, and building statistical models from the results, our work for a range of clients has discovered important strategic insights into the why behind social media sentiment and how clients drive forward successful campaigns. KRC Research helps companies better engage consumers with this method. For example, for one client, we developed a framework to code 1500 pieces of Twitter and LinkedIn text and multimedia content. We then created a predictive model to show which elements of posts and content best engaged the company’s audience to help optimize its messaging. This chapter explains how organizations can use and learn from our dynamic approach.

Keywords

Coding framework; Social media analytics; Social media listening; Predictive modeling; Subgroup analysis; Sentiment analysis; Data coding; Social media analysis; Analytics; Human analysis; Measurement; Reporting; Strategic planning

1 Introduction

In today’s increasingly digitally connected world, content publishing has become essential to brands for their social media marketing. Social media content can be vital fuel, creating a gravitational pull that draws customers and stakeholders in to join and engage with a brand, product, or service in new and interesting ways.

Historically, marketing was a one-way street: brands and companies broadcast messages to their customers, who sat at home watching commercials on television, receiving direct mail, or seeing a point-of-sale display. Consumers could experience a brand when they bought it, or if they knew someone who used the product, but otherwise there was very little interaction between the consumer and the brand, leaving a gap between the brand and the prospective customer’s experience.

Now companies and brands are using social media as a key mechanism to stay involved in the ongoing conversation in their particular space, whether that space is health insurance or consumer technology. To that end, brands today are creating social media content and driving forward thought leadership conversations. They are trying to relate to consumers and connect with them in a way that is no longer a monologue but a two-way street, largely through social media platforms such as Twitter, Facebook, Vine, Instagram, and Pinterest, which people use to converse and visually communicate with their family and friends. In this way brands are now reaching people—and their friends, families, and peers—where they live and breathe, and are becoming much more a part of consumers’ daily lives and daily conversations.

However, as more resources are dedicated to this space, marketers are growing frustrated with traditional automated analytical tools, software, and techniques that aim to measure impact. Marketers see many of these automated tools as too often simply “data dumps”: a lot of information is provided, yet there are no actionable or prescriptive insights into the target audience. Today, the number of scorecards, dashboards, and other technologies has exploded. But what do they really tell us about customers? What insights do they actually provide beyond a snapshot?

On the basis of our work, we are hearing the same questions. Automated tools on their own are not providing what is needed; namely, a strategy to drive social campaigns. We live in the era of big data, but marketers do not have the right data.

We have therefore concluded that sentiment analysis is at an important inflection point: chief marketing officers and chief communications officers need the tools to understand the whole picture about their customers and audience. They increasingly want to know the reasons behind consumer behavior online, not simply what their customers do, but why they do it.

The automation of sentiment and content analysis is in its infancy but as more marketing dollars are dedicated to social and digital programs, marketers need far better analytics to provide a clear strategy for where to go and how to get there. What is missing in many social media campaigns is a deep strategic approach and road map for how to develop content that drives the most effective engagement.

Content and data-driven marketing have become mission critical because marketers are actively searching for insights and information in all different formats. Data-driven marketing offers companies a way to analyze the way the customer interacts with the brand and use that to design strategic communications that have a much higher likelihood of engaging consumers. Ultimately, data-driven marketing is a means to satisfy customers’ desires—often through direct communication with the customers.

While understanding sentiment is key, a significant amount of social media content still does not perform well. A surprising number of social media campaigns are simply not breaking through and being heard, so they end up adding to the clutter of the digital space. Businesses are thus missing an opportunity to share their message and get the word out about their brand, and clients are missing an opportunity to grow their brand and improve their reputation.

Even more concerning, marketers are in many cases being misled by false sentiment data and are creating the wrong content and the wrong campaigns. It is important for marketers to know not only whether content is performing but also what they can do or change to improve performance. Often, social media analytics fails to improve performance because the algorithms generate misleading counts, and because counting alone does not offer any explanation as to the why. To provide an explanation, methods that draw statistical connections and test hypotheses are required.

If we acknowledge that creating content for its own sake does not make sense, we must then ask: what does effective social media content look like? What is the right strategy for creating it? The growing consensus is that it is crucial to move beyond simply counting likes, shares, and retweets to understand the true sentiment behind these numbers. KRC Research has developed a way to marry social media measurement with human intervention that can transform data into answers, telling us not just what is working but why, and what might work better.

2 The Current Philosophy Around Sentiment Analysis

Part of the answer for marketers is smart, actionable analytics, which connects content and engagement to a business or communications goal or objective. Smart analytics matters because it helps in devising and developing strategies that will engage people. Optimizing social media marketing strategy is very important to brands so that they can best connect with consumers and achieve their organization’s mission and goals. More should be done to help marketers make decisions so that they are not wasting time and resources developing suboptimal content.

At the same time, companies do not need to conduct a “census,” meaning they do not need to analyze every single piece of content individually, which is neither cost-efficient nor time-efficient. Yet far too often the philosophy in social media analytics is that only a census is sufficient: every single piece of social media content must be studied and analyzed. That means deferring to the computer software and taking humans largely out of the equation.

Automated listening tools and dashboards are powerful and impressive, with the capacity to code and analyze hundreds of millions of pieces of content in a matter of minutes. However, with technology in the driver’s seat, this type of analysis does not always equal strategic campaign planning or actionable insights. While automated dashboards are necessary, they are not sufficient in and of themselves because they cannot tell marketers everything they need to know. In fact, this philosophy carries many pitfalls for companies, many of which are being overlooked.

Coding errors are chief among these pitfalls. Computers are fallible, and bad data ensures bad analysis. Computers do not understand language nuances. With sentiment coding, for example, there is no way to determine whether “this pizza is hot” means that it is at a high temperature and thus might burn your mouth, that it is spicy, or that it is popular and desirable. Yes, algorithms can learn over time; if the statement is qualified with context, as in “this pizza is hot, so I need to get a glass of water,” a computer can be trained to understand it. Yet despite the fact that computers can learn, social monitoring platforms consistently make coding errors. For this reason, we believe that automated monitoring alone is an insufficient measurement if the goal is to improve engagement, because it does not result in a strategy for engagement.

Monitoring and listening can tell you that, for example, in the last hour 1000 people retweeted a specific keyword, while in the hour before only 900 people did. If a brand is trying to run a successful campaign, what does this information really tell it? How should it be interpreted, and what is the brand supposed to do in response? In addition, all too often, monitoring is an afterthought and is not integrated into a campaign’s strategic direction. Monitoring is not leading strategy the way it should.

The ever-popular and often-used dashboard is not by itself a strategy creation tool either. Dashboards belong in the category of monitoring and listening. A dashboard is essentially a data dump. It allows readers to get an at-a-glance snapshot of important data but it cannot tell clients how and what they need to do to optimize their social media marketing strategy. See Fig. 14.1 for an example.

Fig. 14.1 Real-world example of a dashboard that shows spikes in mentions over time but does not provide any context for this information.

3 KRC Research’s Digital Content and Sentiment Philosophy

While unactionable data and information abound, new techniques that combine human intelligence with the right kind of advanced analytical techniques offer considerable promise and point to where the industry must go next. KRC Research’s vision is to evolve traditional approaches away from research programs built on endless static scorecards and dashboards toward a dynamic technique designed to iteratively improve our understanding of the sentiment underlying social media posts, a process that reveals actionable insights and informs strategic marketing.

How do we do this? In a word, people. Marketers need smart, analytical people working side by side with them to build a strategy and a real game plan for effectively engaging with their target audience and ensuring it is executed efficiently. Existing automated social listening tools and software monitor sentiment and count likes and tweets, but, with technology in the driver’s seat, automated analyses do not always yield more than basic insights. To accurately analyze the conversation around a given topic on social channels, we need humans. Humans can interpret social media context and dual word meanings in ways that machines often cannot. And humans can code and categorize content so that it can be studied and analyzed in ways that computers cannot match.

KRC Research’s four-pillar social media and sentiment philosophy centers on the basic premise that we can and must treat social data in the same way we treat traditional quantitative research and analysis. Social media posts demonstrate opinions and behavior just as survey responses do, and with the right curating, a collection of tweets can be analyzed to provide meaningful and nuanced findings.

The four pillars of KRC Research’s philosophy are:

1. Robust pretesting is crucial—not optional—to eliminate work and ideas that are of little or no use.

2. We must continuously and iteratively learn how to get better.

3. We must use scientific sampling rather than reviewing every piece of content.

4. We must build predictive models to increase the likelihood of driving desired actions and outcomes in social media.

The resulting output for our clients is a clear and precise strategy, not guesswork, that helps us identify the content that is most sharable and most engaging, and that moves the brand or company forward. Let us look at each of these pillars in more detail.

3.1 Pretesting Is Crucial

We often see programs that are creative and have an interesting tag line, along with flashy apps, great tools, and compelling marketing collateral. Yet simply developing a campaign, rolling it out, and hoping it works is not a wise expenditure of money and resources. It is not enough to design a campaign, no matter how brilliant it is.

KRC Research believes in creating smaller bits of social media content that can be pushed out in advance of a campaign to see if they have resonance and then learn iteratively from them. One tweet or Facebook post may not have all the creative assets behind it, nor does it represent the fully developed campaign, but it is still sufficient to test the waters. This is a very simple, easy step that does not require a lot of time or cost, yet a lot of companies are skipping it. They are not doing the initial groundwork to figure out if an idea really works and if they are on the right track.

One way in which KRC Research tests the waters for clients is with a proprietary tool we developed called the Social Sandbox. This is a Facebook-like, customizable secure platform that lets us “road test” potential ideas in a safe environment. By recruiting frequent social media users as we would a focus group, and asking them to browse and comment on the conversation, we can determine which communication platforms elicit a strong response from the target audience and which individual messages are particularly successful in driving engagement.

Respondents are able to “like,” share, and comment on as many posts as they wish. Results are then tallied, detailing which messages resonate most strongly. This is a very simple step, yet it is very effective in determining whether a campaign or communication will be widely shared or fall flat.

Clients can use the Social Sandbox to test ideas, campaigns, and programs in advance before they create content. For instance, if Major League Baseball wanted to develop a campaign to promote the World Series, it could talk about sharing memories with your children and your experiences growing up watching baseball, or it could talk about bragging rights; for example, “I was at the game and I saw history in the making.” But which approach will work best? Which strategy is the right one? What tactics will most effectively drive the greatest degree of engagement? The Social Sandbox will tell you.

We have also used the Social Sandbox to stop campaigns in the testing stage because they simply were not working, and in turn we have seen campaigns that tested well go on to be highly successful for our clients; in one case, for example, testing revealed that people wanted to be involved in the call to action. The Social Sandbox lets us continuously learn how to improve so we can advise our clients when they need to course correct a campaign.

3.2 Continuously Learn How to Improve

Digital analytics and sentiment analysis require that companies not rest on their laurels. In traditional advertising campaigns, companies use time-tested tools such as surveys and focus groups to gauge how their target audience responds to an ad, and how and where campaigns need to be adjusted and course corrected. We believe companies should apply this same thought process to the digital space to refine what they are doing and optimize the social media content they are putting out.

Our work for a life insurance organization is a good example of this practice (see the case study at the end of this chapter for more information). For this client, our approach was to examine the social media content that was being created and analyze the sentiment around that content in a robust way. We were then able to help this client optimize its content creation so it could develop content that is even more effective.

Specifically, we found that its messaging on Twitter was not well matched with its Twitter audience. Its tweets focused on fear, such as “what happens to your children if you are killed in an accident?” Unfortunately that kind of message does not work well on Twitter. Our insights helped the company to change its content so that it was much more uplifting, focusing on success stories (such as “John was able to go to college and start his own business because his father had life insurance when he passed away”) rather than worst-case scenarios (see Fig. 14.2).

Fig. 14.2 KRC Research’s analytics show topics and types of Twitter posts that lift and drop the engagement rate.

At KRC Research we often say the only bad mistake is one that you keep repeating. Our work with this life insurance organization is a strong example of how important it is for marketers to learn from their mistakes and to optimize content so that it works better for the company and its audience. We should never take anything for granted. It is crucial for companies to experiment, admit when they are wrong, and change strategy and execution accordingly.

3.3 Use Scientific Sampling Rather Than Reviewing Every Piece of Content

We believe that the current “census philosophy” that advises companies to analyze every piece of content is just plain wrong. This census mentality posits that everything every person tweets or posts is important and must be analyzed; however, this is not the wisest path forward—actually far from it.

Consider that the US Constitution stipulates that the country must count the number of people living in the United States every 10 years. Yet there are tens of millions of people whom the US Census is unable to reach. To compensate, the Census Bureau conducts numerous studies and sample surveys between decennial censuses to gain a better understanding of trends and patterns. The point is that the US Census does not, and cannot, analyze every household. Unfortunately this philosophy is not shared by many clients. At a recent Super Bowl half-time show, a celebrity performer tweeted something that contained a brand message for a popular beverage. Some 2 million people commented on her tweet. Much of the conversation among people working on the campaign for this beverage focused on how it was crucial to examine all 2 million tweets.

The problem is that to analyze that much content you have to use computers, and this can lead to coding errors and nonactionable analysis. Why? Computers simply cannot think. Even the best software has significant limitations for developing marketing strategy. Increasingly with social media campaigns it is not sufficient to conduct analysis that lacks actionability. That is not what clients, account teams at agencies, or campaigns need. Rather, what they need is a clear strategy for how to drive engagement forward. And no computer, no dashboard, and no software package can outline that.

This is not to say that computers have no use; far from it. Computers do a great job of scanning millions of pieces of data and reporting, within a few minutes, how many tweets relate to certain mentions. We use computers all the time to look at all the tweets or posts related to a hashtag, and a computer can provide us with a random sample of 1000 pieces of social media content in a matter of minutes. This is valuable because it would take an enormous amount of time for us to collect all the content and draw the random sample ourselves.
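The sampling step described here can be sketched in a few lines of Python. The post list below is a hypothetical stand-in for content exported from a monitoring tool; in practice each item would be a full post record rather than a line of text:

```python
import random

# Hypothetical stand-in for content pulled from a monitoring tool.
all_posts = [f"post #{i} mentioning #brandhashtag" for i in range(250_000)]

SAMPLE_SIZE = 1000
rng = random.Random(42)        # fixed seed so the draw is reproducible
sample = rng.sample(all_posts, SAMPLE_SIZE)  # without replacement

print(len(sample))             # 1000 posts, ready for hand coding
```

Because `random.sample` draws without replacement, no post is coded twice, and a fixed seed makes the draw auditable if the analysis needs to be rerun.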

Computers are also valuable in creating statistical models. Once each piece of content has been coded by hand, the data are fed into a computer program that produces the statistically generated model. It is a human, however, who prepares the data fed into the computer, and when the computer produces the statistical model, a person interprets it and makes sense of it. That is a different approach from having someone hit a few keys to produce, in under 15 seconds, a graph purporting to show “analysis.”

In no uncertain terms, the only way to help clients drive engagement is with people who understand the business and communications needs and objectives; who can determine whether brand content is meeting those needs, and whether one tactic or strategy is more engaging than another. When people, rather than computers, look at a sampling of the content, they can detect errors that a computer would likely overlook, and their analytical edge far surpasses that of any software program.

Similarly, humans need to play an active role in model development, specifically in the exploratory data analysis and identification of data structures and, more importantly, in the model generation and validation phases. Just as humans code content by hand, they are responsible for establishing the parameters, modeling inputs, and optimization of model fit based on their analysis of fit statistics, prediction errors, and other factors. Humans are also needed to determine the appropriate modeling technique for the data available and the desired outcome.

Additionally, computers can only pull out a basic aggregation of data—basic sentiment such as negative, neutral, or positive and how many likes a topic receives. If you ask a computer to analyze comments that appear on Twitter about, for example, a news item, the computer will tell you what percentage of comments are negative, neutral, or positive. Yet knowing the percentages of positive, negative, and neutral comments is not very helpful to a company that is trying to develop or improve on a campaign, nor does it help us determine how to change the conversation.

We have found that in the vast majority of cases, regardless of what topic or product the comments or tweets are about, the computer analysis reaches the same conclusion: approximately the same percentages are ascribed to positive, neutral, and negative. It is rare to have deviations, even though we know that the conversation about a serious news topic cannot be the same as, say, the conversation about a food product. This indicates that there are serious flaws with the computer analysis.

Thus a census, when all is said and done, is a superficial analysis. We know that automated coding of sentiment, as well as automating counting of themes, can be inaccurate. Yet to understand what types of social media content are delivering engagement, it is critical to be accurate about both sentiment and themes. The only way to be sure of accuracy is through human coding. At the same time, it is impossible and impractical for humans to code the vast quantity of social media content generated about and by most brands. This is where probability theory comes in. Probability theory tells us that random samples have an enormously high probability of mirroring the population they are drawn from; in other words, if you have a statistically valid sample that is balanced across gender, race, age, and geography (eg, urban and rural), you can purport to know the viewpoint of all Americans on the basis of your sample.

We believe this same approach should apply to social media analysis—a census is not needed; real sampling is. By applying probability theory to social media analysis, we can make it possible to conduct the kind of analysis required to understand how to continually improve the performance of social media content.
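The statistical claim behind this approach can be made concrete with the standard margin-of-error formula for a sample proportion (a textbook formula, not a proprietary KRC Research tool):

```python
import math

# 95% margin of error for a sample proportion: z * sqrt(p*(1-p)/n).
# Note that only the sample size n matters: the margin is essentially
# the same whether the population is 10,000 posts or 10 million,
# which is why a full census is unnecessary.
def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1.0 - p) / n)

# Worst-case (p = 0.5) margins for a few sample sizes:
for n in (100, 500, 1000, 5000):
    print(f"n={n:>5}: +/- {margin_of_error(0.5, n) * 100:.1f} points")
```

At n = 1000, the sample share of any coded attribute sits within about 3 percentage points of the population share 95% of the time, which is ample precision for comparing content strategies.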

One final note about sampling is that it is important to consider whether you are analyzing audience posts (high volume) or owned social posts (low volume). Sampling makes sense with social listening—campaign mentions that are driven by the marketing messages—given the vast volume of content; however, sampling of owned social media comments is not relevant or needed because the volume of content is low and, therefore, sampling could cause you to miss important nuances.

3.4 Build Predictive Models

Using sophisticated analytical and statistical tools, we build predictive models for clients. A predictive model is a tool that allows us to look at a lot of data about past behavior and then, to a great extent, make some predictions about future behavior.

For example, if a food manufacturer keeps producing recipe content on its Facebook page and that content has received a modest level of engagement from consumers, we can presume this level of engagement will continue unless there is a dramatic change. In the case of one food manufacturer, we were able to predict how many people would continue to share information related to a specific hashtag, and that communications about the health benefits of a specific food would yield a smaller number of shares.
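The baseline expectation described here, past engagement carried forward absent a dramatic change, can be sketched as a simple moving-average forecast. The monthly share counts below are hypothetical, not client data:

```python
from statistics import mean

# Hypothetical monthly share counts for a food brand's recipe posts.
monthly_shares = [310, 295, 330, 305, 320, 315, 300, 325, 310, 318, 322, 308]

def naive_forecast(history, window=3):
    """Predict the next period as the mean of the last `window` periods."""
    return mean(history[-window:])

# Baseline expectation for next month, absent a dramatic change.
print(round(naive_forecast(monthly_shares)))  # -> 316
```

A real predictive model would of course include many coded content variables, but the principle is the same: stable past behavior anchors the prediction, and deviations from the baseline are what signal that a campaign change is working or failing.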

Our point of view is that you have to be clear with clients at the beginning of campaigns, and predictive models allow us to do that, provided we have the relevant historical data to support the model. For new campaigns and digital properties, there is a lag time for model development because we must allow the campaign to mature and produce the 12–18 months’ worth of data needed for modeling. Once we have the data, predictive models help clients understand what does and does not work so that they and their agency partners can build or revise their campaigns on the basis of the model’s results. We can also use predictive models to analyze campaigns that are not achieving a good return on investment and redirect them in a more effective direction.

4 KRC Research’s Sentiment and Analytics Approach

Our sentiment analysis approach begins with a random sampling of comments on social media. We conduct a strategic examination of a client’s existing social media content so we can identify trends and patterns. We then create a custom social media codebook or coding framework, based on the core objectives of the client’s campaign, to capture variables that we identify as most essential on which to evaluate content.

A team of KRC Research researchers then codes, or categorizes, by hand the thousands of hashtags, tweets, posts, and comments randomly selected. Each piece of content is placed into categories on the basis of the theme, sentiment, and other components (eg, photos, links) within the content. For a life insurance client we looked at each tweet and determined if the tone was uplifting or sad, whether life insurance was mentioned, what the topic was (eg, bereavement), and whether video or photo links were present (eg, this tweet went out on Thursday morning; it was about getting one’s financial house in order and included an infographic with a rhetorical question; it had a serious tone). By coding individual pieces of content across dozens of categories, one can create a robust dataset that can then be analyzed quantitatively.
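One way to picture the resulting dataset is a record per post, with one field per codebook variable. The sketch below is illustrative only; the variable names are assumptions for the sake of the example, not KRC Research’s actual coding framework:

```python
from dataclasses import dataclass, asdict

# Hypothetical codebook record; field names are illustrative.
@dataclass
class CodedPost:
    channel: str        # "twitter" or "linkedin"
    day: str            # day of the week the post went out
    daypart: str        # "morning", "afternoon", "evening"
    tone: str           # eg "uplifting", "serious", "sad"
    topic: str          # eg "financial planning", "bereavement"
    has_photo: bool
    has_video: bool
    engagements: int    # outcome measure: likes + shares + replies

# The worked example from the text, expressed as one coded row
# (the engagement count is invented for illustration):
row = CodedPost(
    channel="twitter", day="Thursday", daypart="morning",
    tone="serious", topic="financial planning",
    has_photo=True, has_video=False, engagements=42,
)
print(asdict(row))
```

Each hand-coded post becomes one such row, and the full set of rows is the quantitative dataset the modeling step consumes.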

We utilize the information obtained from our hand coding to determine the most effective content and sentiment among target audiences. Once we have manually coded the relevant content, we develop a statistical predictive model for each channel to understand the relationship between each variable and outcome measure(s). A statistical model focuses not on the highest number of tweets or posts but on patterns, themes, and trends. We can tell a client, for example, that putting out a tweet on a Tuesday afternoon results in more retweets than putting that same tweet out on a Saturday, or we can say that a video is retweeted more than a photo.
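A full statistical model is beyond a short sketch, but the core idea, relating a coded variable to an outcome measure, can be illustrated with a simple engagement-lift calculation over hand-coded rows (hypothetical data, not client results):

```python
from statistics import mean

# Hypothetical hand-coded rows: (has_photo, engagements per 1000 followers)
coded = [
    (True, 12.0), (True, 9.5), (True, 14.2), (True, 11.1),
    (False, 4.0), (False, 6.5), (False, 5.2), (False, 3.9),
]

def lift(rows, flag_index=0, outcome_index=1):
    """Average outcome with vs. without an attribute, as a ratio."""
    with_attr = mean(r[outcome_index] for r in rows if r[flag_index])
    without = mean(r[outcome_index] for r in rows if not r[flag_index])
    return with_attr / without

# eg posts with a photo earn roughly 2.4x the engagement of text-only posts
print(f"photo lift: {lift(coded):.2f}x")
```

A production model would estimate such effects for all coded variables at once, controlling for each other, but this pairwise lift is the intuition behind statements like “a video is retweeted more than a photo.”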

The statistical model is essentially a tool that lets us look at trends across the entire exercise. It allows us to understand the relationships between various factors and to what extent one factor versus another has more impact on the audience. We can then accurately predict what types of content will be most impactful. See Fig. 14.3 for an outline of our approach.

Fig. 14.3 KRC Research’s step-by-step proprietary process to enable clients to engage better with target audiences.

As a result of our approach, clients are able to produce strategic, media-ready content, including potential headlines, social media content, and engagements; visual storytelling elements, including infographics that they can integrate into their current campaigns; meaningful subgroup analysis that gives key insights into the behavior of their target audiences; and strategic recommendations for communications that build on existing consumer sentiment.

5 Case Study

The following example shows how KRC Research has helped one of its clients optimize its communications through actionable insights and guidance on the basis of our sentiment analysis method. This real-life example illustrates how this work can be applied to drive thought leadership, campaign evaluation and planning, and, ultimately, engagement.

5.1 Life Insurance Organization

A life insurance organization commissioned KRC Research to conduct an in-depth review and statistical analysis of the content on its traditional channels, such as email and webinars, and on its Twitter and LinkedIn channels. The organization wanted to gain a deeper understanding of its communications channels overall and to answer open questions about each of them. KRC Research also analyzed its Twitter followers by demographics, including age, gender, location, occupation, relationship status, and social interests, to build a better profile of the organization’s Twitter followers.

KRC Research began with a high-level overview of the current content. Each piece of content posted on the organization’s Twitter page, LinkedIn page, and LinkedIn group in a 1-year period was individually coded on the basis of 20 different custom variables, including the intended audience, keywords, type of content, and tone. See Fig. 14.4 for an example.

Fig. 14.4 KRC Research’s coding method in which custom variables in social media posts (such as the presence of a photo or video) are categorized.

Having manually coded more than 1500 pieces of content, KRC Research created a predictive model that identified key themes, tones, and words within and across channels that were driving engagement with that brand.

The key output of this effort was a strategic game plan for how the organization could most effectively use and target resources toward industry and consumer communications in the next 12 months. Ultimately, this analysis provided this client with actionable insights on which elements, such as topic, tone, and content type, most effectively drive engagement so as to optimize the organization’s outreach strategy on its social channels. Examples of our key findings include:

• Content designed to educate its followers, both consumers and industry, resonated strongly.

• There are some clear differences in the types of topics and formats that best resonate on each channel.

• Compared with text-only posts, images greatly increase engagement on both Twitter and LinkedIn.

6 Conclusion

It is now commonplace for companies to leverage data to help them make business decisions. When we talk about data, it is not simply a matter of generating more data; rather, it is a matter of leveraging the right kinds of tools to address the needs of brands and companies. At KRC Research, we believe there is value in tools such as automated dashboards. When companies need to know whether they are part of the social media conversation, dashboards have enormous value; at the same time, dashboards and similar tools cannot tell companies how to engage people.

The pendulum has swung to a pivotal point where marketers are realizing that software alone is not sufficient to explain the “why” behind social media sentiment or to provide insights into the target audience. Brands need human intelligence, along with the more traditional analytical tools, to develop a solid strategy for influencing the social and digital conversation. Machine learning and algorithms are simply too fallible for us to rely on them exclusively. KRC Research’s dynamic approach generates meaningful, nuanced findings that inform strategic marketing and allow brands and companies to be heard above the glut of social content.


* Throughout this chapter, we have presented case studies to illustrate our social and digital media content analytics method. While we cannot give the names of the organizations with which we have worked, we think these case studies are instructive lessons that apply broadly beyond their respective categories. Thus you will notice that information has been redacted in some of the figures.
