Chapter 4

Operationalizing Strategy

Who asks whether the enemy was defeated by strategy or valor?

—Virgil

Introduction

How does theory become a strategy? We explore the links between mapping the factors that influence a market and creating action plans to leverage those relationships. After reading this chapter, you should be able to:

Create operational definitions of the concepts you’ve identified in a theory

Develop measures based on those operational definitions

Design decision rules that trigger marketing actions

Begin to assess the effectiveness of your measures

By now, you already understand, at least conceptually, that Big Data by itself isn’t strategic. In fact, creating rule engines around data, such as Call customer after three visits to the website, isn’t strategy either. Such engines are support systems, a key part of making strategy work1—but they’re not strategy.

But what do you do when you’ve gained new insight into your market through conceptual mapping and realized that you have data to be leveraged for greater value? This is the chapter where we take the conceptual and make it real, where we go from conceptual to operational.

Conceptual to Operational Definition

Recall from Chapter 2 that we discussed models of loyalty to understand how you build your concept map. One possible model might look like the one in Figure 4.1.


Figure 4.1. A simple model of loyalty.

Conceptually, you may think of customer satisfaction as “the degree to which expectations are met.” This definition is one of a handful of commonly used definitions. For loyalty, how about “the degree to which a customer prefers to use a particular product,” or “the proportion of purchases made of the same product”? These two definitions of loyalty are conceptually miles apart. One considers how the buyer feels about the product whereas the other simply considers purchasing behavior. As we discussed in Chapter 2, the first reflects attitudinal loyalty whereas the second is behavioral loyalty. Making the distinction is important because you’ll do different things to achieve one versus the other.
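To see how far apart these definitions sit in practice, consider how the behavioral definition might be operationalized directly from transaction data. The sketch below uses hypothetical purchase data and brand names; the attitudinal definition, by contrast, would require survey responses rather than a transaction log.

```python
# Behavioral loyalty operationalized as "the proportion of purchases
# made of the same product." The purchase history below is hypothetical.

def behavioral_loyalty(purchases, brand):
    """Share of the customer's category purchases that went to one brand."""
    return purchases.count(brand) / len(purchases)

history = ["OurBrand", "Competitor", "OurBrand", "OurBrand", "Competitor"]
print(behavioral_loyalty(history, "OurBrand"))  # 3 of 5 purchases -> 0.6
```

Note that nothing in this measure tells you how the customer feels; a customer could score 1.0 simply because no competitor is convenient.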

For example, you may consider attitudinal loyalty more important because having people like your brand is necessary to get positive word of mouth—most students certainly think attitudinal is most important. But without purchasing, attitudinal loyalty may not be worth much. And in some market segments, behavioral may be all you can get. One participant in our study on data cultures identified a particular segment where speed of service was most important.2 There was no “liking” of any brand; it was all about speed. Get speed to serve right and you had their business—it was pure behavioral loyalty of the best kind.

Distinguishing between behavioral and attitudinal loyalty, though, doesn’t mark the end of the definitional challenge. What did speed to service mean? Did it mean how long they had to wait to get served, how long it took to be served once they talked to a clerk, or how long it took from the point they drove into the parking lot to the point when they left? Or did they really know how to measure it and was their perception something you could influence?

The distinction between these measures isn’t unimportant. You’ve got an entire market segment that wants speed—but if you want to give them what they want, you’ve got to understand what they mean when they say “fast.”

If you want to change loyalty, you have to be able to measure it so you can tell if it changed. In this instance, simply observing frequency of purchase would be one way to measure change. Further, your measure has to reflect your conceptual definition. If you want to influence attitudinal loyalty, your measure isn’t going to be frequency of purchase. Yes, you want more sales but that should be an outcome of attitudinal loyalty. Rather your measure may be something like how likely they are to refer you to someone or how they answer a set of questions around their preference for your brand. But is preference the same as loyalty?

To make matters more difficult, your salespeople will talk about customers who are loyal because they always buy something, but at the same time these same customers are buying twice as much from a competitor! Or, marketing thinks these customers love us because they’re Platinum cardholders, without realizing that these same buyers are Platinum for three other vendors.

Getting the definition right matters because the definition alters the rules by which your marketing automation engines and sales strategies work. Getting the definition right matters because people in your organization are making decisions based on that definition. When you’ve got Big Data flying at you at increasing rates, you have to have good operational definitions because these reduce the data into manageable form.

Operational Definition

Yes, you have a conceptual definition but to work with Big Data, you also need operational definitions. You might know that your marketing is supposed to create a qualified lead and conceptually, you know that a lead is someone ready to receive a sales call, but specifically, what makes a lead qualified? How will you put that into practice?

An operational definition is how you measure the concept, and, in this instance, set a standard to determine the label. We all operationalize concepts like quality in our everyday life. Quality is a conceptual term. For example, think about how you try to figure out which product has higher quality when you’re shopping and you don’t have any experience with that product to guide your evaluation. When you look at a product that costs $10 and another that costs $15, you assume the higher priced product is of higher quality if you have no other basis for that judgment.

As marketers, we also have to turn concepts like perceived quality into concrete measures, actions, and standards. For example, let’s go back to our need to create a qualified lead.

Conceptually, a qualified lead is someone who is ready to receive a sales call; that is, relative to others drawn from the same pool, this person or this business is more likely to buy. That pool could be a list from the phone book, attendees at a trade show, or licensed dog owners, depending on what you are selling. Once the individual has shown some level of interest, you engage in certain marketing activities to get that individual to the point where she or he is ready to engage with a salesperson. What makes a lead a qualified lead will depend on what sales wants or what has been shown to lead to sales but you’ll need that definition.

Some examples of operational definitions of qualified leads:

1. Business has X number of employees

2. Browsed the website X number of times

3. Downloaded a white paper

4. Visited the trade show booth and rated a qualified lead by the booth staff

As you can see, none of these are very far down the path of purchasing, but then again, they’re just leads. Which one is the best definition? Clearly, the first one is pretty weak. We have no indication of interest at all. Perhaps the last has shown the greatest level of interest, but the real answer has to come from your sales staff. If you are trying to generate qualified leads, what separates qualified from unqualified is whether they are ready to receive a sales call, and that varies from market to market.

Further, get it wrong and it won’t take long before salespeople quit trusting any leads. If you give them a list based only on the number of website visits, for example, it won’t take a day before they realize these are people who, for the most part, are not ready to receive a sales call.

So what is the behavior or marker that indicates that they are ready? Once you find that out, you have an operational definition of a qualified lead and you can begin to direct marketing activities to move unqualified leads to qualified leads.

The operational definition also describes how the concept will be measured. If a qualified lead is to meet a specified level of certain characteristics, then those characteristics define the measure.

Since few companies engage in only one marketing activity (e.g., trade shows), lead scoring models are developed that weight the relative readiness of a potentially qualified lead to receive a sales call. Look back at the list. We could assign a point value to each item: say 2 points for #1 and 5 points for #4. When the total value of points reaches 10, we then have a qualified lead. Some lead scoring models are far more sophisticated, but most are not.
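A lead scoring model of this kind reduces to a weighted checklist. In the sketch below, the 2 points for item #1, the 5 points for item #4, and the threshold of 10 come from the example above; the other two weights and all of the field names are hypothetical assumptions, not a real platform’s scoring scheme.

```python
# A minimal lead scoring sketch. Points for #1 (2) and #4 (5) and the
# qualification threshold (10) come from the text; the other weights
# and the activity names are hypothetical.
WEIGHTS = {
    "meets_employee_count": 2,    # 1. Business has X number of employees
    "browsed_website": 3,         # 2. Browsed the website X number of times
    "downloaded_white_paper": 4,  # 3. Downloaded a white paper
    "booth_visit_qualified": 5,   # 4. Rated a qualified lead by booth staff
}
THRESHOLD = 10

def score_lead(activities):
    """Sum the point values of the activities this lead has completed."""
    return sum(WEIGHTS.get(a, 0) for a in activities)

def is_qualified(activities):
    """Operational definition: a qualified lead scores at or above 10."""
    return score_lead(activities) >= THRESHOLD

print(is_qualified(["browsed_website", "downloaded_white_paper"]))  # 7 points: False
print(is_qualified(["meets_employee_count", "downloaded_white_paper",
                    "booth_visit_qualified"]))                      # 11 points: True
```

Try changing the weights or the threshold: the decision rule stays the same shape while the operational definition shifts, which is exactly the lever sales and marketing should negotiate over.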

Some well-known variables that are really operational definitions of concepts include:

Net promoter score3 as a way to operationalize customer satisfaction.

Loyalty program level (e.g., Silver, Gold, Platinum) as a way to operationalize customer value.

Gartner’s Magic Quadrant® score of technology providers and US News & World Report rankings of universities as ways to operationalize overall quality, at least as they define quality.

One way to tame Big Data, then, is through good operational definitions. Good operational definitions serve the purpose of reducing data down to important judgments. You may have all types of data as part of your lead scoring model for determining lead qualification but once you create the definition, all of those observations and variables become one variable with only two values: lead or not a lead. And that’s far more manageable.

From Strategy to Action

Of course, strategy isn’t about defining some nebulous concept but rather about doing things that accomplish objectives. Strategy is about making choices, selecting from alternatives the route you’ll take. But which alternatives, what actions? If you want to create loyalty, what actions should you take? This set of decisions is where your conception of which factors influence the outcomes you are hoping for becomes so important.

For example, if you want to create habits (behavioral loyalty), as Target was trying to do through its promotions aimed at expectant mothers, the habits are the objectives, and we know from research how many times actions have to be taken over a period of time to become a habit. That understanding then informs our action plans, if we’re Target, regarding how often we present an offer to that expectant mother.

Since strategy is about making choices, let’s take a look at decision making in organizations. In particular, we’ll start with some of the barriers that can make effective decision making less likely.

Barriers to Effective Use of Big Data

Many current management practices and, in some instances, the ethos of today’s management culture actually make effective decision making with Big Data more difficult. The problem isn’t that decisions are left unmade; it’s just that the best decision may get missed.

A Bias for Action

The quote at the start of this chapter may be old, even ancient, but it still reflects the bias for action that can stymie real progress. To be a leader of action is highly desirable, it seems. Too often, though, a bias for action is simply a euphemism for being too lazy to do the necessary due diligence, and “fail fast” an excuse for mistakes.

Yet, one of the characteristics of Big Data is velocity. The velocity at which data comes at us is increasing, as is the rate at which we have to make decisions. Thus, the greater velocity of data creates yet another trap by making a bias for action seem even more important than ever.

How does a bias for action hurt decision making? One way is a failure to consider all of the alternatives. In decision-making research, this is called commitment—the tendency to commit to a course of action too early.

An alternative, though, is to set up data systems so that waiting for data isn’t long or expensive. The velocity of Big Data can, with the right decision support system, improve decision making speed. I’m not advocating that waiting for all of the data is always better; rather, I’m saying that Big Data has the power to improve the chances of a bias for action succeeding, if the right systems are in place. We’ll talk more about some of those systems when we talk about application in Chapters 5 and 6. For now, although you may not have time to stop and smell the roses, at least stop and force yourself to brainstorm a list of alternatives before you select a course of action. Otherwise, you are likely to settle on a course of action and then use the data to support the decision you made, rather than using the data to make a decision.

Further, research shows that if you can sleep on a decision just one night, the chances are much higher that you’ll make a better decision.4 Just one night! So if you have a big decision to make, sleep on it.

Numbers Myopia

A second way that decision making is hurt is in how the numbers get interpreted or presented. The classic research in this area asked people questions such as, “If you had a treatment for an epidemic but half the people you used it on would die, would you use it?” Most would say no. “If you had a treatment for an epidemic that saved the lives of half of the people you used it on, would you use it?” Most would say yes. In both cases, half die, half live; the only difference was whether the choice was framed as a gain or as a loss.

Similarly, I teach a case where the reader is told that 30% of first-time visitors to a horseracing track don’t come back. Students (whether execs or undergrads) always assume that to be a bad situation, but flip it. If you could count on 70% of your first-timers to come back, you would be pretty happy. Further, looking at the two alternatives—that is, either first-timers are not coming back or first-timers are coming back—the follow-up questions change completely and the alternatives for action are very different.

We are primed, as human beings, to avoid loss.5 Study after study shows how we try to avoid loss, pain, or suffering, and that we prefer avoiding pain to realizing gain. That fact of human nature has wide-reaching consequences. But how numbers are presented to us can cause us to focus on the negative, allowing our natural tendency to avoid loss to overwhelm our decision making, causing us to lose sight of what we can gain.

So think for a minute about what can happen when a bias for action combines with numbers myopia. What happens when you are prone to acting quickly but misread the truth that is behind the numbers? Discounts get offered too quickly, product lines get cancelled, and other ways to miss opportunities are realized. Before you act, make it a common practice to flip numbers (to turn that 30% leaving rate into a 70% staying rate), then see if you still find a course of action desirable. And again, sleep on it!

Data Trust

Another limiting factor is a lack of trust in data. This lack of trust takes two forms: the hero ethos and the not-invented-here (NIH) syndrome.

The hero ethos is a trap leaders can fall into when they take themselves a bit too seriously and believe they can, or need to, do it all. They trust their gut because their gut hasn’t let them down and even if it has, they blame that failure on some other factor. After all, they are the Chief Executive Officer (CEO), or the Chief Marketing Officer (CMO)! Actual CEOs and CMOs are less likely to fall into this trap than are CEO-wannabes but even so, their bias for action coupled with their ego makes it difficult for someone with numbers to rely on the numbers to sell an idea.

NIH is just the not-invented-here syndrome applied to data: if it’s not my data, I can’t trust it. The decision maker with NIH can’t trust anyone else’s data. Without confidence in how the data were gathered, what exactly was asked, and how the data were analyzed, this decision maker remains skeptical and frozen into inaction.

In a study published by Pitney Bowes, some 9% of data scientists said their biggest challenge in unlocking the value of Big Data is that their upper management team doesn’t see the value in Big Data.6 Perhaps one of the most famous data skeptics is the CEO of Starbucks. “Howard [Schultz] doesn’t care about data. He has absolutely no head for data,” said Joe LaCugna, director of analytics and business intelligence at Starbucks during a session at the Big Data Retail Forum in Chicago.7 (I wonder if LaCugna got called in to explain that remark.) But add to the 9% of upper management who fail to trust data another 11% with a culture of not trusting data, and you can see the opportunity is there to beat out at least one-fifth of your competitors by simply using data effectively.

A Case Study: Random House

A very simple example of Dynamic Customer Strategy (DCS) involves the book, Defending Jacob, published by Random House.8 This suspense novel by William Landay tells the story of a district attorney defending his son in a murder case. When Random House launched the book, they ran a limited ad campaign designed to reach a certain demographic, mostly men because men are the typical thriller audience. As they listened to readers’ comments in social media and online focus groups, however, they realized that the book resonated strongly with a different group, and for a different reason, than they had planned. Moms and other readers focused more on the parenting relationships in the story, a very different audience than the masculine one that reaches for traditional thrillers. Random House then changed their ad copy and placement strategy to reach more moms.

At its simplest, this example could be viewed as just a miscalculation on the part of the publisher. Conceptually, they knew that thrillers are read by more men and parenting stories by more women, and those two relationships did not change. But that’s the point—DCS doesn’t necessarily mean that you change your conceptual map of how your market works. Yes, DCS can mean that you learn that your conceptual map is wrong, but more often than not, the challenge lies in operationalizing well; in this case, recognizing what kind of story they had to work with, then using their pre-existing conceptual map to re-target their approach.

The second important aspect of this case study is how they implemented a fail fast, fail cheap approach to learning. Applying a limited marketing budget coupled with online focus groups enabled them to get the book out and see how the book fared before making a full investment. We hear about Defending Jacob because the alteration in strategy was ultimately successful; we don’t hear about the books that completely failed because many don’t get a full launch if they can’t make it out of the test market.

Creating Decision Rules

After creating operational definitions of the concepts and determining what actions you want to take to reach your objectives, the next step is putting it into action.

One aspect of Big Data that many are overlooking, however, is that Big Data gives us the ability to implement the type of personalized marketing that CRM has been promising for almost two decades. Essentially, what gets created is a network of If/Then rules, decision rules that allow your marketing technology or sales/service staff to determine what action to take when.

If my lead score is a 5, then have a call center agent call; if below 5, then send the next email in the campaign. If the lead downloads a white paper, send Email #3 two days later. These are simple decision rules but remember that a lead score can actually be a composite measure of whether the lead browsed the website, visited the booth at a trade show, or any combination of activities. The decision rule may appear simple but the operational definitions behind it may be complex.
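Rules like these can be expressed as a small ordered table of condition/action pairs. The sketch below encodes the lead-score and white-paper rules from the paragraph above; the action names are hypothetical placeholders for whatever your marketing platform actually triggers, not a real system’s API.

```python
# A minimal decision-rule sketch: ordered (condition, action) pairs,
# checked top to bottom, first match wins. The action names are
# hypothetical placeholders.
RULES = [
    (lambda lead: lead.get("score", 0) >= 5,          "route_to_call_center"),
    (lambda lead: lead.get("downloaded_white_paper"),  "send_email_3_in_two_days"),
    (lambda lead: True,                                "send_next_campaign_email"),
]

def next_action(lead):
    """Return the action of the first rule this lead satisfies."""
    for condition, action in RULES:
        if condition(lead):
            return action

print(next_action({"score": 6}))                                  # route_to_call_center
print(next_action({"score": 3, "downloaded_white_paper": True}))  # send_email_3_in_two_days
print(next_action({"score": 2}))                                  # send_next_campaign_email
```

The rule table looks trivial, but recall that `score` may itself be a composite of many variables, so the operational definitions behind a simple-looking rule can be complex.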

Decision rules can play another role, too. When experimenting, hypotheses actually should be thought of as decision rules.

Typically, in an A/B test (where A is tested against B), the hypothesis is something like this: I think A might work better than B. So one is run against the other and whichever performs better is the one used.

Simple hypotheses like that get much more difficult to test, though, when you are testing several at a time. Using advanced forms of experimental design, you can literally test A through Z simultaneously. As you can imagine, determining whether F is better than S can be pretty tricky with so many.
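For a single A/B comparison, the usual decision rule is a two-proportion z-test: run both versions, then test whether the difference in conversion rates is large enough to act on. The sketch below is self-contained (it computes the normal CDF with the standard library) and uses hypothetical conversion counts; nothing here comes from a specific testing platform.

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return z, 2 * (1 - normal_cdf(abs(z)))

# Hypothetical data: A converts 100 of 1,000; B converts 130 of 1,000.
z, p = ab_test(100, 1000, 130, 1000)
print(round(z, 2), round(p, 4))  # B looks better, and p falls below .05
if p < 0.05:
    print("Decision rule: roll out B")
```

Testing many variants at once (the A-through-Z case) requires proper experimental designs and corrections for multiple comparisons, which is where a statistician earns their keep.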

Further, if you remember Type I and Type II error from your statistics class, you know that Type I is the error made when you decide that there’s a difference but there really isn’t, and Type II is when you fail to recognize a difference when one exists. Like most students, you probably just thought of these errors as statistical things to be forgotten after the tests.

The reality is that these are forms of decision risk that you can influence. Think about it from this standpoint: a decision maker makes a Type I error when he decides to invest in the new marketing strategy when in fact the old marketing strategy was just as good. Type I error is equivalent, in that example, to investment risk because the decision maker is making an investment without gain.

Alternatively, Type II error is the same as opportunity risk. In this instance, Plan B is better, but the decision maker fails to recognize it and fails to capitalize on the opportunity.

If you recall from your stats class, we usually decide that there’s a difference between A and B if p is less than .05. That decision rule assumes that investment risk is more important. Your statistician can actually alter decision rules to control the amount of statistical risk you are taking in both investment and opportunity risk. The thing for you to remember is that by lowering the p-value threshold, you are reducing investment risk but you are not directly affecting opportunity risk. Opportunity risk is governed by a different statistic (statistical power), and your statistician can help you with that, so talk over Type I and Type II error if you are dealing with a really big decision.
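The tradeoff between the two risks can be made concrete. The sketch below approximates statistical power (one minus Type II error) for a two-sided two-proportion test, using hardcoded critical z values for the two common thresholds; the conversion rates and sample size are hypothetical, and a statistician would use a proper power routine rather than this approximation.

```python
import math

Z_CRIT = {0.05: 1.960, 0.01: 2.576}  # two-sided critical z values

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(p_a, p_b, n_per_arm, alpha):
    """Approximate power of a two-sided two-proportion z-test.
    Type II error (opportunity risk) is 1 - power."""
    se = math.sqrt(p_a * (1 - p_a) / n_per_arm + p_b * (1 - p_b) / n_per_arm)
    z = Z_CRIT[alpha]
    shift = abs(p_b - p_a) / se
    return (1 - normal_cdf(z - shift)) + normal_cdf(-z - shift)

# Hypothetical: true rates 10% vs. 12%, 2,000 customers per arm.
loose = power(0.10, 0.12, 2000, alpha=0.05)
strict = power(0.10, 0.12, 2000, alpha=0.01)
print(round(loose, 2), round(strict, 2))
# Tightening alpha from .05 to .01 cuts investment risk (Type I) but
# raises opportunity risk (Type II), because power drops.
```

Running the numbers shows the point of the paragraph: the stricter threshold buys less investment risk at the cost of missing more real improvements.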

Summary

Operational definitions take the concepts and make them real. An operational definition can be a standard (such as a lead is someone who scores above X on a series of measures) or a measure (the degree of loyalty is their Recency, Frequency, and Monetary score). Once we operationalize the concepts, we can begin to create the actions that will influence those factors that will move customers and markets toward our objectives. But a bias for action and numbers myopia can inhibit our ability to use Big Data. To use data more effectively, we can create decision rules that consider both investment risk and opportunity risk.

Discussion Questions

1. Explain the difference between an operational definition and a conceptual definition as if you were talking with an executive. Give an example of an operational definition and the concept it represents from your favorite television commercial or brand.

2. How do you know how much to favor action over careful thinking and planning? In other words, how do you know the optimal balance between acting quickly or thinking through? What are the implications for the DCS strategist? List three characteristics of a decision that might lead you to either act quicker or invest more in gathering information.

3. Of the barriers to effectively using Big Data listed in the chapter, which do you think is the most difficult for a company or manager to overcome, and why?
