CHAPTER 4
Want Budget? Build the Business Case!

As an organization evolves to operational analytics, various investments will be required. These investments include the people, tools, and technologies that must be put in place in order to implement operational analytics successfully. The process of making analytics operational will be neither cheap nor easy, but with the right discipline it can pay off. Of course, getting the agreement and approval for the investments required is no easier today than it ever was. Therefore, building a business case for operational analytics is critical.

In this chapter, we lay out concepts and frameworks to assist you in building the business case for operational analytics in your organization. Many of the concepts can be applied more broadly to investments in analytics. The good news is that if time and care are taken to develop a business case and ensure that it accounts for some of the unique aspects of analytics, you and your organization can be successful.

Setting the Priorities

Before starting on a business case for operational analytics, it is necessary to lay out what investments the business case will address and how the business case will address them. As with anything, the direction and tone of a business case can be as important as the supporting facts and figures. In this section, we discuss how to start with the right perspective to give your business case the maximum chance to succeed. A few small adjustments to common practices can make a business case far more interesting and compelling and, therefore, make it more likely to be approved.

Start with a Business Problem, Not Data or Technology

We discussed in Chapter 2 the need to identify a business problem before collecting data. Collecting data or purchasing technology without a clear plan is a losing strategy. It is also important not to build a business case for acquiring a new data source or purchasing a new tool or technology. Rather, a business case for analytics must solve real business problems that an organization faces. As luck would have it, acquiring that shiny new data source or software may be a key part of solving the identified problem. A strong business case doesn't abandon the data, tool, or technology acquisitions; it simply puts them in the right context.

The difference between a technology focus and a business focus is also the difference between justifying a cost and justifying an investment. In most organizations, it is far easier to get people interested in a business case that solves a specific set of business problems than it is to get them interested in a business case that solves a specific set of technical challenges. It is not clear to me why so many organizations sell acquiring data or technology instead of solving a problem. Let's explore two hypothetical discussions to illustrate the difference between these approaches.

In the first discussion, the vice president of information technology (IT) for a large utility walks into the executive committee meeting alone and says, “We need to collect sensor data from our smart grid infrastructure. It's going to cost several million dollars to do it. All of the business units we partner with have asked for it and are willing to partially fund it. We can cover all the costs of the data acquisition and storage with the funds the business units are offering plus a small incremental IT investment.”

In the second discussion, the VP of IT walks into the meeting alongside a VP-level business partner. Together they say, “We're going to make our existing power capacity meet demand for an additional five years, which will enable us to delay several power plant projects. We will do this by analyzing our smart grid sensor data and using the insights to incent customers to change their usage patterns, lowering peak demand levels. Of course, to acquire, store, and analyze that data will cost us several million dollars. Those costs will be more than offset by the tens of millions in savings that we've identified by postponing the new plants, in addition to the many other analytics we've identified that will be possible once we have the smart grid data.”

The first discussion is all about cost and data, is driven by IT, and is not very persuasive, even though the cost will be covered. The second discussion is driven by the business with IT support and is focused on the value of collecting new information instead of the cost. Which argument do you think your executives would find more persuasive?

Focus on Returns, Not Costs

The previous examples illustrate two approaches to requesting funding. The primary difference is that one is trying simply to justify itself as cost neutral, while the other is attempting to drive huge value. Unfortunately, many technology- and analytics-related investment pitches put outsize focus on the costs and the attempt to offset those costs. It's better to make the costs simply a part of a high-impact solution, as outlined in Table 4.1.

Table 4.1 Making a Case for Investment in Analytics

Maximize Focus On              Minimize Focus On
Business problems solved       Tools or technologies required
Benefits and returns           Costs
Differentiators                Incremental improvements

The focus on costs came about, in part, because it used to be necessary to justify technology investments in this way. Historical technology investments often involved huge up-front costs that would be spread across a wide range of uses that generated various returns over time. For example, due to the massive scale of the investment required, a mainframe would never have been justified in the 1980s by just a handful of analytics requirements. Rather, it took a large set of enterprise-level requirements in total to justify a mainframe.

Today, however, tools and technologies are often inexpensive enough that it is possible to gain entry with a moderate investment. The benefits demonstrated by the initial analytics use cases can then be leveraged to get support for further investment later. Investment in analytics no longer has to be a massive, business-altering expense for an organization. With today's flexible cost structures, it is often possible to start on a far less grand scale and to justify the investment through a straightforward cost-benefit analysis at the business unit level.

Target Differentiators, Not Incremental Improvements

Exciting new ideas usually get more attention than improvements to existing ideas. That's also true with analytics. To the extent that new data and new analytics can be utilized to solve new problems, it will be easier to get attention for a business case. Addressing new problems with new data often yields larger returns than simply tuning existing analytics processes to address existing problems. Often it is possible to outline a plan that enables short-term, incremental improvements as well as long-term differentiators at the same time. Such a situation is especially nice because it promises fast, visible progress while chasing the larger long-term benefits. That's a win on two dimensions at once.

One of the best things about the emergence of big data (Chapter 2) and Analytics 3.0 (Chapter 1) is that the possibilities for analytics are expansive and go far beyond those of a few years ago. Be sure to take this into account as you build your business plan. The exciting world of big data and operational analytics provides ample opportunity to focus on new differentiators while still adding incremental improvements to existing analytics processes. As we've discussed already, it is rare not to have multiple uses for data once it's captured. This means that even as a case is made with one or two of today's defined business issues, the future benefits that can accrue in other areas should also be mentioned, even if those other areas are still a bit ambiguous and undefined. Some like to call the process of uncovering new value “having a conversation with the data.” That conversation can lead to new ideas, insights, and value.

Let's consider an example. Would restaurants or retail stores be interested in knowing how many people walk past their entrances every day and what the profiles of those people look like? You can bet that they would, and location data generated by cell phones can tell them that. If a cellular provider is looking to justify storing detailed historical location information for operational purposes, certainly it is possible to explore alternate uses, such as providing foot traffic figures to stores and restaurants. The cellular provider can charge retailers for information about how many people are walking or driving past their doors.

By matching the location data with demographic and usage data, it is also possible to provide information on how many people fitting certain profiles pass by the door. Offering analytics of this nature can be a differentiator for the cellular provider, create a new revenue stream, and support the costs of capturing the data for the original operational purposes. Note that I am not suggesting that the cellular provider divulge any information on any individual customer. That would be a privacy issue, as we discuss in Chapter 6. Rather, the information provided to the retailers or restaurants should be aggregated. For example, 200 people walk past 124 Main Street on average every day, and 30 percent of them make more than $100,000 per year.
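A minimal sketch of this kind of privacy-preserving aggregation follows. All of the data, field names, and the minimum-count threshold are illustrative assumptions, not an actual provider's schema; the point is simply that only averages and percentages ever leave the function.

```python
from collections import defaultdict

# Hypothetical raw records: one row per device detected near a storefront.
pings = [
    {"location": "124 Main St", "day": "2014-03-01", "device": "a", "income": 120000},
    {"location": "124 Main St", "day": "2014-03-01", "device": "b", "income": 45000},
    {"location": "124 Main St", "day": "2014-03-02", "device": "c", "income": 130000},
    {"location": "124 Main St", "day": "2014-03-02", "device": "d", "income": 60000},
]

def foot_traffic_report(pings, min_count=2):
    """Aggregate individual pings into store-level figures.

    Individual device records never leave this function; min_count
    suppresses locations with too few observations to stay anonymous.
    """
    by_loc = defaultdict(list)
    for p in pings:
        by_loc[p["location"]].append(p)
    report = {}
    for loc, rows in by_loc.items():
        if len(rows) < min_count:
            continue  # too few observations to anonymize safely
        days = {r["day"] for r in rows}
        avg_daily = len(rows) / len(days)
        pct_high_income = 100 * sum(r["income"] > 100000 for r in rows) / len(rows)
        report[loc] = {"avg_daily_traffic": avg_daily,
                       "pct_income_over_100k": round(pct_high_income)}
    return report

print(foot_traffic_report(pings))
# {'124 Main St': {'avg_daily_traffic': 2.0, 'pct_income_over_100k': 50}}
```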

It will take time for an organization to get to the point of offering such services. But discussing the option helps demonstrate the bigger value that a new source of data can drive over time. This can get people more excited than the initial plans alone, which focus only on the value being targeted in the short term. If an organization can at least clear the required return threshold based on the initial short-term initiatives, the future potential analytics identified can help push approval of the investment over the goal line.

Choosing the Right Decision Criteria

When laying out a business case for operational analytics, it is necessary to decide the criteria that will be the primary drivers of the decision. In other words, what is it that must be maximized or minimized with the investment? It is necessary to define the target criteria carefully and to understand the implications of those choices. Many factors come into play when assessing the cost and benefit sides of operational analytics, and some new criteria that have not been widely used in the past will be necessary as well.

The decision criteria for analytics investment cannot be classic IT metrics like price per terabyte, price per hardware node, price per seat license, or seconds to process a specific query. All of those criteria can be examined to ensure that they aren't way out of line, but they can't be the only criteria. One of the key criteria for analytics is to look at the lift in human performance that can be achieved through an investment in one option versus another. For example, consider questions such as:

  • How much faster and more efficiently will analytics professionals be able to perform their duties, given each of the investment options?
  • How effectively can the organization build, test, and deploy new operational analytics processes with each option?
  • How easy will it be to experiment with new analytics techniques?
  • Can the environment ingest data rapidly and support rapid change?
  • Will new, and possibly expensive, skill sets be needed?

Considerations like these matter for investments in operational analytics and must be assessed for every option.

The faster that an analytics team can produce new insights for an organization and implement what is found into an operational context, the higher the returns will be. Paying a higher cost per terabyte is fine if the team will be able to produce analytics much faster than with a cheaper option. Paying more for an analytics application license is okay if the application is more user friendly and robust. It's all about getting to the results in the most efficient manner possible.

This isn't much different from how you likely make purchases at home. Many people pay extra for a computer that has more memory, or more disk space, or other specific features that are important to them. The cheapest computer may make certain activities that are important to you very difficult; therefore, the extra money for a better computer is worth it. For example, if you don't have enough disk space to store all of your videos, then a computer can't serve as your video editing and archiving platform.

Paint a Bigger Picture

Many organizations execute a targeted proof of concept (POC) as a first step. This is a great idea, but it is important not to make the limited scope of the POC the endgame. Solving a single subset of a problem probably won't get a proposal across the finish line for investment, especially when proposing large capital investments and many hours of labor. In other words, a POC might focus on one type of analysis against one set of products. If the endgame is to have the investment support multiple types of models for all products, then that must be made clear. If the returns from the limited POC scope are all that is discussed, then the figures are likely not going to be very impressive. At the same time, the value of the bigger vision may not be obvious from the limited POC either. That is why it is necessary to clearly lay the plan out.

The key is to position a pilot project or POC as but one example of what's possible, not the endgame. Also provide a list of other problems, both similar and dissimilar, that can also be addressed if the plan is approved. Make the point that while the POC didn't specifically quantify the impact possible for the other problems, it is reasonable to assume that analytics to address them will add additional value to the POC findings. If the POC itself had a solid return, having extra upside to add into the mix can only help get the green light.

I had a customer from a large media organization tell me that he had struggled to get one of his analytics initiatives approved. (I won't reveal his company to protect his confidentiality.) His team had executed several successful POCs but had never received approval to make the bigger investment required to scale them out. He suspected that the problem was that the pitches for investment had focused exactly and only on the scope of the POCs. That was the fatal flaw. Focusing only on the return of the exact analytics tested in the POC didn't provide a big enough return. Equally important, that approach didn't paint the bigger vision my customer had for the executives making the decision. Figure 4.1 illustrates the difference.


Figure 4.1 Paint a Bigger Picture

My customer decided that on his next attempt, he would point to the POC as but one illustrative example of what the investment would enable. He would make clear that the examples executed in the pilot were meant to show how the novel use of a new data source in new analytics processes would work in a few relevant scenarios. There were many other similar scenarios that couldn't be tested in the POC but logically would also be successful, given their similarity to the scenarios that had been proven to work. This approach, which some people call showing “the art of the possible,” is a much stronger one.

Time to Insight

When investing to enable the analytics discovery process, I recommend considering a criterion called “time to insight.” Time to insight looks at the time to go from a new question to finding the insight desired. This is distinctly different from the criteria required when operationalizing an insight found in a discovery process. As an insight is operationalized, traditional IT metrics, such as how fast the process generating the insight can be executed to support operational decisions, will be important.

The different needs for discovery versus operationalizing are discussed more deeply in Chapter 6. For now, just note that there is going to be a difference between a business case aimed at enabling discovery and one aimed at operationalizing discoveries. The differences are necessary because the two have vastly different goals and priorities. In addition, in today's world, it is no longer acceptable to have discovery cycles measured in weeks or months. Time to insight must be days to just a few weeks.

Time to insight includes everything from data acquisition, to data preparation, to coding time, to running the analytics process, to identifying the insights hidden within the results, as Figure 4.2 illustrates. Time to insight is literally the total time from start to finish. For example, if one option requires 60 minutes of coding, 30 minutes to execute the code, and 10 minutes to explore the results, then its time to insight is 100 minutes. If another option requires only 20 minutes of coding but 60 minutes to execute and 20 minutes to explore the results, its time to insight is also 100 minutes; the two options arrive at the same total via different paths. This means that a business case will need to account for the cost differences between the time components behind the time to insight as well. For example, extra labor time costs a lot more than extra processing time, and labor is often the largest component of time to insight.
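The arithmetic above can be sketched directly. The stage timings come from the example in the text; the per-minute rates are assumptions I've made up purely to illustrate labor costing far more than machine time (rates are in integer cents to keep the arithmetic exact).

```python
def time_to_insight(minutes_by_stage):
    """Total elapsed time from new question to found insight."""
    return sum(minutes_by_stage.values())

def stage_cost_cents(minutes_by_stage, cents_per_minute):
    """Weight each stage's minutes by what a minute of it actually costs."""
    return sum(minutes_by_stage[s] * cents_per_minute[s] for s in minutes_by_stage)

option_a = {"coding": 60, "execution": 30, "exploration": 10}
option_b = {"coding": 20, "execution": 60, "exploration": 20}
# Assumed rates: human time (coding, exploration) at $2.00/minute,
# machine time (execution) at $0.05/minute.
rates = {"coding": 200, "execution": 5, "exploration": 200}

print(time_to_insight(option_a), time_to_insight(option_b))  # 100 100
print(stage_cost_cents(option_a, rates) / 100)  # 141.5 (dollars)
print(stage_cost_cents(option_b, rates) / 100)  # 83.0 (dollars)
```

Both options tie at 100 minutes of time to insight, yet option B costs far less because it shifts minutes from expensive labor to cheap processing, which is exactly why the components behind the total matter.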


Figure 4.2 Time to Insight Components

Focusing on time to insight helps account for all the factors that impact the time to build an analytics process. Shifting from typical criteria to something like time to insight makes terrific sense when investing for discovery. After all, new insights are what drive the revenue side of the business case. Minimizing time to insight maximizes the chances of finding the insights that drive revenue. Treating investments in discovery differently isn't something that's typically done today. However, while it will take some getting used to, it must become commonplace.

A time to insight metric will impact not just cost but also employee satisfaction and motivation. Analytics professionals want to develop impactful analytics processes. The faster your analytics professionals can get to a new insight, the faster they can have an impact, and the faster they can move on to the next discovery process. A short average time to insight will keep analytics professionals happy and motivated. Nobody enjoys working in an environment where work takes longer than necessary due to inefficiencies.

Ability to Operationalize

In the prior section, we discussed how new criteria, such as time to insight, are required for a discovery investment. Let's now look at the different criteria required when investing to make processes operational. Unfortunately, when making analytics operational, it is no longer possible to evaluate analytics tools based on functionality alone. It is also necessary to take into account how well a tool will integrate with the operational environment. A tool can be highly robust in functionality, but if it cannot be easily integrated and cannot provide the level of scale and process simplicity required, then it is not going to work.

When it comes to operational analytics, milliseconds often count. In the long run, it can be better to leverage a tool that is not as user friendly as long as it can be more fully embedded into business processes to handle thousands or millions of analytics decisions every day. It is necessary to assess a tool's ability to operationalize alongside its raw functionality.

This is a different way of looking at things. Historically, organizations sought out the most user-friendly analytics tools with the most functionality. Analytics processes were executed offline in a distinct environment, so integration didn't matter much. When going operational, an organization must make sure that the integration, scalability, and performance are there too. That can absolutely lead to choosing tools that wouldn't have been chosen in the past. User friendliness is still critical for the discovery process, but the ability to be embedded and scale is more critical for operational processes. It may take more effort to build an operational process initially, but that extra effort gets amortized over millions of faster decisions over time. We discuss these different requirements more in Chapter 6.

Not focusing primarily on functionality and user friendliness isn't as unusual as it sounds. When building a single-family home, there are certain products that work just fine. They are easy to install and use and are often chosen for new houses. When it comes to a commercial property, much tougher components are utilized that may be much more expensive to acquire and install. The commercial products might also look worse and be less user friendly, but they are necessary to handle the level of usage that will occur in a commercial environment. Something as simple as a door handle has to be addressed differently. A cheap door handle with standard fittings will work great when a door is opened only three times a day at your home, but it will break in a matter of weeks if put in a large office building. This same principle is at play when selecting analytics tools to support operational processes.

Given the preceding facts, it may not be possible to have a single analytics tool set from a single vendor that handles all needs. It is entirely possible that different tools will be used for the discovery process than are used when making a discovery operational. Over time, tools will evolve. It is hoped that some will eventually be able to handle both needs with equal effectiveness. As of early 2014, that is not the case.

Analysis Value versus Technology Value

There are two components to the benefits achieved with an investment in analytics. While unfortunately they are often intertwined, it is very important to separate and distinguish them. The first component is the value of an analysis itself. In other words, regardless of what tools, technologies, or methodologies are used to get to the results, much of the benefit comes from simply getting to the results. Clearly there is a need to have tools and technologies to get to the results, but care must be taken not to associate the benefit of the underlying analytics with any individual tool or platform option.

For example, simple affinity analysis to derive cross-sell offer opportunities is valuable. Regardless of the tools and platforms used to run an affinity analysis, the results have an inherent value. The value of the tools and technologies is in how efficient they make it, compared to other options, to create, test, and execute the analytics process required to get the affinity analysis results. In most cases, as illustrated in Figure 4.3, the inherent value of the analysis will be much larger than the incremental value of a given tool or technology.


Figure 4.3 Typical Total Value Decomposition

The first step should be to determine the benefit of an analysis in the absolute sense, independent of any tool or platform. Once that is determined, assess the effectiveness of the various options for creating that analysis quickly, efficiently, and cost effectively. One trap people fall into is letting a salesperson tout the huge return on investment (ROI) that analytics created with his or her products can generate. Those making such a claim often add the ROI inherent in the analytics to the incremental value offered by their company's tools or technologies to get those results. It is necessary to separate the value of the tool from the value of the underlying analysis.

As a side note, if every salesperson for every option you are considering embeds the analysis value with the tool value, then at least it is possible to compare the options on fair footing. Since all estimates include the same inherent value, any differences reflect a difference in incremental tool or technology value.

Business Case Framework to Consider

Richard Winter from WinterCorp published a terrific study called “Big Data: What Does It Really Cost?”1 The paper defines a framework for taking into account all types of costs and getting to a measure of what Winter calls “total cost of data” (TCOD) when making hardware and software investments to support analytics. TCOD reflects the total cost across a wide variety of relevant components, such as those we discuss in the next section of the chapter.

Keep in mind that Winter's TCOD framework, as well as much of the discussion in this section of the chapter, focuses primarily on the cost side of the equation. This is purposeful because the components of cost tend to be fairly consistent across organizations whereas the benefits vary widely based on the specific analytics processes being pursued. What is often missed is an accurate assessment of costs when it comes to analytics. Therefore, that is the focus here.

The most important thing about WinterCorp's TCOD framework is that it isn't biased toward one solution or another but simply provides a framework that helps identify and account for the various cost components. For example, two different examples in the paper led to completely opposite conclusions based on the facts. In one case, based on the nature of the data and processing required, a massively parallel relational environment was three to four times more expensive than a Hadoop implementation. In another case, based on the nature of the data and processing required, a Hadoop investment was three to four times more expensive than a relational environment.

Leveraging a framework that's neutral to the tools and technologies being evaluated allows an accounting for all costs in an unbiased fashion. The TCOD framework needs some slight modification for operational analytics, as it was targeted at a slightly different investment. As we discuss shortly, however, combining the TCOD framework with some additional metrics tied specifically to operational analytics is a terrific starting point.

What Are the Total Costs for Operational Analytics?

It is critical to get to an accurate total cost when assessing options for analytics investment. When considering open source tools, for example, organizations can't get too hung up on the fact that the license for the software is free. It is necessary to look at the full picture of costs over time. It's not that open source tools can't be a tremendous addition to an organization's environment, but it is necessary to look at the total costs and be diligent in looking for cases where perverse incentives are inadvertently driving higher costs over time.

When assessing the costs related to operational analytics, what must be included? The costs include, but are not limited to, these:2

  • Hardware to support the analytics processing
  • Software acquisition (Note that even in the case of open source software, there are costs to install and configure the software.)
  • Space the equipment uses and the power consumed
  • Fully loaded labor costs to configure and implement security, resource prioritization, and network connectedness
  • Acquisition, loading, and preparation of data
  • Labor required to develop an analytics process
  • Effort to test code logic and accuracy of process output
  • Maintenance costs for the platform, software, and analytics processes over time
  • Training for staff on how to use all the various components of the analytics environment

All of these costs must be viewed across the typically several-year period of time that represents the life span of the investment.
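As a rough illustration, the accumulation of these components over an investment's life span can be sketched as follows. Every dollar figure and category name here is a made-up placeholder, not a benchmark; the point is simply that one-time outlays and recurring costs must be totaled together over the full period.

```python
def total_cost(upfront, annual, years):
    """Total cost of an analytics investment over its life span:
    one-time outlays plus recurring costs accrued each year."""
    return sum(upfront.values()) + years * sum(annual.values())

# Hypothetical figures for a 5-year investment horizon.
upfront = {"hardware": 500_000,
           "software_acquisition": 200_000,
           "initial_config_labor": 150_000}
annual = {"space_and_power": 40_000,
          "maintenance": 80_000,
          "process_dev_labor": 120_000,
          "training": 20_000}

print(total_cost(upfront, annual, years=5))  # 2150000
```

Note how the recurring components ($1.3 million over five years in this sketch) end up dwarfing any single up-front line item, which is why comparing options on one visible cost alone is misleading.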

Obviously, there are many cost components to consider, and the primary categories are shown in Figure 4.4. Some components, such as a hardware purchase, will require large initial expenses but little ongoing expense after that. Other costs will be spread more evenly over time, such as maintenance costs. To compare options appropriately, it is necessary to look at total cost across all of those components over time. The flip side of the equation is that it is also necessary to account for the various returns that will be realized from the investment. Next we discuss some concepts that help create an accurate business case.


Figure 4.4 Cost Components of an Analytics Investment

Account for All Costs over Time

Just as with any investment, when making a business case for operational analytics, it is critical to account for all costs, not just key line items, and to account for those costs over the lifetime of the investment. One mistake organizations make is to not fully account for some very real costs that they will face. This is partially driven by the fact that some costs get more attention because they are much more visible and/or politically charged than others. Make sure that people are keeping an eye on all of the costs even as they try to focus on only the few that they're most interested in. Let's walk through a few examples of the impacts of ignoring total costs in day-to-day life.

Hotel Rates

There was a popular hotel right next to an office that I often traveled to. My employer at the time had a rate of $109 at the hotel, which included breakfast and Internet service. That was a good deal because the breakfast and Internet were priced at $10 apiece. The $109 rate was providing $129 in value.

The following year there was a big push to lower our average nightly room rates. Our travel department set up a new rate for us that was $99 but didn't include breakfast and Internet. By the time breakfast and Internet service were added, almost all travelers were going to be paying an effective rate of $119 every night. The company's goal was to decrease the nightly rate line item, and somebody got a gold star for “saving” $10 per room night at that property. The other charges may have hit different line items, but at the end of the day, the company was going to be paying more in total.

Cost per Unit

A client confided that he was struggling with an upcoming hardware investment. His management was focused 100 percent on the cost per server. The performance gap between the more expensive and less expensive servers was at least three times while the costs were only about 25 percent different. His company was heading toward spending nearly three times what was required simply because a lower cost per server was the primary target. He couldn't convince those making the purchase to look at the bigger picture because they were hung up on that one metric. I didn't get an update on how it played out, but I hope that cooler heads prevailed. Focusing on cost per server without taking into account performance is a losing formula.
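A quick back-of-the-envelope comparison shows why. The prices and performance figures below are invented to mirror the rough ratios in the story (about 25 percent more expensive, at least three times the performance); they are not real server prices.

```python
def cost_per_unit_of_work(price, relative_performance):
    """What one unit of work costs on a given server option."""
    return price / relative_performance

cheap_server = {"price": 10_000, "performance": 1.0}  # baseline
fast_server = {"price": 12_500, "performance": 3.0}   # ~25% pricier, ~3x faster

print(cost_per_unit_of_work(cheap_server["price"], cheap_server["performance"]))
print(cost_per_unit_of_work(fast_server["price"], fast_server["performance"]))
# The "cheap" server costs more than twice as much per unit of work,
# so matching the fast server's throughput with cheap servers means
# buying three of them: $30,000 instead of $12,500.
```

Judged on cost per server, the cheap option wins; judged on cost per unit of work delivered, it loses badly, which is the bigger picture the client's management was missing.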

Game Show Winnings

The Price Is Right was my favorite game show when I was growing up. There are many stories about winners who are shocked to learn that the “free” RV they've won comes with a huge tax bill and a lot of maintenance costs.3 If contestants want to take a $60,000 RV home today, they had better be prepared to pay about $20,000 in income and sales taxes as well as high gas and maintenance bills. If a contestant is not comfortable with those costs, that free RV isn't really free at all, is it? Contestants had better also look at resale value to ensure that they can sell what will be an officially used RV at a high enough price to net a positive return after taxes and fees. It is a big mistake to look only at the value side without looking into the cost side. As an aside, you don't think that Olympic medals come without taxes, do you? U.S. Olympic athletes face taxes for winning medals because of the cash awards that come with them from the U.S. Olympic Committee.4

The Most Overlooked Component of Cost

One of the most often underestimated, if not completely missed, components of a business case for investing in tools and systems supporting analytics is the labor component. It is critical that labor costs are accounted for. There are very real labor costs related to all aspects of building, testing, implementing, and maintaining operational analytics processes. There are also very real labor costs related to implementing and maintaining an analytics platform or a set of analytics tools.

Labor costs can be driven up immensely if an organization doesn't have the right skills on staff and is therefore burdened with inefficiencies during implementation and process creation. The costs related to labor can exceed the underlying licensing and hardware costs by multiple times. This can be especially true for analytics processes that are not yet mature and require more care and feeding. Many operational analytics processes fall into this category today.

A man from a government agency (names withheld for obvious reasons!) confided during a discussion that his organization had reduced a substantial portion of its software licensing fees through a mandate to use open source technologies wherever possible across the agency. However, his team ended up spending millions of dollars on incremental labor and was multiple quarters behind deadlines. This was because a lot of the open source tools the agency migrated to were not ready to replace the commercial tools previously in place. Not only had the organization not saved anything in total, but it had spent millions more and lost a lot of time. Targeting the license fee line item alone led the agency down a path that cost dearly in terms of labor even though the license fee line item was drastically reduced.

There's another area where labor comes into play that's very hard to quantify but very real. If it takes extra time to do something on a given platform or with a given tool compared to another option, then that additional time should be attributed to that investment choice. Unlike labor costs for implementation and ongoing maintenance, which are easy to identify, the cost of working less efficiently with a chosen option is easy to miss, yet it can quickly add up and possibly dwarf the other costs.

As you assess potential investments, you must look objectively at all of your costs and all of the skills that you have available. These are summarized in Figure 4.5. Based on available skill sets alone, one organization could be led down a different path than another. As with anything, the right answer is often “It depends.” Without going through the process of accounting for your situation, you can't make the right choices.

  • Installation and configuration
  • Ongoing maintenance of both tools and analytic processes
  • Analytic process creation
  • Analytic process testing and operational implementation
  • Incremental effort required to utilize one option over another (often missed!)

Figure 4.5 Labor Costs that Must Be Accounted For
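To keep those labor categories from being overlooked, it can help to enumerate them explicitly in a simple total-cost model. The sketch below is hypothetical (every figure is invented for illustration), but it shows how a lower license fee can be swamped by the labor line items from Figure 4.5:

```python
# Hypothetical five-year total cost of ownership for two tool options.
# License/hardware are the visible line items; the labor categories are
# where the real differences often hide.
options = {
    "commercial": {
        "license_hw": 500_000,
        "install_config": 50_000,
        "maintenance": 100_000,
        "process_creation": 150_000,
        "testing_implementation": 100_000,
        "incremental_inefficiency": 0,        # baseline
    },
    "cut_rate": {
        "license_hw": 100_000,
        "install_config": 150_000,
        "maintenance": 250_000,
        "process_creation": 300_000,
        "testing_implementation": 200_000,
        "incremental_inefficiency": 250_000,  # extra analyst time, workarounds
    },
}

for name, costs in options.items():
    labor = sum(v for k, v in costs.items() if k != "license_hw")
    print(f"{name}: license/hw ${costs['license_hw']:,}, "
          f"labor ${labor:,}, total ${costs['license_hw'] + labor:,}")
```

On these invented numbers, the "cut_rate" option slashes the license line item by 80 percent yet costs substantially more in total, which is exactly the trap the government agency fell into.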

Issues that Change the Formula

Realities can lead an organization to deviate from the cheapest cost option, of course. For example, perhaps the capital budget is fully spent this year and everyone has been told that there is absolutely no way that any more capital expenses will be approved . . . period. In that case, any option requiring a capital expense isn't going to work, and it is necessary to come up with an alternative. That alternative might involve a cloud solution or leasing instead of purchasing equipment, for example. Those options may even be more expensive over time, but a higher long-term cost is the price to be paid for the tight budgets at present.

It's important to understand that it's okay to pursue a more expensive option, as long as it is done with a full understanding of how much more is being paid and why. Knowingly deciding to pay more for practical reasons is fine. That is far different from skipping the exercise of understanding the costs and potentially even fooling yourself into thinking that you aren't paying more when in reality you are.

Scalability Is Not Just about Storage and Processing

In Chapter 2, we discussed that operational analytics and big data require scale in multiple dimensions. This means not just storage and processing but also the number of users, concurrency, security, workload management, and integration with other tools. When making analytics operational, there will be millions of decisions (potentially tens of millions) being made on an ongoing basis, so it is necessary to ensure that the needed scale is available across all the necessary dimensions.

If a chosen investment can't support all the types of scale required for operational analytics, an organization will pay dearly on the back end working around the scale limitations. The cost of those workarounds can really add up. In the worst case, it may not be possible to work around some of the gaps and it may be necessary to start over.

I'll provide an analogy to this concept from my own past. A few years ago, I bought a cheap weed trimmer. I only needed to do a little bit of weed trimming in my yard, so I decided to go with the cheapest trimmer I could find. When I got my purchase home, it didn't work very well, so it took me longer to do the weeding I needed to do. In addition, the string it used was cheap and broke quite frequently. The string was very difficult to change on the spool, and the spool was very difficult to put back on once I did change the string.

In the end, that cheap trimmer cost me a bundle due to the extra time and inefficiency it caused me. After a few weeks of trying to make it work, I abandoned the trimmer and went and bought a more expensive version. If I had really taken into account not just the price of the trimmer but also the total effort it would take me to use it in the ways that I planned, I would have made a different choice from the start. Luckily, a weed trimmer is relatively inexpensive, and I learned my lesson with minimal monetary damage. That won't be the case if similar mistakes are made with investments in operational analytics.

Tips for Creating a Winning Business Case

Now that we've covered some of the considerations that need to go into a business case for analytics, we turn our attention to concepts that can increase the odds of successfully pitching the case to an executive team. Once a solid business case is created, how can it be positioned most effectively to ensure success in getting it approved? Let's look at a few things to consider.

Don't Force a Business Case

Don't waste time trying to force a business case where one doesn't exist. If you are trying to make a case and the numbers just aren't working out, then it is time to move on to another problem. When there is a lot of hype around certain approaches, it is easy to buy into the sizzle and get sucked into trying to make a business case work. Don't let shiny new technology, data, or tools and the sizzle that surrounds them move a business case past facts and into emotion.

In 2013, people at multiple organizations around the world talked to me about the difficulty they faced justifying substantial investments in the acquisition of social media data and the related analytics of that data. My clients couldn't find use cases that justified increased investment. Each of the organizations had third parties providing high-level sentiment analysis and other trend analyses based on aggregate social media information. However, the organizations couldn't make the case for bringing the raw social media data in-house. The costs of the data and of developing the analytics processes didn't appear to have enough return to make it worthwhile. At each organization, the people I spoke with were stressed over their inability to justify something that they perceived others in the marketplace commonly could justify. They all wanted to know what they were missing.

What I told each of the organizations was that they shouldn't worry. Perhaps investing in detailed social media data didn't make sense for them at the time. Perhaps it never will. If the high-level summaries that the organizations already had access to were sufficient and they weren't able to prove the need for a deeper level of investment, that is okay. After all, even after going to the effort and expense to get the data, matching social media accounts with internal customer accounts can be quite difficult, and the success rate in matching is fairly low. I suggested to each organization that the best path may be to stick with what is already in place for social media and divert energy into finding another higher-value analytics opportunity to pursue.

Part of the problem my clients faced was that there was substantial hype around social media analytics at the time. It seemed like everyone else was investing and getting a return on social media analytics. I pointed out that I'd had the same conversation with several other organizations just like theirs. Each organization seemed to think that others were doing more than they actually were.

Such scenarios seem a lot like high school, when everyone else seemed to have had a much more exciting life than you did. In fact, most of it was simply rumor, and other kids may have been envying you and what they perceived as your exciting life. In high school, nobody wants to be left out, and that's also true in the business world. Don't give in to the pressure to prove a business case that doesn't exist. Your energy is better spent building cases in areas where you're confident value exists and you're able to prove it.

To Succeed, Start Small

As mentioned at the beginning of the chapter, the way tools and technologies are used to build analytics processes today makes it possible to start with a much smaller investment and then build from there. This point is so important that I've discussed it from different angles in my book Taming the Big Data Tidal Wave, in my regular International Institute for Analytics blog, and in a Harvard Business Review blog.5 I'm going to reinforce some key themes here.

One of the reasons people immediately have concerns when someone suggests starting small is the concept of anchoring, which I first heard about in the book Predictably Irrational by Dan Ariely.6 To illustrate the concept, say that there is a big room full of people. I take half into the hall, tell them that I'm going to have lunch with 10 people today, and have them take their seats again. Then I take the other half of the group into the hall and tell them I am going to be at the airport with 10,000 people that afternoon. We all return to the room, I set a jar of jelly beans on the table in the front of the room, and ask the room how many jelly beans are in the jar. This is where it starts to get interesting.

As it turns out, the half of the people who heard me say the number 10 will, on average, guess lower than the half who heard me say the number 10,000. This is true even though the jelly beans have nothing to do with the numbers I said. The reason is that people's minds get anchored on either the number 10 or 10,000. The group that heard 10 starts at 10 and works its way up until the number being guessed seems big enough. The group that heard 10,000 works its way down until the number seems small enough. It is a psychological trick our minds play on us.

That's exactly what happens with big data and operational analytics. The phrases sound intimidating. Our minds get focused on big, complex, massive-scale analytics. As we think about how to get started, our minds drift toward highly complex, very difficult paths. We start aiming for the end state rather than the first steps that lead to the end state.

There is a perspective that is critical here. We don't need all of the data over years of operation for every piece of equipment in a fleet to identify predictive maintenance opportunities. What we do need is enough data over enough time on enough pieces of equipment to establish what trends exist and what the general magnitude of the opportunity is. Instead of starting with a massive project, start with a pilot or proof of concept on a subset of data. That effort can prove that an idea makes sense and can produce a return. Simultaneously build up the final business case as you learn more about both the effort required to create the final operational process and about any data or process issues that will have to be addressed. Feed the results from the pilot into the case for the larger investment. Just make sure that your mind doesn't get tricked into anchoring on something much bigger.
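To make the "enough data on enough equipment" idea concrete, here is a hypothetical sampling sketch (the fleet size and cost figures are invented): instrumenting a small sample of units can be enough to estimate the fleet-wide magnitude of a maintenance opportunity before committing to the full rollout.

```python
import random

random.seed(42)

# Hypothetical fleet: each unit's annual avoidable-failure cost.
# In practice this is unknown; the pilot exists to estimate it.
fleet = [random.gauss(2_000, 500) for _ in range(10_000)]

# Pilot: instrument only a small sample of units for a few months.
sample = random.sample(fleet, 200)
est_per_unit = sum(sample) / len(sample)

# Scale the per-unit estimate up to size the fleet-wide opportunity.
print(f"estimated opportunity: ~${est_per_unit * len(fleet):,.0f} fleet-wide")
```

The point of the sketch is not precision; it's that a 2 percent sample gives a defensible estimate of the general magnitude of the opportunity, which is exactly what the larger business case needs.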

Accept Some Uncertainty

When entering new areas like big data and operational analytics, there will be more unknowns than is typical when developing a business case. When pursuing a new initiative of the scale of operational analytics, many assumptions will also be necessary. These include the obvious, such as how well the analytics will work and how accurate the data is. There will also be assumptions about how well the results will actually be implemented and adopted by the organization once they are available. More or less cultural resistance than expected can vastly influence the final impact of the operational analytics.

Think back to Chapter 1 and the discussion about drivers being given an optimized daily route based on complex analytics. If the drivers embrace the changes to their usual routes and actually drive the new routes to log fewer miles, there will be a big gain. But if the drivers resist and follow only a small portion of the suggestions, the return will be much less than it could have been. Note that this lack of impact has nothing to do with the power, accuracy, and potential of the analytics process itself. It is purely due to drivers not actually making use of the recommendations. It is a cultural and compliance issue. We discuss these topics more in Chapter 9.

It is critical that an organization understand the assumptions being made and document the risks that either can't be quantified or have a lower degree of precision. When undertaking something new like operational analytics, it might not be possible to specify some of the assumptions as precisely as for other business cases. Many business cases focus on a common situation that has been executed in the past and is understood quite well. For example, imagine a proposal to create a new manufacturing process to produce a fiftieth variation on a product line. In such a situation, it is possible to be very confident in the various assumptions being made about how the equipment will work, how smoothly the line will run, and how the staff will adapt to the new process. After all, something similar has been done 49 times.

However, new and innovative ideas always are going to be a bit more ambiguous. The politics around getting the organization to accept some of the less firm assumptions can be difficult to navigate. Some executives will say they want to assume very low acceptance and adoption of a new process to be safe. Others will want to be aggressive and assume that employees will fully embrace the new process. How do you resolve that gap and get approval?

One way to move past the disagreements is to demonstrate that a wide range of reasonable assumptions all point to the same decision, which is that investing is a smart move. If the uncertainty can't be removed completely, show that the impact of the uncertainty won't be an issue. Whether people want to assume an 80 or a 50 percent compliance rate, if both of those assumptions still point to a positive return, then people can agree to disagree and still feel comfortable proceeding. Over time, the more that an organization embraces analytics, the easier it will be for people to make what might be viewed as a partial leap of faith. It's easier for people to accept some uncertainty when they've seen the same type of uncertainty work out just fine in the past.
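The "agree to disagree" approach is essentially a one-variable sensitivity analysis. The sketch below uses invented figures for the route-optimization example: if every compliance rate in the disputed range still yields a positive return, the disagreement doesn't change the decision.

```python
# Hypothetical sensitivity analysis for the route-optimization investment.
investment = 1_000_000          # one-time cost of building the process
annual_benefit_full = 800_000   # yearly savings at 100% driver compliance
years = 3

# Test the whole disputed range of compliance assumptions.
for compliance in (0.5, 0.65, 0.8):
    benefit = annual_benefit_full * compliance * years
    roi = (benefit - investment) / investment
    print(f"compliance {compliance:.0%}: net ${benefit - investment:,.0f}, ROI {roi:.0%}")
# Every assumption in the range points to the same decision: invest.
```

With these numbers, even the most pessimistic assumption still clears the investment, so the executives arguing for 50 percent and those arguing for 80 percent can both comfortably approve.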

There Are Many Options, So Choose Wisely

As an organization plans where to invest in analytics, it is necessary to weed through all of the possible analytics that could be pursued and to decide which of those should be given focus. Even if an organization developed a list of 100 compelling operational analytics to implement this year, it wouldn't be possible to implement them all. It is necessary to prioritize and reduce the list to a number that can be handled from a business process change and resource perspective. It just isn't possible to go after everything at once.

Consider creating a quarterly or yearly process of gathering all of the possibilities that the analytics and business teams think they can make a case for. Come to the table with all of the great ideas, and then start to weed through them. Ask:

  • Which would have the most internal or external political hurdles?
  • Which might be too narrowly focused to have enough upside?
  • Which tie to long-term corporate priorities?
  • Which are based on data and skills that are readily available?
  • Which have been given a high priority by the business team?

Debate the options and then decide which of the options make the most sense to build a business case for. Identify the number that can be handled for the year, but take along a few others just in case the business cases don't work out for some options. By going through the process of starting with all the possibilities and whittling them down, it is possible to be confident that good choices are made.7
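One way to structure that whittling-down process is a simple weighted scoring matrix over the questions above. The sketch below is purely illustrative (the criteria weights, project names, and scores are all invented):

```python
# Hypothetical scoring matrix for prioritizing candidate analytics projects.
# Each candidate is scored 1-5 per criterion; weights reflect what the
# organization cares about most. Higher total = higher priority.
criteria_weights = {
    "low_political_hurdles": 0.15,
    "broad_upside": 0.25,
    "strategic_fit": 0.25,
    "data_and_skills_ready": 0.20,
    "business_priority": 0.15,
}

candidates = {
    "predictive_maintenance": {"low_political_hurdles": 4, "broad_upside": 5,
                               "strategic_fit": 4, "data_and_skills_ready": 3,
                               "business_priority": 5},
    "social_media_ingest": {"low_political_hurdles": 3, "broad_upside": 2,
                            "strategic_fit": 2, "data_and_skills_ready": 2,
                            "business_priority": 2},
}

def score(scores):
    """Weighted sum of a candidate's criterion scores."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name, s in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(s):.2f}")
```

The matrix won't make the decision for you, but it forces every candidate through the same questions and makes the debate about scores and weights rather than about whose idea is loudest.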

Illustration of Doing It Right

A few years ago, a European retail client wanted to capture web browsing history as part of each customer's profile in order to enable better direct marketing and website customization. The effort was projected to cost several million euros, and the team was struggling to get approval to proceed. In such a situation, many teams would either give up or continue to push the same plan quarter after quarter until the project got approved. In either case, the opportunity would be missed or severely delayed.

This team had an epiphany. Its members realized that it was true that it would cost several million euros to capture all web browsing history for all customers across all of the organization's websites. However, the executives weren't questioning whether the idea would work so much as how well it would work for their organization. The team therefore did something very smart.

Team members identified a couple of popular product lines on one of the company's websites. Then they captured browsing history for customers browsing just those products on that one site for a few months and executed some tests in a pilot. By vastly scaling down the initial scope of the pilot, the amount of data wasn't very big, and the team was able to use existing tools and technologies along with some labor to get it done. The team was able to prove that, for example, sending a follow-up e-mail to someone who browsed an item but didn't buy would produce a big return. In fact, the total return across the tests in the pilot was 800 percent in five months.
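For context on that figure, an 800 percent return means the pilot earned back nine times its cost. A quick sketch of the arithmetic (the euro amounts are invented; only the 800 percent figure comes from the story):

```python
# ROI = (gain - cost) / cost. With an ROI of 800%, gain = cost * 9.
pilot_cost = 50_000                 # illustrative labor on existing tools
roi = 8.0                           # "800 percent in five months"
pilot_gain = pilot_cost * (1 + roi)
print(f"gain €{pilot_gain:,.0f} on cost €{pilot_cost:,.0f} "
      f"-> ROI {(pilot_gain - pilot_cost) / pilot_cost:.0%}")
```

Because the pilot used existing tools and people, the cost side was tiny, which is what made such an outsized percentage return possible.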

Next, the team went back to the executive committee and explained that the 800 percent return on the pilot was achieved in a few months using existing tools, technology, and people. If the team were to build the solution out across all of the company's websites, all products, and all customers, there was a very impressive projected revenue impact to discuss. The team next pointed out that the estimates actually were on the low end because only a few of the fields of the web logs had been used and only a few simple ideas had been tested with the data. Team members had a lot of other ideas about how to use the data that they hadn't tested. While they couldn't quantify the return from those other ideas, the results would only add to the results seen in the pilot. The numbers from the pilot that everyone was excited about represented a floor, not an expected value and certainly not a ceiling, of what could be expected in a full rollout. Team members also explained that they had now worked with the data, understood it better, and could lower the risk of the rollout because they were much more confident in their work estimates.

With those facts, it was easy to get approval. The executive team was excited to invest in the initiative, knowing that the returns would be there because returns had already been proven. The investment was no longer seen as a risky, huge expense that had unknown benefits months out. Instead, it was seen as a smart investment that everyone knew was going to pay off. In fact, the executives probably would have shoveled the money out the door faster if it had been possible to accelerate implementation.

Notice that the retailer started small and built a business case in stages. However, the endgame was not simply scaling out the exact analytics on the exact products included in the small-scale pilot. The case took into account all costs, including ongoing labor. Team members also pointed out, as suggested earlier in the chapter, the bigger picture they were pursuing. The advantage of starting small and building a business case for analytics is that it shifts focus away from costs and toward the benefits.

Wrap-Up

The most important lessons to take away from this chapter are:

  • Build a case for solving a business problem, not for covering the costs of a project. Also, make it a partnership between business and IT.
  • Build a case for analytics with the potential to be a differentiator, not just an incremental improvement to existing analytics processes.
  • Prove a concept, not a case. Design a proof of concept to illustrate the potential of a more general class of approach. Don't make it only about proving the value of the limited scope addressed directly in the POC.
  • When investing for discovery, use different criteria, like time to insight, that account for the usability and flexibility of options in addition to processing performance.
  • If necessary, trade off tool functionality and user friendliness for scalability and ease of integration when making analytics operational.
  • Distinguish the inherent value of an analysis from the incremental value that a tool or technology provides to generate the analysis results.
  • Identify and account for all costs related to an analytics investment over time within a neutral framework. Don't focus on only certain line items.
  • Pay particular attention to ongoing labor costs, both for maintenance and for building and testing analytics processes. Labor costs are often the most overlooked or underestimated costs.
  • Make sure a business case takes into account the various dimensions of scalability required. If not, the gaps will lead to extra costs or even having to start over.
  • Don't force a business case where one doesn't exist. A heavily hyped topic isn't going to be right for every organization right now (or ever).
  • Start small and leverage targeted pilots to provide tangible results. It isn't necessary to fully implement an analytics process to prove its value.
  • Accept that new, innovative initiatives will have more uncertainty than most. If agreement can't be reached on required assumptions, show that all assumptions being argued point to the same decision.

Notes
