CHAPTER 11

Metrics: Performance and Context

Purna “Chandoo” Duggirala, who created and writes for Chandoo.org, is a strong advocate of data context. Consider his article “Never use simple numbers in your dashboards.” In it, he argues for including additional information about single data points. For example, the sales data for a single month can tell only that month’s story. However, including last month’s sales, last year’s sales, and how sales have trended provides a much richer story. This is because, in Chandoo’s words, several data points can present a “better story than a simple number alone.”

Telling the Whole Story Like a Reporter: An Introduction to Analytics

A good dashboard report or interactive decision tool does more than just provide information; it tells a story. And a good story requires description, setting, and context. In my high-school journalism class, we liked to talk about the five w’s and one h: who, what, where, when, why, and how. Dashboards aren’t much different from a good newspaper article or a well-researched magazine piece. Your dashboard should tell a story, and you should present enough detail to tell that story well.

Thinking back to the introductory chapters of this book, you’ll recall the reasons why some dashboards and data visualizations fail. I argued that data applications cannot make decisions for you, nor should you expect them to; instead, they provide you with the information necessary to make good decisions. Much of that failure stems from a misunderstanding of this purpose of the data displayed.

Dashboards and decision tools can easily provide descriptive context. For example, they can address the who, where, and when, even in exhaustive detail if need be, provided the necessary data is available. The what (which arguably overlaps with descriptive context), the why, and the how are not as easily presented. A good description of what has happened, the root causes behind why it happened, and the context required to describe how it happened are simply harder to capture. For example, the reason one department performs better than another may require research into an organization’s processes, and for that matter, it may be the subject of intense debate within the organization.

Decision tools should contain the who, when, and where, which you can think of as descriptive analytics. The what, why, and how, on the other hand, aid in the development of prescriptive action. That is, they help alert you to a problem and potentially inform the best decision. A metric that misses its goal or a department that outperforms another should encourage you to investigate the reasons behind these phenomena and how to replicate or avoid them. This is a key area where dashboards and decision tools differ. A dashboard can alert you that prescriptive action is needed. A decision tool, like a decision support system, can go further and provide recommendations through prescriptive analytics.

Finally, there’s one more w to add to the mix: the “what if.” The what-if forms the predictive analytics of the dashboard. Prediction speaks to what you think will happen given certain preconditions. For example, the simplest precondition is that you will continue to perform next month as you did this month. See Figure 11-1.

9781430249443_Fig11-01.jpg

Figure 11-1. A description of the different types of analytics

Let’s go through these.

Who and Where

The who or where in this case could be a person or a department. In a certain sense, they aren’t very different. Consider the table in Figure 11-2. The table answers the question, how well have the departments under my purview provided quality customer service?

9781430249443_Fig11-02.jpg

Figure 11-2. A demonstration of a dashboard table answering the where question of descriptive analytics

This table answers the where, which refers to a particular department, but it doesn’t answer the who, which refers to a person. For example, you might be interested in your representatives who support the Customer Service department. It’s tempting then to include the entire list of staff and their corresponding scores on the dashboard. I’ve excerpted part of this list in Figure 11-3.

9781430249443_Fig11-03.jpg

Figure 11-3. An excerpt of the staff list, which begins to answer the who question of descriptive analytics

But the full list is much bigger (100 names, in fact). And that’s just my sample data. Real data, as you probably know, can be much bigger. So handing the user the full list, or an arbitrary excerpt of it, is probably not the best idea: it puts the responsibility of finding the problem on the user. Instead, think in terms of telling a story; specifically, you want to communicate the relevant context quickly, providing accurate descriptions that bring the narrative to life. As such, the who you are most interested in is probably the worst performers. See Figure 11-4.

(Arguably, you might also be interested in the best performers, but to the extent dashboards alert you to problems requiring action, you are definitely more interested in the worst performers.)

9781430249443_Fig11-04.jpg

Figure 11-4. A table that helps you communicate the worst performers quickly
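If you want to build a table like this in Excel, one common approach is to pull the bottom performers with SMALL and then look up their names with INDEX and MATCH. The sketch below is only a minimal illustration: the sheet name Staff, the ranges, and the cell addresses are hypothetical, and ties in the scores will return the first matching name.

Nth-lowest score, entered in F3 and filled down:

=SMALL(Staff!$C$2:$C$101, ROWS($F$3:F3))

Name belonging to that score, entered in G3 and filled down:

=INDEX(Staff!$B$2:$B$101, MATCH(F3, Staff!$C$2:$C$101, 0))

Here the staff names are assumed to live in Staff!B2:B101 and their scores in Staff!C2:C101. Because the worst performers are computed rather than hand-picked, the table updates itself as the underlying data changes.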

When

Answering the when always deals with time. For example, answering when can be as simple as supplying the previous month’s rank in addition to the current rank (see Figure 11-5). This provides context for the current performance. Is there an underlying trend or perhaps an unexpected aberration?

9781430249443_Fig11-05.jpg

Figure 11-5. A table that helps answer the who, where, and when of descriptive analytics
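If your department scores live on the worksheet, the rank columns in a table like Figure 11-5 can be computed rather than typed. The following is a minimal sketch; the ranges are hypothetical, assuming current-month scores in B2:B6 and last month’s scores in C2:C6, with higher scores being better.

Current rank: =RANK(B2, $B$2:$B$6)
Last month’s rank: =RANK(C2, $C$2:$C$6)
Change in rank: =RANK(C2, $C$2:$C$6) - RANK(B2, $B$2:$B$6)

A positive change means the department has moved up the rankings since last month; a negative change means it has slipped.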

The when can be presented in much more depth than this, of course. A time-series chart is often a good way to present the when.

Why, How, and What

In this section, I’ll discuss the why, how, and what that make up the components of prescriptive analytics. The way prescriptive analytics is used differs with the context in which it appears. In the next two subsections, I’ll discuss how these analytics differ between dashboards and other decision tools.

Dashboards

You may recall from earlier in the book that there are inherent differences between dashboards and other decision support tools. To wit, dashboards don’t make prescriptive recommendations; rather, they alert users to areas that need attention. They can, for instance, measure current performance against a selected target. In a sense, the target is a type of “ideal.” When you place a metric on a dashboard alongside an ideal value to be attained, you are making a prescriptive argument about how the thing being measured ought to perform. Bullet charts, introduced in Chapter 4, are great examples of comparing how you’re doing against how you think you should be doing. Figure 11-6 shows an example bullet chart you can make in Excel.

9781430249443_Fig11-06.jpg

Figure 11-6. Bullet charts show how you’re actually doing against how you think you should be doing

When a measurement comes in significantly under the ideal, the dashboard is signaling that some intervention ought to be taken. Ideally, it will also provide other descriptive measurements that help explain why the measure is falling short. With this information, the dashboard user can take the necessary action or investigate further to rectify the problem. Dashboards don’t usually present prescriptive analytics by themselves, but they can prescribe that action be taken when necessary.

Decision Tools

Decision support systems, on the other hand, employ full prescriptive analytics. For example, Figure 11-7 shows an example decision support system that lists the countries with the best healthcare systems based on a set of metrics.

9781430249443_Fig11-07.jpg

Figure 11-7. An example of a decision support system using prescriptive analytics

This decision support system allows you to change the weights the model uses. This is called sensitivity analysis. With sensitivity analysis, you change the weights to see which countries remain the top performers (that is, you test how sensitive the model’s results are to changes in the weights). This differs from a regular dashboard because it lets you change how the results are calculated. In other words, the tool makes a prescriptive recommendation based on the decisions made by the user, whereas a dashboard can only recommend intervention and investigation.
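A weighted scoring model like this is straightforward to sketch in Excel with SUMPRODUCT. The layout below is hypothetical: assume the metric weights sit in B1:E1 and each country’s metric values occupy one row, such as B4:E4. The country’s score, normalized by the total weight so that scores remain comparable as the weights change, could then be

=SUMPRODUCT($B$1:$E$1, B4:E4) / SUM($B$1:$E$1)

Because the weights live in ordinary cells, the user can change them and watch every country’s score, and therefore the ranking, update immediately. That interactivity is what makes the sensitivity analysis described above possible.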

You’ll investigate in greater detail the decision support system shown in Figure 11-7 in Chapters 17 through 20.

What If?

In this section, I’ll talk about the “what-if” question that predictive analytics helps answer. What-if analysis asks questions of the data, given certain conditions, to make an educated guess about what happens as a result. On a dashboard, what-if analysis often takes the form of regression, though it needn’t always. For example, the chart in Figure 11-8 (from Chapter 3) makes a prediction about years of experience and salary. It assumes two key conditions: that salary and years of experience are inherently related and that the relationship between the two can be expressed accurately by a linear equation.

9781430249443_Fig11-08.jpg

Figure 11-8. An example of predictive analytics. Using the regression line, you can predict how much one might make based on the years worked

Predictive analytics plays a role on both dashboards and decision support systems. On a dashboard, regression can help predict what next quarter’s sales might look like. In a decision support tool, the user might specify a set of interactive assumptions before the model results are generated. Often these assumptions help test prescriptive actions (“What if our company sold this type of product instead; would that result in a better gross margin?”). Whether on a dashboard or in a decision support tool, you must always remember that predictions follow from a series of assumptions. Their results are true only to the extent that their underlying assumptions hold.
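In Excel, this kind of simple linear prediction can be produced directly with worksheet functions. The ranges below are hypothetical, assuming years of experience in B2:B101 and salaries in C2:C101; the sketch predicts the salary for someone with 12 years of experience.

=FORECAST.LINEAR(12, $C$2:$C$101, $B$2:$B$101)

The same prediction can be written with SLOPE and INTERCEPT, which makes the underlying linear equation explicit:

=SLOPE($C$2:$C$101, $B$2:$B$101) * 12 + INTERCEPT($C$2:$C$101, $B$2:$B$101)

(FORECAST.LINEAR was introduced in Excel 2016; earlier versions use FORECAST with the same arguments.) Either way, the prediction is only as good as the assumption that the relationship really is linear.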

Metrics, Metrics, Metrics

At the end of the day, your dashboard or decision tool will produce some data to display. That data, presented together with context (a signal, a performance indicator, a goal, or a target), is a metric. Which metrics you choose to display ultimately depends on what you want to convey to the user, and that decision applies regardless of whether the problem you must solve takes the form of a dashboard or a decision support application.

Knowing what information to collect and ultimately display is half the battle. The other half is deciding how to present the information. As discussed earlier, dashboards are more descriptive in nature; decision support systems are more prescriptive and predictive. Here are some examples of metrics commonly found on dashboards and in decision tools:

  • Financial: Revenues, expenses, profits, financial ratios
  • Marketing: Satisfaction surveys, return on investment, audience engagement
  • Information technology: Downtime, transfer rate
  • Healthcare: Wait times, occupancy rate, length of stay, lab turnaround time

As I said in the beginning of the book, I won’t attempt to tell you which of these metrics are the best to display. There are many good books and web sites that can help you on this path. At the same time, there are just as many resources claiming that certain KPIs are the best. A simple Internet search for the best KPIs to use will return many results with contradictory information. With that in mind, I’ll take a few moments to tell you what criteria I use to evaluate metrics. I call these my “working criteria” because they make for a good rule of thumb, but they are only a starting point for evaluation. Remember, what is best for you depends on what you want to display.

Working Criteria for Choosing Metrics

The following criteria were in fact suggested for another domain altogether, specifically that of Value-Focused Thinking, a decision-making methodology used in the field of management science and operations research.1 However, I have found them just as appropriate for evaluating metrics to be placed on dashboards and decision tools. Specifically, you can judge metrics by their ability to fulfill the following criteria:

  • Mutual exclusivity
  • Common interpretation
  • Sufficiency

Mutual Exclusivity

As much as possible, metrics should not overlap in what they measure and present. For example, profit margin is net profit divided by revenue. There may indeed be good reason to place net profit, revenue, and profit margin on the same dashboard. But there may also be cases where profit margin is the only figure of interest. In those cases, adding revenue and net profit alongside the profit margin adds nothing. Ratios are susceptible to this problem: often you’re interested only in the resulting ratio, not its components. Be on the lookout for instances in which metrics with shared components merely repeat the same information.
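A quick worked example shows the redundancy. If revenue sits in B1 and net profit in B2, the profit margin is simply

=B2/B1

Given any two of the three values, a reader can recover the third (for a revenue of $1,000,000 and a net profit of $200,000, the margin is 20 percent), so displaying all three often repeats the same information. The cell addresses here are, of course, only illustrative.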

Figure 11-9 provides a good example of mutual exclusivity being violated.

9781430249443_Fig11-09.jpg

Figure 11-9. An example where mutual exclusivity is not being followed

The total bar simply reintroduces the unit sales values for each region. How much does it actually add to the comparison of one region against another? The total value may still be important, but the visualization in Figure 11-9 merely repeats information already shown. Figure 11-10 includes the same information as before but in a way that doesn’t visually repeat data already presented.

9781430249443_Fig11-10.jpg

Figure 11-10. Data is not repeated and therefore mutual exclusivity is maintained

Common Interpretation

Common interpretation is the assurance that the metric presented is interpreted in the same way by everyone within the organization. Most financial ratios aren’t subject to disagreement. However, constructed metrics may bring about confusion. Consider the benefit value function in Figure 11-11: 0 has the least benefit; 1 has the most. It’s not clear, however, what a 1 or a 0 really means. You simply know that a greater number is better.

9781430249443_Fig11-11.jpg

Figure 11-11. An example where common interpretation is violated; you don’t really know what a one or zero means in this context

The chart shown in Figure 11-11, unfortunately, is adapted from a real one I came across during my career. I knew that a 1 was better than a 0, but it wasn’t clear what that meant. One person might interpret a year with a value of .5 as returning half the value of a year with a 1. Another might interpret a year with a .5 as being half as good (whatever “good” means here) as a year with a 1. These interpretations, and many more, are all plausible when the metric has not been clearly defined. Common interpretation suggests we should all share the same understanding of the metrics presented to us.
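One way to avoid the ambiguity is to publish the metric’s definition alongside the chart. For instance, if the benefit value were a simple min-max rescaling of some underlying measure (this is only an assumption; I don’t know how the original chart’s values were constructed), the formula for the year in row 2, with the raw measure for all years in B2:B11, might look like this:

=(B2 - MIN($B$2:$B$11)) / (MAX($B$2:$B$11) - MIN($B$2:$B$11))

With that definition stated, everyone reading the chart knows a 0 is the worst observed year, a 1 is the best, and a .5 sits exactly halfway between them on the underlying scale.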

Sufficiency

Sufficiency deals with the number of metrics being displayed. On the one hand, you shouldn’t waste screen real estate by showing only a few metrics while leaving lots of whitespace. On the other, you shouldn’t display data just to fill space, especially if the data overlaps and violates mutual exclusivity.

I used to tell developers to display “as few metrics as possible,” but I’ve since changed my tune. Less isn’t always more when it comes to presenting information. Finding optimal sufficiency in what to present is a goal rather than a task to be completed. Often the first iteration of your work will fail to sufficiently present all of your ideas. This is to be expected; achieving sufficiency happens through the continuous improvement of your work. Rarely will you get it right the first time, but if you are open to change and view development as an evolution, then you will converge upon a highly sufficient display over time.

In all cases, you should be wary of dumping every piece of data, metric, and chart on your display. Often you display everything because you want your bosses, managers, and clients to be happy. In organizations where there is disagreement about what to display, putting everything on a dashboard feels like a positive compromise. But you shouldn’t do this. As developers, it’s your job to help tell the correct story in the most effective way possible. If your work attempts to be all things to all people, it gives more weight to the personalities attempting to shape the story in their own way than to the actual underlying story (which should be told by the data). You should push back against the interests that would have your work become a cluttered patchwork of the opinions and ideas that ineffectively present what’s really going on.

The Last Word

In the previous several chapters, you learned about layout, formulas, and how to present data to your users. In this chapter, we looked at the different ways dashboards and decision support tools help you answer questions. In particular, I focused on the differences among descriptive, predictive, and prescriptive analytics. I then covered principles to help you present metrics well: the data you present should be mutually exclusive of the other data presented (that is, you should eliminate redundancies); everyone should share a common interpretation of the information presented; and you should strive to show just the amount of information necessary to tell the correct story.

___________________________

1See Parnell, G. S., Chapter 19, Value-Focused Thinking Using Multiple Objective Decision Analysis, Methods for Conducting Military Operational Analysis: Best Practices in Use Throughout the Department of Defense, Military Operations Research Society, Editors Andrew Loerch and Larry Rainey, 2007.
