Chapter 9

The Success Case Method

Using Evaluation to Improve
Training Value and Impact

Robert O. Brinkerhoff and Timothy P. Mooney

In This Chapter

The Success Case Method (SCM) measures and evaluates training accurately, simply, and quickly, in a way that is both highly credible and compelling. Upon reading this chapter, you will

  • acquire a strategic, future-directed perspective on training evaluation
  • learn the practical steps in a Success Case Method study
  • understand the factors that should be addressed in helping an organization improve the business impact from training.
 

The Success Case Method

Results derived through applying the SCM are actionable. We can make strategic and constructive use of evaluation findings in ways that actually help training clients become more effective and successful.

There is an additional strategic outcome that the SCM helps achieve. For decades training and development professionals have recognized that manager support for training is absolutely vital to success. When managers support training and learners, it works. When they don’t, it does not. As a result, we have begged and cajoled managers to support our efforts. But despite our pleas, most managers find other things more important to do. With the Success Case Method, we are able to give them a clear and data-based business case for supporting training. We can show them specific actions they can take to reinforce learning and performance, and tie these directly to bottom-line results and economic payoff to them and their organization. Then, rather than trying to make all sorts of mandatory prescriptions for support actions, we can simply show managers the data and let them do what they are paid to do: look at the facts and make a business decision.

The SCM uncovers and pinpoints the factors that make or break training success. Then, it shows how these factors can be more effectively managed so that more learning turns into worthwhile performance in the future. It is aimed directly at helping leaders in an organization discover their organization’s “learning disabilities” then figure out what needs to be done to overcome these problems. Over time, the SCM helps an organization become better and better at turning an ounce of training investment into a pound of effective performance.

Defining Success

Most kinds of training conducted in organizations currently are based on the belief that some employees need certain knowledge or skills to perform their jobs correctly or improve their current job performance, and thus training is provided. Trainees are then supposed to return to their jobs and correctly use the training-acquired skills to perform in their jobs. Eventually, so goes this rationale, the company will benefit from the application of these skills in increased revenues, higher-quality products, more productive employees, increased output, decreased scrap rates, and so forth. Note that the benefit to the organization derives not from what was learned but from what actually gets used—that is, value doesn’t come from exposure to the training or the acquisition of new capability. Instead, value comes from the changes in performance that the training eventually leads to.

There are other reasons that training is conducted, such as to promote advancement and career fulfillment, to avoid legal exposure, to meet regulatory requirements to provide certain training, or simply to offer training because it is perceived as a staff benefit, and this may help recruitment and personnel retention. These sorts of training do not necessarily require applying skills to produce value, and thus they are not the focus of typical SCM applications.

So for most training, impact and value are achieved only when the training actually gets used to improve or sustain job performance. Thus training success is defined as application of training-acquired capabilities in improved performance and job results.

Training Evaluation Realities

Two realities about training programs must be recognized and effectively dealt with, because they dramatically influence the way we should think about and conduct the evaluation of training. We discuss the first reality, predictable results, before moving on to the second.

Reality One: Predictable Results

Training programs typically produce reliable, and unfortunately marginal, results. The results that some trainees achieve may not be at all marginal, but across a large group of trainees, overall results are typically mediocre at best. Some people use their learning in ways that get great results for their organizations. Others do not use their learning at all. Most of the rest try some parts of it, notice little if any change in results, and eventually go back to the way they were doing things before. The good news is that the few who actually use their training in effective on-the-job applications often achieve highly valuable results. We have seen, for example, a case in which just one manager used her training to help land a $500 million sale, a result that would not have been achieved had she not participated in the training. In another instance, a senior leader used his training to increase operating income for his business division by more than $1.87 million. These are dramatic and exceptionally valuable results; we have documented many more less dramatic outcomes that were nonetheless significant and worthy.

So, the problem is not that training does not work at all; it is just that it does not work frequently enough with enough trainees. In most cases, a typical training program produces only a few quite successful trainees who achieve these great results. Similarly, there is typically a small (but sometimes not so small) percentage of people who, for one reason or another, just were not able to use their training at all, or didn’t even try to. The bulk of trainees are distributed between these extremes.

Making a business case to “grow” impact. A key principle of the SCM is that we can learn a lot from inquiring into the experience of these extreme groups. An SCM study can tell us, for example, how much good a training initiative produces when the learning it imparts is used in on-the-job performance. If the training produces a great deal of good, as when some trainees use their learning in ways that lead to highly valuable business results, then we know the training has great potential for a high return on investment. When we find that the training produces really worthwhile results, but that it worked this well for only a small number of trainees, then we can construct a defensible business case for investing time and resources to extend the good results to more people.

Tyranny of the mean. Typical quantitative evaluation approaches are based on reductionist statistical procedures, such as calculating a mean or “average” effect. But because training typically helps achieve worthwhile results for only a small proportion of trainees, on average training will always be calculated to be mediocre. When there is a range of effects, those at the high end are offset by those at the low end when we calculate a mean score. Assume, for purposes of illustration, that we have two different training programs: program A and program B. In program A, assume an evenly split distribution of impact: one half of the trainees did extremely well with their training, using it in improved performance to get worthy results, while the other half did not use their learning at all. If we added these two halves of the distribution together and divided by the total number of trainees, as we would in determining a mean score, then the training would appear, overall, to have mediocre results. Assume that program B reached virtually all of the trainees, but only at a mediocre level: every trainee used the learning to some degree, but none used it in a way that produced worthwhile results.

When we calculate the mean impact of program B, it will appear to have had exactly the same results as program A. In reality, however, these two programs represent two different strategic scenarios. In the case of program A, it has great potential because it produced excellent results, although for only half of the trainees. It is clearly a powerful intervention, although for some reason only half of the participants were able to get these results. Program B, however, has little to no promise, as it works well with virtually no one. It is probably not worth keeping.
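The program A and program B comparison can be sketched in a few lines of code. This is purely an illustration with invented impact scores (0 for learning never used, 10 for high-value application); the numbers are not drawn from any real SCM study.

```python
# Hypothetical "impact scores" for two programs of 100 trainees each:
# 0 = learning never used, 10 = learning used with high-value results.
# All numbers are invented for illustration.

def mean_impact(scores):
    """Reductionist summary: the average effect across all trainees."""
    return sum(scores) / len(scores)

# Program A: half the trainees succeed dramatically, half not at all.
program_a = [10] * 50 + [0] * 50

# Program B: everyone uses the learning, but only at a mediocre level.
program_b = [5] * 100

print(mean_impact(program_a))  # 5.0
print(mean_impact(program_b))  # 5.0 -- identical means, different stories

# The SCM instead looks at the extremes of the distribution:
successes_a = sum(1 for s in program_a if s >= 8)  # 50 clear successes
successes_b = sum(1 for s in program_b if s >= 8)  # none at all
```

The identical means hide the fact that program A is a powerful intervention reaching only half its audience, while program B works well for no one.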

This “tyranny of the mean” effect is very powerful and at the same time very dangerous. It probably explains why, on average, most training programs have over the years been assessed as having only mediocre effects. On average, it is true that most training does not work well. But some programs work well with some of the people, and this represents their great potential for being leveraged for even greater results.

The SCM avoids this tyranny of the mean effect by intentionally separating out the trainees that used their training, then aiming to discover what value those applications of the training produced. So, the SCM does not ask: “On average, how well is the training working?” (we already know the answer to that question: not very well). Instead, the SCM asks, “When the training works (is used), what good does it do?”

Reality Two: Training Alone Never Works

Training and performance improvement practitioners wanting to evaluate their success have struggled for decades with the seemingly intractable issue that “other factors” are always at work with training. In a sales training program, for example, we might see an increase in sales, or we might not. How do we know it was the training that led to increased sales or the failure to get increases? Maybe it was some other factor, such as a change in the market, a new incentive to sell more, or something else. Training alone does not produce results. There are always a number of nontraining factors that enable or impede successful results from the training. Supervisory support, incentives, opportunities to try out learning, and the timing of the training, to name a few, are examples of the sorts of nontraining or performance system factors that determine whether and how well training works to improve performance.

A corollary of this reality is the fact that, when training works or does not work, it is most often the case that the nontraining factors account for more of the success than features and elements of the training intervention or program itself.

This second reality of training evaluation strongly suggests that most (potentially 80 percent or more) of the failures of training to achieve results are not caused by flawed training interventions; they are caused by contextual and performance system factors that were not aligned with, and were otherwise at odds with, the intended performance outcomes of the training. Thus, when we evaluate “training” impact, we are most often in reality evaluating an organization’s performance management system.

This fact is nothing new. We have known for years that the major reason training fails to achieve impact is that training readiness and performance support factors were never adequately developed or implemented. Most evaluation models and methods have tried to cope with this reality by attempting to isolate the training-related causes.

In common practice, the way that this reality is often dealt with is to avoid it and evaluate only the training itself, asking whether it appeared to be useful in the eyes of participants, and sometimes going so far as to measure whether people actually learned anything. But going beyond this to measure application of learning has typically not been very productive. First, surveys of learning application produce discouraging results, showing quite predictably that most trainees have not applied or sustained use of their learning. Second, when we discover that most trainees are not using their learning in sustained performance, there is little to do with this information, because trying to improve the rate of application by improving the training program itself will not yield concomitant improvements in applying learning on the job.

The Success Case Method, on the other hand, makes no attempt to “parcel out” or otherwise isolate the training-related causes, or to make any training-specific causal claims. Instead, we leverage the fact that training never works alone or in a vacuum. In an SCM study we seek to identify all of the major factors that helped or hindered the achievement of worthwhile performance results from training, so that we can build on this knowledge and leverage it into recommendations for increasing performance in later iterations of training efforts. In an evaluation of a training program for financial advisors, for instance, we discovered that almost all new advisors who were successful in applying their learning and getting good financial results had also made use of additional resources that helped them practice new emotional competence skills on the job. We also discovered that nearly all of the successful advisors sought and received feedback from a manager or peer. We concluded that the training was very unlikely to get any positive results without such additional interactions. This led in turn to recommendations that future trainees and their managers be sure to provide time and opportunity for such assistance, as without it, the training was likely to be ineffective and wasted.

Leveraging the Two Realities

The Success Case Method begins with a survey to determine the general distribution of those training graduates who are using their learning to get worthwhile results and those who are most likely not having such success. In the second stage of an SCM study, we conduct in-depth interviews with a few of these successes and nonsuccesses—just enough of them to be sure we have valid and trustworthy data. The purpose of the interviews is two-fold. First, we seek to understand, analyze, and document the actual scope and value of the good results that the apparently successful people have claimed from the survey phase. This allows us to verify the actual rate of success, and also gauge its value. In an SCM study of sales representatives, for example, we were able to determine that the actual rate of success was about 17 percent; that is, 17 percent of the trainees who completed the training used their new learning in sustained and improved performance. Further, we could determine that the results they achieved were of a known value, in this example the typical results were worth about $25,000 per quarter in increased profits from sales of products with more favorable margins.

This first part of the SCM, identifying the quantitative distribution of the extremes of success, is typically accomplished with a brief survey. That is, we usually conduct a simple survey of all the participants in a training program and ask them, through a few carefully constructed items, the extent to which they have used their learning to get any worthwhile results. Although a survey is often used, it is not always necessary. It may be possible to identify potential success cases by reviewing usage records and reports, accessing performance data, or simply by asking people, tapping into the “information grapevine” of the organization. A survey is most often used, however, because it provides the additional advantage of being able to extrapolate results to get quantitative estimates of the proportions of people who report using, or not using, their training. Also, when careful sampling methods are used, probability estimates of the nature and scope of success can also be determined.

Second, in the interview phase we probe deeply to identify and understand the training-related factors (using certain parts of the training or particular tools taught in the training, for instance) and performance system factors (supervisory assistance, incentives, feedback, and so forth) that differentiated the successes from the nonsuccesses. We know that when the training works, it is likely that it has been supported by, and has interacted with, certain replicable contextual factors. Knowing what these factors are enables us to make recommendations for helping subsequent trainees and later versions of the training initiative achieve better results.

Putting information from both of these SCM phases together creates highly powerful and useful information. First, we know what rate of success the training had, and the value of that rate in terms of the nature of the results that successful trainees were able to achieve using their learning. This lets us extrapolate the unrealized value of the training initiative—the value that was “left on the table” by the program due to its rate of nonsuccess instances.
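The extrapolation of realized and unrealized value can be sketched with simple arithmetic. In the sketch below, the success rate and per-success value echo the sales-representative example given earlier, while the cohort size and the attainable success rate are invented assumptions for illustration only.

```python
# Hedged sketch of the "value left on the table" calculation.
# success_rate and value_per_success echo the sales example in the text;
# trainees and attainable_rate are hypothetical assumptions.

trainees = 200                # assumed cohort size
success_rate = 0.17           # rate verified via survey + interviews
value_per_success = 25_000    # dollars per quarter, per successful trainee
attainable_rate = 0.60        # assumed ceiling if impediments were removed

realized = trainees * success_rate * value_per_success
potential = trainees * attainable_rate * value_per_success
unrealized = potential - realized

print(f"Realized value:   ${realized:,.0f} per quarter")
print(f"Unrealized value: ${unrealized:,.0f} per quarter left on the table")
```

Even under these modest assumptions, the unrealized value dwarfs the realized value, which is exactly the business case the SCM is designed to surface.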

Figure 9-1 represents a typical distribution of training results, showing the relatively small proportion of trainees who used their learning and achieved positive results, and the larger percentage of those who did not achieve worthwhile results. Added to this figure is a notation of the proportion of the distribution that represents a positive return on the training investment (ROI), and the proportion that had a negative return. The area above the darker, solid-line arrow shows that the trainees in this portion of the distribution achieved a positive ROI; we assume for purposes of this illustration that the value of the positive results in this portion of the distribution is indeed greater than the cost of providing and supporting the training for the people depicted there. That is, whatever was spent to train the people represented in the solid-line area of the distribution was exceeded by the value of the results they achieved. Everything to the left of this dividing line, however, represents a loss, or negative ROI. The people in the area above the dotted line were trained, but did not use their learning in ways that led to positive results.

Given this, the larger the area of the distribution above the solid-line arrow, the greater the ROI. If, for example, we doubled the number of people who used their learning and got positive results, we would dramatically increase the overall ROI of the training, because the cost of training each individual in the distribution is roughly the same. Looked at another way, the distribution to the left of the solid-line arrow represents the unrealized value of the training. If we could take the actions needed to “move” more people from the left portions of this distribution to the far right portion, we would be increasing ROI and impact. And this is exactly the principal aim of the Success Case Method: to “grow” ROI and leverage increasingly more results from training.
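The ROI argument rests on the fact that the per-trainee cost is fixed while value scales with the number of successes. A minimal sketch, using hypothetical cost and value figures:

```python
# Hypothetical figures: training 100 people at a fixed cost per head.
trainees = 100
cost_per_trainee = 2_000
value_per_success = 20_000   # assumed value produced by each success

def roi(successes):
    """Net value over cost; total cost is fixed regardless of successes."""
    total_cost = trainees * cost_per_trainee
    return (successes * value_per_success - total_cost) / total_cost

print(roi(15))  # 0.5 -- 15 of 100 trainees apply their learning
print(roi(30))  # 2.0 -- doubling successes quadruples the net return here
```

Because the whole cohort is paid for either way, every additional trainee “moved” into the successful region adds value at no added cost.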

We know from the interview phase of the study both what the value of success is, and also the factors that enable success. This lets us make a business case for growing the far right side of the distribution in figure 9-1. We can ask, for instance, what the value would be if we could grow the number of successful application instances by 10 percent. Then, we can ask what it might take to attempt this, for example, getting more managers to support the training, or getting more trainees to use the same job aid as their successful counterparts did.

We should also point out that it is not always necessary to make conclusions about impact in terms of dollar values. We used such values in the preceding example only to make the case simple and clear. In SCM practice, we encounter many instances where programs that do not entail such simply translated results can likewise benefit greatly from SCM methods.

This, in a nutshell, is how the SCM works. First, a survey (or sometimes another information harvesting method) is used to gauge the overall distribution of reported success and nonsuccess. This is followed by an in-depth interview phase where we sample cases from each extreme of the distribution and dig deep to understand, analyze, and document the specific nature of exactly how the training was used and exactly what verifiable results it led to. Our aim is to discover, in clear and inarguable terms, exactly how the training was used (if it was) and exactly what value (if any) it led to. The standard of evidence is the same as we would use in a court of law: it must be provable beyond a reasonable doubt, documentable, verifiable, and compelling.

From this, we are able to answer the following questions:

  • When training works, what value does it help achieve?
  • How frequently and at what rate does it work this well?
  • When it works, why? What factors help or hinder results?
  • What is the value lost when training does not work?
  • What is the case for making it work better?
  • What would it take to make it work better; would such efforts be worthwhile?

Knowledge Check: Questions to Assess Understanding of Content

Answer the questions to assess your knowledge of the content. Check your answers in the appendix.

1. According to the Success Case Method, why is it often misleading to try to isolate the impact of the training program in an evaluation study?

a. often it is too difficult to do

b. other contextual factors are always operating and affect the business impact

c. employee motivation is the biggest determinant of business impact, so it will make the training look ineffective

d. only some training is expected to produce measurable business results

2. The Success Case Method looks at both the value obtained from the training and the unrealized value. Why are both concepts important? What fundamental questions do each of these two concepts address?

3. What is the best way to significantly increase the ROI of training?

a. turn classroom training into e-learning programs, which will significantly reduce the costs per training hour

b. shorten the length of time of any training program to make it more efficient

c. improve the amount of information/skill people acquire in the training; the more they learn, the better they will be able to produce results

d. get more people to use the training in ways that make a difference to the business

About the Authors

Robert O. Brinkerhoff, PhD, is an internationally recognized expert in training effectiveness and evaluation and the principal architect of The Advantage Way. His next-generation ideas have been heralded by thought leaders ranging from Donald L. Kirkpatrick to Dana Gaines Robinson and adopted by dozens of top-tier organizations, including Bank of America, Children’s Healthcare of Atlanta, Motorola, and Toyota.

A keynote speaker and presenter at hundreds of industry conferences and institutes worldwide, Brinkerhoff is a recent ISPI Award of Excellence recipient.

Brinkerhoff is the author of Telling Training’s Story (2006), The Success Case Method (2003), and High Impact Learning (2001). He is coauthor with Timothy P. Mooney of Courageous Training: Bold Actions for Business Results, which was released in June 2008.

A professor emeritus at Western Michigan University where he was responsible for graduate programs in human resource management, Brinkerhoff originally earned his doctorate in program evaluation at the University of Virginia. Brinkerhoff can be reached at [email protected].

Timothy P. Mooney is a partner with the Advantage Performance Group, a wholly owned subsidiary of BTS Group AB. He works directly with clients on consulting projects and is the practice leader for The Advantage Way. Prior to joining Advantage in 2000, he served in a senior management capacity for DDI, working closely with leading global organizations. In addition, he has more than 25 years of corporate sales management and consulting experience.

Mooney holds a BA in psychology from Butler University in Indianapolis and an MA in industrial/organizational psychology from the University of Akron. He is a frequent speaker and writer on the topic of achieving measurable business impact from training. He recently coauthored a book with Robert O. Brinkerhoff, Courageous Training, which was released in June 2008. Other publications include “Level 3 Evaluation” in the ASTD Handbook for Workplace Learning Professionals (2008); “Creating Credibility with Senior Management” and “Taking a Strategic Approach to Evaluation” in The Trainer’s Portable Mentor (2008); and “Success Case Methodology in Measurement and Evaluation” in ISPI Handbook: Improving Performance in the Workplace, vol. 3. (2009). Mooney can be reached at [email protected].

Additional Reading

Brinkerhoff, R. O. (2006). Telling Training’s Story. San Francisco: Berrett-Koehler.

Brinkerhoff, R. O. (2003). The Success Case Method. San Francisco: Berrett-Koehler.

Mooney, T. and R. O. Brinkerhoff. (2008). Courageous Training. San Francisco: Berrett-Koehler.
