CHAPTER 20

Impact and ROI: Results Executives Love

Jack J. Phillips and Patricia Pulliam Phillips

Many TD professionals rarely think about the results their leaders want from the talent development department—but they should. Leaders depend on financial measures to define their organizations’ success. TD professionals should develop the business acumen to understand what their leaders need and how to measure it.

IN THIS CHAPTER:

  Define ROI

  Describe the importance of impact and ROI

  Ensure programs deliver positive impact and ROI

What Is ROI?

Organization leaders rely on financial measures, which describe how an organization is faring with a particular investment (Phillips and Phillips 2019). Each metric has its own use, and not all are suitable for evaluating training and development programs. Three financial measures are useful for any type of investment, allowing decision makers to compare results across a wide spectrum of programs and projects, including training and talent development. The measures are:

•  Benefit-cost ratio (BCR)

•  Return on investment (ROI)

•  Payback period (PP)

Benefit-Cost Ratio (BCR)

The BCR is the output of cost-benefit analysis, an economic theory grounded in welfare economics and public finance. Economists in the United States adopted it in the early 1900s to justify projects initiated under the River and Harbor Act of 1902 and the Flood Control Act of 1936 (Prest and Turvey 1965). Today, the BCR is used to describe the value of many types of projects. The BCR formula is:

BCR = Program Benefits ÷ Program Costs

Return on Investment (ROI)

The concept of ROI has been used in business for centuries to measure the success of investment opportunities (Sibbet 1997). While its initial use was in evaluating capital investments, it has become the leading indicator describing the value of other types of programs and projects. This growth in use, particularly in talent development and human resources, stems from the 1973 work of Jack J. Phillips, who began using it to demonstrate the value of a cooperative education program. His use of ROI grew and was first formally recorded in Handbook of Training Evaluation and Measurement Methods, the first book on training evaluation published in the US (Phillips 1983). In the book, Phillips introduced an evaluation framework. More important, he provided a process and standards that operationalized the framework, something that had not been done with earlier training evaluation concepts. Over the years, his application of ROI has been adopted as standard practice in talent development and HR evaluation, as well as in marketing, project management, supply chain management, chaplaincy, and other fields. The ROI formula is:

ROI (%) = (Net Program Benefits ÷ Program Costs) × 100

Net program benefits are the program benefits minus the program costs.

SIMILAR, YET DIFFERENT

ROI and BCR provide similar measures of the financial benefit of investing in programs. BCR is typically used in public sector organizations, whereas ROI is used in business and industry. However, they’re both applicable in all settings. BCR compares gross benefits to costs, while ROI presents the net benefits compared to costs reported as a percentage. A BCR of 2:1 means for every $1 invested, there is a gross benefit of $2. This translates into an ROI of 100 percent, which means for every $1 invested, $1 is returned after the costs are recovered (a net benefit of $1). Periodically, someone will calculate a BCR of 3:1, for example, and then calculate the ROI as 3 × 100 = 300%. This is incorrect. ROI is the net benefits divided by the costs. Thus, a 3:1 BCR is actually equal to a 200% ROI.
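To make the relationship in this sidebar concrete, here is a minimal Python sketch (the function names are ours, not the chapter's) that computes both measures from the same benefit and cost figures and reproduces the 2:1 and 3:1 examples above:

```python
def bcr(gross_benefits: float, costs: float) -> float:
    """Benefit-cost ratio: gross benefits divided by fully loaded costs."""
    return gross_benefits / costs


def roi_percent(gross_benefits: float, costs: float) -> float:
    """ROI: net benefits (gross benefits minus costs) divided by costs, as a percentage."""
    return (gross_benefits - costs) / costs * 100


# A BCR of 2:1 ($2 of gross benefit per $1 invested) equals a 100 percent ROI.
print(bcr(200_000, 100_000), roi_percent(200_000, 100_000))   # 2.0 100.0

# A BCR of 3:1 equals a 200 percent ROI, not 300 percent.
print(bcr(300_000, 100_000), roi_percent(300_000, 100_000))   # 3.0 200.0
```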

Payback Period (PP)

The third measure, payback period (PP), determines the point in time when program owners can expect to recover their investment. Programs with a shorter PP are usually the more desirable ones. This measure does not consider the time value of money, nor does it consider future benefits; it simply indicates the break-even point, or a BCR of 1:1, which translates to an ROI of 0 percent. PP is used occasionally when evaluating training and talent development programs, particularly when forecasting the payoff prior to investing in a program. The formula for PP is:

Payback Period = Program Costs ÷ Annual Program Benefits

The result is expressed in years; multiply by 12 to express it in months.

WHAT IS A GOOD ROI?

An ROI is only as good as that to which it is compared. Use the following guidelines to help establish your target ROI:

•  Set the ROI at the same level as other investments (for example, 18 percent).

•  Set the ROI slightly higher than the level of other investments (for example, 25 percent).

•  Set the ROI at break-even, 0 percent.

•  Set the ROI based on client expectations.

Why Impact and ROI Are Important

There are four basic reasons why impact and ROI data are important to an organization:

•  ROI requires impact data.

•  Impact and ROI are fundamental to resource management.

•  ROI data answer a logical question: Was it worth it?

•  Executives love impact and ROI results.

ROI Requires Impact Data

Whether you’re calculating a program’s BCR, ROI, or PP, the numerator of the formula requires the monetary value of the impact a program has on key business measures. Measures may be objectively based, such as output, quality, cost, and time, or they may be more subjective, such as customer satisfaction, image, and work climate. Attributing improvement in business measures to a program requires accounting for the other factors that may have contributed to the improvement. Isolating the effects of the program is a requirement when describing the program’s impact, and it is a key step in the ROI Methodology. Omitting this step results in a baseless claim of business results, not to mention an overstatement of the program’s financial value. Once you have credible proof that the improvement is due to the program, you can annualize the improvement and convert it to money. The annual monetary benefit becomes the numerator of the ROI formula.

For example, assume an organization suffers from too many employee complaints that meet a specific severity level. Experts in the organization indicate that each complaint of this type costs $6,500. Six months after a leadership program, the number of complaints had decreased by an average of 10 per month. Analysis to isolate the effects of the program found that it could account for seven fewer complaints per month (the other three were attributed to something else)—this is the impact of the program. The annual change in performance was therefore 84 fewer complaints, and the annual monetary benefit was $546,000. Assuming the fully loaded cost of the program was $425,000, the ROI was 28 percent.

Let’s look closer at the steps to ROI using this example; the sketch following the list repeats the arithmetic in code. (Note that step 3 accounts only for the improvement due to the program, an average of seven fewer complaints per month.) Recall that the cost of the program was $425,000.

1. Unit of measure: 1 complaint

2. Value of a complaint: $6,500

3. Change in performance due to the leadership program: 7 per month

4. Annual change in performance: 7 × 12 = 84

5. Annual monetary benefit: 84 × $6,500 = $546,000
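As a check on the arithmetic, here is a short Python sketch (the variable names are ours) that reproduces the calculation and adds the resulting BCR, ROI, and payback period:

```python
# Worked example from the chapter: fewer severe employee complaints after a leadership program.
value_per_complaint = 6_500              # expert-provided cost of one complaint
monthly_reduction_due_to_program = 7     # after isolating the effects of the program
program_costs = 425_000                  # fully loaded cost of the program

annual_change = monthly_reduction_due_to_program * 12     # 84 fewer complaints per year
annual_benefit = annual_change * value_per_complaint      # $546,000

bcr = annual_benefit / program_costs                                    # about 1.28
roi_percent = (annual_benefit - program_costs) / program_costs * 100    # about 28 percent
payback_months = program_costs / (annual_benefit / 12)                  # about 9.3 months

print(f"Annual benefit: ${annual_benefit:,}")
print(f"BCR: {bcr:.2f}  ROI: {roi_percent:.0f}%  Payback: {payback_months:.1f} months")
```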

Impact and ROI Are Fundamental to Resource Management

When organization leaders fail to use financial resources optimally, it usually means one of two things:

•  They are withholding opportunity.

•  They are overextending their resources and using more than they have, which is not sustainable.

In either case, they are inefficient in their use of resources. This premise is based on an economic theory known as Pareto Optimality or Pareto Efficiency (Nas 2016). When optimal use of resources is occurring, leaders must take funding away from one program before they can increase funding for another. All too often opinion, intuition, and gut feel influence these funding decisions. Impact data and ROI are fundamental to resource management decisions because they reduce subjectivity and increase objectivity, allowing for better decisions while minimizing the risk of making the wrong one.

ROI Data Answers a Logical Question: Was It Worth It?

Almost any purchase requires weighing the benefits against the costs to answer the question, “Was it worth it?” The answer influences decisions about the value of the purchase and whether to purchase again or recommend the purchase to others. While impact data alone can answer this question to some extent, ROI answers it more clearly. ROI requires converting impact measures to money, normalizing them to the same unit of measure as the program costs. Doing so allows decision makers to compare benefits with costs mathematically. An even more compelling reason for using ROI is that it positions talent development as an investment rather than a cost that can easily be cut. Training and talent development investments then rise to the same level of importance as marketing, supply chain, operations, and IT.

Executives Love Impact and ROI Results

Executives love to see the direct impact and, ultimately, the ROI of major investments. This includes expensive training and talent development programs that align with strategy and operational goals and involve a large number of people. In 2009, ROI Institute partnered with ATD to determine what organizations’ CEOs thought about the learning investment. Results indicated a gap between the types of measures executives were receiving and the kinds of measures they believed would help them better understand talent development’s value. Of least importance were data describing input, efficiency, and participant reaction to programs, yet those were the data executives received most often. Impact (improvement directly attributable to training programs) and ROI were ranked the first and second most important data sets by CEOs, yet only 8 percent reported receiving impact data and only 4 percent reported receiving ROI (Phillips and Phillips 2009).

The call to talent development leaders to demonstrate real business value from these investments has grown even louder over the years, and talent development leaders are answering. In 2015, Chief Learning Officer’s Business Intelligence Board Measurement and Metrics Study reported that 71.2 percent of 335 chief learning officers were either using or planning to use ROI as a measure of learning performance. In 2017, Training Magazine’s Top 10 Hall of Fame report acknowledged that “ultimately, the success of any program is based on whether it improves business results.” A 2019 survey of Training Magazine’s Top 100 indicated that at least 92 percent of those responding used ROI as a measure of training’s value to the organization (Freifeld 2021).

Progress with ROI is also evident in ROI Institute’s 2019 benchmarking study (McLeod 2019). Respondents reported that they evaluate 37 percent of their programs to the impact level, compared with ROI Institute’s suggested minimum of 10 percent, and 18 percent of their programs to the ROI level, compared with the suggested minimum of 5 percent. On the other hand, use of the lower levels of evaluation (reaction and learning) was lower than the recommended minimums (Table 20-1).

Table 20-1. Percentage of Use of Levels of Evaluation

Level          Recommended Percentage*   Current Percentage**
Input          100%                      100%
Reaction       100%                      80%
Learning       80–90%                    70%
Application    30%                       49%
Impact         10%                       37%
ROI            5%                        18%

*ROI Institute’s minimum target percentage of programs evaluated at each level per year for the typical large organization.

**Current percentage of programs evaluated at each level per year by respondents to ROI Institute’s 2019 benchmarking study.

MULTIPLE STAKEHOLDER VALUE

Shared value is an essential focus for many organizations, so it’s important to demonstrate impact and ROI for multiple stakeholders. For example, a major financial services company implemented a leadership development program intended to drive value for the business. A component of the program, however, required participants to apply their newly acquired leadership skills to a project for a nonprofit organization and drive value for that organization as well. In the end, the program delivered a positive ROI for the financial services company as well as the nonprofit, not to mention major intangible benefits for both.

Ensure Your Programs Deliver Positive Impact and ROI

W. Edwards Deming, the father of total quality management (TQM), has been quoted as saying, “Every system is perfectly designed to get the results it gets.” Design begins with a problem or opportunity and ends with a solution that works, along with a built-in way to know what actions to take if it does not. Figure 20-1 presents a process model that will help you design your programs to deliver positive impact and ROI. The methodology has four phases and 12 steps; it’s flexible and appropriate for any type of program in any setting.

Plan the Evaluation

Planning an evaluation is a critical first phase in implementing and evaluating training programs, and it features three steps.

The phase begins with Start With Why: Align Programs With the Business, which addresses the business needs of the organization. Business needs include identifying the operational measures that need to improve and the value of improving them. This answers the question: Is this opportunity worth pursuing?

The next step determines what is currently happening, or not happening, that, if changed, would address the business needs. Make It Feasible: Select the Right Solution is where a curious mindset is valuable. The research that leads to the most feasible solution includes both quantitative and qualitative methods; sometimes it is simply a matter of having a conversation with key stakeholders and asking a few pointed questions. You can download ROI Institute’s alignment conversation toolkit on the handbook website at ATDHandbook3.org.

In the Expect Success: Plan for Results step you develop specific, measurable objectives, including application, impact, and an ROI objective. Using the objectives as the architectural blueprint for program design will increase the chances of delivering positive results. In addition, using the objectives as the basis for evaluation will make data more compelling and evaluation much easier.

The output of the planning phase includes two documents: the data collection plan and the ROI analysis plan. Developing these plans up front helps designers build data collection into the program design. It also offers evaluators an opportunity to get buy-in on the approach prior to execution, eliminating pushback on the process during the reporting stage. ROI Institute’s templates for these planning documents, along with others, are available for download on the handbook website at ATDHandbook3.org.

Figure 20-1. ROI Methodology Process Model

Collect Data

The data collection phase involves two steps that focus on designing for and measuring results at each level. To determine impact and ROI, positive results must occur at every level, and those results emerge over different timeframes.

The first step in this phase—Make It Matter: Design for Input, Reaction, and Learning—focuses on Levels 1 and 2. Collecting reaction and learning data is essential because the data can indicate the extent to which the program content matters to participants. Common data collection techniques at Reaction and Learning include end-of-course questionnaires, written tests and exercises, demonstrations, and simulations.

Make It Stick: Design for Application and Impact focuses on Levels 3 and 4. Follow-up data are collected after the program, once application of the newly acquired knowledge and skills has become routine and enough time has passed to observe an impact on key measures. Remember that if you identified the measures that need to improve during the initial analysis, you will measure the change in performance in those same measures during the evaluation. It is therefore reasonable to use the same data collection methods during the evaluation that you used during the needs analysis.

Analyze Data

Data collection is essential; the depth of analysis is even more so. When the data becomes available, analysis begins using the approach chosen during the planning stage. The data analysis phase involves five steps that focus on making the entire process credible.

Make It Credible: Isolate the Effects of the Program is the first step in this phase and occurs after collecting data at Level 4. Too often overlooked in evaluating the success of programs, this step answers the critical question, “How do you know it was your program that improved the measures?” This step isn’t as difficult as some might suggest, and your results will have little credibility without it.

The move from impact to ROI begins with converting impact measures to monetary value. Make It Credible: Convert Data to Monetary Value is often the step that instills the greatest fear in training and talent development professionals. But once they understand the data conversion techniques, along with the five steps for applying them, the fear usually subsides.

The next step is to Make It Credible: Identify Intangible Benefits, which are the impact measures not converted to monetary value. Intangible benefits can also represent any unplanned program benefits that are not identified during the planning phase.

Fully loaded costs are also developed during the data analysis phase. Make It Credible: Capture Cost of Program includes calculating costs for needs assessment (when conducted), design, delivery, and evaluation. The intent is to leave no cost out of the analysis to ensure a credible and accurate accounting of the investment.
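As a simple illustration, a fully loaded cost tabulation can be as plain as summing every category of spending on the program; the categories below follow the chapter, while the dollar amounts are hypothetical:

```python
# Hypothetical fully loaded cost tabulation; the categories follow the chapter,
# and the dollar amounts are illustrative only.
program_costs = {
    "needs_assessment": 15_000,          # when conducted
    "design_and_development": 90_000,
    "delivery": 250_000,
    "evaluation": 20_000,
}

fully_loaded_cost = sum(program_costs.values())
print(f"Fully loaded cost: ${fully_loaded_cost:,}")   # Fully loaded cost: $375,000
```

The intent, as the chapter notes, is that nothing is left out: every category contributes to the denominator used in the BCR, ROI, and PP calculations.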

Make It Credible: Calculate Return on Investment is the last step of the analysis phase. Using addition, subtraction, multiplication, and division, the BCR, ROI, and PP are calculated.

Optimize Results

Optimize results is the most important phase in the evaluation process. Two steps are involved.

Tell the Story: Communicate Results to Key Stakeholders is the first step. Evaluation without communication and communication without action are mere activities with no value. If you don’t tell anyone how the program is progressing, how can you improve the talent development process, secure additional funding, justify programs, and market your initiatives to future participants? There are a variety of ways to report data. Micro reports include the complete ROI impact study, while macro reports include scorecards, dashboards, and other reporting tools.

Regardless of the type of report, communication must lead to action, and that action requires stepping back and analyzing what is learned from the data. Optimize Results: Use Black Box Thinking to Increase Funding is the second step of this phase and the final step in the ROI Methodology. It’s inspired by the aviation industry’s safety system, in which each aircraft carries black boxes that record technical flight data and pilot interactions. When an accident or near miss occurs, the black boxes are analyzed to understand what caused the incident and how to avoid it in the future. Black box thinking is all about learning from mistakes, which is essential if we want to learn why a program failed and how to improve it to ensure a positive ROI.

The job of talent development professionals is not to “train” people. Rather, the purpose is to drive improvement in output, quality, cost, time, customer satisfaction, job satisfaction, work habits, and innovation. This occurs through the development of others, and doing it well means assessing, measuring, evaluating, and taking action based on your findings. For more detail on each step in the ROI Methodology, download ROI Institute’s application guide on the handbook website at ATDHandbook3.org.

Final Thoughts

The message for this chapter is simple. If you need more support, commitment, and funding for major training and talent development programs, report results executives want—impact and ROI. Executives will begin to view talent development as an investment that yields a return, rather than a cost that can easily be curtailed, postponed, paused, frozen, reduced, or, in the worst case, eliminated. While executives may not explicitly ask for impact and ROI, in the end, that’s what they’ll want most.

About the Authors

Patti P. Phillips, PhD, is CEO of the ROI Institute. Since 1997, Patti has been a driving force in the global adoption of the ROI Methodology and the use of measurement and evaluation. Her work as a researcher, consultant, and coach supports practitioners as they develop expertise in evaluation. Patti serves as chair of the i4cp People Analytics Board; principal research fellow for The Conference Board; chair of the board for the Center for Talent Reporting; board of trustees member for the UN Institute for Training and Research (UNITAR); and board member of the International Federation of Training and Development Organizations. Patti also serves on the faculty of the UN System Staff College in Turin, Italy. Her work has been featured on CNBC and EuroNews, as well as in more than a dozen business journals. Patti is author, co-author, or editor of more than 75 books and dozens of articles focused on measurement, evaluation, accountability, and ROI.

Jack J. Phillips, PhD, is the chairman of the ROI Institute. He is a world-renowned expert on accountability, measurement, and evaluation. He provides consulting services for Fortune 500 companies and major global organizations. The author or editor of more than 100 books, he conducts workshops and presents at conferences around the world. Jack’s expertise in measurement and evaluation is based on more than 27 years of corporate experience in the aerospace, textile, metals, construction materials, and banking industries. Jack regularly consults with clients in manufacturing, service, and government organizations in 70 countries.

References

Freifeld, L. 2021. “Training Magazine Ranks 2021 Training Top 100 Organizations.” Training Magazine, February 8. trainingmag.com/training-magazine-ranks-2021-training-top-100-organizations.

McLeod, K. 2019. “2019 ROI Institute Benchmarking Study.” roiinstitute.net/2019-roi-institute-benchmarking-report.

Nas, T.F. 2016. Cost-Benefit Analysis: Theory and Application, 2nd ed. Lanham, MD: Lexington Books.

Phillips, J.J. 1983. Handbook of Training Evaluation and Measurement Methods. Houston, TX: Gulf Publishing.

Phillips, J.J., and P.P. Phillips. 2009. Measuring Success: What CEOs Really Think About Learning Investments. Alexandria, VA: ASTD Press.

Phillips, P.P., and J.J. Phillips. 2019. ROI Basics, 2nd ed. Alexandria, VA: ATD Press.

Prest, A.R., and R. Turvey. 1965. “Cost-Benefit Analysis: A Survey.” The Economic Journal 75(300):683–735.

Sibbet, D. 1997. “75 Years of Management Ideas and Practice 1922–1997.” Harvard Business Review, September 28.

Recommended Resources

Doerr, J. 2018. Measure What Matters: How Google, Bono, and the Gates Foundation Rock the World with OKRs. New York: Portfolio.

Grant, A. 2021. Think Again: The Power of Knowing What You Don’t Know. New York: Viking.

Syed, M. 2015. Black Box Thinking: Why Most People Never Learn from Their Mistakes … But Some Do. New York: Portfolio.

Sinek, S. 2009. Start With Why: How Great Leaders Inspire Everyone to Take Action. New York: Portfolio.

Sunstein, C.R. 2018. The Cost-Benefit Revolution. Cambridge, MA: The MIT Press.
