CHAPTER 48

Determining Talent Development’s Organizational Impact

David Vance

We all agree that talent development initiatives can make a significant contribution to achieving an organization’s goals and meeting its critical needs. There is much less agreement, however, on how to determine the impact of talent development. In other words, how much of a difference did the initiatives make and what is the organization getting for its investment in talent development?

IN THIS CHAPTER:

  Define effective approaches to determine talent development’s organizational impact

  Provide examples for each approach

There is no consensus within the profession on how best to show the impact of talent development on an organization. In fact, this is one of the most contentious issues in our field, and there are strongly held beliefs on all sides; this is also what makes it so interesting.

There are four approaches to determine organizational impact. I’ve listed them here in the sequence I recommend:

1.   Show alignment.

2.   Show results.

3.   Show impact.

4.   Show return on investment (ROI).

All practitioners, even the beginner, should be familiar with all four approaches. This chapter covers the first two (showing alignment and results), while the last two (impact and ROI) are covered in two tools on the handbook’s website at ATDHandbook3.org.

The first approach, show alignment, is an excellent planning tool and introduces the concept of planned impact. It answers questions like “Are the talent initiatives aligned to the organization’s goals?” and “What is the planned impact on each goal?”

The second approach, show results, does not seek to isolate the impact of the talent development initiative and consequently does not involve any calculations. While this is a plus for many, the drawback is that it will not provide a quantitative measure of impact. Instead, it simply attempts to answer the question, “Did the initiative produce results or meet the goal owner’s expectations?”

The third and fourth approaches require more work because they seek to provide definitive answers with regard to the isolated impact and value. However, the case for impact will be much stronger if you employ all four approaches rather than choosing just one.

Show Alignment

This is the natural starting point for any discussion about determining impact and showing value. It is also a key tenet of what it means to run learning like a business. If TD initiatives are not aligned to organizational goals or needs, senior leaders may not perceive any value whatsoever, even if there is measurable value. Or leaders might grudgingly admit to seeing some value in an unaligned initiative, but say it should not have been undertaken or that the funds would have been better spent elsewhere.

So, what do I mean by alignment? Alignment is the proactive process of meeting with senior leaders to understand their most important goals or needs, and then jointly agreeing on TD initiatives to help achieve those goals or meet those needs. The key here is “proactive,” meaning that alignment is completed before the initiative is begun. This process can take place at the enterprise level or business unit level and involve all the goals or needs or focus just on one.

Let’s start by examining the alignment process at the enterprise level.

Proactive Alignment to Enterprise Goals

Ideally, enterprise alignment begins when the chief talent officer (CTO) or chief learning officer (CLO) meets with the CEO several months before the start of the fiscal year. The meeting’s purpose is to:

•  Outline the key goals and needs for the coming year.

•  Understand the priority of those goals.

•  Learn the names of the goal owners.

The discussion should provide the CTO or CLO with a good sense of the challenges facing the organization and might even begin with a reflection on how the organization did in the current year. Because this is a discussion with the CEO, they’ll likely be outlining strategic goals and needs. Table 48-1 shows an example of the output of this discussion.

Table 48-1. CEO Prioritized Goals

| Priority | Goal                                    | Goal Owner              |
|----------|-----------------------------------------|-------------------------|
| 1        | Increase sales by 10%                   | SVP of Sales Kronenburg |
| 2        | Decrease injuries by 5%                 | COO Tipton              |
| 3        | Improve quality by 5 points             | COO Tipton              |
| 4        | Improve employee engagement by 3 points | SVP of HR Goh           |
| 5        | Improve leadership by 4 points          | SVP of Strategy Floyd   |

The CTO or CLO should leave the meeting knowing the goals, their relative importance, and the names and positions of the owners of those goals. For example, in Table 48-1, revenue is the number 1 priority and Kronenburg, SVP of Sales, is the goal owner.

The next step is to talk with each goal owner and learn more about the goal, its challenges, what has been tried before, what has worked, and what hasn’t, as well as how the goal owner is planning to achieve the goal this coming year. The discussion can then turn to whether any talent initiatives might help achieve the goal. If it appears there may be, both leaders can direct their staff to explore options and report back with a recommendation.

In the next meeting, staff make their recommendation for talent initiatives. For example, the CTO may recommend consultative selling skills and product features training to help SVP Kronenburg’s team meet its goal. They would then discuss target audiences, completion dates, and other particulars. If Kronenburg agrees that these initiatives will help achieve the goal, then we can say they are aligned because the initiatives were developed and approved in direct response to a need to increase revenue by 10 percent, which is a key CEO goal.

Ideally, these discussions will go one step further and the CTO and goal owner will agree on the planned or likely impact of the talent initiatives. For example, the two initiatives to help achieve the sales goal, taken together, might contribute about 20 percent to the goal, resulting in a 2 percent increase in sales due just to these two initiatives. (In equation form: 10 percent goal × 20 percent planned contribution from the initiatives = 2 percent increase in sales due just to the initiatives.) If it seems too daunting to discuss planned percent contribution, the alternative is to agree on a high, medium, or low impact. Either way, discussing planned impact is important for two reasons:

•  The higher the planned impact, the more effort and resources will be required from both the goal owner's organization and the talent department.

•  The higher the planned impact, the earlier in the fiscal year the initiatives will have to be completed so they have time to influence the results.

Consequently, budget, staffing, and timing all depend on the planned impact, which is why it is important to reach agreement with the goal owner.
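The planned impact arithmetic described above (goal improvement multiplied by the agreed contribution) can be sketched in a few lines of code. This is purely an illustration of the calculation, using the chapter's sales and safety figures; the function name is mine, not part of any standard tool.

```python
# Illustrative sketch of the planned impact calculation:
# planned impact = goal improvement x agreed contribution from the initiatives.

def planned_impact(goal_improvement: float, contribution: float) -> float:
    """Return the planned impact of talent initiatives on a goal.

    goal_improvement: the goal's target improvement (e.g., 0.10 for a 10% goal).
    contribution: the share of the goal attributed to the initiatives (e.g., 0.20).
    """
    return goal_improvement * contribution

# The sales example: a 10% sales goal with a 20% planned contribution
# yields a planned impact of a 2% increase in sales.
impact = planned_impact(0.10, 0.20)
print(f"Planned impact: {impact:.0%} increase in sales")  # → 2%
```

The same arithmetic gives the safety row of Table 48-2: a 5 percent injury-reduction goal with a 70 percent planned contribution yields a 3.5 percent reduction attributable to the initiatives.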

The CTO will repeat this process with each goal owner before the new fiscal year begins. In each case, a decision is made about whether talent initiatives have a role in achieving the organizational goal. When it seems likely that a talent development initiative will help achieve the goal, then all parties should agree on the particulars, including planned impact. As a result of this alignment process, we can generate a table like Table 48-2 showing the alignment of talent initiatives to the organizational goals, including the planned impacts.

Note that in Table 48-2, the goal owners were comfortable using quantitative impacts (percent contribution) for the first three goals, but only qualitative impact (use of an adjective like high) for the other two. Once the strategic alignment table is complete, it should be shared with the CEO and a high-level governing body for talent to review and approve.

This very powerful table can go a long way to show the potential impact of learning because it shows that:

•  The CTO or CLO knows the top goals of the organization in the CEO’s priority order

•  The goal owners have been consulted and agreement has been reached on the role of talent initiatives in achieving their goals, including the planned impact

•  The goal owners and talent leaders are committed to delivering this planned impact

Even if planned impact is not determined, the table is still a huge step forward for many organizations. Most CEOs will be impressed by the amount of planning that goes into it. Further, they are likely to believe that if the goal owners (many of whom are their direct reports) have agreed to the initiatives and are willing to be jointly responsible for their success, then they will have impact. Thus, the table becomes a very effective instrument for showing the planned impact of talent initiatives.

Table 48-2. Strategic Alignment of Talent Initiatives to Goals

| Priority | Goal | Planned Talent Initiatives | Planned Contribution From Talent Initiatives | Planned Impact From Talent Initiatives | Goal Owner |
|----------|------|----------------------------|----------------------------------------------|----------------------------------------|------------|
| 1 | Increase sales by 10% | Consultative selling skills; product features training | 20% | 2% higher sales | SVP Kronenburg |
| 2 | Decrease injuries by 5% | Safety training for the plant; safety training for the office | 70% | 3.5% reduction in injuries | COO Tipton |
| 3 | Improve quality by 5 points | Six Sigma | 20% | 1 point increase in quality | COO Tipton |
| 4 | Improve employee engagement by 3 points | IDPs for all employees | Medium | Medium | SVP Goh |
| 5 | Improve leadership by 4 points | Leadership training for VPs, department heads, managers, and supervisors | High | High | SVP Floyd |

Proactive Alignment to Business Unit Goals

This process works just as well at the business unit level for those who support a particular unit rather than the enterprise. Simply substitute the business unit head for the CEO, and the business unit talent leader for the enterprise CTO or CLO, and then follow the steps outlined above. Your discussions will focus on how talent can help the unit leader achieve their goals.

Reactive Alignment for Individual Initiatives

The strategic alignment process described thus far has focused on ideal situations in which a good portion of the year’s talent initiatives can be planned in advance (proactively) with senior leaders. Unfortunately, the real world does not always meet this ideal, as the talent development team often receives one-off requests that weren’t addressed in these initial planning meetings. In this case, the goal should be to make the best of the situation, bringing as much focus as possible to the alignment and planning of particulars, especially planned impact.

The talent leader should ask the person requesting help how it will help achieve an enterprise or business unit goal. In other words, will the talent initiative be aligned to an important goal? If not, perhaps it should not be done or should be undertaken only after better aligned and higher-priority initiatives have been completed. If the initiative appears to be aligned and able to contribute to achieving a goal or meeting a critical need, then the discussion should turn to the particulars of the recommended initiative, including planned impact (either quantitative or qualitative).

Show Results

The strategic alignment approach is an excellent way to set yourself up for success in determining the impact of a talent initiative. The next three approaches focus on how to determine impact once an initiative has been concluded. (They may also be used to gauge impact while the initiative is under way, but for the purposes of this chapter, we will limit our discussion to their use after completion.)

The first and most obvious of these is to show the results of the talent initiative. This can be accomplished several ways, including employing a compelling chain of evidence, showing a comparison of the results with and without the initiative, and determining whether goal owner expectations were met. We begin with the compelling chain of evidence.

Compelling Chain of Evidence

This approach is very appealing intuitively. Because the talent initiative was designed to help achieve an organizational goal or help meet a critical need, the question is, “Did it?” Of course, we have to allow sufficient time for the initiative to show results. In some cases, like safety training, this may be immediate. In others, like initiatives to improve leadership or employee engagement, it may take several months or even quarters before results appear.

Assuming sufficient time has passed for the results to be visible, we are now ready to assess whether the desired results were achieved and whether it is likely that the talent initiatives contributed to them. For example, if the goal was to increase sales, did sales increase? If the answer is yes, can we show a compelling chain of evidence that the talent initiatives likely contributed to that increase?

Answering the first question is usually easy; answering the second is harder.

Let’s say that sales did increase. We have the answer to the first question—yes. Now we need to find a compelling chain of evidence for impact from the training program. One approach would be to look at Levels 1 to 3 of Kirkpatrick’s Four Levels of Evaluation, plus participant completion:

•  The right people (target audience) completed the training.

•  The participants liked the training, found it helpful, and would recommend it to others (Level 1 participant reaction).

•  The participants took a knowledge test covering what had been taught and passed it (Level 2 learning).

•  The participants demonstrated their new behaviors or used their new skills on the job to improve their performance (Level 3 application).

Let’s return to the sales example. Suppose the agreed-upon sales representatives completed the required training on consultative selling skills and product features. The post-event survey showed they liked the training and would recommend it. The knowledge test showed they had mastered the new content, and in-class role plays demonstrated they could perform the new behaviors. Let’s also assume that supervisors in the field observed the reps after training and confirmed their successful application of the new behaviors and knowledge. Wouldn’t a reasonable person conclude that the sales training had indeed been impactful? I believe most would.

The case for impact could be strengthened further if SVP Kronenburg and the talent program manager had agreed on target values for these measures ahead of time. These targets should represent the values that both parties believe will be required to have the intended impact. For example, they could agree that for training to increase sales, 100 reps need to complete the training by March 1 with an average test score of 90 percent, and then the reps need to apply at least 80 percent of their newly acquired skills and knowledge successfully on the job. If these targets are met, that strengthens the case for training impact.
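Checking agreed-upon targets like these is a simple comparison exercise. As a hypothetical illustration (the measure names and values below are mine, modeled on the Kronenburg example, not a standard instrument), it might look like this:

```python
# Illustrative sketch: checking agreed-upon targets to strengthen the
# chain of evidence. Measure names and values are hypothetical.

targets = {"completions": 100, "avg_test_score": 0.90, "application_rate": 0.80}
actuals = {"completions": 104, "avg_test_score": 0.92, "application_rate": 0.83}

def targets_met(targets: dict, actuals: dict) -> dict:
    """Compare each actual value against its target; True means the target was met."""
    return {measure: actuals[measure] >= target for measure, target in targets.items()}

results = targets_met(targets, actuals)
if all(results.values()):
    print("All agreed targets met; the case for training impact is strengthened.")
```

The point is not the code itself but the discipline it represents: targets are set with the goal owner ahead of time, then each is checked against the actual result.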

The case could be further strengthened if they identified leading indicators in the sales process (like prospect identification and request for quotes) and if the initiative had been designed to achieve them. If targets were set for these leading indicators and if they were achieved, the case for training impact is strengthened even more.

This approach is used by many to demonstrate the impact of training programs, but the concept of collecting a compelling chain of evidence can be applied to other types of talent initiatives as well. For many, this approach is all that is required to convincingly show impact.

Comparison of Actual Results to Baseline

Another popular way to show results is to compare a baseline without the talent initiative to the actual results with the initiative. This will work if the initiative is the only thing (or at least the only significant factor) that has changed. The baseline is often represented by historical results, like last year’s results. The expectation is that this year’s results would be just like last year’s unless we did something different; thus last year serves as a baseline and any improvement this year over the baseline would reflect, at least in part, the results of the talent initiative. This effect is often shown graphically (Figure 48-1).

Figure 48-1. Graphical Depiction of Results

For example, suppose the sales training initiatives were deployed and completed in the first quarter of 2022. If nothing else of significance had changed since 2021, most people would probably agree that sales training positively impacted sales in 2022.

Of course, we cannot say for sure with this method. While the improved sales are certainly correlated with training, we have not proven that sales training was the cause—correlation does not mean causation. Still, many would be convinced that the training must have played a role in the sales increase.

While last year’s results are often used as the baseline, the goal owner could also establish a baseline for the plan year without the talent initiatives. This happens when historical results are not available or other factors are changing as well. For example, suppose two new products are being introduced and a competitor is going out of business. In this case, sales would be expected to increase regardless of any new talent initiatives, so the goal owner establishes a baseline of a 20 percent increase in sales without the new initiatives. If the year’s sales increase by more than 20 percent, then the goal owner might conclude that the sales training was at least partly responsible for exceeding the baseline. The obvious drawback to this method is that the baseline might have been set too low or too high, giving the training initiative too much or too little credit.
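The baseline comparison reduces to a subtraction: actual growth minus the growth expected without the initiatives. A minimal sketch, using hypothetical numbers built on the 20 percent baseline example:

```python
# Illustrative sketch: comparing actual results to a goal-owner baseline
# established without the talent initiatives. Numbers are hypothetical.

def improvement_over_baseline(baseline_growth: float, actual_growth: float) -> float:
    """Return growth in excess of the no-initiative baseline.

    A positive value is the portion the goal owner might attribute,
    at least in part, to the talent initiatives."""
    return actual_growth - baseline_growth

# Baseline: 20% sales growth expected without the initiatives (new products,
# competitor exiting). Suppose actual growth came in at 26%.
excess = improvement_over_baseline(0.20, 0.26)
print(f"Growth above baseline: {excess:.0%}")
```

Note the caveat from the text still applies: the calculation is only as good as the baseline, which may have been set too low or too high.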

Survey Question on Goal Owner Expectations

A final way to show results is to ask the goal owner. This approach will be most meaningful if the goal owner and CTO or CLO have agreed up front on expectations for the initiative. The question may take several forms and can be easily incorporated into a survey instrument. For example, a survey for a learning program might include statements such as:

•  The results from the learning program met my expectations.

•  The impact from the learning program met my expectations.

The goal owner would then respond using a 5- or 7-point Likert scale, with options ranging from strongly disagree to strongly agree.

This question may be added to the post-event Level 1 goal owner survey, which may also include additional Likert questions not related to impact, such as:

•  L&D was easy to work with.

•  The program was delivered on time and on budget.

•  I would recommend L&D to my colleagues.

Results can be aggregated for all goal owners, which should allow the CTO or CLO to make a statement like, “90 percent of goal owners indicated that the learning program met their expectations for results.”
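Aggregating the responses is straightforward. As an illustration only (the responses and the 5-point scale convention below are hypothetical), counting agree (4) and strongly agree (5) as "met expectations" might look like this:

```python
# Illustrative sketch: aggregating goal owner responses on a 5-point Likert
# scale, where 4 (agree) and 5 (strongly agree) count as "met expectations."
# Responses are hypothetical.

def percent_met_expectations(responses: list, threshold: int = 4) -> float:
    """Return the share of respondents at or above the agreement threshold."""
    met = sum(1 for r in responses if r >= threshold)
    return met / len(responses)

# Ten goal owners rating "The results from the learning program met my expectations"
responses = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5]
share = percent_met_expectations(responses)
print(f"{share:.0%} of goal owners indicated the program met their expectations")
```

This produces exactly the kind of statement quoted above, such as "90 percent of goal owners indicated that the learning program met their expectations for results."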

Final Thoughts

Alignment is an excellent starting point in the quest to show impact; it ensures the planned initiatives are the right ones to undertake and provides estimates of planned impact. Once the initiative is completed, we need to show results using a compelling chain of evidence or comparisons to a baseline. This, however, does not quantify the results. For that, we need to take the third and fourth approaches, isolating the impact of learning and ensuring that the impact was worth it by determining the ROI.

The approaches of showing alignment and some level of results are good methods to use if your resources are limited. Showing impact is typically the most difficult and time-consuming, so start with alignment and results. Plus, if the program is not aligned to critical goals or needs, you probably should not be doing it, regardless of impact and ROI. In other words, a high ROI does not trump alignment.

I recommend using all four approaches, beginning with alignment and then proceeding through results to impact and ROI. This will provide the most convincing business case and mitigate the risk that your audience might not be convinced by any one of them taken alone. As a reminder, you can learn more about the other two approaches—show impact and show ROI—on the handbook’s website at ATDHandbook3.org.

Determining impact using these approaches will not only answer questions about value for just-completed initiatives, but will also help make a convincing business case for future investments. Furthermore, it will uncover opportunities for continuous improvement and strengthen your strategic partnership with goal owners and senior leaders.

About the Author

David Vance is the executive director of the Center for Talent Reporting, which is a nonprofit, membership-based organization dedicated to the creation and implementation of standards for human capital measurement, reporting, and management. He is the former president of Caterpillar University, which he founded in 2001. Prior to this position, Dave was chief economist and head of the business intelligence group at Caterpillar. He teaches in the PhD programs at Bellevue University and the University of Southern Mississippi, as well as the executive education program at George Mason University. Dave also serves on the Metrics Working Group for the International Organization for Standardization. He is the author of The Business of Learning, now in its second edition, and co-author, with Peggy Parskey, of Measurement Demystified and Measurement Demystified Field Guide.

Recommended Resources

Kirkpatrick, D., and J. Kirkpatrick. 2006. Evaluating Training Programs: The Four Levels. San Francisco: Berrett-Koehler.

Parskey, P., and D. Vance. 2021. Measurement Demystified Field Guide. Alexandria, VA: ATD Press.

Phillips, J.J., and P.P. Phillips. 2016. Handbook of Training Evaluation and Measurement Methods, 4th ed. New York: Butterworth-Heinemann.

Vance, D. 2017. The Business of Learning: How to Manage Corporate Training to Improve Your Bottom Line, 2nd ed. Windsor, CO: Poudre River Press.

Vance, D., and P. Parskey. 2020. Measurement Demystified: Creating Your L&D Measurement, Analytics and Reporting Strategy. Alexandria, VA: ATD Press.
