Chapter 19

Giving CEOs the Data They Want

Jack J. Phillips

In This Chapter

This chapter emphasizes that the executive point of view is the most critical perspective for today’s learning and development manager. After completing this chapter, you will be able to

  • identify eight types of data that can be reported to executives
  • explain why executives appreciate and do not appreciate certain types of data
  • identify specific actions to improve each type of data as it is reported to executives.
 

Setting the Stage

Regardless of what you may have heard, opinions do matter. The executive viewpoint is essential, particularly regarding learning and development. This chapter presents the executive's view, using research data, details, examples, and action items. Now more than ever, one needs to understand executives' views of learning and development based on their actual input, not merely on assumptions about what they think.

Sources of Data

Various sources provide input into developing a profile of reliable, CEO-friendly data. The four key sources of data include executive surveys, executive interviews, executive briefings, and impact studies.

Executive survey. We targeted 401 Fortune 500 companies and 50 large private organizations; 96 CEOs, the most senior executives in their organizations, gave insight into their perceptions of learning and development measures of success.

Executive interviews. Structured interviews complemented the surveys and allowed us to probe for details. These interviews provided insight into executive concerns, desires, and opportunities.

Executive briefings. Almost 3,000 individuals have earned their Certified ROI Professional credential through the ROI Certification process. In some cases, individuals sought assistance from the ROI Institute in presenting an ROI study to their executive teams. During this process, we received feedback yielding comments, discussions, and even lively debate about the results and the need for additional results.

Impact studies. The ROI Institute has conducted hundreds of studies in the last two decades and presented the results to senior executive teams. Some of these presentations have been to Fortune 500 CEOs and boards of directors. These discussions served as data sources of insights into executive concerns and reactions.

Why It Matters

The dilemma surrounding the evaluation of learning is a source of frustration for many senior executives. They intuitively think that providing learning opportunities is valuable, and they logically anticipate a payoff in important, bottom-line measures, such as productivity, quality, cost reductions, time savings, and improved customer service. Yet they are also frustrated by the lack of evidence showing that learning programs work. They assume, when soft-skills programs are involved, that the outputs can neither be measured nor credibly connected to the business. More rigorous evidence must be reported, or executives may feel forced to reduce future funding. Fortunately, the success of learning and development can be measured in ways that are credible, economical, and realistic within the resource boundaries of most learning and development functions.

The Investment Level: The Starting Point

Ultimately, top executives set the investment level for learning and development. Although some rely only on benchmarking, others adopt better-defined strategies. In the CEO survey, five basic strategies for determining the investment level in learning and development were identified.

  • Some executives (4 percent) take a passive role when investing in employees, attempting to avoid the investment altogether, employing competent employees who need minimal exposure to developmental opportunities, or using contract and temporary employees rather than permanent staff.
  • An alternative strategy for some CEOs (20 percent) is to invest only the minimum in learning and development, providing training only at the job skills level with almost no development and preparation for future jobs.
  • Many executives (39 percent) prefer to invest in learning and development at the same level that others invest, collecting benchmarking data from a variety of comparable organizations, often perceived as implementing best practices.
  • Some CEOs (10 percent) operate under the premise that more is better, overinvesting in learning and development beyond what is needed to meet the goals and mission of the organization.
  • A growing number of CEOs (18 percent) prefer to invest in learning and development when there is evidence that the investment is providing benefits. This strategy is becoming more popular following the increased interest in accountability, particularly with the use of return-on-investment (ROI) as a business evaluation tool. With this strategy, all learning programs are evaluated, with a few key programs evaluated at the ROI level.
  • The remaining CEOs (9 percent) did not know the investment level or just decided not to respond.

The Executive View

The views on learning and development were obtained from the CEO survey. Ninety-six CEOs responded, representing 21.3 percent of the 451 targeted. This level of response is impressive considering the difficult economic circumstances at the time the survey was conducted (2009). We wanted to know which measures were being reported, which measures were missing but should be reported in the future, and how the executives would rank the measures in importance, from 1 (most valuable) to 8 (least valuable). Table 19-1 shows the responses.

Table 19-1. Survey Responses


 

| Measure | We Currently Measure This | We Should Measure This in the Future | Average Ranking of Importance | Rank |
| --- | --- | --- | --- | --- |
| Inputs: Last year, 78,000 employees received formal learning. | (90) 94% | (82) 86% | 6.73 | 6 |
| Efficiency: Formal learning costs $2.15 per hour of learning consumed. | (75) 78% | (79) 82% | 6.92 | 7 |
| Reaction: Employees rated our training very high, averaging 4.2 out of 5. | (51) 53% | (21) 22% | 7.15 | 8 |
| Learning: Our programs reflect growth in knowledge and skills of our employees. | (31) 32% | (27) 28% | 4.79 | 5 |
| Application: Our studies show that at least 78% of employees are using the skills on the job. | (11) 11% | (59) 61% | 3.42 | 4 |
| Impact: Our programs are driving our top five business measures in the organization. | (8) 8% | (92) 96% | 1.45 | 1 |
| ROI: Five ROI studies were conducted on major programs yielding an average of 68% ROI. | (4) 4% | (71) 74% | 2.31 | 2 |
| Awards: Our learning and development program won an award from ASTD. | (38) 40% | (42) 44% | 3.23 | 3 |

The first two columns in the table give the number and percentage of CEOs who checked each item as a measure currently being reported and as a measure that should be reported in the future; the last two columns give the group's average ranking of importance and the resulting rank order, where 1 is the most valuable. As shown, inputs and efficiencies ranked 6 and 7, respectively; these types of data are almost always reported. Reaction is ranked the lowest, although it is the number 1 measure reported to executives. This particular measure could be improved with more focus on content. Awards ranked 3, which was higher than we expected. The highest-ranking categories were impact (ranked 1) and ROI (ranked 2). CEOs always want to see these types of data, especially during tough economic times. These are the least-reported data sets, yet they are of the most value to executives.

Reaction Measures

Although participant feedback can be powerful data, the measures taken are often those of convenience versus quality. Executives rate this level of data as least valuable to them. Yet, reaction data are more likely to be reported to executives than any other type of data.

What’s Wrong with Them?

Here are a few issues related to reaction data.

Image problem. Often referred to as "happy sheets," "smiley feedback," or "happiness ratings," these tools are perceived to measure the happiness of participants. Although happiness can be rated, it has very little use in predicting learning and even less use in predicting application and impact.

Not taken seriously. Most stakeholders do not take reaction data seriously. Participants rarely provide quality feedback and often take little or no time to respond to a reaction questionnaire.

Too much data. At least 90 percent of programs are measured at the reaction level. This is too much data collection when compared with the value of the data. The process consumes precious resources, leaving many organizations without the resources to collect and analyze data higher on the value chain.

How to Make Them Executive Friendly

Although executives responding to the survey placed little value on these data for their own decisions, they do recognize the data's importance, because the data help the learning and development team. Several tactics can be undertaken to create a renewed appreciation for reaction measures.

Manage the measures. Report to executives those measures that reflect the contribution of the program. Content-related measures have more meaning to the executives and other key stakeholders.

Use the data. Unfortunately, reaction data are often collected and immediately disregarded. The information collected must be used to make adjustments or validate early program success; otherwise, the exercise is a waste of time.

Forecast. Collect information related to improvement. Consider collecting data about expected results, including impact and monetary contribution. These data can be used to forecast value.

Learning Measures

Understanding how much learning has occurred is important, particularly in programs with significant skill building. Measuring learning has its share of problems.

What’s Wrong with Them?

Although it is an essential part of a comprehensive evaluation system, measuring learning is often misunderstood and misused.

Measuring learning does not equal taking a test. Measuring learning is sometimes equated with testing, and participants fear or resent being tested. The challenge is to make testing (or learning measurement) less threatening and to rarely, if ever, let test scores affect a job situation.

Measuring learning requires resources. With tight budgets, spending excessive resources on developing and administering tests may be hard to justify given the value of the measures taken. There is always a tradeoff between the resources expended and the accuracy of the data desired.

How to Make Them Executive Friendly

Executives do not rank learning measures very high. They see these data as important for the learning and development team, but not important enough to judge the success of learning and development. To make learning measures more relevant and executive friendly, several actions are possible.

Use formal measures sparingly. Formal measures are important in critical jobs involving safety and health, critical operational issues, and customer-facing work. The dilemma of formal measures is that they command resources to ensure that a test is both valid and reliable. It's important to know when formal testing is necessary and when it is not.

Use informal measures. This approach involves participant self-assessment, ideally taken anonymously in a nonthreatening environment. It also may involve team assessments or, in some cases, facilitator assessments. Try building a learning measure into your scorecard. Because executives need only one or two measures on learning, it may be helpful to capture these types of data on a self-assessment basis and roll them into the scorecard.

Consider an ROI forecast with learning data. If there is a statistically significant relationship between test scores and on-the-job performance, and the performance can be converted to monetary units, then it is possible to use test scores to forecast the ROI from the program.
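As an illustration only, here is a minimal sketch of this forecasting idea in Python. All of the figures, including the fitted relationship between test scores and annual performance value, the baseline value per person, and the program costs, are invented for the example; in practice, the score-to-performance relationship must be statistically significant before it can support a forecast.

```python
from statistics import linear_regression, mean

# Hypothetical historical data: post-program test scores and the annual
# monetary value of each employee's measured job performance (dollars).
test_scores  = [62, 68, 71, 75, 80, 84, 88, 93]
annual_value = [41_000, 44_500, 46_000, 48_500, 52_000, 55_000, 58_500, 62_000]

# Fit the score-to-value relationship (requires Python 3.10+).
slope, intercept = linear_regression(test_scores, annual_value)

# Forecast the value for a new group from its average test score.
new_group_scores = [70, 74, 79, 85]
forecast_value = slope * mean(new_group_scores) + intercept

# Compare the forecast gain over a hypothetical baseline with fully
# loaded program costs to produce a forecast ROI.
participants   = 40
baseline_value = 45_000     # invented pre-program value per person
program_costs  = 150_000    # invented fully loaded costs

total_benefits = (forecast_value - baseline_value) * participants
forecast_roi = (total_benefits - program_costs) / program_costs * 100
print(f"Forecast ROI: {forecast_roi:.0f}%")
```

With these invented figures, the sketch prints a forecast ROI of about 47 percent; only the mechanics, not the numbers, carry over to a real study.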

Application Measures

For some programs, measures of application represent the most critical data set. This level of results provides an understanding of successful implementation, along with the barriers and enablers that influence success. Many learning and development programs fail because of breakdowns in application.

What’s Wrong with Them?

This level of measurement is not without its share of issues. Executives have interest in this level of data (61 percent of CEOs say they should be receiving it). They also have concerns about it.

Not enough data. This important data category is essential for understanding on-the-job behavior. Even in best practice organizations, only about 30 percent of learning and development programs have any type of follow-up at the application level. Without this level of data, there is no evidence that learning and development is making a difference in the organization.

Much is perception. Because this evaluation often involves behavior change and specific actions taken, the data are based on perception. Even an observation taken by others is subject to error and misinterpretation.

No standards. Unlike some of the other levels, there are no standards at this level, at least none that are accepted in the industry. Knowing what to ask and how to ask it is not standardized, which makes it impossible to compare data across programs, let alone with other organizations.

How to Make Them Executive Friendly

Almost every executive would agree that behavior change is an important output of learning and development. After all, many of them are attempting to shape the behavior of the organization through development programs, change management programs, and leadership development efforts. However, most executives quickly point out that activity does not always translate into results. A few changes help.

Report only a few measures. Executives may find simple measures helpful when they reflect on-the-job behavior or actions across programs and bring a sense of success to the entire organization.

Address the transfer of learning issue. One of the important reasons for collecting data at this level is to uncover the barriers to, and enablers of, using skills and knowledge. Identify barriers and take actions to minimize, remove, or work around them. Alongside the barriers are the enablers, the supporters and enhancers of the transfer of learning. Working on barriers and enablers with the executives provides an opportunity to make improvements beyond the success already achieved.

Use the data. Data become meaningless if not used properly. As we move up the levels of results, the data become more valuable in the minds of the sponsors, key executives, and other stakeholders who have a strong interest in the program.

Develop ROI with Level 3 Data. Although using skills on the job is no guarantee that results will follow, there is an underlying assumption that if the knowledge and skills are applied, positive results will occur. A few organizations take this process a step further: they measure the value of on-the-job behavior changes and calculate the ROI. If there are no plans to track the actual effect of the program in terms of specific, measurable business results (Level 4), this approach represents a credible substitute.

Build the executive scorecard. As with the previous level, it is necessary to capture data at Level 3 for use on the macro-level scorecard. Specific questions must be identified and asked at this level of evaluation, whether data collection is by survey, questionnaire, interview, focus group, or action plan. The responses are then transferred to the macro-level scorecard.

Impact and ROI Measures

For most executives, the most important data are impact. Today, a growing number of executives seek, request, and require impact and ROI. This is a typical executive comment regarding the kind of data they want to see: “Although the activities are reported, I’d prefer to see the business results. To me, it’s not so much what they’re doing that makes a difference. I’d like to see a connection to our major business goals.”

What’s Wrong with Them?

In exploring what is wrong at this level, the focus is not on the measures themselves but on the credibility of the data and the issues surrounding their collection and presentation.

Not enough data. For various reasons, it is not a common process to connect major programs to business measures, and even less common to show the ROI. More is needed.

Improve the front-end analysis. Problems can sometimes be traced back to the beginning of a program. A new program should start with the end result in mind; the specified business need can be met only if the solution is linked to that need in some way. This requirement shifts the traditional front-end analysis away from classic skills and knowledge assessment toward starting with the business need, linking it to job performance needs, and finally identifying learning needs.

Use higher levels of objectives. The learning and development team is very capable of developing appropriate learning objectives, but these are not enough. Higher levels of objectives, including application and impact objectives, are now required to give proper focus to the learning program.

Is the ROI credible? Assumptions about the ROI calculation must be understood by the audience. Otherwise, credibility is lost. In essence, there must be standardized assumptions with a conservative flair to make ROI data credible, and ultimately, believable by the audience.

How to Make Them Executive Friendly

This category of data makes the business connection. The measures reported are clearly business measures; they often represent the executives' key performance indicators and illustrate business alignment. However, many executives rarely see this done, and in most cases not at all, which leaves them confused and frustrated. Here are a few ways to ease executive frustration.

Create a discipline and culture for sustainability. As this level of evaluation is pursued, a culture of accountability is created in which results are expected from programs and action is taken throughout the process with a results-based focus. Collecting and analyzing data and using the results become systematic and routine, not an add-on process.

Isolate the effects of programs. In this step, evaluation planners explore specific techniques for determining the amount of impact directly related to the program, such as control groups, forecasting models, estimates, and expert opinion. Collectively, these techniques provide a comprehensive set of tools to address the critical issue of isolating the effects of a program.
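As a minimal sketch of the control-group technique, with invented figures: compare the improvement in a business measure for participants against a matched group that did not attend, and credit only the difference to the program.

```python
# Hypothetical monthly sales (dollars) before and after a sales training
# program, for the trained group and a matched control group.
trained_before, trained_after = 92_000, 103_500
control_before, control_after = 91_500, 96_000

trained_gain = trained_after - trained_before   # 11,500
control_gain = control_after - control_before   #  4,500

# Only the improvement beyond the control group's gain is attributed
# to the program; the remainder reflects other influences.
program_effect = trained_gain - control_gain    #  7,000
print(f"Monthly improvement attributable to the program: ${program_effect:,}")
```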

Convert data to money. Calculate the return-on-investment by converting impact data to monetary values and comparing those values with program costs. For some programs, the impact is more understandable when a monetary value is developed. Many techniques for converting data to monetary values are available; which technique is appropriate depends on the type of data and the situation.
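The calculation itself is straightforward once impact data have been converted to money and the program's effects isolated. The following sketch applies the standard benefit-cost ratio and ROI formulas to invented figures.

```python
# Hypothetical annual monetary benefits after converting impact data to
# money and isolating the program's effects.
monetary_benefits  = 840_000
fully_loaded_costs = 500_000  # analysis, delivery, participant time, evaluation

# Benefit-cost ratio: benefits returned per dollar invested.
bcr = monetary_benefits / fully_loaded_costs                               # 1.68

# ROI: net benefits expressed as a percentage of fully loaded costs.
roi = (monetary_benefits - fully_loaded_costs) / fully_loaded_costs * 100  # 68%
print(f"BCR: {bcr:.2f}  ROI: {roi:.0f}%")
```

With these invented numbers, the program returns $1.68 for every dollar invested, a 68 percent ROI, which happens to match the average reported in Table 19-1.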

Treat intangibles with respect. In addition to tangible monetary benefits, most programs will have intangible nonmonetary benefits. The ROI calculation is based on converting both hard and soft data to monetary values. Intangible benefits are program benefits that individuals choose not to convert to monetary values. Sometimes, intangible nonmonetary benefits are extremely valuable, carrying as much influence with executives as the hard data items.

Use forecasting. Before a program is developed, forecasting can be used to anticipate what impact may occur or what ROI may be generated. A forecast can also be conducted as the program is implemented, using data collected at the end of the program, essentially the reaction data. These time frames help executives see the value of projects before they are developed, or at least in the initial stages of implementation.

Executives are interested in impact and ROI, particularly for large-scale programs that are expensive and strategic. Because of the costs, both in time and money, and their perceived connection to results, executives often want to see the ultimate level of accountability. When this is the case, it must be pursued.

Call to Action

Results and measures of value can be developed and communicated to senior executives to influence their perception of and decisions about learning and development. To ensure this influence, the learning and development team must focus on six important actions.

Spend wisely. There is no room for waste, which means programs should be connected to business objectives.

Respond professionally. Quick, professional responses while delivering impeccable service and building professional relationships within the organization are a must.

Operate proactively. The connection to the business should consist of understanding its problems and opportunities as well as being able to examine, explore, and recommend programs that may solve problems before they are requested.

Build productive partnerships. Work with executives and understand their issues, while delivering value that they appreciate. This effort will help make a partnership productive and earn the respect necessary for the success of the learning and development process.

Show results. Pursuing a results-focused approach will have a tremendous influence on executives’ attitudes toward and perceptions of the learning and development process.

Take risks. All executives take risks, and the learning and development team should follow suit. But the team can mitigate these risks by aligning potential programs with business objectives and by making immediate changes as needed.

Knowledge Check

Answer the following questions. Check your answers in the appendix.

1. What is the gap between what learning and development professionals provide to executives and what they want to see in terms of results? Be precise.

2. What are the implications of the gaps for business impact and ROI?

3. What are the implications of 18 percent of executives setting the investment level based on the payoff they see from the investment?

Actions to Bring More Executive Accountability to Learning and Development

1. Discover your gaps—have executives take the survey.

2. Review why top executives invest in learning and development. Identify the strategy your executives take.

3. Consider developing an executive-friendly learning scorecard.

About the Author

Jack J. Phillips, PhD, is a world-renowned expert on measurement and evaluation and chairman and cofounder of the ROI Institute. Through the ROI Institute, Phillips provides consulting services for Fortune 500 companies and major global organizations. Phillips has served as training and development manager at two Fortune 500 firms, senior HR officer at two firms, president of a regional federal savings bank, and a professor of management at a major state university. His academic accomplishments include degrees in electrical engineering, physics, and quantitative methods, and he has a PhD in human resources.

He is the author or editor of more than 50 books, and he conducts workshops and presentations at conferences in 50 countries. His most recent books include Measuring for Success: What CEOs Really Think about Learning Investments (2010); The Consultant’s Guide to Results-Driven Proposals: How to Write Proposals that Forecast Impact and ROI (2010); Show Me the Money: How to Determine ROI in People, Projects, and Programs (2007); and The Value of Learning (2007). He can be contacted at [email protected].

Recommended Reading

Phillips, J. J. and P. P. Phillips. (2007). The Value of Learning: How Organizations Capture Value and ROI and Translate It into Support, Improvement, and Funds. Hoboken, NJ: Wiley.

Phillips, J. J. and P. P. Phillips. (2010). Measuring for Success: What CEOs Really Think about Learning Investments. Alexandria, VA: ASTD Press.
