Introduction to the ASTD Handbook of Measuring and Evaluating Training

Learning professionals around the world have a love-hate relationship with measurement and evaluation. On the one hand, they agree that good measurement and evaluation practices can provide useful data; on the other hand, they feel that measurement and evaluation take time and resources. No one disputes, however, that the need for across-the-board accountability is on the rise, and this is especially true in training and development. With this demand comes the need for resources to support learning professionals in their quest to build capacity in measurement and evaluation. The ASTD Handbook of Measuring and Evaluating Training and its complementary resources are an effort to support learning professionals in this quest.

Measurement and Evaluation: The Challenges and the Benefits

At the most fundamental level, evaluation includes all efforts to place value on events, things, processes, or people (Rossi, Freeman, and Lipsey, 1999). Data are collected and converted into information for measuring the effects of a program. The results help in decision making, program improvement, and in determining the quality of a program (Basarab and Root, 1992).

For decades, experts in training evaluation have argued the case for measurement and evaluation. Many organizations have heeded this call and have applied processes that include quantitative, qualitative, financial, and nonfinancial data. Training functions taking a proactive approach to measurement and evaluation have survived organizational and economic upheaval. Despite the call, however, many training managers and professionals ignore the need for accountability, only to find themselves wondering why the chief financial officer is now taking over training and development. So why have so many training functions failed to embrace this critical step in the human performance improvement process?

Measurement and Evaluation Challenges

Barriers to embracing measurement and evaluation can be boiled down to 12 basic challenges.

1. Too Many Theories and Models

Since Kirkpatrick provided his four levels of evaluation in the late 1950s, dozens of evaluation books have been written just for the training community. Add to this the dozens of evaluation books written primarily for the social sciences, education, and government organizations. Then add the 25-plus models and theories for evaluation offered to practitioners to help them measure the contribution of training, each claiming a unique approach and a promise to address evaluation woes and bring about world peace. It’s no wonder there is confusion and hesitation when it comes to measurement and evaluation.

2. Models Are Too Complex

Evaluation can be a difficult issue. Because situations and organizations are different, implementing an evaluation process across multiple programs and organizations is complex. The challenge is to develop models that are theoretically sound, yet simple and usable.

3. Lack of Understanding of Evaluation

It hasn’t always been easy for training professionals to learn this process. Some books on the topic run to more than 600 pages, making it impossible for a practitioner to absorb the material through reading alone. Not only must the evaluator understand evaluation processes, but the entire training staff must also learn parts of the process and understand how it fits into their roles. To remedy this situation, the organization must focus on how evaluation expertise is developed and disseminated.

4. The Search for Statistical Precision

Complicated statistical models are confusing and difficult for many practitioners to absorb. Statistical precision is needed when high-stakes decisions are being made and when plenty of time and resources are available. Otherwise, very simple statistics are appropriate.

5. Evaluation Is Considered a Postprogram Activity

Because our instructional systems design models tend to position evaluation at the end, it loses the power to deliver the needed results. The most appropriate way to use evaluation is to consider it early—before program development—at the time of conception. With this simple shift in mindset, evaluations are conducted systematically rather than reactively.

6. Failure to See the Long-Term Payoff of Evaluation

Understanding the long-term payoff of evaluation requires examining multiple rationales for pursuing evaluation. Evaluation can be used to

  • determine success in accomplishing program objectives
  • prioritize training resources
  • enhance training accountability
  • identify the strengths and weaknesses of the training process
  • compare the costs to the benefits of a training program
  • decide who should participate in future training programs
  • test the clarity and validity of tests, cases, and exercises
  • identify which participants were the most successful in the training program
  • reinforce major points made to the participant
  • improve the training quality
  • assist in marketing future programs
  • determine if the program was the appropriate solution for the specific need
  • establish a database that can assist management in making decisions.

7. Lack of Support from Key Stakeholders

Important stakeholders who need and use evaluation data sometimes don’t provide the support needed to make the process successful. Specific steps must be taken to win support and secure buy-in from key groups, including senior executives and the management team. Executives must see that evaluation produces valuable data to improve programs and validate results. When the stakeholders understand what’s involved, they may offer more support.

8. Evaluation Has Not Delivered the Data Senior Managers Want

Today, senior executives no longer accept reaction and learning data as the final say on a program’s contribution. They need data on the application of new skills on the job and the corresponding impact in the business units. Sometimes they want return-on-investment (ROI) data for major programs. In a recent survey of senior executives (N = 96), the data they ranked most important were impact data, followed by ROI (Phillips and Phillips, 2010).

9. Improper Use of Evaluation Data

Improper use of evaluation data can lead to four major problems:

  • Too many organizations do not use evaluation data at all. Data are collected, tabulated, catalogued, filed, and never used by any particular group other than the individual who initially collected the data.
  • Data are not provided to the appropriate audiences. Analyzing the target audiences and determining the specific data needed for each group are important steps when communicating results.
  • Data are not used to drive improvement. If not part of the feedback cycle, evaluation falls short of what it is intended to accomplish.
  • Data are used for the wrong reasons: to take action against an individual or group, or to withhold funds, rather than to improve processes. Sometimes the data are used in political ways to gain power or advantage over another person.

10. Lack of Consistency

For evaluation to add value and be accepted by different stakeholders, it must be consistent in its approach and methodology. Tools and templates need to be developed to support the method of choice to prevent perpetual reinvention of the wheel. Without this consistency, evaluation consumes too many resources and raises too many concerns about the quality and credibility of the process.

11. Lack of Standards

Closely related to consistency is the issue of standards. Standards are rules that make evaluation consistent, stable, and equitable. Without standards, there is little credibility in processes and little stability in outcomes.

12. Sustainability

A new model or approach with little theoretical grounding often has a short life. Evaluation must be theoretically sound and integrated into the organization so that it becomes routine and sustainable. To accomplish this, the evaluation process must gain the respect of key stakeholders at the outset. Without sustainability, evaluation will be on a roller-coaster ride, with data collected only when programs are in trouble and little attention paid when they are not.

Despite these challenges, there are many benefits to implementing comprehensive measurement and evaluation practices.

Measurement and Evaluation Benefits

Organizations embracing measurement and evaluation take on the challenges and reap the benefits. When the training function uses evaluation to its fullest potential, the benefits grow exponentially. Some of the benefits of training measurement and evaluation include

  • providing needed responses to senior executives
  • justifying budgets
  • improving program design
  • identifying and improving dysfunctional processes
  • enhancing the transfer of learning
  • eliminating unnecessary or ineffective projects or programs
  • expanding or implementing successful programs
  • enhancing the respect and credibility of the training staff
  • satisfying client needs
  • increasing support from managers
  • strengthening relationships with key executives and administrators
  • setting priorities for training
  • reinventing training
  • altering management’s perceptions of training
  • achieving a monetary payoff for investing in training.

These key benefits, inherent in almost any type of impact evaluation process, make additional measurement and evaluation an attractive challenge for the training function.

Measurement and Evaluation Fundamentals

Regardless of which measurement and evaluation experts you follow, the process of evaluating a training program includes four fundamental steps. As shown in figure A, these steps are evaluation planning, data collection, data analysis, and reporting. When supported by systems, processes, and tools, a sustainable practice of accountability evolves. This is why a focus on strategic implementation is important.

Evaluation Planning

The first step in any process is planning. The old adage “plan your work, work your plan” has special meaning when it comes to comprehensive evaluation. Planned well, an evaluation can come off without a hitch; planned poorly, it leaves evaluators scrambling to decide how to collect and analyze data.

Data Collection

Data collection comes in many forms. It is conducted at different times and involves various data sources. Technique, timing, and sources are selected based on the type of data, time requirements, resource constraints, cultural constraints, and convenience. Sometimes surveys and questionnaires are the best technique. If the goal is to assess a specific level of knowledge acquisition, a criterion-referenced test is a good choice. Data gathered from many sources describing how and why a program was successful or not may require the development of case studies. Periodically, the best approach is to build data collection into the program itself through the use of action planning. The key to successful data collection is knowing what techniques are available and how to use them when necessary.

Data Analysis

Through data analysis, the success story unfolds. Depending on program objectives and the measures taken, data analysis can occur in many ways. Basic statistical procedures and content analysis can provide a good description of progress. Sometimes you need to make a clear connection between the program and the results; this requires isolating the program’s effects through techniques such as control groups, trend-line analysis, and other techniques that rely on estimates. Occasionally, stakeholders want to see the return on investment (ROI) in a program, which requires converting measures to monetary values and developing the fully loaded costs. Forecasting ROI before funding a program is also an important issue for many organizations.
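To make this arithmetic concrete, program benefits and costs are commonly compared using a benefit-cost ratio (BCR) and an ROI percentage. The formulas below follow that convention; the dollar amounts in the worked example are hypothetical and purely illustrative.

```latex
\mathrm{BCR} = \frac{\text{Program Benefits}}{\text{Program Costs}}
\qquad
\mathrm{ROI}\,(\%) = \frac{\text{Program Benefits} - \text{Program Costs}}{\text{Program Costs}} \times 100
```

For example, a program generating $240,000 in monetary benefits against $150,000 in fully loaded costs would show a BCR of 1.6 and an ROI of 60 percent.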

Reporting

The point of evaluation is to gather relevant information about a program and to report the information to the people who need to know. Without communication, measurement and evaluation are no more than activities. Reporting results may occur through detailed case studies, scorecards, or executive summaries. But to make the results meaningful, action must be taken.

Implementation

Program evaluation is an important part of the training process. But the evaluations themselves are outputs of the processes you use. To make evaluation work and to ensure a sustainable practice, the right information must be developed and put to good use. This requires that the right technologies be put into place at the outset, that a strategy be developed and deployed, and that programs of all types are evaluated in such a way that meaningful, useful information evolves.

The ASTD Handbook of Measuring and Evaluating Training

The purpose of this book is to provide learning professionals a tool to which they can refer as they move forward with measurement and evaluation. Each step in the training evaluation process is addressed by experts from corporations, nonprofits, government entities, and academic institutions, as well as those experts who work with a broad range of organizations. Readers will have the opportunity to learn, reflect upon, and practice using key concepts. The handbook will assist readers as they

  • plan an evaluation project, beginning with the identification of stakeholder needs
  • identify appropriate data collection methods, given the type of data, resources, constraints, and conveniences
  • analyze data using basic statistical and qualitative analysis
  • communicate results given the audience and their data needs
  • use data to improve programs and processes, ensuring the right data are available at the right time.

Scope

This handbook covers various aspects of training measurement and evaluation. Intended to give readers a broad look at these aspects, the book does not focus on any one particular methodology. Rather, each chapter represents an element of the four steps of evaluation, or of its implementation, as described above. The book includes five sections.

Section I, Evaluation Planning, looks at the three steps important to planning an evaluation project. By beginning with identifying stakeholder needs, then developing program objectives, and finally planning the evaluation project, an evaluator is more likely to achieve a successful implementation.

Section II, Data Collection, covers the various ways in which evaluation data can be collected. Although the section leads with surveys and questionnaires, other techniques are described, including criterion-referenced tests, interviews, focus groups, and action plans. In addition, the Success Case Method is described, as is the use of performance records in collecting data.

Section III, Data Analysis, looks at key areas involved in analyzing data, including the use of statistics and qualitative methods. Other topics include how to isolate the effects of a program from other influences, convert data to monetary value, and identify program costs so that fully loaded costs are considered when assessing the training investment. In addition, a chapter on calculating ROI has been included, an important element given today’s need to understand value before investing in a program.

Section IV, Measurement and Evaluation at Work, describes key issues in ensuring a successful, sustainable evaluation implementation. The section begins with estimating the future value of a training investment, then turns to reporting and communicating results, because all too often data are collected and analyzed only to sit idle. It then addresses giving CEOs the data they really want, since the industry still often misses the mark when it comes to providing data important to the CEO. Of course, even the right data serve no real purpose in improving programs if they are not put to use, so a chapter on using evaluation data is included as well. To ensure a long-term approach to evaluation is integrated into the training function, a strategy for success is a must, and the right technology must be selected to support that strategy; chapters on implementing and sustaining a measurement practice and on selecting technology round out the section. Section IV wraps up with four case studies describing the evaluation of different types of programs.

Section V, Voices, is a summary of interviews with experts in training measurement and evaluation. Rebecca Ray spent time with each expert, asking for their views on the status of training measurement and evaluation. This summary section gives readers a flavor of those interviews, which are available as podcasts at www.astd.org/HandbookofMeasuringandEvaluatingTraining.

Contributors

Contributors were selected based on their expertise in each area. Expertise, in this case, is not defined by how many books one has written or how well known one is in the industry; rather, it is defined by what these contributors are actually doing with training evaluation. Readers will hear from external consultants who touch a wide variety of organizations, internal consultants who focus on training evaluation within a single organization, individuals who have experience as both internal and external experts, and professors who hone and share their expertise through research. Our contributors work in organizations across the United States, Germany, Indonesia, and Dubai, giving the book an international context.

Target Audience

Four groups serve as the target audience for this book. First and foremost, this publication is a tool that all training professionals need to round out their resource library. Managers of training professionals are another target audience. This resource will support them as they support evaluation within their function. Professors who teach training evaluation will find this publication a good resource to address all elements of the evaluation process. The exercises and references will help professors as they develop coursework, challenge their students’ thinking, and assign application projects. Finally, students of training evaluation will find this publication valuable as they set off to learn more about evaluation and how it drives excellence in program implementation.

How to Get the Most from the Book

The book is designed to provide a learning experience as well as information. Each chapter begins with key learning objectives. Throughout the text authors have included references to real-life applications, practitioner tips from individuals applying the concepts, additional resources and references, and knowledge checks to assess the reader’s understanding of the chapter content. Some knowledge checks have specific correct answers that can be found in the appendix; others offer an opportunity for the reader to reflect and discuss with their colleagues. To get the most out of the book readers should

1. review the table of contents to see what areas are of most interest

2. read the objectives of the chapter of interest and upon completion of the chapter, work through the knowledge check

3. follow up on prescribed action steps, references, and resources presented in the chapter

4. participate in the ASTD Evaluation & ROI blog (www1.astd.org/Blog/category/Evaluation-and-ROI.aspx), where additional content is presented and discussed among your colleagues.

We hope you find the ASTD Handbook of Measuring and Evaluating Training a useful resource full of relevant and timely content. Over time we will add to this content through our Evaluation & ROI blog, the Measuring and Evaluating Training website, and other channels of delivery. As you read the content and have suggestions for additional information, workshops in content areas, and supporting material, please let us know. You can reach me at [email protected], and I will work with ASTD to ensure learning professionals get the information they need for successful training measurement and evaluation.

References

Basarab, D. J. and D. K. Root. (1992). The Training Evaluation Process. Boston: Kluwer Academic Publications.

Phillips, J. J., and P. P. Phillips. (2010). Measuring for Success: What CEOs Really Think About Learning Investments. Alexandria, VA: ASTD.

Rossi, P. H., H. E. Freeman, and M. W. Lipsey. (1999). Evaluation: A Systematic Approach, 6th ed. Thousand Oaks, CA: Sage.
