2

An Overview of Evaluation

What’s Inside This Chapter

This chapter walks you through the basic tenets of evaluation, why you should evaluate, and why some avoid the process. It finishes with a case study on evaluating a training program. You’ll also learn about:

a basic definition of evaluation

benefits of evaluation

why talent development professionals avoid doing evaluations

how training evaluations go wrong

roles and responsibilities for evaluation

basics of the four levels of evaluation.

What Is Evaluation?

When you think of evaluating something, what do you think of? Assessing something against certain standards or criteria, determining its usefulness or quality, and comparing it against other similar programs or products are just a few situations that come to mind. For example, think of purchasing a car. Many buyers evaluate one vehicle against others, setting up criteria that may include style, cost, gas mileage, safety, reliability, and resale value. They then evaluate each potential car against those standards. Through this type of process, they make a decision.

The evaluation of training is similar—the course is evaluated using a standard that is formalized through some sort of instrument so you can consistently evaluate it against all other courses. You then use the results to make a decision regarding the course design, development, implementation, and impact. Therefore, one idea is that evaluation is measuring something to make a decision. The decisions could include stopping or expanding the offering, changing the content or instructional strategies, or using the evaluation results to secure additional funding. Think of evaluation as the process of appraising training to determine and improve its value. This idea includes measuring the quality of the learning event, the actual learning that has occurred, changes in learner behavior, application of new knowledge and skills on the job, and impact on the organization.

Purposes of Evaluation

Gathering data and conducting an analysis provide information. But for that information to be useful, it must serve a purpose:

To improve the design of the learning experience: Evaluation can help you verify the needs assessment, learning objectives, instructional strategies, target audience, delivery method, quality of delivery, and course content.

To determine if the objectives of the learning experience were met and to what extent: The objectives are stated in measurable and specific terms. Evaluation will determine if each stated objective was met. However, it is not enough to know only if the objectives were met; you must know the extent to which they were met. This knowledge will allow you to focus your efforts for content reinforcement and improvement.

To determine the adequacy of the content: How can the content be more job related? Was it too advanced or not challenging enough? Does the content support the learning objectives?

To assess the effectiveness and appropriateness of the instructional strategies: Case studies, tests, exercises, and other instructional strategies must be relevant to the job and reinforce course content. Does the instructional strategy link to a course objective and content? Is it the right instructional strategy to drive the desired learning or practice? Was there enough instruction and feedback? Does it fit with the organization’s culture? Instructional strategies, when used as part of evaluation, can measure the knowledge, skills, and abilities (KSAs) the learning experience offers.

To reinforce learning: Some evaluation methods can reinforce learning. For example, a test or similar performance assessment can focus on content so that content retention is measured and evaluated. The measurement process itself causes the learner to reflect on the content, select the appropriate content area, and use it in the evaluation process.

To provide feedback to the facilitator: Did the facilitator know the content? Did the facilitator stay on topic? Did the facilitator provide added depth and value based on personal experience? Was the facilitator credible? Will you use the evaluation information to improve the skills of the facilitator?

To determine the appropriate pace and sequence: Do you need to schedule more or less time for the total learning experience or certain parts of the learning? Were some parts of the learning experience covered too fast or slow? Does the flow of the content make sense? Does the sequence follow a building-block approach?

To provide feedback to participants on learning: Are the participants learning the course content? Which parts are they not learning? Was there a shift in knowledge and skills? To what extent can the participants demonstrate the desired skills or behavior?

To identify which participants are experiencing success in the learning experience: Evaluation can identify which participants are grasping the new knowledge and skills and those who are struggling. Likewise, evaluation can identify participants who are excelling in their understanding of the content and its use on the job.

To determine business impact, cost-benefit ratio, and ROI for the program: What was the shift in the identified business metric? What part of that shift was attributable to the learning experience? Was the benefit to the organization worth the total cost of providing the learning experience? What is the bottom-line value of the course’s impact on the organization?

To identify the learning used on the job: What part(s) of the learning experience are being used on the job? To what extent?

To assess the on-the-job environment to support learning: What environmental factors support or inhibit the use of the new knowledge, skills, abilities, and behaviors on the job? These factors could be management support, tools and equipment, recognition and reward, and so on.

To build relationships with management: The evaluation process requires a conversation with management about the business metric, evaluation plan, collection of information, and the communication of results. This continual interaction provides the opportunity to build relationships and add value to the accomplishment of objectives.

To decide who should participate in this or future programs: The needs assessment includes an audience analysis. In addition, the evaluation will help determine the extent to which the content applies to a person’s actual job.

To gather data for marketing purposes: Positive results can help promote the learning experience to other potential participants. It can also help position the talent development unit as adding value to internal clients.

As you can see, there are many purposes of evaluation, and the preceding list is not exhaustive. How is this information used? The evaluator determines the purpose of the evaluation as part of the evaluation plan (to be discussed later). That purpose then shapes the decisions to be made, the types of data collection instruments used, and the timing, sources, and location of the data.

Basic Rule 2

The purpose of the evaluation drives the evaluation plan.

Benefits of Evaluation

Evaluating learning experiences offers several advantages to the talent development function and the organization. First, an effective, high-quality evaluation can secure client support and build client relationships. Discussing your evaluation plan demonstrates that you have a structured approach to ensure the quality and continuous improvement of your training efforts. This gives your clients confidence that their investments are well placed.

Second, and in concert with the first benefit, evaluation allows you to see if the results from the training program are consistent with the business opportunity analysis and needs assessment. What contribution did training make to a shift in the business metric? What was the organizational impact?

Third, evaluation helps focus the training program. Do you have the right content, directed at the right audience, delivered effectively? The evaluation results provide information regarding the target audience and individual participants. Evaluation also assesses the alignment of the content with the learning objectives, needs assessment data, and instructional strategies. It indicates how well the design and development process actually worked.

Think About This

The purpose behind a specific evaluation plan shapes your evaluation efforts. It constrains and guides your efforts so you don’t collect information that does not directly relate to the purpose. Likewise, the purpose is a guide to ensure you are collecting enough information to carry out your evaluation. This tension supports cost-effective evaluation.

Fourth, evaluation validates performance gaps and learner needs. Through various performance measurements (tests, behavioral checklists, action planning, and so forth), you can identify ongoing needs. If a learner cannot perform a skill or pass a test, there is still a gap that needs to be addressed.

Fifth, evaluation can help to determine if training is the solution to a performance gap. Training is generally part of the solution if a shift in the business metric occurred, participants learned and can apply their new knowledge and skills, or the original problem or opportunity was addressed. Evaluation can also determine whether or not the program was a cost-effective solution. By knowing the total costs of the learning experience and the dollar value of the benefit (calculated by looking at the shift in the business metric), you can determine the ROI. Obviously, a positive ROI is desirable.

Sixth, if you demonstrate value, you may gain access to more resources. Management will be more likely to fund initiatives that make a difference to it and the organization. By helping management meet its objectives, you become a partner in that success.

Think About This

Proving the value of the program: Demonstrate that the learning experience makes a difference and that the difference is worth the investment. In essence, there is benefit in judging the value of the program.

Improving the value of the program: The improvement may occur in such areas as facilitator delivery or content expertise; materials, facilities, and equipment; program sequence and pace; revision of content; and learning strategies.

Linking training to business needs: This goes beyond just identifying a business metric. It involves partnering with management for business unit success, which supports transfer to the job and helps the client achieve its objectives. Therefore, talent development is seen as adding value to the organization.

Learning reinforcement: Some evaluation efforts (such as a pretest) can serve as course organizers. Evaluation also reinforces learning and supports transfer. This means that evaluation and course content focus on outcomes (Eyler, n.d.).

So, Why Doesn’t Everyone Do Evaluations?

With all these purposes and benefits, it would seem that everyone would be conducting comprehensive evaluations. If only this were true. Many organizations only do the minimum when it comes to evaluation. There are a variety of reasons, some more valid than others. Let’s take a look at nine reasons why talent development professionals do not evaluate their learning experiences:

Evaluation requires a particular skill set. Evaluators must not only know design, but also have an intimate knowledge of evaluation. This goes beyond the basics. The evaluator needs to know data collection methods and have the skills to design instruments, communicate, and plan projects. Evaluators should also be able to analyze data and influence others.

Basic Rule 3

Just because no one is asking for evaluation does not mean that no one wants it.

Evaluation is not a priority. Let’s face it: Everyone is busy and evaluation requires time and effort. Although many evaluation instruments can be designed into the learning experience, it still takes time to collect, analyze, and report evaluation results. The issue here is really one of priority, not time. If evaluation were to become a priority, talent development professionals would make the time to do it.

It is not required. In some cases, no one asks for evaluation. Don’t be fooled. Just because no one is asking does not mean that evaluation is not important. Even if talent development is not pushing for evaluations, your internal clients—who are often results-driven managers with profit-and-loss responsibilities—are still asking the questions; they want to know if the training is effective.

It can result in criticism. Evaluation results are communicated to talent development management and the client organization. Because evaluation also takes place during the learning experience, the learners also receive feedback. Evaluation should be seen as driving efforts for continuous improvement, but it may also result in some criticism. Like all criticism, the receiver must look at the source and validity of the comments and act accordingly. In some cases, damage control may be in order. In all cases, criticism can be a catalyst for improvement.

You can’t measure training. Many in talent development look at training as an investment much like advertising. Investments are made without really knowing the results because they cannot be measured. Or, what can be measured has so many influences that training’s contribution to change cannot be separated from other factors. Much of the problem lies in the fact that a business metric is not determined. Without some change to measure, measurement is not possible.

Too many variables are beyond the talent development department’s control. The perception is that training’s impact cannot be isolated and measured because so many variables affect performance. The thinking is that there is no way to separate all these variables and focus on training. This is a difficult task, but it is not impossible.

The information is not available. Does this mean that although the organization has the information, you cannot get it? Or that the organization does not have the information? Or that it is not in a form that can be used for evaluation? If you work with an internal client to identify a business metric, that client will have the information, most likely as a performance objective. Most companies have systems that track a great deal of information. Access to the data is another issue; your client should make the information available if it is important to the evaluation effort.

There is no system to track data. If your clients have a performance objective that you can help them achieve, then the clients can track that information. While most companies will not set up a separate system to track a change for a training evaluation effort, you can still link to whatever the existing method is to track the data. If a company is accountable for effecting a change, it will have a tracking system.

It costs too much. Cost always rears its ugly head when undertaking an initiative. Evaluation has costs in terms of time—time to develop the instruments and to analyze and communicate the results. If these skill sets do not reside within the organization, you may need to develop the skills (training costs), hire an evaluator (staffing and personnel costs), or outsource the evaluation (vendor or consulting costs). You must also consider the opportunity costs of the forgone benefits that could have been derived from other uses of the time and dollars spent on the initiative.

Think About This

There are always costs for an initiative; it is a matter of tradeoffs. Evaluation costs time and money, but consider the costs in terms of dollars for staff time, reputation, funding, individual and organizational performance, and so on for training programs that are not effective. Evaluation can help you avoid wasting precious resources on programs that don’t work and focus on the programs that do.

As you can see, the reasons for not evaluating learning experiences may not be as valid as first thought. If you run up against some of these reasons, push back and test the thinking. Do a little investigation to see if the reason is valid or if it is based on faulty thinking and perceptions.

Think About This

The identification of a business metric (sales, turnover, defects, grievances, and so forth) is critical to evaluation. Just as critical is the identification of an internal client who owns the business metric and has it as a performance objective. Internal clients can provide access to the people and information that are critical to tracking and collecting data. Involve the clients in the evaluation planning and get their support.

Reasons for Poor Training Evaluations

Poor training evaluation results are not uncommon, and they occur for many reasons. Some reasons are general in nature, while others are specific to the level of evaluation or the form of delivery. We’ll explore these reasons further in this section.

Issues around needs assessment: Some organizations still send out topical surveys for employees to indicate what training courses they are interested in taking. Other organizations maintain a catalog of courses for employees to pick from. Instead, organizations should use needs assessments to help determine what courses employees should take. Needs assessments help identify any gaps between current and desired performance requirements at the performance, job, and individual levels. Training programs should then be designed and developed to address these gaps.

Design and development: A poorly designed or developed learning experience will result in low program evaluations. If any major part of the design or development process is skipped, the result could be a low-quality learner experience. Poorly designed or executed instructional strategies include those that do not link course content to learning, demonstration, or transfer to the job. Of paramount importance is the development of the learning objectives and instructional strategies.

Everyone must attend: Organizations may require all employees to attend a certain training program. While mandatory programs may impart knowledge, the information is rarely retained and skills are not always developed. In these situations, those in talent development need to find a way to keep costs low while providing the mandated content in a way that encourages knowledge retention and use.

Flavor of the month: How often has an executive gone to a conference or read the latest management or leadership book and then wanted a training course on the topic? All too frequently, the resulting course is developed and provided for an internal target audience, but eventually deemed ineffective. Unless there is a strong link between this training and an identified need, transfer to the job is unlikely and the program will probably have a negative ROI.

Program “creep”: In these situations, the client asks us to add additional content to a training course that is currently being designed and developed based on an identified need. The new content has to either fit into the existing timeframe or add time to the program. These “add-ons” are often not consistent with the established learning objectives. This lack of alignment or fit then results in lower course evaluations.

Shorten the course: Many clients are “content driven” and do not understand the importance of instructional strategies. So it is not unusual for them to request that the training course be delivered in a shorter timeframe, which many trainers will accomplish by eliminating some instructional strategies or learning activities. However, these are the very aspects of the program that provide reinforcement, practice, and application. A better approach would be to present the learning objectives to the client and discuss which objectives the client wants to remove. If any are identified, then that content can be removed along with any activities associated with that learning objective (although if we have done a good needs assessment, involving the client and organization, this is rarely necessary).

Timing: Some roll-out plans, especially when geared to a large audience, take several months to deliver. For example, if training on a new computer system that will go live the following January begins in August, those taking the training early will not be able to effectively use the new system—too much time will pass between training and use. The training organization should either increase the effectiveness by beginning the training program closer to the start of use, or it should provide periodic reinforcement training.

Think About This

In flavor of the month situations, those in talent development need to seek ways to drive costs out while providing the mandated course(s).

For online or virtual classroom delivery, poor course evaluations could be caused by poor learner or facilitator interactions, instructional strategies that are not intended for the online or virtual classroom environment, or an instructor’s lack of training on the software or technology. The learner experience can also suffer if the course lacks clear instructions and assignments, has ambiguous deadlines, or has poorly written discussion prompts that result in decreased learner collaboration, limited chat, and the lack of a learner community.

Level 1 Reasons for Poor Evaluations

Level 1 evaluation (learner reaction to the training) covers many different areas, any of which can receive low ratings. When objectives are not met, this is usually an indication of poor facilitation or inadequate design and development, which results in lower evaluation scores.

Low facilitator ratings could be due to poor course design or facilitation skills. If facilitators are simply reading from the leader’s guide or the media without deeper knowledge and experience, it is obvious to learners and can reduce the facilitators’ credibility. Furthermore, while learners may have a good time during a training session, this should not come at the expense of meeting course learning objectives. Using a facilitator-selection instrument should allow for selecting skilled and knowledgeable facilitators.

Poor program management (going too fast or too slow) and materials that don’t follow adult learning principles also frustrate the learners and result in a loss of attention and learning. In addition, engagement suffers when the media quality is poor or the media lack good variety (such as PowerPoint, YouTube, videos, whiteboards, and wall boards).

Level 2 Reasons for Poor Evaluations

Level 2 evaluations (change in knowledge, skills, and abilities; demonstration of new skills) can highlight poorly designed or executed instructional strategies that do not link course content to learning or demonstration. While course designers can be very creative in their activities, the further those activities are removed from simulating the required job skills, the less likely they will be to affect skill transfer and a good evaluation.

Not providing knowledge or demonstration assessments is another common problem. For example, if the goal of a coaching course is to teach learners the steps in the coaching process, then one way to show knowledge acquisition would be to have them list the steps in a short test. They could also engage in a role-play exercise and then be assessed on the extent to which they properly demonstrate those coaching steps. This method is especially helpful because it allows for more practice and reinforcement, which also increases the likelihood of improved performance and course evaluation.

When objectives are not met, it is usually a result of poor facilitation or inadequate design and development. Because objectives state what knowledge or skill a learner should possess, not meeting them has a negative impact on evaluation. In addition, the more a learner practices a skill, the more reinforcement occurs, which increases the likelihood of improved performance and course evaluation. Thus, allowing for practice is key.

Level 3 Reasons for Poor Evaluations

Level 3 evaluation looks at transfer to the job as well as environmental factors enabling or hindering that transfer. Not providing action plans or other transfer strategies to move the new KSAs from the training environment to the job is one reason training programs fail at Level 3. So is not involving the client’s organization—lack of an immediate manager’s support is a primary reason training is not used on the job.

Any issues in the job environment that could hinder the transfer of training should be addressed early in the design and development process. The influence of peers as it relates to the use of new skills is one barrier that is often neglected. Likewise, strategies to strengthen the enablers to transfer should be incorporated in the training process. Doing so is essential to ensure new skills are applied on the job.

A lack of tools or resources—such as equipment, software programs, financial support, peer and manager coaching, new procedures, and information—could derail employee progress in using new skills. Similarly, the recognition and reward system needs to support the use of the new knowledge and skills. In many cases, performance may initially decline as the learners gain new skills, only to excel once they are proficient. Performance accommodations need to be made to allow for this initial decline.

Level 4 Reasons for Poor Evaluations

Level 4 evaluation relates to the impact and ROI resulting from the training. Impact results from a shift in performance as measured by some metric (increase in sales, fewer defects, increase in the share of a customer’s business, reduced costs, and so on). When no metric is identified, evaluation is impossible. However, many factors can cause a metric to shift, so it is important to determine what portion of the shift is attributable to the training. Failure to do so will result in a loss of confidence in the findings.

Think About This

If a metric associated with the training program is not identified at the outset, there cannot be a measurable impact.

Some organizations may be reluctant to set up a new system to track a change due to training. Thus, the initial conversations with the client should not only identify the metric, but also the system or source of data to measure the change over time.

The list of cost factors to include in the ROI calculation can be extensive, but a cost should only be left out if the client agrees not to include it in the calculations. The ROI calculation also requires the evaluator to determine the dollar value of the benefit derived from the training. It is important to do this in a credible fashion, such as by asking the client to provide access to the information.

For online or virtual classroom training, poor course evaluations could be due to poor learner and facilitator interaction and response times; poorly written prompts for threaded discussions, resulting in a lack of learner collaboration and limited chat; instructional strategies that do not lend themselves to the online or virtual classroom environment; a lack of training on the software, clear instructions, and assignments; ambiguous deadlines; and the failure to build a strong learner community.

Roles and Responsibilities for Evaluation

Evaluation is not just the responsibility of the course designer, facilitator, or evaluator. Although the training organization takes the lead, the client, the participants’ managers, and the participants themselves also have a responsibility to ensure a complete evaluation. The following is a discussion of roles and responsibilities regarding training evaluation.

Basic Rule 4

A complete evaluation effort requires the involvement and support of the talent development staff, training participants, managers of the participants, and the client.

Training Organization

The talent development department is responsible for:

• working with the client to identify the business metric and complete the evaluation plan

• designing data collection instruments and collecting and interpreting the data

• implementing the evaluation plan

• designing a learning experience that can be evaluated beyond Level 1

• developing learning experiences that have evaluation design, principles, procedures, strategies, and instruments in place to measure results for traditional classroom delivery or for blended, online, or the virtual classroom

• facilitating the training (classroom, blended, online, or virtual) to ensure learning and transfer

• implementing the evaluation plan, instructional strategies, and measurement instruments before, during, and after the learning experience

• communicating the evaluation results to appropriate audiences

• using the evaluation data for making decisions according to the evaluation plan

• following up to make sure decisions based on the evaluation plan are implemented.

Client

Talent development professionals have a great deal to do with evaluating training. However, to be successful they must partner with the client, whose responsibilities include:

• working with the talent development professional to identify the business metric and develop the evaluation plan

• ensuring that a tracking mechanism is in place to monitor any changes in the metric

• providing access to people and data to support the evaluation plan

• being actively involved in the design and development process

• offering input and support to ensure that the training program links back to the business needs, the content is relevant to the participants’ jobs, instructional strategies and assessments are job related and consistent with the culture, and training is directed to the appropriate audience.

Think About This

In many cases, an evaluator must access information that is in the client’s database. Therefore, the evaluator needs to work with the people who are the keepers of the data.

Participants’ Managers

The managers or supervisors of the participants have important responsibilities regarding evaluation. As with the client, managers need to be actively involved in the design and development process. To support the evaluation effort, they should:

• identify employees whose participation in the learning experience is critical for the desired business improvement

• work with the talent development professional to complete an audience profile

• participate in the needs assessment, design process, and curriculum development

• make the participant, other individuals, and any required data available to the evaluator after the training

• support the data collection efforts.

The client and the participants’ managers also have joint responsibilities for the learning and transfer processes, without which training cannot be effective. Some of these responsibilities include:

• ensuring that the required systems and processes are in place to support learning and transfer

• identifying employees whose participation in the learning experience is critical for the desired business improvement

• developing and sustaining an environment that is conducive for learning and transfer, including opportunities that support the use of new knowledge, skills, and abilities on the job

• identifying environmental factors that support or inhibit the transfer of learning to the job

• providing the required resources before, during, and after the training to support the participants’ learning and application of new knowledge, skills, and abilities on the job

• discussing the learning experience with participants prior to their participation to determine expected outcomes, explain the transfer of learning to the job, and complete the performance contract

• reinforcing behavior after the learning experience and providing rewards and recognition for success

• being proactive in identifying and removing barriers to the application of new knowledge and skills

• holding learners responsible for using and sharing their learning.

Participants

Participants cannot escape responsibility for being a part of the evaluation effort. They contribute to the evaluation process by:

• participating fully in the learning experience, including performing their best regarding instructional strategies, tests, and other assessments

• partnering with their managers to choose learning experiences intended to improve individual and business performance

• applying the new knowledge, skills, and abilities to the job

• providing feedback on the learning experience and environment

• working with their managers to remove any barriers to fulfilling the evaluation plan

• supporting the post-training data collection effort

• completing all evaluation instruments and submitting them on time.

Building on the Four Levels of Evaluation

Donald Kirkpatrick (2006) developed what is probably the best-known model for evaluating learning experiences. His model consists of four levels: reaction, learning, behavior, and results. Jack Phillips (1994, 2006) then expanded the fourth level, creating a five-level model consisting of reaction, learning, application, business impact, and ROI. Phillips’s model allows the evaluator to first determine the business impact of a training course (the shift in the business metric) and then use that business impact data to determine the ROI for the course, thus placing greater emphasis on both business impact and the link to ROI. Using Kirkpatrick’s model as a basis, the model presented in Evaluation Basics separates three of Kirkpatrick’s levels into subparts (Figure 2-1). This model depicts Levels 2, 3, and 4 each as having two parts.

Figure 2-1. A Four-Level Model (With Subparts) for Evaluating Learning Experiences

Reprinted with permission from Performance Advantage Group, 2016.

What is the value of separating out the levels? Why this model? As you recall, there are several purposes of evaluation. By separating the levels, you can better focus your evaluation efforts and report the evaluation results on specific areas of interest. Likewise, the model allows you to have a more detailed discussion with your client, including highlighting where some assistance is likely needed. For example, let’s look at a situation where a client wants the course content to transfer to the job and believes that this is the responsibility of the talent development department. By discussing the second aspect of Level 3 (environment), the evaluator can show the client that he must be involved in the process. The different levels allow the evaluator to better focus the development of data collection instruments and assign responsibilities to more specific areas of the evaluation plan. Next, by providing more definition around these levels, you can better justify the linkage between levels.

For example, if you are writing an evaluation report on the ROI of a course, you need to incorporate the change in the business metric, the business impact. Likewise, you need to demonstrate not only that the knowledge and skills were used on the job, but also the extent of their use, which directly relates to environmental factors. You will also want to demonstrate that there was not just a shift in knowledge but that the participants could actually demonstrate the application of the knowledge and skills in the training course. So, if the client asks, “How do I know if the use on the job is a result of training?” you can substantiate that the participants could apply the knowledge and skill before returning to the job.

Finally, the model used here can help target where a training problem occurred. For example, if the skills taught in the course are not being used on the job, you will want to prove that the participants could demonstrate those skills in the course. The lack of application may then be related to environmental factors.

Level 1

Level 1 gauges reaction—the participants’ immediate response to the learning experience—in much the same way as a customer satisfaction survey does. Level 1 looks at what the participants thought of the learning experience and includes such things as quality of participant materials (usually the pre-reading material and participant guide), facilitator skills (presentation and facilitation skills, management of time and content, content expertise, ability to “manage” participants), course content and its relevance to the job, facilities, administrative support (registration, information), accuracy of promotional material, and media. Level 1 is often measured using tools, sometimes called smile sheets, that look like customer satisfaction surveys.

Level 1 provides a first glance at the learning experience. Except for a few areas (changes in the physical environment, administrative support, accuracy of promotional materials), there is not enough information to make changes. Level 1 evaluation gives insights and indicates that more information is needed before making changes.

Level 2

Level 2 has two parts. The first part is learning, which is the extent to which the participants improve their knowledge, skills, and abilities as a result of the learning experience. For Level 2 we ask, was there a shift in learning? Did learning take place? For example, a diversity or racial awareness program is designed to shift attitudes. Keyboarding, technical training, and computer skills programs are meant to improve skills. Programs involving motivation, leadership, or communication are designed to address all three aspects of learning as they have aspects of knowledge, skill, and attitudes embedded within the learning experience.

Level 2 also addresses the demonstration of the learning within that learning experience. This is the demonstration side of the program content, and is where participants can practice their new skill or behavior. Usually an observer has a checklist to ensure that the demonstration of the new skill or behavior is up to standard. For example, a coaching program teaches the five steps in coaching. For practice, the participants can role play a coaching session demonstrating those five steps while an observer fills out an observation feedback (evaluation) instrument. This instrument could be as simple as a yes/no questionnaire designed to indicate whether the behavior was observed, or as complex as a scaled instrument to reveal the extent to which the learner demonstrated the five steps of coaching.

Level 3

Level 3 evaluation measures behavior or transfer to the job. The idea of transfer is simply the shifting of something from one place to another. You shift money from savings to checking. Likewise for learning, the knowledge, skills, and abilities gained in the learning experience shift to the work environment and job. This shift is measured in how much the training participants apply what they’ve learned on the job.

Transfer also has two parts. Effective transfer is both a design issue and an environmental issue. The first part is the use of the skills on the job. Are the participants using what they’ve learned on the job? To what extent? Did the learning experience provide them with not only the content for knowledge transfer, but also the skills and tools to apply the course content to their job?

The second aspect of Level 3 evaluation is the work environment: the barriers or enablers that support or hinder transfer. For example, one of the barriers to transfer can be a supervisor who prohibits the use of new skills. “We don’t do it like that!” is all too common. Another barrier could be timing; for example, your organization is establishing a new system that becomes effective January 1, but the training on the new system takes place in September. Do you really think that the participants will be able to use their new knowledge and skills several months later? A third barrier is when tools or equipment that are used as practice in the training are not available on the job. There is also the issue of lack of recognition and reward. The individual goes back to the job and nothing is ever mentioned; the training is not even acknowledged and there is no attempt to provide opportunities for transfer. There are no new job assignments, sharing of knowledge and skills with peers, praise, or monetary rewards. Nothing! It is business as usual.

However, the environment also can support the transfer process. Some enablers can be incorporated in the design as well as management practices to support learning. The design can include instructional strategies that support transfer, such as a performance contract, action planning, involvement of management in delivery, and action learning. On the job, the immediate manager can incorporate the training in job assignments and ensure that the performance contract or action plans are completed. The participant can teach peers or do peer coaching, helping others gain knowledge and skills. Verbal praise or a challenging job assignment with exposure to other managers provides recognition. What about a bonus? The challenge for the designer, facilitator, participant, and manager is to create and sustain an environment that enhances the enablers and reduces the barriers to transfer.

Basic Rule 5

The designer, facilitator, participant, and manager all have responsibilities to support transfer.

Level 4

Level 4 evaluation assesses training results and includes both impact and ROI. This is where you go back to the business metric. Did the metric change? Were there fewer grievances or defects? Was turnover reduced? Did sales increase? Did costs decline?

The evaluator must monitor results to measure the impact. After all, the change in the business metric is the reason for the training in the first place. (It will be discussed later whether all training results in a measurable impact.) With the impact established, the ROI is a matter of comparing the net impact in dollar terms to the total program costs and expressing the ratio as a percentage:
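
ROI (%) = (net program benefits ÷ total program costs) × 100

where net program benefits are the dollar value of the benefits attributable to the program minus the total program costs. (This is one common formulation, consistent with the Phillips ROI methodology cited earlier.) For example, a program that costs $50,000 and produces $80,000 in benefits attributable to the training yields an ROI of ($80,000 − $50,000) ÷ $50,000 × 100, or 60 percent.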

Very few programs are evaluated using the ROI approach. However, even if you do not conduct an ROI analysis, knowing your program costs is important because it can help you manage those costs and deliver a cost-effective learning experience.

Think About This

Blended, online, and virtual classroom courses provide different delivery formats. Regardless of the format, the training can be evaluated at all four levels. The metrics may expand or change, but the basic process remains the same.

Getting It Done

This chapter provided an overview of evaluation. Now, you’ll have a chance to apply what you have learned and find ways to use evaluation more effectively in your organization. Exercise 2-1 can help you figure out why you or your organization does not conduct evaluations and suggests some ways to overcome this resistance. Exercise 2-2 allows you to rate various types of evaluation activities and develop action plans to implement these types of evaluation measures. Finally, Exercise 2-3 provides the opportunity to think about and apply some of the initial ideas around the levels of evaluation. The solutions to this exercise are in Appendix B.

Exercise 2-1. Why Don’t You Evaluate?

Many reasons exist why organizations do not conduct more extensive evaluations. Read the list below and then:

1. Add any additional reasons that relate to your organization.

2. Check any boxes that indicate the reasons why your organization does not conduct a more extensive evaluation of the training initiatives.

3. Note what actions you might take to overcome a particular reason for not evaluating training.

Reason for Not Evaluating | Actions for Overcoming Resistance to Evaluation
Evaluation requires a particular skill set that does not reside in my organization. |
We don’t have the time; it’s not a priority. |
Evaluation is not required. |
Unfavorable results from evaluation can result in criticism. |
You can’t measure training. |
Too many variables are beyond my control. |
The information is not available. |
We don’t have a system to track data. |
It costs too much. |

Exercise 2-2. Reasons to Evaluate

Many reasons exist to perform an evaluation. Read the following list of common reasons for evaluation and then:

1. Rate the importance of each evaluation purpose for your organization using the following scale: 0 = not important at all, 1 = of little importance, 2 = of some importance, 3 = important, or 4 = very important.

2. If you rate an area 2 or less, develop an action plan to increase the importance of that evaluation’s purpose for your organization.

Exercise 2-3. Case Study: The Efficient Electric Company Case

In the hallway after a weekly departmental staff meeting, Jim Giles, president of Efficient Electric, asked Carolyn Benton, manager of the training department, about a fairly recent course that had been delivered to the company’s supervisors. The mandatory program, called the Personality Styles Workshop, taught the supervisors how to work together and better understand one another in their jobs. This program was also designed to improve their leadership skills. The objectives of the course indicated that the participants would be able to:

• Better understand their personality profile.

• Appreciate the personality profile of co-workers.

• Improve working relationships.

• Improve their personal communication.

Jim said, based on some supervisors’ comments and his review of the instrument, that he thought the workshop was thought-provoking and well done, and he identified with the computerized personality profile he received. However, he was curious whether the supervisors had retained the information and were using the material on the job. Jim thought that while the review segment at the end of the session was good, it did not demonstrate that learning or change really took place. He asked Carolyn if she had any way to show what was learned and retained, and if she had any transfer of learning data for the workshops.

Carolyn stated that she thought that there had been some improvement in team building and overall work group communications within the company. She had also heard a lot of positive comments from other employees about the usefulness of the process during lunch.

However, Jim wanted more than just testimonials, so he asked Carolyn if she had more detailed information on the transfer of training and the impact the course was having on the workplace. He also wanted to know what kind of support there was in the field for this program to be applied after the training. He reassured Carolyn that he was not against the workshop, but wanted to be certain it was being used when employees went back to their jobs. He thought that a plan should be made up front, before initiating a course, to determine how to analyze the learning, use, and application of the course content.

Carolyn replied that she did not think she had any firm data, but she could try to find out for him. She was concerned about the president’s remarks, which had surprised her, especially since he seemed to enjoy the workshop. Why was he asking about the program’s use and application now?

As Carolyn thought about the yearlong effort to train every supervisor at Efficient Electric, she reflected on how she had first come across the Personality Styles Workshop. She had attended one a couple of years ago while at a national training conference and was very impressed by how helpful she found it to be in her work and personal life.

As a result, Carolyn thought the workshop would be helpful for the other supervisors at Efficient Electric. She initially offered a pilot session in the human resources division and then to a group of the company’s senior managers. The feedback was very positive, so she decided to offer the course more widely. Carolyn knew that it was expensive, but she thought teamwork had improved and people were using it. However, there was no plan or formal process in place to enhance its use, and she had no way of knowing whether it was being used on the job. She was not sure how to reply to Jim’s comments.

Questions:

1. Who is the client?

2. Given the learning objectives, how would you measure learning (Level 2)? How could you improve the objectives?

3. How could you collect Level 3 data?

4. What is the business metric?

5. How could you perform a Level 4 evaluation on this program?

6. What do you recommend for improvements?
