3

Evaluation and the Design Process

What’s Inside This Chapter

This chapter focuses on linking evaluation to course design. You’ll learn about:

• criteria for course evaluation

• key design concepts as they relate to evaluation

• tips for evaluating online and virtual classroom training

• internal certification programs and evaluation

• integrating evaluation and design

• developing an evaluation plan

• advantages and disadvantages of evaluation instruments

• the Evaluation Planning Project Plan.


Criteria for Course Evaluation

Can all courses be evaluated? Yes! Can all courses be evaluated at all levels? No. At a minimum, a Level 1 evaluation can be implemented for any course. But, for example, if there is no identified business metric for tracking, you cannot establish impact. If the metric cannot be put into dollar terms, then you cannot measure ROI. If no mechanisms are built into the training to assess knowledge or acquisition of skills, you cannot conduct a Level 2 evaluation. This is why evaluation planning and the integration of evaluation into the instructional design process are so important.

That said, some guidelines are available to help you determine whether a training course can be evaluated (or easily evaluated):

1. Does the program have clear, measurable objectives based on the business analysis and needs assessment? Objectives should be written for both the terminal objectives (what the person should be able to do) and the enabling objectives (what the participant needs to know).

2. Is there a logical method to attain those objectives? This criterion encompasses everything from the design and development of the course to management's support, the facilitator's skill, the participant's readiness, and so on.

3. Is there an evaluation plan with supporting methods, instruments, and responsibilities? (This criterion will be discussed in detail later.)

4. Do the instructional strategies relate to and support the objectives? The instructional strategies provide for practice and application to the job; there is a direct link among the chosen instructional strategies, the course objectives, and evaluation.

5. Can the business metric be tracked? If the training is built upon your client's business objectives, data should be available and a tracking mechanism in place.

6. Can the business metric be converted to a dollar value, and can you determine the total costs of the training? This is critical for Level 4 evaluation (impact and ROI).

7. Do you have access to the field? This is important for data collection to support Level 3 evaluation (transfer to the job).

Think About This

The criteria for course evaluation provide guidelines to determine if your course can be effectively evaluated to the desired level. If you cannot answer yes to the questions in the criteria, a red flag should go up. You need to do some planning to ensure that your course can be evaluated.

Key Concepts: Design and Evaluation

Evaluation is not simply something you do at the end of a program. Evaluation is part and parcel of the design and development process—it begins at the front end of the design process (business analysis and needs assessment) and is integrated throughout.

Basic Rule 6

Evaluation starts at the beginning of instructional design.

Business Opportunity Analysis and Needs Assessment

Evaluation is based on business opportunity analysis and needs assessment. The business opportunity analysis addresses the strategies and goals of the client. What is the client trying to achieve? Is the client trying to increase product sales? Is the client facing a quality problem and seeking to reduce defects or returns? What about turnover? The business analysis identifies the client’s issues. You then identify the learning components that can resolve that issue.

The needs assessment should identify the knowledge, skills, and abilities required to address the business opportunity analysis. The training program is then designed to address these needs. Evaluation assesses the extent to which the participants have mastered the knowledge, skills, and abilities and applied them on the job, and the subsequent impact on the organization.

Learning Objectives

Learning objectives are critical for evaluation. They are statements of what the participants will be able to do as a result of the training and should be written in measurable terms. Learning objectives are supported by statements of what the participants need to know to fulfill them. Evaluation assesses whether the participant can “do” and whether the participant “knows” related to the objectives.

Specifically, objectives have three components:

• Performance: what the participant is expected to do, and what they need to know to support that performance

• Standard: the quality standard indicating acceptable performance

• Conditions: a statement of the conditions under which the participant will apply the new knowledge, skills, and abilities.

Learning objectives describe what the learner must be able to do in order to demonstrate mastery. Here are some examples of learning objectives that state the performance, standard, and conditions:

• Given five overdue credit situations and credit agreements for each situation, calculate the interest to be paid, with no errors.

• Given two customer situations, methods of negotiations, and guidelines for negotiating settlements, demonstrate how to negotiate a win-win settlement, staying within guidelines and using negotiation methods.

• Given the tools, manual, and a broken laser printer, repair the printer so that the printouts are properly aligned, in focus, and in three colors.

Avoid writing overly general objectives. Many such objectives begin with understand, appreciate, or a similar verb. Here is an example of a learning objective that is too general:

• The participant will understand the coaching process.

How do you measure "understand"? If you want to measure knowledge, you can ask the participant to list the five steps in the coaching process, or you can state the objective thus:

• Given the guidelines for conducting a coaching session, demonstrate an effective coaching session using all the guidelines.

Learning objectives reflect needs assessment, content, instructional strategies, and evaluation. They are derived from the needs assessment and business analysis, determine what must be included in the program’s content, and guide the choice of instructional strategies. Finally, learning objectives provide the criteria you evaluate against.

Noted

Learning objectives are written at the level to which you want to evaluate. For example, if your evaluation plan indicates evaluating the learning to Level 2, then you must have learning objectives for Level 2, which must state what the performer will be able to do and know. For Level 3, the objectives must indicate what the participant needs to do on the job.

Business Objectives

The client owns the business objectives that must be met. These were identified in the business opportunity analysis. The client is responsible for implementing strategies and tactics to achieve those objectives. Learning is only one of those strategies. The client then identifies the business metric based on business objectives, establishes the value of the metric, and identifies and provides access to a tracking mechanism to see if there is a change.

You will complete the evaluation plan with the client, who gives input about the extent to which evaluation is necessary for business decisions. The client also helps implement the evaluation plan.

Evaluation links training objectives to organizational goals, which are the client's business objectives that the training and development initiative is going to help attain. By helping the client be successful, you are viewed not only as a partner, but as a partner who adds value to the business unit. The evaluation effort links the business metric (from the client's objectives) to all four levels of evaluation.

Basic Rule 7

Involving the client is not optional.

Online and Virtual Classroom Training

McCain (2015) describes some of the benefits of online and virtual classroom training and then goes on to describe three main forms of online training: self-paced asynchronous learning, blended learning, and virtual training in a virtual classroom.

Online training and the virtual classroom provide an excellent alternative to face-to-face training. Some of the benefits include:

• delivering training in a cost-effective method while reducing travel and materials costs, which affects the cost side of training evaluation

• allowing a dispersed audience to effectively interact with other learners and the facilitator, thus reducing travel costs

• accessing a global audience within a single virtual classroom training course; reaching a wider audience in a cost-effective manner

• involving experts from within or outside the organization through virtual discussion, without having to be physically present, resulting in reduced costs and greater accessibility

• reaching participants who may not be able to attend a face-to-face training session.

There are many discussions and perspectives about what constitutes online training, but every definition includes a computer interface with some provision for interaction, and a virtual classroom with a groupware option.

Self-Paced Asynchronous Training

Each learner in a self-paced asynchronous training program typically interacts only with the computer and moves through the content at their own pace. The programs are usually broken down into sequential modules with timelines, and in some cases work may not be accepted if too much time lapses. The facilitator should provide feedback on each assignment within a couple of days of submission; it is critical that the learner receives complete and timely feedback.

However, asynchronous online learning programs can also include media, such as PowerPoint slides, videos, and charts, as well as threaded discussions, discussion boards, chats with other participants, and virtual teamwork. An asynchronous training program can resemble a webinar because it is not restricted to simply an individual, their computer, and the facilitator. However, interactive sessions require the participant to give up some amount of freedom, because they typically have more structure and a schedule.

Self-paced asynchronous training also allows for on-the-job application of ideas. You can give participants assignments (along with detailed instructions) and, if you wish, require management to sign off on the finished product.

This delivery method can be very cost-effective, because travel is eliminated for participants and the facilitator, as are some overhead costs. While the learner’s actual time in training may be more difficult to determine, an estimate can be developed during the design and development process.

Blended Learning

In blended learning programs, pre-work is sent to the learner to study and complete before the face-to-face sessions with other learners and the facilitator begin. Online follow-up can also be used to reinforce the learning or application on the job. This type of approach is common for learners who are not at the same knowledge and skill level prior to the classroom sessions. The online portion provides an easy way to distribute pretests and post-tests. Because blended learning may involve online learning, the virtual classroom, or both, it is not discussed as a separate learning environment.

Think About This

In both the self-paced and blended formats, learners can be engaged in threaded discussions or discussion boards. Here, the facilitator posts a question or scenario and requests that learners respond, engaging the facilitator and other learners. The result is participant interaction and learning from their respective locations, thus reducing costs.

Virtual Training

Virtual training is synchronous learning because the learners and facilitator are participating at the same time, usually in a virtual classroom. Cindy Huggett defines virtual training as “a highly interactive synchronous online instructor-led training class, with defined learning objectives, with participants who are individually connected from geographically dispersed locations, using a web-based classroom platform” (2013, 11).

In a virtual training program, all learners log in to a common site and engage in the session at the same time, regardless of their physical location. (Several software collaboration programs are available to support the virtual training environment.) The facilitator can present content and ask questions using the whiteboard, PowerPoints, and videos. Learners work in groups to make presentations, have side discussions, and work on projects. The facilitator can structure these virtual teams, building the diversity of the teams based on background, and provide 24/7 virtual breakout rooms.

Through these various forms of online and virtual classroom learning, the organization can reach its target audience with quality and effective training while reducing costs. All levels of evaluation can still be conducted; having a virtual focus group, electronically sending and tabulating surveys, and tracking and calculating data would also help reduce evaluation costs. However, some initial costs will go up, such as the design, development, technology infrastructure (used for more than training), and support costs. Furthermore, because blended learning incorporates some face-to-face interactions (usually brick and mortar), some of the cost advantages of online will be sacrificed. Nevertheless, if there is a large target audience, the per participant costs should go down.
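The closing cost point rests on simple arithmetic: higher fixed costs (design, development, technology infrastructure) amortize across the audience, so the per-participant cost falls as the target audience grows. A minimal sketch of that amortization, using hypothetical dollar figures that are illustrative only and not from the text:

```python
def per_participant_cost(fixed_costs: float, variable_cost_each: float,
                         num_participants: int) -> float:
    """Cost per participant: amortized fixed costs plus per-learner costs."""
    return fixed_costs / num_participants + variable_cost_each

# Hypothetical figures: $60,000 in up-front design, development, and
# platform costs; $50 per learner in support and materials.
small_audience = per_participant_cost(60_000, 50, 100)    # 650.0 per learner
large_audience = per_participant_cost(60_000, 50, 2_000)  # 80.0 per learner
```

With a large audience, the same fixed investment shrinks to a small share of each learner's cost, which is why per-participant costs should go down even though initial costs go up.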

Tips to Evaluating Online Training

Pappas (2015) provides 10 tips for evaluating online training, which relate to the training strategy, the course itself, and participants' performance.

Assessments. Tests provide the opportunity to determine whether individual employees are absorbing information and developing skill sets, or if they are falling behind and need coaching. Assessments also provide insights into course and training strategy improvement.

Course tracking. Course tracking allows evaluators to view training data, key statistics, and detailed information about an employee’s on-the-job performance. Evaluators can use these data to gauge progress and identify areas of the online training course that may need to be modified.

Surveys and polls. Surveys and polls at the end of the training (Level 1) provide information to improve the online training strategy.

Measurable goals. Since the improvement in both employee and business performance are the key goals, it’s important to have identified and tracked measurable goals. These become important for Level 4 evaluation.

Application of knowledge. For online or virtual classroom training to be effective, it must be used on the job. This can be determined by conducting Level 3 evaluation.

Employee satisfaction. Employee satisfaction is a key indicator of success, because employees will be less likely to participate if they are not enjoying the program. You can measure satisfaction using polls and interviews.

Focus groups. A focus group, which can be held in the virtual classroom, can provide insight into a specific course or broader issues around online or virtual classroom training. The information you are seeking will help determine who should participate in the focus group.

Performance results and ROI. The Level 4 evaluation examines the training costs (investment) and the impact to the organization (the shift in an identified metric). This is discussed in detail in Chapter 7 of this book.

Level of employee support. Ask employees who are not meeting their performance goals what supplemental training you could offer to help them succeed. A solid support structure is essential to a successful online or virtual training experience and this is one way to evaluate your current online training strategy.

The longevity test. Will your online or virtual training strategy continue to meet its objectives in the future and remain cost efficient?
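The performance results and ROI tip above rests on a calculation the text defers to Chapter 7. As a hedged sketch, the commonly used formulas express net benefits as a percentage of fully loaded costs, alongside a benefit-cost ratio; the dollar amounts below are hypothetical:

```python
def roi_percent(benefits: float, costs: float) -> float:
    """Training ROI: net benefits as a percentage of fully loaded costs."""
    return (benefits - costs) / costs * 100

def benefit_cost_ratio(benefits: float, costs: float) -> float:
    """Benefits returned per dollar of training cost."""
    return benefits / costs

# Hypothetical: the shift in the business metric is worth $150,000 and the
# fully loaded training costs total $100,000.
roi = roi_percent(150_000, 100_000)         # 50.0, i.e., 50% ROI
bcr = benefit_cost_ratio(150_000, 100_000)  # 1.5
```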

Internal Certification Programs and Evaluation

Certification is a form of recognition indicating a level of competence through demonstrated mastery of a professional body of knowledge and a dedication to staying current with new developments in the field. Within an organization, certification ensures professional competence resulting in improved individual and business unit performance. To become certified, individuals are tested against a set of criteria linking acquired knowledge and skills with the job performance requirements for their application.

Internal certification programs generally contain the elements of successful completion of a program of study (curriculum), such as:

• required and “elective” training courses

• criterion-referenced knowledge and skill test

• the application of the knowledge and skills to the job (many times through a project with management sign-off)

• a certain number of years of experience in the field.

Some programs may require a multirater behavioral competency assessment to measure desired behavior against a standard. Once certified, employees should be required to maintain their certification by taking additional training and development courses. Completing the certification requirements earns employees recognition from their peers, and possibly customers, as having achieved a level of competence.

The certification process is primarily linear or sequential, with a suggested path from beginning to end (see Figure 3-1). However, because the completion of all requirements is dependent upon the courses and when they are offered, some participants may have to take a course out of its recommended sequence. Within each course participants may be required to successfully complete assessments, which could occur in class (intermediate), immediately after class (comprehensive), or in the field (practical). In more rigorous programs each assessment is dependent upon the previous one because each assessment must be completed with performance reaching some minimum level or better before the next assessment can be taken. If participants do not score the required minimum, then they may have to enter a remediation process. The organization will decide how many tries are allowed before the participant has to stop the certification process.

Figure 3-1. Participant Certification Process

Reprinted with permission from Performance Advantage Group, 2016.

Evaluation for Internal Certification

While the level of assessment is course specific, the certification process lends itself to Level 2 and 3 evaluations. Many courses are facilitator led, so Level 1 evaluation can be conducted, and Level 4 evaluation is also possible because the process is performance based and built for transfer. As noted in Figure 3-1 there are several places to conduct assessments.

The types of assessments used in certificate programs are the same as those used in most training courses. Level 2 knowledge assessments can be customized for each course. Assessment instruments could include a computerized knowledge test, case studies, role plays, or simulations, as well as multirater instruments, which assess on-the-job behaviors. Skill-based application exercises, learning contracts, and work products support Level 3 evaluation.

One way to move beyond the knowledge test is to take an assessment-centered approach to certification. Many certification programs have a cutoff score for passing, which requires determining on-the-job competencies and establishing a passing score based on job performance requirements.

The Integration of Evaluation and Design

The Designing for Impact model (Figure 3-2) depicts the major steps of design and development and the role of evaluation at each step. This model depicts the entire process, although evaluation-related segments are discussed here in more detail than the others.

The business opportunity analysis identifies the client’s objectives, which then become the business metric for evaluation. Some examples of metrics include product sales figures, turnover rates, number of defects, rework rates, number of grievances, customer satisfaction ratings, customer service level adherence, inventory turns, collection period, and so forth. Stage 1 then compares the current performance with the business objectives to determine the gap. Stage 1 also identifies the knowledge, skills, and abilities required to close the gap.

Think About This

Evaluation is linked to the business analysis and needs assessment and is part of the design. Each level of evaluation is linked to provide a strong case for learning, transfer, and impact.

The macro design, stage 2, is where the course starts to take shape. The activities in the macro design include developing the learning objectives, developing a content outline, identifying initial instructional and transfer strategies (for example, role plays, case studies, action plans, peer teaching, performance contracts), conducting the audience analysis, choosing potential delivery methods, and completing the evaluation plan. Once completed, you discuss the stage 2 work with your client and, if possible, some potential participants and their managers.

Then, the content is fully developed based on the learning objectives and outline (stage 3). The learning objectives determine what content is included (or excluded). Based on the information from the macro design and content, you determine the delivery method (stage 4), which could be classroom, self-study, e-learning, blended learning, and so forth. This stage is followed by a content review (stage 5) with the same individuals who provided input for the macro design. This discussion covers the content of the course and the method(s) to deliver the content. You want to verify both for appropriateness of the audience.

With client and participant support, develop the instructional and transfer strategies (stage 6). These should be consistent with the learning objectives and provide for assessment, practice, and application. The instructional strategies can become evaluation methods and tools. For stages 7 through 10, remember to get continued input from the client and the sample of participants. Not only does this input serve as a basis for continually refining the course, but it also transfers ownership from the design staff to the client organization.

Figure 3-2. The Designing for Impact Model

Reprinted with permission from Performance Advantage Group, 2016.

Stage 11 (secure facilitators) is not part of the traditional four levels of evaluation. Nevertheless, you must evaluate facilitators to determine their capabilities and acceptability to the target audience. Appendix A provides a facilitator checklist to assist you with selecting facilitators.

The pilot (stage 12) has an evaluation component to it. The pilot evaluation usually covers Levels 1 and 2 by securing extensive feedback. Stage 13 (conduct train-the-trainer) is where you make sure you have the skilled facilitators for delivery. This also helps to ensure consistency in delivery. While conducting train-the-trainer, areas for refinement could be identified. This in-depth information is used for course revisions before rollout (stage 14).

Stage 15 (conduct evaluations) includes the implementation of the evaluation plan and can extend several months after course delivery.

Noted

Continued input from the client and potential learners leads to a learning experience that is relevant to the learners’ jobs. As the client and learners provide more input, they take on more ownership and are less likely to resist the course content and more likely to support transfer.

Developing the Evaluation Plan

The evaluation plan provides a structure to guide your thinking about evaluation. This written plan describes how to monitor and evaluate the training, and helps identify how the training department and the client will use the results.

According to Phillips and Phillips (2003), there are four outcomes or benefits of developing an evaluation plan:

Save money and time. An evaluation plan saves time and makes data collection and analysis easier.

Improve the quality and quantity of data. An evaluation plan limits the gathering of useless or insufficient data. Data quality improves by ensuring an appropriate response rate.

Ensure stakeholder needs are addressed. This addresses the issue of what level you are evaluating to. This metric determination should be part of the initial client meeting. Generally clients are more concerned with Level 3 (transfer and the environment) and some with Level 4 (impact and ROI).

Prioritize budgeting. Planning the data collection methods, sources, accountabilities, and timing can help properly allocate budget dollars to the evaluation plan.

Although learning experiences should be designed for learning, transfer, and impact, you will not evaluate all training to all four levels. The extent to which you evaluate any given course depends on several factors, with more extensive evaluations being conducted under the following conditions:

• The course is expected to be part of a core curriculum and have a long life.

• The training is linked to client’s objectives and is important for meeting organizational goals.

• The course supports a strategic initiative for the training organization.

• The program carries a high cost (the more a program costs, the more extensive its evaluation should be).

• The training has high visibility with senior management.

• There is a relatively large target audience.

• Data are readily available.

• There is a defined business metric that has a dollar value associated with it.

• Change in performance is measurable.

• Attendance is mandatory for the learners.

• Senior management requests the evaluation.

• Data can be converted easily to monetary value.

• The redesign and development effort necessary to improve the course is not significant.

Allison Titcomb (2000, 1-2) discusses six questions to help with evaluation planning:

• What’s the question?

• What information will help answer the question?

• Who or what will provide the data?

• How will we gather the data?

• What might the results look like?

• Will the results answer the original question?

These questions help you define the program’s purpose and who will be involved in the evaluation, your target outcomes, what data sources to use, what evaluation measures to use, and your imagined results.

Noted

The level to which you evaluate a training course is a decision between the client and the training organization. However, as a guideline, only evaluate to the extent that the client needs the information for decision making; don’t do more analysis than the client requires.

Think About This

The further you evaluate a course (more levels), the more valuable the information but the more difficult, expensive, and time-consuming it is to get the information. As you move to Levels 3 and 4, you have less control over the data collection because the information comes from areas beyond the training course and outside the training organization.

Figure 3-3 depicts the evaluation plan. Notice that the four levels contain subparts, allowing more discrimination in data collection for decision making. The first area to complete is the business metric section. In the space provided, you indicate the business metric, provided by the client, which the training program is to address. Then complete the matrix for each level.

Figure 3-3. The Evaluation Plan

Reprinted with permission from Performance Advantage Group, 2016.
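One way to visualize the plan's structure is as a small data model: a business metric plus a matrix of rows across the four levels. The sketch below is illustrative only; the text confirms the business metric section and the what and why columns, while the method and timing fields are hypothetical additions for completeness.

```python
from dataclasses import dataclass

@dataclass
class EvalPlanRow:
    level: int    # evaluation level, 1-4
    what: str     # what do you want to know? (confirmed column)
    why: str      # why do you want to know it? (confirmed column)
    method: str   # hypothetical column: how the data will be collected
    timing: str   # hypothetical column: when collection occurs

business_metric = "customer satisfaction ratings"  # provided by the client

plan = [
    EvalPlanRow(1, "Was the content relevant to the job?",
                "Insights for possible revision of content",
                "end-of-course survey", "last day of class"),
    EvalPlanRow(3, "What parts of the course content are being used on the job?",
                "Verification of needs assessment and content",
                "manager interviews", "90 days after delivery"),
]
```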

Basic Rule 8

Determine the what and why behind the desired evaluation levels before completing the remainder of the evaluation plan for a program.

The What and Why Behind Evaluation Levels

For each area of the evaluation plan it is possible to get more information by asking what and why: What do you want to know and why do you want to know it? What and why go hand in hand; they are the first two columns in the evaluation plan. The question of what usually pertains to facilitator skills, course content, instructional and transfer strategies, and the course rollout. The question of why relates to the decisions that have to be made. Table 3-1 offers some examples of what and why for Level 1 evaluation.

Table 3-1. The What and Why Behind Level 1 Evaluation

What: To what extent were the course objectives met?
Why: Insights for revising content or instructional strategies; assessment of facilitator effectiveness

What: Was the facilitator effective?
Why: Training for the facilitator; using another facilitator

What: Was the facilitator credible?
Why: Training for the facilitator; using another facilitator who has more content depth

What: Did the facilitator promote an environment of learning?
Why: Training for the facilitator; using another facilitator

What: Was the content relevant to the job?
Why: Insights for possible revision of content

What: Was the content presented in the appropriate sequence?
Why: Insights for possible revision of content; possible facilitator training

What: Were the instructional strategies effective?
Why: Revision of instructional strategies

What: To what extent did the instructional strategies reinforce the content?
Why: Revision of instructional strategies

What: Did the participant guide or other materials enhance the learning?
Why: Revision of participant materials; change of format

What: Was the environment conducive to learning?
Why: Change of locations; better facility management

What: Did the multiple forms of media enhance learning?
Why: Change of media types used

What: Did the media meet quality expectations?
Why: Improvement to meet expectations

What: Did the pre-course material prepare participants for the training?
Why: Revision or elimination of pre-course material

What: Did the participants complete the pre-course assignments?
Why: Elimination of pre-course work; change in timing or delivery method; strategies to ensure completion of the pre-course work

Level 1 evaluation (reaction) gives the evaluator insights into possible areas for revision. However, the training organization will want feedback from several deliveries before taking action on many of the findings. For example, you would not want to change course content or redesign the instructional strategies based on the feedback from just one or two deliveries. While trainers often make decisions based on Level 1, they should get more information.

Level 2 evaluation (learning) has two parts: learning and application (the demonstration of the learning within that learning experience). Table 3-2 offers some examples of what and why for Level 2 evaluation.

Table 3-2. The What and Why Behind Level 2 Evaluation

What: To what extent was there a shift in knowledge?
Why: Facilitator development; test (re)development; revision of content or instructional strategies

What: Did the assessments accurately measure learning?
Why: Redevelopment of assessments

What: Did the assessments accurately measure the participants' demonstration of the learning?
Why: Redevelopment of assessments; change in participant instructions

What: Did the training content prepare participants for successful learning?
Why: Redesign and development of content; reanalysis of needs assessment or audience

What: Did the instructional strategies allow for practice and demonstration?
Why: Revision or change of instructional strategies; explanation of instructions by facilitator

What: Were the behavioral checklists used effectively?
Why: Facilitator training; revision of instruments; improvement in instructions
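The shift-in-knowledge question lends itself to a simple pretest/post-test comparison. A minimal sketch, using hypothetical scores; the pass threshold and score pairs are illustrative, not from the text:

```python
def knowledge_gain(pre: float, post: float) -> float:
    """Percentage-point shift from pretest to post-test score."""
    return post - pre

# Hypothetical (pretest, post-test) percentage scores for three participants.
scores = [(55, 85), (60, 80), (70, 90)]

gains = [knowledge_gain(pre, post) for pre, post in scores]
avg_gain = sum(gains) / len(gains)  # average shift in percentage points
```

A consistently small average gain would point back to the why column above: facilitator development, test redevelopment, or revision of content or instructional strategies.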

Level 3 (transfer or application) also consists of two parts: use on the job and the environmental factors that support or hinder the use of the new knowledge, skills, or abilities on the job. Consider some examples of the what and why for Level 3 evaluation (Table 3-3).

Table 3-3. The What and Why Behind Level 3 Evaluation

What Do You Want to Know?

Why Do You Want to Know It?

What parts of the course content are being used on the job?

Verification of needs assessment and content; facilitator training or change of facilitators; course design relative to practice and application activities

How is the participant’s manager supporting the use of the new knowledge, skills, and abilities on the job?

Learning reinforcement; learner recognition and reward; use of learners as advocates and source of testimonials

How is the participant’s manager hindering the use of the new knowledge, skills, and abilities on the job?

Development of strategies for management involvement and support; redesign or development of instructional strategies; assessment of appropriateness of content to the audience

Does the culture support training and development?

Development of strategies for culture change; identification of processes and leader behaviors that inhibit or support transfer; development of ways to relate training to other HR practices, such as career paths, recognition and reward, staffing, and job requirements

Level 4 (results) comprises impact as measured by the change in the business metric as a result of training and ROI. Consider some examples of the what and why for Level 4 evaluation (Table 3-4).

Table 3-4. The What and Why Behind Level 4 Evaluation

What Do You Want to Know?

Why Do You Want to Know It?

Did the business metric change?

Effectiveness of the training; extent to which client’s needs were met; value to the client; course continuance; course redesign and development

How much did the business metric change?

Value for the client; content for a communication plan

What part of the business metric change is attributable to the training?

Isolation of variables to see training’s contribution to the impact; cost-benefit analysis; program continuance

Were there other benefits?

Added value for client relationship and communication

What is the ROI?

Program funding and continuance

What are the total costs?

Budgeting; management of the training course; efficient use of resources

What is the cost breakdown?

Better cost management; comparison of vendor costs; reduction of program costs

What is the dollar value of the benefits?

ROI calculation; communication with the client and management
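The ROI and cost questions in Table 3-4 come down to simple arithmetic once the business metric has been converted to dollars and the total training costs are known. The sketch below shows the commonly used ROI formula (net benefits divided by total costs, expressed as a percentage) alongside the benefit-cost ratio; the function names and dollar figures are illustrative, not from the text.

```python
# Illustrative sketch of the standard training ROI calculation.
# Figures are hypothetical examples, not data from this chapter.

def roi_percent(benefits: float, costs: float) -> float:
    """Return ROI as a percentage: net benefits divided by total costs."""
    if costs <= 0:
        raise ValueError("Total costs must be positive")
    return (benefits - costs) / costs * 100

def benefit_cost_ratio(benefits: float, costs: float) -> float:
    """Return the benefit-cost ratio (BCR), often reported alongside ROI."""
    return benefits / costs

# Example: total training costs of $50,000; the isolated, dollar-valued
# benefit attributed to the training during the evaluation period is $80,000.
print(roi_percent(80_000, 50_000))         # prints 60.0 (a 60% return)
print(benefit_cost_ratio(80_000, 50_000))  # prints 1.6
```

Note that the benefits figure must already reflect the isolation step from Table 3-4 (the part of the metric change attributable to training), or the ROI will be overstated.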

The Chain of Evaluation

You need to develop the evaluation plan fully, up to the level at which the course will be evaluated. For example, it is difficult to demonstrate impact due to training if you cannot demonstrate that the training (new knowledge, skills, and abilities) is being used on the job. This chain of causality (Figure 3-4) is critical to a comprehensive evaluation and the integrity of the process.

Figure 3-4. The Chain of Evaluation

The best approach is to measure effectiveness at each level and improve accomplishment at each level. In other words, by making your program an effective learning experience at Level 1, ensuring that knowledge and skills are mastered at Level 2, and confirming that transfer occurs and the work environment supports application at Level 3, you generate the greatest likelihood that the Level 4 measures will meet your goals.

The remainder of the evaluation plan indicates how you plan to gather the required information. How refers to the methods used to collect the information, some of which are better for one level than another. Sources are where you get the information. When refers to the timing of the collection of the information for your evaluation. Where is the physical location of the information. Who is the person or people responsible for providing the information.
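The how, sources, when, where, and who components can be captured row by row, one row per evaluation question. The sketch below is a hypothetical illustration of that structure, not the book's actual Figure 3-5 format; the field values are invented examples.

```python
# Hypothetical sketch of one row of an evaluation plan, organized around
# the components named in the text: how (method), sources, when, where, who.

evaluation_plan = [
    {
        "level": 3,
        "question": "Are the new consultative selling steps used on calls?",
        "how": "Behavioral observation checklist",     # method/instrument
        "sources": "Recorded customer calls",          # where the info comes from
        "when": "Three to six months after training",  # timing of collection
        "where": "Regional call centers",              # physical location
        "who": "Sales team supervisors",               # responsible people
    },
]

for row in evaluation_plan:
    print(f"Level {row['level']}: {row['how']} — {row['when']}")
```

Laying the plan out this way makes gaps obvious: any row with a blank cell is a collection step that has not yet been assigned a method, a source, or an owner.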

Figure 3-5 presents a completed evaluation plan with some ideas. You will need to align the information with your course and organization.

Figure 3-5. Example of a Completed Evaluation Plan

The Flipped Evaluation

Pangarkar and Kirkwood (2015) present the Flipped Evaluation Model (Flipped K) to support the idea that learning improves the performance that drives business objectives. Rather than beginning the evaluation process with Level 1, this model begins with Level 4 and concludes with Level 2; the authors see little value in Level 1 evaluation. This approach puts the emphasis on improving business performance as executives define goals and objectives to align operations with expected performance.

The idea of a flipped evaluation model is similar to thinking about the chain of causality in reverse order. To gain a positive impact and ROI (Level 4) based on an identified metric, you need to know what knowledge and skills are used on the job (Level 3). Then, you need to figure out how to ensure that learners gain and can demonstrate the knowledge and skills (Level 2). Finally, based on your conclusions, you’ll create the course objectives, instructional strategies, materials, and facilitation practices to support the learning environment (Level 1).

Clark (2015) also advocates flipping the traditional model, beginning with results and ending with motivation as it relates to the learners’ perceived need for a change in performance. The revised model in Figure 3-6 integrates planning and evaluation.

Figure 3-6. Revised Evaluation Model

Jim and Wendy Kirkpatrick (2015) put Level 4 first in the belief that this is how training professionals organize their training evaluation work, placing a focus on what is most important. By using the four levels upside down, you begin the project by identifying the leading indicators or business metric the training is meant to influence (such as cost containment, sales, market share, customer satisfaction, employee engagement, and quality). This is followed by identifying what needs to happen on the job to ensure the results (Level 3). Given this, you then identify what training and support is needed for good on-the-job performance. Last, you identify the type of training required to teach the required skills. The New World Kirkpatrick Model (Figure 3-7) reflects this new way of thinking about evaluating.

Figure 3-7. The New World Kirkpatrick Model

Reprinted with permission from Kirkpatrick and Kirkpatrick (2015).

Noted

Evaluation for Levels 3 and 4 is conducted after the training and in the field. The timing is a function of the skill difficulty and the environment. The more complex the skills, the longer it takes a participant to become proficient, because the learning curve is longer and requires more reinforcement. You also have to wait long enough for the environment to take effect. It is not unusual for participants to go back and start using the new knowledge, skills, and abilities only to run into barriers; waiting allows these barriers to have an impact. Finally, there must be enough time for the business metric to shift. Generally, a Level 3 evaluation is conducted about three to six months after the training initiative, and a Level 4 evaluation about nine to 12 months after it.

Instruments: Advantages and Disadvantages

The how in the evaluation plan refers to the method and instrument used to do the research. When selecting a method or instrument, you should also consider:

• the time it takes to develop the instrument and then collect and analyze the information for decisions

• the staff’s ability to develop data collection and assessment instruments

• whether the culture supports assessments and evaluation of learning

• the knowledge of the group coming to the training initiative

• how the instrument will be used

• how and to whom feedback will be presented.

With the various options available, knowing some advantages and disadvantages of some of the methods can help you decide which instrument to use. Table 3-5 is a good guide.

Table 3-5. Advantages and Disadvantages of Several Data Collection Instruments

Method

Advantages

Disadvantages

Written Questionnaires

• Relatively fast and easy to administer and calculate results

• Can be anonymous; anonymity lets individuals express their true feelings and increases honesty

• Relatively low cost

• Variety of formats

• Easily quantified

• Bad reputation as sole tool for measuring reactions

• May lack accuracy as individuals hurry to complete it

• Questionable rate of return

• Requires specific directions

• Must be jargon free

• Responders determine actual return time

Phone Surveys

• Saves the travel expense associated with interviews

• Provides for probing

• Once contacted, provides immediate response

• Makes a personal contact

• Individuals are difficult to reach by phone

• Must develop protocol

• Interviewer must be trained

• Respondent bias, saying what they think the interviewer wants to hear

• Body language not seen

• Respondent may become impatient

Electronic and Online Surveys

• Relatively inexpensive

• Easy and quick to distribute and receive back

• Easy and quick to tabulate results

• Allows for easy analysis of results

• Can provide both objective and comment responses

• Results can be automatically stored on the server

• Difficult to construct

• Must provide explicit instructions

• Must select appropriate scale and format for responses

• Must have consistent software for tabulation of results

• Easy for recipients to simply respond without reading the question or statement

Chat Programs

• Provide group discussion in a classroom

• Gain feedback from a group of learners or one-on-one

• The evaluator must “interpret” the intent of the chat message(s)

• Some chat discussions can get off track

• Comments may be too brief or ambiguous for analysis

Email

• Provides the asynchronous alternative to chat

• Learners can use email to submit their comments directly to the facilitator or evaluator

• The evaluator must “interpret” the intent of the email messages

• Comments may be too brief or ambiguous for analysis

Interviews

• Permits individualized give-and-take

• Flexible

• Interviewers can follow up with questions and thereby probe for information

• Trained interviewers improve quality of information

• Protocol ensures consistency in format

• Costly; travel expenses for field interviews

• Can be time-consuming

• Must have trained interviewers

• Labor intensive

• Face-to-face interviews may create fear and result in biased information

Focus Groups

• Allows face-to-face discussion and interaction of all learners

• Fast

• Low cost

• Permits group members to obtain ideas from one another

• Protocol ensures consistency in format

• Good qualitative responses

• Face-to-face discussions allow individuals to dominate the discussion, creating false conclusions that are not representative of a group

• Limited in the quantity of information that can be obtained

• May be hard to arrange

• Hard to summarize and interpret information

• Labor intensive

• Must have trained leader

Tests

• Can be written or oral

• Provide written documentation

• Reinforces content

• Easy to score

• Multiple formats: true or false, multiple-choice, matching, completion, listing, essay

• Difficult to write

• Some people fear tests

• Must be part of the course design

• People worry that test results will become known to others and used inappropriately

Observation

• Can be nonthreatening

• Checklist provides consistency

• Good measure of change in behavior

• Must develop a checklist

• Can be obtrusive

• May get biased results

• Observer must be trained

• Can be threatening

Performance Test

• Reliable

• Job related

• Objective

• Takes time

• Costly

• Simulations or instruments difficult to construct

Extant Data or Client or Company Performance Records

• Accepted by client

• Objective

• Measurable

• Can determine dollar value

• Organization is tracking the data

• Reliable

• Job related

• May not be in a usable form

• Internal political issues

• Access to data

• May need to interpret the data

• May not be tracked according to your timeline for evaluation

Steps to Develop Instruments

According to the ATD Learning System, the following steps can be used to develop evaluation instruments:

• Determine the purpose(s) the tool will serve.

• Determine the format or media that will be used to present and track results.

• Select the items that are important for the business to track and will help maintain the quality of products and services.

• Determine what ranking or rating scales will be used (such as Likert or a different scale).

• Identify what, if any, demographics are needed (for example, length of time with the company, position, department), but keep an eye on confidentiality.

• Determine how open comments and suggestions will be captured (for example, as part of each question or at the bottom of the survey) and reported. Are they even necessary?

• Identify the degree of flexibility the tool needs.

• Determine when and how the tool will be distributed after the event.

• Determine how the results will be tracked, monitored, and reported.

• Determine how the results will be communicated.

The Project Plan

Trying to implement a comprehensive evaluation plan can be a daunting task. How are all the parts of the plan organized, and when are the various components executed? It is helpful to create a simple project plan indicating milestones, timelines, and accountabilities. This can be created easily using the table function in Microsoft Word or Excel. Remember, some activities will be sequential and others will be concurrent; be sure to show these by shading the time requirements. Showing more detail will help you manage the project more effectively. The Evaluation Planning Project Plan (Figure 3-8) provides an example.
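The same milestone-and-timeline thinking can be sketched in a few lines of code. The plan below is purely hypothetical: the milestones, owners, and durations are invented for illustration, and the text's actual recommendation is a shaded table in Word or Excel. The point is only to show sequential versus concurrent activities, where tasks sharing a start offset run at the same time.

```python
# Hypothetical sketch of an evaluation project plan with sequential and
# concurrent activities. All milestones, owners, and durations are invented.

from datetime import date, timedelta

# (milestone, owner, start_offset_weeks, duration_weeks)
plan = [
    ("Finalize evaluation plan",      "Evaluator",   0, 2),
    ("Develop Level 2 assessments",   "Designer",    2, 3),  # sequential: after plan
    ("Develop Level 3 checklist",     "Evaluator",   2, 3),  # concurrent with above
    ("Pilot delivery and Levels 1-2", "Facilitator", 5, 1),
    ("Level 3 field data collection", "Evaluator",  18, 4),  # ~3 months post-training
]

kickoff = date(2025, 1, 6)
for milestone, owner, start, weeks in plan:
    begins = kickoff + timedelta(weeks=start)
    ends = begins + timedelta(weeks=weeks)
    print(f"{milestone:32s} {owner:12s} {begins} -> {ends}")
```

Even this minimal listing makes the chapter's point visible: the two development tasks overlap, while the Level 3 collection milestone sits months downstream, matching the timing guidance given earlier.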

Figure 3-8. The Evaluation Planning Project Plan

Getting It Done

Here are two activities to help you apply the ideas in this chapter. Recall the guidelines presented in this chapter, which indicate whether a training course can be evaluated or easily evaluated. For Exercise 3-1, consider a training course that is either under development or one that you want to evaluate. Then, indicate whether the course meets the guidelines by checking the yes or no box. For areas where you indicated no, develop actions that can improve your ability to evaluate the course. Write them in the space provided.

Next, try your hand at developing an evaluation plan for one of your courses under development (Exercise 3-2). Start by identifying, in conjunction with your client, the business metric the training is to address and the level to which the course will be evaluated. For some hints, look back at Figure 3-5 and tables from earlier in the chapter.

Exercise 3-1. How Easy Is Your Program to Evaluate?

Exercise 3-2. Develop an Evaluation Plan for One of Your Programs

Business metric(s): _________________________________________________ (from business analysis)

Reprinted with permission from Performance Advantage Group, 2016.
