Chapter 26

Evaluating Technical Training

Toni Hodges DeTuncq

In This Chapter

This case study describes how a basic maintenance course was developed and evaluated. Upon completion of this chapter, you will be able to

  • develop a plan for conducting an impact study for a technical program
  • develop an objective map that links learning objectives to performance objectives and performance objectives to business objectives
  • logically develop conclusions based on the results from an impact study and develop recommendations based on those conclusions.
 

Background

A large telecommunications company created a basic maintenance course to meet key organizational goals. The company assembled a task force to flesh out the business, performance, and learning objectives for the design of the course; it comprised representatives from operations, the maintenance field group, and the training organization, along with an evaluator. The task force's goal was to determine what the training organization could do to meet the business and performance objectives and what other solutions would be necessary to help meet the goals.

Business Objectives

The operations representative explained that two business areas needed improvement: customer satisfaction and the number of times a residential telephone service call must be repeated due to troubles not being taken care of the first time. In addition, operations wanted to be prepared for an internal audit occurring within the next year. The evaluator asked the operations representative to attach a metric to each of these objectives and record them on an objective map (Hodges, 2002). The evaluator also asked if they expected any barriers within the business, financial, or customer arena, and if enablers could be put into place to mitigate or eliminate the barriers. These were also documented on the map.

Performance Objectives

The field organization reviewed the business objectives and brainstormed with the maintainers' supervisors and several successful maintenance personnel who represented the course's target audience. They listed performance objectives, linked directly to the business objectives, on the objective map. The evaluator asked the field personnel whether they expected any barriers within field operations to interfere with meeting the performance objectives, and whether enablers already existed or could be put into place to mitigate or eliminate those barriers. These also were recorded on the objective map.

Learning Objectives

The training organization then reviewed the performance objectives to begin developing their learning objectives directly linked to each performance objective. They recorded these on the objective map. They also listed any barriers and/or enablers within the training environment and added those to the objective map.

Evaluation

Next came evaluation planning. The evaluator worked with the operations representative to determine how to collect the business objective data. This required the operations representative to decide on acceptable standards and delta measures, that is, how much improvement was required or wanted (Levels 4 and 5; Phillips, 2003). They also discussed who within operations would receive the study reports. The evaluator then worked with the field representative to determine the best approach for gathering performance data (Level 3) and asked who within field operations would receive the evaluation results. At this point, they planned the methods for isolating the effects of the program and converting data to monetary values. Finally, the evaluator worked with the training organization to determine how to measure the extent to which the learning objectives were met (Level 2). This completed the objective map for the program, which can be found in figure 26-1. Minor revisions were made to the map as the design, implementation, and evaluation proceeded, particularly as more enablers were established to mitigate or remove the barriers.
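Although the objective map was developed on paper, its structure is essentially a set of linked records. The sketch below is purely illustrative; the class and field names are hypothetical and are not part of the published methodology, but they show one way the learning-to-performance-to-business linkage could be captured.

# Hypothetical sketch of the objective map's linkages as data; names are
# illustrative only, not taken from the source.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LearningObjective:
    label: str          # e.g., "1.a.1"
    statement: str
    measurement: str    # e.g., "pre/post knowledge-based test"

@dataclass
class PerformanceObjective:
    label: str          # e.g., "2a"
    statement: str
    measurement: str    # e.g., "participant follow-up questionnaire"
    barriers: List[str] = field(default_factory=list)
    enablers: List[str] = field(default_factory=list)
    learning_objectives: List[LearningObjective] = field(default_factory=list)

@dataclass
class BusinessObjective:
    statement: str      # e.g., "Reduce repeat calls by 25 percent"
    metric: str         # e.g., "cost savings of repeat calls"
    performance_objectives: List[PerformanceObjective] = field(default_factory=list)

# Example entry drawn from the course's second business objective.
repeat_calls = BusinessObjective(
    statement="Reduce repeat calls by 25 percent",
    metric="cost savings of repeat calls",
    performance_objectives=[
        PerformanceObjective(
            label="2a",
            statement="Use correct tools for various maintenance work operations",
            measurement="participant and supervisor follow-up questionnaires",
        )
    ],
)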

Evaluation Methodology

The evaluators used the organization’s standard postcourse satisfaction questionnaire to measure satisfaction (Level 1). The program included a pre/post knowledge-based test to measure most of the learning objectives (Level 2). The test used true/false, multiple-choice, and short-answer questions, and it underwent a content validation to ensure that each of the learning objectives was addressed (Hodges, 2002). The test was administered as soon as the course began, and an identical test was given after the course was complete. Evaluators reviewed the posttest with participants after completion to reinforce the learning objectives. They then compared test scores and conducted a t-test to determine whether the difference between the pre- and posttest averages was statistically significant. They also analyzed the individual test questions to determine whether certain parts of the course produced lower scores than others.
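The chapter does not specify which t-test variant was used. The sketch below assumes a paired test (because the same participants took both tests) and uses made-up scores purely to illustrate the comparison; the study itself reported averages of 40 percent (pretest) and 85 percent (posttest).

# Minimal sketch of the pre/post comparison described above.
# Scores are hypothetical; a paired t-test is assumed.
from scipy import stats

pre = [35, 42, 38, 45, 40, 44, 36, 41, 39, 43]   # pretest scores (%)
post = [82, 88, 80, 90, 85, 87, 83, 86, 84, 89]  # posttest scores (%)

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests a significant gain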

Evaluators also constructed a performance-based test, used during the in-class scenarios, to measure the remaining learning objectives (Level 2). They designed a performance-based checklist, validated its content, and tested it for inter-rater reliability. Reliability testing is important to ensure that the test does not yield different scores depending on who scores it (Shrock and Coscarelli, 2000). Participants reviewed the test before they took it, so they were fully aware of what they would be rated on. This helped them prepare and learn prior to the test and also helped reduce their test anxiety.
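The chapter does not name the reliability statistic that was used. Percent agreement and Cohen's kappa are two common choices for a pass/fail checklist; the sketch below uses hypothetical ratings from two raters and is only an illustration of the idea.

# Hypothetical inter-rater reliability check for a pass/fail checklist.
# The study does not state which statistic was used; percent agreement
# and Cohen's kappa are shown here as common options.

def percent_agreement(rater_a, rater_b):
    """Share of checklist items on which the two raters agree."""
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Agreement corrected for chance, based on each rater's pass rate."""
    observed = percent_agreement(rater_a, rater_b)
    p_a = sum(rater_a) / len(rater_a)
    p_b = sum(rater_b) / len(rater_b)
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# Two raters scoring the same ten checklist items (1 = pass, 0 = fail).
a = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0]
b = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1]
print(percent_agreement(a, b), round(cohens_kappa(a, b), 2))  # 0.8, 0.52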

Evaluators designed participant and supervisor online follow-up questionnaires to measure on-the-job performance (Level 3). They piloted the questionnaires and then administered them three months after the course was completed. Respondents received a list of the performance objectives and were asked to rate, on an 11-point scale (0 to 100 percent), the extent to which they believed they, or their direct reports, had been able to successfully complete each behavior. To isolate the effects of the program, they were asked to estimate the extent (0 to 100 percent) to which they believed the result was due to the basic maintenance course. After each of these questions, they were asked how confident they were in these estimates, again on a 0- to 100-percent scale. Finally, they were asked to list the barriers they believed interfered with their ability to use their new skills and the enablers they believed helped them use those skills.

Figure 26-1. Basic Maintenance Course Objective Map

Business Objectives

1. Improve customer satisfaction by 50 percent
   Metric: customer service index scores
   Barrier: service charges increased; Enabler: products improved

2. Reduce repeat calls by 25 percent
   Metric: cost savings of repeat calls
   Barrier: none; Enabler: purchase of a new technology system

3. Improve internal audit scores for the maintenance department by 40 points or receive an honorary mention award
   Metric: audit scores/award
   Barriers/Enablers: none

Performance Objectives

1a. Record customer needs using the customer contact tools
1b. Identify intervention during customer calls
1c. Check for understanding on each call, ensuring the trouble is cleared
1d. Suggest products to customers to meet their needs on each call
1e. Close calls using the maintenance field procedures
1f. Determine callback procedures
2a. Use correct tools for various maintenance work operations
2b. Inspect tools for replacement or repair each week
2c. Use the correct tools for required procedure(s)
2d. Complete the maintenance procedure using the correct procedure manual
2e. Isolate, diagnose, and repair customer troubles given trouble tickets/maintenance work requests
3a. Complete all trouble reports thoroughly
3b. Properly code time reports
3c. Report all customer dissatisfactions and actions taken to resolve them
3d. Conduct all maintenance procedures in accordance with the company's quality standards

Measurement methodology: participant and supervisor follow-up questionnaires; review of reports; follow-up questionnaire with supervisors. Methods to isolate: participant and supervisor estimates and a control versus experimental group design.

Enablers/Barriers: Barrier: customer contact tools not complete. Barrier: new procedures; Enabler: job aids to be created. Barrier: lack of supervisor coaching; Enabler: job aid to be created. Barrier: workload. Barrier: complexity of reports; Enabler: none.

Learning Objectives (partial listing)

1.a.1. Identify the four main reasons for customer dissatisfaction
1.a.2. Identify the eight customer contact tools
1.a.3. Select and apply the appropriate customer contact tools for various customer contact scenarios
1.b.1. Use the company's troubleshooting process to identify a strategy to locate the fault in a customer's line
1.b.2. Identify the symptoms of residential troubles and the common tests required to locate these troubles
1.c.1. Identify the steps required to communicate with the customer during the process of clearing the trouble
1.d.1. Define the classes of service
1.d.2. Describe the various optional services (customer calling features)
1.d.3. Provide information on maintenance service plans
1.e.1. Describe customer contact, closeout, and callback procedures
2.a.1. Identify the basic hand/power tools and materials essential to the performance of various maintenance work operations
2.a.2. Describe the proper usage and application of basic/power tools while completing various maintenance work activities
2.b.1. Correctly inspect basic hand/power tools and determine when to replace or request repair of any faulty tools uncovered
2.c.1. (see 2.b.1)
2.d.1. Identify common meters and their controls
2.d.2. Correctly read meter displays
2.d.3. Perform a series of simulated tests to identify, isolate, and locate faults
2.e.1. Demonstrate the ability to isolate, diagnose, and repair customer troubles given trouble tickets/maintenance work requests and appropriate tools
3.a.1. Explain the importance of a properly completed trouble report
3.a.2. Describe what comprises the different parts of a trouble report
3.b.1. Properly code a trouble report given various scenarios
3.c.1. Given a customer dissatisfaction example, list the different parts of a customer dissatisfaction report
3.c.2. Complete a customer dissatisfaction report
3.d.1. List the company's different quality standards
3.d.2. Demonstrate an understanding of the maintenance quality standards

Measurement methodology: pre/post knowledge-based test (knowledge objectives); performance-based test using a performance-based checklist (skill objectives).

Enablers/Barriers: Barrier: not enough classroom time; Enabler: real-life scenarios for the performance-based test.

Evaluators used the customer satisfaction index to compare the participants' scores before and after the course. They also compared the after scores with those of a control group (maintainers with similar attributes who had not yet attended the class), and they tracked and compared the difference in repeat calls before and three months after the course. The participants' repeat calls were also compared with those of the control group. The repeat calls were easily converted to monetary values because the company had a standard amount associated with each call. Finally, evaluators reviewed the internal audit scores to see whether they were deemed good by operations. Naturally, an honorary mention was hoped for because none had been achieved to date.

Results

Level 1. The course satisfaction questionnaire yielded a 4.2 average (on a 5.0 scale). This exceeded the overall average of 3.8 from all courses.

Level 2. There was a statistically significant difference between the pretest scores (40-percent average) and the posttest scores (85-percent average) on the knowledge-based test; the score ranges were 30 percent and 15 percent, respectively. The average score on the performance-based test was 72 percent. However, one score (45 percent) was an outlier, so it was removed to more accurately reflect how the class performed as a whole; without it, the average was 82 percent.

Level 3. Evaluators performed the following calculation for each performance item on the follow-up participant and supervisor questionnaires:

Adjusted rating = performance rating × percent of the result attributed to the course (isolation) × confidence rating

Example: performance rating (80 percent) × isolation (60 percent) × confidence rating (65 percent) = 31 percent
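A minimal sketch of this adjustment follows, using the worked example above and the first row of table 26-1 as checks; the function name is illustrative only.

# Minimal sketch of the Level 3 adjustment described above.

def adjusted_rating(performance_pct, isolation_pct, confidence_pct):
    """Discount a reported performance rating by the share attributed to the
    course (isolation) and by the respondent's confidence in the estimate."""
    return performance_pct * (isolation_pct / 100) * (confidence_pct / 100)

# Worked example from the text: 80% x 60% x 65% is roughly 31%.
print(round(adjusted_rating(80, 60, 65)))  # -> 31

# First row of table 26-1: 85 x 80% x 60% is roughly 41%.
print(round(adjusted_rating(85, 80, 60)))  # -> 41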

This final rating was then averaged among the different performances. Table 26-1 provides the compilation of the average performance ratings.

All but one of the performances had similar ratings, as seen in table 26-1. The analysis of the individual participant ratings did not find any significant differences among the performances or the participants. Table 26-2 provides the average performance ratings provided by the supervisors.

As shown in table 26-2, there was little difference between the participant ratings and those of the supervisors.

Table 26-1. Average Performance Ratings by Participants


Performance Performance Rating Isolation Rating Confidence Rating Adjusted Rating
Record customer needs using the customer contact tools 85 80 60 41%
Identify intervention during customer calls 90 85 45 34%
Check for understanding on each call ensuring the trouble is cleared 84 90 80 60%
Suggest products to customers to meet their needs on each call 78 90 75 53%
Close calls using the maintenance field procedures 88 90 100 79%
Determine callback procedures 85 84 80 57%
Use correct tools for various maintenance work operations 90 50 80 36%
Inspect tools for replacement or repair each week 95 40 50 19%
Use the correct tools for required procedure(s) 65 80 75 39%
Complete the maintenance procedure using the correct procedure manual 75 88 80 53%
Isolate, diagnose, and repair customer troubles given trouble tickets/maintenance work requests 64 95 85 52%
Complete all trouble reports thoroughly 82 75 80 49%
Properly code time reports 88 90 90 71%
Report all customer dissatisfactions and actions taken to resolve them 75 65 100 49%
Conduct all maintenance procedures in accordance with company’s quality standards 90 90 90 90%
Averages 82% 79% 78% 52%

Table 26-2. Average Performance Ratings by Supervisors


Performance Performance Rating Isolation Rating Confidence Rating Adjusted Rating
Record customer needs using the customer contact tools 80 70 75 42%
Identify intervention during customer calls 85 85 40 29%
Check for understanding on each call ensuring the trouble is cleared 80 80 80 51%
Suggest products to customers to meet their needs on each call 75 90 75 51%
Close calls using the maintenance field procedures 80 80 90 58%

Determine callback procedures 90 85 90 69%
Use correct tools for various maintenance work operations 85 40 90 31%
Inspect tools for replacement or repair each week 90 30 60 16%
Use the correct tools for required procedure(s) 60 75 80 36%
Complete the maintenance procedure using the correct procedure manual 75 80 80 48%
Isolate, diagnose, and repair customer troubles given trouble tickets/maintenance work requests 75 95 75 53%
Complete all trouble reports thoroughly 75 70 80 42%
Properly code time reports 85 85 95 69%
Report all customer dissatisfactions and actions taken to resolve them 75 50 90 34%
Conduct all maintenance procedures in accordance with company’s quality standards 85 85 90 65%
Averages 80% 73% 79% 46%

Barriers and Enablers. The barrier noted most often by the participants was unavailability of tools needed to complete the job. The enablers noted most often by the participants were the training and peer coaching.

Level 4. The two comparisons for the customer service index scores can be found in table 26-3.

Although the difference between the participant group's before and after scores was not statistically significant, their scores did improve while the control group's scores barely moved. Likewise, the difference between the participant and control groups measured after the participants attended the course was not statistically significant, but a difference was evident.

The internal audit conducted for the maintenance department three months after the training showed a 25-point improvement over the previous audit one year earlier. Moreover, after the auditors reviewed the results of the postcourse questionnaire used for the basic maintenance course impact study (Level 3), the group received its first honorary mention. The maintenance department was cited for “being accountable for performance and taking steps that produced notable improvement in performance.”

The customer service improvement, the internal audit scores, and the honorary mention could not be converted to a monetary value and are therefore considered intangible benefits of the basic maintenance course.

The monetary value of each repeat call (the organization's standard value) was $135.60, which reflected the combined loaded hourly rates of the maintenance administrator and the technician and the average time required to complete a call. The two comparisons, participant versus control group and before versus after, can be found in table 26-4.

These results show not only that the participant and control groups had similar numbers of repeat calls before the training (which made them good comparison groups), but also that the control group's numbers before and after were similar. Because the control versus experimental (participant) group comparison was used for isolation, the difference between the participant and control groups after the training (32 calls) was used for the ROI calculation. This represented a cost savings of 32 × $135.60, or $4,339.20, for the quarter. Because the organization did not want to wait an entire year for results, it assumed the same savings would be seen in each subsequent quarter, or $4,339.20 × 4 = $17,356.80 in annual savings for this group of 40. The control group and an additional 40 maintainers were expected to complete the course within the next year, so this group's savings represented one-third of the total population. The task force therefore expected roughly $52,070 in annual savings from the entire group. These savings are the tangible benefits of the program and were used for the ROI calculation.
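The arithmetic behind these figures can be reproduced directly from the numbers reported in the study; the brief sketch below simply restates that calculation and uses no values beyond those already given.

# Sketch of the tangible-benefit calculation reported above, using the
# study's own figures.

VALUE_PER_REPEAT_CALL = 135.60       # loaded cost of one repeat call ($)
calls_avoided = 50 - 18              # control group after (50) minus participants after (18)

quarterly_savings = calls_avoided * VALUE_PER_REPEAT_CALL      # $4,339.20
annual_savings_pilot_group = quarterly_savings * 4             # $17,356.80 (group of 40)
annual_savings_total = annual_savings_pilot_group * 3          # ~$52,070 (entire population)

print(f"${annual_savings_total:,.2f}")   # -> $52,070.40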

Table 26-3. Customer Service Index Score Comparisons


Customer Service Index Scores Before After
Participant group average (n=40) 60 points 80 points
Control group average (n=35) 65 points 69 points

Table 26-4. Repeat Call Number Comparisons


Repeat Call Number Before After (three months)
Participant total number (n=40) 52 18
Control group number (n=35) 58 50

Level 5. To determine the costs of the program, evaluators amortized the needs assessment (the objective mapping), the design and delivery of the course, and the evaluation. The value of the time the 115 participants had spent, or would spend, in the program was added, along with facility and computer costs, for a total of $40,405. The ROI and benefit-cost ratio (BCR) calculations were as follows:

BCR = program benefits / program costs = $52,070 / $40,405 = 1.29:1

ROI (%) = (program benefits − program costs) / program costs × 100 = ($52,070 − $40,405) / $40,405 × 100 = 29 percent
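The same two calculations in code form, using the reported benefit and cost figures; this is only a restatement of the standard formulas above.

# ROI and BCR calculations (Phillips, 2003) using the reported figures.

program_benefits = 52_070.40   # annualized tangible benefits from repeat-call savings ($)
program_costs = 40_405.00      # amortized development, delivery, and evaluation costs ($)

bcr = program_benefits / program_costs                               # ~1.29
roi_pct = (program_benefits - program_costs) / program_costs * 100   # ~29 percent

print(f"BCR = {bcr:.2f}:1, ROI = {roi_pct:.0f}%")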

Conclusions and Recommendations

The task force that had been assembled to develop the objective map was convened again to review the results of the study. The evaluator explained the results to the group and answered questions where clarification was needed. Together, they developed a logic map to draw conclusions from the results and to form recommendations based on those conclusions (Hodges, 2002). Table 26-5 provides the logic map for the basic maintenance course impact study.

Table 26-5. Basic Maintenance Course Impact Study Logic Map


1. Result: The course yielded an average 4.2 out of 5.0 satisfaction rating on the Level 1 instrument.
Conclusion: The participants were satisfied with the course.

2. Result: A significant difference was found between the pre- and posttest scores, with an average posttest score of 85 percent.
Conclusion: There was a learning gain.

3. Result: The performance-based test yielded an average score of 82 percent once the outlier was removed.
Conclusion: There was an acceptable demonstration of skills during the training.

4. Result: Participants estimated an overall adjusted rating of 52-percent improvement in performance on the job.
Conclusion: The participants believed their job performance had improved as a result of the course.

5. Result: Supervisors estimated an overall adjusted rating of 46-percent improvement in performance on the job.
Conclusion: The supervisors believed their direct reports’ job performance had improved as a result of the course.

6. Result: “Inspecting tools for replacement or repair each week” was rated the lowest by each group (19 percent/16 percent).
Conclusion: Lack of available tools was rated as the single largest barrier on the job and could have caused this rating.

7. Result: “Use correct tools for various maintenance work operations” was rated the second lowest by each group (36 percent by both).
Conclusion: Lack of available tools appears to have had an effect here as well.
Recommendation: An inventory of available tools must be made and shortfalls taken care of.

8. Result: There was a 20-point difference in the customer service index scores for the participants before and after the training, while the control group showed an 8-point drop.
Conclusion: There was approximately a 33-percent improvement in customer satisfaction as a result of the course, falling short of the 50-percent improvement goal.

9. Result: The maintenance department received an honorary mention from the internal audit review, which cited the basic maintenance course as an example of accountability and improvement.
Conclusion: An honorary mention was achieved even though the goal of a 40-point improvement in internal audit scores was not met.

10. Result: There were 34 fewer repeat calls by the participant group after the training, whereas there were eight fewer calls by the control group.
Conclusion: There was a 65-percent reduction in repeat calls, surpassing the 25-percent improvement goal.

11. Result: A 29-percent ROI and a 1.29:1 BCR were calculated for this course.
Conclusion: A positive ROI and BCR were achieved.
Recommendations: The course as designed and implemented should be continued. The results of this study should be used to advertise the course offering, and the course should be used as a model for other course designs and impact studies.

Communication of Findings

An executive briefing was conducted with the organization's human resource and operations vice presidents, along with representatives from the financial and training organizations. A complete report was distributed to the participants of the impact study, and the results were published in the organization's monthly newsletter.

Summary

This technical training program yielded a 29-percent ROI, based on a reduction in repeat calls, that is, repeat service calls that would have been unnecessary had the maintainers diagnosed the trouble correctly the first time. The program also improved customer satisfaction. Based on these results, and on the fact that the program was evaluated for impact, the operations unit responsible for the program received an improved internal audit score, as well as its first honorary mention.

Knowledge Check

Answer the following questions and check your answers in the appendix.

1. Who should be the stakeholders to develop an objective map for a program and when should the map be initially developed?

2. What method(s) were used to isolate the effects of the program in this study?

3. In this study, the participants and the supervisors were asked the extent to which the performance objectives were met, how much of that success was due to the training, and how confident they were of their ratings. Is it important that the ratings from the participants and the supervisors be the same or similar? Why or why not?

4. Do you believe the recommendations made by the task force were warranted?

About the Author

Toni Hodges DeTuncq is principal of THD & Company. For the past 20 years, she has concentrated on measuring and managing human performance and has conducted and managed operational, systems, and group evaluations for corporate, defense contracting, and government organizations. Her work has included developing individual assessment tools, as well as large organizational tracking tools, all aimed at measuring the performance and monetary value of human resource and systems intervention programs. She currently provides consulting services, which help organizations establish accountable and effective training and evaluation programs, and provides skill enhancement workshops. She has conducted more than 50 impact assessments for organizations such as Bell Atlantic, Verizon Communications, BMW Manufacturing, and more. She has developed system-wide evaluation programs for Bank of America, National Aeronautics and Space Administration (NASA) Goddard Space Flight Center, and Scotiabank, Canada.

DeTuncq was selected as one of nine “Training’s New Guard–2001” by ASTD, and was featured in the May 2001 issue of T+D magazine. In 2000, the ROI Network named her “Practitioner of the Year.” She has published numerous articles, was the editor of the best-selling ASTD In Action series Measuring Learning and Performance, and is author of Linking Learning and Performance: A Practical Guide to Measuring Learning and On-the-Job Application and coauthor of Make Training Evaluation Work. She can be reached at toni@thdandco.com.

References

Hodges, T. K. (2002). Linking Learning and Performance: A Practical Guide to Measuring Learning and On-the-Job Application. Amsterdam: Butterworth-Heinemann.

Phillips, J. J. (2003). Return on Investment in Training and Performance Improvement Programs, 2nd ed. Amsterdam: Butterworth-Heinemann.

Shrock, S. A., and W. C. Coscarelli. (2000). Criterion-Referenced Test Development: Technical and Legal Guidelines for Corporate Training and Certification. Washington, DC: International Society for Performance Improvement.
