Chapter 22

Selecting Technology to Support Evaluation

Kirk Smith

In This Chapter

This chapter looks at technologies designed specifically for training evaluation: technologies that can capture, store, retrieve, and report evaluation data. In this chapter, you will learn to

  • discriminate among different tools
  • develop criteria from which to make your decision
  • select the most appropriate tool for your short-term and long-term evaluation needs.
 

Why Technology?

Evaluating training at a significant level can be an onerous job. On one hand, you want enough data to be able to make sound business decisions, and on the other hand, you do not want to create a measurement bureaucracy. You want actionable information—information that is going to improve your processes, job impact, and/or business results. Technology can make it easier for you, depending on what you are trying to accomplish. This requires asking the right questions.

Asking the Right Questions

The top three questions I have found to be most important are

  • What do you want to know?
  • How are you going to use it?
  • Who are the intended users of the information?

Once you and your team have asked these questions and understand the answers, the next step is to identify and prioritize the intended uses of the information. These answers and actions will form the basis for developing the criteria to make the decision about the right technology tool for you and your organization.

Examples of criteria include

  • keep costs to a minimum
  • easy reporting
  • good technical help desk
  • customizable questionnaires
  • access to raw data
  • automated report distribution
  • easy implementation.

Establishing your criteria before discussing specific technologies is important. This type of decision analysis is not about identifying choices and making a case for one specific alternative. Instead, establish what needs to be accomplished, and then find the alternative that best accomplishes it (Kepner and Tregoe, 1997). Figure 22-1 provides a form to help you develop a list of your decision criteria for technology to support evaluation (criteria and objectives are used interchangeably).

Tips for Developing Decision Criteria

Ask

• What short- and long-term benefits or results do we want?

• What resources should we use or save?

• What restrictions influence this choice?

• What minimums must we meet?

Thought Starters

• Consider how time, cost, customers, management, and so on influence this choice.

• Be clear and specific.

• Use short statements—include measures.

• Involve those who will approve or implement.

• Stay away from objectives that are just features of alternatives.

The Technologies

Now that you are armed with your criteria, let’s look at some alternatives. A discussion of every available tool is beyond the scope of this chapter, so we have chosen four that should give you a good idea of what is available and what the various options have to offer. They fall into three categories: online survey software, stand-alone systems, and analytics modules within learning management systems (LMS).

SurveyMonkey (www.surveymonkey.com)

SurveyMonkey is probably the least expensive and easiest tool with which to get started. This online survey design tool allows you to quickly and easily design and administer your surveys. It is not specifically designed for the measurement and evaluation of training, but can be used—and is being used by many—to do so. If you are going to send out surveys with 10 or fewer items to 100 or fewer respondents, it can be free. Free is good. Three hundred dollars per year allows you to design as many surveys as you need with as many questions as you need. Most types of questions are supported, such as multiple-choice, Likert, and so on.

SurveyMonkey also collects responses for you, and collection can be controlled by date or by number of responses. The reporting allows you to show charts and graphs, but also allows you to drill down to individual responses. A filtering and cross-tabulation function allows you to view and report on segments of your respondent data. For more detailed analysis, you can download raw data into different formats, including spreadsheets.

Figure 22-1. Decision Criteria Exercise

Develop your own criteria below:

1.
2.
3.
4.
5.
6.

 

Metrics That Matter (www.knowledgeadvisors.com)

KnowledgeAdvisors was a pioneer in learning analytics when it introduced Metrics That Matter. It provided a technology-enabled way to capture, store, retrieve, analyze, and report on learning and development data. It is based upon Kirkpatrick’s and Phillips’ frameworks and includes concepts from Brinkerhoff’s Success Case Method (2003). It is a one-stop shop that allows you to measure all aspects of enterprise learning, from activity metrics to return-on-investment (ROI).

The standard surveys are postevent and follow-up surveys, sent via email or administered manually. Multirater 360-degree feedback instruments are also available. The postevent and follow-up survey items fall into seven categories:

  • instructor
  • environment
  • courseware
  • learning
  • job impact
  • business results
  • ROI.

The standard reporting module contains more than 30 standard reports, divided into four categories: ROI tools, executive tools, aggregate tools, and tactical tools. The ROI tools are based upon the Phillips ROI Methodology (Phillips, 2003). The executive tools provide a high-level look at results for senior management. The aggregate tools provide cumulative data, and the tactical tools look primarily at individual classes. Through filters, the reporting capability is virtually unlimited. Another useful and time-saving feature is that any report can be automated to be completed and emailed to individuals of your choosing on a regular schedule.

If you want to take data analysis to a more sophisticated level, you can perform raw data downloads. This is a fairly simple process within Metrics That Matter and allows you to import the data into spreadsheet programs and, from there, into statistical analysis programs. The active authoring feature offers you an opportunity to customize your surveys, develop other kinds of surveys, and write assessments for pre- and/or posttesting. Pricing is based upon the number of prospective users. KnowledgeAdvisors upgrades Metrics That Matter twice a year.

Sensei/ROI (www.senseiroi.com)

Sensei/ROI is a fairly new solution in the United States and is gaining ground. Developed by Galway, Ireland-based Gaelstorm, this software is built exclusively on the Phillips ROI Methodology. It is extremely comprehensive in making sure all of the process steps of the ROI Methodology are followed. It is a process management tool that walks and coaches you through the entire ROI process. The administrative steps are automated so that you are free to spend more time on the higher-level objectives of the Kirkpatrick and Phillips frameworks and models. It allows you to more easily measure Level 3 application virtually 100 percent of the time. It makes it easier for you to customize and streamline your evaluation planning process by coaching you through the objective-setting process that is so important in the initial stages of planning. Many of the input fields also have drop-down menus, which saves time as well. It uses customizable surveys to gather the pre- and postprogram data, including 360-degree multirater feedback instruments. The output is a clean, one-page summary of the ROI Methodology results, called a “Sensei Map.” The Sensei Map can be high level enough for senior management’s at-a-glance needs or detailed enough to support business decisions based upon the data.

The Sensei/ROI process begins with planning and then follows the Phillips (2003) ROI Methodology process steps. The Sensei Map then shows, in a dashboard format, the results that can help make future decisions that align training to the business strategy.

Reports can be automatically completed and distributed. Most clients use the program hosted on Gaelstorm’s servers, but in-house, behind-the-firewall hosting is available. Pricing is on a pay-as-you-go basis, determined by the number of studies conducted, with pricing breakpoints available for higher volumes.

Learn.com (www.learn.com)

Learn.com is a learning management system provider. Traditionally, LMS providers have included basic analytics capability within the system; now, in response to market demand, many have expanded their analytics modules to include the capability to evaluate training effectiveness, not just activity and compliance (Bersin, 2009).

Learn.com’s LearnCenter Performance Dashboard is built around delivering at-a-glance, on-demand reporting to all levels of the organization. More than 60 standard reports are available. Because virtually everything done within training is captured inside the LMS, the amount of data is quite extensive and provides maximum reporting capability and flexibility. Robustness is increased dramatically by access to Crystal Reports, which allows you to merge data from disparate databases into reports. Learn.com pledges an ROI guarantee for its clients: the guarantee promises a positive financial ROI in the first year of implementation, and if that is not achieved, the second-year fees are waived.

Direct learning effectiveness data are collected in the same way as with the other technologies: through customizable surveys and questionnaires. Multirater 360-degree feedback capability is also included. The reporting and performance data can also be integrated into a client’s performance management system. Reports can be run both for teams with formal reporting relationships and for ad hoc groups, such as project teams.

Pricing is based upon the number of users, along with volume breakpoint discounts. One of the disadvantages of Learn.com’s LearnCenter Performance Dashboard module is that it is not a stand-alone system. It is available only within the LMS.

These four technologies represent some of the opportunities to support the training measurement and evaluation processes. Table 22-1 presents a list of other technologies available.

The Decision

Now that you have some basic information about four representative training evaluation technologies, how do you decide which one is for you? The rest of this chapter will be devoted to walking you through a decision analysis. The method is based upon the work of Kepner and Tregoe (1997), who present a rational process for making decisions. Here are the steps:

1. Develop objectives.

2. Classify them into musts and wants.

3. Weigh the wants.

4. Generate alternatives.

5. Evaluate the alternatives against the musts and wants.

6. Identify risks.

Table 22-1. Other Technologies


Technology Type URL
Zoomerang Online surveys http://www.zoomerang.com
Key Survey Online surveys http://www.keysurvey.com
C3 Analytics Stand-alone http://www.c3analytics.com
SAS Analytics Stand-alone http://www.sas.com/technologies/analytics
SumTotal LMS http://www.sumtotalsystems.com
Plateau LMS http://www.plateau.com

 

Classify Objectives

Let’s go back to the objectives you developed earlier. The first thing you want to do is classify these objectives into “musts” and “wants.” An objective is a must if you can answer yes to all three of these questions:

  • Is it mandatory (required)?
  • Is it measurable (set limit)?
  • Is it realistic (can be met)?

Out of a set of six to 10 objectives, you might have one to three musts. Everything else is a want. As you will see, the musts tell us who gets to play, and the wants tell us who wins. Let’s look at the sample criteria in table 22-2. The objective “Keep costs to a minimum” cannot be a must, because it is not measurable with a limit. The word “minimum” is too vague to be a measurable limit. Anytime a phrase contains the words minimize, maximize, or optimize, it cannot be a must. The only must is “access to raw data.” It is mandatory in this case. It is measurable with a limit; either you can download raw data or you cannot. It is realistic. You signify a must with an “M” in the first column. Everything else is a want.

Weigh the Wants

The next step is to weigh the wants. Wants do not all have the same importance, so you must attach relative numerical weights to each of them. You determine your most important want(s) and give it (them) a weight of 10. You can have more than one 10. The other wants are weighted relative to the 10s. For example, if another want is half as important as a 10, then weigh it with a 5. Do not weigh the wants in an ordinal manner (that is, 10, 9, 8, 7, and so on). Keep in mind that you will more than likely be doing this with a group of people and facilitation skills are sometimes needed to come to agreement on weighting and alternative evaluation scores that are to come. Table 22-3 shows the weighting of your objectives.

Table 22-2. Must Objectives


  Objectives
Keep costs to a minimum
Easy reporting
Good technical help desk
Customizable questionnaires
Access to raw data
Automated report distribution
Easy implementation

Generate Alternatives

The next step is to generate your alternatives by identifying possible choices. To be fair and impartial, the alternatives here are numbered and do not represent the technologies discussed above; this exercise is not meant to show you which technology to use, but to demonstrate a decision analysis method as a tool for making your own choice. Next, screen the alternatives against the musts. If an alternative cannot satisfy a must, it is eliminated from your choices. Remember, musts tell you who gets to play the game, and wants tell you who wins. Table 22-4 is a matrix with the alternatives across the top and your objectives in the first column, after the alternatives have been screened through the must objective of access to raw data; alternative 3 is eliminated here. A short sketch of this screening step in code appears below.
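To make the screening step concrete, here is a minimal Python sketch. The alternative names and the must result for each are illustrative placeholders, not a claim about any of the tools discussed above; the only element taken from the example is the single must objective, access to raw data.

```python
# Screen alternatives against the must objective "access to raw data".
# An alternative that fails a must is a NO GO and drops out before the
# wants are ever scored.

# Illustrative data: True = GO (satisfies the must), False = NO GO.
raw_data_access = {
    "Alternative 1": True,
    "Alternative 2": True,
    "Alternative 3": False,  # cannot export raw data, so it is eliminated
    "Alternative 4": True,
}

survivors = [name for name, passes in raw_data_access.items() if passes]
print(survivors)  # ['Alternative 1', 'Alternative 2', 'Alternative 4']
```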

Evaluate Alternatives

The next step is to compare alternatives against the wants, evaluating the relative performance for each want to determine which alternatives create the most benefit. This is done by asking how each alternative performs against each want objective. Score the best performing alternative for each want with a 10. Give the other alternatives scores relative to the best performer. It is possible, and sometimes likely, that there are ties for the best alternative. Then, multiply the performance score by the weighting of each want and total the weighted scores.

Table 22-3. Weighted Objectives


Weight Objectives
7 Keep costs to a minimum
6 Easy reporting
7 Good technical help desk
10 Customizable questionnaires
M Access to raw data
4 Automated report distribution
6 Easy implementation

 

Table 22-4. Screening Through the Musts


Weight | Objectives | Alternative 1 | Alternative 2 | Alternative 3 | Alternative 4
7 | Keep costs to a minimum | | | |
6 | Easy reporting | | | |
7 | Good technical help desk | | | |
10 | Customizable questionnaires | | | |
M | Access to raw data | GO | GO | NO GO | GO
4 | Automated report distribution | | | |
6 | Easy implementation | | | |

For example, the top performer for the first objective, “Keep costs to a minimum,” is alternative 1, so it gets a score of 10. Alternative 2 and alternative 4 rate scores of 7 and 4, respectively, based upon their costs relative to alternative 1. We then multiplied the weight (from the first column) of 7 by the performance scores to arrive at the totals for that objective: 70 for alternative 1, 49 for alternative 2, and 28 for alternative 4. After this process is done for every want objective, you add the weighted scores in each column to arrive at a total for each alternative. Table 22-5 shows the completed evaluation, and the short sketch below automates the same arithmetic.
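For readers who want to automate the bookkeeping, here is a minimal Python sketch of the weighted-scoring step. It uses the weights from Table 22-3 and the performance scores from Table 22-5 (alternative 3 has already been screened out by the must); the structure of the code itself is an illustrative choice, not part of the method.

```python
# Weights of the want objectives (Table 22-3). The must objective,
# access to raw data, is not weighted; it was used only for screening.
weights = {
    "Keep costs to a minimum": 7,
    "Easy reporting": 6,
    "Good technical help desk": 7,
    "Customizable questionnaires": 10,
    "Automated report distribution": 4,
    "Easy implementation": 6,
}

# Performance scores against each want (Table 22-5); 10 = best performer.
scores = {
    "Alternative 1": {"Keep costs to a minimum": 10, "Easy reporting": 5,
                      "Good technical help desk": 3, "Customizable questionnaires": 10,
                      "Automated report distribution": 3, "Easy implementation": 10},
    "Alternative 2": {"Keep costs to a minimum": 7, "Easy reporting": 10,
                      "Good technical help desk": 10, "Customizable questionnaires": 10,
                      "Automated report distribution": 8, "Easy implementation": 7},
    "Alternative 4": {"Keep costs to a minimum": 4, "Easy reporting": 8,
                      "Good technical help desk": 7, "Customizable questionnaires": 10,
                      "Automated report distribution": 10, "Easy implementation": 3},
}

# Multiply each performance score by its weight and total the results.
for alternative, performance in scores.items():
    total = sum(weights[obj] * performance[obj] for obj in weights)
    print(alternative, total)  # 293, 353, and 283, matching Table 22-5
```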

Identify Risks

You have a pretty clear winner in alternative 2, but there is still one step left. Identify any risks associated with the highest scoring alternative. If you can live with the risks, you have made your decision. If you cannot, you go to the next highest scoring alternative and do the same risk analysis. Some questions to ask during your risk assessment are

  • What could go wrong in the short and long term if we implement this solution?
  • What are the implications of being close to a must limit?
  • What disadvantages are associated with this alternative?
  • Did I make any invalid assumptions about this alternative?

For this exercise, we did not identify any intolerable risks, so the decision is made. To learn more about this decision analysis technique, see Kepner and Tregoe (1997).

Table 22-5. Completed Analysis (cells show performance score/weighted score)


Weight | Objectives | Alternative 1 | Alternative 2 | Alternative 3 | Alternative 4
7 | Keep costs to a minimum | 10/70 | 7/49 | | 4/28
6 | Easy reporting | 5/30 | 10/60 | | 8/48
7 | Good technical help desk | 3/21 | 10/70 | | 7/49
10 | Customizable questionnaires | 10/100 | 10/100 | | 10/100
M | Access to raw data | GO | GO | NO GO | GO
4 | Automated report distribution | 3/12 | 8/32 | | 10/40
6 | Easy implementation | 10/60 | 7/42 | | 3/18
 | Totals | 293 | 353 | | 283

Summary

Measurement and evaluation of training programs is something all of us should be doing at some level, and probably at a higher level than we are doing now. Time constraints and lack of analytical skills are two of the main roadblocks. The right learning analytics technology can help you in both areas. There are several candidates for the right technology for you, depending on what you are trying to accomplish. We briefly described four of them: SurveyMonkey, Metrics That Matter, Sensei/ROI, and Learn.com. The key is the same as in conducting an evaluation study: develop your objectives early in your planning stage and let them be your guide. We walked through a decision analysis using a rational process. Whether you use this method or another does not matter, as long as you are as objective as you can be and take politics and emotion out of the decision process. A rational method helps you do this so you end up with the right evaluation technology for your needs.

Knowledge Check: Musts and Wants

Now that you have seen the decision process, how do you know whether a decision objective is a must or want? Check your answer in the appendix.

About the Author

Kirk Smith, PMP, CPT, CRP, is a freelance performance consultant and also an adjunct faculty member for three universities. He teaches organization performance; organizational communications; project management; research and evaluation methods in human resources; business, ethics, and society; and human resource development. His primary practitioner focus is on measuring and evaluating the effectiveness of performance improvement projects, human capital analytics, transferring critical thinking skills in client organizations, and facilitating issue resolution through systemic solutions. Smith has significant experience in analyzing the human performance systems in organizations to resolve performance improvement issues.

He is a PhD candidate in technology management with a specialization in human resource development and expects to defend his dissertation in 2010. His research interests are in measuring and evaluating informal learning, organizations as complex adaptive systems, and the role of network science in HRD. He is also a Project Management Professional (PMP) through the Project Management Institute, Certified Performance Technologist (CPT) through the International Society for Performance Improvement, and a Certified ROI Professional through the ROI Institute. He can be reached at [email protected].

References

Bersin, J. (2009). The State of Learning and Talent Measurement. Bersin and Associates.

Brinkerhoff, R. O. (2003). The Success Case Method. San Francisco: Berrett-Koehler.

Kepner, C. H., and B. B. Tregoe. (1997). The New Rational Manager. Princeton, NJ: Princeton Research Press.

Phillips, J. J. (2003). Return on Investment in Training and Performance Improvement Programs. San Francisco: Butterworth-Heinemann.

Additional Reading

Barnett, K. and J. Berk (2007). Human Capital Analytics: Measuring and Improving Learning and Talent Impact. Tarentum, PA: Word Association Publishers.

Davenport, T. H. and J. G. Harris (2007). Competing on Analytics: The New Science of Winning. Boston: Harvard Business School Press.

Fitz-enz, J. (2000). The ROI of Human Capital: Measuring the Economic Value of Employee Performance. New York: American Management Association.
