In Chapter 2, we discussed the process for conducting a needs assessment, which helps the instructional design professional scope out a problem or opportunity that can be addressed by an instructional or noninstructional intervention. We then covered designing and utilizing learning assessments, identifying the desired outcomes and aligning these with the overall goals of the instructional intervention. These chapters provide the foundation for evaluation, which, while conducted after the intervention, should be established early in the ID process, when business and performance needs are identified, desired behavior changes are determined, and learning objectives and measurement strategies are set. This chapter summarizes the evaluation strategies and tactics used to answer the critically important question—what difference did it make? The answer to this question provides insights into potential improvements that can be made and valuable information to share with a variety of stakeholders who want to know if their investments are yielding worthy returns.
In this chapter, we clarify assumptions about formative and summative evaluation, define key terms associated with these activities, and include a case study that dramatizes issues that can arise when developing a formative evaluation plan. We also describe the steps used to establish a formative evaluation plan and approaches to implementing that plan. This chapter also describes summative evaluation and covers various post-intervention evaluation models.
Instructional designers often hold the belief that their work is not finished until the targeted audience can learn from the material. The fourth edition of Mastering concentrated primarily on formative evaluation, which “involves gathering information on adequacy and using this information as a basis for further development” (Seels and Richey 1994, 57). This ensures that the instruction is sound before it is deployed to large numbers of intended learners. This edition provides more complete coverage of summative evaluation, which “involves gathering information on adequacy and using this information to make decisions about utilization” (Seels and Richey 1994, 57). It helps to determine the results of the instruction post-implementation. Evaluation in all its forms has figured prominently in instructional design practice as decision makers demand increasing accountability (Rothwell, Lindhold, and Wallick 2003).
According to The Standards (Koszalka, Russ-Eft, and Reiser 2013, 55), even though the competency and two performance statements associated with evaluation are classified as advanced, “even a novice instructional designer should be familiar with the need for evaluation and revision prior to any dissemination of instructional products and programs.” High accountability is now demanded for all or most forms of instruction, and evaluation skills are critical for all designers to possess. As organizations have tightened their budgets, the need to prove that training is a worthwhile investment has become more and more important. The performance statements associated with this competency indicate that instructional designers should be able to (Koszalka, Russ-Eft, and Reiser 2013, 56): “(a) Design evaluation plans (advanced); (b) Implement formative evaluation plans (essential); (c) Implement summative evaluation plans (essential); and (d) Prepare and disseminate evaluation report (advanced).”
Stakeholders are those who are interested in the results of an evaluation effort and can range from front-line employees to senior-level executives, including the CEO (Phillips and Phillips 2010a; Phillips and Phillips 2010b). The interests and objectives of stakeholders can vary and can even compete. Table 14.1 lists stakeholders and their potential interest(s) in the outcomes of the evaluation.
Table 14.1 Interest in Evaluation by Stakeholder
Stakeholder | Key Questions or Interests |
Instructional designers | Did the learning work as intended? Did learners respond favorably to the learning? What improvements can be made for next time? |
Instructors or facilitators | Were learners satisfied with the experience? Did learners achieve the intended outcomes? How can I improve my delivery? |
Learners | Did my knowledge or skill improve? Am I more productive or effective in my job? |
Managers of learners | Did my people acquire new knowledge or skills? Are they more productive or effective in their jobs? Did the benefits received outweigh the “cost” of participating? |
Executives or sponsors | Are learners more productive? Are learners demonstrating behaviors that will further our strategic objectives? Are learners adding greater value than prior to participating? |
When an instructional designer engages in evaluation, it is similar to a scientist conducting research. Many of the same principles, strategies, and techniques apply. Research has many purposes, but “there are two basic purposes for research: to learn something, or to gather evidence” (Taflinger 2011). Like the researcher, the instructional designer involved in evaluation efforts works to frame important questions and systematically attempts to answer those questions. The ability to collect and analyze data is central to the role of the evaluator.
There are many data sources that can be tapped into for evaluation. The data may be quantitative or numeric, such as survey responses using a 5-point scale. Qualitative data is non-numeric like the results of an interview. Some of the most commonly utilized sources of data include: the learners themselves, managers, subject experts, program sponsors, and what Allison Rossett (1987, 25) refers to as extant data or the “stuff that companies collect that represents the results of employee performance.” Examples of extant data include organizational performance data or reports (sales, profitability, productivity, quality), engagement or satisfaction surveys, and talent data or reports (retention, performance ratings, promotion, mobility, exit interviews).
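To make the quantitative/qualitative distinction concrete, here is a minimal sketch of how quantitative survey data on a 5-point scale might be summarized before interpretation. The function name, the sample ratings, and the summary fields are hypothetical illustrations, not part of any evaluation standard described in this chapter.

```python
from collections import Counter
from statistics import mean

def summarize_ratings(responses):
    """Summarize 5-point-scale survey responses: count, mean, and distribution."""
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("Responses must be on a 1-5 scale")
    return {
        "n": len(responses),
        "mean": round(mean(responses), 2),
        "distribution": dict(Counter(responses)),  # how many learners gave each rating
    }

# Hypothetical post-course satisfaction ratings from eight learners
ratings = [5, 4, 4, 3, 5, 2, 4, 5]
summary = summarize_ratings(ratings)
```

A mean alone can mask disagreement, which is why the sketch also reports the distribution; qualitative data, such as interview notes, would be analyzed thematically rather than numerically.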
Use of a single data source can lead to inaccurate evaluation results because the perspective being considered may be highly subjective or limited. It is therefore important to use multiple data sources whenever feasible and practical. Doing so can increase the validity of the data, a subject addressed later in this chapter.
Similar to the recommendation of tapping into multiple data sources when conducting an evaluation, we also recommend using multiple data collection methods when possible. Methods are how data are collected from source(s). Multiple sources and methods will help to ensure the designer does not have gaps in the data collected, thereby providing more valid information upon which to make decisions. Below are several data collection methods.
Evaluation that happens at the end of an intervention is summative evaluation. However, the intervention should also be evaluated throughout the design and development process. This evaluation is called formative evaluation and it helps to pinpoint adjustments that must be made during the design process, so desired results are more likely to be achieved.
Instructional designers make three fundamental assumptions when evaluating instructional materials and methods. First, they view evaluation as primarily a formative process. This assumption rests on the belief that instructional materials and methods should be evaluated—and revised—prior to widespread use to increase their instructional effectiveness. In this way, it is hoped that learner confusion will be minimized.
Second, instructional designers assume that evaluation means placing value on something. Evaluation is not objective and empirical; rather, it rests heavily on human judgment and human decisions. Human judgment reflects the individual values of instructional designers and the groups they serve.
Third, instructional designers expect to collect and analyze data as part of the evaluation process. To determine how well instructional materials and methods work, instructional designers must try them out. It is then possible, based on actual experience with learners, to make useful revisions to the materials.
Before undertaking a formative evaluation, instructional designers should take the time to familiarize themselves with at least two key terms: formative product evaluation and formative process evaluation. However, instructional designers should also minimize the use of this special terminology. Operating managers or clients will only be confused or turned off by it.
Instructional designers should develop a formative evaluation plan that focuses attention on the instructional materials. There are seven steps in developing a formative evaluation plan. We will describe them in the following sections.
The first step of formative evaluation is to determine the purpose, objectives, audience, and subject. Answer the question, why is this evaluation being conducted? How much is the focus solely on the quality of the instructional materials or methods, and how much is it on other issues, such as the following (Kirkpatrick 1996):
As part of the first step, clarify the desired results of the formative evaluation. For each purpose identified, establish measurable objectives for the evaluation. In this way, instructional designers help themselves and others assess the results against what was intended.
In addition, consider who wants the evaluation and why. Is it being conducted primarily for the benefit of instructional designers, senior leaders, key decision makers, immediate supervisors of the targeted learners, or some combination of all these groups? Always clarify who will review the results of the formative evaluation and what information they need from it. This will help to identify what to evaluate and how to present the findings.
Identify who will participate in the formative evaluation. Will the evaluation be focused on representative targeted learners only, or will it also focus on learners with special needs or low abilities? Subject-matter specialists? Representatives of the supervisors of targeted trainees? Their managers? Senior leaders? There are reasons to target formative evaluation to each group of subjects, depending on the purpose and objectives of the evaluation.
The second step in conducting formative evaluation is to assess the information needs of the targeted audiences. Precisely what information is sought from the results of the formative evaluation? Usually, the targeted audiences will provide important clues about information needs:
The third step in conducting a formative evaluation is to consider proper protocol. Several questions about the protocol of conducting formative evaluation should be answered:
Protocol is affected by five key factors: (1) the decision makers' experience with formative evaluation, (2) labels, (3) timing, (4) participation, and (5) method of evaluation.
The decision makers' experience with formative evaluation is the first factor influencing protocol. If the decision makers have had no experience with formative evaluation, instructional designers should take special care to lay the foundation for it by describing to the key stakeholders what it is and why it is necessary. If decision makers have had experience with formative evaluation, determine what mistakes (if any) were made in previous evaluative efforts so repeating them can be avoided. Common mistakes may include forgetting to secure the necessary permissions, forgetting to report evaluation results back to decision makers, and forgetting to use the results in a visible way to demonstrate that the evaluation was worth the time and effort.
Labels are a second factor affecting protocol. Avoid using the imposing term “formative evaluation” with anyone other than instructional designers, since it may only create confusion. Try more descriptive labels such as walkthroughs, rehearsals, tryouts, or executive previews.
Timing is a third factor affecting protocol. Is it better to conduct a formative evaluation at certain times in the month or year than at other times, due to predictable work cycles or work schedules? Make sure that formative evaluations will not be carried out when they conflict with peak workloads or other events, like a company board meeting or an earnings call, which may make it difficult for key stakeholders to approve or participate.
The participation of key stakeholders is a fourth factor affecting protocol. How essential is it to obtain permission from a few key individuals before conducting a formative evaluation? If essential, who are they? How is their permission secured? How much time should be allowed for obtaining the permissions?
The method of evaluation is the fifth and final factor affecting protocol. Given the organization's culture, should some instruments, methods of data collection, or analysis be used instead of others?
Instructional designers should never underestimate the importance of protocol. If protocol is forgotten, instructional designers can lose support for the instructional effort before it begins. Remember, any instructional experience is a change effort. Also, formative evaluation, like needs assessment, offers a valuable opportunity to build support for change. But if proper protocol is violated, it could work against success. The audiences will focus attention on the violation, not instructional materials or methods.
The fourth step in conducting formative evaluation is to describe the population for study and to select participants. Always describe from the outset the population to be studied. Usually, instructional materials or methods should be tried out with a sample, usually chosen at random, from the targeted group of learners. But take care to precisely clarify the learners with whom the materials will be used. Should participants in formative evaluation be chosen on the basis of any specialized information, such as situation-related characteristics, decision-related characteristics, or learner-related characteristics?
Sometimes, it may be appropriate to try out instructional materials or methods with such specialized populations as exemplars (the top performers), veterans (the most experienced), problem performers (the lowest performers), novices (the least experienced), high-potential workers (those with great, but as yet unrealized, performance capabilities), or disabled workers. Formative evaluations conducted with each group will yield specialized information about how to adapt instructional materials to account for unique needs rather than taking a one-size-fits-all approach.
Once the learners have been identified, select a random sample. Use automated human resource information systems for that chore. If a specialized population is sought for the study, other methods of selecting a sample may be substituted. These could include announcements to employees or supervisors, word-of-mouth contact with supervisors, or appeals to unique representatives. If specialized methods of selecting participants for formative evaluation are used, consider the protocol involved in contacting possible participants, gaining their cooperation, securing permission from their immediate supervisors or union representatives, and getting approval for any time off the job that may be necessary.
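The random-selection step described above can be sketched in a few lines of code. This is a minimal illustration, assuming the learner roster is available as a simple list; the function name, the roster entries, and the seed are hypothetical, and a real human resource information system would supply the roster instead.

```python
import random

def select_sample(roster, sample_size, seed=None):
    """Draw a simple random sample of participants from a learner roster."""
    if sample_size > len(roster):
        raise ValueError("Sample size exceeds roster size")
    rng = random.Random(seed)  # a fixed seed makes the draw reproducible for documentation
    return rng.sample(roster, sample_size)

# Hypothetical roster of targeted learners
roster = ["Avery", "Blake", "Carmen", "Dana", "Elliot", "Farrah", "Gus", "Hana"]
participants = select_sample(roster, 3, seed=42)
```

Recording the seed alongside the evaluation plan lets the designer document exactly how participants were drawn, which supports the protocol considerations discussed earlier.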
The fifth step in conducting a formative evaluation is to identify other variables of importance. Ask these questions to identify the variables:
The sixth step in conducting a formative evaluation is to create an evaluation design. The central question is this: How should the formative evaluation be conducted?
An evaluation design is comparable, in many respects, to a research design (Campbell and Stanley 1966), except that its purpose is to judge instructional materials and methods rather than make new discoveries. An evaluation design is the “plan of attack”—the approach to be used in carrying out the evaluation. In formulating a design, be sure to (1) define key terms; (2) clarify the purpose and objectives of the evaluation; (3) provide a logical structure or series of procedures for assessing instructional materials and methods; (4) identify the evaluation's methodologies, such as surveys, trial runs or rehearsals, and interviews; (5) identify populations to be studied and means by which representative subjects will be selected; and (6) summarize key standards by which the instructional materials and methods will be judged.
The seventh and final step in conducting a formative evaluation is to formulate a management plan, a detailed schedule of procedures, events, and tasks to be completed to implement the evaluation design. A management plan should specify due dates and descriptions of the tangible products resulting from the evaluation. It should also clarify how information will be collected, analyzed, and interpreted in the evaluation.
The importance of a management plan should be obvious. When a team is conducting a formative evaluation, the efforts of team members must be coordinated. A management plan helps avoid the frustration that results when team members are unsure of what must be done, who will perform each step, and where and when the steps will be performed.
There are two ways to establish a management plan. One way is to prepare a complete list of the tasks to be performed, preferably in the sequence in which they are to be performed. This list should be complete and detailed, since this task-by-task management plan becomes the basis for dividing up the work of instructional designers, establishing timetables and deadlines, holding staff members accountable for their segments of project work, and (later) assessing individual and team effort.
A second way is to describe the final work product of the project and the final conditions existing on project completion. What should the final project report contain? Who will read it? What will happen because of it? How much and what kind of support will exist in the organization to facilitate the successful introduction of the solution? Ask team members to explore these and similar questions before the formative evaluation plan is finalized, using their answers to organize the steps to achieve the final results.
Although there are many ways to conduct formative evaluation (Bachman 1987; Chernick 1992; Chinien and Boutin 1994; Dick and King 1994; Gillies 1991; Heideman 1993; Russell and Blake 1988; Tessmer 1994; Thiagarajan 1991), four major approaches will be discussed here. Each has its own unique advantages and disadvantages. These approaches may be used separately or in combination. These include:
We will describe each approach briefly.
There are two kinds of expert reviews: (1) those focusing on the content of instruction and (2) those focusing on delivery methods. Most instructional designers associate expert reviews with content evaluation. Expert reviews focusing on content are, by definition, conducted by subject-matter experts (SMEs), individuals whose education or experience regarding the instructional content cannot be disputed. Expert reviews ensure that the instructional package, often prepared by instructional designers who may not be versed in the specialized subject, follows current or desired work methods or state-of-the-art thinking on the subject.
A key advantage of the expert review is that it ensures that materials are current, accurate, and credible. Expert reviews may be difficult and expensive to conduct if “experts” on the subject cannot be readily located or accessed.
Begin an expert review by identifying experts from inside or outside the organization. Do that by accessing automated human resource information systems (skill inventories) if available, contacting key management personnel, or conducting surveys. Identify experts outside the organization by asking colleagues, accessing automated sources such as the Association for Talent Development's Membership Information Service, or compiling a bibliography of recent printed works on the subject and then contacting authors.
Once the experts have been identified, prepare a list of specific, open-ended questions for them to address about the instructional materials. Prepare a checklist in advance to ensure that all questions you want answers to are considered and answered thoroughly. See Exhibit 14.2.
Expert reviews are rarely conducted in group settings; rather, each expert prepares an independent review. The results are then compiled and used by instructional designers to revise instructional materials. Expert reviews that focus on delivery methods are sometimes more difficult to conduct than expert reviews focusing on content. The reason: experts on delivery methods are not that easy to find. One good approach is to ask “fresh” instructional designers, those who have not previously worked on the project, to review instructional materials for the delivery methods used.
For each problematic issue the reviewers identify, ask them to note its location in the instructional materials and suggest revisions. Another good approach is to ask experienced instructors or tutors to review an instructional package. If the package is designed for group-paced, instructor-led delivery, offer a dress rehearsal and invite experienced instructors to evaluate it. If the package is designed for individualized, learner-paced delivery, ask an experienced tutor to try out the material.
Management or executive rehearsals differ from expert reviews. They build support by involving key stakeholders in the preparation and review of instructional materials prior to widespread delivery. In a management rehearsal, an experienced instructor describes to supervisors and managers of the targeted learners what content is covered by the instructional materials and how they are to be delivered. No attempt is made to “train” the participants in the rehearsal; rather, the focus is on familiarizing them with its contents so they can provide support to and hold their employees accountable for on-the-job application.
To conduct a management or executive rehearsal, begin by identifying and inviting key managers to a briefing on the materials. Some instructional designers prefer to limit invitations to job categories, such as top managers or middle managers. Others prefer to offer several rehearsals with various participants.
Prepare a special agenda for the rehearsal. Make it a point to cover at least the following eight aspects: (1) the purpose of the instructional materials; (2) the performance objectives; (3) the business needs, human performance problems, challenges, or issues addressed by the instruction; (4) a description of targeted learners; (5) evidence of need; (6) an overview of the instructional materials; (7) steps taken so far to improve the instruction; and (8) steps that members of this audience can take to encourage application of the learning in the workplace.
Individualized pretests, conducted onsite or offsite, are another approach to formative evaluation. Frequently recommended as a starting point for trying out and improving draft instructional materials, they focus on learners' responses to instructional materials and methods, rather than those of experts or managers. Most appropriate for individualized instructional materials, they are useful because they yield valuable information about how well the materials will work with the targeted learners. However, pretests and pilot tests have their drawbacks: they can be time consuming, they require learners to take time away from work, and they may pose difficulties for supervisors and co-workers in today's lean-staffed, right-sized organizations.
Individualized pretests are intensive “tryouts” of instructional materials by one learner. They are conducted to find out just how well one participant fares with the instructional materials. A pretest is usually held in a nonthreatening or off-the-job environment, such as in a corporate training classroom or learning center. Instructional designers should meet with one person chosen randomly from a sample of the target population. Begin the session by explaining that the purpose of the pretest is not to “train” or evaluate the participant but, instead, to test the material. Then deliver the material one-on-one. Each time the participant encounters difficulty, encourage the person to stop and point it out. Note these instances for future revision. Typically, instructional designers should direct their attention to the following three issues: (1) How much does the participant like the material? (2) How much does the participant learn (as measured by tests)? (3) What concerns does the participant express about applying what he or she has learned on the job? Use the notes from this pretest to revise the instructional materials.
The individualized pilot test is another approach to formative evaluation. It is usually conducted after the pretest, and focuses on participants' reactions to instructional materials in a setting comparable to that in which the instruction is to be delivered. Like pretests, pilot tests provide instructional designers with valuable information about how well the instructional materials work with representatives from the group of targeted learners. However, their drawbacks are similar to those for pretests: they can be time consuming, and they require learners to take time away from work.
Conduct a pilot test in a field setting, one resembling the environment in which the instructional materials are used. Proceed exactly as for a pretest with the following six steps: (1) select one person at random from a sample of the target population; (2) begin by explaining that the purpose of the pilot test is not to train or evaluate the participant but to test the material; (3) progress through the material with the participant in a one-to-one delivery method; (4) note each instance in which the participant encounters difficulty with the material; (5) focus attention on how much the participant likes the material, how much the participant learns as measured by tests, and what concerns the participant raises about applying on the job what he or she has learned; and (6) use the notes from the pilot test to revise instructional materials prior to widespread use.
Group pretests resemble individualized pretests but are used to try out group-paced, instructor-led instructional materials. Their purpose is to find out just how well a randomly selected group of participants from the targeted learner group fares with the instructional materials. Held in an off-the-job environment, such as in a corporate training classroom or learning center, the group pretest is handled precisely the same way as an individualized pretest.
A group pilot test resembles an individualized pilot test but is delivered to a group of learners from the targeted audience, not to one person at a time. Typically the next step following a group pretest, it focuses on participants' reactions to instructional materials in a field setting, just like its individualized counterpart. Administer attitude surveys to the learners about the experience, and use written tests, computerized assessments, or demonstration tests to measure learning. Realize in this process that a relationship exists between attitudes about instruction and subsequent on-the-job application (Dixon 1990).
Each approach to formative evaluation is appropriate under certain conditions. Use an expert review to double-check the instructional content and the recommended delivery methods. Use a management or executive rehearsal to build support for instruction, familiarize key stakeholders with its contents, and establish a basis for holding learners accountable on the job for what they learned off the job. Use individualized pretests and pilot tests to gain experience with, and improve, individualized instructional materials prior to widespread delivery; use group pretests and pilot tests to serve the same purpose in group-paced, instructor-led learning experiences.
One final issue to consider when conducting formative evaluation is how to provide feedback to key stakeholders about the study and its results. The shorter the report, the better. One good format is to prepare a formal report with an attached, much shorter executive summary of one to two pages to make the findings easier for the reader to digest.
The report should usually describe the study's purpose, key objectives, limitations, and any special issues. It should also detail the study methodology (including methods of sample selection) and instruments prepared and used during the study, and should summarize the results. Include copies of the instructional materials reviewed, or at least summaries. Then describe the study's results, including descriptions of how well learners liked the material, how much they learned as measured by tests, what barriers to on-the-job application of the instruction they identified, and what revisions will be made to the materials.
Formative product evaluation results are rarely presented to management, since their primary purpose is to guide instructional designers in improving instructional materials. However, instructional designers can feed back the results of formative evaluation to management as a way of encouraging management to hold employees accountable on the job for what they learned.
Summative evaluation involves gathering information about a learning intervention after it has been deployed. It helps the instructional designer and other key decision makers identify what worked and what didn't work, determine value, and report on the difference made because of the solution. Besides identifying improvements, summative evaluation also helps to determine next steps such as accelerating the deployment to reach learners more quickly, expanding deployment to reach more learners, and sometimes discontinuing the intervention if results are deemed insufficient relative to the costs.
In 1960, Donald Kirkpatrick introduced his now famous “Four Levels” framework (Kirkpatrick 1959, 1960). It remains the most widely used framework for thinking about and conducting learning evaluation in organizations. Level 1 focuses on learner satisfaction, level 2 evaluates acquisition of new knowledge or skill, level 3 examines learning transfer from the classroom to the workplace, and finally, level 4 determines the impact of the intervention on organizational or business outcomes. With each successive “level” of evaluation, starting with level 1, the focus moves from the individual to the organizational impact of the intervention. Each level yields different insights and is important for various reasons. Rigor, resource intensity, sophistication, and expense also increase with each successive level. The frequency with which the levels are employed within organizations decreases as you ascend from level 1 through level 4. Various stakeholders place greater or lesser importance on the different levels. The time at which each level of evaluation is used also differs. Levels 1 and 2 occur during or immediately after the intervention is complete, whereas levels 3 and 4 evaluations can be conducted days, months, or even years after the intervention. This section will detail each of the four levels and some of the key applications and considerations to apply them effectively.
As a new call center representative, Leslie went through three weeks of intensive training to learn both the service process of her new company, 800 Service Direct, and also “soft” skills such as dealing with an irate customer, active listening, and responding with empathy. After the training, Leslie fielded calls on her own. Her supervisor, Marco, used a common call center technology known as “double jacking,” whereby he could listen in on her calls. After each call, Leslie and Marco reviewed and discussed what she did as well as how she could improve. Marco filled in an observation checklist with notes during the phone calls. Over time, Leslie's ability to handle calls, even the proverbial “curve balls,” increased to a point where she was fully proficient and able to handle calls entirely on her own.
Despite its widespread adoption in organizations, the Kirkpatrick model is not without its critics. Some have lamented that it was introduced nearly five decades ago and has changed little since that time. Others suggest that it is too linear or that it overemphasizes training when multiple interventions may be used and needed in a performance improvement situation. Some have gone beyond merely being critical of Kirkpatrick and have proposed enhancements or alternative approaches, two of which are discussed below: Phillips's (2011) ROI Model and Brinkerhoff's (2005) Success Case Method.
The Phillips (2011) ROI model extends beyond Kirkpatrick's fourth level and adds a fifth level, which attempts to calculate the return-on-investment (ROI) of the intervention. ROI is a financial calculation that quantifies the financial value of the impact measures identified in level 4 (such as sales or productivity) relative to the costs of the intervention. The ROI formula is: ROI (%) = (Net Program Benefits ÷ Program Costs) × 100, where net program benefits are the monetized program benefits minus the program costs.
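To make the Level 5 calculation concrete, the following is a minimal sketch in Python. The function name and the dollar figures are hypothetical, invented purely for illustration; only the formula itself comes from the Phillips model.

```python
def roi_percent(program_benefits: float, program_costs: float) -> float:
    """Phillips ROI: net program benefits divided by program costs, times 100."""
    net_benefits = program_benefits - program_costs
    return (net_benefits / program_costs) * 100

# Hypothetical example: a training program costs $80,000 and yields
# $200,000 in monetized Level 4 benefits (e.g., increased sales).
print(roi_percent(200_000, 80_000))  # → 150.0
```

An ROI of 150 percent in this hypothetical case means that, after recovering its costs, the program returned an additional $1.50 for every dollar invested.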
Robert Brinkerhoff (2010) proposed an alternative to Kirkpatrick's framework called the Success Case Method (SCM). This approach looks beyond traditional training interventions and recognizes that many variables may be at play in producing performance and results. Brinkerhoff (2005) asserts that “Performance results can't be achieved by training alone; therefore training should not be the object of evaluation” (87).
The Success Case Method is holistic and systemic, and it suggests that the following questions be addressed:
Brinkerhoff's approach to evaluation “combines the ancient craft of storytelling with more current evaluation approaches of naturalistic inquiry and case study” (Brinkerhoff 2005, 91). Given its more qualitative nature, data collection methods associated with the SCM may include interviews, observations, document review, and surveys.
As the name suggests, the Success Case Method attempts to identify individuals and/or groups who have applied their learning and who achieved positive organizational outcomes or results. Besides identifying successful examples, the SCM attempts to answer why they succeeded (what enabled success?). And while less glamorous, the method can also look at examples of unsuccessful applications or outcomes and then attempt to pinpoint the reasons for this lack of success (what were the barriers or challenges encountered?). Both dimensions can be useful in identifying stories that help to paint a picture and determine the worth or value of the intervention and how it can be improved going forward.
An evaluation report can take a variety of formats and is used to capture the results so they can be communicated to various stakeholders. The following are questions that can help the instructional designer think through the best format to use for the report:
This section describes a more traditional, formal written format for an evaluation report, summarizing the contents and sequencing typically found in such documents (Torres et al. 2005). This framework can be expanded or condensed, and sections can be removed, depending on the audience and intent of the report. It can be used to create a stand-alone report, or the contents can be converted into other presentation formats, such as PowerPoint slides that serve as the basis for verbally communicating select information to key stakeholders. Below are the main elements of an evaluation report.
Once the report is created, the next step is to distribute it to stakeholders in the most effective way, so that the information is most likely to be reviewed, processed, and acted upon. The timing of the dissemination is one consideration. If the report is distributed too long after the intervention, its value and relevance may be diminished. Likewise, if it is sent out at a time when it must compete with other priorities, such as the end or beginning of a business cycle, it may not get the attention it deserves from stakeholders. The method of dissemination, mentioned earlier, is ideally matched to the needs and desires of the key recipient(s).
Sometimes using multiple methods addresses the varied needs of the audience and reinforces the key messages being conveyed. For example, a short preread could be sent prior to an in-person meeting that incorporates a presentation and discussion, followed by a full report. Media refers to the vehicles used to distribute the report. A traditional approach is to use word-processing software such as Microsoft Word to create a print-based report that can be printed or distributed electronically. Other software, such as PowerPoint, can create an evaluation report that incorporates graphics and animation. More sophisticated web-based tools can produce a multimedia report that lets the recipient engage with the material in an interactive, dynamic manner rather than a one-directional one.