Chapter Sixteen
Implementing Instructional and Noninstructional Interventions

Once all of the initial design, planning, pilot testing, adjustments, and other preparations have been made, the intervention is ready to be implemented and disseminated to the organization and its intended audience. With a vision of change established that aligns learning and performance goals with organizational goals, plans for deploying the intervention are created. From there, it is launch time: the implementation plans are executed and the intervention is disseminated. Especially early on, deployment efforts are carefully monitored under the watchful eye of the instructional designer and other key partners to ensure everything is on track and going smoothly. If and when adjustments to the deployment efforts are needed, they are identified and made so that desired goals are achieved efficiently and effectively.

According to The Standards (Koszalka, Russ-Eft, and Reiser 2013), “This competency is considered to be advanced, and the performance statements are rated as being managerial or advanced” (61). The seven performance statements defined by Koszalka, Russ-Eft, and Reiser (2013, 61) are shown below:

  1. Create a vision of change that aligns learning and performance goals with organizational goals.
  2. Plan for the implementation of the interventions.
  3. Plan for the dissemination of the interventions.
  4. Plan for the diffusion of the interventions.
  5. Disseminate the interventions.
  6. Monitor implementation, dissemination, and diffusion progress.
  7. Identify required modifications to implementation, dissemination, and diffusion processes.

This chapter is all about execution—implementation of the instructional or noninstructional intervention to the target audience to achieve the intended change and desired performance or organizational outcomes. Implementation involves planning the logistical aspects of the deployment. The logistical plan includes “the personnel and resources needed, and it must indicate the needed time for learners. If planning for face-to-face sessions, the plan must also discuss the location and any needed arrangements” (Koszalka, Russ-Eft, and Reiser 2013, 61).

The Standards suggests that the instructional designer not stop at implementation of the intervention and introduces the importance of planning for its dissemination, which is considered a managerial performance. Dissemination involves the means by which the intervention is spread throughout the client organization. Koszalka, Russ-Eft, and Reiser (2013) note that “the instructional designer must work with management concerning the timing and scheduling of the intervention. Depending on the organization's fiscal year, certain periods may pose problems for the intended audience. Furthermore, coordination with other events and activities is critical” (61).

According to The Standards, “Planning for implementation and dissemination are critical steps, but the savvy instructional designer also plans for diffusion of the intervention. Diffusion goes beyond dissemination and involves activities and processes to encourage widespread and long-term adoption and buy-in. Diffusion may include various instructional and communication strategies to support the proposed change” (Koszalka, Russ-Eft, and Reiser 2013, 62).

Creating a Vision for Change

As Stephen R. Covey (1989) suggested for personal and leadership effectiveness in his book The 7 Habits of Highly Effective People, instructional designers, too, must “begin with the end in mind.” One of the most critical leadership competencies is the ability to create a clear and compelling vision for the change leaders are trying to achieve in the organization. Instructional designers must exhibit this same leadership. A vision for change is a clear articulation of the future: a depiction of what will happen because of the intervention in terms of the end outcomes and results, and the behaviors, skills, knowledge, and attitudes of those in the target audience. The more vivid, clear, simple, and compelling the vision, the more people will understand the desired future state and work to achieve it. The more a vision for change is confusing, complicated, or irrelevant, the less likely the change will occur. See Exhibit 16.1.

Aligning the Learning and Performance Goals with Organizational Goals

Throughout the design process, instructional designers must constantly ask themselves how the learning and performance goals support the overall goals of the organization. As interventions are being envisioned, designers must also ask themselves how the intervention will support the learning and performance goals. If it is impossible to articulate how the intervention supports the learning and performance goals, and how those goals support the overall goals of the organization, the misalignment must be remedied. Misalignment can lead people to question why an intervention is even being undertaken. Clear alignment ensures the relevance of interventions and should be maintained throughout the design process by making connections back to these goals.

Benefits of Alignment and Challenges of Misalignment

When learning and performance goals are tightly aligned with organizational goals, the greatest benefit is the dramatically increased likelihood of the intervention having an impact on the most critical measures of organizational success. The linkage starts with improved learning, which results in improved performance, which leads to better organizational outcomes. In this manner, not only will learners be more effective because of the intervention, but other stakeholders such as sponsors and executives will benefit because organizational goals (profitability, revenue, market share, customer retention, etc.) are likely to be attained.

Much has been written about goal alignment in the literature on performance management. One form of alignment is vertical alignment, where goals at the highest organizational level are broken down into smaller component parts and “cascade” down through an organization. In this manner every person can see how their individual objectives, tasks, and work contribute to the goals of the organization. Figure 16.1 depicts this cascading process.

[Figure: goals at the highest organizational level (Division A) are broken down into smaller component goals for Department X, Department Y, and Department Z.]

Figure 16.1 Vertical Alignment

As its name suggests, horizontal alignment entails ensuring that goals line up laterally across the organization. Horizontally aligned goals ensure that the objectives and work of different functional areas, business units, and teams all drive toward the same overarching outcomes. An often-used term in organizations today is silo, which describes an individual business unit, department, or team that becomes insular and overly focused on its own goals to the exclusion of other areas. One author recalls a CEO who quipped that it was fascinating how all of her direct reports achieved their goals, yet the organization did not: “Funny how everyone can achieve their goals except me!” This insular way of thinking and operating becomes more detrimental the more interdependent an organization is, and most modern organizations are more, rather than less, systemic, with numerous interrelated processes, systems, and capabilities.

Engaging Stakeholders and Negotiating to Achieve Alignment

A common imperative stated by many HR, learning, and instructional design professionals is “you need to align to the business.” While it's easy to agree with this and to recognize the benefits and value in doing so, many are left wondering how to do it. For starters, instructional design professionals must identify the right stakeholders with whom to engage. For aligning learning and development goals with organizational goals, managers and senior executives are often important stakeholders. Other stakeholders may be helpful as well, such as the key client (which may be the senior executive), subject matter experts, or other learning managers or professionals. Once stakeholders are identified, the designer must be able to interact with them, which is easier when a relationship is already established. If it's a new relationship, establishing trust and rapport quickly is critical.

Some of the important interaction skills include listening, verbal and written communication, questioning, negotiation, and sometimes conflict resolution. A synonym for the word align is support because the goal of alignment is to ensure that the goals of the learning or instructional intervention support the goals of the organization. Learning goals should support on-the-job performance goals that ultimately contribute to attainment of organizational goals. The clearer, easier, and closer the “line of sight” by which the outcomes of the intervention can be seen in achieving the goals of the organization, the greater the alignment that exists. Misalignment is when the opposite is the case and stakeholders, such as participants, are left asking “Why are we doing this?” or executives begin saying “I don't see the return we're getting on this investment.”

Once stakeholders are identified and the target of alignment is determined, the instructional designer engages in interactions to make the linkages between organizational goals and the goals of the intervention. This is where effective interaction skills come into play. The designer may ask a senior executive client, “What results would be delivered if participants were performing at the highest level?” and “How would this contribute to organizational outcomes?” Listening skills become important as the instructional designer attempts to understand the stakeholders' explanations. Sometimes a stakeholder's response leads to follow-up questions to probe further and better understand what was said. Restating what was heard, either verbally or as a follow-up confirmation in writing, helps to ensure understanding and validate alignment.

What happens if the instructional designer perceives that alignment does not exist between the goals of the intervention and the goals of the organization? Sometimes this necessitates changes to the intervention to achieve closer linkage. In other cases, it may require negotiation skills so the instructional designer works to reach agreement with the stakeholder regarding reasonable expectations for the intervention. If an organizational goal is revenue growth of 20 percent, a learning intervention might be developed to equip sales professionals with skills to understand and cross-sell a new product. The instructional designer may need to negotiate with the stakeholder to agree upon the reasonable contribution the solution will achieve. In such a conversation, it may be pointed out that many variables, beyond sales people's knowledge and selling skills, contribute to increased revenue. This may lead to additional noninstructional interventions that support the organization's goals or it may help lead to agreement on the extent to which the instructional solution can contribute to the goal.

Negotiation is a back-and-forth discussion process, underscoring the need for excellent communication skills and a good amount of patience and persistence to arrive at the outcome of aligned goals.

Planning for the Implementation and Dissemination of the Intervention

Once the instructional designer is confident that goals are aligned and necessary stakeholder approvals have been received, an implementation and dissemination plan guides the deployment of the solution. The specifics of the implementation plan vary depending on the intervention, but certain essential elements must be in place regardless. The most common delivery approaches are self-paced, in-person, technology-enabled, and blended.

Self-Paced Interventions

In self-paced interventions, learners largely drive when and where they engage with the intervention. In the early years of self-paced learning, paper-based instructional materials were most common. These were followed by delivery of content through technology such as computer-based programs and video instruction. Today, most self-paced learning is conducted over the Internet or other web-based platforms. The common theme is that the learner dictates when and where to engage, providing a great deal of flexibility, overcoming many barriers, and reducing expenses, such as travel, associated with other forms of delivery.

In-Person Interventions

Face-to-face interventions involve people coming together in person, in large or small groups, to participate in the intervention. Learners may need to travel to a host destination, or the facilitator or instructor may travel to the learners' location. Either way, such high-touch experiences may drive up expenses in order to realize the benefits associated with in-person interaction. A variation of the in-person approach is live virtual delivery, in which facilitators and learners interact in real time, but not face-to-face. In such cases, virtual technologies such as WebEx, GoToMeeting, and Adobe Connect are used to achieve the benefits of people working directly with one another. These approaches are more scalable and cost-effective, often without much, if any, degradation in the quality of the intervention or its ability to achieve outcomes similar to an in-person experience.

Blended Learning

Some implementations incorporate multiple delivery approaches, such as online and in-person. Blended learning approaches can achieve both the benefits of in-person delivery (networking and collaboration among participants, opportunity for practice, feedback, and coaching, and direct access to the facilitator for questions and answers) and the benefits of online learning (geographic reach, reduced in-person time, decreased costs, scale of deployment). Blended delivery is more complex with more requirements, resources, and moving pieces so ensuring smooth implementation can be more difficult than with other methods.

Aspects of Implementation

When creating an implementation plan, there are many dimensions to consider. Many issues must be addressed “behind the scenes” and are less visible than the main event of an in-person workshop or facilitated learning experience. It's only when one of these items is overlooked, goes awry, or otherwise runs into problems that it moves from behind the curtain to center stage. Even if addressed, such a problem can leave a stain on the overall experience, rendering it less positive than if it had been avoided.

One dimension of implementation is the people involved: both how many are needed and the roles they play. Depending on the intervention, more or fewer people and roles may be involved. In large-scale, multifaceted, or longer-term implementations, large numbers of people playing different roles come together to execute the intervention successfully. Deploying a Leading Change program over multiple geographies, using a blended learning approach and a train-the-trainer model for in-person delivery in a short time frame, may involve many resources performing many unique and critical tasks. In smaller-scale, simpler, or shorter-term interventions, fewer resources may be necessary. To execute a one-time delivery of an Interviewing Skills workshop for a group of eight new hiring managers using an internal talent acquisition subject expert, far fewer resources are needed.

Many roles may be required when implementing an intervention. Sometimes, such as in smaller or budget constrained organizations, individuals involved may be tasked with multiple roles, whereas in larger organizations, individuals or teams of people may specialize in various areas important to delivery. Sometimes roles may be outsourced and performed by external resources. Similarly, resources may reside in the organization's learning function or they may be in virtually any area of the organization. These and other factors determine both complexity and the capabilities and expertise available to execute the intervention. Table 16.1 displays a listing of some of the many roles that can be played by various individuals during the implementation phase. The instructional designer may play the role of, or at least play a part in, identifying which roles are needed, determining who is best suited and available to perform those roles, and enlisting them to do so.

Table 16.1 People-Resources Needed during the Instructional Design Process

Subject Matter Experts
  Before: Provides information regarding content
  During: Helps to course correct if needed
  After: Supports evaluation and needed edits to curriculum; supports learner application and follow-up

Communication Specialists
  Before: Creates communication plan to identified stakeholders
  After: Supports communication components of the evaluation plan

Project Managers
  Before: Creates timeline and rollout plan for intervention; collaborates with various stakeholders to bring resources together
  After: Provides project management support for the evaluation plan

Instructional Designers
  Before: Completes needs assessment; creates intervention objectives; designs and develops curriculum; creates evaluation plan; pilots the intervention; communicates with various stakeholders
  During: Aids in any course correction throughout the intervention
  After: Provides participant support as they transition back to the workplace; conducts evaluation; reports evaluation results

Facilitators
  Before: Supports the instructional designer (design of exercises, predicting audience reactions, room layout, technology needs, etc.)
  During: Facilitates the intervention; provides one-on-one coaching
  After: Supports the instructional designer with the evaluation plan

Graphic Designers
  Before: Provides graphic design support during the design and development of the intervention

Technology Experts
  Before: Provides technology design and setup
  During: Provides technology support
  After: Removes any technology used during the intervention

Business Leaders
  Before: Provides business perspective during intervention design; provides funding and resources; sponsors or markets the intervention
  During: Removes barriers to success; provides support (guest speaking, coaching/mentoring participants); intervenes with unexpected issues
  After: Reviews evaluation results; decides next steps

Some implementation efforts may use existing and well-established processes, such as the process used to print and ship materials to a location. Other projects, or subelements of a project, may be new or unique to that implementation. A new initiative may involve requesting and selecting volunteers to play the role of facilitator rather than the typical approach of using professional facilitators; such a situation may necessitate establishing a new process to make this happen. Especially with new, modified, or unique implementations, but even with well-established processes, several factors must be covered for roles to be performed flawlessly and implementation to go smoothly. A great deal of time and attention must be spent on up-front planning, contingency planning, process mapping, role clarification, hand-off and transition management, problem identification and escalation, and attention to dependencies and integration of efforts.

Train-the-Trainer Approach to Dissemination

An approach sometimes used for implementing an intervention is the train-the-trainer method. This typically involves a master trainer who “certifies” or prepares a less experienced training or nontraining professional to deliver content or instruction. The train-the-trainer approach enables a greater number of people to be involved in the direct dissemination of the intervention. Once prepared, the certified individual may be involved with delivery on a full-time, part-time, or periodic basis depending on needs, capabilities, and demands. The master trainer may be internal to the organization, such as an internal subject expert, or an external consultant or consulting firm, which holds intellectual property rights to a program or content.

An early step in a deployment strategy that uses train-the-trainer is to determine the number of certified trainers required to deploy the initiative over time. Key considerations in this calculation are the size of the target audience, the average size of a session, and the anticipated time frame. To illustrate, an organization may attempt to shift from a culture of advancement based on waiting for the “tap on the shoulder” to one of career self-management. To support employees in this shift, a variety of tools, process changes, and supports are identified, including a one-day Navigating Your Career workshop. The organization has 10,000 employees and feels strongly that everyone, regardless of level or tenure in the company, can benefit from attending, so it makes the workshop required training. The instructional design team, working with managers, employees, and talent acquisition experts, creates and pilots a program to be implemented and disseminated to the organization. It is decided that a train-the-trainer approach will certify human resource and recruiting professionals to deliver the sessions and also to serve as on-the-ground resources following the rollout. Key decision makers, considering factors such as business cycles, resource availability, time of year, and other priorities, land on a two-year implementation time frame. An important consideration is the average class size, which is determined to be 25.

Based on these factors and assumptions, what follows is one way to calculate the resources needed. If 10,000 employees will attend in groups of 25 participants on average, roughly 400 individual sessions will be needed (10,000/25 = 400 sessions). Given the two-year time frame, this means 200 sessions will be conducted each year. Looking at the trainers' other priorities and what is reasonable in terms of capacity, it's determined that delivering a one-day session twice per month is reasonable. This equates to 10 percent of the person's time in actual delivery (eight hours of delivery in every two-week period of 80 hours: 8/80 = 10 percent). To get a realistic picture of total time, delivery and nondelivery, the number should be increased to 15–20 percent, depending on the amount of preparation needed, involvement outside of sessions (such as classroom setup), and the level of post-session support for participants. While an annualized capacity of 24 sessions (2 per month × 12 months) per trainer could be calculated, a decision is made to deflate this number to 20 to account for vacations, holidays, and other unforeseen activities that would compete with classroom time. With 20 sessions able to be reasonably delivered by each trainer, 10 trainers must be certified and dedicated to delivery over the two years (200 sessions per year/20 sessions per trainer = 10 trainers needed).
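The arithmetic above can be sketched as a short script. The function name and parameters are illustrative, and the input values are simply the chapter's example figures (10,000 employees, classes of 25, a two-year window, 20 deliverable sessions per trainer per year), not fixed constants.

```python
import math

def trainers_needed(audience_size, avg_class_size, years, sessions_per_trainer_per_year):
    """Estimate certified trainers required for a train-the-trainer rollout."""
    total_sessions = math.ceil(audience_size / avg_class_size)    # 10,000 / 25 = 400
    sessions_per_year = math.ceil(total_sessions / years)         # 400 / 2 = 200
    trainers = math.ceil(sessions_per_year / sessions_per_trainer_per_year)  # 200 / 20 = 10
    return total_sessions, sessions_per_year, trainers

sessions, per_year, trainers = trainers_needed(
    audience_size=10_000,
    avg_class_size=25,
    years=2,
    sessions_per_trainer_per_year=20,  # 24 theoretical, deflated for vacations, holidays, etc.
)
print(sessions, per_year, trainers)  # 400 200 10
```

Rounding up with `math.ceil` at each step is a conservative choice: a partially filled class still requires a full session, and a fractional trainer must be rounded up to a whole person.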

The previous example is fairly straightforward; other factors and dynamics can complicate matters and must be accounted for when applicable. The geographic dispersion of the learners and the trainers is one factor. If trainers are at corporate headquarters but 75 percent of the employees are elsewhere, significant time and expense may be needed for travel. This could also raise issues of suitable facilities and on-the-ground resources to support logistics and other delivery-related activities. These considerations should be thought through when deciding the ideal delivery method, perhaps leveraging technology that is more efficient yet no less effective. Budget constraints are another factor. Ramp-up time for the facilitators and the ability of employees to be freed up to attend the training are others still. Another is the ability to schedule and deliver consistently rather than haphazardly.

Monitoring the Quality of Implementation, Dissemination, and Learner Progress

As interventions are deployed, they must be monitored for quality and to ensure that objectives are being achieved and delivery is smooth and efficient. Through monitoring, problems and issues can be surfaced and addressed as needed. More proactive approaches can anticipate and prevent problems before they become larger. An example of this would be instructor travel based on the delivery schedule. If the delivery schedule cannot be fixed for some period in advance, instructor travel cannot be booked and either instructor availability may become a problem or travel expenses may become prohibitive. This avoidable situation could be addressed by locking in the delivery schedule well in advance.

Monitoring learner progress throughout delivery is critical. Kirkpatrick's four-levels framework can be part of the monitoring system. If learners are dissatisfied (level 1) with some aspect of the program (food, logistics, registration, facilitation, etc.), decisive corrective action can be taken to remedy the situation quickly. If learners are not acquiring new knowledge and skills as evidenced by various forms of testing and assessment (level 2), then root-cause problem solving can figure out why this is the case. Perhaps the content must be changed, instructions must be clarified, examples must be incorporated, or exercises must be improved. If learners are found not to be implementing what they've learned in the classroom back on the job (level 3), there may be barriers in the work environment that must be removed, or support systems must be put in place to enhance transfer of learning. Finally, if the expected impact (level 4) is not being achieved, then the reason(s) why must be uncovered and action taken so desired results and outcomes are realized.

Monitoring the implementation and dissemination of the intervention can be thought of on two levels. Macro-level monitoring looks across multiple events and attempts to detect emerging trends. Comparing evaluation scores related to instructor effectiveness over time can be a way to determine which instructors are more effective than others. Additional data about participants could be introduced and analyzed to determine which instructors may be better matched to particular types of learners. Monitoring may reveal that one instructor is more effective with front-line employees whereas another may be better suited for senior executives. The larger the scale and scope of the implementation, the more data can be collected and analyzed to surface trends, themes, and information that tell the instructional designer a great deal about strengths and potential areas for improvement.
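Macro-level monitoring of this kind can be as simple as aggregating evaluation scores by instructor and audience type. The following is a minimal sketch: the record fields, names, and scores are hypothetical stand-ins for whatever evaluation data the organization actually collects.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical level 1 evaluation records gathered across many sessions
evaluations = [
    {"instructor": "Kim",  "audience": "front-line", "score": 4.6},
    {"instructor": "Kim",  "audience": "executive",  "score": 3.9},
    {"instructor": "Ravi", "audience": "front-line", "score": 4.1},
    {"instructor": "Ravi", "audience": "executive",  "score": 4.8},
]

# Group scores by (instructor, audience) to see who fits which learner group
by_pair = defaultdict(list)
for e in evaluations:
    by_pair[(e["instructor"], e["audience"])].append(e["score"])

# Report the average score for each pairing
for (instructor, audience), scores in sorted(by_pair.items()):
    print(f"{instructor} / {audience}: {mean(scores):.2f}")
```

With real data accumulating over many sessions, the same grouping would surface the kind of pattern described above, such as one instructor averaging higher with front-line employees and another with senior executives.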

Micro-level monitoring is the second way in which interventions can be observed. Such monitoring zeroes in on a variable rather than looking across many. Examining a newly certified trainer's evaluation scores following the first solo delivery experience is one such example. Looking at test scores for the first delivery of a new program can reveal quick insights about the effectiveness of the content, instruction, or facilitator. An instructional designer sitting in on a delivery allows him or her to make direct observations about what is working or not working with an eye toward enhancements to various aspects of the design.

Many people can be involved in various forms of monitoring throughout an implementation. Monitoring may be formal, such as a training manager running a level 1 evaluation report to track progress. It may also be informal such as an administrative assistant realizing that an office recently moved from one location to another necessitating a change of address in the system prior to materials being shipped.

Regular review meetings to discuss key aspects of implementation and dissemination with report outs by key team members can be an effective way to both monitor progress and also connect the dots and achieve greater integration of efforts. The instructional designer may play the role of bringing the team together or it may be done by someone else. Frequency and duration of the meeting can vary depending on what needs to be covered, who needs to be involved, and where the implementation is in its life cycle. Brief and informal daily “huddle” meetings can be effective, as can more formal and longer review meetings held on a less frequent basis. When problems surface, other communication vehicles may need to be activated, such as e-mail, text, videoconference, emergency meetings or other means by which the right people can be engaged and called into action.

Learning Management Systems to Monitor Learner Progress and Course Completion

A learning management system, or LMS, is a powerful internally or externally hosted software platform that forms the backbone of an organization's learning function. Depending on its functionality, an LMS is used to create learning paths, prescribe required learning, register participants, track completions, launch e-learning courseware, manage participant and facilitator scheduling, and perform analytics and reporting.

A primary capability of most LMSs is the ability to push learning out or prescribe learning to the target audience, and to enable learners to search a catalog of learning assets and register on their own. Mandatory or required learning, such as compliance or job-related training, is often assigned to the target audience through the LMS so it shows up in the individual's learning plan as required until it is completed. Likewise, when a learner searches for a learning asset available via the LMS and self-registers for it, it also shows up on the learning plan until completed. LMS administrators, working with clients and leveraging organizational and employee data, can zero in on audiences or segments of the full population to target particular learning to individuals or groups, or automatic triggers may be set up. For instance, when someone is promoted or hired into a first-level supervisory role, the LMS may be set up to flag this individual and automatically assign a Management 101 type of offering.

Sometimes there is a predetermined time frame by which learning must be completed. An easy example is an annual Ethical Awareness and Decision Making online course assigned early in the year that must be completed by December 31. In some cases, no time frame is given for completion; in others, learners themselves, or their managers, set the target time frame based on the development plan. In any of these scenarios, a major benefit of an LMS is the ability to run reports and analyze data to determine learner progress and course completion. This information is used by the learning team and can also be included in reports and various communications with stakeholders and key decision makers. Using the annual compliance example, in the weeks and months leading up to the end-of-year due date, reports can show the percentage of employees who have not completed the learning, where they are located, and other attributes such as organizational level or job function. These insights can be used to target additional communication or to engage in individualized conversations with employees themselves, their managers, or others, such as compliance or human resource professionals.
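A completion report like the one described can be sketched in a few lines. The record structure below is a hypothetical stand-in for whatever an LMS actually exports; real systems differ in their field names and export formats.

```python
from collections import Counter

# Hypothetical LMS export: one record per learner assigned the compliance course
records = [
    {"employee": "A1", "location": "Chicago", "completed": True},
    {"employee": "A2", "location": "Chicago", "completed": False},
    {"employee": "B1", "location": "Austin",  "completed": False},
    {"employee": "B2", "location": "Austin",  "completed": True},
    {"employee": "B3", "location": "Austin",  "completed": False},
]

# Percentage of assigned learners who have not yet completed the course
incomplete = [r for r in records if not r["completed"]]
pct_incomplete = 100 * len(incomplete) / len(records)

# Break the incomplete population down by location for targeted follow-up
by_location = Counter(r["location"] for r in incomplete)

print(f"{pct_incomplete:.0f}% not yet complete")  # 60% not yet complete
print(by_location.most_common())                  # [('Austin', 2), ('Chicago', 1)]
```

The same breakdown could be run on any attribute in the export, such as organizational level or job function, to decide where targeted communication or individual conversations are most needed.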

Sometimes an LMS has measurement capability built in and in other cases, it may be “bolted on” so learning measurement and evaluation data can be captured, integrated, and reported upon using the system. In either case, having the ability to monitor and track evaluation results becomes a useful tool for the implementation team.

Planning for Diffusion of the Intervention

Once initial implementation of the intervention has been achieved, there may be a need or desire to focus on full diffusion so that greater sustainability and impact are achieved. Diffusion goes beyond implementation and dissemination and focuses on embedding the change even more deeply in the organization. When an instructional intervention is a key component of a more systemic change effort, diffusion is enhanced by identifying other supporting interventions that help the change stick. An organization may implement a solution that focuses on building Coaching Skills for Managers to create a culture of feedback and coaching. Besides rolling out an instructional solution to managers, a diffusion strategy may involve several supporting efforts to achieve the objective of culture change. Managers identified as exemplary coaches may be designated as point people who provide ongoing support to their peers as they attempt to build their skills. Coaching tools and job aids that managers can use before, during, and after their conversations represent another strategy that can drive the change more deeply into the organization. Focusing on full diffusion requires additional planning, resources, and effort, but that cost must be weighed against the potential impact and upside of this investment.

Encouraging and Achieving Adoption and Buy-In

A foundational concept in change management, and a primary goal of any intervention, is adoption among the target audience. Adoption is the degree to which the change is accepted and implemented by the target of the change. For example, suppose an intervention includes the deployment of a job aid or tool to help a customer service representative upsell a warranty to customers calling in, generating revenue for the organization. To achieve these objectives, the rep must use the tool enough to gain proficiency. If adoption is not achieved, then fewer sales are made and organizational goals such as revenue or profitability suffer. Several dimensions are associated with adoption of any new change. How quickly the intervention is adopted, or the speed of adoption, is one. Another is the depth of adoption, which involves how fully or completely the change is embraced. Finally, the quality of adoption deals with the level of proficiency achieved among the target of the change effort.

Several factors can accelerate or impede adoption and buy-in. The more relevant the intervention is to the intended user, the higher acceptance and usage are likely to be. Clear guidance helps the user understand what to do and how to do it, thereby increasing adoption. The more support that is available from managers, peers, or others, the greater the buy-in and implementation. Finally, when distractions that interfere with the performer's focus and ability to apply the change are removed, adoption is higher.

Compliance versus Gaining Commitment

When attempting to gain adoption and buy-in as part of a change management effort, two opposing approaches can be taken. One is to drive toward compliance, requiring or forcing people to change through pressure, coercion, or the threat of some penalty. While compliance may occur, the level of commitment or ownership among the audience or recipients of the change will likely be low, and little buy-in to the change will exist. A compliance-oriented approach may be more expedient, but the effectiveness and staying power of the change is questionable.

The other approach is to take steps that lead to greater commitment to, or ownership of, the change. Several strategies can garner higher levels of commitment. Involvement in the process is a proven strategy that drives greater degrees of ownership. When stakeholders are asked for input, invited to play a role, asked for their feedback, or otherwise engaged, the reciprocating response is higher commitment. Further, when those most directly involved in the process take part in the design and deployment of the solution, the quality of the solution is likely to be higher because the expertise and wisdom of those closest to the situation have been included. An approach that engenders greater levels of buy-in and commitment may be less efficient because gaining stakeholder involvement can be time consuming. Therefore, it's important to think carefully about which situations would benefit most from greater levels of ownership.

Monitoring Implementation, Dissemination, and Diffusion to Identify Potential Adjustments

Implementation, dissemination, and diffusion efforts must be monitored to know if things are on track, if adjustments are needed, and, ultimately, if success is achieved and objectives are met. Monitoring lets the instructional designer and other stakeholders know how things are going. Monitoring progress is easier when the objectives and outcomes of the initiative, established early in the design process, are well crafted, clear, and easily measured. When this is the case, it's far simpler to determine whether progress is being made than when the objectives are nonexistent, confusing, or ambiguous. Monitoring creates a feedback mechanism that detects problems to be corrected and also highlights what is working well so those insights can be spread to accelerate implementation efforts. It should be expected that any project will encounter obstacles, challenges, and problems; to expect flawless implementation, especially with large-scale and complex initiatives, is foolhardy and unrealistic. Monitoring can help surface these issues so they can be addressed early and expediently and so progress toward the end goals is not hampered.

There are various ways to monitor implementation efforts, and all of them come down to feedback. Chapter 15 introduces several data collection methods, such as observation, interviews, and surveys, and these methods can also be used for monitoring. Besides these more formal methods, a simple yet powerful and frequently used approach is to listen, observe, and ask for feedback from everyone involved in the implementation effort who is willing to share a perspective. This "wide net" approach can gather real-time insights in an informal but highly effective way.

Another aspect related to monitoring is communication. Who needs to know the information and insights accumulated during the monitoring process? The instructional designer should determine who (who needs to know?), what (what do they need to know?), when (when do they need to know?), and how (how should they be informed?). The instructional designer should consider the stakeholders discussed previously, and the information believed to be useful to them in both judging the effectiveness of the implementation and helping them plan and decide, including making adjustments.

Taking Action on Deviations or Problems Surfaced through Monitoring

Once potential adjustments are identified, the instructional designer makes a determination and, if warranted, plans for implementing the changes indicated by the monitoring efforts. Changes can be made to the learning or nonlearning intervention itself. For example, if it's determined that a job aid with the process steps clearly articulated, deployed to performers' mobile devices, would help achieve fuller adoption of a new skill or behavior, then this support tool should be developed and deployed. In other cases, changes may need to be made to the delivery process. Monitoring may reveal that individuals who went through a train-the-trainer process and were positioned to facilitate sessions are being pulled away by their managers to work on other high-priority tasks and are no longer available to facilitate. In such a case, the delivery approach may be altered to one that deploys full-time training professionals instead. An alternative to this dramatic change could be for the designer to work with the trainers' managers to revisit or renegotiate expectations, getting the effort back on track or making refinements that strike the right balance given the goals and constraints present.

Whatever the change, once it's decided that its benefits outweigh the costs of making it, action is taken to implement the alteration. The adjustment may be made directly by the instructional designer or, more typically, by others involved in the process, such as the delivery team, the training manager, the instructional design manager, a technology specialist, the client, or several other potential individuals. The person closest to the change being made is likely the one who receives word of it and then implements it.

As with any organizational change being proposed or implemented, unintended consequences should be considered before making the change (Rothwell 2015; Rothwell, Stavros, and Sullivan in press). An unintended consequence is an action, reaction, or result that occurs but was not anticipated. Unintended consequences can be positive, but they can also be negative and even create more or bigger problems than the ones originally being addressed. Avoiding such situations is sometimes as simple as being aware of the possibility of unplanned outcomes and asking where, when, and how they might occur. When a possible unintended consequence surfaces, the instructional designer determines whether it is important enough to address and, if so, the best means by which to do so. It's a question of risk mitigation.
