CHAPTER 11

ADDIE: The Origin of Modern-Day ISD

Angel Green

Instruction is one of the earliest forms of documented human communication. Prehistoric drawings were likely used by tribal leaders to inform others of the location and type of animals to hunt, where to find freshwater sources, and how to perform religious rituals. The origin story of modern instructional design, however, spans only the past 100 years.

IN THIS CHAPTER:

  Explore the origin of ISD

  Compare and contrast a selection of ISD models

  Describe benefits and potential drawbacks of each ISD model

Why is an origin story so important? With the affordability and availability of historical archives and DNA testing, there has been a proliferation of people tracking their lineage and ancestral ties as far back as they can, scouring church records, government registries, and newspaper clippings to weave a story of those who came before them. Perhaps, in discovering our origin story, we feel like we’re part of a community with a shared history. We uncover a lineage we never knew we had. Maybe the roots we uncover expose hardships endured by those who made it possible for us to be where, and who, we are today. And sometimes, we have the chance to learn lessons from mistakes that were made before us; with that knowledge, we have the opportunity to avoid history repeating itself.

The same reasons for why a personal origin story can be valuable—building a sense of community, understanding our shared experience, respecting those who paved the way for us, and learning from past mistakes—ring true for the origin story of instructional design and development models.

The Need for an Instructional Systems Development Model

Often, researchers, books, articles, and experts use the term instructional design theories interchangeably with instructional design models. However, for this chapter, we will draw a distinct line between the two:

•  Theories provide instructional designers with evidence, research, conjecture, and criteria by which they can make design decisions (that is, the level of behavioral objectives, the setting and modality of training, the spacing and sequencing of instructional interventions, the appropriate evaluation method, and so forth).

•  Models are processes that an instructional designer or team can use to create an instructional product.

Having a solid understanding of instructional theory will help in your design, but without a process to follow, you can’t build an instructional product. On the other hand, simply using a model without an understanding of learning theory can push a product through to creation, but the product likely won’t be appropriately designed (or instructionally sound).

Key learning theories that instructional designers should be familiar with before moving into design and development include B.F. Skinner’s research on operant conditioning and programmed learning, Benjamin Bloom’s taxonomy of educational objectives, Robert Gagné’s Nine Events of Instruction, David Merrill and Charles Reigeluth’s Component Display Theory, and the Center for Creative Leadership’s 70-20-10 theory on executive success.

It is through these theories (and others) that we begin to see instructional design become a process; understanding them and taking them into consideration is foundational to the creation of a successful training product. You’ll need to think through these questions before you begin:

•  Skinner: What behaviors need to be demonstrated and what reinforcement works?

•  Bloom: To what cognitive level does the learner need to know the subject matter?

•  Gagné: How can we design our program so that it meets the nine necessary events?

•  Merrill and Reigeluth: How can we best sequence and chunk the content to scaffold the learning?

•  70-20-10: How does the work environment support the practical application of formal training?

Beyond these considerations, other design decisions need to be made, such as the modality of delivery, the options for which seem endless and continue to expand as new apps and technologies emerge. The rate of change expected is another factor to consider in design decisions. And, of course, budget, location, and time have a major influence on design and development.

Because of the number of considerations and decisions involved in a single training project, the US military implemented structured plans for the development, delivery, and evaluation of training, referred to as Systems Approach to Training (SAT) models. They were created, in part, to allow for decision-based trade-offs in design and development efforts. Their goal was to create instruction that aligned with the mission, the problem, the time to develop, and the cost and budget considerations. Each branch of the military created its own model, and divisions within each branch also created their own SAT models. In all, by 1973, more than 100 different models had been developed.

In 1973, the US Army began working with the Center for Educational Technology (CET) at Florida State University to evaluate the existing SATs and recommend one standard approach to training design. It quickly became evident that other divisions of the military could also benefit from this approach. So, they formed a joint committee of chief training officers from the army, navy, air force, and marines. The goal was to create a model that could be useful for improving interservice training design and development, thereby saving money and, where appropriate, combining training efforts.

This joint force would need to account for:

•  A planning framework

•  A suggested sequence of work

•  The basis for a central management system

•  The basic inputs, processes, and outputs

•  The learning interfaces

•  A feedback control

Instructional Systems Development: The Interservice Procedures for Instructional Systems Development (IPISD)

After two years and several iterations and revisions, Branson, Rayner, Cox, Furman, King, and Hannum documented the Interservice Procedures for Instructional Systems Development (IPISD) in 1975. This set of manuals was intended as guidance for military ISD applications (Figure 11-1). The ISD model was based on the phases of systems engineering, a standard approach used across industries for the structured design and development of any product (such as auto manufacturing or satellite development).

Figure 11-1. Interservice Procedures for Instructional Systems Development Model

The overall goal was to “train to specific job requirements, avoiding the expensive pitfalls of overtraining and undertraining” (Branson 1978). The complete executive summary of this model is available for download as an unclassified document from the US Army Combat Arms Training Board. The executive summary provides a detailed description of the goals of each phase, the activities that make up each “block” in the phase, the management decisions in each block, and the outcome of each block.

The model itself is a stage-gate, or waterfall, project management approach. As described in the summary, “Each of the phases is a separate and distinct function which could be carried out successively by one person, or each of the steps could be assigned to individuals.” This does not mean, however, that iterations and revisions within the blocks are not conducted. As written, iteration and revision occur within most, if not all, of the blocks in each phase of the IPISD model.

For example, the authors state in the develop phase that “a very lean approach to writing initial drafts is required. As the materials are tried with students, weaknesses and discrepancies can be identified, and, where necessary, materials can be expanded to overcome any shortcomings…. When a small amount of instruction on a learning objective has been developed, it is tried with a single trainee from the target population to see whether it is successful. Since these materials should have been prepared in the leanest possible form, the tryouts should reveal weaknesses.”

Benefits of IPISD

The complete and full use of IPISD as it was designed should help those responsible for creating instruction to arrive at clear, data-based decisions about the product they create. In other words, IPISD helps instructional designers and managers resist the urge to develop training based on their own past experience, comfort with a modality, or desire to try something new or trendy. As described in Phase III, Block 2, “New delivery systems and techniques often become fashionable simply because they are available. In this block, procedures are defined for selecting one or more suitable media for specific learning events and activities. By using this approach, delivery systems can be selected on the basis of defined requirements rather than on the basis of availability or the appeal of currently existing fads” (Branson et al. 1975).

Limitations of IPISD

The IPISD model was designed for a very specific audience and use case: developing instruction for 1970s military personnel. Using the exact model as prescribed might work for an organization competing in the modern workforce, or it might not.

It is important to note that the creators of the IPISD model believed that whatever framework they developed would not be universally applicable. They stated that “the design of an efficient, effective, generic all-purpose model is no more likely than the design of an all-purpose drug” (Branson 1978).

This point is critical for readers today. From the origin of instructional systems development, it was never the expectation that any one model would be used for all instructional design and development efforts. And yet, immediately after its publication and in the decades following, we have seen organizations institutionalize a single ISD model for all their development efforts, professing that a standard model could and should apply universally.

Just as “two or more equally successful alternative solutions can be found for any instructional problem” (Branson 1978), there are likely two or more equally effective instructional design models that could be used in the creation of the solution. An instructional designer, therefore, should consider not just what to build, but also how to systematically approach the design and development.

Not long after the IPISD model was implemented, concerns began to arise regarding the practicality and consistency of adherence to a standard model throughout all branches of the military. By 1979, the Office of the Assistant Secretary of Defense for Manpower, Reserve Affairs, and Logistics (OASD MRA&L) engaged the help of Robert Vineberg and John N. Joyner from the Human Resources Research Organization (HumRRO) to answer the following questions:

•  Do the current methodologies, as represented in the major guidance documents used in the services, provide the means for attaining the goals of ISD?

•  Do current applications of ISD reflect these goals?

•  How can ISD methodologies and applications be made more effective?

Vineberg and Joyner evaluated three ISD models in practice at the time:

•  IPISD, which they referred to as ITRO

•  Marine Corps Order P1510.23B, which was described as “a greatly reduced version of the ITRO model”

•  Air Force Pamphlet 50-58, Handbook for Designers of Instructional Systems (1973–1975)

Additionally, they surveyed 209 units, agencies, and schools where instruction was developed and conducted detailed interviews of training developers to evaluate how they had designed and developed 57 courses.

In their summary report, Vineberg and Joyner (1980) stated that “ISD, a systems approach to training development, has many potential advantages, but it is demanding to carry out. It requires sustained commitment to a repetitive process of analysis, design, verification, and revision. Experience in attempts to institutionalize such a process has tended to reveal problems.”

It is interesting to note that they described instructional systems development as “procedures for the development of training which are characterized by: (1) Rigorous derivation of training requirements from job requirements. Training requirements are to be selected so as to maximize the combined effectiveness of the training and non-training components of a total operational system. (2) Selection of instructional strategies to maximize the efficiency of training. (3) Iterative trial and revision of instruction during development until training objectives are met.”

Again, it appears that from the earliest days of ISD, iteration was anticipated. And it is this iterative nature that they believed held the most potential, but also created the biggest obstacle to institutionalization. They went on to state that “its iterative and derivative character virtually assures that training will be relevant if available procedures are faithfully carried out. In practice, however, many of its components are omitted, and the close connection between components that is essential to make the process truly derivative is not maintained. Most important, the testing and revision necessary to insure job relevance generally do not occur. The potential of ISD to insure that training meets job requirements is not being realized” (Vineberg and Joyner 1980).

But perhaps the most impactful finding and recommendation from the study was the fault they found in the last phase, control. Training development and evaluation, they said, were “regarded as separate activities,” which meant that the ISD process’s potential for improving training went unrealized. In other words, after a training initiative was live and in place, the instructional design team was no longer involved or engaged. The evaluation of the effectiveness of the training was instead observed and measured by the commanding officers. And the two were rarely in contact. In many cases, the instructional designer was a contract civilian, while those who witnessed the impact of the training (students, instructors, or officers of the students) were enlisted.

Through this perspective, we can now see why the model changed shortly thereafter, with the name of the final phase changing from control to evaluation.

The Beginning of ADDIE

Around the same time that Vineberg and Joyner were engaged in their study, a doctoral candidate named Russell Wayne Watson was working on his dissertation, “The Analysis and Design of an Instructional Systems Course.” In the dissertation, Watson described how, in the years following the 1975 launch of the IPISD model, it became increasingly difficult to meet the goal of developing all army training using the model.

After successful defense of his dissertation, Watson used his understanding of the challenges faced by the military to evolve the explanation of ISD in a paper he presented to the International Congress for Individualized Instruction. “The five phases of ISD are analysis, design, development, implementation, and evaluation and control,” he stated. “The first four are sequential in nature, but the evaluation and control phase is a continuous process that is conducted in conjunction with all of the others” (Watson 1981). Watson’s paper included a visual diagram of the ISD workflow, which shows how Phase V, evaluation and control, influences all other phases of ISD (Figure 11-2). This is in line with the changes to ISD recommended by Vineberg and Joyner.

Figure 11-2. Rendering of Watson’s ISD Workflow Diagram

While the main phases of Watson’s model also followed those of systems engineering, the supporting blocks in each phase differed slightly from the IPISD model (Figure 11-3). Obvious revisions that Watson made include renaming Phase V from control to evaluation and control, expanding the importance of task analysis during the analysis phase, and shifting the review of existing courses and materials from analysis to development.

The last two phases remained largely untouched, with the addition of the word evaluation and the removal of IPISD’s revise system block. This removal is an interesting point, considering that Watson obviously understood the importance of evaluation throughout the process. Graphically, however, revision does not appear to be part of the model.

There are several iterations of the 1981 model; some involved shifting the graphical representation to a circular model in which evaluation enveloped all other phases. Added arrows indicate an iterative design.

In 1984, three years after the Watson paper was presented, the US Army published A Systems Approach to Training, which once again revised the model. This version shortened the name of Phase V from Watson’s evaluation and control to simply evaluation. Thus, the acronym ADDIE was born (although the handbook still referred to the model as instructional systems development, ISD).

Although ADDIE is often accused of being a linear model, it is clear that the early users never intended it to be so. See, for example, the next iteration from the US Army Field Artillery School in 1984 (Figure 11-4).

Figure 11-3. Watson’s Revised Model

Figure 11-4. The ADDIE Process

The term ADDIE officially appeared nearly a decade later in A Handbook of Instructional and Training Program Design, by Michael Schlegel (1995). Within the abstract of the document he states, “This paper will utilize the generic Instructional Design Model of Analyze, Design, Development, Implementation and Evaluation (ADDIE) … and provide detailed job aids in the form of rating sheets and checklists (mechanically or hard copy) for each of the major steps.” Whether others had used the term ADDIE to describe the ISD model before the publication of Schlegel’s handbook is a question with a rich history of debate within the instructional design community.

Regardless of when or where the term was first used, ADDIE wins the prize in branding. It is an acronym that is easily remembered and aids in the communication of the phases of instructional systems design to those who aren’t familiar with the process.

Even though the origin story is a bit unclear, ADDIE is the most well known and most frequently referenced of the ISD models. But, because there is no true “author,” the acronym is open to interpretation and has been revised over the years to adapt to the needs of modern instructional systems design.

At this point, ADDIE has almost become an eponym for instructional systems design, as generic in use as Kleenex or Band-Aid. However, from person to person, the activities performed in each phase and even the order or sequence of the phases may vary widely.

There are many alternatives to IPISD or ADDIE, too many to include in this resource. Like the original IPISD authors stated, there is no all-purpose model that will work for everyone. Choosing which model to follow will likely vary over time, from project to project, or within different organizational structures. The rest of this chapter reviews some of these other models.

Systems Approach Model

Walter Dick and Lou Carey described a detailed process for the creation of instruction, which they referred to as the Systems Approach Model, in their 1978 textbook, The Systematic Design of Instruction; James Carey joined as a co-author in later editions. While IPISD and SAT were popular in government, Dick and Carey’s model gained popularity in educational institutions.

In the Systems Approach Model, Dick and Carey describe 10 components that are necessary for the creation of effective instruction. The model “is based not only on theory and research but also on a considerable amount of practical experience in application.”

They also acknowledge from the outset that the model can be personalized, using an analogy of how a great cook first starts with a recipe and then uses intuition, experience, success, and failure to make a recipe their own, unique creation. Later, Dick and Carey (2021) go on to say that “the flexibility, insight, and creativity required for original solutions reside in experienced users and professionals—not in models.”

In their model, they lay out these components involved in the creation of instruction:

•  Identify instructional goals.

•  Conduct instructional analysis.

•  Analyze learners and contexts.

•  Write performance objectives.

•  Develop assessment instruments.

•  Develop instructional strategy.

•  Develop and select instructional materials.

•  Design and conduct formative evaluation of instruction.

•  Revise instruction.

•  Design and conduct summative evaluation.

In the diagram of their model, Dick and Carey distinguish iterative cycles of evaluation and revision with dotted lines (Figure 11-5; Dick and Carey 2021).

Figure 11-5. The Dick and Carey Systems Approach Model

The explanation of each component describes, in detail, the activities that are involved within the broader component. For example, for identify instructional goals, the authors provide a decision tree for front-end analysis, the main steps in performance analysis, structure for conducting a needs assessment, job and task analysis, and guidance on writing instructional goals.

Benefits of Dick and Carey’s Systems Approach Model

Probably the greatest benefit of the Systems Approach Model is how the original authors have evolved the process over the last 40 years to keep the model current. The descriptions and examples are relevant to today’s instructional designers. It also has the credibility of decades of practical evidence, research, and theory that some modern ISD models may lack.

Potential Limitations of Dick and Carey’s Systems Approach Model

The Systems Approach Model is a long process. There is little doubt that the instructional designer will be thorough, but the entire process may be unnecessary for some projects. And, while the authors encourage the reader to make it their own, for many instructional designers that takes a lot of courage and experience. With businesses continuing to compress training project timelines (anyone can make a YouTube video!), instructional designers who are still reliant on following the process as described might find themselves in an unfavorable position.

The Kemp Model

More than 50 years ago, Jerrold Kemp wrote, “Planning for a student’s learning should be a challenging, exciting, and gratifying activity” in his textbook Instructional Design: A Plan for Unit and Course Development.

Kemp then introduced a circular diagram of his process in the 1985 textbook The Instructional Design Process. Originally referred to as the Kemp model, this process is now sometimes called the Morrison, Ross, and Kemp model or the MRK model (Figure 11-6).

Although Kemp died in 2015, his model and revisions to the textbook continue through his fellow authors, Gary R. Morrison, Steven M. Ross, Jennifer R. Morrison, and Howard K. Kalman. Now in its eighth edition, Designing Effective Instruction outlines nine elements that are essential to instructional design:

•  Instructional problems

•  Learner characteristics

•  Task analysis

•  Instructional objectives

•  Content sequencing

•  Instructional strategies

•  Designing the instructional message

•  Development of instruction

•  Evaluation instruments

Figure 11-6. The Kemp Model

Similar to other models, the Kemp model posits that “there is never one perfect approach to solving an instructional design problem.” And the authors also reinforce the idea that “a design model must grow with the instructional designer.” Additionally, each of the nine elements in the Kemp model comes with an accompanying set of activities and considerations for the instructional designer and project team. For example, the task analysis element provides guidance for preparing for a task analysis, advice on ways to extend beyond a typical procedural analysis, and techniques for gathering and recording your data.

There are, however, a number of differences between Kemp’s model and the others. The most obvious is the graphical depiction of the model as a series of three concentric ovals, with nine independent ovals contained within the third. This is a visual departure from the series of lines and arrows connecting boxes in other ISD models.

The outer two ovals in the Kemp model represent ongoing factors that are present throughout the entire instructional design and development project. The outermost oval represents the managerial considerations (support services, project management, and planning) that an instructional designer and team will need to address. The second oval contains the activities of evaluation (summative, formative, and confirmative) and revision. The third oval contains the independent ovals representing each of the nine elements. The independence of these ovals is critical, as it is intended to visually represent the belief that each element can be addressed simultaneously, individually, or perhaps not at all.

Benefits of the Kemp Model

The Kemp model has received praise for its flexibility and nonlinear design. It is less prescriptive and allows for amplifying or downsizing the process based on a project or learner need.

The latest version includes a section on Lean instructional design, which describes the appropriate concessions to make when faced with limitations on time and resources. This guidance could help newer instructional designers, who may hesitate to make the model their own, by offering advice on how to do so effectively.

The Kemp model is also often noted for its emphasis on a learner-centric design; for example, considering the personal and social characteristics of the learner, including culturally diverse learners, learners with disabilities, and adult learners in the corporate environment.

Potential Limitations of the Kemp Model

As with the Systems Approach Model, it’s challenging to explain the Kemp model in a simple, concise fashion to non-instructional-design team members. And, while the authors attempt to address this in the textbook’s introduction, the model still falls short of having a catchy, easy way to explain the process.

Another potential challenge is a by-product of its flexible design—when do you cut out an activity or element and when do you know you’ve done enough? While there are benefits to having a flexible model, an instructional designer might unintentionally miss key details that could result in a better design, had they not excluded one (or more) of the nine elements.

Successive Approximation Model (SAM)

Allen Interactions, the custom learning content company founded by Michael Allen, created the Successive Approximation Model (SAM), which follows an iterative design process. In an effort to help instructional designers create better-quality instruction, faster, Allen, along with Richard Sites, published Leaving ADDIE for SAM in 2012. In the book, Allen described the iterative nature of SAM, directly comparing and contrasting SAM with a genericized ADDIE model. The term ADDIE, he believed, had become ubiquitous even though no one applied the model in the same way beyond dividing “tasks into analysis, design, development, implementation, and evaluation phases” (Allen and Sites 2012). Beyond that, the actual activities varied widely between organizations and even among departments.

While IPISD was inspired by the early 1970s systems engineering models, SAM was based on the Agile, or iterative, design models prevalent in the software industry. The goal was to be an effective and manageable process that allows teams to “complete projects within time and budget expectations, predict the impact of in-process changes, and produce a product that meets established criteria for quality.” The nonlinear process was designed to help avoid a problem often encountered in more linear models—costly mistakes in design that are not caught until after the training is developed, and in some cases implemented.

SAM also attempts to simplify the process. There are two versions, and an instructional designer chooses between them based on the complexity of the project. SAM 1 is a very simple version that is most effective when there is a small design and development team and no complex media elements; SAM 2 is more robust.

SAM 1 is a three-stage iterative process of evaluate, design, and develop (Figure 11-7). Prototypes are revised through a series of three cycles, ultimately arriving at a final production-quality training solution. Fewer than three cycles, Allen warns, may lead to sacrifices in design, and more than three may lead to perpetual cycling in the pursuit of perfection. SAM works on the principle that “good enough” is always better than nothing, and that no training product will ever be perfect.

Figure 11-7. SAM 1
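For readers who think in code, the three-cycle rhythm of SAM 1 can be sketched as a simple loop. This is an illustrative sketch only, not code from Allen’s book; the evaluate, design, and develop functions are hypothetical stand-ins for the human activities in each stage:

```python
def evaluate(prototype):
    """Stand-in for trying the current prototype with learners and stakeholders."""
    return f"feedback on {prototype}"

def design(feedback):
    """Stand-in for revising the design to address that feedback."""
    return f"revisions from {feedback}"

def develop(revisions, cycle):
    """Stand-in for building the next, higher-fidelity prototype."""
    return f"prototype v{cycle} incorporating {revisions}"

def sam1(initial_sketch="rough sketch", cycles=3):
    # Allen warns that fewer than three cycles may sacrifice design
    # quality, while more may lead to perpetual pursuit of perfection.
    prototype = initial_sketch
    for cycle in range(1, cycles + 1):
        feedback = evaluate(prototype)
        revisions = design(feedback)
        prototype = develop(revisions, cycle)
    return prototype  # the "good enough" production-quality solution

print(sam1())
```

The point of the sketch is simply that each cycle’s output, a tangible prototype, becomes the input to the next cycle’s evaluation; nothing in SAM 1 waits for a finished written specification before something reviewable exists.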

The model and diagram that most people associate with SAM is that of SAM 2, which is better suited for more complex teams and training projects. SAM 2 is divided into three phases: preparation, iterative design, and iterative development (Figure 11-8).

Figure 11-8. SAM 2

SAM 2 also requires the use of prototypes in design, followed by a series of four review milestones (design proof, alpha, beta, and gold); at each milestone, the product in development increases in fidelity and completeness. By reviewing a tangible product, rather than a written explanation, misinterpretations and incorrect assumptions can be caught early in the development process, when they are less costly to fix.

Like the Kemp model, SAM is focused on learner-centric design. Representative or actual learners are involved, not only at the beginning and end of the process like in many ISD models, but throughout the preparation, design, and development phases.

Benefits of SAM

Prototyping at each phase allows designers to get feedback along the way from learners and project stakeholders. While tangible learning products are not developed until late in linear ISD processes (the develop phase, the second D in both IPISD’s ADDIC and ADDIE), SAM designers immediately start developing tangible products, which are then used for evaluation.

Another benefit of SAM is how easy it is to describe to those outside the instructional design world, since many organizations use Agile and iterative design in their own product development. Even for those who aren’t familiar with Agile, the model is simple to explain, and SAM provides a catchy acronym (like ADDIE) that was missing from IPISD, the Systems Approach Model, and the Kemp model.

Potential Limitations of SAM

Perhaps the greatest risk of SAM is the temptation to continue iterating in a quest for unattainable perfection, a temptation that can begin early in the process. Instructional designers, by and large, are not used to operating in the space of imperfection; however, overdesigning the prototype reduces, if not eliminates, the value of iteration. If colors and graphics are included in prototypes too early, the focus may shift from debating the effectiveness of the instructional interaction to whether the chosen graphic or color is off-brand. Prototypes are intended to be imperfect because their value is found in how easy they are to discard.

There’s also a risk of getting stuck in an endless cycle of development iterations. Organizations need to be comfortable with versioned improvements of training, similar to the versioned improvements in software or applications. Think about how a new smartphone release, a new software update, or a new version of our favorite video game improves the user experience based on actual data while addressing known bugs. Similarly, SAM encourages organizations to launch a product with the expectation that it will undergo revisions, sometimes quickly, after go-live.

Final Thoughts

There are many ISD models from which to choose, and this chapter presents only a few that have been implemented across educational, governmental, and private institutions. And there’s also the option to create a custom process based on the unique needs of a business or institution.

Perhaps the biggest takeaway from this exploration is the affirmation that there is no standard model in instructional systems design. However, there is one similarity across all models: the intent to create training that helps learners and organizations perform better. How you achieve that will be up to you and can be limited, both in the design and process, only by your creativity, intuition, experience, and constraints (either real or imagined).

About the Author

Angel Green is a learner advocate who is passionate about driving business results through innovative solutions in organizational design, performance management, and learning programs. With nearly 20 years’ experience in the learning industry, she has led the creation of numerous award-winning programs, each dedicated to improving employee performance. Angel is dedicated to sharing her knowledge and experience on the benefits of empathetic design, introducing tools and techniques that designers can use to help create learner-centered instructional products. She is the co-author of Leaving ADDIE for SAM Field Guide and has written and spoken extensively within the learning and development industry.

References

Allen, M., and R. Sites. 2012. Leaving ADDIE for SAM. Alexandria, VA: ASTD Press.

Branson, R.K. 1978. “The Interservice Procedures for Instructional Systems Development.” Educational Technology 18(3). Special Issue: Military Training.

Branson, R.K., G.T. Rayner, J.L. Cox, J.P. Furman, F.J. King, and W.H. Hannum. 1975. Interservice Procedures for Instructional Systems Development. 5 vols. TRADOC Pam 350-30, NAVEDTRA 106A. Ft. Monroe, VA: U.S. Army Training and Doctrine Command, August. (NTIS Nos. ADA 019 486 through ADA 019 490).

Dick, W., and L. Carey. 1978. The Systematic Design of Instruction. Glenview, IL: Scott, Foresman, and Company.

Dick, W., and L. Carey. 2021. The Systematic Design of Instruction. New York: Pearson Publishing.

Kemp, J. 1971. Instructional Design: A Plan for Unit and Course Development. New York: Fearon Publishers.

Kemp, J.E. 1985. The Instructional Design Process. New York: Harper and Row.

Morrison, G.R., S.M. Ross, J.R. Morrison, and H.K. Kalman. 2019. Designing Effective Instruction. New York: John Wiley and Sons.

Schlegel, M.J. 1995. A Handbook of Instructional and Training Program Design. ERIC Document Reproduction Service ED383281.

U.S. Army Field Artillery School. 1984. “A System Approach to Training.” ST-5K061FD92. Washington, DC: US Government Printing Office.

Vineberg, R., and J.N. Joyner. 1980. “Instructional System Development (ISD) in the Armed Services: Methodology and Application.” Final Report, August 25, 1977, through March 19, 1979. (Report Number: HumRROLTR-80-1). Office of the Assistant Secretary of Defense for Manpower, Reserve Affairs, and Logistics.

Watson, R. 1981. “The Analysis and Design of an Instructional Systems Course.” A Dissertation Submitted to The University of Arizona.

Recommended Resources

Biech, E. 2017. The Art and Science of Training. Alexandria, VA: ATD Press.

Bloom, B.S., and D.R. Krathwohl. 1956. Taxonomy of Educational Objectives; The Classification of Educational Goals by a Committee of College and University Examiners. Handbook I: Cognitive Domain. New York: Longmans, Green.

Gagné, R. 1985. The Conditions of Learning, 4th ed. New York: Holt, Rinehart & Winston.

Lombardo, M.M., and R.W. Eichinger. 1996. The Career Architect Development Planner. Minneapolis: Lominger.

Merrill, M.D. 1983. Instructional Design Theories and Models: An Overview of Their Current Status. Hillsdale, NJ: Prentice-Hall.

Reigeluth, C.M. 1983. Instructional Design Theories and Models. New York: Routledge.
