Chapter Three
Collecting and Analyzing Data for Instructional Design Projects

Before an instructional designer or a performance improvement specialist can determine a course of action to address a perceived need, data must be gathered and analyzed. Some may argue this is the most critical part of the entire process, since what is discovered in the data collection and analysis phases will determine the direction that the solution or intervention should take.

Whether the intention is to conduct an instructional needs analysis or a performance needs analysis, the intended outcome is the same: to identify the gaps (if any) that exist between where the target audience is now compared to where they could or should be.

All of the data collection and analysis practices described in this chapter should achieve this outcome. The skill lies in knowing what data collection tools to use and how to analyze the results of the data collection process.

The Nature of Data

Before deciding what data to collect and how it should be analyzed, we need to agree on a few basic concepts related to the nature of data itself. Guerra-Lopez (2008) points out that data must meet three basic characteristics:

  1. Relevancy: The data must directly relate to the research questions being answered.
  2. Reliability: The data must be measured in a trustworthy and consistent way.
  3. Validity: The data must measure what we intend to measure (p. 135).

Guerra-Lopez distinguishes between “hard” and “soft” data (an important consideration in our upcoming discussion of qualitative and quantitative measures). When we refer to “hard” data, we mean data that can be independently verified through external sources; that is, others studying the same situation using the same data collection and analysis process will reach the same conclusions.

“Soft” data is represented by attitudes or inferences that cannot be independently verified through outside sources. Even using rating scales in the hope that the data can be considered “hard” (that is, numerically quantifiable) will not meet that standard. It is best to combine hard and soft measures to get a more complete picture.

An Open or Closed Process?

As has been pointed out earlier in this work, the instructional design process should be an “open” process; that is, it is cyclical, always requiring the instructional designer to review the status of the work and compare it to previous stages, goals, or objectives. If need be, adjustments are made and the process continues to evolve based on constant feedback and revision. The same principle applies to the data collection and analysis phase. When viewed as a linear process, the steps are taken in order, beginning with a statement of purpose and a research question, then moving to the data collection and analysis phases, and concluding with a summary of findings in a final report.

When viewed as an open, or cyclical, process, however, several feedback loops might cause a readjustment. The Academy for Educational Development (2006) suggests that interpreting data may require a reinterpretation of the results of the data analysis methodology. In addition, once the data is disseminated to the organization's stakeholders, the instructional designer may discover that the original research question was off target or not adequately framed. These iterative steps will strengthen the outcome of the data analysis process and improve future data collection and analysis procedures.

One mistake that can easily be made is to choose one or more data collection tools without addressing the data needed to adequately answer the research question. The next section focuses on this issue.

Quantitative versus Qualitative Approaches

Historically, quantitative research has been the foundation of the social sciences. Its strengths are the reduction or elimination of bias, empirical testing, and the “wall of separation” between the researcher and his or her subjects.

Qualitative research relies on a more constructivist approach; that is, the logic of the approach moves from the specific to the general in an environment that closely links the observer with the observed. As summarized by Terrell (2012), the quantitative tells us “if” and the qualitative tells us “how or why” (258). Terrell also points out the growing popularity of mixed methods (collecting both quantitative and qualitative data) in nursing, sociology, education, and other fields. By combining both, the researcher can paint a more complete picture of the environment under study.

Debrovolny and Fuentes (2008) point out some significant differences between the qualitative and quantitative approaches that an instructional designer should consider before deciding on the data collection methods to be used; these differences are summarized in Table 3.1.

Table 3.1 Comparing Quantitative and Qualitative Methods

Quantitative | Qualitative
An assumption or hypothesis is made before data is collected | The analyst looks at the big picture, or context, and attempts to describe what is happening
The analyst assumes the situation is constant and everyone sees it the same way | The analyst assumes that people see the situation differently from one another
Requires a large random sample of people or data | The analyst selects specific data or people to study in greater depth
The observer is separated from the observed | The observer and the observed are known to each other and closely involved in the process
Results are described numerically | Results are reported in words or stories
Uses predetermined theories to determine the data to be collected | Derives concepts and theories from the data that is collected
Analyzes data using statistical methods | Uses inductive methods to analyze data, drawing patterns and inferences accordingly

Source: Adapted from Debrovolny and Fuentes (2008).

Pershing (2006) suggests four myths that exist when comparing qualitative and quantitative methods. These are summarized in Table 3.2.

Table 3.2 Quantitative versus Qualitative Methods: Myths and Reality

Myth | Reality
The philosophical positions of quantitative and qualitative methods are incompatible | While the two proceed from different premises, they produce more complete results when used in combination with one another
Quantitative research is more rigorous than qualitative research | Both approaches have standards of rigor that, when followed, yield useful information; when those standards are not followed, neither approach yields useful outcomes
Quantitative research uses a variety of methodological approaches, while qualitative research methods are all the same | A variety of approaches exists within both methods, and expertise lies in selecting and combining the most appropriate tools for the situation
Quantitative methods yield quantitative data, while qualitative methods yield qualitative data | This myth breaks down when, for example, a supposedly quantitative approach (e.g., a survey) in fact yields qualitative information (e.g., opinions)

Source: Adapted from Pershing (2006).

In deciding which data collection tools to use, Debrovolny and Fuentes suggest several preliminary questions whose answers will help determine the approach we should take. These questions include:

  • What research question are we trying to answer?
  • Do we already have data we can use to answer our research question?
  • What data do we need versus what data can we access?
  • How much time have we been given to conduct the analysis (little time may guide us to quantitative methods, while more time might permit qualitative approaches or a combination of both)?

Let's first turn our attention to describing the data collection tools available to us and how the data should be analyzed. The chapter will conclude with a comparative table showing the advantages and disadvantages of each.

The Data Collection Process

There are a number of considerations when deciding which data to collect and how it should be collected. This section addresses these processes in detail.

Establishing Sampling Procedures

A sample is a small, representative group drawn from a larger group, called a population, and is used for quantitative data collection and analysis. Sampling is the process of identifying these smaller groups for examination. It is used to economize the time and expense of gathering information about needs, and it is often the focus of questions (Thompson 2002).

Any sample will deviate to some extent from the “true” nature of the population from which it is drawn; this deviation is known as sampling error. Sampling error cannot be eliminated, but it can be predicted, and conclusions can be reached in a way that considers its effects. A sampling procedure is the method used to select a sample.

Instructional designers commonly use any of four types of sampling procedures: (1) convenience or judgmental sampling, (2) simple random sampling, (3) stratified sampling, and (4) systematic sampling. To determine which one to use, instructional designers should consider the objectives of the needs assessment, certainty needed in the conclusions, the willingness of decision makers in the organization to allow information to be collected for the needs assessment study, and the resources (time, money, and staff) available.

Convenience or judgmental sampling is probably used more often than many instructional designers would care to admit. It is a nonprobability sampling method in that the subjects for review are chosen for convenience or accessibility rather than representativeness. This kind of sampling is tempting because it is usually fast and inexpensive. Unfortunately, convenience or judgmental samples do not yield unbiased results because the choice of cases may be biased from the outset.

To carry out convenience or judgmental sampling, instructional designers (1) select the number of cases to include in the sample based on convenience (they are easiest to obtain), access (capable of examination), or intuition (best guess of number to sample), and (2) choose the sample based on the results of Step 1.

Simple random sampling is probability sampling in which each subject in the population has an equal chance of being selected for study. This sampling procedure is appropriate when the population is large, and it does not matter which cases in the population are selected for examination. To carry out simple random sampling, instructional designers should (1) clarify the nature of the population, (2) list the population, (3) assign an identification number to each member of the population, and (4) select the sample by using any method that permits each member of the population an equal chance of being selected (use a random number table or the random number feature on certain electronic devices).
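
To make these steps concrete, here is a minimal sketch of simple random sampling in Python; the employee names and sample size are hypothetical, and the random module's sample function stands in for a random number table.

```python
import random

# Hypothetical population: a listing of employees (Steps 1-3 above).
population = ["Alvarez", "Baker", "Chen", "Diaz", "Evans", "Foster", "Garcia", "Hassan"]

# Step 4: select the sample so that every member has an equal chance of
# being chosen; random.sample draws without replacement.
sample_size = 3
sample = random.sample(population, sample_size)

print(sample)
```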

Stratified sampling is more sophisticated. It is appropriate when the population is composed of subgroups differing in key respects. In needs assessment, subgroups may mean people in different job classes, hierarchical levels, structural parts of the organization, or geographical sites. They may also mean classifications of people by age group, level of educational attainment, previous job experience, or performance appraisal ratings. Stratified sampling ensures that each subgroup in a population is represented proportionally in a sample.

For instance, suppose 10 percent of an organization comprises salespersons. If it is important in needs assessment to ensure that 10 percent of the sample comprises salespersons, then stratified sampling is appropriate. In simple random sampling, that may not occur. To carry out stratified random sampling, instructional designers should (1) clarify boundaries of the population, (2) identify subgroups within the population, (3) list members of each subgroup, (4) assign numbers to each member of each subgroup, (5) determine what percentage of the population comprises members of each subgroup, and (6) select the sample at random (each subgroup should be represented in proportion to its representation in the population).
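
A minimal sketch of proportional stratified sampling follows; the subgroups, identification codes, and sample size are hypothetical and are intended only to illustrate Steps 4 through 6 above.

```python
import random

# Steps 2-4: identify subgroups, list their members, and assign identifiers.
subgroups = {
    "sales":      ["S%02d" % i for i in range(1, 6)],    # 5 people (10% of the organization)
    "production": ["P%02d" % i for i in range(1, 41)],   # 40 people (80%)
    "support":    ["U%02d" % i for i in range(1, 6)],    # 5 people (10%)
}

population_size = sum(len(members) for members in subgroups.values())
overall_sample_size = 10

stratified_sample = []
for job_class, members in subgroups.items():
    # Step 5: each subgroup's share of the sample mirrors its share of the population.
    share = round(overall_sample_size * len(members) / population_size)
    # Step 6: select that many members at random from the subgroup.
    stratified_sample.extend(random.sample(members, share))

print(stratified_sample)
```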

Systematic sampling is an alternative to other methods. It is simple to use. Suppose that it is necessary to assess the training needs of 10 percent of all employees in an organization. First make a list of everyone in the organization. Then determine the sampling interval by dividing the population size by the desired sample size; for a 10 percent sample, the interval is ten. Finally, select every tenth name on the list. If names are listed in random order, the resulting sample will be as good as a simple random sample. But if there is any order to the list whatsoever, the resulting sample may be biased because of that order.
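
The same logic can be expressed in a brief sketch; the roster below is hypothetical, and shuffling the list first guards against the ordering bias mentioned above.

```python
import random

# Hypothetical organizational roster of 100 employees.
roster = ["Employee %03d" % i for i in range(1, 101)]

desired_fraction = 0.10                  # assess 10 percent of all employees
interval = round(1 / desired_fraction)   # sampling interval: every tenth name

random.shuffle(roster)                   # remove any ordering that could bias the sample
systematic_sample = roster[::interval]   # take every tenth name from the list

print(len(systematic_sample))            # 10 names
```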

Many novices—and, occasionally, even those who are not novices—complain about sample size. On this subject, misconceptions are common. For instance, some people claim a sample size of 5 or 10 percent of a population is adequate for any purpose. Others may (jokingly) claim that any needs assessment is adequate if at least 345 cases are reviewed—because 345 is the minimum number of cases to achieve a representative sample of the entire U.S. population at a low confidence level! However, population size has nothing to do with sample size.

Three issues should be considered when selecting a sample size. First, consider the degree of confidence required. To be 100 percent certain, examine the entire population; if lower degrees of confidence can be tolerated, the percentage of the population to be examined can be reduced. Second, consider the maximum allowable error, the largest amount by which the result may deviate from the true value. Third, consider the standard deviation, which measures variation in the population. When these values are known, an appropriate sample size can be computed.
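
One commonly used formula for estimating the sample size needed to estimate a mean combines these three quantities: n = (z x s / E)^2, where z reflects the desired degree of confidence, s is the standard deviation, and E is the maximum allowable error. The sketch below applies it to hypothetical values; treat the numbers as illustrative only, not as guidance from the text above.

```python
import math

def sample_size(z_score, std_dev, max_error):
    """Estimate sample size for a mean: n = (z * s / E)^2, rounded up."""
    return math.ceil((z_score * std_dev / max_error) ** 2)

# Hypothetical inputs: 95 percent confidence (z of about 1.96), an estimated
# standard deviation of 12 points, and a maximum allowable error of 3 points.
print(sample_size(1.96, 12, 3))  # roughly 62 cases
```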

Determining Data Collection Strategy and Tactics

How will information about instructional needs be collected? Answer this question in the needs assessment plan, making sure that the data collection methods chosen are appropriate for investigating the performance problem. Five methods are typically used to collect information about instructional needs: (1) interviews, (2) direct observation of work, (3) indirect examination of performance or productivity measures, (4) questionnaires, and (5) task analysis. Other possible data collection approaches include (1) key informant or focus groups, (2) nominal group techniques, (3) Delphi procedure, (4) critical incident method, (5) root cause analysis, (6) competency assessment, (7) assessment center, and (8) exit interviews.

To get a better picture of how these techniques work and where they should be used, let's examine each.

Interviews

Interviews are structured or unstructured conversations focusing on needs. Swanson (1994) points out that conducting effective interviews requires a good deal of skill and preparation. The author lists these skills:

  • The ability to develop questions that will get meaningful answers.
  • The ability to ask open-ended questions spontaneously.
  • The ability to create an atmosphere of trust—not defensiveness.
  • The ability to take complete and accurate notes without infusing one's own ideas (p. 80).

Instructional designers should usually focus these conversations on key managers' perceptions about the performance problem and the planned instruction to solve it. A key advantage of interviews is that they allow instructional designers the flexibility to question knowledgeable people, probing for information as necessary (Holstein and Gubrium 2001). A key disadvantage of interviews is that they may be time-consuming and expensive to carry out, especially if travel is required. To plan interviews, instructional designers should:

  • Prepare a list of general topics or questions.
  • Identify people knowledgeable about training needs.
  • Meet with the knowledgeable people and pose questions about training needs.
  • Take notes during or immediately following the interview.

Direct Observations

Direct observations of work are, as the phrase implies, first-hand examinations of what workers do to perform their jobs and how they do it. They may be planned or unplanned, they may rely on specialized forms to record the actions or results of performers, and they may even focus on behavior (Thompson, Felce, and Symons 1999).

Indirect Observations

Indirect examinations of performance or productivity measures are called indirect because they are unobtrusive and do not require instructional designers to observe workers performing; rather, they judge performance from such tangible results or indicators of results as production records, quality control rejects, scrap rates, work samples, or other records about the quantity or quality of work performed. Indirect examinations may be structured (in which results of observations are recorded on checklists) or unstructured (in which the researcher's feelings and perceptions about results are recorded).

Questionnaires

Questionnaires, sometimes called mail or web-based surveys, consist of written questions about instructional needs. They solicit opinions about needs from performers, their supervisors, or other stakeholders. They are sometimes developed from interview results to cross-check how many people share similar opinions or perceptions about needs (Dillman 1999). They may be structured (and use scaled responses) or unstructured (and use open-ended essay responses). In recent years, many people have moved from so-called paper-and-pencil questionnaires to web-based or web-supported questionnaires.

Anecdotal evidence suggests that response rates as poor as 5 percent are not uncommon, and such low response rates are not helpful for drawing generalizations—although they may provide intriguing information for subsequent data collection efforts. Due to these shortcomings, a designer using a questionnaire to collect data should be prepared to:

  • Gain the support of the target group's management so questionnaire recipients understand that their management is supporting the data collection process.
  • Keep the questionnaire to a reasonable length. Response rates are likely to be negatively affected if the instrument is perceived to be overwhelming.
  • Reassure the recipients (and their management) that the data will be held in strict confidence and that no one individual's responses will be divulged.
  • Follow up with the respondents after the data has been collected and analyzed so they have a general idea of what was discovered as a result of the process and how the information will be used.

Task Analysis

Task analysis is a general term for techniques used to examine how work procedures or methods are carried out (Annett and Stanton 2001; Watson and Llorens 1997). While there are many variations to the task analysis process, it focuses on identifying the primary duties of a job, the tasks required to successfully fulfill each duty, and the sub (or supporting) tasks required to complete each major task.

A major challenge to the process is in creating a manageable set of tasks that can be validated and implemented. Many task analysis procedures take an additional step by labeling each major task based on three criteria:

  1. Frequency: That is, how frequently is the task performed? If it is performed infrequently, the instructional designer may need to develop supporting job aids for use when the task is required.
  2. Difficulty: That is, how much skill, knowledge, or experience are needed to complete the task? A high level of difficulty implies extensive training on this task until proficiency is achieved.
  3. Criticality: That is, how important is this task to the overall success of the job? If the task is not that critical, perhaps it should not be included in the overall task analysis or be dealt with in a more informal (rather than a formal training) way.

There are several additional data collection approaches that can be used effectively. These are summarized below.

Key Informant Groups or Focus Groups

Key informant groups or focus groups rely on highly knowledgeable people or committees composed of representatives from different segments of stakeholders (Bader and Rossi 2002; Krueger and Casey 2000). Key informant groups are especially knowledgeable about a performance problem or possible instructional needs; focus groups are committees, usually created informally, that pinpoint instructional needs through the planned participation of representatives from key stakeholder groups.

Nominal Group Technique

The nominal group technique (NGT) takes its name from the formation of small groups in which the participants do not, during the earliest stages of data collection, actively interact. Hence, they are groups in name only—they are only nominal groups. To use NGT in data collection, instructional designers should:

  • Form a panel of people representative of the targeted learners (or their organizational superiors).
  • Call a meeting of the panel.
  • Ask each panel member to write opinions about training needs on slips of paper.
  • Permit no discussion as the opinions are being written.
  • Record items on a whiteboard or by using Post-It notes for subsequent panel discussion.
  • Combine similar responses.
  • Solicit discussion from panel members about what they have written.
  • Ask panel members to vote to accept or reject the opinions about training needs recorded earlier.

The Delphi Procedure

The Delphi procedure takes its name from the famed Delphic Oracle, well-known during ancient Greek times. Similar in some ways to NGT, the Delphi procedure substitutes written questionnaires for small-group interaction to collect information about training needs. To use the Delphi procedure to collect data, instructional designers should:

  • Form a panel of people representative of the target group.
  • Develop a written questionnaire based on the training needs or human performance problems to be investigated. Posing open-ended questions is acceptable at the outset.
  • Send copies of the questionnaire to panel members.
  • Compile results from the initial round of questionnaires and create scales to assess levels of agreement among the experts.
  • Prepare a second questionnaire and send it and the results of the first round to the panel members.
  • Compile results from the second round.
  • Continue the process of feedback and questionnaire preparation until opinions converge, usually after three rounds.

The Critical Incident Method

The critical incident method takes its name from collecting information about critically important (critical) performance in special situations (incidents). Critical incidents were first used as a method of collecting information about the training needs of pilots during World War II and were subsequently used to identify special training needs of CIA agents (Johnson 1983).

To use the critical incident method, instructional designers should:

  • Identify experts such as experienced performers or their immediate supervisors.
  • Interview the experts about performance that is critical to success or failure in performing a job.
  • Ask the experts to relate anecdotes (stories) from their first-hand experience about situations in which performers are forced to make crucially important decisions.
  • Compare stories across the experts to identify common themes about what performers must know.
  • Use this information to identify training needs.

Alternative approaches to this critical incident process may be used and may focus on the most difficult situations encountered, common daily work challenges, or the most common human performance problems observed with newcomers.

Root Cause Analysis

The root cause analysis process was popularized by the quality movement that drew a lot of attention in the 1980s. The strength of the methodology is that it not only attempts to answer the what and how questions, but also the why question. If we know why something happened, then we can get to the source of the problem and address the underlying causes so the issue will not occur again.

There are several ways to depict the process, but, as Rooney and Vanden Heuvel (2004) point out, the process comprises four main steps:

  • Step One: Data collection. All information related to the problem is collected so it can be analyzed to uncover the eventual cause.
  • Step Two: Causal factor charting. A chart is created that displays the sequence of events that led up to the problem occurrence and the conditions that surrounded the event.
  • Step Three: Root cause identification. Once the potential causal factors have been identified, the investigators analyze them to determine the underlying reason(s) for each cause.
  • Step Four: Recommendation generation and implementation. In this final step, the investigators present recommendations for preventing the problem from occurring again.

Okes (2008) points out that effective root cause analysis can often be undermined by common human biases. These include:

  • Recency bias. If the same or a similar problem occurred recently, we assume the current situation resulted from the same causes that affected the earlier problem.
  • Availability bias. In data collection, we tend to collect the data that's easy to obtain rather than the data we should collect.
  • Anchoring bias. Latching onto the first piece of data we collect while ignoring other, more relevant data.
  • Confirmation bias. Collecting only evidence that supports our theory of what caused the problem rather than looking for evidence that might disprove our theory.

Root cause analysis has proven to be an effective tool, when done properly, for uncovering the source of performance problems and correcting the causes to prevent future problems from arising.

Competency Assessment

Competency assessment has been growing in popularity in recent years (Rothwell and Graber 2010; Rothwell, Graber, Dubois, Zabellero, Haynes, Alkhalaf, and Sager 2015; Rothwell and Lindholm, 1999). Its purpose, according to one of many views, is to identify and isolate the characteristics of ideal (exemplary) performers (Dubois and Rothwell 2000). Those characteristics become a foundation for preparing instruction designed to raise average performers to ideal performers. A major advantage of competency assessment is that it is targeted toward achieving ideal performance more than rectifying individual performance problems or deficiencies. But a major disadvantage is that needs assessments using this form of data collection may be expensive and time-consuming to do if they are to be legally defensible.

To use the competency assessment method, instructional designers should:

  • Form a panel of managers or experienced performers.
  • Identify the characteristics of ideal performers. (In this context, characteristics may mean behaviors, results achieved, or both.)
  • Pose the following questions to the panel members: What characteristics should be present in competent performers? How much should they be present? Answering these questions may involve behavioral events interviewing in which exemplary performers are asked to relate a significant work-related story from their experience and describe exactly what they did, how they felt as they did it, and even what they thought as they did it.
  • Devise ways to identify and measure the characteristics.
  • Compare characteristics of actual performers to those described in the competency model.
  • Identify differences that lend themselves to corrective action through planned instruction.

Numerous alternatives to this approach exist. The reason: Views about what should be the basis for competencies may differ. According to one view, for instance, competencies are derived by studying the results (outputs) produced by performers; according to another view, competencies are derived from examining common characteristics shared by exemplary performers.

Assessment Centers

An assessment center is not a place; rather, it is a method of collecting information (Rothwell 2015; Thornton 1992). It is used to screen higher-level job applicants (those applying for senior leadership positions) or to assess the potential of existing employees for promotion. The process can range from a few assessment measures to multiple measures and methods requiring days or even weeks to complete.

Assessment centers are expensive to design and operate, which is a major disadvantage of this approach to data collection. However, their results are detailed, individualized, and job-related, and that is a chief advantage of the assessment center method. To use the assessment center, instructional designers may have to rely on the skills of those who specialize in establishing them. The basic steps in preparing an assessment center are, however, simple enough. They require a highly skilled specialist, familiar with employee selection methods and testing validation, to:

  • Conduct an analysis of each job category to be assessed.
  • Identify important responsibilities for each job.
  • Use the results of Step 2 to develop games or simulations based on the knowledge and skills needed to perform the job successfully.
  • Train people to observe and judge the performance of participants in the assessment center.
  • Provide each individual who participates in the assessment center with specific feedback from observers about training needs.

Performance Records

Marrelli (2005a and 2005b) points out that existing performance records can also serve as a useful source of needs assessment data. Examples include exit interviews, performance appraisals, and work diaries.

Exit interviews are planned or unplanned conversations carried out with an organization's departing employees to record their perceptions of employee training needs in their job categories or work groups. Exit interviews are relatively inexpensive to conduct and have high response rates. However, they may yield biased results because they highlight the perceptions of employees who are leaving the organization.

Many instructional designers wonder when to choose one or more of these data collection methods. While there is no simple way to decide about choosing a method, several important issues identified by Newstrom and Lilyquist (1979, 56) in their classic treatment of this topic are still relevant:

  • Incumbent involvement: How much does the data collection approach allow learners to participate in identifying needs?
  • Management involvement: How much does the data collection approach allow managers in the organization to participate in identifying needs?
  • Time required: How long will it take to collect and compile the data?
  • Cost: What will be the expense of using a data collection method?
  • Relevant quantifiable data: How much data will be produced? How useful will it be? How much will it lend itself to verifiable measurement?

In considering various data collection methods, instructional designers are advised to weigh these issues carefully. Not all data collection methods share equal advantages and disadvantages.

Work Samples

The outputs or actual products of work can be useful in the data collection and analysis phase because they represent the directly observable outcomes of the work being studied. Because they are the output of the work itself, they have high validity, unlike data collected through focus groups or interviews, in which the information is subject to filtering and individual perceptions.

Process Mapping

Process mapping is used to identify the steps taken in sequence to produce a work output. The output of this process is a matrix or a process map. Such a map may also identify the time required to complete each step, a measure of success for each step, and the conditions under which the tasks are performed. The target group for process mapping is a representative group of performers or a small subset of top performers. Marrelli (2005b) points out that process mapping “…can be an excellent approach to identifying the content that should be included in an instructional course, manual, or job aid intended to help workers execute a process” (41).

Unobtrusive Measures

While our goal as instructional designers is to collect valid and relevant data we can use to make sound decisions, not every environment can be said to support this outcome. In some organizations, labor and management conspire, either knowingly or unknowingly, to produce the desired output through less-than-desirable behaviors. This phenomenon is known as “shadowboxing with data,” in which work conditions or other impediments may be hidden by what appear to be desirable outcomes. One example that is much discussed today in education is the concept of “teaching to the test.” While scores may meet a standard, the behavior behind these results may be less than ideal. Winiecki (2009) describes the concept this way:

“In shadowboxing with data, individuals who know what sorts of measured outputs are desired (by themselves or their organization) may modify their practice so as to produce ‘good numbers’ rather than what we consider to be ‘good performance’” (32).

To address this phenomenon, we might consider unobtrusive measures, described as follows. Marrelli (2007) points out that many of the commonly used data collection methods (surveys, focus groups, questionnaires, interviews) are inherently reactive: they alter the situation being studied, and participants, by agreeing to participate, naturally impose their own biases and perceptions. These limitations make it “…difficult to distinguish between typical, real behaviors and behaviors induced by the measurement” (44).

Unobtrusive measures compensate for these limitations. They can be grouped into three categories: (1) physical traces, (2) archives, and (3) observations. An education program that encourages recycling of used paper can be assessed by the volume of recycled paper before and after the program (physical traces). Data stored or collected by an organization but not intended for analysis can be a rich source of information (archives). Target groups performing a job or task under analysis can be observed without their knowledge (observations) to gather valid and reliable information.

Specifying Instruments and Protocols

What instruments should be used during the needs assessment, and how should they be used? What approvals or protocols are necessary for conducting the needs assessment, and how will the instructional designer interact with members of the organization? These questions must be addressed in a needs assessment plan. The first has to do with specifying instruments; the second has to do with specifying protocol.

Many instruments may be used in needs assessment. Common methods of collecting information about instructional needs rely on commercially available or tailor-made questionnaires, interview guides, observation guides, tests, and document review guides. Commercially available instruments and online data collection methods have been prepared for widespread applications, although some consideration of how to use an instrument or groupware program in one organizational setting is usually necessary and should be described in the needs assessment.

Tailor-made instruments are prepared by instructional designers or others for assessing instructional needs in one organization or one job classification. Developing a valid, reliable questionnaire may require substantial work in its own right, and this process should be described in the needs assessment plan. Using groupware necessitates establishing an approach to data collection. Table 3.3 summarizes methods of needs assessment.

Table 3.3 Strengths and Weaknesses of Selected Data Collection Methods

Methods | Incumbent Involvement | Management Involvement | Time Required | Cost | Relevant Quantifiable Data
Interviews | High | Low | High | High | Moderate
Direct observation of work | Moderate | Low | High | High | Moderate
Indirect examinations of performance or productivity measures | Low | Moderate | Low | Low | High
Questionnaires | High | High | Moderate | Moderate | High
Task analysis | Low | Low | High | High | High
Key informant or focus groups | High | Moderate | Moderate | Moderate | Moderate
Nominal group technique | High | Moderate | Moderate | Moderate | Moderate
Delphi procedure | Low | Moderate | Moderate | Moderate | Moderate
Critical incident method | Moderate | Moderate | Low | Low | Low
Competency assessment | Low | High | High | High | High
Assessment center | High | Low | High | High | High
Exit interviews | Low | Low | Low | Low | Low

Source: J. Newstrom and J. Lilyquist. Reprinted from Training and Development Journal (1979), 56. Copyright © 1979. The American Society for Training and Development. Reprinted with permission. All rights reserved.

Protocol means diplomatic etiquette and must be considered in planning needs assessment. It stems from organizational culture—the unseen rules guiding organizational behavior. “Rules” should be interpreted as the means by which instructional designers will carry out the needs assessment, interact with the client, deliver results, interpret them, and plan action based on them. In developing the needs assessment plan, instructional designers should seek answers to such questions as these:

  • With whom in the organization should the instructional designer interact during the needs assessment? (How many people? For what issues?)
  • Whose approval is necessary to collect information? (For example, must the plant manager at each site grant approval for administering a questionnaire?)
  • To whom should the results of the needs assessment be reported? To whom should periodic progress reports be provided, if desired at all?
  • How have previous consultants interacted with the organization? What did they do well, or what mistakes did they make, according to managers in the organization?
  • What methods of delivering results are likely to get the most serious consideration? (For instance, will a lengthy written report be read?)

Instructional designers should always remember that the means by which needs assessment is carried out can influence the results and the willingness of the client to continue the relationship. Use effective interpersonal skills.

Data Analysis

Before discussing the data analysis process, we should consider several critical questions that must be addressed early in the data collection and analysis process. How will results of the needs assessment be analyzed once the information has been collected? This question must be answered in a needs assessment plan. It is also the one question that instructional designers may inadvertently forget. But if it is not considered, subsequent analysis will be difficult because instructional designers may find that they did not collect enough information, or that they collected the wrong kind, to make informed decisions about instructional needs.

Selecting a data analysis method depends on the needs assessment design, which corresponds to a previously selected research design. These designs include: (1) historical, (2) descriptive, (3) developmental, (4) case or field study, (5) correlational, (6) causal-comparative, (7) true experimental, (8) quasi-experimental, and (9) action research (Isaac and Michael 1984).

Historical and case or field study designs usually rely heavily on qualitative approaches to data analysis. The instructional designer describes conditions in the past (historical studies) or present (case or field study). Hence, analysis is expressed in narrative form, often involving anecdotes or literature reviews. Anecdotes have strong persuasive appeal, and they are selected for their exceptional or unusual nature. They are rarely intended to represent typical conditions or situations.

Descriptive designs include interview studies, questionnaires, and document reviews. Data are presented either qualitatively as narrative or quantitatively through simple frequencies, means, modes, and medians. A frequency is little more than a count of how often a problem occurs or an event happens. A mean is the arithmetic average of a set of numbers. A mode is the most common number, and the median is the middle number in an ordered sequence.

Perhaps an example will help to clarify these terms. Suppose we have a series of numbers: 1, 4, 9, 7, 6, 3, 4. The frequency is the number of times each number occurs; here, each number occurs once, except for 4, which occurs twice. The mode of this series is therefore 4, since it occurs most frequently. The median is found by arranging the numbers in order and taking the middle value: 1, 3, 4, 4, 6, 7, 9. The median in this array is 4, since it is the middle number. To find the mean (arithmetic average), add the numbers and divide by how many numbers there are: 1 + 4 + 9 + 7 + 6 + 3 + 4 = 34, and 34 divided by 7 is approximately 4.9. Frequencies, means, modes, and medians are used in analyzing needs assessment data because they are simple to understand and simple to explain to decision makers. In addition, they lend themselves especially well to the preparation of computerized graphics.
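
The same worked example can be verified with Python's built-in statistics module; the numbers below are simply the series used in the paragraph above.

```python
from collections import Counter
from statistics import mean, median, mode

scores = [1, 4, 9, 7, 6, 3, 4]

print(Counter(scores))          # frequency of each value; 4 occurs twice
print(mode(scores))             # 4, the most common value
print(median(scores))           # 4, the middle value of the ordered series
print(round(mean(scores), 1))   # 4.9, the arithmetic average (34 / 7)
```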

The analysis used in other needs assessment designs (developmental, correlational, experimental, quasi-experimental, or causal-comparative) requires more sophisticated statistical techniques. For these designs, the most commonly used data analysis methods include analysis of variance, chi-square, and the t test. When these methods must be used, instructional designers should refer to the detailed descriptions found in statistics textbooks.
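
As an illustration only, a t test comparing the mean scores of two groups can be run with a statistics library such as SciPy; the scores and group labels below are hypothetical.

```python
from scipy import stats

# Hypothetical post-test scores for two groups of learners.
group_a = [72, 85, 78, 90, 66, 81, 77]
group_b = [65, 70, 74, 68, 72, 69, 75]

# Independent-samples t test: do the group means differ more than chance would suggest?
result = stats.ttest_ind(group_a, group_b)
print(result.statistic, result.pvalue)
```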

Before selecting the data collection tools used, the instructional designer must decide whether quantitative data, qualitative data or a combination of both must be gathered to answer the needs assessment question(s).

Whether we're involved in assessing learning or performance gaps, the data collection and analysis phase of the process is critical to the outcome. If we don't know what data to collect or how to analyze it, the outcome of our efforts will be less than stellar.

From a high level, the instructional designer must answer three basic questions in this phase:

  1. What are the needs this project is attempting to satisfy?
  2. What data or information will I need to clarify these needs?
  3. How will I need to analyze this data to arrive at a reliable and valid conclusion?

The answers to these three questions should be front and center in our needs assessment plan. They should be communicated to our clients and rigorously upheld during the process. As members of the instructional design community, we are ethically bound during the data collection and analysis phase to:

  • Avoid “confirmation bias”; that is, identifying a predetermined outcome at the beginning, then collecting and analyzing only the data that supports our predetermined conclusion. Establish guidelines at the beginning with the client and stakeholders about what data will be collected and how it will be used.
  • Select the most appropriate methods based on the general data needed to address the needs assessment questions (that is, quantitative, qualitative, or a combination of both).
  • Avoid any attempt to manipulate or otherwise hide data for political or other reasons.

Now that we've reviewed the steps required to complete the data collection and analysis phases of our project, we are ready to move on to the needs assessment phase itself in the following chapter.
