Chapter 10
Methods of Data Collection

With the planning accomplished, the project is ready for execution. From the evaluation perspective, data collection becomes the first task once the project is under way. Essentially, data are collected at four different levels (reaction, learning, application, and impact), matching the levels described in previous chapters. This chapter presents data collection methods that span all four levels. The list is comprehensive, beginning with questionnaires and ending with the monitoring of business performance data from systems and records. The chapter concludes with tips on selecting the data collection methods to use on specific projects.

Using Questionnaires and Surveys

The questionnaire is probably the most common data collection method. Questionnaires come in all sizes, ranging from short surveys to detailed instruments. They can be used to obtain subjective data about participants' perceptions as well as to document data for use in a projected ROI analysis. Given this versatility and popularity, it is important that questionnaires and surveys be designed properly to serve both purposes.

Types of Questions and Statements

Five basic types of questions or statements are available. Depending on the purpose of the evaluation, the questionnaire may contain any or all of the following types of questions:

  1. Open-ended question. Has an unlimited answer. The question is followed by ample blank space for the response.
  2. Checklist. A list of items. A participant is asked to check those that apply to the situation.
  3. Range of responses. Has alternative responses, such as yes/no or other possibilities. This type of question can also include a range of responses from disagree to agree.
  4. Multiple-choice question. Has several choices, and the participant is asked to select the most appropriate.
  5. Ranking scales. Requires the participant to rank a list of items.

Figure 10.1 shows examples of each of these types of questions.


Figure 10.1 Types of Questions

Design Issues

Questionnaire design is a simple and logical process. An improperly designed or worded questionnaire will not collect the desired data and can be confusing, frustrating, and potentially embarrassing for respondents. The following steps will help ensure that a valid, reliable, and effective instrument is developed.

  • Determine the information needed. The first step of any instrument design is to itemize the topics, issues, and success factors for the project. Questions are developed later. It might be helpful to develop this information in outline form so that related questions can be grouped together.
  • Select the type(s) of questions. Determine whether open-ended questions, checklists, ranges, multiple-choice questions, or a ranking scale is most appropriate for the purpose of the questions. Take into consideration the planned data analysis and variety of data to be collected.
  • Develop the questions—keep it simple. The next step is to develop the questions based on the types of questions planned and the information needed. The questions should be simple and straightforward enough to avoid confusion or leading the participant to a desired response. Unfamiliar terms or expressions should be avoided.
  • Test the questions. After the questions are developed, they should be tested for understanding. Ideally, the questions should be tested on a small sample of participants in the project. If this is not feasible, the questions should be tested on employees at approximately the same job level as the participants. Collect as much input and criticism as possible, and revise the questions as necessary.
  • Prepare a data summary. A data summary sheet should be developed so data can be tabulated quickly for summary and interpretation. This step will help ensure that the data can be analyzed quickly and presented in a meaningful way; a minimal tabulation sketch follows this list.
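
To make the data summary step concrete, the following minimal sketch (in Python, with hypothetical question IDs and responses) tallies questionnaire answers into the frequency counts a summary sheet would report. It illustrates the tabulation idea only; it is not a prescribed tool.

    from collections import Counter

    # Hypothetical responses: each record maps a question ID to one
    # participant's answer.
    responses = [
        {"q1_useful": "yes", "q2_rating": 4},
        {"q1_useful": "no", "q2_rating": 5},
        {"q1_useful": "yes", "q2_rating": 4},
    ]

    # Tally answers per question so results can be reported as frequencies.
    summary = {}
    for record in responses:
        for question, answer in record.items():
            summary.setdefault(question, Counter())[answer] += 1

    for question, counts in sorted(summary.items()):
        total = sum(counts.values())
        for answer, n in counts.most_common():
            print(f"{question}: {answer} = {n}/{total} ({100 * n / total:.0f}%)")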

A Detailed Example

One of the most difficult tasks is determining the specific issues to address in a follow-up questionnaire. Although the content items on a follow-up questionnaire can be the same as those on questionnaires used to measure reaction and learning, the following content items are more desirable for capturing application and impact information (Level 3 and 4 data). Figure 10.2 presents a questionnaire used in a follow-up evaluation of a consulting project on building a sales culture. The evaluation was designed to capture the ROI, with the follow-up questionnaire as the primary method of data collection. This example will be used to illustrate many of the issues involving potential content items for a follow-up questionnaire.


Figure 10.2 Example of Questionnaire


Progress Bank, following a carefully planned growth pattern through acquiring smaller banks, initiated a consulting project to develop a strong sales culture. The project involved four solutions. Through a competency-based learning intervention, all branch personnel were taught how to aggressively pursue new customers and cross-sell to existing customers in a variety of product lines. The software and customer database were upgraded to provide faster access and enhanced routines to assist selling. The incentive compensation system was also redesigned to enhance payments for new customers and increase sales of all branch products. Finally, a management coaching and goal-setting system was implemented to ensure that ambitious sales goals were met. All branch employees were involved in the project.

Six months after the project was implemented, an evaluation was planned. Each branch in the network had a scorecard that tracked performance through several measures such as new accounts, total deposits, and growth by specific products. All product lines were monitored. All branch employees provided input on the questionnaire shown in Figure 10.2. Most of the data from the questionnaire covered application and impact. This type of feedback helps consultants know which parts of the intervention are most effective and useful.

Improving the Response Rate for Questionnaires and Surveys

Given the wide range of potential issues to explore in a follow-up questionnaire or survey, asking all of the possible questions can reduce the response rate considerably. The challenge, therefore, is to design and administer questionnaires and surveys for a maximum response rate. This is a critical issue when the questionnaire is a key data collection activity and much of the evaluation hinges on its results. The following actions can be taken to increase the response rate. Although the term questionnaire is used, the same rules apply to surveys.

  • Provide advance communication. If appropriate and feasible, consulting participants and other stakeholders should receive advance communications about the plans for the questionnaire or survey. This minimizes some of the resistance to the process, provides an opportunity to explain in more detail the circumstances surrounding the evaluation, and positions the evaluation as an integral part of the consulting project rather than an add-on activity that someone initiated three months after the project is completed.
  • Communicate the purpose. Stakeholders should understand the reason for the questionnaire, including who or what initiated this specific evaluation. They should know if the evaluation is part of a systematic process or a special request for this consulting project only.
  • Explain who will see the data. It is important for respondents to know who will see the data and the results of the questionnaire. If the questionnaire is anonymous, it should clearly be communicated to participants what steps will be taken to ensure anonymity. If senior executives will see the combined results of the study, the respondent should know it.
  • Describe the data integration process. The respondents should understand how the questionnaire results will be combined with other data, if available. Often the questionnaire is only one of the data collection methods utilized. Participants should know how the data are weighted and integrated into the entire impact study, as well as interim results.
  • Keep the questionnaire/survey as simple as possible. A simple questionnaire does not always provide the full scope of data necessary for a comprehensive analysis. However, the simplified approach should always be kept in mind when questions are developed and the total scope of the questionnaire is finalized. Every effort should be made to keep it as simple and brief as possible.
  • Simplify the response process. To the extent possible, it should be easy to respond to the questionnaire. If appropriate, a self-addressed stamped envelope should be included. Perhaps e-mail could be used for responses, if it is easier. In still other situations, a response box is provided near the project work area.
  • Utilize local management support. Management involvement at the local level is critical to response-rate success. Managers can distribute the questionnaires themselves, make reference to the questionnaire in staff meetings, follow up to see if questionnaires have been completed, and generally show support for completing the questionnaire. This direct managerial support will prompt many participants to respond with usable data.
  • Let the participants know they are part of the sample. For large consulting projects, a sampling process may be utilized. When that is the case, participants should know they are part of a carefully selected sample and that their input will be used to make decisions regarding a much larger target audience. This action often appeals to a sense of responsibility for participants to provide usable, accurate data for the questionnaire.
  • Consider incentives. A variety of incentives can be offered, and they usually are found in three categories. First, an incentive is provided in exchange for the completed questionnaire. For example, if participants return the questionnaire personally or through the mail, they will receive a small gift, such as a T-shirt or mug. If identity is an issue, a neutral third party can provide the incentive. In the second category, the incentive is provided to make participants feel guilty about not responding. Examples are money clipped to the questionnaire or a pen enclosed in the envelope. Participants are asked to “take the money, buy a cup of coffee, and fill out the questionnaire.” A third group of incentives is designed to obtain a quick response. This approach is based on the assumption that a quick response will ensure a greater response rate. If an individual delays completing the questionnaire, the odds of completing it diminish considerably. The initial group of participants may receive a more expensive gift, or they may be part of a drawing for an incentive. For example, in one project, the first 25 returned questionnaires were placed in a drawing for a $400 gift certificate. The next 25 were added to the first 25 in the next drawing. The longer a participant waits, the lower the odds of winning.
  • Have an executive sign the introductory letter. Participants are always interested in who sent the letter with the questionnaire. For maximum effectiveness, a senior executive who is responsible for a major area where the participants work should sign the letter. Employees may be more willing to respond to a senior executive than to a member of the consulting team.
  • Use follow-up reminders. A follow-up reminder should be sent a week after the questionnaire is received, and another sent two weeks later. Depending on the questionnaire and the situation, these times can be adjusted. In some situations, a third follow-up is recommended. Sometimes the follow-up is sent in a different medium. For example, a questionnaire may be sent through regular mail, whereas the first follow-up reminder comes from the immediate supervisor and a second follow-up is sent via e-mail.
  • Send a copy of the results to the participants. Even if it is an abbreviated report, participants should see the results of the questionnaire. More important, participants should understand that they will receive a copy of the impact study when they are asked to provide the data. This promise will often increase the response rate, as some individuals want to see the results of the entire group along with their particular input.
  • Estimate the length of time to complete the questionnaire. Respondents often have a concern about the time it may take to complete the questionnaire. A very lengthy questionnaire may quickly discourage participants, who may then discard it. Sometimes lengthy questionnaires can be completed quickly because they contain forced-choice questions or statements that make it easy to respond. However, the number of pages may put off the respondent. Therefore, it is helpful to indicate the estimated time needed to complete the questionnaire, perhaps in the letter itself or at least noted in the communications. This provides extra information so that respondents can decide if they are willing to invest the required amount of time in the process. A word of caution: the estimate must be realistic. Purposely underestimating it can do more harm than good.
  • Explain the timing of the planned steps. Sometimes the respondents want to learn more about the process, such as when they can see the results. It is recommended that a time line of the different phases be presented, showing when the data will be analyzed, when the data will be presented to different groups, and when the results will be returned to the participants in a summary report. This provides some assurance that the process is well organized and professional and that the length of time to receive a data summary will not be too long. Another word of caution: The timetable must be followed to maintain the confidence and trust of the individuals.
  • Make it appear professional. While it should not be an issue in most organizations, unfortunately, there are too many cases in which a questionnaire is not developed properly, does not appear professional, or is not easy to follow and understand. The participants must gain respect for the process and for the organization. To do this, a sense of professionalism must be integrated throughout data collection, particularly in the appearance and accuracy of the materials. Sloppy questionnaires will usually elicit sloppy responses, or no response at all.
  • Explain the questionnaire during the project meetings. Sometimes it is helpful to explain to the participants and other key stakeholders that they will be required or asked to provide certain types of data. When this is feasible, questionnaires should be reviewed question by question so that the participants understand the purpose, the issues, and how to respond. This will take only 10–15 minutes but can increase the response rate, enhance the quality and quantity of data, and clarify any confusion that may exist on key issues.
  • Collect data anonymously, if necessary. Participants are more likely to provide frank and candid feedback if their names are not on the questionnaire, particularly when the project is going astray or is off target. When anonymity is needed, every effort should be made to protect the anonymous input, and explanations should be provided as to how the data are analyzed, with demographic reporting kept to a minimum so that individuals cannot be identified in the analysis.

Collectively, these items help boost response rates of follow-up questionnaires. Using all of these strategies can result in a 70–90 percent response rate, even with lengthy questionnaires that might take 30 minutes to complete.

Using Interviews

Another helpful collection method is the interview, although it is not used as frequently as the questionnaire. The consultants, the client's staff, or a third party can conduct interviews. Interviews can secure data not available in performance records, or data difficult to obtain through written responses or observations. Also, interviews can uncover success stories that can be useful in communicating evaluation results. Consulting participants may be reluctant to describe their results in a questionnaire but will volunteer the information to a skillful interviewer using probing techniques. The interview is versatile and appropriate for reaction, learning, and application data. A major disadvantage of the interview is that it is time consuming. It also requires interviewer preparation and training to ensure that the process is consistent.

Types of Interviews

Interviews usually fall into two basic types: structured and unstructured. A structured interview is much like a questionnaire. Specific questions are asked, with little room to deviate from the desired responses. The primary advantages of the structured interview over the questionnaire are that the interview process ensures that all items are completed and that the interviewer understands the responses supplied by the participant.

The unstructured interview permits probing for additional information. This type of interview uses a few general questions, which can lead to more detailed information, as important data are uncovered. The interviewer must be skilled in the probing process. Typical probing questions are as follows:

  • Can you explain that in more detail?
  • Can you give me an example of what you are saying?
  • Can you explain the difficulty that you say you encountered?

Interview Guidelines

The design steps for interviews are similar to those of the questionnaire. A brief summary of key issues with interviews is outlined here.

  • Develop questions to be asked. After the decision has been made about the type of interview, specific questions need to be developed. Questions should be brief, precise, and designed for easy response.
  • Test the interview. The interview should be tested on a small number of participants. If possible, the interviews should be conducted as part of the early stages of the project. The responses should be analyzed and the interview revised, if necessary.
  • Prepare the interviewers. The interviewer should have appropriate skills, including active listening, the ability to form probing questions, and the ability to collect and summarize information into a meaningful form.
  • Provide clear instructions. The consulting participant should understand the purpose of the interview and know what will be done with the information. Expectations, conditions, and rules of the interview should be thoroughly discussed. For example, the participant should know if statements will be kept confidential. If the participant is nervous during an interview and develops signs of anxiety, he or she should be encouraged to relax and feel at ease.
  • Administer interviews with a plan in mind. As with other evaluation instruments, interviews need to be conducted according to a predetermined plan. The timing of the interview, the person who conducts the interview, and the location of the interview are all issues that become relevant when developing an interview plan. For a large number of stakeholders, a sampling plan may be necessary to save time and reduce the evaluation cost.

Using Focus Groups

As an extension of the interview, focus groups are particularly helpful when in-depth feedback is needed. The focus group involves a small group discussion conducted by an experienced facilitator. It is designed to solicit qualitative judgments on a planned topic or issue. Group members are all required to provide their input, as individual input builds on group input.

When compared to questionnaires, surveys, or interviews, the focus group strategy has several advantages. The basic premise of using focus groups is that when quality judgments are subjective, several individual judgments are better than only one. The group process, where participants often motivate one another, is an effective method for generating new ideas and hypotheses. It is less expensive than the interview and can be quickly planned and conducted. Its flexibility makes it possible to explore a consulting project's unexpected outcomes or applications.

Applications for Evaluation

The focus group is particularly helpful when qualitative information is needed about the success of a consulting project. For example, the focus group can be used in the following situations:

  • Assessing the potential impact of the project
  • Evaluating the reaction to the consulting project and the various components of it
  • Assessing learning of specific procedures, tasks, schedules, or other components of the project
  • Assessing the implementation of the consulting project as perceived by the participants immediately following the project's completion
  • Sorting out the causes of success

Essentially, focus groups are helpful when evaluation information is needed but cannot be collected adequately with a simple questionnaire or survey.

Guidelines

While there are no set rules on how to use focus groups for evaluation, the following guidelines should be helpful:

  • Plan topics, questions, and strategy carefully. As with any evaluation instrument, planning is the key. The specific topics, questions, and issues to be discussed must be carefully planned and sequenced. This enhances the comparison of results from one group to another and ensures that the group process is effective and stays on track.
  • Keep the group size small. While there is no magical group size, a range of 8 to 12 seems appropriate for most focus group applications. A group has to be large enough to ensure different points of view but small enough to give every participant a chance to talk freely and exchange comments.
  • Ensure a representative sample of the target population. It is important for groups to be stratified appropriately so that participants represent the target population. The group should be homogeneous in experience, rank, and influence in the organization.
  • Insist on facilitators with appropriate expertise. The success of a focus group rests with the facilitator, who must be skilled in the focus group process. Facilitators must know how to control aggressive members of the group and defuse the input from those who want to dominate. Also, facilitators must be able to create an environment in which participants feel comfortable offering comments freely and openly. Consequently, some organizations use external facilitators.

In summary, the focus group is an inexpensive and quick way to determine the strengths and weaknesses of projects. However, for a complete evaluation, focus group information should be combined with data from other instruments.

Measuring with Tests

Testing is important for measuring learning in project evaluations. Pre- and postproject comparisons using tests are very common. An improvement in test scores shows the change in skill, knowledge, or capability attributable to the consulting project. The questionnaires and surveys described earlier can also be used in testing for learning.
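
As a simple illustration of the pre/post comparison, the sketch below (Python, with hypothetical test scores paired by participant) computes the average gain, which is the figure typically reported as the learning improvement.

    # Hypothetical test scores, paired by participant (same order in both lists).
    pre_scores = [62, 70, 55, 80]
    post_scores = [78, 85, 71, 88]

    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    average_gain = sum(gains) / len(gains)
    pre_mean = sum(pre_scores) / len(pre_scores)

    print(f"Average gain: {average_gain:.1f} points "
          f"({100 * average_gain / pre_mean:.0f}% above the pre-test mean)")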

Performance testing allows the participant to exhibit a skill (and occasionally knowledge or attitude) that has been learned in a consulting project. The skill can be manual, verbal, or analytical, or a combination of the three. For example, computer systems engineers participating in a system-reengineering project are given the assignment to design and test a basic system. The consultant observes participants as they check out the system, then carefully builds the same design and compares his or her results with those of the participants. These comparisons and the performance of the design provide an evaluation of the project and represent an adequate reflection of the skills learned in the project.

Measuring with Simulation

Another technique for measuring learning is job simulation. This method involves the construction and application of a procedure or task that simulates or models the work involved in the consulting project. The simulation is designed to represent, as closely as possible, the actual job situation. Participants try out their performance in the simulated activity and have it evaluated based on how well the task is accomplished. Simulations may be used during the project, or as part of a follow-up evaluation.

Task Simulation

One approach involves a participant's performance in a simulated task as part of an evaluation. For example, in a new system implementation, users are provided a series of situations and they must perform the proper sequence of tasks in a minimum amount of time. To become certified to use this system, users are observed in a simulation, where they perform all the necessary steps on a checklist. After they have demonstrated that they possess the skills necessary for the safe performance of this assignment, they are certified by the consultant. This task simulation serves as the evaluation.

Business Games

Business games have grown in popularity in recent years. They represent simulations of a part or all of a business organization. Participants change the variables of the business and observe the effects of those changes. The game not only reflects the real-world situation, but may also represent a consulting project. The participants are provided certain objectives, play the game, and have their output monitored. Their performance can usually be documented and measured. Typical objectives are to maximize profit, sales, market share, or operating efficiency. Participants who maximize the objectives are those who usually have the highest performance.

Role-Playing/Skill Practice

When skill building is part of the consulting project, role-playing may be helpful. This is sometimes referred to as skill practice: Participants practice a newly learned skill and are observed by other individuals. Participants are given their assigned role with specific instructions, which sometimes include an ultimate course of action. The participants then practice the skill with other individuals to accomplish the desired objectives. This is intended to simulate the real-world setting to the greatest extent possible. Difficulty sometimes arises when other participants involved in the skill practice make the practice unrealistic by not reacting in the same way that individuals would in an actual situation. To help overcome this obstacle, trained role players (nonparticipants trained for the role) may be used in all roles except that of the participant. This can possibly provide a more objective evaluation.

Using Observation

Observing participants and recording changes in behavior and specific actions taken may be appropriate to measure application. This technique is useful when it is important to know precisely how the consulting participants are using new skills, knowledge, tasks, procedures, or systems. For example, participant observation is often used in sales and sales support projects. The observer may be a member of the consulting staff, the participant's supervisor, a member of a peer group, or an external resource, such as a mystery customer.

Guidelines for Effective Observation

Observation is often misused or misapplied to evaluation situations, forcing some to abandon the process. The effectiveness of observation can be improved with the following guidelines:

  • Observers must be fully prepared. Observers must fully understand what information is needed and what skills are covered in the intervention. They must be prepared for the assignment and provided a chance to practice observation skills.
  • The observations should be systematic. The observation process must be planned so that it is executed effectively, without surprises. The individuals observed should know in advance about the observation and why they are being observed, unless the observation is intended to be invisible, in which case the individuals are monitored unknowingly. Observations should be scheduled for times when work situations are normal. Eight steps are necessary to accomplish a successful observation:
    1. Determine what behavior will be observed.
    2. Prepare the forms for the observer's use.
    3. Select the observers.
    4. Prepare a schedule of observations.
    5. Prepare observers to observe properly.
    6. Inform participants of the planned observation, providing explanations.
    7. Conduct the observations.
    8. Summarize the observation data.
  • The observers should know how to interpret and report what they see. Observations involve judgment decisions. The observer must analyze which behaviors are being displayed and what actions the participants are taking. Observers should know how to summarize behavior and report results in a meaningful manner.
  • The observer's influence should be minimized. Except for “mystery” or “planted” observers and electronic observations, it is impossible to completely isolate the overall effect of an observer. Participants will display the behavior they think is appropriate, performing at their best. The presence of the observer must be minimized. To the extent possible, the observer should blend into the work environment and be unnoticeable.
  • Select observers carefully. Observers are usually independent of the participants. They are typically members of the consulting staff. The independent observer is usually more skilled at recording behavior and making interpretations of behavior and is usually unbiased in these interpretations. Using an independent observer reduces the need to prepare observers. However, the independent observer has the appearance of an outsider, and participants may resent the observer. Sometimes it is more feasible to recruit observers from inside the organization.

Observation Methods

Five methods of observation are suggested and are appropriate depending on the circumstances surrounding the type of information needed. Each method is briefly described below.

  • Behavior checklist and codes. A behavior checklist is useful for recording the presence, absence, frequency, or duration of a participant's behavior or action as it occurs. A checklist does not provide information on the quality, intensity, or possible circumstances surrounding the behavior observed. The checklist is useful, though, since an observer can identify exactly which behaviors should or should not occur. The number of behaviors listed in the checklist should be kept to a minimum, and they should appear in a logical sequence if they normally occur in a sequence. A variation of this approach involves coding behaviors or actions on a form. While this method is useful when there are many behaviors, it is more time consuming because a code identifying a specific behavior or action is entered instead of an item simply being checked. Another variation is the 360-degree feedback process, in which surveys are completed on other individuals based on observations within a specific time frame.
  • Delayed report method. With a delayed report method, the observer does not use any forms or written materials during the observation. The information is recorded either after the observation is completed or at particular time intervals during an observation. The observer tries to reconstruct what has been witnessed during the observation period. The advantage of this approach is that the observer is less noticeable, and there are no forms being completed or notes being taken during the observation. The observer becomes more a part of the situation and less of a distraction. This approach is typical of the mystery shopper in retail stores. An obvious disadvantage is that information recorded after the fact may not be as accurate and reliable as information captured at the time it occurred.
  • Video recording. A video camera records behavior or actions in every detail. However, this intrusion may be awkward and cumbersome, and the participants may be unnecessarily nervous or self-conscious while they are being videotaped. If the camera is concealed, the privacy of the participant may be invaded. Because of this, video recording of on-the-job behavior is not frequently used.
  • Audio monitoring. Monitoring conversations of participants is an effective observation technique. For example, in a large communication company's telemarketing department, sales representatives were prepared to sell equipment by telephone. To determine if employees were using the skills and procedures properly, telephone conversations were monitored on a randomly selected basis. While this approach may stir some controversy, it is an effective way to determine if skills and procedures are being applied consistently and effectively. For it to work smoothly, it must be fully explained and the rules clearly communicated.
  • System monitoring. For employees who work regularly with checklists, tasks, and technology, system monitoring is becoming an effective way to “observe” participants as they perform job tasks. The system monitors times, sequences of steps, use of routines, and other activities to determine if the participant is performing the work according to the specific steps and guidelines of the consulting intervention. As technology continues to be a significant part of the workplace, system monitoring holds much promise; a simple sketch follows this list.
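
To illustrate the system-monitoring idea, here is a minimal sketch (Python; the step names and the prescribed sequence are hypothetical) that checks a participant's logged task steps against the steps prescribed by the intervention. In practice the logged steps would come from the system's own activity records; the point is simply that conformance to steps and sequence can be checked automatically.

    # Hypothetical prescribed sequence of job-task steps from the intervention.
    PRESCRIBED = ["verify_identity", "open_account", "offer_cross_sell", "log_outcome"]

    def check_sequence(logged_steps):
        """Report prescribed steps that were skipped and whether the rest occurred in order."""
        missing = [step for step in PRESCRIBED if step not in logged_steps]
        performed = [step for step in logged_steps if step in PRESCRIBED]
        expected = [step for step in PRESCRIBED if step in logged_steps]
        return {"missing": missing, "in_order": performed == expected}

    # A participant performed every step, but out of sequence:
    print(check_sequence(["verify_identity", "offer_cross_sell", "open_account", "log_outcome"]))
    # -> {'missing': [], 'in_order': False}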

Using Action Plans

In some cases, follow-up assignments can generate application and impact data. In a typical follow-up assignment, the consulting participant is asked to meet a goal or complete a particular task or project by a set date. A summary of the results of the completed assignments provides further evidence of the success of the consulting project.

With this approach, participants are required to develop action plans as part of the consulting project. Action plans contain detailed steps to accomplish specific objectives related to the project. The process is one of the most effective ways to enhance support for a consulting project and build the ownership needed for the successful application and impact of the project.

The plan is typically prepared on a printed form, such as the one shown in Figure 10.3. The action plan shows what is to be done, by whom, and the date by which the objectives should be accomplished. The action-plan approach is a straightforward, easy-to-use method for determining how participants will implement the project and achieve success with consulting.


Figure 10.3 Action Plan Example

Using Action Plans Successfully

The development of the action plan requires two major tasks: determining what measure to improve and writing the action items to improve it. As shown in Figure 10.3, an action plan can be developed for a safety consulting project. The plan presented in this figure requires participants to develop an objective related to the consulting project. In this example, the objective is to reduce slips and falls on a hospital floor from 11 to 2 in six months. In some cases, there may be more than one objective, which requires additional action plans. Related to the objective are the improvement measure, the current level, and the target level of performance. This information requires the participant to anticipate the application of the consulting project and set goals for specific performances that can be realized. In another example, an objective may be to reduce equipment downtime for the printing press. The measure is the average hours of downtime, with current performance at six hours per week and a target of two hours per week.

The action plan is completed during the early stages of the consulting project, often with the input, assistance, and facilitation of the consultant. The consultant approves the plan, indicating that it meets the requirements of being Specific, Measurable, Achievable, Realistic, and Time-bound (SMART). The plan can be developed in a one- to two-hour time frame and often begins with action steps related to the implementation of the project. These action steps are Level 3 activities that detail the application of the consulting project. All of these steps build support for, and are linked to, business impact measures. The remaining steps in the process are as follows:

  • Define the unit of measure. The next important issue is to define the actual unit of measure. In some cases, more than one measure may be used and will subsequently be contained in additional action plans. The unit of measure is necessary to break the process down into the simplest steps so that the ultimate value of the project can be determined. The unit can be output data, such as an additional unit manufactured or an additional hotel room rented. In terms of quality, the unit can be one reject, error, or defect. Time-based units are usually measured in minutes, hours, days, or weeks, such as one minute of downtime. Units are specific to their particular situations, such as one turnover of key talent, one customer complaint, or one escalated call in the call center. The important point is to break them down into the simplest terms possible.
  • Require participants to provide monetary values for each improvement. During the consulting project, participants are asked to determine, calculate, or estimate the monetary value for each improvement outlined in the plan. The unit value is determined using standard values, expert input, external databases, or estimates (the consultant will help with this). The process used to arrive at the value is described in the action plan. When the actual improvement occurs, participants will use these values to capture the annual monetary benefits of the plan. For this step to be effective, it is helpful to provide examples of common ways in which values can be assigned to the actual data.
  • Participants implement the action plan. Participants implement the action plan during the consulting project, which often lasts for weeks or months following the intervention. Upon completion, a major portion, if not all, of the consulting project is slated for implementation. The consulting participants implement action-plan steps and the subsequent results are achieved.
  • Participants estimate improvements. At the end of the specified follow-up period—usually three months, six months, nine months, or one year—the participants indicate the specific improvements made, sometimes expressed as a monthly amount. This determines the actual amount of change that has been observed, measured, or recorded. It is important for the participants to understand the necessity for accuracy as data are recorded. In most cases only the changes are recorded, as those amounts are needed to calculate the value of the project. In other cases, before and after data may be recorded, allowing the evaluator to calculate the actual differences.
  • Ask participants to isolate the effects of the project. Although the action plan is initiated because of the project, the improvements reported on the action plan may be influenced by other factors. Thus, the action planning process, initiated in the consulting project, should not take full credit for the improvement. For example, an action plan to reduce employee turnover in a division could take only partial credit for an improvement because of the other variables that affect the turnover rate. While there are several ways to isolate the effects of a consulting project, participant estimation is usually most appropriate in the action-planning process. Consequently, participants are asked to estimate the percentage of the improvement actually related to this particular intervention. This question can be asked on the action plan form or in a follow-up questionnaire.
  • Ask participants to provide a confidence level for estimates. Because the process to convert data to monetary values may not be exact and the amount of the improvement actually related to the project may not be precise, participants are asked to indicate their level of confidence in those two values, collectively. On a scale of 0 to 100 percent, where 0 percent means no confidence and 100 percent means the estimates represent certainty, this value provides participants a mechanism for expressing their uneasiness with their ability to be exact with the process.
  • Collect action plans at specified time intervals. An excellent response rate is essential, so several steps may be necessary to ensure that the action plans are completed and returned. Usually participants will see the importance of the process and will develop their plans in detail early in the consulting project. Some organizations use follow-up reminders by mail or e-mail. Others call participants to check progress. Still others offer assistance in developing the final plan as part of the consulting project. These steps may require additional resources, which must be weighed against the importance of having more data.
  • Summarize the data and calculate the ROI. If developed properly, each action plan should have annualized monetary values associated with improvements. Also, each individual should have indicated the percentage of the improvement directly related to the project. Finally, participants should have provided a confidence percentage to reflect their uncertainty with the process and the subjective nature of some of the data. Applying these adjustments yields a conservative total of benefits that can be compared with the costs of the project, as illustrated in the sketch following this list.
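
To show how these pieces fit together, here is a minimal calculation sketch (Python, with hypothetical action-plan figures). It applies the adjustments described above (annualize the monthly improvement, multiply by the unit value, then scale by the isolation estimate and the confidence percentage) and assumes the standard ROI formula of net benefits divided by costs, expressed as a percentage.

    # Hypothetical action-plan data: monthly improvement in units, monetary
    # value per unit, estimated share of the improvement caused by the project
    # (isolation), and the participant's confidence in those estimates.
    plans = [
        {"monthly_units": 4, "unit_value": 1200.0, "isolation": 0.60, "confidence": 0.80},
        {"monthly_units": 2, "unit_value": 950.0, "isolation": 0.50, "confidence": 0.70},
    ]

    def adjusted_annual_benefit(plan):
        annual = plan["monthly_units"] * plan["unit_value"] * 12  # annualize
        return annual * plan["isolation"] * plan["confidence"]  # conservative adjustments

    total_benefits = sum(adjusted_annual_benefit(p) for p in plans)
    project_costs = 30000.0  # fully loaded project cost (assumed for illustration)

    roi_percent = (total_benefits - project_costs) / project_costs * 100
    print(f"Adjusted annual benefits: ${total_benefits:,.0f}; ROI = {roi_percent:.0f}%")
    # -> Adjusted annual benefits: $35,628; ROI = 19%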

Advantages/Disadvantages of Action Plans

Although there are many advantages to using action plans, there are at least two concerns:

  1. The process relies on direct input from the participant. As such, the information can sometimes be inaccurate and unreliable. Participants must have assistance along the way.
  2. Action plans can be time consuming for the participant and, if the participant's manager is not involved in the process, there may be a tendency for the participant not to complete the assignment.

As this section has illustrated, the action-plan approach has many inherent advantages. Action plans are simple and easy to administer, are easily understood by participants, are suitable in a wide variety of consulting, and are appropriate for all types of data.

Because of the tremendous flexibility and versatility of the process and the conservative adjustments that can be made in analysis, action plans have become important data collection tools for consulting project evaluation.

Using Performance Contracts

The performance contract is essentially a slight variation of the action-planning process. Based on the principle of mutual goal setting, a performance contract is a written agreement between a participant, the participant's manager, and the consultant. The participant agrees to improve performance on measures related to the consulting project. The agreement is in the form of a goal to be accomplished during or after the consulting project. The agreement spells out what is to be accomplished, at what time, and with what results.

The process of selecting the area for improvement is similar to the process used in the action-planning process. The topic selected should be stated in terms of one or more objectives. The objectives should state what is to be accomplished when the contract is complete. The objectives should be as follows:

  • Written
  • Understandable by all involved
  • Challenging (requiring an unusual effort to achieve)
  • Achievable (something that can be accomplished)
  • Largely under the control of the participant
  • Measurable and dated

The details required to accomplish the contract objectives are developed following the guidelines for action plans presented earlier.

Monitoring Business Performance Data

Data are available in every organization to measure business performance. Monitoring performance data enables management to measure performance in terms of output, quality, costs, time, job engagement, and customer satisfaction. When determining the source of data in the evaluation, the first consideration should be existing databases and reports. In most organizations, performance data suitable for measuring improvement from a consulting project are available. If not, additional record-keeping systems will have to be developed for measurement and analysis. At this point, the question of economics surfaces. Is it economical to develop the record-keeping systems necessary to evaluate a consulting project? If the costs are greater than the expected return for the entire project, then it is pointless to develop those systems.

Existing Measures

Existing performance measures should be researched to identify those related to the proposed objectives of the project. In many situations, it is the performance of these measures that has created the need for the project. Frequently, an organization will have several performance measures related to the same item. For example, the efficiency of a production unit can be measured in several ways, some of which are outlined below:

  • Number of units produced per hour
  • Number of on-schedule production units
  • Percentage of utilization of the equipment
  • Percentage of equipment downtime
  • Labor cost per unit of production
  • Overtime required per unit of production
  • Total unit cost

Each of these, in its own way, measures the efficiency of the production unit. All related measures should be reviewed to determine those most relevant to the consulting intervention.

Occasionally, existing performance measures are integrated with other data, and it may be difficult to keep them isolated from unrelated data. In this situation, all existing related measures should be extracted and tabulated again to be more appropriate for comparison in the evaluation. At times, conversion factors may be necessary. For example, the average number of new sales orders per month may be presented regularly in the performance measures for the sales department. In addition, the sales costs per sales representative are also presented. However, in the evaluation of a consulting project, the average cost per new sale is needed. The average number of new sales orders and the sales cost per sales representative are required to develop the data necessary for comparison.
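
As a worked example of such a conversion (the figures below are hypothetical), this short sketch derives the average cost per new sale from the two measures the sales department already reports.

    # Hypothetical monthly figures already reported by the sales department.
    new_sales_orders_per_month = 460  # average new sales orders, all representatives
    sales_cost_per_rep = 9000.0  # monthly sales cost per sales representative
    number_of_reps = 25

    total_sales_cost = sales_cost_per_rep * number_of_reps
    cost_per_new_sale = total_sales_cost / new_sales_orders_per_month
    print(f"Average cost per new sale: ${cost_per_new_sale:,.2f}")  # -> $489.13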

Developing New Measures

In some cases, data are not available for the information needed to measure the effectiveness of a consulting project. The consulting staff must work with the client organization to develop record-keeping systems, if economically feasible. In one organization, turnover among new professional staff prompted a consulting project to correct the problem. To help ensure the success of the project, several measures were planned, including early turnover, defined as the percentage of employees who left the company in the first three months of employment. Initially this measure was not available. When the intervention was implemented, the organization began collecting early turnover figures for comparison.

Several questions regarding this issue should be addressed:

  • Which department will develop the measurement system?
  • Who will record and monitor the data?
  • Where will it be recorded?
  • Will new forms or documentation be needed?

These questions will usually involve other departments or a management decision that extends beyond the scope of the consultants. Often the administration department, operations, or the information technology unit may be instrumental in helping determine whether new measures are needed and, if so, how they will be developed.

Selecting the Appropriate Method for Each Level

This chapter and the previous chapter presented several methods for capturing data. Collectively, they offer a wide range of opportunities for collecting data in a variety of situations. Eight specific issues should be considered when deciding which method is appropriate for a given situation, and they apply when selecting data collection methods at any evaluation level.

Type of Data

Perhaps one of the most important issues to consider when selecting the method is the type of data to be collected. Some methods are more appropriate for Level 4, while others are best for Level 3, 2, or 1. Table 10.1 shows the most appropriate types of data for specific methods of data collection at all levels. Follow-up surveys, observations, interviews, and focus groups are best suited for Level 3 data, sometimes exclusively. Performance monitoring, action planning, and questionnaires can easily capture Level 4 data.

Table 10.1 Collecting Application and Impact Data

Method                      Level 1   Level 2   Level 3   Level 4
Surveys                        ✓         ✓         ✓
Questionnaires                 ✓         ✓         ✓         ✓
Observation                              ✓         ✓
Interviews                     ✓         ✓         ✓
Focus Groups                   ✓                   ✓
Tests                                    ✓
Simulations                              ✓
Action Planning                                    ✓         ✓
Performance Contracting                            ✓         ✓
Performance Monitoring                             ✓         ✓

Participants' Time for Data Input

Another important factor in selecting the data collection method is the amount of time participants must spend with data collection and evaluation systems. Time requirements should always be minimized, and the method should be positioned so that it is a value-added activity (i.e., the participants understand that this activity is something valuable so they will not resist). This requirement often means that sampling is used to keep the total participant time to a minimum. Some methods, such as performance monitoring, require no participant time, while others, such as interviews and focus groups, require a significant investment in time.

Manager Time for Data Input

The time that a participant's direct manager must allocate to data collection is another important issue in the method selection. This time requirement should always be minimized. Some methods, such as performance contracting, may require much involvement from the supervisor before and after the intervention. Other methods, such as questionnaires administered directly to participants, may not require any supervisor time.

Cost of Method

Cost is always a consideration when selecting the method. Some data collection methods are more expensive than others. For example, interviews and observations are very expensive. Surveys, questionnaires, and performance monitoring are usually inexpensive.

Disruption of Normal Work Activities

Another key issue in selecting the appropriate method—and perhaps the one that generates the most concern among managers—is the amount of disruption the data collection will create. Routine work processes should be disrupted as little as possible. Some data collection techniques, such as performance monitoring, require very little time and distraction from normal activities. Questionnaires generally do not disrupt the work environment and can often be completed in only a few minutes, or even after normal work hours. At the other extreme, methods such as observation and interviews may be too disruptive to the work unit.

Accuracy of Method

The accuracy of the technique is another factor to consider when selecting the method. Some data collection methods are more accurate than others. For example, performance monitoring is usually very accurate, whereas questionnaires can be distorted and unreliable. If actual on-the-job behavior must be captured, observation is clearly one of the most accurate methods.

Utility of an Additional Method

Because there are many different methods of collecting data, it is tempting to use too many of them. Multiple data collection methods add to the time and cost of the evaluation and may yield very little additional value. Utility refers to the added value of using one more data collection method. When more than one method is used, this question should always be addressed: Does the value obtained from the additional data warrant the extra time and expense of the method? If the answer is no, the additional method should not be implemented.

Cultural Bias for Data Collection Method

The culture or philosophy of the organization can dictate which data collection methods are used. For example, some organizations are accustomed to using questionnaires and find the process fits in well with their culture. Some organizations will not use observation because their culture does not support the potential invasion of privacy often associated with it.

Final Thoughts

This chapter outlines techniques for data collection—a critical issue in determining the success of the project. These essential measures reveal not only the success achieved but also the areas where improvement is needed and where the success can be replicated in the future. Several techniques are available, ranging from questionnaires to action planning and business performance monitoring. The method chosen must match the scope of the project, the resources available, and the accuracy needed. Complicated projects require a comprehensive approach that measures all of the issues involved in application and impact. Simple projects can take a less formal approach and collect data with only a questionnaire. The next chapter explores the issues of collecting data at the different levels.
