Chapter 4

Using Surveys and Questionnaires

Caroline Hubble

In This Chapter

This chapter discusses questionnaires, which are among the most common data collection instruments used to capture evaluation data. A planned, structured questionnaire that asks the right questions the right way ensures the needed data are collected. In this chapter you will learn to

  • secure a plan
  • create the content
  • optimize the effectiveness
  • prepare for distribution
  • execute the tool.
 

Steps in Developing Surveys and Questionnaires

Once evaluation planning is complete, the plans are put into motion, thereby initiating data collection. Collecting the data is a critical activity in the evaluation process, and using effective and efficient data collection instruments is essential. Developing and implementing a questionnaire or survey involves completing the following five phases:

  • Securing a plan
  • Creating the content
  • Optimizing the effectiveness
  • Preparing for distribution
  • Executing the tool.

By completing these phases and maintaining the scope of the data collection instrument, the foundation is set to collect the data needed to answer the study’s questions.

Securing a Plan for the Questionnaire

By definition, a questionnaire is an instrument designed to capture a wide range of data, from attitudes to specific improvement data. A survey, which is a type of questionnaire, is more limited and focuses on capturing the attitudes, beliefs, and opinions of the respondents (Phillips and Stawarski, 2008). Because questionnaires can be customized to meet specific evaluation needs, they are one of the more frequently used data collection instruments. Don’t assume, however, that they require minimal effort to implement. As with all projects, successful questionnaires require a carefully thought-out plan.

A Clearly Defined Purpose Is Critical to Success

Successful questionnaires are built on sound design and are linked to the evaluation study’s research questions. To be successful in using a questionnaire, consider the following:

  • Why is the questionnaire needed?
  • What research questions will the collected data answer?
  • Is the questionnaire the right instrument to collect the needed data?
  • Based on the organization’s needs, culture, and data sources, will the questionnaire work for gathering the data?

Answering these questions helps define the feasibility and purpose of the questionnaire. The data collection plan is a critical tool in determining the practicality of using a questionnaire. The objectives, specific measures, and data sources provided in the data collection plan help formulate the purpose of the questionnaire.

Maintaining the Scope Keeps the Data Collection Focused

Once you decide that the questionnaire is feasible and you define the purpose, the next step is documenting the questionnaire’s scope. As mentioned above, questionnaires are flexible tools able to collect a wide variety of data. While this flexibility is a valuable asset, it can present a challenge: the questionnaire has the potential to evolve beyond its intended purpose. When this happens, content not related to the study can creep in, and the tool loses focus. To ensure this doesn’t occur, it is essential to adhere to the scope of the questionnaire, which is defined by its purpose and goals. Once the scope is documented, maintain it throughout the life cycle of the questionnaire.

Detailed Plans Ensure Critical Activities Are Completed

Once you identify the purpose and scope of the questionnaire, create a detailed plan for the design, development, and implementation of the tool. In addition to the data collection plan (see table 4-1), develop a portion of the evaluation project plan to identify the design, testing, and implementation steps. Once these steps are identified, incorporate the timeline and resources needed to complete the work (see table 4-2). When developing the timeline, it is vital to factor in sufficient time for developing and testing the questionnaire. Although these activities are time-consuming, they are important: it is extremely challenging to change an implemented questionnaire without compromising the study’s credibility.

Creating the Questionnaire’s Content

After you define the purpose and outline the plan, the next task is to develop the questionnaire’s content. This process is probably the most labor-intensive part of developing the questionnaire and involves two primary activities. First, determine the structure of the questionnaire. Once identified, you can then develop the actual questions. These steps ensure the questionnaire has the required elements to gather the needed data to answer the study’s research questions.

The Structure of the Questionnaire Identifies Where to Put the Content

The first step in successfully creating the content is outlining what content the questionnaire needs and where it should be placed. Placing all the relevant information, instructions, and questions in the appropriate place and in an organized manner ensures the respondents are able to provide accurate and valuable data. The structure has three main areas: the introduction, the body, and the conclusion.

The Introduction Sets the Tone

The introduction is the first section of the questionnaire that the respondent reads, and it sets the tone for the remainder of the instrument. Because of this, it is important to include all the relevant information to engage the reader. The primary content includes informing the respondent of the purpose of the questionnaire and evaluation study, why he or she was chosen to participate, what will be done with the data he or she provides, the timeline for completing the questionnaire, and a point of contact if questions arise. Finally, incorporate any specific details into this section, such as whether responses will remain anonymous and the estimated time it will take to answer the questions.

Table 4-1. Sample Data Collection Plan


Level 1: Satisfaction/Planned Action
Broad Program Objective(s):
  • Relevance to job
  • Recommend to others
  • Overall satisfaction with course
  • Learned new information
  • Intent to use material
Measures: At the end of the program, 90% of participants will rate the applicable questions a 4.00 out of 5.00 (strongly agree/agree, indicating satisfaction)
Data Collection Method: End-of-course evaluation
Data Sources: Participants
Timing: End of course
Responsibilities: Participants; data in LMS; facilitator (introduce/remind)

Level 2: Learning
Broad Program Objective(s): Increase in knowledge, skills, and attitudes regarding selling
Measures:
  • 100% of participants are able to demonstrate use of the 5 selling steps
  • 100% of participants achieve a passing score (85%) in one attempt at the end-of-program assessment
Data Collection Method: Role play; end-of-program assessment
Data Sources: Participants
Timing: End of course
Responsibilities: Facilitator; data in LMS

Level 3: Application/Implementation
Broad Program Objective(s):
  • Ability to use selling steps (including extent able to use)
  • Frequency of using selling steps
  • Enablers/barriers to applying selling steps
Measures:
  • 100% of participants are able to successfully use the selling steps (4.00 out of 5.00 on success scale)
  • 100% of participants achieve a passing score (85%) in one attempt at the end-of-program assessment
Data Collection Method: Questionnaire
Data Sources: Participants
Timing: 60 days postcourse
Responsibilities: Evaluation team/lead

Level 4: Business Impact
N/A for all fields

Level 5: ROI
N/A for all fields

Comments: Evaluating program to Level 3—Application/Implementation

Table 4-2. Sample Evaluation Project Plan—Data Collection


ID Task Start Finish Resource(s) Done
2.1 Data Collection        
2.1.1 Identify data utility 05.01.08 06.01.08 CH X
2.1.2 Develop data collection instrument(s)        
2.1.2.1 Create draft questionnaire 06.01.08 06.12.08 CH X
2.1.2.2 Review 06.12.08 06.30.08 CH, team X
2.1.2.2.1 Review with evaluation team 06.13.08 06.20.08 CH X
2.1.2.2.2 Review with stakeholder team 06.21.08 06.22.08 Team X
2.1.2.3 Update based on results 06.23.08 06.28.08 CH X
2.1.2.4 Finalize/test questionnaire 06.29.08 07.06.08 CH, team X
2.1.2.5 Develop high-response strategy 06.01.08 07.06.08 CH, team X
2.1.2.6 Finalize questionnaire communication 06.15.08 07.06.08 CH, team X
2.1.3 Collect data during program        
2.1.3.1 Level 1 and Level 2 data 05.17.08 05.17.08 SH X
2.1.4 Collect data postprogram        
2.1.4.1 Level 3 data 07.15.08 07.29.08 CH, team X
2.1.4.1.1 Implement questionnaire admin/communications plan 07.10.08 07.31.08 CH, team X
2.1.4.1.2 Implement questionnaire 07.15.08 07.15.08 CH, team X
2.1.4.1.3 Close questionnaire 07.29.08 07.29.08 CH X

The Body Drives Successful Data Collection

The body is where the questions are incorporated. Use different sections to identify the focus of the questions. A best practice is to divide the questions into sections based on the levels of evaluation. This format provides a flow that supports successful data collection.

The Conclusion Thanks the Respondent

Finalizing the questionnaire with a short conclusion section acknowledges the end of the questionnaire and provides an opportunity to thank the respondent for his or her time and contributions. This section can include any final instructions and reminders. If using a paper-based questionnaire, provide critical information to remind the respondent what to do with the completed document. Finally, repeat the point-of-contact information in case the respondent would like to follow up after completing the questionnaire.

Developing the Perfect Question Involves a Few Critical Steps

Of all the activities involved in developing a questionnaire, creating the questions is the most important and time-consuming task. Asking the right question the right way drives the data collection and supports efficient data analysis. By following the three key steps below, you increase your likelihood of collecting the data you require.

Step 1: Identify the question’s intent. The intent of the question derives from the specific objectives identified on the data collection plan. For each level of evaluation targeted for collecting data, the specific measure defines the exact data needed to determine whether the program’s objective was achieved. This information forms the intent of the questions.

Step 2: Determine the type of question. Various question types are available, and selecting the right one ensures you’re asking the question the right way. Becoming familiar with the different question types (see table 4-3) facilitates the selection of the best one for the need. When selecting the question type, consider the consequences of the question format. For example, open-ended questions enable the respondent to freely provide information but may lead to extensive analysis. For rating-scale questions, using odd- or even-numbered scales depends on the need of the questionnaire, because there is no conclusive right or wrong way (Fink, 2003). Ultimately, the goal is to ensure you select the best question type to support the intent of the question.

Step 3: Finalize the question. Well-written content further ensures that the right question is being asked the right way. As each question is formulated, keep the point of view (first or second person), tone, and tense consistent. The question should not contain unfamiliar words or acronyms, or make assumptions about what the respondent knows. When working with rating-type questions, confirm that the question’s statements align with the actual options available. For rating scales that involve terminology that may not be commonly known (for example, very successful to not successful), provide definitions for each choice. This will not only help the respondent select the most accurate choice for his or her situation but also ensure the question’s intent is understood by all respondents. Last, review the questions to validate that they do not contain leading or loaded statements that could influence the respondent to answer a certain way.

Table 4-3. Sample Question Types


Close-ended question

 

Multiple-choice:

While applying the skills and knowledge acquired from the Leadership Development program, I was supported/enabled by the following (select all that apply):

Management

Support from colleagues and peers

Confidence to apply

Networking

Other (please specify below)

One answer:

Of the measures provided below, which one measure is most directly affected by your application of the collaborative problem-solving process?

Personal productivity

Cost savings

Sales/revenue

Quality

Time savings

Other (please specify below)

Open-ended question

 

Free text:

In addition to the above, what other benefits have been realized by the Leadership Development Program? Use the text box below to provide your answer.

Numeric:

Approximately how many new sales leads did you identify as a result of your participation in the Sales Marketing Retreat? ______

Rank or order question

 

For there to be successful virtual networking within the Virtual Learning Community, rank the following items in order of importance where 1 is the most important and 5 is the least important:

__Discussion groups

__Professional place to meet with program peers and faculty (for example, chat rooms)

__Student, alumni, and professor contact information

__Student expertise captured, shared, and valued

__Student personal/professional profiles (for example, interests, success stories, etc.)

Rating-scale question

 

Likert:

Since attending the Leadership Development Program, I have confidence in my ability to meet with individuals to discuss performance concerns.

Strongly Agree   Agree   Neither Agree nor Disagree   Disagree   Strongly Disagree

Semantic differential:

Based on your participation in the Brand Awareness Seminar, please indicate the extent to which your attitude has improved regarding the company’s brand message.

 

No Improvement   1   2   3   4   5   Significant Improvement

 

Practitioner Tip

Before building your questionnaire, confirm the objectives (at each level) with key stakeholders and be knowledgeable of the survey tool’s capabilities and limitations.

—Clifton Pierre

Certified ROI Professional

Optimizing the Effectiveness of the Questionnaire

Two final steps occur before launching the questionnaire. First, draft and review the completed questionnaire to ensure the needed information is incorporated. Second, test the instrument. Completing these steps strengthens the questionnaire’s effectiveness and leads to greater data collection success.

Draft Questionnaires Validate That the Elements Are Incorporated

Drafting the questionnaire ties the content together into a complete draft of the data collection instrument. This is the opportunity to review the introduction, body, and conclusion of the questionnaire to confirm the needed information is included. A thorough review also verifies that the flow of the questionnaire is acceptable, the reading level is appropriate, and the overall appearance is professional and appealing. While reviewing the draft questionnaire, consider how the questionnaire will be administered. Does the survey tool support the question types and functionality (for instance, skip logic) needed to administer the questionnaire?
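If the tool does support skip logic, it helps to write the rules out explicitly before configuring them. The sketch below is a minimal, hypothetical model of such rules; the Question and SkipRule names are illustrative only, and any real survey platform will have its own configuration for this.

# A minimal, hypothetical model of skip-logic rules. Names (Question,
# SkipRule) are illustrative; survey platforms define their own versions.
from dataclasses import dataclass, field

@dataclass
class SkipRule:
    answer: str            # the response that triggers the skip
    target_question: str   # the question to jump to

@dataclass
class Question:
    qid: str
    text: str
    skip_rules: list = field(default_factory=list)

def next_question(question: Question, answer: str, default_next: str) -> str:
    """Return the id of the next question to present."""
    for rule in question.skip_rules:
        if answer == rule.answer:
            return rule.target_question
    return default_next

# Respondents who did not apply the skills skip straight to the barriers item.
q1 = Question("Q1", "I applied the knowledge/skills I learned.",
              skip_rules=[SkipRule("No", "Q6")])
print(next_question(q1, "No", default_next="Q2"))   # -> Q6
print(next_question(q1, "Yes", default_next="Q2"))  # -> Q2

Writing the rules down this way also makes them easy to walk through with reviewers before testing the live tool.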

Review the specific content to validate that the needed questions are integrated. Compare the questions included in the questionnaire against the data needed to complete the analysis. Finally, review the entire document for typos and other editing elements. Remember, the final draft should be an accurate, complete representation of the questionnaire so it can be tested to confirm it is ready for distribution.

Questionnaire Tests Confirm Content and Functionality

A thorough test of the tool is one activity that consistently accompanies successful questionnaires. The four main areas to focus on during testing are the functionality, experience, accuracy, and alignment of the tool. The feedback regarding these areas either confirms the soundness of the tool or provides insight into improvements that need to occur before it is administered. When completing the test, have a sample group drawn from the actual respondent population complete a test run. If this is not possible, have individuals participate who are similar to the respondent group. Developing a specific document to capture the required feedback from the test group is a best practice.

Focus Area 1: Functionality. Whether the questionnaire is administered electronically or is paper-based, the functionality of the tool needs to be tested. If it is an electronic questionnaire, check that the links work, the question features function correctly (for example, drop-down boxes, rankings, and so on), and other specific elements perform as desired (for instance, moving between pages, skip logic, and so on). For a paper-based questionnaire, review the layout to ensure there is space to provide responses and the questions are easy to read.

Focus Area 2: Experience. Another valuable area for feedback is the respondent’s experience in completing the questionnaire. Ask the test group to provide feedback regarding the questionnaire’s appearance, flow, layout, and ease of completion. This information, along with how long the questionnaire took to complete, further supports a successful launch.

Focus Area 3: Accuracy. Accuracy of the questionnaire involves confirming that needed content and instructions are included, and the right questions are asked the right way. Also review the validity and reliability of the instrument during the test. To be an effective data collection instrument, the questionnaire needs to provide consistent results over time (reliability) and measure what it is intended to measure (validity) (Phillips and Phillips, 2007).
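The chapter does not prescribe a particular reliability statistic, but Cronbach’s alpha is one commonly used estimate of internal consistency for rating-scale items. The sketch below, using hypothetical pilot data, shows how it could be computed from a test run; treat it as one possible check rather than the chapter’s method.

# A minimal sketch for estimating internal-consistency reliability of a
# pilot test's rating-scale items with Cronbach's alpha. The data below
# are hypothetical.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows = respondents, columns = rating-scale items."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 5 respondents answering 4 Likert items (1-5).
pilot = np.array([
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
])
# Prints about 0.93 for this data; values above roughly 0.7 are often
# considered acceptable.
print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")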

Focus Area 4: Alignment. One area that is frequently overlooked during the development of a questionnaire is how the collected data will be used and analyzed. After the results of the test are back, review them to confirm their utility. Can the results be analyzed to answer the research questions? Can the results be compared with the baseline data? Addressing these types of questions allows for adjustments to be made, which ultimately supports efficient data analysis.

Preparing for the Questionnaire’s Distribution

With the questionnaire completed, a few final activities remain before you can successfully collect the data. The first step is to finalize the respondent population. After the group of respondents is determined, perform various administrative tasks before the questionnaire is ready for distribution.

The Right Respondent Group Reinforces Credibility

Determining the respondent population is a key step that you must complete before you launch the questionnaire. This information is captured on the data collection plan. The sources represent individuals who can provide relevant and accurate information from their experiences related to the evaluated program. Once the population has been verified as a credible source, the next step is to determine the sample size.

Determining the sample size involves identifying the number in the population, the confidence interval and level, the degree of variability, and the organization’s normal response rate (see figure 4-1). Together with a sample size table, these factors determine the sample size.

The final consideration is determining whether the responses will be anonymous. The best practice is to maintain respondent confidentiality because it encourages open, candid responses; there is a link between respondents remaining anonymous and their honesty (Phillips and Stawarski, 2008). If collecting anonymous responses is challenging, however, have a third-party resource collect the results.

Administrative Tasks Clear the Path for a Seamless Launch

Obtaining a 100-percent response rate is the ultimate goal. Although this may not always be feasible, there are strategies that support achieving high response rates. Guiding Principle #6 of the ROI Methodology states that if no improvement data are available for a population or from a specific source, it is assumed that little or no improvement has occurred (Phillips and Phillips, 2005). Following this principle, individuals who do not provide data are assumed to have experienced no improvement. Therefore, it is critical to obtain as many responses as possible to ensure the data are collected to answer the research questions.
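To see why this principle makes response rates so important, consider the arithmetic: every non-respondent counts as zero improvement, so the reported average falls as the response rate falls. A minimal sketch, with hypothetical numbers:

# Guiding Principle #6 in arithmetic form: non-respondents are assumed to
# show no improvement, so the average is diluted by every missing response.
def average_improvement(reported: list[float], population_size: int) -> float:
    """Average improvement across the whole population, counting
    non-respondents as zero improvement."""
    return sum(reported) / population_size

reported = [20.0, 15.0, 25.0, 10.0]  # % improvement from 4 respondents
print(average_improvement(reported, population_size=4))   # 17.5 (100% response)
print(average_improvement(reported, population_size=10))  # 7.0  (40% response)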

Figure 4-1. How to Determine the Sample Size

Determining the sample size can be completed using sample size tables and the five steps below.

1. Determine the population size.
   Example: 350

2. Determine the confidence interval (the margin of error, or accuracy, of the results).
   Example: ±5

3. Determine the confidence level (the risk you are willing to accept that the sample falls within the average).
   Example: 95% confidence

4. Determine the degree of variability (the degree to which the measured concepts are distributed within the population).
   Example: Estimate that responses divide roughly 50%–50% on the concepts; using a sample size table, the base sample size needed is 187.

5. Estimate the response rate to determine the final sample size needed.
   Example: Based on the organization’s normal 85% response rate, the final sample size needed is 220 (187/.85).

Key Take-Away: Various sites on the Internet provide useful tools (for example, sample size tables and calculators) for determining the sample size.

Source: Watson (2001).
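Figure 4-1 relies on a sample size table. If you prefer to compute the base figure directly, Yamane’s simplified formula, n = N / (1 + N × e²), assumes roughly 95% confidence and maximum (50/50) variability and happens to reproduce the figure’s numbers. Other tables and calculators use different formulas, so treat this sketch as one option rather than the chapter’s method.

# A minimal sketch of sample-size determination using Yamane's simplified
# formula, followed by the response-rate adjustment from figure 4-1.
import math

def base_sample_size(population: int, margin_of_error: float) -> int:
    """Yamane's formula: n = N / (1 + N * e^2). Assumes ~95% confidence
    and maximum variability (a 50/50 split on the measured concepts)."""
    n = population / (1 + population * margin_of_error ** 2)
    return math.ceil(n)

def adjusted_for_response_rate(base: int, response_rate: float) -> int:
    """Inflate the sample so the expected responses still reach the base."""
    return math.ceil(base / response_rate)

base = base_sample_size(population=350, margin_of_error=0.05)
print(base)                                    # 187, matching figure 4-1
print(adjusted_for_response_rate(base, 0.85))  # 220, matching figure 4-1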

Incorporating applicable strategies identified in table 4-4 greatly improves the chances of reaching the desired response rate. Additionally, you should follow a comprehensive communication strategy that includes four key components.

Component 1: Content. The communication’s content provides relevant information so respondents are fully aware of the questionnaire and its purpose. The details describe the action needed, the process for successfully providing the information, and the expected use of their responses. Other information to incorporate includes a point of contact for questions, the completion due date, and the approximate time it will take to complete the questionnaire. Last, identify information relevant to reminders and thank-you communications.

Component 2: Delivery Method. Use a variety of methods to deliver the needed communications. Although written communications are the most common method, using various delivery methods (for example, reminder phone call, in-person dialogue) can positively influence the response rate.

Component 3: Resources. Using a variety of resources (for example, executives, managers) to communicate the information about the questionnaire increases the awareness and potential response rate.

Component 4: Timeline. The timeline is the last piece of the communication strategy. As needed, adjust the timeline so the questionnaire does not compete with other organizational initiatives. To further support achieving the desired response rate, build in extra time in case the questionnaire’s response window needs to be extended.

Executing the Questionnaire to Collect the Data

With all the elements in place, it is finally time to launch the questionnaire (see the sample in figure 4-2). On the day the questionnaire is to be implemented, prepare the communication and, if applicable, verify the electronic questionnaire is ready to accept responses. A best practice is to check the responses shortly after launching the questionnaire. If responses have been received, the process is confirmed to be working. If there are no responses, double-check that the questionnaire request was received. When sending reminders, providing the current response rate further encourages individuals to complete the questionnaire. Finally, once all the responses are collected, officially close the questionnaire and begin analyzing the data.

Table 4-4. Examples of Strategies for Improving Response Rates


The following strategies can improve the response rate of the questionnaire:

Have the introduction letter signed by a top executive, administrator, or stakeholder

Indicate who will see the results of the questionnaire

Inform the participants what action will be taken with the data

Keep the questionnaire simple and as brief as possible

Make it easy to respond to; include a self-addressed, stamped envelope or email

If appropriate, let the target audience know that it is part of a carefully selected sample

Send a summary of results to the target audience

Review the questionnaire at the end of the formal session

Add emotional appeal

Allow completion of the survey during work hours

Design the questionnaire to attract attention, with a professional format

Use the local point of contact to distribute the questionnaires

Identify champions who will show support and encourage responses

Provide an incentive (or chance for incentive) for quick response

Consider paying for the time it takes to complete the questionnaire

 

Source: Phillips and Phillips (2005).

Figure 4-2. Sample Follow-Up Questionnaire

Our records indicate that you participated in the Leadership Program (LP). Your participation in this follow-up questionnaire is important to the program’s continuous improvement and to understanding the effect the program is having on the organization. Completing this questionnaire will take approximately 30 minutes, and we request your responses by January 31, 2010. Should you have any questions, please contact [email protected]. Thank you in advance for your contributions!

APPLICATION

1. I applied the knowledge/skills I learned during the Leadership Program. (yes, no)

2. I spend the following percent of my total work time on tasks that require the knowledge/skills covered in the Leadership Program (circle the applicable answer).

0%   10%   20%   30%   40%   50%   60%   70%   80%   90%   100%

Questions 3 and 4 use the following scale: Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree.

3. I used at least one technique learned from the Leadership Program to improve my leadership capabilities.

4. I completed at least one step in my action plan for becoming a better leader.

5. While applying the knowledge and skills from the Leadership Program, I was supported by the following:

(check all that apply)

tools and templates provided

my management

support from colleagues and peers

confidence to apply the materials

networking

other

If “other” selected above, please describe:

6. The following deterred or prevented me from applying the Leadership Program’s knowledge and skills:

(check all that apply)

no opportunity to use the material

lack of support from management

not enough time

lack of confidence to do it

lack of resources

other

If “other” selected above, please describe:

RESULTS

7. As a result of applying the skills I attained from participating in the Leadership Program, the measures below have improved as follows:

Note: When answering these questions please use the following scale:

(5) Significant improvement = the measure has improved by at least 90% in the past three months

(4) Strong improvement = the measure has improved by at least 75% in the past three months

(3) Some improvement = the measure has improved by at least 50% in the past three months

(2) Limited improvement = the measure has improved by at least 25% in the past three months

(1) No improvement = the measure has improved by 0% in the past three months

(0) N/A = this measure is not applicable to my work

For each measure below, select a rating from 5 (Significant Improvement) to 1 (No Improvement), or 0 (N/A):
Productivity
Sales / revenue
Quality of work
Cost savings
Efficiency
Time savings
Teamwork
Innovation
My job satisfaction
My employees’ job satisfaction
Customer satisfaction
Other

 

If “other” selected, please describe the other measures that were positively influenced by the program:

8. Recognizing that other factors could have influenced the above improvements, I estimate the percent of improvement attributable (i.e., isolated) to the Leadership Program to be (express as a percentage, where 100% represents fully attributable):

_________%

9. My confidence in the estimation provided in the above question is (0% is no confidence; 100% is certainty)

_________%

10. I have the following suggestions for improving the Leadership Program:

 

Thank you again for your time and valuable contributions!
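Questions 8 and 9 of the sample questionnaire exist so the analysis can temper each respondent’s claimed improvement. A common adjustment in the ROI Methodology literature is to multiply the improvement by the attribution estimate and again by the confidence estimate; the sketch below illustrates that calculation with hypothetical numbers, and the exact analysis steps will depend on the study.

# A minimal sketch of the multiply-through adjustment applied to answers
# from questions 8 and 9. All numbers are hypothetical.
def adjusted_improvement(improvement: float,
                         attribution: float,
                         confidence: float) -> float:
    """All three inputs are fractions between 0 and 1."""
    return improvement * attribution * confidence

# Hypothetical respondent: 40% improvement, 50% attributable, 80% confident.
print(f"{adjusted_improvement(0.40, 0.50, 0.80):.0%}")  # -> 16%
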
Practitioner Tip

When using a questionnaire, think ahead and have an administrative plan that includes effective strategies, timelines, and resources that support data collection. Be creative—consider including healthy competition across departments or facilities to motivate responses.

—Melissa Scherwinski

Measurement Coordinator

Knowledge Check

For each practice exercise below, select the option that you think represents the best formatted and written question. After you are done, check your answers in the appendix.

Practice 1:

Option A:

Following your participation in the program, did you receive the right quantity and quality of resource material?

Option B:

Following your participation in the program, did you receive

1. Quantity of resource material? (yes, no, other—please specify)

2. Quality of resource material? (yes, no, other—please specify)

Practice 2:

Option A:

I think the new call-tracking database is effective (strongly agree to strongly disagree).

Option B:

I think the new, top-of-the-line call-tracking database is effective (strongly agree to strongly disagree).

Practice 3:

Option A:

The facilitator was effective (strongly agree to strongly disagree).

Option B:

The facilitator encouraged participation in discussions during the course (strongly agree to strongly disagree).

Practice 4:

Option A:

As a result of participating in the Process X program, to what extent were you able to use the five processing steps? (completely successful—guidance not needed; somewhat successful—some guidance needed; limited success—guidance needed; no success—not able to do even with guidance; N/A—no opportunity to use)

Option B:

Were you able to successfully use the five process steps? (yes, no)

Practice 5:

Option A:

As a result of the Team-Building Conference, there has been a reduction in silo thinking in the departments (strongly agree to strongly disagree).

Option B:

As a result of the Team-Building Conference, the departments are exchanging ideas and best practices (strongly agree to strongly disagree).

About the Author

As director of consulting services with the ROI Institute, Caroline Hubble, CPLP, CRP, facilitates various courses on the ROI Methodology and provides expert coaching to individuals working toward ROI certification. Hubble’s professional background includes financial industry experience, where she managed training evaluation, analytics, and operations for business line and enterprise-wide training departments. She has successfully designed and implemented evaluation and reporting strategies for various complex programs. Her operational, project, and relationship management expertise is noted for significantly contributing to improved business practices.

Hubble holds a BA in psychology from Rollins College and is a Certified ROI Practitioner. She received her ASTD Certified Professional in Learning and Performance (CPLP) credentials in 2006. She can be reached at [email protected].

References

Fink, A. (2003). How to Ask Survey Questions. Thousand Oaks, CA: Sage Publications.

Phillips, P. P. and J. J. Phillips. (2005). Return on Investment Basics. Alexandria, VA: ASTD.

Phillips, P. P. and J. J. Phillips. (2007). Show Me the Money. San Francisco: Berrett-Koehler.

Phillips, P. P. and C. A. Stawarski. (2008). Data Collection (Measurement and Evaluation Series, Book 2). San Francisco: Pfeiffer.

Watson, J. (2001). How to Determine a Sample Size: Tipsheet 60. University Park, PA: Penn State Cooperative Extension.

Additional Reading

Alreck, P. L., and R. B. Settle. (1995). The Survey Research Handbook: Guidelines and Strategies for Conducting a Survey, 2nd ed. New York: McGraw-Hill.

Fink, A. (2003). The Survey Handbook, 2nd ed. Thousand Oaks, CA: Sage Publications.

Walonick, D. S. (2004). Survival Statistics. Bloomington, MN: StatPac, Inc.
