CHAPTER 12

Using Evidence to Assess Performance Gaps

Ingrid Guerra-López

Talent development professionals add measurable value to their clients and organizations when using a systems-oriented framework for assessing needs and opportunities. The framework can help clarify what questions should be asked to collect relevant data that defines the problem and appropriate solutions, and in turn, contributes to human and organizational performance improvement.

IN THIS CHAPTER:

  Discuss the importance of a systems approach in assessing needs and selecting solutions

  Describe a strategic alignment framework for ensuring needs and solutions are aligned to measurable value

  Examine key considerations and methods for collecting relevant and useful evidence

What would your stakeholders consider to be a valuable use of learning and talent development initiatives? What concrete organizational returns and benefits has your organization received for its investment in a recent talent development initiative?

To maximize worthy accomplishments, these questions must be answered before solutions are selected, rather than after they are implemented. Much of the learning and talent development literature starts with a predetermined solution mindset, particularly a training mindset, and assumes that positive results will follow. Beginning with a solution in search of a problem can take us down a dangerous path. Conversely, if we use performance data to inform the selection of solutions and actions, we have a much better chance of measurably contributing to organizational success and justifying our resource spending.

We fail more often because we solve the wrong problem than because we get the wrong solution to the right problem.
—Russell Ackoff, management science pioneer

The most successful talent development professionals view their roles as much more than mere deliverers of training and learning products. They work to nurture strong partnerships with managers and other organizational stakeholders to support human performance that is aligned with organizational priorities. To this end, they generate, share, and use timely and relevant performance data to support decision making and action.

A Systemic Approach to Assessing Needs and Improving Performance

A needs assessment has the greatest utility and impact when we take a systems approach because it provides a holistic view of reality, helps distinguish assumptions from facts, validates evidenced-based needs, and reduces the risk of wasting precious resources on solutions (particularly training) that will not address underlying issues or get us much closer to expected outcomes. Therefore, a performance improvement mindset requires a systems approach to assessing needs.

Assessment and Analysis in Human Performance Systems

The term needs assessment is often used interchangeably with other terms such as performance assessment, front-end assessment, performance analysis, and diagnosis. Fundamentally, a needs assessment process provides a framework for measuring gaps in results and generating the performance data you require to make sound decisions about how to close these gaps. It starts with asking the right questions so you can align the right solutions to the right problems and devise a plan for effectively implementing those solutions.

While it’s necessary to define needs in terms of a results gap to solve a performance problem, it is not sufficient. In addition to defining the performance problem, we must also understand why these gaps exist. Organizational solutions must be thoughtfully aligned to the factors driving the problem you want to solve. To this end, we employ a causal analysis to break down a performance gap into its component parts and identify interrelated root causes that are driving or sustaining the problem. It’s important to note that the overwhelming majority of performance problems are a product of circular patterns of events at the organizational level. For example, your data indicate a dip in sales for the sales team, so you quickly jump into solution mode and tackle the problem in two key ways: First, you retrain the sales team to make sure it has the best knowledge in the industry, and second, you enhance the incentives for meeting sales targets. However, you still don’t see a noticeable improvement in sales.

Why not? Well, perhaps you never asked the “why” question. You should have done that immediately upon noticing the decreased sales figures. Asking why may have revealed that the trained staff do not have a chance to apply the “best industry knowledge” if their supervisors are communicating conflicting expectations, are not providing relevant feedback and coaching on the job, or are discouraging the sales team from applying those techniques once it is back in front of the customer.

In addition to causal analysis, talent development professionals are also engaged in conducting other types of analyses, particularly if we already have a compelling body of evidence to conclude that we have skills and knowledge gaps that are best addressed through training. These include audience analysis, task analysis, and environmental analysis:

•  Audience analysis helps us better understand who our target learners are by collecting data related to relevant characteristics. These can include specific demographics and background information such as relevant prior knowledge and skills, work experience, and other characteristics that should be factored into the training design and delivery.

•  Task analysis helps us clarify what trainees should be able to do. Concentrating on doing rather than knowing will help us focus the training activities and content on what is directly related to performance requirements, rather than what might be nice to know but not essential. Training is much more effective when it has direct relevance to a trainee’s work requirements, and when this relevance is made explicit. Tasks are concrete activities that make up the duties of a given job role; in a task analysis, each one must be considered individually. The essential steps in task analysis include clearly defining the task that must be performed, breaking it down into component subtasks, and breaking each component subtask into a clear, chronological, step-by-step process (see the sketch after this list). While it is important not to make unfounded assumptions about what might be obvious to the learners, the trick is finding balance and providing just the right level of detail. Direct observation and expert interviews using think-aloud protocols are particularly helpful data collection methods for task analysis.

•  Environmental analysis helps us understand the learner’s actual performance context (that is, the work setting) so we can design an instructional environment that resembles the performance context as much as possible. Various learning theories and sound instructional design practices support the importance of performance cues for helping individuals learn and perform effectively. Understanding performance cues is also critical for enhancing the transfer of training to the performance context. For example, if the work setting requires the trainee to perform a given task with the use of specific tools and under time constraints, the instructional activities included in the training should provide the same conditions. Environmental analysis can also be used to better understand the learning environment of the target audience, including what resources might be available to them during the training, the timing, their preferred training modality, and instructional strategies.
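
To make the task analysis structure concrete, the following is a minimal sketch in Python of how a task, its component subtasks, and their chronological steps might be captured for review with subject matter experts. The task and steps shown are hypothetical illustrations, not examples drawn from this chapter.

```python
# A minimal sketch (hypothetical content) of a task analysis breakdown:
# the task, its component subtasks, and the chronological steps within
# each subtask, as described in the bullet above.
task_analysis = {
    "task": "Process a customer product return",
    "subtasks": [
        {
            "name": "Verify the return request",
            "steps": [
                "Locate the original order in the order system",
                "Confirm the item is within the return window",
                "Confirm the item condition against the return policy",
            ],
        },
        {
            "name": "Issue the refund",
            "steps": [
                "Select the original payment method",
                "Enter the approved refund amount",
                "Send the confirmation notice to the customer",
            ],
        },
    ],
}

# Print the breakdown in outline form for review with subject matter experts
print(task_analysis["task"])
for subtask in task_analysis["subtasks"]:
    print(f"  Subtask: {subtask['name']}")
    for number, step in enumerate(subtask["steps"], start=1):
        print(f"    {number}. {step}")
```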

In summary, addressing skills and performance gaps requires us to understand the system, which is made up of interrelated factors and dynamics that create and sustain those recurring issues. A systems approach to needs assessment allows us to clearly define the outcomes that the system should deliver, the root causes or barriers that are getting in the way of achieving those outcomes, and the requirements that must be met by the solutions. This, in turn, gives us a strong foundation with which to judge the appropriateness of proposed solutions (Guerra-López 2018, 2021; Guerra-López and Hicks 2017; Kaufman and Guerra-López 2013).

The Strategic Alignment Process

The strategic alignment process integrates all these essential elements and considerations into a structured yet flexible process for ensuring that your talent development efforts clearly align to organizational priorities and generate useful feedback for decision making as well as hard evidence of your contributions to the organization’s success. Therefore, the process rests on a performance measurement backbone and connects needs assessments to other evidence-generating processes such as analysis, monitoring, and evaluation. It offers a pragmatic way to establish effective partnerships with your stakeholders through a series of key questions and activities that help ensure you have the information required to make the best decisions possible.

The strategic alignment process comprises four phases, which are all equally important and require specific outputs to successfully complete the other stages (Figure 12-1; Guerra-López and Hicks 2017).

Figure 12-1. The Strategic Alignment Process

Aligning Expectations

The initial phase, aligning expectations, helps us gain an understanding of the expectations, wants, and perceived performance needs from various perspectives. Stakeholders include the person who made the original request as well as those who will influence or be affected by the selected solutions, which could include top leadership, frontline supervisors, staff, or other relevant functional unit representatives. With a calibration of these perspectives, you will gain an understanding of what is or will be driving stakeholder decision making, assumptions, and satisfaction. In a sense, this step represents the beginning of creating and managing organizational change because it engages people in the process and, in turn, their views focus the improvement efforts. This helps generate a comprehensive picture of the issues and some of the factors that may affect the initiative’s success.

Aligning Results

The aligning results phase helps identify measurable gaps in various levels of organizational results. Here, we work with stakeholders to translate their wants and expectations into the current and desired levels of results in skills, performance, value-added contribution to clients and community, and other strategic consequences that affect organizational sustainability. Many find it challenging to articulate their wants in terms of specific and measurable performance results, so we play an instrumental role in aligning their wants with valuable results. This is the foundation of the measurement framework and provides the focus of our data collection through relevant performance indicators. Data is collected to determine the critical gaps between current and desired results. These priority gaps are the foundation for further analysis, recommendation of solutions, and implementation plans.

Aligning Solutions

In the aligning solutions phase, we focus on the deliberate analysis of priority gaps. Now that we have defined the important problems to solve, it is critical to understand each one. What are the contributing factors? How do the contributing factors affect each other? What elements of the environment are perpetuating recurrent patterns? The answers will lead to a thorough understanding of the concrete changes our potential solutions should deliver.

The process of identifying alternative solutions should be collaborative and include input from stakeholders, beginning with the identification of relevant and useful criteria for selecting solutions. This helps ensure that the process is not only driven by evidence but also informed by the organization’s culture and resources. The alternatives are then reviewed, and the solutions most likely to offer the best payoffs in the most resource-efficient ways are selected.

Aligning Implementation

The aligning implementation phase deals with the critical success factors necessary to implement the proposed organizational improvement initiatives and ensure their successful execution, integration, and sustainability. Implementation leverages specific strategies for driving the transfer of results from training and development contexts to the performance environment. It’s also important to consider thoughtful change management strategies that include defining who needs to be informed about what, when, and how, as well as how to gain useful input about other issues. In addition, we should define mobilization strategies that must be aligned to effectively implement our improvement initiatives. For example, should a core group or change coalition be formed to support the change? If so, who will be involved and in what ways? Mobilization strategies may also include defining implications for job descriptions, feedback mechanisms, performance evaluation, and process redesigns. Finally, a clear monitoring plan to track the progress of improvement initiatives must be articulated, including what data to track, how frequently to collect data, who should use it and when, and how to use data for corrective or improvement actions.

To help you identify objectives and activities for the alignment process, download the tool available on the handbook website at ATDHandbook3.org.

Collecting Relevant and Useful Evidence

A central premise of assessment is that we use relevant evidence to define needs and select appropriate solutions. Unfortunately, a common mistake is to force connections between easy-to-collect data, or data that the organization has already captured, and our definition of needs. In other words, people look at the available data and then ask questions they can answer with it. When this happens, they overlook important questions that do not naturally stem from the data—questions that they should ask and answer, but currently lack the data to do so. There is absolutely nothing wrong with using data that is already available (in fact, this can save time and other resources) if, first and foremost, it is relevant for answering the assessment questions.

Likewise, the data collection methods we use must be relevant for the type of data we seek. Data can mean any documented record of something—an event, a performance, a result—that took place during the period of interest for the assessment. Which data we collect is typically driven by our selection of relevant indicators for the results or phenomena we want to measure to answer our assessment questions. Examples of indicators could include account retention rates, turnover rates, net promoter scores, customer support tickets, employee satisfaction, salary competitiveness ratio, revenue growth, revenue per client, and customer lifetime value.
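
As a simple illustration, the following Python sketch computes a few of the indicators listed above from hypothetical figures. The formulas shown are common conventions and should be adapted to however your organization defines each indicator.

```python
# A minimal sketch (not from the chapter) showing how a few of the
# indicators named above might be computed from hypothetical figures.

def turnover_rate(separations: int, avg_headcount: float) -> float:
    """Separations during the period divided by average headcount."""
    return separations / avg_headcount

def account_retention_rate(accounts_start: int, accounts_end: int,
                           new_accounts: int) -> float:
    """Share of starting accounts still active at the end of the period."""
    return (accounts_end - new_accounts) / accounts_start

def revenue_per_client(total_revenue: float, active_clients: int) -> float:
    """Average revenue generated per active client."""
    return total_revenue / active_clients

if __name__ == "__main__":
    # Hypothetical quarterly figures
    print(f"Turnover rate: {turnover_rate(12, 240):.1%}")
    print(f"Account retention: {account_retention_rate(400, 430, 50):.1%}")
    print(f"Revenue per client: ${revenue_per_client(1_950_000, 430):,.0f}")
```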

However, not all data carries the same weight in reaching conclusions, and some data may be misleading due to bias. Sound decisions are directly related to the appropriateness and quality of the data used to make them. Thus, data must meet four critical characteristics:

•  Relevant. Data is directly related to the assessment questions (overarching and specific) that must be answered to clearly define and address important problems.

•  Reliable. Data is rigorously measured, trustworthy, and consistent across various types of observations.

•  Valid. Data truly indicates or relates to the results we want to measure; it measures what we say it measures.

•  Complete. When systematically collected, analyzed, and synthesized, the data helps us generate an accurate and holistic view of reality.

Two related and essential terms that refer to both data and the techniques used to collect it are qualitative and quantitative. Qualitative techniques require careful and detailed observation and description, expressed through narrative rather than figures. Some appropriate ways to collect this type of data are observations, interviews, focus groups, open-ended questions on surveys, and reviews of existing documents.

Quantitative techniques are used to establish facts numerically, based on independently verifiable observations. Methods commonly used to collect quantitative data include Likert scale surveys and other validated scales, as well as a review of secondary data sources that could include a wide range of automated performance figures and statistics.

The distinction between qualitative and quantitative is not an either/or proposition. Typically, we gain a much better understanding of needs and problems when we use a mixed methods approach. For example, we may start with decreased employee engagement survey scores (quantitative), and subsequently use focus groups to help collect rich, in-depth narratives about employee experiences (qualitative). These provide a more complete picture of interrelated organizational climate issues and stronger evidence for making decisions about how to address those issues (or at a minimum, areas that require further inquiry and evidence).

Identifying Data Sources

Carefully considering where or from whom we collect data is a critical part of developing a useful data collection plan. This helps improve access to the data, ensure the appropriateness of data collection tools, and prevent unnecessary data gaps on the back end. Ongoing technology innovations are continuously improving access to data and its timeliness, both within and outside the organization, by linking reports, databases, experts, and other sources. As much as feasible, it is important to triangulate various sources to increase confidence in the data and in the subsequent conclusions and recommendations.

Selecting Data Collection Methods

The quality of the evidence we collect reflects the appropriateness and quality of our data collection methods. The type of data we seek and the sources we plan to use will inform the type of data collection methods we use. A common mistake is picking a data collection tool (like a survey) simply because it’s familiar or what was used before. The data collection method should be chosen based on the function we want it to perform. For example, if you want to measure error rate across various sites or teams, you don’t need a survey to collect attitudes about error rates. Instead you want to review quantitative data that is likely already being generated by automated performance reports. If you seek a deep understanding of low employee engagement survey scores (quantitative), you may want to select a data collection method that renders rich, in-depth qualitative data, such as focus groups or interviews. Many other resources provide detailed descriptions and steps for deploying data collection methods, so this chapter will not describe them at length. You will find a tool on the handbook website at ATDHandbook3.org that provides deployment tips to maximize the utility of several data collection methods, including observation, interviews, surveys, focus groups, and data reviews.

Data Analysis

Both qualitative and quantitative data are subject to rigorous analysis. Analysis involves organizing, summarizing, reviewing for quality, and synthesizing data to discover patterns or relationships, strengthen interpretations, and support conclusions and recommendations. Just as the type of data we want plays a major role in selecting data sources and collection methods, it also influences the type of data analysis techniques we select. Quantitative analysis techniques can be further subdivided into descriptive and inferential statistics. Common descriptive statistics include measures of central tendency such as the mean (average), mode (most frequent), or median (the middle) scores or responses, as well as measures of data variability such as the range of scores, standard deviation, or variance. Frequencies and percentages are also commonly used to represent quantitative data. Inferential statistics uses data from a random sample to estimate the characteristics of the larger population the sample represents, allowing you to draw conclusions about that larger group based on the sample results.
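
The following Python sketch illustrates the distinction using only the standard library: the descriptive statistics summarize a hypothetical sample of error counts, and the approximate confidence interval is one simple inferential technique for estimating the population mean from that sample.

```python
import statistics

# Hypothetical sample: error counts from 12 randomly selected service teams
sample = [4, 7, 5, 9, 6, 5, 8, 4, 6, 5, 5, 6]

# Descriptive statistics: summarize the sample itself
mean = statistics.mean(sample)
median = statistics.median(sample)
mode = statistics.mode(sample)
data_range = max(sample) - min(sample)
stdev = statistics.stdev(sample)      # sample standard deviation
variance = statistics.variance(sample)

# Inferential statistics: estimate the mean for all teams (the population)
# from this sample, here as an approximate 95% confidence interval.
n = len(sample)
margin = 1.96 * stdev / n ** 0.5      # normal approximation
ci_low, ci_high = mean - margin, mean + margin

print(f"mean={mean:.2f} median={median} mode={mode} range={data_range}")
print(f"stdev={stdev:.2f} variance={variance:.2f}")
print(f"approx. 95% CI for population mean: [{ci_low:.2f}, {ci_high:.2f}]")
```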

Qualitative analysis can also be divided into major approaches: deductive and inductive. A deductive approach is based on a predetermined set of categories or domains selected by the assessor, which can make the analysis process quicker and easier. This is a feasible approach when we know enough about the subject matter to define logical categories of information that we can use to identify themes and patterns from the data. Conversely, an inductive approach can be more time consuming and is probably the best option when little is known about the subject matter and we have to take a more exploratory approach to identify themes. With this approach, we code and organize information around major emerging themes, and likely further subcategorize it into more specific themes.
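
As a simplified illustration of the deductive approach, the following Python sketch tags open-ended responses against a predetermined coding frame. Real qualitative coding is interpretive work done by analysts, often with dedicated software; the keyword matching here is only meant to show the logic of working from fixed categories, and the categories and responses are hypothetical.

```python
# A simplified sketch of deductive coding: predetermined categories
# (chosen by the assessor) are used to tag open-ended responses.
CATEGORIES = {  # hypothetical coding frame
    "supervisor feedback": ["feedback", "coaching", "manager", "supervisor"],
    "conflicting expectations": ["conflicting", "unclear", "mixed messages"],
    "tools and resources": ["tools", "system", "resources", "equipment"],
}

responses = [  # hypothetical open-ended survey comments
    "My supervisor rarely gives feedback after client calls.",
    "We get mixed messages about which targets matter most.",
    "The CRM system is too slow to use in front of a customer.",
]

# Count how many responses touch on each predetermined category
counts = {category: 0 for category in CATEGORIES}
for response in responses:
    text = response.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            counts[category] += 1

for category, count in counts.items():
    print(f"{category}: mentioned in {count} of {len(responses)} responses")
```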

Data Collection and Analysis Planning

One practical way to build your methodological plan is by using a data collection and analysis planning matrix, which you can find on the handbook’s website, ATDHandbook3.org. Use the outputs generated during the initial phase (align expectations) to list each overarching needs assessment question (first column); then for each assessment question, work with stakeholders to gain consensus on the indicators to measure (the data you will collect). For each indicator, identify the data source and methods you will use to collect the data, as well as how you plan to analyze the data you collect. For larger or more comprehensive needs assessment projects, you might also consider adding two additional columns to define the timeline for collecting the data and the parties responsible for deploying the data collection methods.
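
As an illustration, here is a minimal Python sketch of one row of such a planning matrix, including the two optional columns. The question, indicators, and other entries are hypothetical placeholders rather than content from the tool on the handbook website.

```python
# A minimal sketch (hypothetical content) of the planning matrix described
# above, captured as one row per overarching assessment question.
planning_matrix = [
    {
        "assessment_question": "Why have regional sales declined over the past two quarters?",
        "indicators": ["revenue per client", "sales conversion rate"],
        "data_sources": ["CRM reports", "regional sales managers"],
        "collection_methods": ["data review", "interviews"],
        "analysis": ["descriptive statistics", "deductive coding of interview notes"],
        # Optional columns for larger projects:
        "timeline": "Weeks 1-3",
        "responsible": "TD analyst",
    },
]

# Print the plan in a readable form for stakeholder review
for row in planning_matrix:
    for column, value in row.items():
        print(f"{column:>22}: {value}")
```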

Note that data collection in the context of performance improvement typically requires multiple rounds, with initial collection and analysis providing answers to initial assessment questions, as well as generating additional questions (which are typically related to why and how for gaps) for additional data collection and interpretation. These questions should be answered before you can prepare a report with well-supported conclusions and actionable recommendations.

The importance of effective communication cannot be overstated and should occur throughout the needs assessment process. Keeping key assessment stakeholders engaged throughout the process promotes transfer of ownership of the assessment findings and recommended actions. In addition to ongoing communication, a needs assessment report is a common way to share the assessment results. The report should be clearly aligned to stakeholder expectations and the decision-making needs to maximize the use of its findings and recommended solutions. It often includes an executive summary, an introduction, a description of methods, findings, conclusions, and recommendations. The executive summary is a good way to communicate key takeaways for leadership and should include essential highlights of the initial situation, opportunity, or presenting symptoms; aims of the assessment; findings and conclusions; and concrete solutions.

Oral presentations are another typical deliverable. As with any presentation, it is important to understand the audience in order to communicate effectively. Stories can be a powerful way to convey key issues and bring the data to life. The presenter should have a thorough understanding of the needs assessment process, the findings, and the recommended solutions; they should also be prepared to effectively address questions. The presenter’s perceived credibility can influence the perceptions of the needs assessment’s findings and recommendations.

Final Thoughts

It is important to reiterate that needs assessment plays a foundational role in the performance improvement process. Therefore, articulating concrete considerations for implementing recommendations, as well as suggesting which stakeholders or partnerships are best suited to support specific elements of the solutions, will also improve success. Having a clear plan in place helps ensure you achieve the results you require.

About the Author

Ingrid Guerra-López is a professor of learning design and technology and interim dean of Wayne State University’s College of Education. She has held numerous leadership roles in a variety of prominent groups and organizations, including the International Society for Performance Improvement (ISPI) board of directors, editor in chief of the peer-reviewed journal Performance Improvement Quarterly, chair of ISPI’s research committee, and various other key committees and task forces that set standards and future direction for the instructional design and performance improvement field. Ingrid has led major educational and institutional effectiveness initiatives for international development agencies, government, education, and private organizations, including strategic planning efforts, educational and workforce needs assessments, program design and development, and program evaluation and quality assurance projects. In this capacity, she has led and mentored diverse groups of students, work teams, and institutional leaders in more than 40 countries. Ingrid may be reached at [email protected].

References

Guerra-López, I. 2018. “Ensuring Measurable Strategic Alignment to External Clients and Society.” Performance Improvement Journal 57(6): 33–40.

Guerra-López, I. 2021. “An Ounce of Good Assessment Is Worth a Pound of Analysis and a Ton of Cure: Revisiting Seminal Insights in Performance Improvement.” Performance Improvement Journal 60(1): 26–30.

Guerra-López, I., and K. Hicks. 2017. Partner for Performance: Strategically Aligning Learning and Development. Alexandria, VA: ATD Press.

Kaufman, R., and I. Guerra-López. 2013. Needs Assessment for Organizational Success. Alexandria, VA: Association for Talent Development.

Recommended Resources

Dearborn, J. 2015. Data Driven: How Performance Analytics Delivers Extraordinary Sales Results. New York: Wiley.

Evergreen, S. 2020. Effective Data Visualization: The Right Chart for the Right Data, 2nd ed. Thousand Oaks, CA: Sage Publications.

Guerra-López, I., and A. Hutchinson. 2013. “Measurable and Continuous Performance Improvement: The Development of a Performance Measurement, Management, and Improvement System.” Performance Improvement Quarterly 26(2).

Guerra-López, I., and A. Hutchinson. 2017. “Stakeholder-Driven Learning Analysis: A Case Study.” Journal of Applied Instructional Design.

Phillips, P.P., and J.J. Phillips. Measuring ROI in Learning & Development. Alexandria, VA: ASTD Press.
