William J. Rothwell

On how it all started…

I started my career 30 years ago, and I started as a practitioner, not as an academic. I was first a training director, delivering many training programs in state government in Illinois. After that, I was heavily involved in the insurance industry, setting up a training department from scratch at my company and looking at everything from executive development to skill and technical training for hourly workers in an insurance environment. I have also done a number of consulting projects on training evaluation. I won’t drop names, but one state government’s entire system of community colleges came to me and asked for consulting assistance in setting up a uniform approach to training evaluation for community colleges that train to support local businesses and their economic and workforce development efforts. Of course, some years ago I published a book with ASTD, The Role of the Evaluator, which looked at the field of learning and performance and at the evaluator’s role, among many other roles, in establishing a framework for properly measuring and evaluating the impact of training and other kinds of interventions aimed at learning or performance improvement. I guess I have had quite a lot of involvement with training evaluation over the years: teaching graduate courses, teaching public workshops on training evaluation, and conducting workshops on other kinds of evaluation such as organization development evaluation or evaluating performance improvement efforts.

On how training evaluation has changed over the years…

Generally speaking, practitioners in our field, learning and performance, have become more sensitized over the years to the need to demonstrate the results of what they do. That is particularly difficult in our field because when we teach people new things, many factors back on the job can affect whether they apply what they learned. Of course, we know that people say only about 8 percent of off-the-job training transfers back to the job in changed behavior. When we look at some of the reasons for that, we see that short-term memory theory plays a part: we forget about 80 percent of what we have heard within 48 hours. Co-workers or supervisors who did not attend the training are not well positioned to support what the trainee has learned back on the job. Over the years, I have come to some conclusions about training evaluation. One is that we have become more sensitized to the relationship between evaluation and needs assessment. We tend to think of needs assessment as something that specifies the needs to be met by training. When needs assessment is not done properly, or management mandates training without being totally sure that training is the best way to solve the problem, the training doesn’t work. And so this is one of the things I have come to realize over the years: training needs assessment and training evaluation go hand in hand, and, many times, requests for return-on-investment information or other things are merely a symptom that the needs assessment was not done properly.

On the progress the profession has made in embracing evaluation…

One of the dilemmas we always face in our field is that we really have two groups of people. One consists of professionals in the field who get a degree, or who become ASTD certified through the Certified Professional in Learning and Performance credential, or both. Those people tend to regard the training, learning, and performance field as their career, and they stick with it, perhaps for their entire lives. There is another group of people, usually a larger group, who are promoted from within the organization, hold short-term stints in the training function, and then go back out into the line organization or into other capacities. Generally speaking, I think the professionals in this field have become much more sensitized to the need to demonstrate the value of what they do. But some of the “promoted from within” people, the very large number who come into and leave the field every year, tend to drag down the average in terms of awareness about how to do training evaluation, how to collect data, how to convince decision makers that those data are accurate, and how to eventually demonstrate results.

What I do not see is the commitment of, or the willingness to commit, the staff or the resources necessary to collect those data. Some years ago, in the mid 1990s, I did a small-scale survey of practitioners, and I asked them several questions about evaluation. One of the questions was this: “When do the decision makers most often ask you for evaluation information: before you deliver a training effort, during the delivery of the training effort, or following the delivery of the training effort?” Which would you guess they said most often? The last, of course: after the training was delivered. I believe that is too late. We are much better advised to establish the metrics before we make the investment in the training and get agreement with our decision makers about what metrics we will use to measure the success, or relative success, of the venture. We get them to buy in, and we do that as part of the needs assessment or performance analysis process. And if we can get the “jury” to agree on the metrics to use, then it will be very difficult for them to change their minds later; not impossible, but more difficult. So then at least we have a target to shoot for, and we know the grading criteria by which our efforts are going to be evaluated.

I believe that canny practitioners have probably figured this out for themselves, and many of them have been doing this already, routinely. They have been collecting those metrics even during the initial interviews and the initial needs assessment, so that they have a basis for tracking achievements during and after the training is delivered.

On how executives view learning and development and investment in it...

We should never forget that the word evaluate contains within it the word value, and values, I believe, are what this is all about. Why is it that decision makers always question the return-on-investment or the impact of training, but we rarely hear the same issues come up for accounting ventures or for large computer systems? Sometimes those are taken at face value as being worth it. So, I think at the base, one of the issues we are talking about is this: What does management really value? Does the human side of the enterprise command the same level of management support that technology does, or that financial services does, or that marketing does? I really wonder about that. Over the years, I have often wondered why we rarely hear people ask, “What is the impact of our executive bonus plan on achieving business results?” Lately we have heard that question come up after the financial crisis. But before that, people rarely questioned the need for bonuses. So, you see what I am saying—what do we really believe is important? What is worth measuring, and why do we hold one type of activity, like training, to one standard, but sometimes other activities are not held to the same standard?

I worry that people think that evaluation information alone is all they need, without realizing that there is a political element to evaluation. Political not in the sense of political parties, but in the sense of organizational politics, where it is one thing to collect data and another thing to convince decision makers. I think focusing on how to convince decision makers, and getting their involvement, is really key in the whole evaluation arena. If we can pinpoint what a problem is costing us before we make an investment in training, one of many kinds of solutions, then I think we are headed in the right direction. So, forecasting benefits means getting clear on the metrics, getting buy-in from the decision makers, getting them to agree those are appropriate metrics, and then tracking accordingly.
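
To make that forecasting idea concrete, here is a minimal sketch in Python of the kind of before-the-fact arithmetic being described. The figures, the assumed 40 percent reduction, and the simple first-year ROI formula are illustrative assumptions for this sketch, not numbers or a method taken from the interview.

```python
# Illustrative sketch only: hypothetical figures showing "pinpoint what the
# problem is costing us, agree on the metrics, then track against them."

# Metrics that would be agreed with decision makers during needs assessment
annual_cost_of_problem = 250_000   # e.g., rework, errors, turnover tied to the skill gap
expected_reduction = 0.40          # fraction of that cost the training is forecast to remove
training_cost = 60_000             # design, delivery, and participants' time

forecast_annual_benefit = annual_cost_of_problem * expected_reduction
net_benefit = forecast_annual_benefit - training_cost
roi_percent = net_benefit / training_cost * 100  # simple first-year ROI

print(f"Forecast annual benefit: ${forecast_annual_benefit:,.0f}")
print(f"Net benefit after training cost: ${net_benefit:,.0f}")
print(f"Forecast first-year ROI: {roi_percent:.0f}%")
```

The point of the sketch is not the formula itself but the sequencing: the cost of the problem, the expected reduction, and the threshold for success are all agreed with the decision makers during needs assessment, before any training is delivered.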

On how evaluation makes a difference in the perception held by executives of training and development…

A better way is to think of training evaluation as akin to a legal problem, in terms of the way lawyers think about convincing a jury. Now, there is a difference between evidence and proof. Evidence is something we give people to sway them to believe something. And so a trial attorney would place convincing evidence in front of a jury. The jury is the “trier of fact”: its members decide whether the evidence has made the case or not, and whether someone is guilty or not guilty. The same is true in our field. Regard the jury as all of the key stakeholders: senior managers, middle managers, even learners who are participating in the effort, other trainers in the training divisions, and even customers and other stakeholders. If we ask the question, “What evidence would it take to convince them that our training had an impact?” then I think we are thinking along the right lines. Remember that no matter what level of evidence we give them, if they have already made up their minds, we will never change them. We could have a foolproof mathematical formula and a foolproof research design to show business results, but if the decision makers did not accept it, the point is not proven. And we have a luxury that trial attorneys do not have: we can ask the jury, in advance of the intervention, what evidence it will take to convince them that there was an impact.

On why we still see such a low investment in training measurement and evaluation within organizations and how can we facilitate more investment in the future…

It goes back to a topic that I discussed in one of my books, Beyond Training and Development. Like “Murphy’s Law” (you have heard of that: if something can go wrong, it will go wrong), there is something called “Rothwell’s Theory of Visible Activity,” which states that management only values what they see us do directly. They see us perform things like classroom delivery or online instruction, and they equate that activity with meaningful results. But “back-office” activities, like the needs assessment that is so important to scoping the training, the performance analysis that determines whether training is even a suitable solution to the problem, and the evaluation efforts that may take place before, during, or after the training, tend not to be as visible as instructional delivery efforts.

I believe we have to overcome one issue. We have heard the term used in the quality movement: “the cost of quality.” There is also a cost of evaluation. If we are an understaffed training function or learning and performance function, where are we likely to get the greatest payoff: presenting visible activity, or evaluating the results of what we have done? This goes back to the cost of evaluation: until we feel it is important enough, we will not make the investment in collecting the data and learning what the decision makers want to know. The first question to ask in evaluation is always “Who wants to know?” The second question is “What are they going to do with the information once we find it out? What decisions will they make?” Different groups are going to make different decisions. If we hand evaluation data back to our training instructors, which is one group that often does receive them, the expectation is that they will improve the next delivery. But if we present those data to senior executives, what do we expect them to do? I would guess either continue to fund or increase the funding for the training function as a result of being satisfied with the results achieved.

Some years ago, I wrote a book, What CEOs Expect from Corporate Training. In that book, we published quotations from more than 80 CEOs we interviewed about the competencies of the training professional. And when we asked them about evaluation, a number of CEOs expressed some skepticism about trusting evaluation information gathered by the same people who were responsible for making the change. In other words, one CEO said that makes about as much sense as trusting an accountant to do his or her own audit. In short, they were saying that they found data collected by training and development professionals about interventions that they themselves had done, or had been involved with, to be suspect. And it came across to some senior leaders as a “cover your butt” activity. I think we have to be sensitive to the fact that not all stakeholders look at us with complete trust.

On what the future holds for measurement and evaluation…

I have seen the field moving increasingly toward the “gee-whiz gizmos.” I did a literature study within the last year, and I found that the greatest number of articles were about the unveiling and use of new technology and delivery: Second Life and wikis and all of that. So there seems to be great interest in all of these exciting new delivery modalities. But unless we stay focused on how the training helps us achieve business objectives, and get very clear at every step about tying it back to that, I believe it will be a problem for the field longer term.

I just think people are excited about new delivery options that may increase the interest and the motivation level of current or future generations in the workplace—who may be better attuned to certain delivery options than other generations might be. But I am just pleading for people to keep their eye on the ball. The “eye on the ball” means this: How do we get our efforts aligned with business results? How do we make our contributions more visible and make decision makers more aware of those effects? I have nothing against new media; I think that they are exciting. I am simply saying they can distract us from keeping our eyes on the real ball, which is helping the people in the organization get results.

I would like to say that the roles of the business manager, the learning professional, the organization development professional, and the performance consultant are to some extent converging, and I think it would be a desirable thing if they continued to do so. Unfortunately, many educational institutions and other places that teach future business leaders still do not adequately emphasize the human side of the business. That is all the more surprising because every business observer and pundit says that the future rests with innovation: the ability to think creatively and outsmart competitors. And yet, if all we can think of is meeting today’s balance-sheet numbers, without thinking about other things that are not as easily made tangible, like investments in people, and if people and their innovation are what is key to the business, then I think we face a problem long term.

The organization teaches newcomers what it has learned from its experience in order to preserve institutional memory. But that is past oriented, and I believe what we are starting to see is more use of face-to-face opportunities, few though they may be, particularly in industries that do not have an R&D function, like the service firms that, as you know, are so important in this economy. We should use group settings for what they should be used for: to generate new ideas. If we understand training as simply a way to get people to overcome deficiencies and get them up to some standard, then that is kind of past oriented. An online venue makes it fairly easy to convey information, and some of that information may well have come from our organizations’ past experience. But if we look at group settings as something different, as an opportunity to generate new knowledge and to make the training, which may now be misnamed, essentially an R&D function for service firms where we can pull people together and generate new ideas, then I think that speaks directly to this future of helping to facilitate innovation and creative thinking. So I would like to say that I see training splitting: a lot of the online and other virtual forms taking over the old, traditional role of training, helping people overcome deficiencies, meet requirements, get up to a standard for their current level and stay there as technology changes, and prepare for the next level, while group settings are more frequently used to generate new knowledge.

About William J. Rothwell

William J. Rothwell, PhD, professor of Human Resource Development at Pennsylvania State University, oversees a graduate program in Human Resource Development and Employee Training; teaches graduate courses on the full range of performance technology issues; directs research projects; and consults with organizations in business, industry, government, and nonprofit sectors. He has consulted widely on succession planning and management. A prolific writer, he is author or coauthor of numerous books devoted to training and performance management issues.
