Jack J. Phillips

On how it all started…

It all started in about 1970. I was on the training staff of Lockheed Aircraft, which is now Lockheed Martin, and I was disappointed with what we were doing with evaluation. So I complained about it, and our training director at that time asked me to chair a task force to make it better. And that is what often happens when you complain. I started looking around at what was being done. We were doing some evaluation at the reaction and learning levels at that point—testing, as we called it—and we started looking at what others were doing. At that time, I contacted Don Kirkpatrick, who had published some articles about 10 years before then that described those four steps that are now called levels, and I tried to get more from him: how do you do it, give me some systems, give me something that goes beyond those articles. Don told me that he was not really working on evaluation at that time. He was a professor at a university teaching management courses, and he said, "I'm looking forward to other people taking those four levels and doing something with them." And that's what I did. So we started working with this in a more systematic way, trying to understand how to collect data beyond the classroom and how to deal with it.

I had the good fortune to receive a request to conduct a study. It was a study of a cooperative education program, in which engineering students alternated work and school, and when they graduated, we often hired them. We had 350 co-op students, so it was a very large budget, and it was on my budget at that time as the co-op director. The chief engineer asked me to show the value of that program. There was a lot of discussion around it, but basically he wanted to see the value it was bringing to the company, up to and including "show me the ROI." That was in 1972, if you can imagine. So even in those early days we were getting some requests for that kind of data. I worked on the study as part of a master's thesis, using the statistics I was studying at the time, and I finished it. I thought it was a marvelous study, and it was published in the Journal of Cooperative Education. We were able to show the actual monetary value of the program using a classic experimental-versus-control-group design. But what I noticed wasn't that I had a nice study with some "gee whiz" approaches; what I noticed was the impact it had. I got to keep the program. We actually secured funding. But I think more important is that I got more support for the program. We had problems getting management to support it, and they stepped up and started doing that, because when I presented my study, not only was the chief engineer in the audience, so were the division's engineers. They became my supporters, improving my relationship with a group that was so critical to what I was doing at that time.

And so from that, the journey began. I went to another company as the head of learning and development and continued to work on this. I had a CEO who was interested in seeing the value up to and including ROI. And then in 1983, I published the first book in the United States on training evaluation, titled Handbook of Training Evaluation and Measurement Methods. It was adopted and used all over the world. While I was there, we measured and evaluated not only training and development but human resource functions as well, as I moved into those roles. And then I went into senior executive roles in which I still required and conducted evaluation.

In 1992, we founded the ROI Institute to help others with this. Our mission is to help organizations around the world evaluate their programs—all types of programs. We are now in 52 countries, and we have about 30 books that support the ROI Methodology. Our books are in 38 languages and are moving the methodology to new applications, new cultures, and new countries. We have worked hard to refine a process that delivers bottom-line results and is executive-, professor-, researcher-, and user-friendly. We now count about 4,000 organizations using it, and that number is growing across all types of organizations. So it is very pleasing to see the use and acceptance of what we have created over time.

On how training evaluation has changed over the years…

It certainly has mushroomed as an important part of learning and development. It is more evidence based today than it was before. It is also more quantitative; we started off basically collecting a lot of qualitative data. And in the last decade it has become more financial: identifying the value to an organization in financial terms and getting more data about the contribution of learning. We have come a long way from using "happiness sheets" many, many years ago as our only evaluation to a tremendous focus on evaluating learning and development functions, and we have had a lot of success. So I am pleased to see the evolution and change over the last three or four decades.

On the progress the profession has made in embracing evaluation…

When we have a large expenditure, it comes naturally these days to ask what contribution that expenditure is making. That is the return on investment. We have two clients now, for example, that have more than $1 billion of annual expenditures in learning and development. As you can imagine, with that level of expenditure, you have to think about the return. So I think executives have driven this; of course, they push this same accountability in other functions as well. I also think it is driven in part by business-minded learning and development managers who have asked what value we are delivering. I remember a comment made at a large package delivery company about 10 years ago; at that time the organization had a learning and development budget of more than $600 million. As the group embarked on the ROI journey, the executive who owned that budget said, "To this point, the executives have not asked us to show the value of this $600 million expenditure. But I can't imagine them not asking, and I think it is important for us to do it because we really need to see the value delivered."

On how executives view learning and development and investment in it…

So many of our clients tell us that they earn respect by showing the value. I remember the training manager at Guinness Brewery sending me a note telling me he had conducted an ROI study for one of the major programs and presented it to the CEO. He said the CEO probably knew who he was in the organization but had never had a conversation with him directly. The training manager had a meeting with him to present these data, and it was very positive. The CEO got excited about the data and actually took them to the holding company's quarterly CEO meeting. And the training manager said that since then, the CEO has contacted him, dropped in to see him, and begun to ask his opinion on things. He said that it broke the ice of communicating with and getting support from that top group. I think that is an important lesson that we see repeated over and over. This allows us to connect to the business, make improvements in the business, and gain friends at high levels. These executives are not so concerned about reaction data and learning data; in fact, they normally don't even want to see them. But they do like to see whether there are changes in the way people approach their jobs, which we call application data, and whether the program made a difference in some business measure in the organization, which we call impact data. And a few, in growing numbers, want to see the actual ROI for major programs. So if you can provide these data, they change their perception of the function, of the people involved in that function, and certainly their perception of the funding you may or may not need in the future.

On how evaluation makes a difference in the perception held by executives of training and development…

We have a chapter in this handbook that gets right to that issue. Let me cover what I think are some of the key findings of that particular research piece. We heard from 96 CEOs at the top of very large organizations on the Fortune 500 list, and what we saw was a tremendous gap between what we report to them and what they really want to see. Take the three levels that I mentioned before. Start with the data that suggest we are making a difference in the organization, that people are operating differently, and that they are improving their work processes: the application data. Eleven percent of those CEOs said they are getting this now, and 61 percent said they would like to have it in the future. But the biggest gap occurs at the next level, impact: 8 percent of the CEOs said they have data that show the connection to the business, but 96 percent said they wanted to see these data. That's a huge gap. And third, in terms of ROI, 4 percent said they get this now, but 74 percent said they wanted to see it. Those gaps really highlight some challenges for us. Now, on the positive side, we are making progress in that we have some percentages at Levels 3, 4, and 5 at all: 11 percent, 8 percent, and 4 percent. A decade ago, even that would not have been there. But the challenge is that they want to see more at these levels. So we have to keep working on this and pushing our evaluation to those levels in our data collection and analysis, and particularly in our reporting.

On why we still see such a low investment in training measurement and evaluation within organizations and how we can facilitate more investment in the future…

I would say the number one reason people avoid training measurement and evaluation is a fear of the results, a fear of this level of accountability. After all, if you show the executives who fund these projects that a particular program is not delivering enough monetary value to overcome its cost, which results in a negative ROI, there's a fear this may reflect on them or their staff or their team. So there is sometimes a reluctance to go down that path. Of course we know it is a mistake to wait on the request, but nevertheless, it's a huge impediment that we see. Also, some people just don't understand how we connect learning to business impact. Particularly with soft skills, we get this question so often: how do we do it? They just can't see it and don't know the techniques; the process appears complex and resource intensive. In short, they don't understand it, and they fear the results.
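For readers who want to see the arithmetic behind a "negative ROI," here is a brief worked example. The formula shown is the conventional ROI calculation of net program benefits over fully loaded program costs; the dollar figures below are hypothetical, chosen only for illustration.

\[
\text{ROI (\%)} = \frac{\text{Program Benefits} - \text{Program Costs}}{\text{Program Costs}} \times 100
\]

\[
\text{Example: } \frac{\$80{,}000 - \$100{,}000}{\$100{,}000} \times 100 = -20\%
\]

A program that returns $80,000 in monetary benefits against $100,000 in fully loaded costs yields an ROI of negative 20 percent, which is exactly the kind of result that triggers the fear described above.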

If you look across almost any other function in an organization, you will see a much larger investment in measurement and evaluation than in the learning area. Measurement is basically collecting data, and evaluation is making some sense of the data. Our best guess is that about 1 percent of the learning and development budget is spent on measurement and evaluation. In our best-practice benchmarking, that number ought to be in the 3- to 4-percent range. So we've got to increase our investment three- to four-fold. And what it will take, I think, is for the chief learning officer to start pushing the evaluation envelope to this level.
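To make the arithmetic concrete, here is a hypothetical illustration; the $100 million budget figure is invented for the example.

\[
\underbrace{0.01 \times \$100\text{M} = \$1\text{M}}_{\text{typical spend today}}
\qquad
\underbrace{0.03\text{--}0.04 \times \$100\text{M} = \$3\text{--}4\text{M}}_{\text{best-practice benchmark}}
\]

On that hypothetical budget, moving from 1 percent to the benchmark range means growing the measurement and evaluation spend from about $1 million to $3 or $4 million per year.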

On what the future holds for measurement and evaluation…

I think that we will keep pushing the envelope and making progress in probably five areas. First, we will show more connection to the business, presenting data to our executives that they appreciate and can relate to. That often includes the business impact data being driven by our programs and, occasionally, the ROI, showing how the money was wisely invested. Second, I think technology is going to help; we have technology that can make this less painful and keep the cost down because it addresses the complexity and expense of doing impact studies. Third, I think we are going to build more of it in. I would like to see evaluation positioned as an application tool. For example, we may have an action plan in a program that is there to show the participants how the program applies and the impact it will have, but obviously it is evaluation data for us. Action plans need to be built into programs so they don't appear to be add-ons, because an add-on process is always resisted. Fourth, I think we will see more preparation for people coming into this field. Historically, they had little, if any, training in measurement and evaluation, but we are seeing a lot of degree programs putting serious evaluation and measurement processes into the curriculum. So people are coming in more prepared, and that knowledge often cuts down on resistance. They come in with the expectation of doing it, and I think that is going to help. And last but not least, I think we have to change the ADDIE model: analysis, design, development, implementation, and evaluation. When the instructional design steps are listed in that sequence, we think about evaluation only after it is all over.

About Jack J. Phillips

Jack Phillips, PhD, is chairman and co-founder of the ROI Institute and developer of the ROI Methodology. Phillips’ work spans more than 50 books and 200 articles. Former bank president, Fortune 500 human resource director, and management professor, he provides consulting services in more than 50 countries. His research and publications have won numerous awards, including the Society for Human Resource Management’s Book of the Year and the Yoder-Heneman Personnel Creative Application Award. Phillips is a former member of ASTD’s Board of Directors and the recipient of ASTD’s highest honor, the Distinguished Contribution Award for Workplace Learning and Performance.
