Thinking outside the Evaluation Box

Article excerpt

It has been 40 years since Kirkpatrick introduced his four-level evaluation model. Is it time to reexamine how we value workplace training? Some say yes.

Dilbert creator Scott Adams emailed me this tongue-in-cheek scenario on the value of training: "Dilbert's Boss would use the training department to hide funds that could be cut during the next budget adjustment. You can always cut training and be safe in assuming that no direct negative impact will show up for a few months."

If that Dilbertism rings a bell, then you recognize the "training last" attitude. Managers aren't always receptive to training talk - and they're even more reluctant when no firm return-on-investment figure enters the discussion. (Double that reluctance when the budget axe is throwing sparks at the grindstone.)

Proving training's value is no easy task. There's return-on-expectation, employee retention, employee development, job performance, performance improvement, customer satisfaction, bottom-line results, and lots more to consider. Many people talk about training's payoff and the implications for training departments, but, overall, we're evaluating training the same way we have for years.

For example, how do we value training that has tangible results versus that which has intangible results? It's relatively easy to measure the ROI of technical skills training. We can examine before-and-after productivity numbers, for one thing. But how do we measure the value of leadership training? Should we try to measure it?

The supermodel

Mention training evaluation and Donald L. Kirkpatrick's four-level model springs to mind. He first published a series of articles in 1959, describing a four-stage evaluation model - reaction, learning, behavior, and results - and he and others have been refining it ever since. Kevin Oakes of Asymetrix Learning Systems sums up the Kirkpatrick levels this way:

Level 1: Smile-sheet evaluation. Did you like the training?

Level 2: Testing. Did you understand the information and score well on the test?

Level 3: Job improvement. Did the training help you do your job better and increase performance?

Level 4: Organizational improvement. Did the company or department increase profits, customer satisfaction, and so forth as a result of the training?

Says Paul Bernthal, manager of research at Development Dimensions International, "Kirkpatrick's classic model has weathered well. But it has also limited our thinking regarding evaluation and possibly hindered our ability to conduct meaningful evaluations.

"Too often, trainers jump feet first into using the model without taking the time to assess their needs and resources, or to determine how they'll apply the results. When they regard the four-level approach as a universal framework for all evaluations, they tend not to examine whether the approach itself is shaping their questions and their results. The simplicity and common sense of Kirkpatrick's model imply that conducting an evaluation is a standardized, prepackaged process. But other options are not spelled out in the model."

Kirkpatrick offers some flexibility, though. He says, "Borrow evaluation forms, procedures, designs, approaches, techniques, and methods from other people." He also encourages trainers to understand the difference between proof and evidence of training results. He says, "As we look at the evaluation of reaction, learning, behavior, and results, we can see that evidence is much easier to obtain than proof. In some cases, proof is impractical and almost impossible to get."

Certainly, plenty has been written about variations on the evaluation theme. According to professors Glenn M. McEvoy of Utah State University and Paul F. Buller of Gonzaga University, "Written guidelines for evaluation have, in fact, remained quite consistent over time and across authors.... Furthermore, evaluating training is definitely not easy, at least not in our experience. …