Evaluation Framework, Design, and Reports

A training program is a success if it achieves timely results consistent with pre-established participant performance objectives related to wider organizational goals.

Much the same is true of evaluation methods. Evaluation only needs to provide sufficient information to ensure that a training program is meeting its objectives, and that those objectives advance broader organizational goals. Evaluation methods must provide results in time to inform decision makers as they consider choices for current and future training. After all, the purpose of training is to improve performance, and the purpose of evaluation is to improve training's effectiveness and efficiency.

The historical failure to evaluate both training's costs and benefits has rendered training particularly vulnerable to cost-cutting pressures and has inhibited its use as a lever for effecting strategic change. The fact that fewer than half of America's training programs are formally evaluated indicates implicit managerial trust that, somehow or other, training facilitates attainment of organizational goals. Yet, as one trainer put it, "The worst thing that ever happened to training is that it was taken on faith that it was good." As cost pressures increase, trainers must demonstrate training's value in more substantive ways if it is to gain its rightful place among investment alternatives.

Admittedly, measurement can never completely ascertain a training program's effectiveness or its efficiency in achieving beneficial effects. What worked at one time at one training location with a unique group of participants can't necessarily be transferred to another time, setting, and group and be expected to work as well. Still, evaluations build a case in support of training by providing an approximation of its value.

The Kirkpatrick Model

The evaluation framework that most training practitioners use is the Kirkpatrick Model. Although this model doesn't accommodate all the evaluation methodologies that training managers employ, it's the most widely known evaluation model and illustrates a commonly used set of evaluation levels of increasing rigor.

Almost universally, organizations evaluate their training programs by emphasizing one or more of the model's four levels. In summary, these levels are as follows:

* Reaction. How well did training participants like the program?

* Learning. What knowledge (principles, facts, and techniques) did participants gain from the program?

* Behavior. What positive changes in participants' job behaviors stemmed from the training program?

* Results. What were the training program's organizational effects in terms of reduced costs, improved quality of work, increased quantity of work, and so forth?

Participant reactions are easy to collect but provide little substantive information about training's worth. At the other end of the scale, results-level information is more difficult to collect, but it provides data for assessing training's organizational impact. In general, the more data sources used to evaluate a training program, the more complete the picture of its effectiveness.

The appropriate levels of evaluation data to gather and analyze depend on the evaluation "clients." As a rule, line managers have more interest in performance change and organizational results than in participant reaction and learning. A training department, on the other hand, would have an interest in collecting reaction and learning data to determine what components of training could be improved.

Currently, most employee training is evaluated at the reaction level. Evaluation at this level is associated with the terms "smile test" or "happiness test," because reaction information usually is obtained through a participant questionnaire administered near or at the end of a training program. …