By Carnevale, Anthony P.; Schultz, Eric R.
Training & Development Journal, Vol. 44, No. 7
ASTD's research revealed that the actual practice of evaluation doesn't often follow the strict recommendations of evaluation literature. This is largely explained by the fact that many training practitioners haven't found the literature's advice applicable or useful for their organizations.
But, as well-known author and management consultant Thomas J. Peters has said, "What gets measured gets done. . . . Even imperfect measures provide an accurate strategic indication of progress, or lack thereof." So practitioners have employed various practical evaluations.
Here's an overview of current evaluation practices among organizational leaders in training, explaining how and why they subscribe to their various practices. The evaluation techniques and practices explored don't meet traditional academic notions of rigor, but they do provide valuable information, are reproducible, and can be conducted quickly and easily. Most of the training managers who participated in ASTD's research effort believe that there's value in a concerted effort to increase the practice of employee training evaluation along these lines.
All the organizations represented in this study evaluate some aspect of their training programs. In terms of the four-level Kirkpatrick model (see page S-15), 75 to 100 percent of them evaluated training programs at the participant reaction level. Virtually all of them also evaluated participants' knowledge gains in some of their training programs. Twenty-five percent of their training programs were evaluated at this, the learning level.
Behavior change on the job was the least measured: among companies surveyed, only about 10 percent evaluated training at this level. Employee training was evaluated at the organizational results level only about 25 percent of the time, despite new pressures on training practitioners to assess the economic worth of HRD activities.
Sixty-six percent of the training managers reported that HRD professionals are under increasing pressure to show that programs are producing favorable bottom-line results. These managers had a strong track record in training evaluation or had high management acceptance of training as a way to meet real operational needs. So, in their experience, increased pressure did not mean that upper management doubted that training could be beneficial.
The reason usually given for closer scrutiny by management was that employee training is being recognized as a significantly large expenditure. But greater management attention coupled with movement toward cost reduction can be particularly injurious to expenditures (such as those for employee training) that have hard-to-isolate or long-term payoffs.
Although most training programs are evaluated at the reaction and learning levels, these levels aren't always consistent with the reasons for evaluation. Research suggests that evaluation conducted for the proper reasons helps determine training's impact on job performance and economics within an organization.
Most organizations evaluate training programs to meet the following demands:
* Training department demands. For quality assurance, trainers gather information to direct their efforts to improve training effectiveness. Trainers also want to demonstrate training programs' worth to top or operating management. And, training managers want to build databases for future planning and analysis.
* Employee demands. After a decade of downsizing and flattening of organizational pyramids, employees are seeking training that's timely and useful in meeting their new job responsibilities.
* Management demands. Managers are scrutinizing training as a tool for gaining a competitive advantage. Many managers believe that having the best-trained work force increases competitiveness.
In an economically competitive environment, however, it's necessary to justify training expenditures to ensure adequate returns on training investments. …