Involving Managers in Training Evaluation

Do your training programs suffer from predictable-results syndrome? Do you find yourself perplexed over which training programs will best suit the needs of your diverse employees? Do you have a good grasp of the value of your training? Most training managers don't, and the predictable-results training blues seem to be catching. Consider the following situation, drawn from real life, and discover a prescriptive method for overcoming the training blues.

No one-size-fits-all training course

After sending three of his line supervisors to an interpersonal-skills training course, Ian Keen, a senior manager at a medium-size tool-and-die plant, felt frustrated. The results were just as he would have predicted.

Gail Wilson, his top supervisor, made good use of the learning experience. But Gail was the type who could be stranded on a deserted island and still learn something. It didn't make much difference what she learned in the training; she'd apply it to the job somehow.

Then there was Walt McFarland - an average supervisor whose initial response to training was generally positive. When Walt attended training, he would say, "It's useful," or "I got a lot out of it." But back on the job, nothing happened that showed he was doing anything differently. Walt's track record never changed - no matter how good or bad the training was.

Ian's third supervisor, Anita Rodriguez, was an enigma. Sometimes she took initiative and used what she learned in training; other times she seemed to do worse after training. If Ian showed an interest in what Anita was doing, she'd improve, but only for a while.

Ian knew of no sure way to tell how training would affect his supervisors. With or without training, Gail would figure out the task and get it done. Walt always did the usual, which was the same as what he'd done the day before. And Anita might do the job just fine, or she might ask Ian to share in the work. Ian never could tell.

Are you sure that the training your employees receive is valuable or at least helpful? Do you have sufficient data or expertise to evaluate the training effectively? If Ian's problem sounds familiar, you could probably benefit from a simple, effective method that can help you gauge the real value of training. The secret? Involving employees in the process of selecting and evaluating training programs.

Training Impact Assessment

The Training Impact Assessment (TIA) is a process that requires managers to look collectively at what happens to their employees as a result of training. The thinking is that if most of the "Anitas" and even a few of the "Walts" use the training effectively, then the training has value. If the "Anitas" haven't changed or developed, then the training is probably ineffective. TIA helps eliminate managers' uncertainties about the effectiveness of training programs.

The TIA approach also results in managers supporting and committing to effective training. By communicating their support to higher management, they can influence executives who may not recognize effective training programs.

TIA has been used to evaluate programs for both technical and non-technical activities in Australia, Canada, South Africa, and the United States. In at least one case, the technique was used to evaluate a proposed marketing plan - with excellent results. In another case, a director of employee development who used TIA to assess a management-development program was personally rewarded with a bonus check for using the high-impact evaluation process.

Steps to success

The TIA method follows six steps.

1. Invite key clients to participate in the assessment sessions. In practice, the key client is often the trainee's boss. The boss is responsible for evaluating or gathering relevant data on the impact of the training on the job and on the unit or person. In some cases, other managers may participate as key clients, provided that they are willing to undertake the data-gathering assignments.

2. At the first session, ask the key clients to gather data on the effectiveness of employee training. Be sure to clarify what clients are being asked to do and to emphasize the importance of gathering ample data before the next session in two weeks.

3. At the second session, ask subgroups to share positive results of training. Divide the clients into subgroups of four to twelve people, six being an ideal group size. Having three to five subgroups increases the validity of common conclusions. (More than five subgroups may become cumbersome, though one client reported success with seven.) Have each subgroup discuss and list the advantages and positive aspects of the training program.

4. Ask subgroups to list negative or unachieved results. Keep the list of weaknesses separate from the list of strengths to avoid confusion and debate and to help clients focus on specific areas of needed improvement. Post all lists in the general-session room for all to see.

5. Have the entire group reconvene to share overall results. All clients should be present during the general session so that everyone has access to all possible data on which to evaluate the training program. For the first time, many managers will be able to see for themselves the value of evaluation and will feel their initial uncertainty about the usefulness of assessing training vanish.

Senior executives may also want to attend this session to witness the excitement and intensity. Their presence can help them sense the impact - both positive and negative - of the training, as well as get a clear idea of the intensity of the assessment process.

One executive said later that she had never realized just how excited about and committed to employee training her managers were until she saw it for herself. The paper reports she received just didn't capture the enthusiasm conveyed in the group session.

For people who cannot attend the assessment session, try videotaping the entire process and making an edited version available for later viewing. Videotape is a good medium for communicating the impact and energy of the TIA process.

6. Consolidate lists, agree on actions, and set a follow-up date. Make one comprehensive list of strengths and one of weaknesses, including input from all clients. Tell the clients that within two weeks they will receive a follow-up written report summarizing the results of the meeting and the agreed-on plans for action.

But does it work?

Since its introduction in 1979, the TIA process has been used many times and has proved as valid today as it was 10 years ago. The process is only as strong, however, as the organizational commitment behind it.

Some practitioners report they are nervous about the first session. They fear their clients won't gather enough data to bring to the second session. There are several ways to ensure that enough data are gathered. In evaluating its management-development program, one organization provided key clients with a patterned questionnaire. It also arranged for 39 training participants to be available for interviews immediately after the first session. Each client agreed to interview one other participant before the second session.

In Chicago, a hospital client mixed program trainees with supervisors during the second session, ensuring that the supervisors had firsthand data about the actual program results. The client reported favorable results and recommended TIA as a way to gather plenty of accessible data. Other clients have discussed data-gathering methods during the first session.

Another way to guarantee that data on the training are gathered is to have the facilitator contract with key clients, in front of the entire group, as to exactly how and by when they will gather the data. The public agreement tends to motivate the clients to follow through on their commitments.

The validity of the evaluation is another area of concern. As mentioned, having three or more subgroups increases the validity of common conclusions. Another benefit of having several subgroups is that personal biases among the group usually become obvious when the lists are posted in the general-session room.

As the subgroups work on their lists and attempt to gather, analyze, and reach objective conclusions about their data, they realize that their results are going to be posted and compared with results from peers in other subgroups. In effect, two dynamics are at work: the desire to do well in front of peers and the motivation to compete with other subgroups.

Evaluation factors

In evaluating the training program, pay particular attention to conclusions that appear on most of the subgroups' lists, especially the negative lists. Such conclusions as "insufficient interviewer training" or comments about a particular training module that did not work are often the primary criteria for determining the importance and validity of certain factors to the clients.

To consolidate the lists, first ask the clients if any conclusions - positive or negative - are common to all the subgroups' lists. Write down those common conclusions. Then ask if there are conclusions common to all the lists except one. Consider those conclusions as somewhat important. Conclusions that appear on only an occasional list are not worth pursuing.
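
For organizations that capture the subgroup lists electronically, the consolidation rule can be reduced to a simple tally. The short Python sketch below is purely illustrative and not part of the TIA method itself; the list contents are hypothetical. It applies the heuristic just described: conclusions on every list are primary, conclusions on all lists but one are somewhat important, and the rest are set aside.

    # Tally how many subgroup lists each conclusion appears on,
    # then apply the consolidation heuristic described above.
    # (Hypothetical data for illustration only.)
    from collections import Counter

    subgroup_lists = [
        {"insufficient interviewer training", "module 3 ran long"},
        {"insufficient interviewer training", "module 3 ran long", "weak job aids"},
        {"insufficient interviewer training", "module 3 ran long"},
        {"insufficient interviewer training", "no follow-up planned"},
    ]

    counts = Counter(c for lst in subgroup_lists for c in lst)
    total = len(subgroup_lists)

    for conclusion, n in counts.most_common():
        if n == total:
            weight = "common to all lists: primary criterion"
        elif n == total - 1:
            weight = "on all lists but one: somewhat important"
        else:
            weight = "occasional: not worth pursuing"
        print(f"{conclusion} ({n} of {total} lists) -- {weight}")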

Time is another important consideration when evaluating training programs. Distribute a report to all participants within two weeks after the second assessment session. Two weeks gives you time to analyze the results and to determine what actions need to be taken. If you wait longer, the clients may sense a lack of interest in their evaluation conclusions; it may become more difficult to capitalize on their enthusiasm and to gain their commitment.

Finally, you should recognize the potency of this process. For the first time, key clients (usually managers or supervisors) may be involved in researching the value of an activity that they aren't really sure about. Their involvement increases their confidence in the value of evaluation: once they have seen it and done it, they are more certain that it works. Your immediate action on their evaluations will reinforce that certainty.

In one management-development program assessment, a common conclusion among the subgroups was that the supervisors needed training in interviewing and selection. The facilitator suggested that the next group of managers to go through the evaluation program receive interpersonal skills training. That quick response to the clients' assessment showed real commitment to improving the process and appreciation for their input.

No more guesswork

So what happened to Ian Keen? Using the Training Impact Assessment process, Ian learned how to evaluate training programs and select those best suited to his line supervisors' needs. He subsequently scheduled Gail for management training, Walt for assertiveness training, and Anita for decision-making training - without worrying about whether the programs would be valuable or helpful. He no longer had to guess which types of training would be best for his people or how they would respond. He had answered his own question before even signing them up.

Using the TIA process in your own organization can help you evaluate and select specific, targeted training programs for your people. You'll find you no longer suffer from that deadly malady - predictable-results syndrome. By involving managers and executives and showing them how evaluation can work, you'll generate greater commitment and support for employee-training programs - from the people who will ultimately benefit most.

Coffman is a senior consultant with Development Dimensions International, 1225 Washington Pike, Box 13379, Pittsburgh, PA 15243-0379.