Leadership Development: What's Evaluation Got to Do with It?



It is common to encounter skeptical attitudes toward the measurement of leadership development programs. In truth, the research team was somewhat skeptical as we began the ASTD/ICF study titled "The Impact of Leadership Development Programs." Our initial foray into the literature only reinforced this skepticism. Searching the literature databases, we found thousands of hits on leadership development programs (LDPs) and thousands more on training evaluation. When the two terms were combined in a single search, however, very little appeared.

Even those articles whose abstracts indicated we had a "hit" often turned out to be off-topic when read. Most articles gave advice on how to do evaluation of LDPs, but very few provided compelling examples. Our big break came when one of our expert panel members, Laurie Bassi, made the comment, "Maybe we have a case here of publishing bias?" That is, people who really do LDP evaluation well do not have time to publish articles. Bassi's hypothesis turned out to be correct, and our team was able to identify practitioners in the field who had some compelling stories to tell.

As our skepticism subsided, we were faced with a new question: Why do practitioners and their organizations so rarely engage in Level 4 and 5 measurements? Don't people want to know that their LDP investments are paying off and making the organization more successful? After 18 months of research, we are still not sure we can definitively answer that question, but we now have some great clues. Here is what the research taught us:

* "Defense versus improvement." Practitioners have to approach evaluation from the angle of "continuous improvement" rather than "defense of the program." It seems that many practitioners only engage in Level 4 and 5 analyses when forced to defend their programs. If evaluation became a natural part of the instructional design process and the data were used to constantly make the program better, practitioners would become more enthusiastic about investing resources in evaluation.

* Expertise. Many training departments don't have the internal expertise to conduct rigorous Level 4 and 5 evaluations. Many of the "best case" companies in the study had evaluation experts on staff who were committed to conducting valid and practical evaluations.

* Barriers, really? Survey respondents who had never tried a particular evaluation technique assumed significantly more potential barriers to it than respondents who had used the technique. While barriers are certainly organization-specific, this finding calls into question the accuracy of practitioners' assumptions about the resources required to conduct LDP evaluations.

* Lack of creativity. Many practitioners falsely believe that evaluation is about math. In truth, the math element of evaluation is best left to software. The value that practitioners add is in the methodology they devise for efficiently and effectively answering the evaluation questions. With a bit of logic and creative thinking, we can often find an innovative way to show clear evidence of the LDP's effect.

So what does it take to make LDP evaluation work? The ingredients may be simpler than you think.

* Leadership support. As with most change initiatives in organizations, you must have leaders who really understand what LDP evaluation is and why it is important. "Best case" companies in this study had invariably forged a relationship with senior leaders who helped drive the LDP, as well as its evaluation, forward.

* A culture that supports evaluation. "Best case" companies tended to be in environments where their products or services were of critical importance to their customers. For example, several of the companies were in the healthcare field. …