Professional evaluation is essentially about generating information that assists others in making judgments about a program, service, policy, organization, person, or whatever else is being evaluated. Over the past forty years, evaluation has developed out of a variety of activities to become a specialized field that relies on many different approaches for generating that information. It is important to continuously take stock and to ask which approaches should continue to be used and which should be laid aside. In this volume, Daniel L. Stufflebeam presents his analysis of twenty-two approaches that have guided the conduct of evaluation.
Stufflebeam analyzes twenty-two evaluation approaches that have been sufficiently well articulated and frequently used in making evaluative judgments about programs and services over the past forty or so years. He describes each approach: its orientation, purpose, the typical questions it addresses, and its methods. In many cases these approaches will be familiar to us by more than one label. He often contrasts the approach under discussion with other approaches to illuminate what the approach is and is not.
However, his descriptions of the approaches are but a prelude to the main event. Stufflebeam is true to his evaluator roots in applying his own systematic approach to the analysis of the twenty-two approaches. He takes the role of a connoisseur of evaluation, and his perspective is meta-metaevaluation. He systematically assesses the approaches by rating them in each of four areas previously defined by the Joint Committee's Program Evaluation Standards: utility, feasibility, propriety, and accuracy. These ratings are then combined to yield an overall score. Stufflebeam's extensive experience in conducting evaluations, his founding work with the Context, Input, Process, Product model, and his leadership in the development of evaluation standards give him standing as a connoisseur of evaluation. The perspective he adopts could be considered doubly meta-evaluative in that he evaluates approaches to evaluation, not specific evaluations.
In the end, Stufflebeam recommends that nine diverse approaches receive continued use. His analysis shows that these nine have very different strengths and few severe weaknesses. Some of Stufflebeam's conclusions reflect the working of the "survival of the fittest" in the evaluation field, in that approaches such as clarification hearings came and went rather quickly. Others of his conclusions will be quite controversial, I suspect, such as his relatively low rating for program theory evaluation, which recently was the subject of an entire volume of New Directions for Evaluation (Petrosino, Rogers, Huebner, and Hacsi, 2000). One virtue of the systematic process Stufflebeam has used is that it allows us to trace the specific ratings that landed a particular approach on the recommended list.