For a good fifty years, the educational research and evaluation community has tried to find procedures that have local legitimacy, technical robustness, and the capacity to move a 'client' public along a recommended trajectory. No one has succeeded cleanly, but several communities, including the community represented in this set of papers, are still at it. It is, in fact, a tall order. How to provide crucial evaluative information unequivocally? How to make certain that this information is understood and actually used in sensible ways? How to marshal this data in order to consolidate or reorient the implementation of new measures?
Some of the answers are contained in two cognate fields: evaluation utilization and knowledge utilization. Alkin (1991) has reviewed some of the key variables in the evaluation field, such as the quality of the evaluation, its local credibility, its relevance, the means and amount of communication about the evaluation results, the agreement between findings and expectations, and the timeliness of the findings. In the knowledge utilization field, Huberman (1993) has homed in on 'dissemination competence' and 'quality of dissemination products'. The first category includes targeted products, interpersonal modes of disseminating information, follow-up, multiple channels of dissemination and reinforcement of users. The second includes a focus on malleable or 'alterable' factors, the contextualization of findings and the operationalization of key findings. These factors then determine whether users will invest time and resources in the data and whether they will act on the information, even if it differs from their own assumptions.
As it happens, these two communities, evaluators and dissemination specialists, have come together, conceptually speaking. For example, many of the variables just mentioned are identical or reconcilable. There has not, however, been as much feverish work on two of the directions taken here. First, the objective in each of the foregoing chapters has been to achieve some measure of 'organizational learning'. This is a slippery measure, and I am one of several who are wary of it.
The guiding, and plausible, premise behind organizational learning is that knowledge is socially constructed, and that these social symbol systems can become salient and operative in a given organization. Changes in understanding, perceptions and interpretations are then indicators of shifts in reflective frameworks. These changes, logically enough, occur more frequently when there is a dense interpersonal network