commission, which has been most active in supporting DSM evaluation, has one individual out of 180 people devoted to DSM evaluation.
A key question that remains unanswered is whether the regulatory process can cope with the data overload that DSM evaluations will generate. For many commissions with very small staffs, the New York approach to evaluation is infeasible. Can evaluation be an effective regulatory tool under these constrained conditions? What will be used instead?
Ultimately, we must consider the impact that imperfect evaluations will have on the regulation of DSM. Several lessons can already be gleaned from the evaluation record.
Expectations of evaluation as a panacea for regulating DSM must be discounted. Evaluations cannot provide definitive answers about energy savings, nor will they yield unambiguous results for setting incentive levels and determining lost revenues. In the end, evaluations must be recognized for the incremental improvements they can add to our understanding of DSM effectiveness.
Does this mean that evaluation expenditures should therefore be limited? In fact, the current situation implies quite the opposite. The inability to predict energy savings stems from our limited knowledge of customer behavior and DSM program delivery mechanisms. Expenditures in broadly defined process and impact evaluation will greatly enhance our understanding and should be encouraged. Large expenditures to pinpoint specific savings may not generate commensurate benefits, however.
As regulators push utilities into DSM activities, the regulatory process must adapt to new monitoring and control functions. The nature of DSM will generally require regulators to take a more active role in this process. If the experiences of New York, California, and Wisconsin are any indication, other commissions must be prepared to greatly expand their commitment in this area.