Program Evaluation: The Accountability Bridge Model for Counselors

Program evaluation in counseling has been a consistent topic of discourse in the profession over the past 20 years (Gysbers, Hughey, Starr, & Lapan, 1992; Hadley & Mitchell, 1995; Loesch, 2001; Wheeler & Loesch, 1981). Considered an applied research discipline, program evaluation refers to a systematic process of collecting and analyzing information about the efficiency, effectiveness, and impact of programs and services (Boulmetis & Dutwin, 2000). The field of program evaluation has grown rapidly since the 1950s as public and private sector organizations have sought quality, efficiency, and equity in the delivery of services (Stufflebeam, 2000b). Today, professional program evaluators are recognized as highly skilled specialists with advanced training in statistics, research methodology, and evaluation procedures (Hosie, 1994). Although program evaluation has developed as a distinct academic and professional discipline, human services professionals have frequently adopted program evaluation principles to conduct micro-evaluations of local services. From this perspective, program evaluation can be considered a type of action research geared toward monitoring and improving a particular program or service. Because micro-evaluations are conducted on a smaller scale, they can be planned and implemented by practitioners themselves. Therefore, for the purposes of this article, we consider counseling program evaluation to be the ongoing use of evaluation principles by counselors to assess and improve the effectiveness and impact of their programs and services.

Challenges to Counseling Program Evaluation

Counseling program evaluation has not always been conceptualized from the perspective of practicing counselors. For instance, Benkofski and Heppner (1999) presented guidelines for counseling program evaluation that emphasized the use of independent evaluators rather than counseling practitioners. Furthermore, program evaluation literature has often emphasized evaluation models and principles that were developed for use in large-scale organizational evaluations by professional program evaluators (e.g., Kellaghan & Madaus, 2000; Kettner, Moroney, & Martin, 1999). Such models and practices are not easily implemented by counseling practitioners and may have contributed to the hesitance of counselors to use program evaluation methods. Loesch (2001) argued that the lack of counselor-specific evaluation models has substantially contributed to the dichotomy between research and practice in counseling. Therefore, new paradigms of counseling program evaluation are needed to increase the frequency of practitioner-implemented evaluations.

Much of the literature on counseling program evaluation has cited counselors' lack of both the ability to systematically evaluate counseling services and the interest in doing so (e.g., Fairchild, 1993; Whiston, 1996). Many reasons have been suggested for counselors' failure to conduct evaluations. An important one is that conducting an evaluation requires some degree of expertise in research methods, particularly in formulating research questions, collecting relevant data, and selecting appropriate analyses. Yet counselors typically receive little training that prepares them to demonstrate outcomes (Whiston, 1996) or to evaluate their services (Hosie, 1994). Consequently, counselor education programs have been criticized for failing to provide appropriate evaluation and research training to new counselors (Borders, 2002; Heppner, Kivlighan, & Wampold, 1999; Sexton, 1999; Sexton, Whiston, Bleuer, & Walz, 1997). Counselors may therefore refrain from program evaluation because they lack confidence in their ability to effectively collect and analyze data and to apply findings to their professional practice (Isaacs, 2003). Even counselors with the requisite skills to conduct evaluations, however, may hesitate out of fear of finding that their services are ineffective (Lusky & Hayes, 2001; Wheeler & Loesch, 1981). …