Education & Treatment of Children

Using Logic Models and Program Theory to Build Outcome Accountability

Article excerpt

Department of Child and Family Studies, Louis de la Parte Florida Mental Health Institute

Abstract

This article offers a way to build accountability by pairing outcome-relevant contextual information with outcomes so that evaluation results remain useful to program planners and managers for decision making. It describes a process that guides agencies in articulating their underlying beliefs and "theory of change" about why their services may be effective for children and families, and in making that theory the basis for comparing their mission and methods with actual outcomes. The approach uses logic models as the primary tool for linking the child, family, and community context of the program, the service delivery strategies, and the expected short- and long-term outcomes. Advantages of this strategy for building decision-oriented evaluations are described, such as establishing a balance between compliance-oriented data collection mandated by funding agencies and active use of results for service improvement. Challenges related to the utilization and relevance of evaluation results are also described, including the need to redefine the role of the evaluator as facilitator and the need to provide information in a timely, predictable, and understandable manner.

Efforts are underway by both government and private provider agencies to build accountability measures into human services. Typically, government efforts attempt to create accountability by requiring agencies and providers to collect performance measures and report results to external funders and/or policy makers. Texas and Virginia are two states that have created accountability measures for such purposes. The Texas Department of Mental Health and Mental Retardation has instituted a statewide accountability system known as the "Texas Children's Mental Health Plan" (Rouse, Toprac, & MacCabe, 1998). Virginia's Department of Mental Health, Mental Retardation, and Substance Abuse Services has developed a multi-stakeholder initiative for developing standardized outcome assessment for public mental health services (Koch, Lewis, & McCall, 1998).

Other efforts are emerging from private providers and provider associations. The Pressley Ridge Schools, a large private non-profit provider based in Pennsylvania, has self-initiated an agency-wide effort supported by the development of an outcome-oriented software system (Beck, Meadowcroft, Mason, & Kiely, 1998). The Maryland Association of Resources for Families and Youth has initiated a similar effort across its provider members and has made participation in the process a requirement for association membership (Strieder, 1998).

The movement toward accountability is not new. As early as 1975, the Community Mental Health Centers (CMHC) amendments (P.L. 94-63) included program evaluation as an integral element needed to improve the quality of services (U.S. General Accounting Office, 1976). This evaluation was expected to occur through a self-evaluation process involving a wide range of stakeholders (Flaherty & Windle, 1981). However, the process yielded little benefit for accountability and service improvement because compliance-oriented data collection and reporting overshadowed it.

The current trend in building accountability in human services is either to select outcomes and indicators from familiar measures such as the Child and Adolescent Functional Assessment Scale (CAFAS; Hodges, 1996) and the Child Behavior Checklist (CBCL; Achenbach, 1991) or to generate a list of performance measures. Neither approach has been satisfactory (Hodges & Hernandez, 1996). Both fail to anchor the selected measures in a context of other information that could later facilitate use of the collected data. Moreover, both fail to involve stakeholders in a problem-solving process that builds their understanding of the program's mission and the methods used to achieve it. …
