The call for accountability sounds loudly. As policymakers and foundations increasingly base funding decisions on evidence of outcomes, human service providers face pressure to demonstrate that positive changes occur for the populations they serve. For new programs, it is not always clear what effects occur. Given the open-ended nature of constructivist research, this is an opportune time to use qualitative inquiry. By studying the experiences of participants as a social phenomenon, evaluators can capture their perceptions of program effects. The information-rich (Patton, 2002) data gathered provide meaningful stories about real people and their perceptions of the program's impact on their lives.
This article presents an example of how qualitative data were used to refine a program logic model (e.g., Julian, 1997) for a human services training program called the Family Development Training and Credentialing (FDC) Program (Cornell University, 2008).
Using the logic model
Elucidation of a program's theory of change is an important first step in theory-based evaluation of multi-level effects in comprehensive, interagency programs (Knapp, 1995). A logic model offers a graphic depiction of assumptions about how the program works to achieve particular results. Program logic models vary in their level of detail. The model I used has five columns (1), as shown in Table 1.
The first two columns of the model, Inputs/Resources and Activities, represent implementation theory in that they list the elements necessary for a program to produce desired results. The Activities listed in the second column, which are crucial to successful implementation, depend on the inputs/resources available and are required for the outcomes that can ensue. There is a timing sequence to the set of activities, although not all must be completed before the effects begin to take place.
The effects of the program are represented in the third, fourth, and fifth columns. The third column, Initial Outcomes, includes first-level effects that may occur, whereas the Intermediate Outcomes column indicates effects that may follow those earlier changes. In deciding where to place outcomes, I considered whether a particular effect could reasonably be expected to happen, for most people, in the first few months of involvement. If so, I placed it in the Initial Outcomes column. If one could assume that an effect might take longer, it became an Intermediate Outcome. This placement suggests, for future researchers, when it might make sense to assess for that effect. Assignment of outcomes within the columns is somewhat arbitrary in the sense that many of these effects happen simultaneously; I see this as reasonable because change is not a linear process. The items in the final column, Long-term Impact/Vision, represent the larger, long-term goals to which the program may contribute. While these are important for a program to identify as a vision of the possible, they are seldom evaluated.
Evaluators often draft logic models based on their understanding of the program. Stakeholder perceptions of assumptions, activities, and outcomes are then added until a comprehensive program theory emerges. Researchers in the field of family services have argued for the usefulness of logic models in conceptualizing intended program outcomes and causal pathways (Rogers, 2003; Weiss, Klein, Little, Lopez, Rothert, Kreider, et al., 2005). Likewise, funders such as the Centers for Disease Control and Prevention (CDC) Program Evaluation Working Group (n.d.), the W. K. Kellogg Foundation (2004), and the United Way (Hatry, van Houten, Plantz, & Greenway, 1996) encourage the development of logic models as a tool for program planning and evaluation.
In this research, qualitative data gathered from a purposive sample of stakeholders in the FDC program were first analyzed to identify program outcomes. …