Human service practitioners face challenges in communicating how their programs lead to desired outcomes. One framework for representation that is now widely used in the field of program evaluation is the program logic model. This article presents an example of how qualitative data were used to refine a logic model for the Cornell Family Development Training and Credentialing (FDC) Program. This interagency training program teaches a strengths-based, family support, empowerment-oriented approach to the helping relationship. Analysis of the qualitative data gathered from interviews and focus groups with stakeholders led to revisions and further development of the program's initial logic model. The logic model format was then used to organize the representation of findings relative to program activities and outcomes.

Key Words: Qualitative Inquiry, Program Logic Model, Empowerment, Outcomes Evaluation, Human Service Training, Strengths-Based Practice, Family Development, Family Support
The call for accountability rings loudly. As policymakers and foundations increasingly base funding decisions on evidence of outcomes, human service providers face pressure to demonstrate that positive changes occur for the populations they serve. For new programs, it is not always clear what effects occur. Given the open-ended nature of constructivist research, this is an opportune time to use qualitative inquiry. By studying the experiences of participants as a social phenomenon, evaluators can capture their perceptions of program effects. The information-rich (Patton, 2002) data gathered provide meaningful stories about real people and their perceptions of the impact of the program on their lives.
This article presents an example of how qualitative data were used to refine a program logic model (e.g., Julian, 1997) for a human services training program called the Family Development Training and Credentialing (FDC) Program (Cornell University, 2008).
Using the logic model
Elucidation of a program's theory of change is an important first step in theory-based evaluation of multi-level effects in comprehensive, interagency programs (Knapp, 1995). Using a logic model, one can present a graphic depiction of assumptions about how the program works to achieve particular results. Program logic models vary in their level of detail. The model I used has five columns¹, as shown in Table 1.
The first two columns of the model, Inputs/Resources and Activities, represent implementation theory in that they list the elements necessary for a program to produce desired results. The Activities listed in the second column, which are crucial to successful implementation, depend on the inputs/resources available and are required for the outcomes that can ensue. There is a timing sequence to the set of activities, although not all must be completed before the effects begin to take place.
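For readers who work with logic models computationally, the five-column structure described above can be captured in a simple data structure. The following is a minimal, hypothetical sketch: the column names follow the article, but the example entries are illustrative placeholders, not items from the FDC program's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Five-column program logic model, as described in the text."""
    inputs_resources: list = field(default_factory=list)       # column 1
    activities: list = field(default_factory=list)             # column 2
    initial_outcomes: list = field(default_factory=list)       # column 3
    intermediate_outcomes: list = field(default_factory=list)  # column 4
    long_term_impact: list = field(default_factory=list)       # column 5

# Hypothetical placeholder entries for illustration only.
model = LogicModel(
    inputs_resources=["trained instructors", "curriculum materials"],
    activities=["classroom sessions", "portfolio development"],
    initial_outcomes=["increased knowledge of strengths-based practice"],
    intermediate_outcomes=["changed practice with families"],
    long_term_impact=["stronger families and communities"],
)

# As the text notes, the first two columns represent implementation
# theory, while the last three represent the program's effects.
implementation = model.inputs_resources + model.activities
effects = (model.initial_outcomes
           + model.intermediate_outcomes
           + model.long_term_impact)
print(len(implementation), len(effects))  # prints: 4 3
```

Grouping the columns this way mirrors the distinction the article draws between implementation elements and program effects, which can be useful when organizing coded qualitative findings against the model.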
The effects of the program are represented in the third, fourth, and fifth columns. The third column, Initial Outcomes, includes first-level effects that may occur, whereas the Intermediate Outcomes column indicates those effects that may occur subsequent to the earlier changes. In deciding where to place outcomes, I considered whether any particular effect could reasonably be expected to happen, for most people, in the first few months of involvement. If so, I placed it in the Initial Outcomes column. If one could assume that an effect might take longer, it became an Intermediate Outcome. This placement suggests, for future researchers, when it might make sense to assess for that effect. Assignment of outcomes within the columns is somewhat arbitrary in the sense that many of these effects happen simultaneously. I see this as reasonable because change is not a linear process. The items in the final column, Long-term Impact/Vision, are meant to represent the larger, long-term goals to which …