Cityscape (academic journal article)

Lessons for Conducting Experimental Evaluations in Complex Field Studies: Family Options Study

Introduction and Study Objectives

The U.S. Department of Housing and Urban Development (HUD) sponsored the Family Options Study to develop evidence to inform policy decisions about the best ways to resolve homelessness for families with children. The study was also intended to help community planners and local practitioners examine homeless assistance systems to optimize limited resources for assisting families.

When the Family Options Study launched in 2008, previous research was limited by a lack of direct comparisons of different housing and services interventions for homeless families. Prior studies had explored the characteristics and needs of homeless families, and some observational studies had contributed lessons about program implementation and about outcomes for families who used specific types of programs. To our knowledge, no evidence existed before the Family Options Study about the relative effectiveness of alternative types of programs on the outcomes of interest, including housing stability, family preservation, self-sufficiency, and adult and child well-being. A systematic review of the literature on family homelessness, completed before the results of the Family Options Study were available, highlighted the paucity of rigorous studies and the lack of evidence about intervention effects. The authors of that review noted, "substantial limitations in research underscore the insufficiency of our current knowledge base for ending homelessness" (Bassuk et al., 2014: 457).

This article examines lessons learned from the implementation of the Family Options Study. The study team addressed several challenges in executing the experimental design adopted for the study, including identifying the interventions to test, selecting study sites, addressing ethical considerations, and implementing random assignment. The strategies applied to overcome these challenges can inform future experimental research.

Why Random Assignment?

Considerations of feasibility and ethics led the initial study designers at HUD to favor an observational, rather than an experimental, study design. An observational study would examine outcomes for the families who participated in the different types of assistance selected for study. Its results would describe the program models and the outcomes for participating families but would not produce unbiased estimates of the relative effects of the alternative types of assistance. In an observational study, people choose to enroll in a particular intervention or are assigned to one by program staff. These processes result in different interventions being applied to groups of people who may differ from one another in both observed and unobserved ways.
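In the potential-outcomes notation standard in the evaluation literature (a formalism added here for illustration, not drawn from the article itself), the selection problem can be stated precisely. Letting \(Y(1)\) and \(Y(0)\) denote a family's outcome with and without a given intervention, and \(T\) indicate enrollment, a naive comparison of enrollees with nonenrollees decomposes as

\[
\underbrace{E[Y \mid T=1] - E[Y \mid T=0]}_{\text{observed difference}}
  = \underbrace{E[Y(1) - Y(0) \mid T=1]}_{\text{effect on enrollees}}
  + \underbrace{E[Y(0) \mid T=1] - E[Y(0) \mid T=0]}_{\text{selection bias}}.
\]

When families self-select into programs or are steered toward them by staff, the selection-bias term is generally nonzero, so the observed difference does not measure the effect of the intervention.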

An alternative to an observational design is an experimental design, which uses random assignment to determine which type of assistance is offered to which families. The strength of random assignment is that it produces equivalent groups of families receiving the different intervention models, isolating the effect of the interventions from all other factors. Randomized controlled trials are viewed as the gold standard in policy research and the preferred method for program evaluation (Orr, 1999). Whereas observational and quasi-experimental designs suffer from selection bias, experimental designs minimize systematic differences between experimental groups that could bias impact estimates. In large samples, the preexisting differences, both observed and unobserved, among two or more randomly assigned groups approach zero. Thus, significant differences in group outcomes reflect the influence of the interventions. Results from an experimental evaluation therefore offer decisionmakers strong evidence about the causal effects of policy interventions. Designing and executing an experimental evaluation, particularly in a heterogeneous service delivery environment with a highly vulnerable population, can pose challenges, however. …
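A small simulation illustrates the logic of the two preceding paragraphs (the code, variable names, and effect sizes are hypothetical illustrations, not data from the Family Options Study): when an unobserved trait drives both enrollment and outcomes, the observational comparison is biased, whereas random assignment balances the trait and recovers the true effect in large samples.

```python
# Illustrative simulation (hypothetical values, not Family Options Study data):
# random assignment balances an unobserved confounder, so the difference in
# mean outcomes recovers the true intervention effect in large samples.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                        # large sample
true_effect = 2.0                  # hypothetical intervention effect

motivation = rng.normal(size=n)    # unobserved trait that also raises outcomes

# Observational enrollment: more-motivated families are likelier to enroll.
p_enroll = 1 / (1 + np.exp(-motivation))
enrolled = rng.random(n) < p_enroll

# Experimental assignment: a coin flip, independent of motivation.
assigned = rng.random(n) < 0.5

outcome_obs = motivation + true_effect * enrolled + rng.normal(size=n)
outcome_exp = motivation + true_effect * assigned + rng.normal(size=n)

naive = outcome_obs[enrolled].mean() - outcome_obs[~enrolled].mean()
experimental = outcome_exp[assigned].mean() - outcome_exp[~assigned].mean()

print(f"true effect:            {true_effect:.2f}")
print(f"observational estimate: {naive:.2f}")         # biased upward
print(f"experimental estimate:  {experimental:.2f}")  # close to true effect
print(f"covariate gap (random assignment): "
      f"{motivation[assigned].mean() - motivation[~assigned].mean():.3f}")
```

Running the sketch shows the observational estimate exceeding the true effect, while the experimental estimate approximates it and the gap in the unobserved trait between randomly assigned groups is near zero, as the text describes.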
