For decades, treatment integrity received minimal conceptual or empirical attention in education research. During the past 10 years, however, it has become a more frequent topic of discussion as traditional service-delivery approaches were challenged and evidence-based practice (EBP) and response to intervention (RTI) became more prevalent in education research (Kratochwill, Albers, & Steele-Shernoff, 2004). The enactment of the No Child Left Behind Act (2001) mandated that educators implement research-based instruction, which made EBP central to education research and practice. EBP encompasses (a) conducting high-quality intervention evaluation research, (b) selecting interventions proven effective in high-quality evaluation research, (c) implementing the selected intervention as intended by developers, and (d) evaluating the local effectiveness of the intervention (Kratochwill et al., 2004), which are also components of any RTI model (National Association of State Directors of Special Education, 2008). As the EBP and RTI movements gained momentum, it became apparent that treatment integrity was integral to both (e.g., Kratochwill et al., 2004; Noell & Gansle, 2006).
Treatment integrity data are necessary for EBP because they allow (a) researchers to draw valid conclusions about intervention effectiveness in research trials, (b) consumers to understand, when selecting an intervention, whether it can be adapted to their setting, (c) practitioners to ensure the intervention is implemented as intended in an applied setting, and (d) teams to evaluate the effectiveness of the intervention in their setting. Likewise, ensuring high levels of treatment integrity is at the crux of RTI (Brown & Rahn-Blakeslee, 2009) because decisions about intervention intensity (potentially including special education placement) are based on student response to evidence-based interventions. The EBP and RTI movements motivated scholars (e.g., Brown & Rahn-Blakeslee, 2009; Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Jones, Clarke, & Power, 2008; Noell, 2008; Noell & Gansle, 2006; Power et al., 2005) to attend to the role of treatment integrity in contemporary education research and practice.
Early definitions of treatment integrity (e.g., Gresham, 1989; Moncher & Prinz, 1991; Yeaton & Sechrest, 1981) focused primarily on the degree to which an intervention was implemented as intended. More recent conceptualizations (Dane & Schneider, 1998; Fixsen et al., 2005; Jones et al., 2008; Noell, 2008; Power et al., 2005; Waltz, Addis, Koerner, & Jacobson, 1993) suggest that treatment integrity is a multidimensional construct. Although approximately 20 different dimensions (e.g., adherence, quality of implementation, participant exposure, participant responsiveness) have been proposed across multiple models, four dimensions are common across all: (a) adherence (the degree to which an intervention was implemented as intended), (b) quality of implementation (how well the intervention is implemented), (c) exposure (the duration and/or frequency of the intervention), and (d) program differentiation (the difference between the intervention and another intervention or practice as usual; Sanetti & Kratochwill, 2009). Although there is emerging empirical support for some of these dimensions (e.g., Dusenbury, Brannigan, Falco, & Hansen, 2003; Dusenbury, Brannigan, Hansen, Walsh, & Falco, 2005; Gullan, Feinberg, Freedman, Jawad, & Leff, 2009; Hirschstein, Edstrom, Frey, Snell, & MacKenzie, 2007), adherence remains the sine qua non of treatment integrity; without adherence, there may be little reason to consider issues such as quality or exposure (Schulte, Easton, & Parker, 2009).
Despite considerable evolution in how we conceptualize treatment integrity, the methodological importance and role of the construct have not changed. The demonstration of a functional relationship between the implementation of an intervention (i. …