Academic journal article School Psychology Review

An Analysis of Learning Rate and Curricular Scope: Caution When Choosing Academic Interventions Based on Aggregated Outcomes

Policies requiring schools to report outcome data (e.g., No Child Left Behind Act, 2002) are designed to influence educators to enhance their efforts to improve academic achievement. Support for these efforts may come from a variety of organizations whose tasks include identifying, summarizing, and reporting on empirically validated educational treatments including the Comprehensive School Reform Quality Center, Best Evidence Encyclopedia, and What Works Clearinghouse (Slavin, 2008). Products from these and related efforts should help educators select programs and procedures that may prevent academic problems and remedy deficits in struggling students. However, the success of these efforts will likely be influenced by the validity of the summarized research evidence (Wolery, 2013).

Perhaps the most basic query from the evidence-based practice movement is, Which interventions are supported by scientific data demonstrating targeted behavior change or learning? This question addresses whether an intervention produces more learning than a control condition. When there is sufficient evidence that an intervention produces more learning than the control condition, it may be considered evidence based, scientifically supported, or empirically validated (What Works Clearinghouse, 2014). A more practical yet complicated question is that of relative effectiveness, which is answered by comparing two or more empirically validated interventions across the same student or students, or across equivalent groups of students (Skinner, 2010). In relative-effectiveness studies, threats to internal validity are minimized by holding constant or evenly distributing all variables that may influence learning, with the exception of the treatments themselves (Campbell & Stanley, 1966; Kazdin, 2011).


One variable that is not always controlled in comparative effectiveness studies is cumulative instructional time (CIT), or the amount of time students spend with each intervention (Bramlett, Cates, Savina, & Lauinger, 2010; Skinner, 2008; Yaw et al., 2014). When this threat is not controlled, students may spend more instructional time working on one intervention, and researchers may use scientific data to recommend the application of remedial procedures that reduce, as opposed to enhance, learning rates (Skinner, 2008). To compare learning interventions both within and across studies, researchers could apply outcome measures that include precise measures of CIT and report learning rates, that is, behavior change per unit of instructional time (Cates et al., 2003; Joseph & Nist, 2006; Skinner, Belfiore, & Watson, 1995/2002).

Skinner et al. (1995/2002) showed how conclusions regarding intervention effectiveness are influenced by how we measure learning (i.e., learning versus learning rate). Specifically, they reanalyzed alternating-treatment data from a prior study that measured the relative learning of two word-reading interventions: One used 1-s intertrial intervals, and the other used 5-s intervals. Initially, the researchers compared intervention outcomes as a function of cumulative sessions (the x-axis was measured across sessions) and found little difference in the amount of learning resulting from the two approaches. On the basis of these data, consumers (both practitioners and researchers) could conclude that the two interventions were equally effective. However, when Skinner et al. reanalyzed the data as a function of CIT (i.e., the x-axis was measured in cumulative instructional seconds), as opposed to cumulative instructional sessions, the 1-s intervention was more effective (i.e., higher learning rate) than the 5-s intervention.
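The arithmetic behind this reanalysis can be sketched with hypothetical numbers (these are illustrative assumptions, not the data from Skinner et al.): if two interventions produce roughly equal gains per session, but one session consumes more instructional seconds because of its longer intertrial interval, the per-second learning rates diverge even though the per-session outcomes look equivalent.

```python
# Illustrative sketch with assumed values (trials per session, response time,
# and words learned per session are hypothetical, not from the original study).

TRIALS_PER_SESSION = 30
RESPONSE_TIME_S = 2  # assumed seconds of student responding per trial

def session_seconds(intertrial_interval_s):
    """Cumulative instructional time (CIT) for one session, in seconds."""
    return TRIALS_PER_SESSION * (RESPONSE_TIME_S + intertrial_interval_s)

# Suppose both interventions yield the same learning per session.
WORDS_PER_SESSION = 4

for label, iti in [("1-s interval", 1), ("5-s interval", 5)]:
    cit = session_seconds(iti)
    rate = WORDS_PER_SESSION / cit  # learning rate: behavior change / CIT
    print(f"{label}: {cit} s of instruction, {rate:.4f} words per second")
```

Measured per session, the two conditions are tied (four words each); measured against CIT, the 1-s condition learns more than twice as fast per instructional second, which is the reversal of conclusions the reanalysis demonstrated.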

Skinner et al. (1995/2002) demonstrated how altering the measurement scale from a coarse measure (i.e., cumulative sessions or cumulative school days) to a finer measure (i.e., cumulative instructional seconds) produced different results, which supported different conclusions regarding relative intervention effects. …
