Academic journal article Education & Treatment of Children

An Examination of Methods for Testing Treatments: Conducting Brief Experimental Analyses of the Effects of Instructional Components on Oral Reading Fluency

Article excerpt


Brief experimental analyses of academic performance are emerging as a new tool educators can use to link assessment to intervention. This approach tests treatments directly, using single-case experimental design elements to select intervention strategies for oral reading fluency problems. The purpose of this investigation was to refine the methods reported in previous studies: the procedures were revised to examine a different format for making brief treatment comparisons so that intervention components could be selected on an individual basis. Effective treatment packages were identified and confirmed for all five participants, although the packages themselves differed across participants. The results are discussed in terms of the advantages of the new procedures, implications for practice, and directions for future research.


An emerging area of research has combined direct measures of student academic performance (Shapiro, 1996; Shinn, 1989) with academic intervention research (Daly, Lentz, & Boyer, 1996) in an effort to develop brief experimental analysis procedures for academic performance problems. This research is unique not only in that it targets students' academic responding, but also in the way it approaches treatment selection. Because academic performance problems are behavioral deficits, the goal of these studies has been to increase rates of accurate responding by applying treatments directly. This approach has been applied successfully to spelling and reading comprehension (McComas et al., 1996), spelling and math computation (Hendrickson, Gable, Novak, & Peck, 1996), classroom behavior (Kern, Childs, Dunlap, Clarke, & Falk, 1994), and oral reading fluency (Daly, Martens, Dool, & Hintze, 1998; Noell et al., 1998).

In each of these studies, instructional and/or reward conditions were alternated with control conditions to determine which intervention procedures improved student responding the most. For example, Daly et al. (1998) administered test conditions in an alternating fashion until an increase in oral reading fluency was observed. A mini-reversal was used to confirm the results. Outcomes were measured both in passages in which instruction was delivered and in passages with high content overlap. These latter passages allowed the authors to probe for generalization of effects. For some of the participants, the analyses were extensive. Daly, Martens, Hamler, Dool, and Eckert (1999) reduced the number of treatment conditions necessary for each participant by combining instructional components across conditions, increasing the efficiency of the analyses without compromising treatment effects.

These studies were promising, preliminary attempts at developing a technology that might be useful and feasible for school settings because of their emphasis on (a) directly testing treatments and (b) comparing a small number of treatments in a brief format. Several issues, however, still need to be resolved through further research. For example, because of the brief nature of the experimental designs used by Daly and his colleagues, the procedures do not allow for evaluation of level, trend, or variability in student responding (Martens, Eckert, Bradley, & Ardoin, 1999). In addition, the procedures required significant decision making regarding next steps, and it was not always clear which test condition to implement next.

In these studies, decisions regarding treatment effectiveness were made based only on a treatment's effect relative to baseline levels of responding. There was no analysis of how many intervention sessions with a passage were necessary to bring student responding to desired reading fluency rates. Finally, the contingent reward condition had minimal effects across virtually all of the participants, limiting its utility in the decision-making process. …
