Academic journal article: School Psychology Review

Curriculum-Based Measurement of Oral Reading: Standard Errors Associated with Progress Monitoring Outcomes from DIBELS, AIMSweb, and an Experimental Passage Set

Article excerpt

Abstract. There are relatively few studies that evaluate the quality of progress monitoring estimates derived from curriculum-based measurement of reading. The studies that have been published provide initial evidence that the magnitude of standard error is large relative to the expected magnitude of weekly growth. A major contributor to the observed magnitudes of standard error is inconsistent passage difficulty within progress monitoring passage sets. The purpose of the current study was to estimate and compare the magnitudes of standard error across an experimental passage set, referred to as the Formative Assessment Instrumentation and Procedures for Reading (FAIP-R), and two commercially available passage sets (AIMSweb and Dynamic Indicators of Basic Early Literacy Skills [DIBELS]). Each passage set was administered twice weekly to 68 students. Results indicated significant differences in intercept, weekly growth, and standard error. Estimates of standard error were smallest in magnitude for the FAIP-R passage set, followed by the AIMSweb and then DIBELS passage sets. Implications for choosing a progress monitoring passage set and estimating individual student growth are discussed.

**********

In the late 1970s and early 1980s, Stan Deno and colleagues developed the procedures for curriculum-based measurement of oral reading (CBM-R) to enable teachers to systematically monitor and evaluate the effects of instruction on student performance (Deno, 1985; Deno, Marston, & Tindal, 1985; Deno, Mirkin, & Chiang, 1982). Research suggests that students are likely to make greater academic gains if their teachers use CBM-R to conduct systematic formative evaluations that determine if and when instructional modifications are needed (Fuchs, Deno, & Mirkin, 1984; Fuchs & Fuchs, 1986; Fuchs, Fuchs, Hamlett, & Allinder, 1991). Systematic formative evaluation using CBM-R involves administering passages twice weekly, weekly, or monthly and plotting a student's observed words read correctly per minute (WRCM) in time series fashion. CBM-R supports this time series depiction of student growth because it departs from traditional psychometrics: it "integrates the concepts of standardized measurement and traditional reliability and validity with features from behavioral and observational assessment methodology" (Deno, Fuchs, Marston, & Shin, 2001, p. 508).
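To make the plotting-and-growth step concrete, here is a minimal sketch (not drawn from the article; the scores are invented) of how a twice-weekly WRCM series is commonly summarized: an ordinary least squares line is fit to the plotted scores, and its slope serves as the weekly growth estimate.

```python
# Minimal sketch: summarizing a CBM-R progress monitoring series with an
# ordinary least squares (OLS) trend line. Scores are invented; the
# slope-as-growth summary is a common convention, not the article's code.
import numpy as np

weeks = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])  # twice-weekly probes
wrcm = np.array([42, 45, 41, 48, 47, 52, 50, 55])            # words read correctly per minute

# Fit WRCM = intercept + slope * week; the slope estimates weekly growth.
slope, intercept = np.polyfit(weeks, wrcm, deg=1)
print(f"estimated weekly growth: {slope:.2f} WRCM per week")
print(f"estimated starting level: {intercept:.1f} WRCM")
```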

Unfortunately, with few exceptions (e.g., Christ & Ardoin, 2009; Poncy, Skinner, & Axtell, 2005), the majority of research evaluating CBM-R has not examined its technical features beyond reliability and validity, which are necessary but not sufficient properties for measures used to depict student growth (Deno et al., 2001; Francis et al., 2008). Francis et al. (2008) suggested that measures used to depict growth must have a sufficient number of alternate forms and that the construct and difficulty of those forms must remain constant across measurement occasions. When the construct measured or the difficulty of the probes varies, variability appears in student performance that is not a function of changes in skill, resulting in less accurate depictions of student growth.
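Because the study's central quantity is the standard error attached to such growth estimates, a brief illustration may help. The following sketch, written here under textbook OLS assumptions rather than taken from the article, computes the standard error of the fitted slope; the more the observed scores scatter around the trend line, whether from uneven passage difficulty or other noise, the larger that standard error becomes.

```python
# Illustrative sketch (textbook OLS formulas, not the study's analysis):
# the standard error of the slope quantifies uncertainty in the weekly
# growth estimate given scatter of observed WRCM around the trend line.
import numpy as np

def growth_with_standard_error(weeks, wrcm):
    """Return (slope, standard error of slope) for an OLS fit of WRCM on weeks."""
    weeks = np.asarray(weeks, dtype=float)
    wrcm = np.asarray(wrcm, dtype=float)
    n = len(weeks)
    slope, intercept = np.polyfit(weeks, wrcm, deg=1)
    residuals = wrcm - (intercept + slope * weeks)
    # Standard error of estimate: residual standard deviation with n - 2 df.
    see = np.sqrt(np.sum(residuals**2) / (n - 2))
    # Dividing by the spread of measurement occasions gives the SE of the slope.
    se_slope = see / np.sqrt(np.sum((weeks - weeks.mean()) ** 2))
    return slope, se_slope

slope, se = growth_with_standard_error(
    [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5],
    [42, 45, 41, 48, 47, 52, 50, 55],
)
print(f"growth = {slope:.2f} +/- {se:.2f} WRCM per week")
```

A larger standard error means a wider range of plausible true growth rates, which is why inconsistent passage difficulty undermines decisions about whether an intervention is working.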

Initially, the alternate forms that made up CBM-R progress monitoring passage sets were developed by randomly selecting passages from students' curricula. This method, however, proved flawed because of the considerable variability in text difficulty within curricula (Fuchs & Deno, 1992, 1994). In an effort to provide schools with standardized, controlled passage sets, researchers developed sets of probes whose difficulty levels were equated using readability formulas. Readability formulas estimate the difficulty of passages from factors such as the number of single-syllable words per 100 words, the number of syllables per 100 words, or the percentage of words in the passage not found in a word list (Ardoin, Suldo, Witt, Aldrich, & McDonald, 2005). …
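To illustrate the kinds of counts such formulas rely on, the sketch below computes syllables per 100 words and single-syllable words per 100 words for a text sample. The syllable counter is a crude vowel-group heuristic invented for this example, not the exact procedure of any published readability formula.

```python
# Hedged sketch of readability-style counts: syllables per 100 words and
# single-syllable words per 100 words. The syllable counter is a rough
# vowel-group heuristic, not any published formula's exact procedure.
import re

def count_syllables(word):
    """Approximate syllables as runs of vowels; crude but illustrative."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_counts(text):
    words = re.findall(r"[A-Za-z']+", text)
    syllables = [count_syllables(w) for w in words]
    per_100 = 100.0 / len(words)
    return {
        "syllables_per_100_words": sum(syllables) * per_100,
        "single_syllable_words_per_100_words": sum(s == 1 for s in syllables) * per_100,
    }

sample = "The small dog ran to the park. It was a bright and sunny day."
print(readability_counts(sample))
```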
