Academic journal article: School Psychology Review

Progress Monitoring in Reading: Comparison of Weekly, Bimonthly, and Monthly Assessments for Students at Risk for Reading Difficulties in Grades 2-4

Article excerpt

Data-based decision making is a key component of effective multitiered systems of support (MTSS). Within MTSS, all students are screened three times per year. Students who do not meet norm-referenced expectations are identified as being "at risk" and have their progress monitored frequently. Scores from frequent progress monitoring are used to make decisions about the effectiveness of core instruction and/or supplemental intervention for individual students. Despite a nearly 40-year history, research investigating the interpretations and use of scores from curriculum-based measurement in reading (CBM-R) for making progress monitoring decisions at the individual level is less than robust, and many practices are based on expert opinion or anecdotal evidence (Ardoin, Christ, Morena, Cormier, & Klingbeil, 2013; Gersten et al., 2008). For example, the impact of different progress monitoring assessment schedules on the accuracy of estimated student growth is poorly understood. The current study addressed this important limitation in the literature by investigating differences in estimates of reading growth (as measured by CBM-R) when progress monitoring data were collected weekly, bimonthly, or monthly across the academic year for a sample of general education students in Grades 2-4.

Curriculum-Based Measurement in Reading

CBM-R is a 1-minute task in which students read a grade-level passage aloud while the examiner records their errors, providing a direct measure of students' rate of accurate oral reading (Deno, 1985). CBM-R was originally developed to monitor students' progress in their special education curriculum (Deno, 1985, 2003); however, its use in schools has expanded tremendously. This expansion is due in part to an increased emphasis in elementary schools on prevention and data-based decision making and to federal legislation (i.e., the Individuals with Disabilities Education Improvement Act of 2004) allowing students' responsiveness to instruction and intervention to be considered when making decisions about special education eligibility. In addition to being quick, inexpensive, and easy to administer and score, CBM-R is useful within MTSS because the resulting score is a strong estimate of students' reading achievement (Ardoin, Eckert, et al., 2013; January & Ardoin, 2015; Reschly, Busch, Betts, Deno, & Long, 2009) and can accurately distinguish between students with and without reading difficulties (January, Ardoin, Christ, Eckert, & White, 2016; Kilgus, Methe, Maggin, & Tomasula, 2014). After screening all students within a school, those identified as at risk for reading difficulties are provided with supplemental intervention and have their progress monitored frequently (e.g., once weekly; Deno et al., 2009; Silberglitt, Parker, & Muyskens, 2016). Evidence suggests that, at the group level, CBM-R is sensitive to students' growth in reading over time (Ardoin, Christ, et al., 2013; Deno, Fuchs, Marston, & Shin, 2001).
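To make the scoring and screening logic concrete, the following sketch (in Python, not from the article) shows how a CBM-R score and an at-risk flag might be computed. The benchmark value, student data, and function names are illustrative assumptions, not values or procedures reported by the authors.

```python
# Minimal sketch of CBM-R scoring and a norm-referenced screening decision.
# The score is the number of words read correctly in the 1-minute sample
# (WRCM); students scoring below a benchmark are flagged as at risk.
# Benchmark and sample values below are hypothetical.

def wrcm(words_attempted: int, errors: int) -> int:
    """Words read correctly per minute for a 1-minute CBM-R administration."""
    return words_attempted - errors

def is_at_risk(score: int, benchmark: int) -> bool:
    """Flag a student whose WRCM falls below the norm-referenced benchmark."""
    return score < benchmark

# Hypothetical winter screening: student attempts 78 words with 6 errors.
score = wrcm(words_attempted=78, errors=6)          # 72 WRCM
print(score, is_at_risk(score, benchmark=85))       # 72 True -> monitor progress
```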

CBM-R for Progress Monitoring

Using CBM-R to monitor the progress of students with reading difficulties is fairly straightforward. First, educators set an ambitious goal of one to two words gained per week in oral reading rate (Deno et al., 2001; Fuchs, Fuchs, Hamlett, Waltz, & Germann, 1993) while students receive evidence-based intervention(s). As CBM-R data are collected, scores (reported as the number of words read correctly per minute [WRCM]) are plotted in a time-series fashion. The observed level and rate of growth in WRCM (i.e., the CBM-R slope) are then compared with the expected level and rate of growth (Silberglitt et al., 2016). If the student is determined to be making adequate progress, the intervention is continued; if the student is not making adequate progress, the intervention may be changed or a more intensive intervention may be provided. Ultimately, students who do not respond to evidence-based interventions are evaluated for special education supports. …
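As an illustration of the decision rule described above, the sketch below (in Python, not drawn from the article) estimates a CBM-R slope from hypothetical weekly WRCM scores using ordinary least squares and compares it with an expected growth rate of 1.5 words per week. The data, goal value, and function name are assumptions for demonstration only.

```python
# Minimal sketch: estimate weekly CBM-R growth (slope) from plotted WRCM
# scores and compare it with the expected rate of growth (the goal line).
import numpy as np

def cbmr_slope(weeks, wrcm_scores):
    """Estimate WRCM gained per week via ordinary least squares."""
    slope, intercept = np.polyfit(weeks, wrcm_scores, deg=1)
    return slope, intercept

# Hypothetical progress monitoring data: 10 weeks of weekly WRCM scores.
weeks = np.arange(1, 11)
scores = np.array([42, 45, 44, 48, 50, 49, 53, 55, 54, 58])

observed_slope, _ = cbmr_slope(weeks, scores)
expected_slope = 1.5  # ambitious goal of 1-2 words gained per week (Deno et al., 2001)

if observed_slope >= expected_slope:
    print(f"Growth of {observed_slope:.2f} WRCM/week meets the goal; continue the intervention.")
else:
    print(f"Growth of {observed_slope:.2f} WRCM/week is below the goal; consider changing or intensifying the intervention.")
```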
