Academic journal article School Psychology Review

Curriculum-Based Measurement of Reading Progress Monitoring: The Importance of Growth Magnitude and Goal Setting in Decision Making

Article excerpt

Curriculum-based measurement of oral reading (CBM-R; Deno, 1985) is used to monitor student response to instruction. Educators use CBM-R to make instructional decisions by administering grade-level passages of connected text across time, calculating the number of words read correctly in 1 minute (WRCM), graphing those observations on time series graphs, and evaluating the trajectory of the data (Deno, 1986). Decision rules (Ardoin, Christ, Morena, Cormier, & Klingbeil, 2013) are often used in conjunction with an expected rate of growth (i.e., goal line) to evaluate whether an instructional change should be made. Although the term aim line is sometimes used interchangeably with goal line, for the purposes of this article, the term goal line will be used to refer to an expected rate of growth. Educators and researchers estimate a line of best fit, or trend line, through WRCM observations to summarize student growth. If the rate of improvement (ROI), or the slope of the student's trend line, is less than the slope of the goal line, an instructional change is considered. Conversely, if the ROI is greater than the slope of the goal line, a more ambitious goal is considered. Finally, if the slopes of the trend and goal lines are generally equivalent, the instructional program and goal are maintained. As straightforward as this process may seem, concerns about the technical adequacy of ROI estimates have called into question the accuracy of recommendations derived from trend line decision rules.
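The decision rule described above can be sketched in code: fit an ordinary least-squares trend line through the weekly WRCM observations, take its slope as the ROI, and compare that slope against the goal line. This is a minimal illustration, not the article's procedure; the function names, the sample data, and the tolerance band are hypothetical.

```python
# Hypothetical sketch of a trend-line decision rule. The tolerance band,
# function names, and example data are illustrative assumptions.
from statistics import mean

def rate_of_improvement(weeks, wrcm):
    """Ordinary least-squares slope (ROI) of WRCM scores over weeks."""
    mw, ms = mean(weeks), mean(wrcm)
    num = sum((w - mw) * (s - ms) for w, s in zip(weeks, wrcm))
    den = sum((w - mw) ** 2 for w in weeks)
    return num / den

def decision(roi, goal_slope, tolerance=0.1):
    """Compare the trend-line slope against the goal-line slope."""
    if roi < goal_slope - tolerance:
        return "consider instructional change"
    if roi > goal_slope + tolerance:
        return "consider a more ambitious goal"
    return "maintain instructional program and goal"

weeks = [1, 2, 3, 4, 5, 6]
scores = [42, 44, 43, 47, 48, 50]   # weekly WRCM observations
roi = rate_of_improvement(weeks, scores)
print(round(roi, 2), decision(roi, goal_slope=2.0))
# 1.6 consider instructional change
```

In practice the comparison is rarely this clean, which is precisely the point the following paragraphs develop: measurement error around both the individual scores and the slope complicates the decision.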

In the context of CBM-R progress monitoring, variability in observations across time is not solely attributable to instructional effects. The average deviation, or residual, from the line of best fit is quantified as the standard error of the estimate (SEE). Growth estimates with low SEE have observations tightly grouped around the trend line, and growth estimates with high SEE have observations widely spread around the trend line. Previous research suggests that observations deviate an average of 10 WRCM from trend lines (Ardoin & Christ, 2009). Instrumentation, along with the degree of standardization of administration and scoring, influences the magnitude of SEE. Hastily constructed instruments, as well as inconsistent data collection procedures, introduce unwanted variability in scores across time. To understand the implications of SEE, consider a scenario where a student reads 60 WRCM. If the SEE associated with that growth estimate were 10 WRCM, a 68% confidence interval would suggest the student's true score may be as high as 70 or as low as 50. High levels of SEE undermine the ability to infer a student's true oral reading rate at any given week.
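The 60 ± 10 WRCM example can be worked through directly: SEE is the standard deviation of the residuals around the trend line, and a 68% confidence interval spans roughly one SEE on either side of the observed score. The code below is a sketch under those assumptions; the function names and sample residuals are illustrative.

```python
# Illustrative computation of SEE and a 68% confidence interval.
# The 60 WRCM score and SEE of 10 mirror the example in the text.
import math

def standard_error_of_estimate(observed, predicted):
    """SEE: spread of residuals around the trend line (n - 2 df for a fitted slope and intercept)."""
    residuals = [o - p for o, p in zip(observed, predicted)]
    return math.sqrt(sum(r * r for r in residuals) / (len(residuals) - 2))

def ci_68(score, see):
    """A 68% CI spans roughly +/- 1 SEE around the observed score."""
    return score - see, score + see

low, high = ci_68(60, 10)
print(low, high)  # 50 70
```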

The precision of growth estimates is quantified as the standard error of the slope (SEb), which is calculated from the SEE. Whereas SEE captures the precision of individual scores in a time series, SEb captures the precision of the slope itself and can be used to create confidence intervals around a student's ROI. For instance, a student may be improving at a rate of 1.50 WRCM per week, with an SEb of 1.10, and be expected to improve at a rate of 2.00 WRCM per week. Using a 68% confidence interval, that student's true ROI may be as high as 2.60 or as low as 0.40 WRCM per week. In this instance, it is uncertain whether the student's true ROI is greater than or less than the expected ROI. Large magnitudes of SEb obscure accurate interpretations of student progress.
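The relationship between SEE and SEb, and the 1.50 ± 1.10 interval above, can be sketched as follows. For a simple linear trend, SEb is the SEE divided by the square root of the summed squared deviations of the time points from their mean; a 68% interval again spans roughly one standard error. Function names and the weekly schedule are illustrative assumptions.

```python
# Illustrative computation of SEb from SEE, and a 68% CI around the ROI.
# The ROI of 1.50 and SEb of 1.10 mirror the example in the text.
import math
from statistics import mean

def standard_error_of_slope(see, weeks):
    """SEb for a simple linear trend: SEE scaled by the spread of the time points."""
    mw = mean(weeks)
    return see / math.sqrt(sum((w - mw) ** 2 for w in weeks))

def roi_ci_68(roi, seb):
    """A 68% CI spans roughly +/- 1 SEb around the estimated ROI."""
    return roi - seb, roi + seb

low, high = roi_ci_68(1.50, 1.10)
print(round(low, 2), round(high, 2))  # 0.4 2.6
```

Note how SEb shrinks as more weeks of data are collected (the summed squared deviations grow), which anticipates the duration findings discussed next.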

The SEb is related to the length of time data are collected, the frequency with which data are collected, and the amount of variability in the data (i.e., SEE; Christ, 2006). Christ, Zopluoglu, Long, and Monaghen (2012) found that requisite levels of reliability for low-stakes decisions (e.g., day-to-day instructional programming; r = .70) could be achieved after 14 weeks if data were collected once a week with superior instruments and under tightly standardized conditions. High-stakes decisions (e.g., using progress monitoring results as part of special education eligibility determination; r = . …
