Abstract. Despite a body of evidence that curriculum-based measurement of reading (R-CBM) is a valid measure of general reading achievement, some school-based professionals remain unconvinced. At the core of their argument is their experience with word callers, students who purportedly can read fluently but do not understand what they read. No studies have been conducted to determine whether teachers' perceptions about these word callers are accurate. This study examined the oral reading and comprehension skills of teacher-identified word callers to test whether they read fluently but lacked comprehension. Two groups of third graders (N = 66) were examined: (a) teacher-identified word callers (n = 33) and (b) similarly fluent peers (n = 33) who were judged by their teachers to read as fluently as the word callers but who showed comprehension. They were compared on R-CBM, CBM-Maze, an oral question-answering test, and the Passage Comprehension subtest of the Woodcock Reading Mastery Test. Results disconfirmed the expectation that word callers and their similarly fluent peers read aloud equally well: word callers read fewer correct words per minute and earned significantly lower scores on the three comprehension measures. Teachers were not accurate in their predictions of either group's actual reading scores on any measure, and were most inaccurate in their predictions of word callers' oral reading scores. Implications for addressing resistance to using CBM as a measure of general reading achievement are discussed.
More than 20 years of research on curriculum-based measurement of reading (R-CBM) has demonstrated that counting the number of words read aloud correctly in 1 minute from standard passages is an excellent measure of general reading proficiency, including reading comprehension. From a traditional psychometric perspective, alternate-form reliabilities typically exceed .90, and 1-week to 1-month test-retest reliability estimates range from .82 to .97 (Good & Jefferson, 1998; Marston, 1989). Criterion-related validity studies typically show correlations of .60 to .80 between R-CBM scores and commercial reading achievement tests and other reading tests (Fuchs, Fuchs, & Maxwell, 1988; Good & Jefferson, 1998; Marston, 1989).
More sophisticated construct validity studies using confirmatory factor analyses have consistently demonstrated that R-CBM scores explain a significant proportion of the variance in reading comprehension construct scores (Petetit, 2000; Shinn, Good, Knutson, Tilly, & Collins, 1992). Additionally, R-CBM's strong relation to general reading proficiency has been cross-validated with English language learners (ELLs). For example, Baker and Good (1995) reported that correlations between R-CBM and criterion reading measures were comparable for ELL and English-only students. Similarly, Ramirez (2001) reported that in fifth-grade ELL students, approximately 80% of the variance in reading comprehension construct scores was explained by their English R-CBM reading scores.
Also important, R-CBM has been constructed to satisfy validity standards from a more contemporary perspective, such as the one proposed by Messick (1986). Of these standards, none is more important than consequential validity: test use should result in decisions that contribute positively to improved outcomes. R-CBM was designed to provide teachers with a simple and accurate way of monitoring the progress of their students for purposes of formative evaluation (Deno, 1985, 1986). Repeated studies have shown significant and positive effect sizes in students' achievement when R-CBM is used in formative evaluation (Fuchs & Fuchs, 1986; Lloyd, Forness, & Kavale, 1998). In their meta-analysis, for example, Fuchs and Fuchs (1986) reported effect sizes of .70. This effect size translates into a student who would be expected to perform at the 50th percentile when progress is not evaluated formatively instead performing at the 76th percentile when this approach is used. …
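The effect-size-to-percentile conversion above follows directly from assuming normally distributed scores: a student at the mean (50th percentile) who gains d standard deviations moves to the percentile given by the standard normal cumulative distribution evaluated at d. A minimal check in Python (the function name here is ours, not from the study):

```python
from statistics import NormalDist

def percentile_after_gain(effect_size: float) -> float:
    """Percentile rank of a formerly-average student after a gain of
    `effect_size` standard deviations, under a normal-scores model."""
    return NormalDist().cdf(effect_size) * 100

# The .70 effect size reported by Fuchs and Fuchs (1986):
print(round(percentile_after_gain(0.70)))  # → 76
```

The standard normal CDF at 0.70 is about .758, so rounding yields the 76th percentile cited in the text.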