A Growth Curve Analysis of Literacy Performance among Second-Grade, Spanish-Speaking, English-Language Learners

Article excerpt

The English learner (EL) population has risen 51% since 1998, according to data published by the National Clearinghouse for English Language Acquisition (2011). During the 2008-2009 school year, more than 5.3 million students, or 10.8% of students enrolled in the nation's public schools (prekindergarten through Grade 12), were classified as limited English proficient (National Clearinghouse for English Language Acquisition, 2011). In California, ELs account for approximately 33% of the total number of students enrolled in public schools, and the native language of approximately 85% of these students is Spanish (California Department of Education, 2011).

Many culturally and linguistically diverse students fail to meet academic expectations. For example, the National Assessment of Educational Progress (1999) reported that despite gains in reading achievement since the 1970s, the reading growth of ELs has lagged behind that of their native English-speaking (NS) peers. ELs continue to fall behind their NS peers in reading (National Assessment of Educational Progress, 2011). Although ELs made progress in reading achievement in 2002, a review of recent literacy data showed a lack of growth from 2002 to 2011 (National Assessment of Educational Progress, 2011). Thirty-one percent of fourth-grade ELs performed at "Basic" levels of overall reading achievement, 7% performed at "Proficient" levels, 1% at "Advanced" levels, and the greatest percentage (69%) of ELs performed at "Below Basic" levels of reading achievement (National Assessment of Educational Progress, 2011). These data are troublesome and indicate that ELs are not accessing core literacy instruction in the way their NS peers do, which affects their ability to improve their literacy skills and, ultimately, to show progress on core educational standards.

Klingner, Artiles, and Barletta (2006) suggested that inaccuracies in reporting procedures and in the operational definitions of ELs in federal and state laws may actually serve to mask or minimize an even larger achievement gap than what is currently known. Moreover, ELs are a heterogeneous group and are not defined consistently across states or within the literature, which yields different views about how to classify EL students and measure their progress (Hakuta & Beatty, 2000; Rhodes, Ochoa, & Ortiz, 2005). In the following section, we discuss the implications of the variability in how EL is defined and in how literacy skills are assessed for this population.

ELs Defined

The diversity of ELs must be considered when examining learning outcomes because EL subgroups may differ with respect to the type of instruction they receive and the language and literacy skills they exhibit. This information can help guide educators' expectations for subgroups of students and inform decisions about when to consider a student at risk. In addition, examining within-EL distinctions may help minimize the overgeneralized assumptions commonly made about EL students (Artiles, Rueda, Salazar, & Higareda, 2005).

Students' level of English language proficiency (ELP) may contribute to different learning outcomes. For example, Kiefer (2008) conducted a longitudinal analysis of reading achievement outcomes among NS students, students who entered kindergarten with fluent English proficiency (FEP), and students with limited English proficiency. Results suggested that rates of growth among students with FEP were similar to those of NS students through Grade 5, whereas slopes of growth among students with limited proficiency diverged from those of NS students. Similarly, growth curve modeling comparisons of oral reading fluency performance among Latino NS, EL, and EL-exited students indicated similar growth for NS and EL-exited students in third grade, with fluency growth for Latino ELs significantly lower than that of the other two groups through the third grade (Al Otaiba et al., 2009). In addition to ELP, the amount of time that an EL student's family has lived in the United States has also been shown to affect reading achievement among ELs (Betts, Bolt, Decker, Muyskens, & Marston, 2009). Few studies have examined within-group distinctions among ELs. That is, when making distinctions about EL students' response or lack of response to general education instruction, the research is lacking to determine whether the heterogeneity of this group should be taken into account when making resource allocation and pedagogical decisions. It is important to understand how ELs of differing English proficiency perform on critical reading predictors such as phonemic awareness, phonics, and reading fluency, because such data can be used to structure interventions. The research is clear that providing early intervention support in these fundamental skill areas yields improved reading outcomes (August & Shanahan, 2006; National Reading Panel, 2000).

Literacy Screening With ELs

There is an emerging research base related to the use of problem-solving and response to intervention screening procedures with ELs (Baker, Plascencia-Peinado, & Lezcano-Lytle, 1998). Vanderwood and Nam (2007) argued that the assessment and intervention research with ELs suggests similar methods can be employed with ELs within a response to intervention framework. Previous research found moderate to large correlations between nonsense word fluency (NWF) measures and third-grade reading outcomes, with NWF accounting for significant variation in third-grade oral reading fluency (32%), third-grade maze (9%), and third-grade reading achievement (8%; Vanderwood, Linklater, & Healy, 2008). Thus, NWF performance at the end of first grade could play a substantial role in determining who will be at risk at the end of third grade, but the measure's low sensitivity at the end of first grade might underestimate the number of at-risk ELs, as some students may become at risk at a later time (Vanderwood et al., 2008).

The fact that NWF predicted the third-grade reading status of ELs provides useful information for the development of theoretical models and practical protocols. For example, NWF, along with other sources of data, can be used by school districts to provide proactive services to EL students who may be at risk of not meeting standards on state accountability tests. However, research must continue to assess these measures against specific reading criterion measures, as generalizations typically cannot be made about the predictive nature of curriculum-based measurement (CBM) from one state accountability test to the next (Jenkins, Hudson, & Johnson, 2007).

Other studies have addressed the value and sensitivity of screening tools in English and Spanish. Townsend and Collins (2008) found that for young first-grade ELs, English early literacy screening measures (i.e., letter word fluency, decoding, and word recognition) predicted English reading outcomes better than alternate-form native language measures predicted reading in the native language, especially if the measures matched the classroom instruction. Progress monitoring measures have also been used to address literacy growth in English (Al Otaiba et al., 2009; Kiefer, 2008; Linklater, O'Connor, & Palardy, 2009) and oral reading fluency growth in English and Spanish (Dominguez de Ramirez & Shapiro, 2006) for students instructed in English. Growth models of oral reading fluency progress monitoring data have shown that EL students made slower gains in oral reading fluency than NS students, but that the English measures were more sensitive in capturing gains in oral reading fluency across grades than were alternate forms in Spanish (Dominguez de Ramirez & Shapiro, 2006).

Literacy screening measures in English also differentiated between good and poor readers, but not necessarily between language statuses (Townsend & Collins, 2008). Similarly, when the variance associated with language status was removed, early literacy measures were sensitive to changes in level of performance (Vanderwood et al., 2008). A synthesis of research released by the Institute of Education Sciences concluded that current screening assessments and benchmark levels established for NS students could be used to screen and monitor ELs (Gersten et al., 2007).

The recommendations made by Gersten and colleagues (2007) were made without considering the heterogeneity of ELP levels, and the report indicated that early literacy screeners may overrepresent the number of ELs who are encountering challenges with literacy skills (i.e., higher rates of false positives). The recommendation to use current screening tools and benchmark standards for ELs could assist school systems practically because it suggests that current English tools provide equivalent information for ELs. However, the recommendation has the potential to increase the burden of providing more resources to ELs who may not have academic problems, or to EL students whose level of ELP may be the major factor, which would point to a problem within the English language development component of core instruction. Thus, without disaggregated data that consider the heterogeneity of EL students, educational agencies must assume that English screening measures are sensitive and provide equivalent information for students at all ELP bands.

In addition to the effect of ELP level and issues of test equivalence, the predictive accuracy of screening measures may be affected by floor effects that are not evident when screening data are aggregated. For instance, Catts, Petscher, Schatschneider, Bridges, and Mendoza (2009) reported that floor effects reduced the predictive accuracy of screening tools to a greater extent the earlier the tool was administered (i.e., a greater effect on a measure's predictive validity in kindergarten than in third grade). Hosp, Hosp, and Dole (2011) found bias in predictive validity when fluency measures of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Good & Kaminski, 2002) were examined in relation to a state criterion-referenced test.


The previously described studies provide important information related to the use of English screening tools with EL students. However, there are challenges in extending the findings of these studies in that the EL participants were coded as a homogeneous group or differences in English language proficiency were controlled. None of the studies disaggregated the data by level of language proficiency to account for potential differences in the sensitivity of the measures. As a result, it could not be determined whether the findings depended on English language level. Further exploration using literacy screeners is warranted, as research has documented that some screening tools might not be as sensitive for ELs with lower levels of ELP (Vanderwood et al., 2008).

The current study extends research by Kiefer (2008) and Al Otaiba and colleagues (2009), but it is unique in that the goal was to address within-group level and growth differences solely among Latino ELs based on level of English language proficiency (i.e., Beginning, Early Intermediate, Intermediate, Early Advanced, and Advanced). Kiefer (2008) conducted growth comparisons among NS youth, ELs with limited oral language skills, and language minority youth with proficient oral language skills. Al Otaiba and colleagues (2009) contrasted the rates of oral reading fluency among Latino NS, EL, and EL-exited students. Both studies provide significant information about how native speakers, ELs, and EL-exited students change with respect to their trajectories of literacy development and acceleration of literacy skills as they move into higher grade levels. The research at this point is too limited to make generalizations across all levels of ELP.

The literature pertaining to rates of growth among ELs needs to address intragroup distinctions. Growth curve models conducted with native English speakers have found that when CBM data are disaggregated by performance decile (i.e., 10th percentile, 90th percentile, and so on), growth rates vary based on initial performance (Silberglitt & Hintze, 2007). These findings suggest that relying on aggregated growth curves may give a misleading impression of students' progress toward targeted goals. Response to intervention models seek to differentiate between students who respond to effective interventions and students for whom we have yet to find effective interventions, and further data regarding ELs' growth during universal instruction (Tier 1) are necessary for making this determination. Given that relatively little is known about the effect of ELP level on students' response to universal instruction at Tier 1, it is important to explore this line of research. These data would provide important information for understanding when modifications to reading interventions, or changes in their intensity, are necessary for ELs with different English proficiency levels. Hence, the purpose of this study is to address the heterogeneity among levels of ELP and how this diversity affects the quality of English literacy assessment measures. The following research question guided the study: To what extent does ELP level affect reading level and growth on measures of early literacy?


Method
School administrators in a large southern California school district were sent an electronic letter asking whether they would be willing to participate in a longitudinal study addressing literacy skills among EL students. A total of 10 school administrators gave permission for the research to be conducted at their school sites. Six of the 10 schools were randomly selected to participate in the study. A stratified (by school and English proficiency level) random sample was drawn to identify potential second-grade participants. Students and parents were informed that their participation would help the researchers assess the quality of screening and progress monitoring tools and understand how EL students' reading outcomes change over the course of the academic year. A consent form asked parents to give the researchers permission to access language and achievement test scores as well as to administer the assessment tools.

Once the groups of students were established, measures of phonological awareness (PA), alphabetic principle, and oral reading fluency were given to the students in the fall, winter, and spring of second grade. This study was part of a larger study that also examined student performance in their first language and addressed additional language related variables. The school district administered the California English Language Development Test (CELDT; California Department of Education, 2006) in the fall to determine the level of English proficiency of ELs for program planning.


Participants
Data for 260 students (136 female) with a mean age of 7.39 years (SD = 0.40) were used for the analyses. When we began to collect the first wave of data, a total of 35 students at the various school sites were no longer available to participate; these 35 cases were removed from the data set because no data were collected for them. Forty-nine students were at the Beginning level of English language proficiency, 90 at the Early Intermediate level, 81 at the Intermediate level, 30 at the Early Advanced level, and 10 at the Advanced level. The total EL student enrollment at each school was at or above 70% (range: 70%-83%). Most students at each site qualified for free or reduced-price lunch (range: 95%-99%). The total number of students at each school site ranged from 400 to 1,300. All teachers held a California Teaching Credential. Twenty of the 31 teachers were female. One teacher held a doctoral degree (EdD), 25 held a master's degree, and 6 held a bachelor's degree. All of the teachers had some form of English language development training. Years of teaching experience ranged from 2 to 13. All teachers had taught consistently at the elementary school level.

Universal Instruction

Literacy instruction was provided in 2-hr blocks. Instruction followed a "Units of Inquiry" approach (San Diego Unified School District, n.d.), wherein children were provided with a developmental progression of units of study to develop a growing culture of inquiry. Each teacher followed a Unit of Implementation Map, which outlined the literacy (reading and written language) instruction, instructional materials, and approaches. Systematic English language development for second-grade EL learners was composed of units of instruction that were introduced based on the needs of the students. The district provided a rubric for teachers to determine which strategies to introduce for students with specific ELP skills. The curriculum used at the sites, with push-in and pull-out support, focused on word analysis, fluency, systematic vocabulary development, reading comprehension, literary response strategies for grade-level text, writing strategies, writing applications, and listening and speaking strategies.


Measures
CELDT. The CELDT is a test of English proficiency administered to any student in California whose parent reports a home language other than English on the Home Language Survey. The CELDT is administered annually to all EL students. The technical manual reports reliability coefficients ranging from .85 to .89 across grades and subtests. Students in second grade are tested in the areas of listening, speaking, reading, and written language. Students' ELP level is based on their overall score, a composite derived from the four subtests. The CELDT identifies five proficiency levels: Beginning, Early Intermediate, Intermediate, Early Advanced, and Advanced. The California Department of Education (CDE) reports the scaled-score range of each proficiency band as follows: Beginning, 215 to 396; Early Intermediate, 397 to 446; Intermediate, 447 to 495; Early Advanced, 496 to 539; and Advanced, 540 to 635.
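The band assignment described above amounts to a simple range lookup. The following sketch encodes the CDE-reported cut points listed above; the function name is hypothetical and is offered only to make the classification rule concrete.

```python
# Hypothetical helper mapping a CELDT overall scaled score to its ELP band,
# using the CDE-reported cut points quoted in the text above.
def celdt_band(overall: int) -> str:
    bands = [
        (215, 396, "Beginning"),
        (397, 446, "Early Intermediate"),
        (447, 495, "Intermediate"),
        (496, 539, "Early Advanced"),
        (540, 635, "Advanced"),
    ]
    for low, high, label in bands:
        if low <= overall <= high:
            return label
    raise ValueError(f"score {overall} is outside the reported CELDT range")

print(celdt_band(460))  # Intermediate
```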

The CELDT is not often used in research. Thus, we correlated the CELDT data with data from the Woodcock Munoz Language Survey--Revised (WMLS-R; Woodcock, Munoz-Sandoval, Ruef, & Alvarado, 2005) as part of our larger research study. The results indicated that the CELDT correlated moderately and positively with the WMLS-R (r = .50, p < .01) given in the fall of second grade. The CELDT data were provided by the school district for each child and were used as a grouping variable.

DIBELS Oral Reading Fluency (DORF--version 6.1). The DORF is a standardized, individually administered test of accuracy and fluency with connected text (Good & Kaminski, 2002). Each student was assessed with three grade-level DORF progress monitoring passages during each testing period in the fall, winter, and spring, and the median number of words read correctly and median number of errors were used as the performance scores for each wave. The technical manual reported high concurrent validity with the Test of Oral Reading Fluency (Children's Educational Services, 1987), with coefficients for each passage ranging from .91 to .96, and high alternate-form reliability, with coefficients ranging from .89 to .96. Benchmark cut scores are provided as a means of determining the instructional needs of the student.

DIBELS Phoneme Segmentation Fluency (PSF; Good & Kaminski, 2002). PSF is a standardized, individually administered test of phonological awareness. The PSF measure assesses a student's ability to segment three- and four-phoneme words into their individual phonemes fluently. The PSF measure has been found to be a good predictor of later reading achievement and is intended for use with students from the winter of kindergarten to the middle of first grade (Kaminski & Good, 1996). The 2-week, alternate-form reliability for the PSF measure was .88 and the 1-month, alternate-form reliability was .79 (Kaminski & Good, 1996).

DIBELS Nonsense Word Fluency (NWF; Good & Kaminski, 2002). NWF is a standardized, individually administered test of the alphabetic principle, including letter-sound correspondence and the ability to blend letters into words in which letters represent their most common sounds. The number of letter-sounds correctly read per minute was used as the unit of analysis. The 1-month, alternate-form reliability for NWF was .83. The predictive validity of DIBELS NWF in January of first grade with CBM ORF in first grade was 0.82, with CBM ORF in the spring of first grade was 0.60, and with the Woodcock-Johnson Psycho-Educational Battery Total Reading Cluster was 0.67 (Good et al., 2004).

Assessment training procedures. The data were collected by the principal investigator and trained graduate students from a local school psychology training program. Each research associate was trained by the principal investigator; the trainings involved reviewing the procedures for administering each measure, modeling, observation, and guided practice. In addition, monthly meetings were held to review progress and assessment procedures. The principal investigator observed each research associate give and score each measure to ensure the reliability of assessment and scoring procedures. Written directions outlining the procedures were provided to each research associate.

Interrater Reliability

Interrater reliability for all measures, based on a sample of 10% of probes, was within appropriate limits. Interrater reliability was computed by dividing the number of agreements by the number of agreements plus disagreements. Interrater reliability scores across the three waves of data collection were as follows: PSF (95%, 98%, 97%), NWF (94%, 98%, 98%), and ORF (99%, 98%, 97%). Interrater reliability was not available for the CELDT because those data were collected by the school district and submitted to the principal investigator.
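The agreement formula above can be sketched directly; the function name and example counts are illustrative, not the study's raw tallies.

```python
# Point-by-point interrater agreement as described in the text:
# agreements / (agreements + disagreements), expressed as a percentage.
def interrater_agreement(agreements: int, disagreements: int) -> float:
    return 100 * agreements / (agreements + disagreements)

# Illustrative counts only: 95 agreements and 5 disagreements -> 95%.
print(interrater_agreement(95, 5))  # 95.0
```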


Data Analysis
Growth curve modeling with HLM 6.08 software (Raudenbush, Bryk, Cheong, & Congdon, 2004) was conducted to address the extent to which English proficiency affected reading level and growth on measures of PA, letter-sound correspondence, and oral reading fluency in English. For the purpose of this study, a linear growth model was assumed, as is recommended when there are fewer than four waves of data collection (Raudenbush & Bryk, 2002). Both unconditional and conditional growth models were analyzed. The unconditional model can be defined as the simplest student-level model. The Level 1 model is the linear model Y_ij = π_0i + π_1i(time_ij) + e_ij, with intercept (π_0i), slope (π_1i), time, and random error (e_ij). The Level 2 equations of the unconditional model modeled only the Level 1 intercept (π_0i = β_00 + r_0i) and slope (π_1i = β_10 + r_1i). These estimates provide the overall mean and overall slope of the entire sample. The conditional model differs from the unconditional model in that indicators are included in the Level 2 analyses to account for additional variance.
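The unconditional two-level model above can be illustrated on simulated data. This is a minimal sketch, not the study's analysis (which used HLM 6.08): the variable names and simulation parameters are hypothetical, loosely patterned on the ORF estimates reported later, and the fit uses the statsmodels mixed-model API rather than HLM.

```python
# Minimal sketch of the unconditional linear growth model
#   Y_ij = pi_0i + pi_1i * time_ij + e_ij
# with random intercepts and slopes across students, fit with statsmodels
# MixedLM on simulated data. All names and parameters are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_students, waves = 260, 3                    # fall, winter, spring -> time = 0, 1, 2
student = np.repeat(np.arange(n_students), waves)
time = np.tile(np.arange(waves), n_students)

# Simulate student-level intercepts and slopes (hypothetical parameters)
intercepts = rng.normal(41, 26, n_students)   # overall mean intercept ~ 41
slopes = rng.normal(9.5, 4.6, n_students)     # overall mean slope ~ 9.5 per wave
score = (intercepts[student] + slopes[student] * time
         + rng.normal(0, 5, n_students * waves))   # Level 1 error e_ij

df = pd.DataFrame({"student": student, "time": time, "score": score})

# Random intercept and random slope for time, grouped by student
model = smf.mixedlm("score ~ time", df, groups=df["student"], re_formula="~time")
result = model.fit()
print(result.fe_params)   # fixed effects: overall intercept (beta_00) and slope (beta_10)
```

The fixed-effect estimates recover the simulated grand mean and average growth rate, mirroring what the unconditional model reports for the full sample.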


Results
Table 1 includes the means and standard deviations for each measure by ELP level during each wave of data collection (i.e., fall, winter, spring). Skewness and kurtosis values for each measure by ELP level (see Table 2) were assessed to address assumptions of normality as well as the potential floor effects that have been noted with DIBELS measures (Catts et al., 2009). Kurtosis and skewness values for all ELP groups showed no signs of floor effects for PSF throughout second grade. For students with Beginning and Early Intermediate ELP levels, floor effects for NWF and ORF were visible at the beginning and middle of second grade, with the effect lessening at the end of second grade. For students with Intermediate ELP levels, floor effects were visible for NWF at the beginning of second grade but not for ORF. Floor effects for NWF and ORF were not visible for students with Early Advanced or Advanced ELP levels. The results were consistent with Catts et al. (2009): initial floor effects for NWF and ORF appeared at the beginning of the school year for students with weaker English language skills, with the effect lessening toward the end of the school year.

Unconditional Models

The unconditional model provided empirical evidence for determining proper specification of the individual growth equation and baseline statistics for evaluating Level 2 models (Raudenbush & Bryk, 2002). The unconditional model provided the average level and slope parameters with time as the only predictor. Full maximum likelihood methods were used to assess model fit. Students' scores were centered at fall (Time 1) to assess level and rate of growth from fall to spring of second grade. Therefore, the intercept represents the average student score on each screening measure at the beginning of second grade. Review of the intercept and slope coefficients presented in Table 3 indicates that scores for each measure increased as the level of ELP increased from Beginning to Advanced, with the exception of the NWF intercepts for the Early Advanced and Advanced groups.

Unconditional models (i.e., random coefficient regression models) for the PSF, NWF, and ORF indicators were estimated to determine whether the Level 2 variance components were significant enough to justify specifying a multilevel model. The analysis found that the random effects at Level 1 (σ²) and at Level 2 (τ_00, τ_11) were significant, indicating there was significant variance to explain (see Table 4). Thus, conditional models were appropriate. The analyses of the unconditional models yielded the average level and growth estimates for all second-grade EL learners. For example, the estimated average intercept (i.e., grand mean), β_00, and average growth rate, β_10, for ORF were 41.10 words and 9.51 words per 10 weeks, respectively. Overall, this suggests that the estimated initial status for all students, on average, was 41 words read correctly per minute, with an average rate of change of 9.51 words every 10 weeks (or 0.95 words per week). The square root of the variance estimate for the slope, √τ_11, provides an estimated standard deviation (Raudenbush & Bryk, 2002). Thus, a student one standard deviation above the mean would be expected to have a rate of growth of 9.51 + √21.11 = 14.10 words every 10 weeks, or 1.41 words per week. Review of Table 3 indicates that both the intercept and growth rate coefficients have significant t statistics, which suggests that both parameters are important for understanding the overall growth trajectory (Raudenbush & Bryk, 2002). Unconditional models for PSF and NWF yielded similar findings for all fixed effects (see Table 4).
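The arithmetic behind the one-standard-deviation growth rate can be checked directly from the reported estimates:

```python
# Growth rate for a student one SD above the mean slope, from the
# reported ORF estimates: beta_10 + sqrt(tau_11) words per 10 weeks.
import math

beta_10 = 9.51    # average ORF growth per 10 weeks
tau_11 = 21.11    # variance of the slope across students

high = beta_10 + math.sqrt(tau_11)
print(round(high, 2), round(high / 10, 2))  # 14.1 1.41
```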

The null hypothesis that means and slopes did not vary across students was tested (H_0: τ_00 = 0; H_0: τ_11 = 0). The null hypothesis was rejected for the intercept (τ_00 = 691.02, df = 259, χ² = 1759.14, p < .001) and the slope (τ_11 = 21.11, df = 259, χ² = 473.72, p < .001), suggesting that second-grade EL learners differ in the number of words they can read (ORF initial status) and vary in their ORF learning rates. The null hypothesis that means and slopes were similar was also rejected for PSF and NWF (see Table 4).

Conditional Models

Conditional growth models included predictor variables of language status (i.e., dummy-coded variables for ELP level). Level 3 analyses were not conducted to assess the nested structure of measures within students within classrooms (or teachers) because the limited number of school sites would not have offered sufficient power to find reliable effects (Snijders & Bosker, 1993).

Differences in ORF initial status and growth. Because the unconditional model indicated significant within-level variation that needed further exploration, conditional models (i.e., intercepts and slopes as outcomes models) were introduced to account for language status variables. For these analyses, ELs with Beginning ELP were coded as the reference group. Equation (1) presents the Conditional Level 2 model, which includes the language predictors in both the intercept and slope equations. The Beginning ELP group was the comparison group for the following example:

π_0i = β_00 + β_01(Early Intermediate_i) + β_02(Intermediate_i) + β_03(Early Advanced_i) + β_04(Advanced_i) + r_0i
π_1i = β_10 + β_11(Early Intermediate_i) + β_12(Intermediate_i) + β_13(Early Advanced_i) + β_14(Advanced_i) + r_1i (1)


In this analysis, the intercept (β_00) and slope (β_10) parameter coefficients represent Beginning ELs. The t ratios for each group were significant at p < .005. For ORF, every group on average started reading words at a higher level than ELs at a Beginning level of ELP (see Figure 1). However, a different pattern was observed when comparing rates of growth for each group of ELs on the ORF indicator: only ELs with Advanced levels of ELP differed in their rate of growth (4.78 words greater than Beginning ELs) throughout second grade (see Table 5). Successive conditional Level 2 models were developed in which each ELP group was alternately specified as the reference group. The parameter coefficients presented in the table are the differences between each group's parameter estimates and those of the reference group. A Bonferroni-type adjustment was applied to avoid an inflated Type I error rate from multiple significance tests: the family-wise error rate (.05) was divided by the number of difference tests conducted in Table 5 (10), which adjusted the alpha level to .005 (Tabachnick & Fidell, 2007). Table 5 indicates that students from every ELP group differed in their ability to read words on the ORF measure, with one exception: students with Early Advanced and Advanced levels of ELP did not differ in their initial status for reading words (ORF) in the fall of second grade. Thus, ELP level was associated with greater initial status at the beginning of second grade, such that students with Advanced ELP were able to read more words than students with lower ELP levels (see Figure 1).
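The Bonferroni-type adjustment above follows from dividing the family-wise alpha by the number of pairwise comparisons among the five ELP groups:

```python
# Bonferroni-type correction used for the pairwise ELP difference tests:
# family-wise alpha divided by the number of tests. With 5 ELP groups,
# all pairwise comparisons give C(5, 2) = 10 tests.
from math import comb

alpha_family = 0.05
n_tests = comb(5, 2)                 # 10 pairwise comparisons
alpha_per_test = alpha_family / n_tests
print(n_tests, round(alpha_per_test, 3))  # 10 0.005
```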

Differences in NWF initial status and growth. Tests were also completed to assess patterns of initial status and rates of growth on the NWF indicator. As mentioned above, successive models were conducted, alternating the reference group, to determine whether intercepts and slopes differed across each group as compared to the reference group. Table 5 lists the coefficients retrieved from these analyses. NWF initial status coefficients among Beginning, Early Intermediate, and Intermediate EL students were not significantly different, suggesting that initial status levels on the NWF indicator were very similar among the three groups. Initial status coefficients for Early Advanced and Advanced ELs did not meet statistical significance at the .005 level. Practically, however, students with Advanced and Early Advanced ELP correctly identified 28.87 and 22.76 more letter-sounds, respectively, than did Beginning ELs (see Table 5), as well as 22.34 and 16.23 more letter-sounds, respectively, than did Early Intermediate ELs. In addition, none of the growth parameters on the NWF indicator differed significantly at the .005 level, although the difference in growth rate between Advanced and Beginning ELs was significant at the less conservative .01 alpha level. These results suggest that students with higher English proficiency (Advanced and Early Advanced) started second grade able to recognize more letter-sounds than students with lower English proficiency (Beginning, Early Intermediate, and Intermediate). With respect to slopes, only students with the most advanced level of ELP displayed visibly steeper rates of growth across the year than the other ELP groups (see Figure 2).


Differences in PSF Initial Status and Growth. Table 5 provides initial status and growth coefficients and results of significance tests across groups for PSF. Significance tests among the ELP groups did not reveal any differences in initial status. Students with Early Advanced and Advanced ELP on average segmented 6.84 and 8.40 more sounds correctly than ELs with Beginning ELP, respectively. However, differences in rate of growth on PSF were not statistically significant among any of the ELP groups at the .005 level. Figure 3 shows that the rate of growth for students with Beginning ELP began to diminish relative to that of students with Advanced ELP.


Our study sought to examine whether there are group differences on early literacy assessment tools for a sample of second-grade ELs with varying degrees of ELP (i.e., students with Beginning, Early Intermediate, Intermediate, Early Advanced, and Advanced levels of proficiency). This study was undertaken to examine the overgeneralizations of homogeneity commonly made about ELs (Artiles et al., 2005; Vanderwood & Nam, 2008). Furthermore, the assumption that all assessments work for all cultures ignores the issue of assessment equivalence, or "the degree to which test scores can be used to make comparable inferences for different examinees" (American Educational Research Association, American Psychological Association, & National Council for Measurement in Education, 1999, p. 92). At this point, educators are not provided a different set of benchmarks or expectations for ELs on the most frequently used reading screening measures, yet it may be unrealistic to expect growth on reading assessment tools similar to that of NS students. We gave the DIBELS second-grade ORF measure and the DIBELS early literacy measures typically given to first-graders to our second-grade EL sample to determine whether information about ELs' early literacy skills could help clarify the students' reading performance. Our results suggest it may be necessary to take a student's English proficiency level into account when administering and interpreting DIBELS screening results.

Level and Growth Rate Differences by ELP on ORF

English language proficiency level was significantly related to differences in ORF initial status. All ELP groups except Early Advanced and Advanced differed significantly from one another in the number of words they were able to read aloud (i.e., initial status) at the beginning of second grade. In other words, there was a clear relationship between ELP level, as measured by our state's English language proficiency assessment (i.e., the CELDT), and reading performance at the beginning of second grade.


Besides the differences in level on ORF, significant differences were also found in growth rates between students with Beginning ELP and students with Advanced ELP (see Table 5). The time between testing periods was 10 weeks, so estimated weekly growth rates were computed for each measure for each ELP group. Weekly growth rates for ORF were approximately 0.82, 0.95, 0.97, 1.1, and 1.3 words per week for Beginning, Early Intermediate, Intermediate, Early Advanced, and Advanced levels of ELP, respectively. Dominguez de Ramirez and Shapiro (2007) reported a growth rate of 0.86 words per week for second-grade general education students and 0.75 words per week for bilingual students using ORF measures in English. In contrast, Deno, Fuchs, Marston, and Shin (2001) reported a growth rate of 1.66 words per week for second-grade English speakers. The weekly growth rates of our students with Early Advanced and Advanced ELP were closer to the average reading fluency growth rates for ELs reported by Al Otaiba et al. (2009), 1.19 words per week, and Betts et al. (2009), 1 word per week. Based on our sample of second-graders, it appears that ELs with Early Advanced and Advanced levels of ELP read at a level similar to English-proficient and native English-speaking students. These results are consistent with a recent study by Kiefer (2008), which found that the rates of literacy growth for kindergarten ELs with fluent oral English proficiency converged with those of native English speakers, whereas the rates for ELs with lower ELP skills diverged from them.
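The weekly rates above follow directly from the per-interval growth coefficients in Table 3. A minimal sketch (Python, coefficients copied from Table 3; dividing by the 10-week testing interval reproduces the approximate weekly rates quoted above):

```python
# Estimated weekly ORF growth rates from the per-interval (10-week)
# growth coefficients reported in Table 3.
interval_weeks = 10
orf_growth_per_interval = {       # words gained per testing interval
    "Beginning": 8.17,
    "Early Intermediate": 9.45,
    "Intermediate": 9.65,
    "Early Advanced": 10.38,
    "Advanced": 12.95,
}
weekly = {group: rate / interval_weeks
          for group, rate in orf_growth_per_interval.items()}
for group, rate in weekly.items():
    print(f"{group}: {rate:.2f} words/week")
```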


Initial Status and Growth Rates for Phonological Awareness and Alphabetic Principle

Previous research has provided evidence to support growth models for initial sound fluency, phoneme segmentation fluency, and combined phoneme segmentation task measures in kindergarten (Linklater et al., 2009). Similarly, the current study provides evidence for linear growth models for PA and letter-sound correspondence for second-grade students. Growth rates for phonological awareness skills were 0.38, 0.47, 0.56, 0.47, and 0.64 phonemes per week for the Beginning, Early Intermediate, Intermediate, Early Advanced, and Advanced groups, respectively. The rates of growth for PA skills in second grade were similar for students with moderate to advanced levels of English proficiency (the Intermediate, Early Advanced, and Advanced groups), with Beginning-level students growing at a noticeably lower rate. This may suggest that ELs at the Beginning ELP level could benefit from more phonological awareness support than students at other levels of ELP, or potentially from vocabulary-focused interventions to improve word recognition and phonological awareness. That students at other levels of ELP did not differ significantly in phonological awareness may reflect the fact that, for older students, the importance of PA decreases as reading skill continues to develop (Verhoeven, 1990).

With respect to letter-sound correspondence skills (NWF), ELs with Early Advanced and Advanced levels of ELP were able to read more letter-sounds in 1 min than ELs at Beginning and Early Intermediate ELP levels. This suggests that ELs with Advanced ELP began second grade with more strongly developed word recognition skills and potentially greater automaticity with text. Students with the most advanced ELP also had steeper growth rates for letter-sound correspondence, which may be a product of more established word recognition skills. However, further research is needed because our sample size for the Advanced ELP group was small.


Our study demonstrates a strong relationship between ELP level and literacy skills as measured by DIBELS: students with lower English language skills also performed lower on literacy skill measures. It is impossible with the data from this study to determine the direction of the relationship or to make any inferences about causality, but it is clear that a relationship exists for the students in this sample. Our results also suggest it may be necessary to monitor second-grade students with lower levels of ELP with measures of phonemic awareness to understand whether growth is occurring and to identify possible areas of intervention. At this stage, it is not clear whether an intervention focused on phonemic awareness would actually help second-grade students at the Beginning level of ELP, but it is clear their skills in this area are significantly different from those of students at more advanced levels of language proficiency.

A second conclusion from our data is that students at the lowest level of English proficiency grow at a different rate on these measures than students at more advanced levels. Floor effects have been found with early literacy measures (e.g., Catts et al., 2009), yet these effects typically diminish as students move into first and second grade. A review of our data does not indicate a floor effect (e.g., positive skewness) for the whole group, nor for the Intermediate, Early Advanced, and Advanced ELP levels. We did, however, note floor effects at the beginning of second grade for students with the lowest levels of ELP.
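Positive skewness, the floor-effect indicator mentioned above and reported in Table 2, can be computed directly. A minimal sketch with hypothetical scores (the adjusted Fisher-Pearson formula used here is an assumption; the exact formula behind Table 2 is not stated in the article):

```python
# Sample skewness as a floor-effect check: scores piling up at the bottom
# of a scale produce a marked positive skew (cf. Table 2).
def skewness(xs):
    """Adjusted Fisher-Pearson sample skewness."""
    n = len(xs)
    mean = sum(xs) / n
    s = (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5  # sample SD
    return n / ((n - 1) * (n - 2)) * sum(((x - mean) / s) ** 3 for x in xs)

floor_heavy = [5, 6, 6, 7, 8, 10, 14, 20, 28, 40]   # hypothetical fall scores
print(skewness(floor_heavy) > 1)   # True: strong positive skew
```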

These data also indicate that the substantial differences in ORF scores related to students' ELP levels suggest that making inferences about ELs as a single group may be a mistake. This point is critical for those who conduct research with ELs, given that a significant amount of ORF research treats ELs as a homogeneous group. Without knowing and reporting the composition of ELs' language proficiency skills, research conclusions could change from one study to another simply because of differences in ELP.

Implications for Screening Within a Multitiered Response to Intervention Framework

One of the most significant findings related to using DIBELS to support a multitiered system of support is that ELs with the lowest level of ELP will show growth on the measures over an academic year. At least one national report indicates there has been resistance to including ELs in screening assessment because of the belief that the students' lack of English proficiency would prevent measurement of English reading skills (Gersten et al., 2007). Although students with a low level of ELP demonstrated growth in reading skills throughout the academic year, their growth was substantially different from that of EL students with ELP levels close to proficiency. This difference in growth makes it critical for teams to understand the composition of the EL population they assessed when interpreting screening results. These data suggest that if the group is at the higher end of the ELP continuum, it is appropriate to hold the same expectations for growth as for native English speakers. Conversely, if the group comprises a large percentage of students at the lower end of the ELP continuum, it may be unrealistic to expect growth similar to that of NS students. In this situation, conclusions about the quality of Tier I support should be tempered by an understanding of the effect of language proficiency on reading development.

When evaluating reading interventions for individual EL students, it is critical to understand the types of goals that are set and the conclusions to be made (Vanderwood & Nam, 2008). If one question is whether an EL student is growing at an average rate (e.g., the 50th percentile rate of improvement), it may be inappropriate to apply growth rates calculated for NS students to ELs at the lower end of the ELP continuum. It is also important to note that if the question of special education eligibility is addressed and the application of a dual-discrepancy model (Fuchs, Fuchs, McMaster, & Al Otaiba, 2003) is desired, it would be hard to argue that language proficiency is not affecting reading development. It therefore seems appropriate to be cautious when drawing conclusions about reading growth with CBM for a student with a limited level of ELP. In this case, we agree with Hosp et al. (2011) that teams should make decisions based on multiple forms of information to reduce bias associated with screening tools, and with Catts et al. (2009) that teams should review the limitations some screeners have related to floor effects.

Limitations and Future Directions

The most significant limitation of this study is that we were unable to compare our EL students' performance on DIBELS with that of native English speakers at the same schools. Instead, we had to compare our results with previous studies that most likely provided reading instruction with different materials and strategies. Related to this limitation was the small number of EL students with Advanced levels of ELP. That fewer ELs with Advanced levels of proficiency were available for comparison could point to the need to strengthen instruction to help ELs attain academic language, or perhaps the mechanism for determining who meets the advanced level of proficiency needs to be revisited. The data in this article indicate that ELs at different ELP levels start second grade with differing word reading abilities, such that those with the lowest level of English proficiency read fewer words and grow slightly more slowly than those with advanced levels of English proficiency. The CELDT requires students to meet standards for reading and writing to reach advanced levels of proficiency; thus, if students at lower ELP levels have weaknesses in word reading, they will have difficulty meeting these standards. This may point to a greater need for local educational agencies to use preventative, proactive supports to minimize the literacy gaps among groups of ELs and to provide interventions that help students gain mastery of literacy skills that predict reading and written language outcomes.

A second limitation relates to the use of a linear model to assess growth. The linear approach was chosen in line with recommendations for situations in which only three waves of data are available (Raudenbush & Bryk, 2002; Snijders & Bosker, 1993). To determine whether rates of acceleration differ across levels of ELP, one would want to collect more than three waves of data (see Al Otaiba et al., 2009; Kiefer, 2008). A third limitation is that the sample size was relatively small; follow-up studies with larger cohorts at each ELP level would be appropriate.
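The Level-1 piece of this linear growth specification is easy to illustrate. A minimal sketch with hypothetical scores (the HLM software's actual estimation pools information across students; this is just an ordinary least-squares line through one student's three waves, coded fall = 0, winter = 1, spring = 2). With only three waves, a quadratic term would fit any student's data exactly, which is why more waves are needed to study acceleration:

```python
# Ordinary least-squares line through one student's three benchmark scores
# (fall = 0, winter = 1, spring = 2): intercept = initial status,
# slope = growth per testing interval.
def linear_growth(scores, times=(0, 1, 2)):
    n = len(times)
    mean_t = sum(times) / n
    mean_y = sum(scores) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(times, scores))
             / sum((t - mean_t) ** 2 for t in times))
    return mean_y - slope * mean_t, slope

# hypothetical student: 30 words in fall, 38 in winter, 46 in spring
print(linear_growth([30, 38, 46]))   # (30.0, 8.0)
```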

A question these data raise is whether it may be necessary to develop a different set of cut scores for ELs with low levels of ELP. If the purpose of cut scores is to help teams determine students' level of risk of reading problems, using cut scores developed with primarily NS students seems problematic. It is quite possible that as the English language skills of an EL develop, reading skills will also develop at a rate that is more typical of NS students. It seems critically important to conduct a longitudinal study that examines the connection between English proficiency and reading growth as measured by screening and progress monitoring tools (e.g., DIBELS).


American Educational Research Association, American Psychological Association, & National Council for Measurement in Education (1999). Standards for educational and psychological testing. Washington, DC: Author.

Al Otaiba, S., Petscher, Y., Pappamihiel, N. E., Williams, R. S., Drylund, A. K., & Connor, C. (2009). Modeling oral reading fluency development in Latino students: A longitudinal study across second and third grade. Journal of Educational Psychology, 101, 315-329.

Artiles, A. J., Rueda, R., Salazar, J., & Higareda, I. (2005). Within-group diversity in minority disproportionate representation: English language learners in urban school districts. Exceptional Children, 71, 283-300.

August, D., & Shanahan, T. (Eds.). (2006). Developing literacy in second-language learners: Report of the national literacy panel on language-minority youth and children. Mahwah, NJ: Lawrence Erlbaum Associates.

Baker, D. L., Plasencia-Peinado, J., & Lezcanso-Lytle, V. (1998). The use of curriculum-based measurement with language-minority students. In M. Shinn (Ed.), Advanced applications of curriculum-based measurement (pp. 175-213). New York, NY: Guilford Press.

Betts, J., Bolt, S., Decker, D., Muyskens, P., & Marston, D. (2009). Examining the role of time and language type in reading development for English language learners. Journal of School Psychology, 47, 143-166.

California Department of Education. (2011). Dataquest. [Data File]. Sacramento, CA: California Department of Education. Retrieved September 14, 2011, from http://dq.cde.ca.gov/dataquest/

California Department of Education. (n.d.). California laws and codes. Retrieved September 27, 2007, from http://www.cde.ca.gov/re/lr/cl/

California Department of Education. (2006). California English Language Development Test. Retrieved September 28, 2007, from http://www.cde.ca.gov/ta/tg/el

Catts, H. W., Petscher, Y., Schatschneider, C., Bridges, M. S., & Mendoza, K. (2009). Floor effects associated with universal screening and their impact on the early identification of reading disabilities. Journal of Learning Disabilities, 42, 163-176.

Children's Educational Services. (1987). Test of oral reading fluency. Minneapolis, MN: Author.

Deno, S. L., Fuchs, L. S., Marston, D. B., & Shin, J. (2001). Using curriculum-based measurement to develop growth standards for students with learning disabilities. School Psychology Review, 30, 507-524.

Dominguez de Ramirez, R., & Shapiro, E. S. (2006). Curriculum-based measurement and the evaluation of reading skills of Spanish-speaking English language learners in bilingual education classrooms. School Psychology Review, 35, 356-369.

Dominguez de Ramirez, R., & Shapiro, E. S. (2007). Cross-language relationship between Spanish and English oral reading fluency among Spanish-speaking English language learners in bilingual education classrooms. Psychology in the Schools, 44, 795-806.

Fuchs, D., Fuchs, L. S., McMaster, K. N., & Al Otaiba, S. (2003). Identifying children at risk for reading failure. Curriculum-based measurement and dual discrepancy approach. In H. L. Swanson & K. R. Harris (Eds.), Handbook of learning disabilities (pp. 431-449). New York, NY: Guilford.

Gersten, R., Baker, S. K., Shanahan, T., Linan-Thompson, S., Collins, P., & Scarcella, R. (2007). Effective literacy and English language instruction for English learners in the elementary grades: A practice guide (NCEE 2007-4011). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee

Good, R. H., & Kaminski, R. A. (2002). DIBELS oral reading fluency passages for first through third grades (Technical Report No. 10). Eugene, OR: University of Oregon.

Good, R. H., & Kaminski, R. A. (2002). Dynamic Indicators Of Basic Early Literacy Skills (6th ed.). Eugene, OR: Institute for the Development of Educational Achievement. Available at http://dibels.uoregon.edu

Good, R. H., Kaminski, R. A., Shinn, M., Bratten, J., Shinn, M., Laimon, L.,...Flindt, N. (2004). Technical adequacy and decision making utility of DIBELS (Technical Report No. 7). Eugene, OR: University of Oregon.

Hakuta, K., & Beatty, A. (Eds.). (2000). Testing English language learners in US schools. Washington, DC: National Academy Press.

Hosp, J. L., Hosp, M. A., & Dole, J. K. (2011). Potential bias in predictive validity of universal screening measures across disaggregated subgroups. School Psychology Review, 40, 108-131.

Jenkins, J. R., Hudson, R. F., & Johnson, E. S. (2007). Screening at-risk readers in a response to intervention framework. School Psychology Review, 36, 582-600.

Kaminski, R. A., & Good, R. H. (1996). Toward a technology for assessing basic early literacy skills. School Psychology Review, 25, 215-227.

Kiefer, M. J. (2008). Catching up or falling behind? Initial English proficiency, concentrated poverty, and the reading growth of language minority learners in the United States. Journal of Educational Psychology, 100, 851-868.

Klingner, J. K., Artiles, A. J., & Barletta, L. M. (2006). English language learners who struggle with reading: Language acquisition or LD. Journal of Learning Disabilities, 39, 108-128.

Linklater, D. L., O'Connor, R. E., & Palardy, G. J. (2009). Kindergarten literacy assessment of English only and English language learner students: An examination of the predictive validity of three phonemic awareness measures. Journal of School Psychology, 47, 369-394.

National Assessment of Educational Progress. (1999, 2011). The nation's report card. Retrieved March 8, 2012, online at http://nationsreportcard.gov/reading_2011/

National Clearinghouse for English Language Acquisition. (2011). Key demographics & practice recommendations for young English learners. Retrieved December 18, 2012, online from http://www.ncela.gwu.edu/

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Retrieved Feb 05, 2011, from http://www.nichd.nih.gov/publications/nrp/smallbook.htm

Rhodes, R. L., Ochoa, S. H., & Ortiz, S. O. (2005). Assessing culturally and linguistically diverse students: A practical guide. New York, NY: Guilford Press.

Raudenbush, S.W., & Bryk, A. S. (2002) Hierarchical linear models: Applications and data analysis methods (2nd ed.). Thousand Oaks, CA: Sage.

Raudenbush, S.W., Bryk, A. S., & Congdon, R. (2004). HLM 6 for Windows [Computer software]. Lincolnwood, IL: Scientific Software International.

San Diego Unified School District. (n.d.). Units of Inquiry. Retrieved online on May 12, 2009, from http://old.sandi.net/depts/literacy/index.html

Silberglitt, B., & Hintz, J. M. (2007). How much growth can we expect? A conditional analysis of R-CBM growth rates by level of performance. Exceptional Children, 74, 71-84.

Snijders, T. A. B., & Bosker, R. J. (1993). Standard errors and sample sizes for two-level research. Journal of Educational Statistics, 18, 237-259.

Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston: Allyn & Bacon.

Townsend, D., & Collins, P. (2008). English or Spanish? Assessing Latino/a children in English and in Spanish for risk of reading disabilities. Topics in Language Disorders, 28, 61-83.

Vanderwood, M. L., Linklater, D., & Healy, K. (2008). Predictive accuracy of nonsense word fluency for English language learners. School Psychology Review, 37, 5-17.

Vanderwood, M. L., & Nam, J. E. (2007). Response to intervention for English language learners: Current development and future directions. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 408-417). New York, NY: Springer.

Verhoeven, L. T. (1990). Acquisition of reading in a second language. Reading Research Quarterly, 25, 90-114.

Woodcock, R. W., Munoz-Sandoval, F., Ruef, M., & Alvarado, C. G. (2005). Woodcock-Munoz Language Survey Revised. Rolling Meadows, IL: Riverside Publishing.

Date Received: April 28, 2011

Date Accepted: August 23, 2012

Action Editor: Benjamin Silberglitt

Gabriel Gutierrez and Mike L. Vanderwood University of California--Riverside

Correspondence regarding this article should be addressed to Gabriel Gutierrez, 1489 Ferrara Ct., Escondido, CA 92025; e-mail: mr.gabe@gmail.com

Gabriel Gutierrez, PhD, is a school psychologist in the San Diego Unified School District and an adjunct professor in the Behavioral Sciences Department at Palomar College and the Graduate School of Education at Alliant International University. His primary research interests include strengthening progress monitoring processes, academic intervention delivery, and response to intervention eligibility decisions applied to culturally and linguistically diverse students. As a practitioner, he focuses his energy on providing sound school psychological services to help meet the diverse needs of students, as well as to provide consultation and professional development support related to response to intervention processes to school psychologists in the district.

Mike Vanderwood, PhD, is currently an associate professor and school psychology program coordinator at the University of California-Riverside. His research focuses on evaluating and improving the quality of assessment, intervention, and consultation tools used in a multitiered (e.g., response to intervention) approach to service delivery for culturally and linguistically diverse students.

Table 1

Means and Standard Deviations for All English Language Proficiency Levels

                                     Early
             Overall     Beginning   Intermediate   Intermediate
Source       (n = 260)   (n = 49)      (n = 90)       (n = 81)

PSF fall      35.63       30.57         34.54          36.81
             (12.73)     (14.56)       (12.50)        (12.10)

PSF winter    40.87       36.62         39.81          42.61
             (13.39)     (15.37)       (13.08)        (12.82)

PSF spring    45.20       36.98         44.40          48.20
             (15.17)     (15.14)       (16.17)        (13.05)

NWF fall      49.87       36.41         46.49          52.78
             (29.34)     (23.79)       (25.07)        (30.02)

NWF winter    58.08       41.73         55.41          61.03
             (31.10)     (26.31)       (26.67)        (30.38)

NWF spring    71.32       50.33         65.71          75.91
             (35.53)     (27.14)       (30.91)        (35.13)

ORF fall      50.71       29.63         42.58          58.27
             (28.09)     (20.69)       (22.29)        (30.22)

ORF winter    60.38       36.14         52.59          68.42
             (30.81)     (21.90)       (25.35)        (26.80)

ORF spring    70.48       46.40         61.50          77.84
             (32.14)     (25.31)       (25.65)        (26.19)

             Early
             Advanced   Advanced
Source       (n = 30)   (n = 10)

PSF fall      39.70      42.50
             (10.76)    (10.20)

PSF winter    43.14      49.40
             (10.35)    (15.43)

PSF spring    49.24      55.40
             (12.46)    (18.01)

NWF fall      70.77      67.30
             (35.32)    (27.19)

NWF winter    79.38      85.60
             (35.73)    (17.94)

NWF spring    96.10      99.90
             (39.53)    (24.99)

ORF fall      77.90      83.80
             (28.25)    (20.42)

ORF winter    87.66      100.90
             (28.13)    (17.93)

ORF spring   100.10     109.70
             (33.02)    (25.63)

Note. PSF = Phoneme Segmentation Fluency, NWF = Nonsense Word Fluency,
ORF = Oral Reading Fluency. Standard deviations are presented in
parenthesis below mean values.

Table 2

Skewness and Kurtosis Values by English Language Proficiency Level

                Beginning          Early Intermediate
Source           (n = 49)               (n = 90)

Measure x
Time         Skewness   Kurtosis   Skewness   Kurtosis

PSF fall     -.162      -.797       .071      -.206
PSF winter    .317      -.013       .045       .268
PSF spring    .061       .392       .198      -.230
NWF fall      .731       .472      1.127      1.012
NWF winter   1.341      1.546       .644      -.081
NWF spring    .660      -.116       .736      -.419
ORF fall      .923       .987      1.080      1.252
ORF winter    .550       .182       .954      1.118
ORF spring    .038      -.997       .401       .106

                Intermediate         Early Advanced
Source           (n = 81)              (n = 30)

Measure x
Time         Skewness   Kurtosis   Skewness   Kurtosis

PSF fall     -.062        .626     -.253        .318
PSF winter    .745       1.111     -.715        .443
PSF spring   -.176        .223      .842        .116
NWF fall     1.101        .511      .396       -.766
NWF winter    .726       -.280      .392      -1.162
NWF spring    .249      -1.144      .061      -1.094
ORF fall      .454        .676     -.341       -.033
ORF winter    .372        .091     -.362        .230
ORF spring    .220       -.085     -.247       -.507

                 Advanced
Source           (n = 10)

Measure x
Time         Skewness   Kurtosis

PSF fall      .348       -.793
PSF winter    .632      -1.108
PSF spring    .408       -.984
NWF fall     1.190       1.088
NWF winter    .037      -1.117
NWF spring    .093      -1.413
ORF fall     -.497        .143
ORF winter    .079       -.520
ORF spring    .411       -.803

Note. PSF = Phoneme Segmentation Fluency, NWF = Nonsense Word Fluency,
ORF = Oral Reading Fluency.

Table 3

Growth Rates and Standard Error Estimates for Each Level of English
Language Proficiency

                                            Early
                                Beginning   Intermediate   Intermediate
Source                          (n = 49)      (n = 90)       (n = 81)

Oral Reading Fluency
  Intercept                     20.950 *      33.900 *       47.890 *
  Standard error                 3.200         2.590          2.780
  Growth rate                    8.170 *       9.450 *        9.650 *
  Standard error                 1.095         0.701          0.729
Nonsense Word Fluency
  Intercept                     28.905 *      35.437 *       39.424 *
  Standard error                 4.353         3.199          4.069
  Growth rate                    7.031 *      10.106 *       11.550 *
  Standard error                 1.731         1.443          1.773
Phoneme Segmentation Fluency
  Intercept                     27.796 *      30.511 *       31.481 *
  Standard error                 2.289         1.689          1.770
  Growth rate                    3.796 *       4.650 *        5.556 *
  Standard error                 0.955         0.705          0.743

                                Early
                                Advanced   Advanced
Source                          (n = 30)   (n = 10)

Oral Reading Fluency
  Intercept                     66.970 *   72.230 *
  Standard error                 4.730      5.710
  Growth rate                   10.380 *   12.950 *
  Standard error                 1.112      2.166
Nonsense Word Fluency
  Intercept                     57.778 *   51.667 *
  Standard error                 8.923      9.764
  Growth rate                   12.250 *   16.300 *
  Standard error                 3.875      3.680
Phoneme Segmentation Fluency
  Intercept                     34.633 *   36.200 *
  Standard error                 2.925      5.066
  Growth rate                    4.667 *    6.450 *
  Standard error                 1.221      2.115

* p < .005.

Table 4

Unconditional and Conditional Growth Models for ORF, NWF, and PSF


Coefficient Estimates  Parameter      ORF        NWF        PSF

Initial status
  Beginning            β₀₀          41.10 *    38.65 *    30.99 *
  Early intermediate   β₀₁          --         --         --
  Intermediate         β₀₂          --         --         --
  Early advanced       β₀₃          --         --         --
  Advanced             β₀₄          --         --         --
Growth rate
  Beginning            β₁₀           9.51 *    10.46 *     4.84 *
  Early intermediate   β₂₀          --         --         --
  Intermediate         β₃₀          --         --         --
  Early advanced       β₄₀          --         --         --
  Advanced             β₅₀          --         --         --
Within child
  Measurement error    σ²           51.36     283.83      70.43
Child level
  Initial status       τ₀₀         691.02 *   632.28 *    92.00 *
  Reliability                        0.85       0.49       0.36
  Growth               τ₁₀          21.11 *    91.48 *     9.14 *
  Reliability                        0.45       0.39       0.21
Model summary
  ICC                  ρ             0.93       0.66       0.57
  Deviance statistic              6438.24    7277.07    6032.49
  # parameters                       6          6          6
  Deviance change      χ²           --         --         --


Coefficient Estimates     ORF        NWF        PSF

Initial status
  Beginning              15.62 *    23.58 *    26.45 *
  Early intermediate     14.49 *     7.37       3.34
  Intermediate           28.49 *    11.36       4.21
  Early advanced         48.77 *    31.26 *     7.18 *
  Advanced               54.35 *    24.97 *     9.17 *
Growth rate
  Beginning               6.68 *     8.32 *     3.40 *
  Early intermediate      0.85       2.91       0.67
  Intermediate            0.97       4.66       1.49
  Early advanced          1.51       5.41       0.16
  Advanced                4.07 *     9.50 *     2.11
Within child
  Measurement error      51.36     283.83      70.43
Child level
  Initial status        445.99 *   547.73 *    83.80 *
  Reliability             0.79       0.45       0.34
  Growth                 19.48 *    83.98 *     8.00 *
  Reliability             0.43       0.37       0.19
Model summary
  ICC                     0.90       0.66       0.55
  Deviance statistic   6324.44    7204.20    5990.42
  # parameters           22         22         22
  Deviance change       113.80 *    72.87 *    42.07

Note. PSF = Phoneme Segmentation Fluency, NWF = Nonsense Word Fluency,
ORF = Oral Reading Fluency.

* p < .005. ICC = Intraclass Correlation Coefficient.

Table 5
Significance Tests for Differences among Intercept
and Slope Coefficients

Comparison    ORF                NWF                 PSF
(Group A      Initial   ORF      Initial    NWF      Initial   PSF
v. Group B)   Status    Growth   Status     Growth   Status    Growth

2-1           12.99 *   1.28      6.53       3.07     2.72      0.85
3-1           26.99 *   1.47     10.52       4.52     3.69      1.76
4-1           46.06 *   2.21     28.87 (a)   5.22     6.84      0.87
5-1           51.33 *   4.78 *   22.76 (a)   9.30     8.40      2.65
3-2           13.99 *   1.05      3.98       1.44     2.78      0.91
4-2           33.06 *   1.98     22.34       2.14     0.97      0.02
5-2           38.32 *   3.50     16.23       6.19     5.69      1.80
4-3           19.07 *   0.74     18.35       0.70     3.15     -0.88
5-3           24.33 *   3.31     12.24       4.75     4.71      0.89
4-5           5.27      2.57      6.11       4.05     1.57      1.78

Note. 1 = beginning ELP; 2 = early intermediate ELP; 3 = intermediate
ELP; 4 = early advanced ELP; 5 = advanced ELP; PSF = Phoneme
Segmentation Fluency; NWF = Nonsense Word Fluency;
ORF = Oral Reading Fluency.

* p < .005.
(a) p < .01.