At worst, the impact of free reading appears to be the same as that of traditional instruction, and it is often better, especially when studies are continued for more than an academic year, a finding that the National Reading Panel has obscured by omitting important studies and by describing others incorrectly, Mr. Krashen charges.
IN HER review of the report on phonics by the National Reading Panel (NRP), Elaine Garan concluded that the report involved "a limited number of studies of a narrow population."1 I will argue that this problem is not limited to the section on phonics: it also applies to the section of the report on fluency. It is only by omitting a large number of relevant studies - and misinterpreting the ones that were included - that the NRP was able to reach the startling conclusion that there is no clear evidence that encouraging children to read more actually improves reading achievement.2
The selection criteria used by the NRP to choose studies for its report were as follows:
1. The study had to be a research study that appeared to consider the effect of encouraging students to read more on reading achievement.
2. The study had to focus on reading education in English and had to have been conducted with children (K-12).
3. The study itself had to have appeared in a refereed journal.
4. The study had to have been carried out with English language reading.3
The NRP claimed that it could find only 14 studies that met these criteria.4 Of these, 10 were studies of the impact of sustained silent reading (SSR) programs in which some class time is set aside for free reading with little or no "accountability." Of these 10, three had positive results, with the students who were engaged in free voluntary reading outperforming comparison groups. Another study showed positive results for one condition but not for other conditions, and the rest of the studies showed no difference between groups or no gains. Table 1 summarizes these outcomes.5
In other sections of the NRP report, such as the sections on phonics and phonemic awareness, the NRP listed studies that were excluded from its analysis. This was not done for the section on fluency. We do not know, therefore, which excluded studies were simply missed and which were rejected, nor do we know the specific rationale for their rejection.
In Table 2, I present an "expanded" set of SSR studies in which tests of reading comprehension were used. Many of the studies summarized in Table 2 meet the four criteria of the NRP and were apparently missed. But there were some "violations." A few were done with students slightly older than the age limit imposed by the NRP; in all such cases, the subjects were undergraduate college students. Subjects in some of the studies were students of English as a second language.6 In several studies, students read in Spanish, not English; in these cases, the students were native speakers of Spanish. Finally, some studies were not published in refereed journals.
Table 2 summarizes the results of these studies. It covers both the studies the NRP included and those it did not.7
In the studies in Table 2, SSR students did as well as or better than comparison students in 50 out of 53 comparisons. For longer-term studies (those longer than one year), SSR students were superior in eight of 10 studies, and there was no difference in the other two. Moreover, there are plausible reasons why the results were not even more positive. In one study by Isabel Schon, Kenneth Hopkins, and Carol Vojir, there was no difference between SSR students and comparison groups, but only five of the 11 SSR teachers actually carried out SSR conscientiously.8 The classes taught by these five achieved significantly better gains.
In a study by Ruth Cline and George Kretke, another study showing no difference, the subjects were junior high school students who were reading two years above grade level and probably had already established a reading habit. …