Science or Reading: What Is Being Measured by Standardized Tests?



This study examined reading issues associated with a standardized science test. Grade 11 students in Connecticut were shown released science test items and asked about the reading issues associated with the items. Findings suggested that students varied in their understanding of the nature of the items and in their ability to read for detail. The analysis of responses indicated that students perceived the following factors to be influences on their understanding of items: background information, information provided by items, unique item features, and ability to handle the challenges presented by items. Findings raised questions about interpretations of students' science content knowledge that are based solely on standardized tests.


Results from the 2007 Trends in International Mathematics and Science Study (TIMSS) indicated that American students were underachieving in science and that they compared unfavorably with students in other countries in this subject. The results showed that the science achievement of American fourth and eighth graders has not changed statistically since 1995, when the study was first conducted (Gonzales et al., 2008). Moreover, despite being the richest nation in the world, the United States had produced students who scored behind those in several other countries, some of which had far fewer resources. Those countries included Slovenia, Hungary, and the Czech Republic, as well as Chinese Taipei and Singapore. Only 15% of American fourth graders and 10% of eighth graders surpassed the science benchmark for the 2007 TIMSS.

Given these results, it is critical that American science educators have valid measures of students' science knowledge to ensure that curricular revisions and interventions are allocated appropriately. That raises the question, "Is the standardized science test within TIMSS (or any other standardized science test) isolating scientific knowledge, or are other variables influencing the results?" One such variable that can potentially erode the validity of standardized tests is reading. The importance of reading cannot be overstated in our information-rich society. The ability to read has been considered a foundation for many classroom learning and assessment tasks, including those in science. Therefore, reading is a critical variable that must be considered when assessing students in science.


Educators have often examined the readability of test items, compared students' achievement on reading and content-area assessments, and debated the validity of tests for various student subgroups. However, one aspect is missing from this stream of data: the voice of the students. How well can they interpret test items? What issues do they identify in test items? How well equipped do they perceive themselves to be to answer test items?

This article represents part of a larger study that examined the relationship between reading and students' performance on a science test. This portion of the study qualitatively examined students' self-reports about reading-related factors that could potentially influence their ability to respond correctly to Connecticut Academic Performance Test (CAPT) science items. Any test that uses language in its items is inherently assessing students' reading ability as well as content knowledge (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 1999). As Roe, Stoodt, and Burns (1991) explained, "Secondary school students sometimes fail to do well on tests, not because they do not know the material, but because they have difficulty reading and comprehending the test" (p. 162).


Reading ability is not only instrumental for learning content; it has also been shown to influence students' performance on standardized tests. …