Journal of College Reading and Learning

Effects of Response Mode and Time Allotment on College Students' Writing


Traditionally, two types of test items have been used in educational assessment: items in which the student selects the correct response from a set (as in multiple-choice, true-false, and matching items) and items in which the student constructs a response on his or her own (as in short-answer or essay items). The latter type of test item has become more common in large-scale standardized testing for several reasons. First, writing samples are thought by many to be the best method of assessing writing ability (Conlan, 1986; Linn & Miller, 2005). Second, computer programs have been developed to score writing samples (Dikli, 2006), reducing the financial and logistical challenges associated with this type of assessment. Third, concerns persist about whether multiple-choice tests can measure complex reasoning and problem solving (for a review, see Phelps, 2003).

Although essay tests have certain acknowledged advantages over selected-response tests (e.g., utility in assessing certain higher-order learning objectives), essay tests also have potential limitations. One such limitation is that the response mode of the test may significantly affect examinees' scores; composing an essay using a computerized word processor program may lead to a different score than composing an essay by hand (Russell, 1999). A second limitation is that time limits, which determine the amount of text that students can compose, may significantly affect students' scores, since the amount of text written is a robust correlate of holistic measures of essay quality (Hopkins, 1998; Powers, 2005). In the present study, we explore these issues empirically, asking how time limits and response modes interact to affect students' essay composition.

Before reviewing the relevant literature on writing assessment, we briefly discuss one of the motivations behind the present study. An increasing number of students in higher education have diagnoses of common disabilities (e.g., learning disabilities, attention problems, psychiatric disorders) that may adversely affect their scores on standardized tests (Konur, 2002). In many countries, alterations are made to the administration of tests (for example, testing accommodations) in the hope of giving these students a fair chance to demonstrate their skills (Bolt & Thurlow, 2004; Hampton & Gosden, 2004). Extending test time limits and allowing examinees to use computers to write are among the most common accommodations offered, but little is known about how these accommodations affect essay examinations.

An appropriate test accommodation should mitigate the performance obstacles faced by students with disabilities (e.g., large print for a student with visual limitations) while having less of an effect on the performance of non-accommodated students (Fuchs & Fuchs, 2001). However, many testing accommodations provide at least some benefit to students both with and without disabilities (Sireci, Scarpati, & Li, 2005). Indeed, some work shows that students who have poor academic skills but no disability diagnoses benefit more from accommodations than those with official diagnoses do (e.g., Elliott & Marquart, 2004). The present study, then, was conducted in part for its potential implications for the use of extended-time and word processor accommodations, for students both with and without disability diagnoses.

Response Mode and Writing Performance

A number of studies have compared writing produced with and without the aid of a word processor. In a recent meta-analysis, Goldberg, Russell, and Cook (2003) examined 26 of these studies published between 1992 and 2002 involving students in K-12 educational settings. These investigators concluded that word processors lead reliably to writing in greater quantities (with a weighted effect size of d = .50) and to writing of better quality (judged using a variety of measures, depending on the study; weighted effect size of d = . …
