Composition Medium Comparability in a Direct Writing Assessment of Non-Native English Speakers

ABSTRACT

The Test of English as a Foreign Language (TOEFL) contains a direct writing assessment, and examinees are given the option of composing their responses at a computer terminal using a keyboard or composing them in handwriting. This study sought to determine whether performance on a direct writing assessment is comparable for examinees given the choice to compose essays in handwriting versus word processing. Using linear models, we examined this relationship while controlling for English language proficiency and several demographic characteristics of examinees. We found a weak two-way interaction between composition medium and English language proficiency: examinees with weaker English language scores performed better on handwritten essays, while examinees with stronger English language scores performed comparably in the two media. We also observed predictable differences associated with geographic region, native language, gender, and age.
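The interaction reported above can be illustrated with a minimal sketch. The snippet below uses simulated (not actual TOEFL) data and hypothetical coefficient values chosen only to mirror the reported pattern; it fits an ordinary least-squares model whose design matrix includes a medium x proficiency interaction term, the kind of linear model the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
proficiency = rng.normal(0.0, 1.0, n)              # standardized proficiency score
handwritten = rng.integers(0, 2, n).astype(float)  # 1 = handwritten, 0 = word-processed

# Hypothetical data-generating process mirroring the reported finding:
# handwriting helps weaker examinees, and the gap closes as proficiency
# rises (a negative medium x proficiency interaction).
score = (3.5 + 0.8 * proficiency + 0.4 * handwritten
         - 0.4 * handwritten * proficiency + rng.normal(0.0, 0.3, n))

# Design matrix: intercept, proficiency, medium, interaction
X = np.column_stack([np.ones(n), proficiency, handwritten,
                     handwritten * proficiency])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

print(beta)  # beta[3] estimates the interaction effect
```

A negative estimate for `beta[3]` would indicate that the handwriting advantage shrinks as proficiency increases, which is the shape of the interaction described in the abstract.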

INTRODUCTION

Increasingly, computers are being used to administer selection and certification tests. With the transition from a paper-based to a computer-based testing system comes a potential threat to the consequential basis of test use, an aspect of validity (Messick, 1989). That is, implementation of a computer-based testing program could result in unintended negative consequences for some examinees or for some societal components of the testing system. For example, differences in the performance of gender and ethnic groups exist on paper-based tests, and some fear that the shift toward a computer-based testing system may exacerbate existing social barriers to advancement opportunities for women, minorities, the economically disadvantaged, and the elderly. Previous research comparing computer-based and paper-and-pencil tests has revealed only small differences between population means of multiple-choice tests administered in these two media (Mead & Drasgow, 1993). However, little is known about the influence of computerized testing on "at risk" groups of examinees or about the comparability of performance-based tests (e.g., direct writing assessments) administered in these two media, particularly for diverse populations of examinees. The purpose of this article is to compare computer-based and paper-based scores on the writing section of the Test of English as a Foreign Language (TOEFL) for a diverse population of international examinees.

LITERATURE REVIEW

What evidence exists to support concerns about the potential negative impact of computer-based testing on some populations of examinees? First, it is clear that some groups of examinees are less likely to have access to, and hence experience and proficiency with, computers. In the US, minorities and women are less likely to have computers in their homes, and males are likely to dominate computer use at school--the primary location within which some groups learn about and gain experience using computers (Campbell, 1989; Grignon, 1993; Keogh, Barnes, Joiner, & Littleton, 2000). Internationally, women, Africans, and Spanish speakers are less likely to have access to computers (Janssen Reinen & Plomp, 1993; Miller & Varman, 1994; Taylor, Kirsch, Eignor, & Jamieson, 1999). Similarly, one would expect older individuals who learned how to use a computer later in life to have less experience using computers, although it is not clear whether these individuals would have restricted access.

Second, inequities in computer access and familiarity may lead to lower levels of confidence and higher levels of anxiety toward computer-based tasks. Minorities in the US, and women internationally, exhibit higher levels of computer anxiety and lower levels of confidence in performing computer-related tasks (Janssen Reinen & Plomp, 1993; Legg & Buhr, 1992; Loyd & Gressard, 1986; Massoud, 1992; Nolan, McKinnon, & Soler, 1992; Shashaani, 1997; Temple & Lips, 1989; Whitely, 1997). …