Academic journal article International Journal of Instructional Media

Gender Differences in Computer-Administered versus Paper-Based Tests

Article excerpt

INTRODUCTION

The acceptance and use of computerized testing is increasing each year. For example, the Educational Testing Service (ETS) introduced a computer version of the Graduate Record Examination (GRE) in 1992 and is working on a computerized SAT. The Graduate Management Admissions Test (GMAT) administration was completely computerized in the fall of 1997, and since 1999 the "e-rater" computer program has scored the essay portion of that examination (400,000 essays in 2001). The National Council of Architectural Registration Boards uses an ETS computerized test as part of its professional licensing process. Computerized tests are also part of ETS's new generation of teacher tests, the Praxis Series, and the National Council Licensure Examination for nurses is available only on computer. Not only ETS but also other for-profit groups, as well as individual teachers and corporate trainers, are trying out online testing, especially as a complement or component of distance learning courses.

However, one unresolved problem associated with computerized examinations is performance bias due to examinees' individual differences (FairTest, 2002). For example, the performance gap that already exists on paper-based multiple-choice tests between men and women (Weaver & Raptis, 2001), between ethnic groups, and between persons from different socioeconomic backgrounds could widen as a result of computerized testing: schools with large minority or low-income populations are far less likely to have access to technology, and poor and minority children in general are much less likely to have access outside of school.

Further, females may be adversely affected by computerized tests, since far more females than males report no school access to computers, few computer learning experiences, and limited knowledge about computers. In addition, computer anxiety is much more prevalent among females than males, with Black females reporting the greatest anxiety (Bugbee & Bernt, 1990; Gilroy & Desai, 1986; Legg & Buhr, 1992; Moe & Johnson, 1988; Sutton, 1991; Urban, 1986).

For example, Shashaani (1994), in a study of 1,700 high school students, reported that gender differences in the amount of computer experience were directly related to attitudes about computers. As measured by the number of computer classes attended, the amount of daily computer use, and home computer access, male students had more computer experience than female students, and males showed more positive attitudes toward computers, including greater computer interest, computer confidence, and perceived computer utility. Similarly, Busch (1995) reported reduced perceived self-efficacy for female college students (relative to males) in completing complex tasks that involved word processing and spreadsheet software; however, no gender differences were found in computer attitudes or self-efficacy regarding simpler computer tasks.

Further, Taylor and Mounfield (1994) surveyed a large group of college undergraduates in order to identify individual factors that affect gender differences in computer attitudes. They reported a significant correlation between early prior computing experiences and females' level of success in a college computer course, and inferred that pre-college and initial college computing experiences can play an important role in achieving gender equity in college computer science courses.

By extension, these studies suggest that females may not do as well on computer-administered tests as they would on equivalent paper-based tests, most likely because the stress of testing compounds a lack of experience with computers. For example, Vogel (1994) reported that the level of computer anxiety has complex effects on performance on computer-administered verbal sections of the Graduate Record Examination (relative to paper-and-pencil).

In another study of the GRE delivered by computer and on paper, this time involving highly computer literate examinees, Parshall and Kromrey (1993) reported that computer-based test scores on the verbal, quantitative, and analytic sections were all greater than the associated paper-based test scores. …
