Academy of Educational Leadership Journal

Computer-Based Testing: A Comparison of Computer-Based and Paper-and-Pencil Assessment

Article excerpt

INTRODUCTION

The purpose of this study was to compare assessment scores of students taking proctored computer-based tests (CBT) with scores of students taking proctored paper-and-pencil tests in the classroom. Erturk, Ozden, and Sanli (2004) define computer-assisted assessment (CAA) as encompassing a range of activities, including the delivery, marking, and analysis of all or part of the student assessment process using stand-alone or networked computers and associated technologies. More specifically, Wise, Barnes, Harvey, and Plake (1989) use the term computer-based testing (CBT) for cases in which a computer delivers a test in the same manner and order as it would appear on a paper-and-pencil test. Regardless of format, assessment as a whole is an important, fundamental, and constant component of teaching and learning (Rovai, 2000; Rowe, 2004; Serwata, 2003). Assessments provide the professor with a means of judging students' mastery of course material. Students may benefit from testing as well: testing has been shown to increase retention of subject material by challenging a student's comprehension (Everding, 2006).
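To make the comparison concrete, the sketch below shows one common way scores from two independently tested groups might be contrasted; it is an illustration only, not the analysis reported in this study, and it uses hypothetical score lists with Python's scipy.stats.ttest_ind (Welch's variant, which does not assume equal variances across groups).

    from scipy import stats

    # Hypothetical scores (percent correct); not data from this study.
    cbt_scores = [82, 75, 91, 68, 77, 85, 80, 73, 88, 79]
    paper_scores = [78, 74, 86, 70, 81, 83, 76, 72, 84, 80]

    # Welch's independent-samples t-test: compares the two group means
    # without assuming equal variances between the groups.
    t_stat, p_value = stats.ttest_ind(cbt_scores, paper_scores, equal_var=False)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")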

Many professors are moving to computer-based tests for benefits such as reduced grading effort and the ability to test more frequently (Erturk, Ozden, and Sanli, 2004). Professional certifications such as the CPA and CMA exams, as well as many graduate-level aptitude tests such as the GMAT and GRE, are also administered by computer in proctored testing centers. Understanding the effect the CBT format has on test-taker performance is therefore important not only for the individual being assessed, but also for the entity using the assessment as a measure of that individual's ability.

In Erturk, Ozden, and Sanli's Student Perceptions of Online Assessment (2004), the researchers cite Bull & McKenna (2001) in providing the motivation for migrating to CBT:

* Increase frequency of assessments;

* Increase range of assessed knowledge;

* Increase feedback;

* Increase assessment methods;

* Increase objectivity;

* Reduce marking workloads; and

* Reduce administrative workloads.

CBT has several advantages in terms of time management within the classroom: it reduces instructional time dedicated to testing and allows more flexibility in scheduling and administering tests (Bonham, 2006; Bugbee, 1996; DeSouza & Fleming, 2003; Graham, Mogel, Brallier, & Palm, 2008; Zandvliet & Farragher, 1997). There are also advantages in terms of evaluating the test: CBT puts data directly into electronic databases, which allows for easy item analysis (Bonham, 2006; Bugbee, 1996; Zandvliet & Farragher, 1997). It also makes feedback instantly available to instructors and students (Bonham, 2006; Bugbee, 1996; Zandvliet & Farragher, 1997). Finally, several studies have reported that students prefer online tests to written ones (Bonham, 2006; Bugbee, 1997; Zandvliet & Farragher, 1997).
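As a rough illustration of the kind of item analysis such electronic records make easy (again, a sketch under stated assumptions, not a method described in the article), the snippet below computes two common item statistics, difficulty (proportion correct) and point-biserial discrimination, from a hypothetical 0/1 response matrix using NumPy.

    import numpy as np

    # Hypothetical response matrix: rows = students, columns = items; 1 = correct.
    responses = np.array([
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1],
        [0, 1, 0, 0],
        [1, 1, 0, 1],
    ])

    totals = responses.sum(axis=1)  # each student's total score

    # Item difficulty: proportion of students answering each item correctly.
    difficulty = responses.mean(axis=0)

    # Item discrimination: correlation between each item and the total score.
    discrimination = np.array([np.corrcoef(responses[:, i], totals)[0, 1]
                               for i in range(responses.shape[1])])

    print("difficulty:", np.round(difficulty, 2))
    print("discrimination:", np.round(discrimination, 2))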

Waschull (2001) reports similar test performance for students taking unproctored online tests and students taking paper-and-pencil tests in the classroom. In a review of literature from the 1980s and early 1990s, during the early stages of computer-based testing, Bugbee (1996) found computer-based test scores tended to be similar to paper-administered scores when the testing environment was similar (both proctored or both unproctored). In Harmon and Lambrinos' study (2008), final test scores from unproctored online tests and proctored online tests did not differ.

Several studies comparing computer-based versus paper-and-pencil assessment have been conducted. Clariana and Wallace (2002) found that students scored significantly higher on computer-based testing; however, no indication was provided as to whether the computer-based exam was proctored. Additionally, the subject matter tested was computer science, where students' expertise and comfort level with computers could have influenced the results. …
