Academic journal article Journal of Information Technology Education

Computer Based Assessment Systems Evaluation Via the ISO9126 Quality Model


Introduction

Most solutions to the problem of delivering course content and supporting both student learning and assessment nowadays involve the use of computers, thanks to the continuous advances of Information Technology. According to Bull (1999), using computers to perform assessment is more contentious than using them to deliver content and to support student learning. In many papers the terms Computer Assisted Assessment (CAA) and Computer Based Assessment (CBA) are used interchangeably and somewhat inconsistently. The former refers to the use of computers in assessment: the term encompasses the use of computers to deliver, mark, and analyze assignments or examinations, and it also includes the collation and analysis of data gathered from optical mark readers. The latter (the term that will be used in this paper) addresses the use of computers for the entire process, including assessment delivery and feedback provision (Charman & Elmes, 1998).

The interest in developing CBA tools has increased in recent years, owing to the potential market for their application. Many commercial products, as well as freeware and shareware tools, are the result of studies and research in this field by companies and public institutions. For an updated survey of course and test delivery/management systems for distance learning, see Looms (2001). This site maintains descriptions of more than one hundred products and is constantly updated with new items. This noteworthy growth in the market raises the problem of identifying a set of criteria that may help an educational team select the most appropriate tool for its assessment needs. According to our findings, only two papers have been devoted to this important topic (Freemont & Jones, 1994; Gibson et al., 1995). The major drawbacks shown by both papers are: a) the unstated underlying assumption that a CBA system is a sort of monolith to be evaluated as a single entity, and b) the lack of an adequate description of how the proposed criteria were arrived at. Since anyone could come up with some list of criteria, what needs to be established is what makes a list valid. A typical CBA system is composed of:

* A Test Management System (TMS)--i.e., a tool providing the instructor with an easy-to-use interface, the ability to create questions and assemble them into tests, and the possibility of grading the tests and producing some statistical evaluations of the results.

* A Test Delivery System (TDS)--i.e., a tool for delivering tests to the students. The tool may deliver tests on paper, on a stand-alone computer, over a LAN, or over the Web. The TDS may be augmented with a Web enabler used to deliver the tests over the Web. In many cases producers distribute two different versions of the same TDS: one to deliver tests on single computers or over a LAN, and the other to deliver tests over the Web. This is the policy adopted, for instance, by Cogent Computing Co. (2000) with CQuest-Test and CQuest-Web.

The TMS and TDS modules may be integrated into a single application, as in InQsit (2000), developed by Ball State University, or may be delivered as separate applications. As an instance of the latter policy, we may cite ExaMaker & Examine, developed by HitReturn (2000).
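The two-module structure described above can be sketched as a minimal data model. This is a hypothetical illustration only; all class, method, and attribute names are our own and do not correspond to any of the cited products:

```python
# Illustrative sketch of the general TMS/TDS structure of a CBA system.
# All names are invented for this example.

class TestManagementSystem:
    """TMS: authoring questions, assembling tests, grading, statistics."""

    def __init__(self):
        self.questions = []

    def create_question(self, text, answer):
        # Store a question for later assembly into tests.
        self.questions.append({"text": text, "answer": answer})

    def assemble_test(self, indices):
        # Build a test from a selection of previously created questions.
        return [self.questions[i] for i in indices]


class TestDeliverySystem:
    """TDS: delivers an assembled test via a chosen channel."""

    def __init__(self, channel="web"):
        # Channel may be "paper", "standalone", "lan", or "web".
        self.channel = channel

    def deliver(self, test):
        # Return a summary of the delivery (placeholder for real delivery logic).
        return {"channel": self.channel, "items": len(test)}


tms = TestManagementSystem()
tms.create_question("2 + 2 = ?", "4")
test = tms.assemble_test([0])
print(TestDeliverySystem("lan").deliver(test))  # {'channel': 'lan', 'items': 1}
```

Modeling the TMS and TDS as separate classes mirrors the observation that vendors may ship them either integrated in one application or as separate products.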

It is therefore very important to identify a set of quality factors that can be used to evaluate each of the modules belonging to this general structure of a CBA system.

Although the literature offering guidelines for the selection of CBA systems appears scarce, there are many research studies in Software Engineering providing general criteria that may be used to evaluate software systems (Anderson, 1989; Ares Casal et al., 1998; Henderson et al., 1995; Nikoukaran et al., 1999; Vlahavas et al., 1999). A relevant effort has been made in this field by the International Organization for Standardization, which in 1991 defined the ISO9126 standard for "Information Technology--Software Quality Characteristics and Sub-characteristics" (ISO, 1991). …
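To illustrate how the six top-level ISO9126 characteristics (functionality, reliability, usability, efficiency, maintainability, and portability) might be applied per module rather than to a monolithic system, consider the following weighted-score sketch. The weights and scores are invented for the example and are not taken from the standard or from any evaluated product:

```python
# Hedged sketch: per-module weighted scoring over the six top-level
# ISO9126 characteristics. All weights and scores are invented examples.

ISO9126 = ["functionality", "reliability", "usability",
           "efficiency", "maintainability", "portability"]

def weighted_score(scores, weights):
    """Weighted average of characteristic scores (a 0-5 scale is assumed)."""
    total_weight = sum(weights[c] for c in ISO9126)
    return sum(scores[c] * weights[c] for c in ISO9126) / total_weight

# Example: an evaluator weights the characteristics for the TMS module
# and records the scores assigned to one candidate product.
weights = {"functionality": 3, "reliability": 3, "usability": 2,
           "efficiency": 1, "maintainability": 1, "portability": 2}
tms_scores = {"functionality": 4, "reliability": 3, "usability": 5,
              "efficiency": 3, "maintainability": 2, "portability": 3}

print(weighted_score(tms_scores, weights))  # 3.5
```

Scoring the TMS and TDS separately, with weights chosen per module, reflects the point above that a CBA system should not be treated as a single monolithic entity.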
