Australian Journal of Education

Multiple-Choice versus Short-Response Items: Differences in Omit Behaviour

Article excerpt

The overall rate of omission of items by 28 331 17-year-old Australian students on a high-stakes test of achievement in the common elements or cognitive skills of the senior school curriculum is reported for a subtest in multiple-choice format and a subtest in short-response format. For the former, the omit rates were minuscule and there was no significant difference by gender or by type of school attended. For the latter, where an item can be `worth' up to five times as much as a single multiple-choice item, the omit rates were between 10 and 20 times those for multiple-choice, and the differences between male and female omit rates, and between the omit rates of students from government and nongovernment schools, were both significant. For both formats, females from single-sex schools omitted significantly fewer items than did females from coeducational schools. Some possible explanations of omit behaviour are suggested.

Introduction

Omitted response in context

For the last 75 years or so, multiple-choice (MC) tests have been a pervasive feature of education, admittedly waxing and waning in popularity within countries, systems, levels, and disciplines. `No assessment technique has been rubbished [sic] quite like multiple choice, unless it is graphology' (Wood, 1991, p. 32). On the other hand, Wainer and Thissen (1993) write that they `have never found a test that is composed of an objectively and a subjectively scored section for which [it] is not true [that, whatever is being measured by the constructed-response section is measured better by the multiple-choice section]' (p. 116).

More recently, as validity has come to be viewed as being as important as reliability (Moss, 1992), and as authentic assessment (Hambleton & Murphy, 1992) has become de rigueur, MC tests have been complemented or replaced by tests in which students are required to produce a response (as an essay or as a short answer) rather than to select the correct or best response. This change in emphasis can also be linked to a shift towards assessment tasks which, according to Shepard (1991), emulate the kind of process-based higher-order tasks thought to represent good practice. Responding to the task set by a short-response item (SRI) might involve writing a paragraph of exposition or explanation, performing a calculation, constructing a graph, compiling a table, or producing a sketch or drawing. The short-response (SR) format is referred to elsewhere as `short-answer' (Viviani, 1990) and `constructed response' (Bennett, 1991).

When candidates take a conventional MC test, their responses to the items fall into three categories: correct, incorrect, and absent. In a conventional MC test there is only one correct option, or key, and the scoring rule applied does not encompass differential rewards for the various incorrect responses, or distractors. When candidates take a test in SR format, their responses also fall into three categories. The first category contains contributory responses: the candidate supplies a response and it is creditable, worth either full marks or part marks (so there are subdivisions within this category). The second category contains noncontributory responses: the candidate supplies a response but it does not merit even the lowest part mark available. The third category describes the situation where there is no response: the candidate leaves the response space completely blank and the response therefore attracts no credit. For both MC and SR formats, it is this last category that is labelled `omit'.
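To make these category boundaries concrete, the following minimal Python sketch classifies a single response under each format. The encodings (None for a blank response space) and the names MCCategory, SRCategory, classify_mc, and classify_sr are illustrative assumptions for this sketch, not part of the instrument's actual scoring apparatus:

    from enum import Enum
    from typing import Optional

    class MCCategory(Enum):
        CORRECT = "correct"
        INCORRECT = "incorrect"
        OMIT = "omit"

    class SRCategory(Enum):
        CONTRIBUTORY = "contributory"        # full or part marks awarded
        NONCONTRIBUTORY = "noncontributory"  # response supplied, but no credit earned
        OMIT = "omit"                        # response space left completely blank

    def classify_mc(response: Optional[str], key: str) -> MCCategory:
        """Classify a conventional MC response against the single key."""
        if response is None:
            return MCCategory.OMIT
        return MCCategory.CORRECT if response == key else MCCategory.INCORRECT

    def classify_sr(marks_awarded: Optional[float]) -> SRCategory:
        """Classify an SR response from the marks awarded (None = blank)."""
        if marks_awarded is None:
            return SRCategory.OMIT
        return (SRCategory.CONTRIBUTORY if marks_awarded > 0
                else SRCategory.NONCONTRIBUTORY)

Note that under this scheme the distinction between full and part marks is collapsed into the single contributory category, mirroring the subdivision mentioned above.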

Gafni and Melamed (1990) differentiate between two types of omitted multiple-choice item or question (MCQ): unanswered items within the range of items answered (referred to as `intentionally omitted'); and unanswered items in a string at the end of a test paper (`unreached'). This typology could also be used to differentiate between two types of omitted SRI. …
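Gafni and Melamed's distinction is operational: a blank counts as `unreached' only if it lies in the unbroken run of blanks at the end of the paper, and any blank within the range of answered items is `intentionally omitted'. A minimal sketch of that rule follows; the response encoding and the function name classify_omits are assumptions for illustration, not taken from the source:

    from typing import Optional, Sequence

    def classify_omits(responses: Sequence[Optional[str]]) -> list[str]:
        """Label each blank (None) as 'intentional' or 'unreached'.

        A blank is 'unreached' only if no later item was answered,
        i.e. it falls in the trailing run of blanks; any blank within
        the range of answered items is intentionally omitted.
        """
        # Index of the last answered item (-1 if nothing was answered).
        last_answered = max(
            (i for i, r in enumerate(responses) if r is not None), default=-1
        )
        labels = []
        for i, r in enumerate(responses):
            if r is not None:
                labels.append("answered")
            elif i < last_answered:
                labels.append("intentional")
            else:
                labels.append("unreached")
        return labels

    # Example: item 2 is skipped mid-paper; items 5-6 are trailing blanks.
    print(classify_omits(["A", None, "C", "B", None, None]))
    # ['answered', 'intentional', 'answered', 'answered', 'unreached', 'unreached']

The same rule applies unchanged to SR items if `answered' is read as `any non-blank response space', which is how the typology carries over to the SRI format.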
