Educational Technology & Society

Marking Strategies in Metacognition-Evaluated Computer-Based Testing

Introduction

Computer-based testing (CBT) has been widely used since information technology became widespread. Such tests are easily administered on a computer or an equivalent electronic device, and students can immediately access their test results. Many researchers claimed that CBT systems were valuable self-evaluation tools for self-managed learning (Croft, Danson, Dawson, & Ward, 2001; Peat & Franklin, 2002). However, studies indicated that, to be used effectively and efficiently as self-managed learning tools, CBT systems must provide adaptive feedback for future learning (Souvignier & Mokhlesgerami, 2006; Thelwall, 2000; Wong, Wong, & Yeung, 2001). They must also provide information that enables students to control their own pace during the test (Parshall, Kalohn, & Davey, 2002, p. 41).

Adaptive feedback enabled students to learn according to the instructional strategies provided (Collis & Messing, 2000; Collis & Nijhuis, 2000). According to Collis, De Boer and Slotman (2001), giving adaptive feedback after a test was one strategy for helping students learn effectively; it could also help underachievers extend their learning. For example, giving answer-explanations (AEs) related to the key knowledge concepts of test items after a CBT could help students understand what they had learned and identify their mistakes (Wang, Wang, Huang, & Chen, 2004); that is, the AE was a metacognitive strategy (Rasekh & Ranjbary, 2003). Answer-explanations offered via automatic evaluation tools could correct student mistakes, reinforce their memories, and support their learning, as well as reduce teacher workload, so that individual students could receive adaptive compensatory instruction even in a forty-student class. Therefore, if CBT systems displayed only scores without feedback, the "teachable moment", the moment of educational opportunity when students were disposed to learn, might not be used effectively (Collis et al., 2001; Ram, Cox, & Narayanan, 1995).
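To make the AE mechanism concrete, the following Python sketch illustrates how a post-test report might pair each item's explanation with the student's response. It is only an illustration under assumed names: the Item fields and the ae_report function are hypothetical, not the system described in the cited studies.

from dataclasses import dataclass

@dataclass
class Item:
    stem: str         # the question text
    correct: str      # correct option label, e.g., "B"
    explanation: str  # answer-explanation (AE) for the item's key concept

def ae_report(items, responses):
    # Pair each response with its correctness and the item's AE, so the
    # student sees both the mistake and the concept behind the answer.
    lines = []
    for item, answer in zip(items, responses):
        verdict = "correct" if answer == item.correct else f"incorrect (answered {answer})"
        lines.append(f"{item.stem} -> {verdict}; AE: {item.explanation}")
    return lines

# Example use:
items = [Item("2 + 3 * 4 = ?", "B", "Multiplication precedes addition, so the result is 14.")]
print("\n".join(ae_report(items, ["A"])))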

To help students control their own pace, CBT systems could provide the information needed to navigate a test, such as reminders of unanswered items. Gibson, Brewer, Dholakia, Vouk and Bitzer (1995) showed that such information could help students complete a CBT efficiently and reduce their frustration and anxiety. Another mechanism for controlling the testing process within the CBT environment was the marking function. Marking was a skill used to increase the efficiency and effectiveness of self-managed learning (Parshall et al., 2002, p. 34). In the present study, marking referred to a test-taking behavior in which the student placed a question mark next to a test item to indicate an uncertain answer; the mark also served as a reminder to review, check, or revise the answer. According to Higgins and Hoffmann (2005), students rarely marked test items when they were sure of their answers. Therefore, marking could be considered an alternative to the confidence-rating technique conventionally used to measure the metacognition monitoring ability of students. Students applying the confidence-rating technique were required to rate the degree of confidence in their answers; their metacognition monitoring ability was then evaluated by matching the confidence degree with the test results (Baker & Brown, 1984; Desoete & Roeyers, 2006; Vu, Hanley, Strybel, & Proctor, 2000). For example, choosing a correct answer and rating it with high confidence suggested good metacognition monitoring ability, whereas choosing a wrong answer and rating it with high confidence indicated poor metacognition monitoring ability.
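The matching logic behind this evaluation can be summarized in a short sketch. The functions and labels below are my own illustration of the classification described above, not the cited authors' instrument; the first function treats an unmarked item as a proxy for high confidence, following Higgins and Hoffmann's (2005) observation.

def confidence_from_marking(marked: bool) -> bool:
    # Students rarely mark items they are sure of, so an unmarked item
    # is taken as a proxy for a high-confidence answer.
    return not marked

def monitoring_quality(is_correct: bool, high_confidence: bool) -> str:
    # Good monitoring means the stated confidence matches the test result.
    if high_confidence:
        return "good monitoring" if is_correct else "poor monitoring"
    return "underconfident but correct" if is_correct else "calibrated doubt"

# Example use: a marked item answered incorrectly.
print(monitoring_quality(is_correct=False, high_confidence=confidence_from_marking(marked=True)))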

This study proposed metacognition-evaluated feedback (MEF), a new feedback mode in which a CBT system displays AEs integrated with the student's answer responses and marking records. The study had two purposes. First, it explored whether marking could improve the test scores of examinees. Second, it investigated how MEF affected the review behavior of students after completing a CBT. …
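As a rough sketch of how MEF-style feedback might be assembled (the field names and grouping below are my assumptions; the paper's actual interface is not reproduced here), AEs can be organized by the combination of correctness and marking, so that wrong but unmarked items, where the student's monitoring failed, are surfaced for review first.

def mef_groups(results):
    # results: iterable of (item_id, is_correct, was_marked) tuples.
    # Wrong-and-unmarked items signal monitoring failures and are thus
    # the most valuable review targets in an MEF-style report.
    groups = {"wrong_unmarked": [], "wrong_marked": [],
              "right_marked": [], "right_unmarked": []}
    for item_id, is_correct, was_marked in results:
        key = ("right" if is_correct else "wrong") + \
              ("_marked" if was_marked else "_unmarked")
        groups[key].append(item_id)
    return groups

# Example use:
print(mef_groups([(1, False, False), (2, True, True), (3, True, False)]))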
