Academic journal article Memory & Cognition

Retrospective Bias in Test Performance: Providing Easy Items at the Beginning of a Test Makes Students Believe They Did Better on It

Article excerpt

We examined the effect of three variables (test list structure, report option, and framing) on retrospective bias in global evaluations of test performance (postdictions). Participants answered general knowledge questions and estimated the correctness of their performance after each block. The ordering of the questions within a block affected bias: Participants believed they had answered more questions correctly when questions were sorted from the easiest to the hardest than when the same questions were randomized or sorted from the hardest to the easiest. This bias was obtained on global postdictions but was not apparent on item-by-item ratings, pointing to a memory-based phenomenon. In addition, forcing participants to produce a response to every question increased performance without affecting evaluations. Finally, framing the evaluation question in terms of the number of questions answered incorrectly (rather than the number answered correctly) did not affect how positively participants evaluated their performance, but did render postdictions less accurate. Our results provide evidence that students' evaluations of performance after a test are prone to retrospective memory biases.

After taking an exam, students retrospectively evaluate their performance and use this evaluation to guide their expectation of the approximate grade they might achieve on that exam. Sometimes this expectation may be accurate, but in other cases students seem surprised by their scores. The factors that affect such postdictions on tests are the focus of this article. Much previous research has focused on metacomprehension (i.e., how well students think they have understood a text; e.g., Maki & Berry, 1984) and predictions of test performance (i.e., how well students think they are going to do on a test before they have taken it, on the basis of how well they know the material; e.g., Glenberg & Epstein, 1985). Less research has investigated the factors that specifically affect global postdictions (i.e., how well students think they have performed once they have taken a test). The benefits of accurate self-evaluation have been discussed elsewhere (e.g., Hacker, Bol, & Keener, 2008), and include improved self-efficacy and more appropriate study behavior. In addition to these benefits, consider also the decision students have to make after taking exams for which they have the option to cancel their scores. For instance, in 2006-2007, 26.3% of students taking the Law School Admission Test (LSAT) canceled their scores and resat the test at least once (LSAT Repeater Data, n.d.); there are extensive discussions by students in online forums and even semiprofessional advice is available to help students decide whether to keep or cancel their scores (Ivey, 2005). Awareness of additional factors that bear on students' evaluations of performance following a test would make a valuable contribution to these discussions.

Hacker, Bol, Horgan, and Rakow (2000) showed that postdictions tend to be more accurate than predictions, and concluded that students are generally very accurate on judgments made after a test. Whereas predictions are made prospectively and are based on what students think they know, postdictions are made retrospectively and reflect the student's experience of the test (Hacker et al., 2008). In the absence of objective information about test difficulty, predictions are made entirely on the basis of internal states. Postdictions, on the other hand, may be more reliable insofar as they take test difficulty into account. Nevertheless, postdictions, like any metacognitive judgments (Nelson & Narens, 1990), are susceptible to biases, although the sources of bias may differ from those guiding predictions. In addition to biases arising from inaccurate assessment of performance on individual questions (Lichtenstein, Fischhoff, & Phillips, 1982), postdictions are also susceptible to retrospective memory biases that arise from attempts to evaluate the test experience as a whole. …
