1. This research was partially funded by a York University Faculty of Arts Minor Research Grant to the second author.
2. The authors acknowledge the contributions of Demo Aliferis, Paul Chemabrow, Charlotte Copas, Myra Radzins, and Sara Persaud, our teaching assistants who graded all the tests and categorized the students' comments, and of Dannielle Poirier and Lorraine Chiasson, who coded the data. We also thank the Introductory Psychology students who were good enough to fill out the questionnaires.
David K. Dodd
Eastern Illinois University
Students' perceptions that multiple-choice exams contain “trick” questions may contribute to test anxiety and lead students to view the instructor as an adversary rather than an advocate. Over the past several years, we have developed and used a technique called answer justification (AJ) that allows students to convert any multiple-choice item perceived as “tricky” into a short-answer essay. While an earlier version of our manuscript was under journal review, Nield and Wintre (1986) described and evaluated a similar procedure. The purposes of this article are to compare and contrast our technique with that of Nield and Wintre, to present our own evaluation data, and to summarize the specific benefits of the technique for students and instructors.
With both our technique and that of Nield and Wintre (1986), students have the opportunity to write a brief explanation of their answers for any multiple-choice question that is perceived to be ambiguous or confusing. Students select one “best” alternative and then explain their answer on the back of their answer sheet (Nield and Wintre's method) or on forms provided (our method). A convincing explanation earns credit for a missed question. The most fundamental difference between Nield and Wintre's (1986) technique and ours is that their students can also lose credit for a faulty explanation of a correct answer, whereas our students are not penalized.
We have evaluated our technique in introductory psychology courses (3 sections of 50 to 110 students each), a sophomore-junior level course in human-interaction skills (2 sections of 25 students each), and a junior-senior level course in prejudice and discrimination (35 students). Collectively, our analyses included 345 students and 17 different exam administrations, with exam length varying from 27 to 50 multiple-choice questions. From a total of 44,370 opportunities to use AJ, students used it 505 times (1%). On a typical exam, 25% of the class used AJ; most of those using it did so only once (mode = 1, M = 1.9, range = 1 to 7). Scoring was unnecessary 67% of the time because the student had selected the correct alternative; of the remaining justifications, 24% received full credit, 6% partial credit, and 70% no credit. Justifications tended to be brief and easily scored; on a typical 50-item exam given to a class of 50 students, total scoring time, including modifications to grades, was about 20 min.
Nield and Wintre (1986) evaluated the usage of their technique on a sample of 416 introductory psychology students. Like us, they found usage to be between 1.5 and 2 times per user, and they also did not find the amount of extra grading to be excessive. Over the entire course, 41% of Nield and Wintre's students explained at least one answer, compared to 56% of our students. There are two probable explanations for our apparently higher usage rate. Most obviously, our students had nothing to lose by using AJ, whereas their students could be penalized for incorrect explanations. In addition, we were apparently more lenient in scoring: Among our students who explained incorrect answers, 30% received full or partial credit, whereas only 12% of their students received credit.
We administered a brief, anonymous questionnaire to our students near the end of the semester to evaluate satisfaction with the technique. Of 259 respondents, 94% “liked