The current investigation examines the impact of student-prepared testing aids on student performance. Specifically, it explores the plausibility of a dependency hypothesis versus an engagement hypothesis as explanations for the potential impact of student-prepared testing aids. Unlike many studies on this topic, which draw participants from general psychology classes, this study involves undergraduate students in an applied statistics section. The results support the engagement hypothesis: students who hand-generated their testing aids performed significantly better than students who computer-generated their aids. Interestingly, the greatest impact on student performance appeared on the applied portion of the exam rather than on the selected-response portion.
The use of student-prepared testing aids, commonly referred to as cheat sheets or crib sheets, is a commonplace assessment practice in education (Butler & Crouch, 2011; Larwin & Larwin, 2009). However, opposing hypotheses exist about the reasons for their impact on student performance and learning. Dorsel and Cundiff (1979), Dickson and Bauer (2008), and Funk and Dickson (2011) concluded that student-prepared testing aids benefit students chiefly by serving as a source of the information needed for an exam: students rely on, or depend on, the aid rather than learning the information and committing it to memory. According to this dependency hypothesis, the act of creating a testing aid does not enhance student learning and memory; it amounts instead to a clerical exercise in producing a reference tool that the student depends on to supply the needed information during the exam.
However, a number of potential extraneous variables operating within the Dickson and Bauer (2008) study qualify this evidentiary challenge. In this repeated-measures study, Dickson and Bauer found that students did perform significantly better when they prepared and used testing aids on a multiple-choice psychology exam than when those same students had prepared, but could not use, testing aids on a pre-test composed of a subset of identical questions. Because all students had prepared testing aids before the pre-test-then-exam sequence, the better performance on the exam, where aid use was allowed, led the authors to conclude that constructing the aids did not itself enhance student learning; they argued instead that using the aids simply enhanced student performance. Dickson and Bauer reasoned that, had the act of constructing testing aids actually enhanced learning, students should have performed similarly on the pre-test and the exam.
However, in the Dickson and Bauer (2008) study, students knew that the pre-test would not count toward their grade, so the quality of their pre-test performance mattered little to them personally, which may have undermined their motivation to perform well. Second, students were under the impression that they would be able to use their student-prepared testing aids on their examinations; thus, even though the pre-test did not "count," facing an unexpected pre-test without the potential benefit of their testing aids may have generated test anxiety that attenuated pre-test performance. Finally, because the pre-test items and the exam items on which performance was assessed were identical in this repeated-measures design, the improved scores on the exam, the second assessment, may reflect a practice effect. Indeed, Dickson and Bauer presented evidence that students performed significantly better on the exam questions that were identical to the pre-test questions than on the exam questions that had not appeared on the pre-test. …