A survey of the quantitative offerings and requirements of all 51 Canadian undergraduate psychology programs showed that courses in psychological testing are offered less often, and required much less often, than those in statistics and methods. This may reflect a lack of attention to testing in graduate school training and a preference for "experimental" over "correlational" psychology. I argue that testing courses should be required because measurement is a fundamental topic, because certain important debates rest on test data, and because tests are widely used in applied settings. In closing, I present some suggestions for course content.
Over the last 50 years, a number of discussions have been devoted to the content of the psychology undergraduate curriculum in the U.S.A. (Brewer, 1997). There seems to be general agreement that the major in psychology should consist of an introductory course, methods courses, content courses in a variety of areas, and an integrated capstone course (e.g., history and systems) (McGovern, Furumoto, Halpern, Kimble, & McKeachie, 1991).
Given that psychology is a scientific discipline, the methods courses are of particular importance because they unify psychology (Stanovich, 1998). According to Brewer (1997), the 1951 Cornell Conference recommended courses in statistics and "ability." Two conferences in the 1960s and 1970s were not prescriptive, but meetings of the Association of American Colleges in 1988 (McGovern, Furumoto, Halpern, Kimble, & McKeachie, 1991) and an APA-sponsored conference at St. Mary's College of Maryland in 1991 both agreed that the curriculum should include courses in statistics, research methods/design, and psychometrics/individual differences. Although it did not designate specific courses, a British Psychological Society Working Party (1994) recommended that certain content areas and themes appear in the undergraduate curriculum. In particular, it suggested that the psychology graduate should possess the skills of numeracy and quantitative argument, and knowledge of questionnaire and test design, which correspond respectively to courses in statistics, research methods/design, and psychometrics/psychological testing. Notably, these proposals give equal status to the three kinds of quantitative courses.
Do curricula reflect these suggestions? A survey of 222 Ph.D. programs in the U.S.A. and Canada (response rate 84%) found that 88% of the responding departments offered courses in standard ANOVA statistics and 70% in research methods, but only 45% in test theory and 25% in test construction (Aiken, West, Sechrest, & Reno, 1990). Paradoxically, in view of the 88% that reported offering statistics, "95% of departments offer a universally required doctoral level introductory statistics sequence" (pp. 724-725). However, and of more relevance here, Aiken et al. infer that "very few of the departments have any requirement in measurement."
Most (77%) of the responding departments in the Aiken et al. survey thought that over 75% of their graduates could apply statistical techniques to their own research, and 83% thought the same of laboratory techniques, but only 23% of psychological testing concepts and procedures (classical test theory, reliability, and validity). Thus, although a clear majority of departments offered courses in statistics and methods and thought their graduates competent in them, less than half offered courses in testing and fewer thought their graduates competent in that area. Although they did not have data from the 1960s and relied on the memory of colleagues, Aiken et al. argue that measurement and testing have declined in importance at the graduate level.
Poor postgraduate testing training has given rise to what Byrne (1996) refers to as Lambert's (1991) "crisis in measurement literacy," which means that sound principles of psychometrics are often ignored (Merenda, 1990). …