The present study was conducted in the context of previous research on the validity, correlates, and temporal stability of clinical psychology program graduates' scores on the national licensing exam, the Examination for Professional Practice in Psychology (EPPP). The purpose of the present study was to determine the characteristics of programs that demonstrated great improvement on the EPPP. Nineteen clinical programs whose mean EPPP scores improved dramatically from 1988-1991 to 1997-1998 were identified. Letters were sent to departmental chairs requesting their explanations for the increases. The themes of greater scientific rigor and selection of better-quality students were salient.
The Examination for Professional Practice in Psychology (EPPP) is the national licensing examination used in almost all U.S. states and Canadian provinces. It has demonstrated validity, e.g., graduates of regionally accredited programs obtain higher mean scores than graduates of regionally unaccredited programs (Templer, Tomeo, Harville, & Pointkowski, 2000). Clinical psychology programs whose graduates score higher on the EPPP have higher admissions standards, a higher ratio of faculty to graduate students, greater research orientation, and approval of the American Psychological Association (Yu, Renaldi, Templer, Colbert, Siscoe, & Van Patten, 1997; Templer & Tomeo, 1998).
The relative EPPP scores of clinical psychology program graduates are stable over time. Templer, Couture, Martinez, and Tomeo (1999) reported a correlation of .80 between the 1988-1995 mean EPPP scores of graduates of clinical psychology programs and the mean scores of these programs in 1997.
The purposes of the present study were to (1) identify clinical psychology programs that demonstrated dramatic improvement over time and (2) explore possible explanations for these improvements.
The means and ranks of the 154 clinical psychology programs that had EPPP mean scores for both the 1988-1991 (Educational Reporting Service, 1992) and the 1997-1998 (Educational Reporting Service, 1999) time periods were determined. A six-year interval was employed because six years constitute approximately a "generation" of graduate students.
Table I contains the 19 most improved programs. The criterion originally used was an N of at least 6 for both time periods and an increase of 31 or more percentile points. Two programs that did not meet the N criterion were nevertheless included because of their dramatic improvement: Bryn Mawr went from the 18th to the 83rd percentile, and the University of Pennsylvania went from the 43rd to the 99th percentile. The order of programs in Table I was randomly determined rather than reflecting degree of improvement.
Table I and a letter describing the purpose of the study were sent to all 19 departmental chairs. The letter said, "I would be most appreciative if you could, in the bottom half of this letter, give your opinion about the reason(s) for these dramatic increases and return the letter in the stamped envelope provided. I am aware of the fact that a few of the programs' Ns are very small, so that chance fluctuations explain some of the improvements. However, if impressions based on those small Ns mesh with those based on larger Ns, their credibility will be strengthened."
Of the 19 letters sent out, there were responses from 12 programs, usually from either the departmental chair or the director of clinical training. One respondent stated she did not know how to explain the improvement. Another respondent indicated that no conclusions are warranted because the findings are an artifact of the small N. A third respondent indicated that the numbers must be incorrect because their program graduates fewer students than the numbers seem to indicate. The third comment may be, at least in part, a function of the present authors' neglecting to say in their cover letter that the numbers pertain to the number of exams taken rather than the number of graduates taking the exam. …