In an era of increased scrutiny of scarce college resources and a desire to evaluate our curriculum against other programs, we undertook an objective evaluation of our psychology curriculum. The purpose of this study was to examine our undergraduate psychology curriculum to determine which courses predicted performance on a national outcome measure, the Major Field Test in Psychology II (MFTP), and to compare our graduates against national norms. A multiple regression analysis of grades in core psychology courses against the total MFTP score was conducted. Results of the analysis revealed that only the research methods course predicted the total MFTP score (p < .001). Furthermore, a one-sample t-test of the students' scores against the national mean for administrations in 1995-1998 revealed no significant difference between our students' performance and the national norm (p = .771).
Mentoring psychology majors in an undergraduate program who hope to become successful applicants to graduate psychology programs is a challenging process. Many graduate psychology programs, particularly in the clinical area, have a low acceptance-to-application ratio (Mayne, Norcross, & Sayette, 1994). This problem stimulated our Psychology Department at Ursuline College to explore program and curriculum changes that might enhance our students' performance and help them become better-prepared applicants in the graduate school process.
One need only review the extensive number of variables in the graduate psychology application process to become aware of the need for a systematic approach to undergraduate preparation for graduate school. Factors in the application protocol include:
* the student's GPA,
* the student's GPA in the major,
* verbal and quantitative scores on the Graduate Record Examination (GRE),
* the score on the GRE Advanced Test in Psychology,
* internship experiences,
* professional recommendations,
* performance in research courses, and
* the design and presentation of original research (Mayne et al., 1994).
Along with encouraging our students to engage in appropriate volunteer work and internship experiences, we revised one aspect of our psychology curriculum to emphasize research courses and an original research project involving close faculty supervision and monitoring. Consistent feedback from former students and from graduate schools in psychology had urged us to enhance the research component of our curriculum.
To build a research-based curriculum, our faculty created a "Graduate School Track" designed for students who aspire to a graduate career in psychology. This included revamping our core curriculum, creating a capstone culminating seminar that reviews all major content domains in psychology, and enhancing and adding research design courses. These research design courses assist students in planning and implementing original research and presenting it at conferences such as the Ohio Undergraduate Psychology Conference and the Midwestern Psychological Association.
Once we had revised significant elements of our program, we needed an external and objective measure that would provide feedback on the performance of students who had completed the curriculum. We chose the Major Field Test in Psychology II (MFTP; Educational Testing Service, 1998a) as this measure. This outcome measure provides a comparative assessment of psychology majors' academic performance across a number of relevant content domains in the discipline.
We also felt the scores on the field test would act as an excellent criterion measure for assessing which specific aspects of students' performance in the undergraduate curriculum would be predictive of successful MFTP performance. Utilizing the extensive database of the Educational Testing Service, we could track our students' progress and provide them with an objective assessment of where they stood vis-à-vis other graduating psychology majors across the country.
Assessing which components of the program predicted mastery of a comprehensive outcome measure would serve two purposes. First, it would identify which specific components of our program predicted success and what to revamp and revise in our curriculum to further develop the program. Second, it might indicate that students who perform well are enhancing their chances of acceptance into a competitive graduate program. To this end we focused on two basic questions: which specific curriculum factors predict MFTP scores, and how do our graduates compare against national norms?
Method

Participants

Data were collected over a four-year period (1996-1999) from graduating seniors majoring in psychology at Ursuline College, a small liberal arts women's college in northeastern Ohio.
Materials and Procedure
The undergraduates were administered the outcome measure MFTP (Educational Testing Service, 1998a) as part of their culminating seminar in psychology. This capstone course is taken in the fall semester of the senior year. Scores on this exam include results for both students and the department based upon national norms.
Course grades from core psychology major courses were compiled from all individuals along with MFTP scores. Core psychology major courses included: General Psychology, Introduction to Statistics, Abnormal Psychology, Personality Theories, Research Methods and Capstone Culminating Seminar in Psychology.
Results

Two global analyses of the data were conducted. A multiple regression analysis entering all core psychology major course grades as predictors of the MFTP total score in a single step indicated that only Research Methods was a significant predictor (β = .749, t(33) = 4.317, p < .001). Prediction of the MFTP total score was substantial (R² = .701). All other predictors were non-significant (p > .05).
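The regression itself was presumably run in a standard statistics package. As a minimal, self-contained sketch of the core computation, the following Python function fits an ordinary least squares line for a single predictor; the study's actual model entered all six course grades simultaneously, so this bivariate version (with fabricated grades and MFTP totals, not the study's data) only illustrates how a slope, intercept, and R² are obtained:

```python
def simple_regression(x, y):
    """Ordinary least squares with one predictor.

    Returns (slope, intercept, r_squared). This is a bivariate
    sketch only; the study's model used six predictors at once.
    """
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    syy = sum((yi - mean_y) ** 2 for yi in y)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    r_squared = (sxy * sxy) / (sxx * syy)
    return slope, intercept, r_squared

# Hypothetical Research Methods grades (4.0 scale) and MFTP totals,
# constructed so that MFTP = 100 + 15 * grade exactly; real data
# would of course be noisy and R² would fall below 1.
grades = [2.0, 2.5, 3.0, 3.5, 4.0]
mftp = [100 + 15 * g for g in grades]
slope, intercept, r2 = simple_regression(grades, mftp)
print(slope, intercept, r2)  # 15.0 100.0 1.0
```

Significance of the slope (the reported t and p) would then be judged against the t distribution, which is most easily done in a statistics package.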
The second analysis compared the performance of our department's graduates on the MFTP against the national norms for the test. National norms were calculated by the Educational Testing Service (1998b) for all MFTP administrations in the years 1995-1998. Total MFTP scores from all administrations of the test at our college were analyzed in a one-sample t-test against the national norm mean of 156.5. The analysis revealed that our departmental mean (M = 156.11, SD = 13.02) did not differ significantly from the national mean (t(96) = -.292, p = .771).
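The one-sample t-test is simple enough to sketch directly. The Python function below compares a set of scores against a fixed norm mean, as was done here against 156.5; the scores in the example are invented for illustration, and the p-value would be read from a t table or computed in a statistics package:

```python
import math

def one_sample_t(scores, norm_mean):
    """One-sample t statistic against a known population mean.

    Returns (t, df). Significance is then judged against the
    t distribution with df degrees of freedom.
    """
    n = len(scores)
    mean = sum(scores) / n
    # Sample variance with the n - 1 (Bessel) correction.
    var = sum((x - mean) ** 2 for x in scores) / (n - 1)
    std_error = math.sqrt(var / n)
    return (mean - norm_mean) / std_error, n - 1

# Hypothetical MFTP totals compared against the national mean of 156.5.
t, df = one_sample_t([150, 160, 155, 165, 145], 156.5)
print(round(t, 3), df)  # -0.424 4
```

With the department's full sample (df = 96), the reported t of -.292 corresponds to a p of .771, i.e., no detectable difference from the norm.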
Discussion

The results indicate that scores on our outcome measure are best predicted by performance in our major research course. We find these results intriguing because they are consistent, to some degree, with feedback from student applicants and from the graduate schools we have interviewed. A demonstration of research skills appears to be a major criterion in assessing the suitability of graduate applicants' credentials.
It is also worth noting that in our research courses we emphasize a hands-on learning experience; that is, students are assigned independent projects and later choose a topic of their own interest, which involves close supervision and contact with a faculty member with similar interests. We have hypothesized that this approach, which is distinctively different from many of our other courses, might enhance student motivation, independent working skills, and a sense of competence. These observations seem consistent with the work of Spilich (1997), who points out that increased positive scores on the MFTP are directly traceable to the style of teaching and mentoring of undergraduate psychology students. For this reason, the predictive power of performance in our research course is not a surprise to us. Anecdotally, we suspected this all along. Strong students in Research Methods are typically the ones who have the most engaging research projects, and are the ones who continue in our Graduate School Track of the curriculum.
The second analysis, which examined our graduates' performance on the MFTP against national norms, revealed that the overall performance of our graduates is quite average. This is all the more remarkable because completing the MFTP in our program is not associated with any course grade, nor is achieving a minimal percentile score a requirement for graduation in our department. In essence, there is no external motivation to do well on this achievement test. This is one of the few times that one might cheer average performance in anything, let alone achievement. We are a department of only two full-time faculty, and seem to be delivering a sound educational product with very limited resources. Although students graduating from the department as a whole are no different from the national average, each year there is always a student or two who achieves at or above the 95th percentile nationally, and several who score above the 80th percentile. If we included externally motivating factors to persuade students to perform well on the MFTP, we are hopeful that our department average might rise above the national mean.
Comparisons of students' MFTP performances before and after the curriculum changes are not possible: the implementation of the major and subsequent minor curriculum changes is confounded with the implementation of the MFTP assessment, and insufficient pre-revision scores exist. Objectively, however, the research productivity of our undergraduate majors has increased markedly alongside the curriculum changes. During those four years (1996-1999), 27 papers and posters were presented at state or regional conferences by our undergraduates, either as sole authors or working in pairs. Prior to this period, no such presentations occurred. We also feel that the changes in our undergraduate curriculum have helped to create an academic climate in which novice students are now being mentored by more advanced students in the research process. All students now seem much more attuned to the requirements of a successful graduate school application.
At this time we would like to evaluate further which non-intellective factors, such as class size, type of project, degree of independence in designing and implementing a research project, and mentoring style (e.g., close versus cursory supervision), might further predict scores on our outcome measure.
References

Educational Testing Service. (1998a). Major Field Test in Psychology II. Princeton, NJ: Author.
Educational Testing Service. (1998b). Major Field Tests: Comparative Data Guide and Descriptions of Reports. Princeton, NJ: Author.
Mayne, T. J., Norcross, J. C., & Sayette, M. A. (1994). Admission requirements, acceptance rates, and financial assistance in clinical psychology programs: Diversity across the practice-research continuum. American Psychologist, 49, 806-811.
Spilich, G. J. (1997). Does undergraduate research pay off? Council for Undergraduate Research Quarterly, 18, 57-59, 89-90.
Thomas W. Frazier, Ph.D. and Christopher L. Edmonds, Ph.D., Associate Professors of Psychology, Ursuline College.
Correspondence concerning this article should be addressed to Dr. Christopher L. Edmonds, Department of Psychology, Ursuline College, 2550 Lander Road, Pepper Pike, Ohio 44124. Email: email@example.com.