Academic journal article Journal of Geoscience Education

Increasing the Significance of Course Evaluations in Large-Enrollment Geoscience Classes

ABSTRACT

The goal of this study was to identify student evaluation of teaching (SET) questions that could be biased against large-enrollment geoscience classes. SET questionnaires were collected from twenty geoscience departments, and individual questions were assessed for potential bias by comparing SET responses across classes of differing size and discipline, using data from the Department of Geography and Earth Sciences at UNC Charlotte.

At UNC Charlotte, lower-level courses receive evaluation scores 6% lower than upper-level courses, and overall ratings from students in larger classes are 12% lower than those in classes with fewer than 75 students. Comparisons between instructors teaching different disciplines should also be reviewed with discretion: when introductory geography and earth science classes are compared, geography instructors score 20% higher than earth science instructors (n = 40 sections for geography, n = 38 sections for earth sciences).

Student evaluations should also be viewed in the context of the grading difficulty of the professor. At UNC Charlotte, evaluations are plotted against the mean grade-point average assigned by the professor. These plots are both summative, when considered by the Reappointment, Promotion, and Tenure (RPT) committee, and formative, when used by the instructor to gauge how difficult and/or effective he or she is relative to other instructors.
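To make the plot described above concrete, a minimal sketch in Python (using matplotlib) is given here; the data points and variable names are entirely hypothetical, since the underlying UNC Charlotte values are not reproduced in this excerpt.

# Hypothetical sketch of the evaluation-versus-GPA plot described above.
import matplotlib.pyplot as plt

# Illustrative values only; each point represents one instructor.
mean_gpa_assigned = [2.4, 2.7, 3.0, 3.2, 3.5]  # mean GPA the instructor assigned
mean_set_rating = [3.1, 3.4, 3.6, 3.9, 4.2]    # mean overall SET rating received

plt.scatter(mean_gpa_assigned, mean_set_rating)
plt.xlabel("Mean grade-point average assigned by instructor")
plt.ylabel("Mean overall SET rating")
plt.title("SET ratings versus assigned GPA (hypothetical data)")
plt.show()

Plotted this way, an instructor can see at a glance whether a low rating coincides with stringent grading, which is the formative use described above.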

INTRODUCTION AND PREVIOUS RESEARCH

The majority of geoscience departments in the United States evaluate instructor quality using two methods: peer review by senior faculty and student evaluations of teaching. While the former method relies on the expertise of experienced and tenured faculty, the latter tool depends on the opinions of untrained and relatively unqualified students. SETs are typically paper-and-pencil questionnaires that serve three purposes: to provide feedback to the instructor, to serve as a tool for personnel decisions such as tenure, promotion, or compensation, and, if published, to serve as an indicator of instructor quality for students. Because evaluation results are so important in the careers of many instructors, the validity and utility of these instruments have become an area of increasing debate (e.g., Abbott et al., 1990; Freeman, 1994; d'Apollonia and Abrami, 1997; Greenwald, 1997; Marsh and Roche, 1997; McKeachie, 1997; Senior, 1999; Bailey et al., 2000; Griffin, 2001; Sproule, 2002; Gray and Bergmann, 2003). Nevertheless, the quality of the tools used to evaluate geoscience instructors of large classes has not been specifically addressed.

The University of North Carolina at Charlotte has revised its teaching evaluation form several times in the last decade (Figure 1). Each revision was intended to make the questions fairer for instructors teaching different disciplines in classes of different sizes. In its present form, however, the evaluation may still be biased against instructors of large-enrollment geoscience classes, as measured by differences of means on student evaluations. One objective of our study was to survey other geoscience departments and to identify SET questions that could be biased against large earth science classes; we evaluated ten years' worth of responses to UNC Charlotte's SET questionnaire to identify these potentially biased questions. We also investigated the use of evaluations by university administration and the relationship between evaluation scores and grading leniency.
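As an illustration of the difference-of-means comparison underlying this analysis, a minimal sketch in Python (pandas) follows; the column names (enrollment, discipline, overall_rating) are hypothetical, and the 75-student cutoff simply follows the threshold mentioned in the abstract.

# Hypothetical sketch of the difference-of-means comparison; the actual
# layout of the UNC Charlotte SET data is not given in this excerpt.
import pandas as pd

def compare_mean_ratings(sections: pd.DataFrame) -> None:
    """Compare mean overall SET ratings across class size and discipline."""
    # Flag large-enrollment sections (75 students as the cutoff).
    sections = sections.assign(large_class=sections["enrollment"] >= 75)

    # Mean overall rating for large versus small sections, as a percent difference.
    by_size = sections.groupby("large_class")["overall_rating"].mean()
    pct = 100 * (by_size[True] - by_size[False]) / by_size[False]
    print(f"Large-class sections rate {pct:+.1f}% relative to small-class sections")

    # Mean overall rating and section count by discipline
    # (e.g., geography versus earth science).
    print(sections.groupby("discipline")["overall_rating"].agg(["mean", "count"]))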

CURRENT EVALUATION STRATEGIES

Student evaluation results are typically returned to the instructor and university administration within several months of the end of the semester. Instructors can use the results to improve teaching methodology and RPT committees can use the results for summative reviews (Senior, 1999). Unfortunately, several geoscience departments discourage formative applications by making the results difficult for instructors to obtain. …
