College Student Journal

An Examination of College-Wide Student Teaching Evaluations
Article excerpt

This paper examines student responses to a new teaching evaluation instrument developed in the spring 1997 semester. Correlations between scores on the teaching evaluation and student preparation time are also examined. Students were asked to keep diaries recording their daily study time for a one-week period. A strong relationship was found between the instructor's time demands and scores on the evaluations. This relationship can be exploited by faculty, who can raise their scores by decreasing demands on students. To avoid this exploitation, departments need to develop a consensus on time demands and use that consensus as part of the teaching evaluation process.

Introduction

During the spring 1997 semester, the College of Arts and Sciences of the University of Louisville adopted a new teaching evaluation instrument; a copy of the questions is attached in the appendix. It was decided that the instrument should concentrate on two objectives: behaviors and outcomes. The first seven questions focus on behaviors required of every faculty member, such as providing a syllabus and conducting class. The outcomes were more difficult to judge: because a College of Arts and Sciences encompasses so many different courses and teaching styles, the outcome questions had to be worded generally enough to accommodate those differences, which left the wording somewhat convoluted.

To summarize the responses to the questions, it was decided to use medians instead of means, on the grounds that medians are less susceptible to extreme values. For the outcome questions, the medians were generally higher than the means, although there were exceptions. One real problem with using medians is that a median is not a unique value; it was therefore decided to use the supremum of all possible median values. After one year, the Associate Dean for Personnel decided to go back to means, primarily because she had difficulty comprehending medians.
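
To make the supremum convention concrete: for an even number of ordinal responses whose two middle values differ, any value between them satisfies the definition of a median, and the supremum is simply the upper of the two middle values. The sketch below (Python, with a hypothetical helper name, not the College's actual procedure) illustrates the rule.

    import numpy as np

    def supremum_median(responses):
        # Supremum of all values that qualify as a median of ordinal responses.
        # With an odd number of responses the median is the unique middle value;
        # with an even number, any value between the two middle order statistics
        # is a median, so the supremum is the upper of the two.
        x = np.sort(np.asarray(responses))
        if x.size == 0:
            raise ValueError("no responses")
        return x[x.size // 2]  # middle value (odd n) or upper middle value (even n)

    # Four responses on a 5-point scale: every value in [4, 5] is a median,
    # so the supremum is 5, whereas np.median would report 4.5.
    print(supremum_median([3, 4, 5, 5]))  # -> 5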

It also seemed useful to examine the results of the teaching evaluations statistically.

Evaluation of Responses

The reliability of the testing instrument was considered first. Cronbach's alpha, computed on questions 8-16, was 0.85; questions 1-7 deal with course mechanics and were not intended to have any discriminating power. In addition, a factor analysis was performed on the student characteristics collected along with the evaluation questions. Five factors were examined: time required for the course, student performance, maturity level, grade level, and ethnic subpopulation.
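
For reference, the 0.85 figure corresponds to the standard internal-consistency formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The following sketch shows how such a value can be computed from a raw (students x items) response matrix; the data here are simulated, not the College's.

    import numpy as np

    def cronbach_alpha(item_scores):
        # Cronbach's alpha for an (n_students x k_items) matrix of item scores:
        # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
        X = np.asarray(item_scores, dtype=float)
        k = X.shape[1]
        item_vars = X.var(axis=0, ddof=1)      # variance of each question
        total_var = X.sum(axis=1).var(ddof=1)  # variance of each student's total
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Simulated responses: 50 students answering questions 8-16 (nine items),
    # built so that items share a common tendency per student.
    rng = np.random.default_rng(0)
    base = rng.integers(3, 6, size=(50, 1))
    demo = np.clip(base + rng.integers(-1, 2, size=(50, 9)), 1, 5)
    print(round(cronbach_alpha(demo), 2))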

Evaluations were examined across the College of Arts and Sciences. For all departments, overall averages and medians were high, ranging from 3.5 to 5 on a 5-point Likert scale (Figure 1). Overall, the natural science departments scored lower than the social sciences, which in turn scored lower than the humanities, but these differences were not statistically significant. There was also an increase by class level, with senior-level (400) courses highest and freshman-level (100) courses lowest (Figure 2); this difference was statistically significant (p = 0.042).
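
The article does not state which test produced p = 0.042; one reasonable choice for ordinal evaluation scores grouped by course level is a Kruskal-Wallis test, sketched below on simulated data for illustration only.

    import numpy as np
    from scipy import stats

    # Simulated overall scores (1-5 scale) for four course levels, with the
    # senior level slightly higher, mirroring the pattern described for Figure 2.
    rng = np.random.default_rng(1)
    level_100 = rng.normal(4.0, 0.4, 40).clip(1, 5)
    level_200 = rng.normal(4.1, 0.4, 40).clip(1, 5)
    level_300 = rng.normal(4.2, 0.4, 40).clip(1, 5)
    level_400 = rng.normal(4.3, 0.4, 40).clip(1, 5)

    H, p = stats.kruskal(level_100, level_200, level_300, level_400)
    print(f"H = {H:.2f}, p = {p:.3f}")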

[FIGURES 1-2 OMITTED]

Within college divisions, there is considerable variability. Many departments teach only elective courses or courses for their own majors; other departments carry a heavy load of service courses. Figure 3 demonstrates that the general education departments have lower student teaching evaluations (p = 0.001).

[FIGURE 3 OMITTED]

It was also considered whether student characteristics had any impact upon responses to the evaluation questions. In particular, it was of interest to determine whether there was a relationship between responses to the instructor-satisfaction item and the amount of time the course required. In each case, there was a strong correlation, but the correlation varied by department. …
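
As an illustration of how such a correlation might be computed from the diary data, the sketch below uses Spearman's rank correlation, a reasonable choice for an ordinal satisfaction item; the article does not specify the statistic used, and the numbers are invented.

    import numpy as np
    from scipy import stats

    # Invented per-student data: weekly study hours from the diaries and the
    # response to the instructor-satisfaction item (1-5 Likert scale).
    hours = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 12])
    satisfaction = np.array([5, 5, 4, 4, 4, 3, 3, 3, 2, 2])

    rho, p = stats.spearmanr(hours, satisfaction)
    print(f"rho = {rho:.2f}, p = {p:.4f}")  # strong negative rank correlation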
