Academic journal article
By James, David
College and University, Vol. 79, No. 4
Given the current emphasis on accountability in education, college teachers need to be prepared to prove that students are learning in their classes. This action research was conducted in three English Composition classes. Using a pretest/posttest model, concrete evidence was found that students learned more about the mechanics of writing as a result of class content.
In this era of educational accountability, what can teachers do to prove that students are learning? How do teachers judge, other than subjectively, the effectiveness of their techniques? What concrete evidence can teachers summon to the forefront to document, to themselves and to others, that the lectures, activities, exercises, readings, and assignments for class made a significant difference in the students' knowledge base?
The elementary and secondary school systems point to various state and national test scores as evidence of growth or decline. Examining the K-12 level, Richard Sagor (2000) writes, "Teachers and schools face ridicule and loss of funding if they fail to meet community expectations" (p. 11). Standards-driven accountability is imposed upon them. Some secondary schools cite the SAT or ACT scores of their students as concrete evidence. But college teachers, in general, use none of these measures. So far, they have not had to document their effectiveness in any systematic way. Most professors profess and then test, grade, and assume the efficacy of their practices. If students do not learn, the failure is attributed to the students' lack of motivation or discipline. The assessment movement found nearly everywhere today is a valuable step for teachers who seek helpful feedback from students, but it is largely subjective and qualitative in form (Angelo and Cross 1993). The data accumulated are used to better direct teachers' practices in the hope of enhancing learning. There is nothing wrong with these data, but student testimonials are rarely viewed as concrete evidence when it comes to measuring student learning levels.
At Oakland Community College, action research was conducted to explore the question of evidence of learning in three English Composition classes (ENG 1510). In Fall 2003, three ENG 1510 classes were given a pretest on the first day of class and a posttest on the last day of the semester. The test consisted of 40 items (one point per item) that asked students to insert punctuation such as commas, semicolons, colons, apostrophes, and quotation marks; identify and correct faulty sentences; select proper verbs and pronouns in examples; answer true/false questions about plagiarism; and match sentence types (e.g., comma splice, fused, complex, compound).
Since the data collected were on the interval scale and pretest and posttest results were available on all students, a t-test for nonindependent samples was conducted (Ary, Jacobs, and Razavieh 1990, p. 196). It should be noted that the pretest was not given back to students, nor were their individual scores revealed to them. Only class averages were shared.
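The t-test for nonindependent samples compares each student's posttest score to his or her own pretest score, so the statistic is computed from the matched differences rather than from two independent groups. A minimal sketch of that calculation follows; the scores shown are hypothetical and are not the study's actual data.

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """t statistic for nonindependent (paired) samples:
    t = d_bar / (s_d / sqrt(n)), where d is each posttest - pretest difference."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    d_bar = mean(diffs)                 # mean of the paired differences
    s_d = stdev(diffs)                  # sample standard deviation of the differences
    t = d_bar / (s_d / math.sqrt(n))
    return t, n - 1                     # t statistic and degrees of freedom

# Hypothetical pretest/posttest scores for six students (40-item test):
pre  = [22, 25, 24, 20, 27, 23]
post = [28, 27, 29, 26, 30, 27]
t, df = paired_t(pre, post)
```

The resulting t value would then be compared against a critical value for n - 1 degrees of freedom to decide whether the pretest-to-posttest gain is statistically significant.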
The pretest class averages were remarkably consistent for each group:
In a typical test situation, these score averages would receive 'D' grades, hovering around the 60 percent correct level. However, no individual grades were given. Instead, the pretest served as a teaching device for the instructor: closer analysis of the items highlighted the general areas of weakness for the classes. …