Evaluating Oral Presentations Using Behaviorally Anchored Rating Scales

INTRODUCTION

Improving students' oral communication skills is a topic of much concern in today's competitive environment (e.g., Buckley, Peach, and Weitzel 1989; Porter and McKibbin 1988). Evaluating oral presentations in the classroom is often viewed as a subjective task, exacerbated by the lack of formal training of most business academicians in how to provide useful feedback to students. Behaviorally Anchored Rating Scales (BARS) are used as a tool for performance appraisal "indicating how performance can be improved" (Cocanougher and Ivancevich 1978). BARS forms have been used extensively in business as a means of employee evaluation (e.g., Ingram and LaForge 1992; Schwab, Heneman, and Decotiis 1975), and at least one study has applied the technique to the evaluation of students in written case analyses (McIntyre and Gilbert 1994).

The purpose of this paper is to present a more formal and objective approach to performance evaluations of oral presentations using a BARS approach. The BARS form is presented as a means of evaluation that can: 1) provide effective feedback on students' performance during oral presentations; 2) provide guidance on expected performance, thereby clarifying which aspects of behavior should be improved in future presentations; and 3) ensure equitable evaluations. A brief review of the development of the BARS instrument is provided prior to outlining its application in the classroom.

BARS DEVELOPMENT

Cocanougher and Ivancevich (1978) detail a five-step procedure for developing a BARS evaluation system for a sales force, and McIntyre and Gilbert (1994) present an adaptation of the process for use in evaluating students presenting case analyses. The development of the BARS form for oral presentations is based on these two articles and incorporates an option suggested by McIntyre and Gilbert (1994): the inclusion of both students and faculty in the development process.

Step 1 in the process involves the generation of critical incidents (examples of effective and ineffective behavior during oral presentations). For this step, an MBA-level class was chosen. Students were asked to read Flanagan's (1954) article regarding critical incidents and received an instruction sheet stating the objectives of the process. Twenty-one students generated a list of 298 critical incidents during a brainstorming session. After the brainstorming session, the critical incidents were reviewed and compiled in random order. Seventeen incidents were considered to be irrelevant (most appeared to assess written presentations rather than oral presentations), leaving 281 available for Step 2 in the process, refinement of the critical incidents and the creation of performance dimensions (the overall qualities defined by specific critical incidents).

In a second session, the same MBA students were asked to generate a list of performance dimensions by refining the 281 critical incidents into a smaller set of more general categories. The original dimensions included allocation of time, appearance/demeanor, supplementary materials, organization/preparation, coverage of material, creativity/interest, and communication skills. The students then placed each critical incident into one of the seven performance dimensions.

Step 3 is a verification check of the relationship of critical incidents to performance dimensions. For this step, Cocanougher and Ivancevich (1978) recommend that a second group be used. Seven full-time faculty members were asked to place the critical incidents into one of the seven performance dimensions. For a critical incident to be retained for the BARS form, 60% of both the student and faculty groups had to assign it to the same performance dimension. One performance dimension (creativity/interest) was eliminated because too few critical incidents were assigned to it, and 132 incidents were retained for further evaluation. …
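The 60% retention rule used in Step 3 can be expressed precisely as a short computation. The sketch below is illustrative only: the dimension names and vote counts are hypothetical, and the source does not specify how ties or exact-threshold cases were handled, so this assumes a simple majority-share test applied independently to each group.

```python
from collections import Counter

def retained_dimension(student_votes, faculty_votes, threshold=0.60):
    """Return the agreed performance dimension for one critical incident,
    or None if the incident fails the retention rule.

    An incident is retained only if at least `threshold` (60%) of BOTH the
    student group and the faculty group assign it to the SAME dimension.
    """
    def modal_share(votes):
        # Most frequently chosen dimension and its share of the group's votes.
        dim, count = Counter(votes).most_common(1)[0]
        return dim, count / len(votes)

    s_dim, s_share = modal_share(student_votes)
    f_dim, f_share = modal_share(faculty_votes)
    if s_dim == f_dim and s_share >= threshold and f_share >= threshold:
        return s_dim
    return None

# Hypothetical example: 21 students, 7 faculty (group sizes from the paper).
students = ["communication"] * 15 + ["organization/preparation"] * 6  # 15/21 ≈ 71%
faculty = ["communication"] * 5 + ["appearance/demeanor"] * 2         # 5/7 ≈ 71%
print(retained_dimension(students, faculty))  # → communication

# If faculty agreement drops to 4/7 (≈ 57%), the incident is dropped.
faculty_split = ["communication"] * 4 + ["appearance/demeanor"] * 3
print(retained_dimension(students, faculty_split))  # → None
```

Applying such a rule to each of the 281 incidents yields the retained subset; in the study, 132 incidents survived this check.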