Academic journal article, Language Learning & Technology

Concerns with Computerized Adaptive Oral Proficiency Assessment

Article excerpt

A Commentary on "Comparing Examinee Attitudes Toward Computer-Assisted and Other Oral Proficiency Assessments" by Dorry Kenyon and Valerie Malabonga

There is no doubt that computers and related technology have already acquired considerable importance in the development, administration, scoring, and evaluation of language tests, as this special issue of Language Learning & Technology demonstrates (see also Alderson, 2000; Brown, 1997; Chalhoub-Deville & Deville, 1999; Chapelle, 2001; Dunkel, 1999). Given the integral role computers play in our lives, and advances in technology which will make possible the measurement of an expanding array of constructs, it is clear that the use of computer-based tests (CBTs) for language assessment and other educational/occupational assessment purposes will become increasingly predominant in the immediate future (Bennett, 1999). However, what is unclear is the extent to which CBTs will offer the most appropriate means for (a) informing the interpretations that language educators want to make about the language skills, knowledge, or proficiencies of L2 learners and users; and (b) fulfilling the intended purposes and achieving the desired consequences of language test use (Norris, 2000).

Of particular concern for language testing is the extent to which CBTs may contribute to assessment of productive language performances, especially those involving speaking abilities (Alderson, 2000; Bernstein, 1997; Chalhoub-Deville & Deville, 1999). Of course, computerized tests of speaking have been developed which elicit production on constrained tasks and automatically score isolated features such as fluency and pronunciation (e.g., Ordinate Corporation, 1998), and seminal work is underway in developing an integrated speaking component for the CBT Test of English as a Foreign Language (Butler, Eignor, Jones, McNamara, & Suomi, 2000). However, despite such efforts, it is questionable whether the full range of individual and interactive speaking performances that language educators are interested in will be adequately elicited in computerized formats; likewise, it is doubtful that the complexities of such performances and the inferences that we make about them will be captured by automated scoring and speech recognition technology (Burstein, Kaplan, Rohen-Wolf, Zuckerman, & Lu, 1999). Furthermore, because it is unlikely that complex speaking performances will be automatically scoreable, the applicability of computerized adaptive testing (CAT) for assessing speaking, among other complex abilities, remains unclear (see related discussions in Wainer, 2000).

Recent research and development efforts at the Center for Applied Linguistics (CAL) demonstrate one approach to combining available technology with advances in measurement theory (i.e., adaptive testing) in order to move beyond the testing of receptive language skills (e.g., Chalhoub-Deville, 1999) and towards a creative solution to the computerization of direct tests of complex speaking abilities. As such, the Computerized Oral Proficiency Instrument (COPI) presents the language testing community with a good opportunity to further consider just how CBT capabilities may best be matched with intended uses for language tests. In this brief commentary, I will address (a) what the COPI has to offer to language testing, (b) some of the key issues that should be addressed in future research on the COPI, and (c) several fundamental concerns associated with attempts to computerize L2 speaking assessment.

WHAT DOES THE COPI HAVE TO OFFER?

The COPI features several technical and procedural innovations which may offer improvements over other types of technology-mediated oral proficiency assessment (e.g., the tape-based Simulated Oral Proficiency Interview, or SOPI), especially in terms of examinees' affective responses and efficiency in administration and scoring. As reported in Kenyon and Malabonga's article, the COPI utilizes technology to address affective concerns by introducing examinees to the computerized test format with a hands-on tutorial, by increasing examinee control over topic selection and planning/response time, and by introducing an adaptive algorithm in order to present examinees with speaking tasks that are not overly easy or difficult. …
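The article does not specify the internal workings of the COPI's adaptive algorithm, but the general logic of adaptive task selection can be illustrated with a minimal sketch. All names, level labels, and thresholds below are hypothetical, chosen only to show the idea of stepping task difficulty up or down based on a provisional rating of the previous performance; they are not drawn from the actual COPI implementation.

```python
# Hypothetical sketch of adaptive task selection. The level labels and
# rating thresholds are illustrative assumptions, not the COPI's own.

TASK_LEVELS = ["novice", "intermediate", "advanced", "superior"]

def next_task_level(current_index, last_rating):
    """Return the index into TASK_LEVELS for the next speaking task.

    Moves one level up after a strong performance, one level down
    after a weak one, and otherwise stays at the current level.
    last_rating is a provisional score on the previous task (0.0-1.0).
    """
    if last_rating >= 0.8:
        step = 1       # strong performance: try a harder task
    elif last_rating <= 0.4:
        step = -1      # weak performance: step down
    else:
        step = 0       # adequate performance: stay at this level
    # Clamp so the examinee is never assigned a task outside the scale.
    return max(0, min(len(TASK_LEVELS) - 1, current_index + step))

# Example: an intermediate-level examinee who performs strongly is
# moved up to an advanced-level task.
print(TASK_LEVELS[next_task_level(1, 0.9)])  # advanced
```

The aim of such a rule, as Kenyon and Malabonga suggest, is affective as much as psychometric: examinees face tasks that are neither overly easy nor overly difficult.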
