Assessing High-Order Skills with Partial Knowledge Evaluation: Lessons Learned from Using a Computer-Based Proficiency Test of English for Academic Purposes

Article excerpt

Introduction

In non-English-speaking countries, students enrolled in master's and PhD programs are usually required to take an English proficiency test (EPT) that assesses their ability to understand and produce technical literature in English, the language of most scientific journals. In practice, such an EPT consists of asking students to translate a passage from a scientific paper or technical book from English (L2) into their mother tongue (L1) and, in some cases, to produce an L2 version of another piece of technical text originally written in L1. The test is commonly prepared each year by a different member of the academic staff, as was the case at the Institute of Mathematics and Computer Science of the University of São Paulo (ICMC), Brazil, until April 1998. This practice is disadvantageous because successive exams can be highly non-uniform, owing to the high degree of subjectivity in the evaluation process. The alternative adopted by some universities and funding agencies is to require the student to pass a general-purpose exam such as TOEFL (http://www.toefl.com) or IELTS (http://www.ielts.org). Neither of these exams, however, evaluates students' competence against the demands of the highly standardized research article written in English. Furthermore, they do not foster the "genre-consciousness" that a novice researcher needs in order to read and write research texts more effectively and efficiently.

To overcome these limitations, the first author of this paper proposed a new type of proficiency exam for graduate students, in which students' competences are evaluated in four modules:

M1: analysis of the structure of a given section of a paper, in which students must identify the section's components;
M2: analysis of the relationships among clauses that are signalled by discourse markers;
M3: knowledge of the conventions of the English language for scientific texts;
M4: knowledge of writing strategies for each component of a paper's sections.

The four modules include questions that assess two cognitive levels from the well-known Bloom's Taxonomy (Bloom, 1956), which divides cognitive objectives into six levels ranging from the simplest behavior to the most complex: knowledge, comprehension, application, analysis, synthesis and evaluation. In addition to the knowledge category, normally targeted by multiple-choice tests, two of the modules (M1 and M2) include questions that assess competences at the fourth level (analysis). To evaluate such abilities, particularly those at a higher cognitive level, we employed an information-theoretic model of knowledge assessment to measure the student's knowledge base. According to this model, a student may have total information, almost total information, partial information, partial misinformation, misinformation or a total lack of information about the topic being assessed. This approach uses a scoring procedure referred to as Admissible Probability Measurement (APM), developed by Shuford & Brown (1974) and employed in computerized exams by Bruno (1986, 1987) and Bruno, Holland & Ward (1988), and in manually marked exams by Klinger (1997).
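To make the scoring procedure concrete, the sketch below illustrates how an admissible (proper) scoring rule can reward honest probability reporting and how a response can be mapped to the informational states listed above. It is a minimal illustration in Python: the logarithmic rule, its constants and the classification thresholds are our own assumptions for exposition, not the specific parameters of Shuford & Brown (1974).

    import math

    def log_score(probs, correct, a=50.0, b=25.0):
        """Score one response under a logarithmic proper scoring rule.

        probs   -- dict mapping each alternative to the probability the
                   student assigned to it (the probabilities sum to 1)
        correct -- key of the correct alternative
        a, b    -- affine constants that set the score scale; the values
                   here are illustrative, not those of Shuford & Brown
        """
        p = probs[correct]
        # Clamp to avoid -infinity when the student assigns zero
        # probability to the correct answer.
        return a + b * math.log(max(p, 1e-6))

    def knowledge_state(probs, correct):
        """Map a probability assignment to one of the six informational
        states named in the text (illustrative thresholds)."""
        n = len(probs)
        p = probs[correct]
        uniform = 1.0 / n
        if p >= 0.9:
            return "total information"
        if p >= 0.7:
            return "almost total information"
        if p > uniform + 0.05:
            return "partial information"
        if abs(p - uniform) <= 0.05:
            return "total lack of information"
        # Most of the probability mass sits on wrong alternatives.
        worst = max(v for k, v in probs.items() if k != correct)
        return "misinformation" if worst >= 0.7 else "partial misinformation"

    # A student spreads belief over three alternatives of an M1 question
    # (the alternative labels are hypothetical):
    response = {"background": 0.6, "gap": 0.3, "purpose": 0.1}
    print(knowledge_state(response, "background"))      # partial information
    print(round(log_score(response, "background"), 1))  # 37.2

Because the logarithmic rule is proper, a student maximizes the expected score only by reporting his/her true degree of belief, which is what allows the six informational states to be distinguished from the reported distribution.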

As we shall discuss later, this new proficiency exam was successful but relied on the work of a lecturer knowledgeable about a number of issues in scientific writing. This severe limitation could only be overcome if the exam were fully automated. This prompted us to develop a computer-based assessment system in which students' ability to read and write scientific literature in English is evaluated with objective questions scored by the APM procedure. The system comprises a test management subsystem and a test delivery subsystem integrated into a single web-based application, which allows access to five different types of users: test administrator, instructor, student, master's program secretary, and general public. …
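As a rough illustration of this access model, the sketch below encodes the five user types and the two subsystems named above. The permission mapping itself is hypothetical, since the excerpt does not specify which role reaches which subsystem.

    from enum import Enum, auto

    class Role(Enum):
        # The five user types named in the text.
        TEST_ADMINISTRATOR = auto()
        INSTRUCTOR = auto()
        STUDENT = auto()
        PROGRAM_SECRETARY = auto()
        GENERAL_PUBLIC = auto()

    # Hypothetical mapping from role to reachable subsystems; only the
    # two subsystem names come from the text, the assignments are assumed.
    PERMISSIONS = {
        Role.TEST_ADMINISTRATOR: {"test_management", "test_delivery"},
        Role.INSTRUCTOR:         {"test_management"},
        Role.STUDENT:            {"test_delivery"},
        Role.PROGRAM_SECRETARY:  {"test_management"},
        Role.GENERAL_PUBLIC:     set(),
    }

    def can_access(role: Role, subsystem: str) -> bool:
        return subsystem in PERMISSIONS[role]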
