Academic journal article Educational Technology & Society

A Two-Stage Multi-Agent Based Assessment Approach to Enhance Students' Learning Motivation through Negotiated Skills Assessment

Article excerpt

Introduction

The use of Simulation-based systems for education and training purposes is still hindered by a lack of methods and tools to assess learners' progress during a training session. In classroom-based learning, for instance, assessment is usually conducted in two ways (formative and summative) and is performed by human experts. In Simulation-based learning, however, these assessment methods become inappropriate, as they often consist of negative feedback without explanation or improvement guidance, which can lead a learner to lose motivation and stop learning. Furthermore, when it comes to assessment, there is no appropriate Computer-based assessment methodology adapted to Simulation-based learning and training (Ekanayake et al., 2011). Currently, skills assessment in training simulations is often conducted by human instructors using subjective qualitative methods (based on human expertise), which are difficult to automate as expected of Simulation-based learning systems with regard to reducing instructional time and costs (Eck, 2006).

In order to help students better cope with difficulties encountered in solving problems, many researchers have developed intelligent assessment tools based on artificial intelligence approaches (Stathacopoulo, 2005; Huang, 2008). For example, the conceptual framework developed by Mislevy et al. (2003) adopts an Evidence-Centered Design (ECD), which informs the design of valid assessments and can yield real-time estimates of students' competency levels across a range of knowledge and skills. However, the following issues in existing assessment models require further investigation:

* Assessment tools often perform a single-stage evaluation of students' skills, and focus more on producing marks than on explaining in detail what the students failed to understand or put into practice (Chang et al., 2006). Furthermore, the feedback given to learners may be insufficient and lack the accuracy needed to help students improve.

* Generally, assessment tools set a threshold score that must be reached for a test to be passed. This penalizes students whose final score falls just below the passing limit: does a student with a final score of 9.9 really have significantly less knowledge than a student with a final mark of 10, especially once potential error margins are taken into account?

* Existing assessment tools focus on measuring learners' performance regardless of whether the assessment helps sustain learners' motivation and keeps them from giving up on learning.

These issues can be addressed, first, by refining the skill-assessment criteria so as to pinpoint which part of the learning process went wrong. Second, marks should be handled with an error margin, thus avoiding threshold effects where a small difference in score changes the assessment outcome significantly. Moreover, allowing only limited compensation between the different assessment criteria (as when deciding whether a student should graduate or not) would make the assessment more flexible, closer to the way human assessors proceed. Finally, the feedback reported to the student can then be detailed rather than purely negative, reducing the demotivation issue.
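As a concrete illustration of these remedies, the minimal Python sketch below softens the hard pass threshold with a fuzzy membership function and applies limited compensation across criteria. The 0-20 marking scale, the pass mark of 10, the ±0.5 error margin and the compensation floor are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (not from the paper): softening a hard pass threshold with a
# fuzzy membership function and applying limited compensation across criteria.
# The scale (0-20), pass mark (10), margin (0.5) and floor (0.3) are assumptions.

def pass_membership(score: float, pass_mark: float = 10.0, margin: float = 0.5) -> float:
    """Degree of membership in the 'pass' fuzzy set, ramping linearly across
    [pass_mark - margin, pass_mark + margin] instead of using a hard cut-off."""
    if score <= pass_mark - margin:
        return 0.0
    if score >= pass_mark + margin:
        return 1.0
    return (score - (pass_mark - margin)) / (2 * margin)

def aggregate(criteria_scores: dict[str, float], floor: float = 0.3) -> float:
    """Limited compensation: strong criteria can offset weak ones only up to a
    point; any criterion below the floor caps the overall result at its own level."""
    memberships = {name: pass_membership(s) for name, s in criteria_scores.items()}
    mean = sum(memberships.values()) / len(memberships)
    worst = min(memberships.values())
    return worst if worst < floor else mean

# A 9.9 and a 10.0 now differ only marginally in pass membership (0.4 vs. 0.5),
# rather than flipping the pass/fail decision outright.
print(aggregate({"diagnosis": 9.9, "procedure": 12.0, "safety": 11.5}))
```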

In this paper, we propose to use distributed assessor agents to assess skills individually and thus report precisely on the difficulties encountered by the student for each skill. By using fuzzy sets, assessor agents are able to evaluate the level of mastery of each skill while taking into account the difficulty of each action involved in that skill. Our strategy follows a two-stage approach (see Figure 1): the first stage focuses on the evaluation of the student's skills by means of assessor agents, each of which is responsible for evaluating only one of the student's skills. This informs the second stage of the approach, which concerns the global evaluation of the student's capabilities. This evaluation stage is managed by an aggregate agent and is based on the assessor agents' assessments, using a negotiation process to decide whether the student passes the required skills qualification. …
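The Python sketch below gives one possible reading of this two-stage architecture; it is not the authors' implementation. The skill names, the difficulty-weighted scoring rule and the simple all-skills-above-threshold decision rule (a stand-in for the negotiation process the excerpt alludes to) are assumptions made for illustration only.

```python
# Minimal sketch (assumptions, not the authors' implementation): each assessor
# agent scores one skill as a fuzzy mastery level weighted by action difficulty;
# an aggregate agent then combines the per-skill levels into a global decision.

from dataclasses import dataclass

@dataclass
class Action:
    success: bool      # whether the learner performed the action correctly
    difficulty: float  # relative difficulty weight in (0, 1]

class AssessorAgent:
    """Stage 1: evaluates a single skill from the learner's observed actions."""
    def __init__(self, skill: str):
        self.skill = skill

    def assess(self, actions: list[Action]) -> float:
        total = sum(a.difficulty for a in actions)
        achieved = sum(a.difficulty for a in actions if a.success)
        return achieved / total if total else 0.0  # mastery level in [0, 1]

class AggregateAgent:
    """Stage 2: combines per-skill assessments into a qualification decision."""
    def decide(self, levels: dict[str, float], required: float = 0.6) -> bool:
        # Placeholder for the negotiation process mentioned in the excerpt:
        # here, a simple rule requiring every skill to reach the required level.
        return all(level >= required for level in levels.values())

levels = {
    "navigation": AssessorAgent("navigation").assess(
        [Action(True, 0.9), Action(False, 0.4), Action(True, 0.7)]),
    "communication": AssessorAgent("communication").assess(
        [Action(True, 0.5), Action(True, 0.8)]),
}
print(levels, AggregateAgent().decide(levels))
```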
