In the context of the inclusive knowledge society, the importance of system interfaces that are closely tailored to the way people naturally work, live and acquire knowledge is unquestionably recognized. In addition, the need for active and accessible learning favors only e-learning that engages users effectively. Nevertheless, despite considerable publicity and activity, progress in the field of e-learning has until recently been relatively slow, with problems often attributed to poorly designed e-learning applications (cf. SIGCHI, 2001; Granic, 2008). It seems that too much of the research has been driven by technical possibilities while paying inadequate attention to the area of application. This issue has been ignored for some time, in the hope that new technologies would somehow resolve the lack of real progress. However, to communicate content efficiently and improve the learning experience, interaction mechanisms merit particular consideration. Usability studies in the e-learning field are infrequent despite the important role usability plays in the success of every e-learning system. If the interface is not transparent and easy to use, learners/students concentrate on interaction aspects rather than on acquiring content. In addition, it has been claimed that usability assessment needs to take the learning perspective more fully into account. Namely, approaches to e-learning usability range from heuristics specifically adapted to e-learning to generic heuristics applied without special adjustment to the educational context. Accordingly, as neither an established set of heuristics nor a common evaluation methodology for e-learning systems exists yet, there is an obvious need for further research and empirical evaluation.
The paper reports on a case study of an e-learning platform implemented in a network of fourteen European schools. The contribution of this paper is twofold. First, it critically examines the usability of a large-scale e-learning system deployed across several European countries. Second, it provides general findings and lessons learned from the experience. Usability testing, which integrated six empirical methods into a laboratory-based test, was complemented with heuristic inspections. Interface compliance with Nielsen's (1994) traditional principles was augmented with experts' judgment of the system's "educational evaluation" by means of three sets of criteria: Learning with software heuristics (Squires & Preece, 1999), Educational design heuristics (Quinn, 1996) and Pedagogical dimensions (Reeves, 1994). We expect that these general findings and the know-how gained from the experience will facilitate an understanding of how to evaluate and improve the usability of e-learning systems based on users' (learners'/students' and teachers') and experts' feedback. Since studies in the field are limited, this contribution adds to the body of knowledge.
Research in the human-computer interaction (HCI) field has provided numerous principles and guidelines that can steer designers in their decisions. Although applying good design guidelines is a good start, it is no substitute for system usability evaluation. In general, usability is context-dependent and is shaped by the interaction between users, tasks and system purpose. A variety of usability evaluation methods have been developed over the past few decades; most fall into two groups: usability test methods, which are user-based and involve end-users, and inspection methods, which engage HCI experts. Research studies involving different kinds of applications, user groups and evaluation techniques have been conducted, and the need for combining methods is well understood in the usability field; see e.g. Sears & Jacko (2008).
To analyze the usability of interaction mechanisms in e-learning systems, more or less standard assessments and studies have been carried out. …