Academic journal article Interdisciplinary Journal of e-Skills and Lifelong Learning

Validation of a Learning Object Review Instrument: Relationship between Ratings of Learning Objects and Actual Learning Outcomes

Article excerpt

Introduction

To meet diverse learning needs and to improve student learning, a variety of resources, often including digital media, are developed in which the combination of media and methods of use changes with context and attempts to take account of student differences. New technologies have emerged to assist these objectives, and one method of designing and presenting computer-based educational materials is that of learning objects (LOs), usually defined as any digital resource that can be reused to support learning (Wiley, 2000). Examples of such digital resources that can be employed within instructional materials include images or photos, live data feeds, live or prerecorded video or audio snippets, text, animations, and web-delivered applications such as a Java applet, a blog, or a web page combining text, images, and other media. Thus, LO approaches can be wide-ranging and offer new possibilities for accessing and reusing online materials (Wiley, 2005).

Online repositories storing large numbers of LOs, which different user groups (e.g. teachers, instructional designers, material producers, and learners) can access and employ in various contexts according to their needs, can, in principle, bring economy and variety into the educational process (Nurmi & Jaakkola, 2006a). However, although LOs can provide stimulating opportunities to improve educational practices, to extend the use of digital technologies in schools and to reduce the time required to prepare technology enhanced teaching, many associated problems and practical shortcomings can arise (Akpinar & Simsek, 2007; Jonassen & Churchill, 2004; Kay & Knaack, 2007; Li, Nesbit & Richards, 2006; Nurmi & Jaakkola, 2005; Parrish, 2004; Strijker, 2004; Vuorikari, Manouselis, & Duval, 2006). There is a lack of empirical evidence on the effectiveness of LOs, though this has not reduced the interest in the technique, and indeed it provides an incentive for further research.

Whilst the LO debate continues (Churchill, 2007; Cochrane, 2005; Friesen, 2005; Krauss & Ally, 2005; Maceviciute & Wilson, 2008; Merrill, 2001; Parrish, 2004; Polsani, 2003; Salas & Ellis, 2006; Varlamis & Apostolakis, 2006; Wiley, 2000), the effectiveness of LOs is likely to be limited if they do not conform to established design principles and have not been subjected to formative user testing (Li et al., 2006). A range of different evaluation approaches for such learning resources exists, and Vuorikari et al. (2006) studied and analyzed a sample of thirteen evaluation approaches either currently applied to learning object repositories (LORs) or used as general quality guidelines for digital learning resources. These approaches were distinguished in terms of:

(1) methodological characteristics focusing on the process or the product;

(2) the stage of the learning resource lifecycle focusing on developmental guidelines or end-user evaluation ratings;

(3) the educational processes or optimization parts of the development lifecycle;

(4) the form of evaluation instruments used, e.g. questionnaires, a list of criteria or certification instruments;

(5) the audience as developers, evaluators, subject experts, teachers, or end users;

(6) the criteria or metrics engaged by the tools; and

(7) the characteristics of the environment in which the evaluation approach is expected to be applied.

Also, a recent survey (Tzikopoulos, Manouselis, & Vuorikari, 2007) reported on 23 highlighted evaluation and rating approaches. Because there is such diversity in the goals and forms of LO evaluations, Dron, Boyle, and Mitchell (2002) and Vuorikari et al. (2006) suggest the use of tagged metadata for storing the results of such evaluations, not only noting data on sharing and reusability, but also summarizing the experience with, and achievements of, LO resources in use. …
