Academic journal article Educational Technology & Society

On the Educational Validity of Research in Educational Technology


Introduction

Despite the advances that have occurred over the last 40 years in the use of technology in education, the impact of this research on educational practice remains limited. Low-technology tools have been replaced with equivalent computer hardware and software, but such changes are common to all organizations, educational or otherwise. More specific proposals for increasing the use of educational technology remain largely unheeded by most of those who teach 'at the coalface'. Advocates for increased use of technology prefer to attribute the lack of impact to characteristic failings among the practitioners, categorising them as Luddites, technophobes, and/or 'laggards' (after Rogers, 1962). Yet the persistence of the problem merits deeper consideration. Luddites exist only where the technology is demonstrably more efficient than what it replaces, and technophobia cannot account for rejection in fields, such as computer science, where knowledge and practice in information and communication technology (ICT) are central to the domain. The classification as 'laggards' is also invalid: Rogers defines laggards as those who adopt a change long after the majority have done so, and laggards can therefore never form the majority in such circumstances.

Alsop and Tompsett (2007), in their analogy with research and innovation in healthcare from a 'soft-systems' (Checkland, 1999) perspective, argued that research in this field is over-focused on technological change, with a corresponding disregard for demonstrating that innovative technology provides assured educational benefits for those who adopt it. If there is insufficient evidence for practitioners to adopt a technological change in their particular domain, then it is rational for them to consider, and prefer, non-technological changes that could provide more assured benefits and/or introduce fewer risks. The focus on practitioners draws on a key distinction made by Oancea (2005) in the debate over the quality, value, and even feasibility of research in the complex domain of education (see, for example: Hargreaves, 1996; Hammersley, 2002; OECD CERI, 2002; Simons, 2003). Oancea categorises researchers as either 'intellectuals', who focus on elucidating educational problems, or 'technicians', who focus on providing solutions to known educational problems. Alsop and Tompsett suggested that if research by 'technicians' were classified as in Table 1, below, then almost all valid research in ICT would fall at level A. They argued that practitioners (regardless of their own experience and abilities) would require evidence that could be classified at level C or higher to warrant a change in practice.

Taking computer science as its focus, this paper provides the first review of published research concentrated on one specific domain in higher education. The search for, and selection of, relevant evidence mirrors the principles of a systematic review (as described, for example, in Jadad, 1998).

Systematic reviews differ in two critical ways from conventional literature reviews (e.g., Sheard et al., 2009) and from meta-analyses of existing results (e.g., Lou, Abrami and d'Apollonia, 1996; Springer, Stanne and Donovan, 1999). Firstly, the focus of research must be specific; evidence for change with less able students in mathematics in secondary school cannot be used as evidence for change when teaching at different levels of education or with different levels of ability. In this case, the focus is the teaching of a 'first' computing language in an undergraduate computer-science program. Secondly, the value of an experimental study is assessed primarily in terms of the first scale proposed by Alsop and Tompsett, namely the quality of evidence that the experiment achieves; only those changes supported at the highest level of research achieved within the set are then compared, ignoring at this point both the specific proposal and the actual improvement in outcome. More specifically, a systematic review follows a five-stage process consisting of: (1) collecting evidence for all relevant interventions, (2) evaluating the quality of the research process for each study, (3) eliminating findings that fall below the highest level achieved within that set of studies, (4) collating the results of homogeneous experiments to increase, where possible, the reliability of evidence for each intervention, and, finally, (5) considering the scale and spread of the impact that can be attributed to each remaining intervention in order to identify the 'best' option, or combination of options. …
