Academic journal article: Educational Technology & Society

A Monitoring and Evaluation Scheme for an ICT-Supported Education Program in Schools

Article excerpt

Introduction

Information and communication technologies (ICTs) arrived in schools more than 25 years ago (Robertson, 2002; Reynolds et al., 2003). The general perception has been that they would increase levels of educational attainment by introducing changes in teaching and learning processes and strategies, adapting them to the needs of the individual student (Sunkel, 2006). During the nineties, investment in ICT grew in response to the rapid rise of the Internet and the World Wide Web (Pelgrum, 2001) and in an effort to bridge the social inequity between people with and without access to ICT, also known as the digital divide (Warschauer, 2003).

There are four commonly accepted rationales for investment in educational ICT: support for economic growth, promotion of social development, advancement of educational reform and support for educational management (Kozma, 2008). These rationales are still not backed by strong evidence of ICTs' impact on student attainment, however, and whether the manner in which ICT is implemented affects students' knowledge and understanding has yet to be unambiguously determined (Trucano, 2005; Cox and Marshall, 2007).

There are at least three reasons for this lack of evidence. First, there is a mismatch between the methods used to measure effects and the type of learning promoted (Trucano, 2005; Cox and Marshall, 2007). Researchers have looked for improvements in traditional processes and knowledge rather than for the new reasoning and new knowledge that might emerge from ICT use (Cox and Marshall, 2007). Second, although some large-scale studies have found that ICTs have a statistically significant positive effect on student learning (Watson, 1993; Harrison et al., 2002), it is not yet possible to identify which particular types of ICT use have contributed to these gains (Cox and Marshall, 2007). Clarifying this would require specific information about those technologies and the ways teachers and students are using them.

The third reason for the dearth of evidence is that monitoring and evaluation (M&E) are not receiving the attention they deserve (Trucano, 2005). The monitoring of an ICT for education (ICT4E) program examines what is being done and how (fidelity of implementation) (Wagner et al., 2005), while evaluation analyzes the immediate or direct effects of the program's intervention and implementation (Rovai, 2003) in order to measure performance. The central elements of an M&E scheme are indicators and assessment instruments (Wagner et al., 2005). An indicator is a piece of information that communicates a certain state, trend, warning or progress to its audience (Sander, 1997), whereas assessment instruments furnish that information in a specific context (Wagner et al., 2005).

The main role of assessing fidelity of implementation is to determine whether an ICT4E program is operating as intended in overall terms (Rovai, 2003; Wagner et al., 2005) and in line with the program designers' specific intentions (Agodini et al., 2003). For this to be possible, the designers must first specify the important or critical features teachers have to enact in their classrooms and then develop measures for establishing whether and how those features are put into practice in real classrooms (Penuel, 2005). M&E can then provide a deeper understanding of the relationship between variability in a program's implementation and its measured effects (Agodini et al., 2003; Penuel, 2005). They can also identify the limits of a program's applicability or flexibility and possible flaws in its underlying assumptions (Penuel, 2005; Light, 2008).

During the implementation of an ICT4E project, a well-designed M&E scheme feeds qualitative and quantitative data back to the project managers, who can then use this information to refine or adjust the project (formative M&E); to learn from experience; and to determine whether the project has served its client communities, how it might be improved in a later phase, or perhaps how it might be replicated (summative M&E) (Batchelor and Norrish, 2005). …
