Academic journal article British Journal of Community Justice

Protocols for Evaluating Restorative Justice Programmes



This article provides a review and critique of the current research findings about restorative justice. It is suggested that some of the positive findings are due not to programme efficacy, but rather to well-known threats to validity. The effect of case attrition on selection bias is considered in light of the voluntary nature of many restorative justice programmes. Standardisation of programme measures is urged, with specific research protocols presented and described. Protocols for measuring participant perceptions are compared. Before scientifically valid statements can be made about best practices, much more rigorous research needs to be conducted. If the results of multiple programme evaluations are to contribute to an accumulated understanding of the practice, measures across programmes must be standardised. A research agenda is described that would eventually allow for empirically fitting the forum to the fuss and establishing best practice standards across models. Six programme-level and six case-level measures are proposed as the minimum required for basic programme comparisons to be meaningful.

Key words: case attrition, selection bias, research protocols, restorative justice, programme evaluation, threats to validity

Public policy responses to crime should not be based upon the enthusiasm or popularity of programme advocates. The long history of failed criminal justice reform efforts justifies a healthy scepticism. If a justice programme is effective, it should be possible to scientifically measure and convincingly demonstrate these effects. If programme advocates cannot objectively demonstrate the merits of an intervention programme using sound empirical measures, they, too, deserve a large measure of scepticism. Confidence in a given programme's effectiveness becomes appropriate only when positive results are convincingly demonstrated. Confidence in a type or model of practice is justified only after positive results have been replicated in a number of similar programmes. There have been nearly 100 restorative justice programme evaluations published as of 2004. Yet, research on restorative justice practice today is a mile wide but only an inch deep (McCold, 2003).

Recent research findings range wildly in their estimates of the beneficial effects of restorative justice programmes, especially regarding claims of reducing offender recidivism. Some researchers conclude that restorative justice is no more effective than court in this regard (Davis, 1982; Roy, 1994; Moore, 1995; Niemeyer & Shichor, 1996; McCold, 1998; Sundell & Vinnerljung, 2004). Others claim to demonstrate moderate reductions in recidivism of 10-15% (Umbreit, 1994; Geudens, 1998; Bonta, Wallace-Capretta & Rooney, 1998; Calhoun, 2000; McGarrell et al., 2000; Trimboli, 2000; Luke & Lind, 2002; Dölling & Hartmann, 2003; Australian Institute of Criminology, 2004). And some research projects report dramatic reductions in offender recidivism of 30% or more (Chan, 1996; Hsien, 1996; Doolan, 1999; Sherman, Strang & Woods, 2000; Wilson & Prinzo, 2001; Rowe, 2002; Chan, 2003). Each of these claims is based upon research protocols with inherent weaknesses, or design flaws that limit the conclusions that can be validly drawn from a given set of outcome results (Campbell & Stanley, 1963).

My own reading of the three dozen studies of reoffending reviewed is that while restorative justice programs do not involve a consistent guarantee of reducing offending, even badly managed restorative justice programs are most unlikely to make reoffending worse (Braithwaite, 2002, p. 61).

Recent efforts to conduct meta-analyses of the findings from restorative justice programme evaluations (Latimer, Dowden, & Muise, 2001; Nugent, Umbreit & William, 2003) are premature: programmes vary widely in their content, too few evaluations include a valid comparison group, and most programmes have an insufficient number of cases upon which to draw solid conclusions. …
