Evaluation Research and Criminal Justice: Beyond a Political Critique
Travers, Max, Australian and New Zealand Journal of Criminology
This article is intended to stimulate reflection and debate about the relationship between pure and applied research in criminology. The central argument is that evaluation research, which has almost become a dominant paradigm in researching criminal justice, has lower methodological standards than peer-reviewed social science. It considers this case in relation to quantitative and qualitative methods, and examines examples of a 'flagship' and 'small-scale' evaluation. The article concludes by discussing the implications for evaluators (who are encouraged to employ a wider range of methods), funding agencies and criminology as an academic discipline.
There has been considerable disquiet among critical criminologists in both Australia and the United Kingdom about the rise of evaluation as a research paradigm (Hillyard, 2001; Israel, 2000; O'Malley, 1996; White, 2002). It has been suggested that this paradigm has a managerial bias and serves the needs of the powerful, and that there are tremendous institutional and financial pressures to conduct this kind of research. It is not, however, often recognised that academics not known for their political radicalism (Hood, 2001; Pawson & Tilley, 1997) and many professional evaluators are also concerned about the type of research being done in this field.
This article seeks to unpack this issue, in a provisional way, by focusing on a complaint that is not directly political. This is the charge that most evaluation research is methodologically poor, and intellectually uninteresting, when assessed by the standards employed in the academic peer-reviewed disciplines of criminology and sociology. The paper will consider this criticism in relation to quantitative and qualitative methods. It will also examine two examples of evaluation research: a well-funded 'flagship' project and a 'small-scale' evaluation conducted for a local agency.
This review of the methodological deficiencies of evaluation (which are acknowledged in the evaluation literature) raises disturbing issues for both academic criminologists and evaluators. In the first place, it suggests that applied research does not have to be rigorous, in academic terms, to be useful; so claims that evaluation is a robust, scientific discipline that produces 'objective' findings cannot be sustained. However, it also raises difficult questions about method for criminologists, as many academic studies use similar methods, but with a different political slant.
The Political Critique of Evaluation Research
The main argument advanced by critical criminologists is that evaluation research serves the needs of the powerful and has a managerial bias. Those teaching criminal justice courses may have some sympathy with this critique, as they often rely heavily on evaluation reports. (1) These invariably present an upbeat picture of organisations struggling with and overcoming problems in a process of 'continuous improvement', which must reflect the views of those who commissioned the research rather than those of cynical and disaffected practitioners on the ground. Academics sometimes complain that their reports are shelved or censored because they produce unpalatable findings or recommendations (see, e.g., Morgan, 2000; White, 2001). One can also easily imagine how researchers practise a form of self-censorship by steering clear of anything that might be controversial or damaging to the sponsoring agency. Reports do not, for example, contain lengthy interviews with staff about the grievances that inevitably arise from successive cut-backs or organisational changes, or reveal personality conflicts within management teams, or document abuses of power or entrenched racist or sexist attitudes inside institutions. (2) Nor do they ever criticise government policy, although one would expect practitioners and managers to hold a range of political views. Academics working as consultants have sometimes criticised the implementation or success of evaluations (e. …