Evidence-Based Policy and Practice: Cross-Sector Lessons from the United Kingdom
Sandra Nutley, Huw Davies and Isabel Walter, Social Policy Journal of New Zealand, Issue 20, June 2003, pp. 29+
This paper identifies key lessons learnt in the public sector quest for policy and practice to become more evidence based. The annex to this paper provides outlines of, and web links to, specific initiatives across the public sector in the United Kingdom.
There is nothing new about the idea that policy and practice should be informed by the best available evidence. Researchers and analysts have long worked with and within government to provide evidence-based policy advice, and the specific character of the relationship between social research and social policy in Britain was shaped in the 19th and 20th centuries (Bulmer 1982). The 1960s represented an earlier high point in the relationship between researchers and policy makers (Bulmer 1986, Finch 1986). However, during the 1980s and early 1990s there was a distancing and even dismissal of research in many areas of policy, as the doctrine of "conviction politics" held sway.
In the United Kingdom it was the landslide election of the Labour government in 1997, subsequently returned with a substantial majority in 2001, that revitalised interest in the role of evidence in the policy process. In setting out its modernising agenda, the government pledged, "We will be forward-looking in developing policies to deliver outcomes that matter, not simply reacting to short-term pressures" (Cm 4310 1999). The same white paper proposed that being evidence based was one of several core features of effective policy making, a theme developed in subsequent government publications (Performance and Innovation Unit 2001, National Audit Office 2001, Bullock et al. 2001).
In the wake of this modernising agenda, a wide range of ambitious initiatives has been launched to strengthen the use of evidence in public policy and practice. A cross-sector review of some of these can be found in the book What Works: Evidence-Based Policy and Practice in Public Services (Davies et al. 2000) and in two special issues of the journal Public Money and Management (January 1999 and October 2000). To give a flavour of the range, scope and aims of these developments, the annex to this paper provides an overview of two generic initiatives and a summary of several sector-specific developments.
This paper seeks to draw out some of the key lessons that have emerged from the experience of trying to ensure that public policy and professional practice are better informed by evidence than has hitherto been the case. It does this by highlighting four requirements for improving evidence use and considering progress to date in relation to each of these.
Because the use of evidence is just one imperative in effective policy making, and because policy making is inherently political, a caveat seems appropriate at this point. A similar warning applies to professional practice, which is generally contingent on both client needs and local context. The term "evidence-based", when attached as a modifier to policy or practice, has become part of the lexicon of academics, policy makers, practitioners and even client groups. Yet such glib terms can obscure the sometimes limited role that evidence can, does, or even should, play. In recognition of this, we would prefer "evidence-influenced", or even just "evidence-aware", to reflect a more realistic view of what can be achieved. Nonetheless, we will continue the current practice of referring to "evidence-based policy and practice" (EBPP) as a convenient shorthand for the collection of ideas around this theme, which has risen to prominence over the past two decades. On encountering this term, we trust the reader will recall our caveat and moderate their expectations accordingly.
FOUR REQUIREMENTS FOR IMPROVING EVIDENCE USE IN POLICY AND PRACTICE
If evidence is to have a greater impact on policy and practice, then four key requirements need to be met:
1. agreement as to what counts as evidence in what circumstances;
2. a strategic approach to the creation of evidence in priority areas, with concomitant systematic efforts to accumulate evidence in the form of robust bodies of knowledge;
3. effective dissemination of evidence to where it is most needed and the development of effective means of providing wide access to knowledge; and
4. initiatives to ensure the integration of evidence into policy and encourage the utilisation of evidence in practice.
The remainder of this paper takes each of these areas in turn both to explore diversity across the public sector and to make some tentative suggestions about how the EBPP agenda may be advanced.
THE NATURE OF EVIDENCE
In addressing the EBPP agenda in 1999, the United Kingdom Government Cabinet Office described evidence as:
Expert knowledge; published research; existing statistics; stakeholder consultations; previous policy evaluations; the Internet; outcomes from consultations; costings of policy options; output from economic and statistical modelling. (Strategic Policy Making Team 1999)
This broad and eclectic definition clearly positions research-based evidence as just one source amongst many, and explicitly includes informal knowledge gained from work experience or service use:
There is a great deal of critical evidence held in the minds of both front-line staff ... and those to whom policy is directed. (ibid.)
Such eclecticism, whilst inclusive and serving to bring to the fore hitherto neglected voices such as those of service users, also introduces the problems of selecting, assessing and prioritising evidence. A survey of policy making in 2001 (Bullock et al. 2001) found that government departments appeared to use a more limited range of evidence: domestic and international research and statistics, policy evaluation, economic modelling and expert knowledge.
It is instructive that this egalitarianism about sources of evidence is not shared equally across the public sector. Health care, for example, has an established "hierarchy of evidence" for assessing what works. This places randomised experiments (or, better still, systematic reviews of these) at the apex; observational studies and professional consensus are accorded much lower credibility (Hadorn et al. 1996, Davies and Nutley 1999). This explicit ranking has arisen for two reasons. First, in health care there is a clear focus on providing evidence of efficacy or effectiveness: which technologies or other interventions are able to bring about desired outcomes for different patient groups. The fact that what counts as "desired outcomes" is readily understood (i.e. reductions in mortality and morbidity, and improvements in quality of life) greatly simplifies the methodological choices. The second reason for such an explicit methodological hierarchy lies in bitter experience: much empirical research suggests that biased conclusions may be drawn about treatment effectiveness from less methodologically rigorous approaches (Schulz et al. 1995, Kunz and Oxman 1998, Moher et al. 1998).
In contrast to the hierarchical approach in health care, other sector areas (such as education, criminal justice and social care) are riven with disputes as to what constitutes appropriate evidence. Also, there is relatively little experimentation (especially compared with health care), and divisions between qualitative and quantitative paradigms run deep (Davies et al. 2000). This happens in part because of the more diverse and eclectic social science underpinnings in these sectors (in comparison to the natural sciences underpinning in much of health care), and in part because of the multiple and contested nature of the outcomes sought. Thus knowledge of "what works" tends to be influenced greatly by the kinds of questions asked, and is, in any case, largely provisional and highly dependent on context.
Randomised experiments can answer the pragmatic question of whether intervention A provided, in aggregate, better outcomes than intervention B in the sampled population. However, such experiments do not answer the more testing question of whether, and which aspects of, interventions are causally responsible for a prescribed set of outcomes. This may not matter if interventions occur in stable settings where human agency plays a small part (as is the case for some medical technologies), but in other circumstances there are dangers in generalising from experimental to other contexts.
Theory-based evaluation methods often seem to hold more promise, because of …