Evidence-based practice (EBP) has become the watchword of human services in recent years. Derived from evidence-based medicine (EBM), EBP has many meanings (and in some cases, no meaning), but generally speaking it espouses reliance on experimental research in human services practice. The EBM movement took its cue from the work of John Wennberg at Dartmouth Medical School and others, who demonstrated that physicians practicing in different but demographically similar communities treated the same medical conditions differently. They found, for example, significant unexplained variation in tonsillectomy rates in neighboring cities.
Wennberg's influential inference from these data was that physicians were fundamentally uncertain about how to treat their patients. Doctors did not have good science, he reasoned, and therefore they could be swayed by clinically extraneous factors such as medical tradition. Physician uncertainty was seen to have implications for both the quality and the cost of care: at least some physicians were providing less-than-optimal care and spending scarce healthcare dollars to do it. The remedy for the so-called small-area variation problem seemed clear. Physicians, Wennberg argued, needed better science, in particular probabilistic studies of which interventions will work and which will not. Randomized controlled trials (RCTs) became the gold standard for clinical research and the foundation for a new medical paradigm.
RCTs can indicate, with as little bias as possible, whether a given intervention is efficacious: that is, whether it works under trial conditions. Why not, then, use these studies to make practice decisions and, by extension, decisions about human services policy? Why wouldn't practitioners do what works and demand the same from others? This logic of EBM has proven very powerful in medicine and now, as EBP, in human services. Although proponents of EBP have found it difficult to change practitioner behavior, they increasingly determine the design of programs and the content of practice guidelines. Decision makers take an EBP approach to deciding which services are financed and which research projects are funded. EBP offers human services the science to compete with, for example, pharmacological interventions for access to the public purse.
EBP is ascendant, but serious questions about its usefulness remain. First, RCTs and other experimental studies can tell us what is efficacious, but not necessarily what is effective: that is, what works under the conditions of actual practice. The methodological rigor of controlled studies may undercut their validity as guides to action. For example, most RCTs exclude subjects with comorbidities or other characteristics that might obscure the relationship between dependent and independent variables. This is one reason why, for example, clinical trials so often exclude older people.
Furthermore, the study of efficacy requires standardized interventions and participants who are unaware of the group (experimental or control) to which they have been assigned. EBP also tends to gloss over the critical distinction between interventions that have been shown not to work and those that have not yet been shown to work. Consequently, interventions that are easy to research, or whose research is well funded, may prevail in program and policy decisions even if some other service is truly more effective.
Thoughtful proponents of EBP are concerned with the limitations of experimental research and pursue a more nuanced approach to which research questions require which research methodologies. Some use "EBP" to mean greater use of research generally, including quasi-experimental and even case studies, in practice and policymaking. This approach is nothing new, however, and EBP means to make a break with the past. It entails a hierarchy of evidence types, in which RCTs remain at the top to trump alternative ways of knowing. In a number of states, for example, reimbursement for mental health services is contingent on RCT evidence that they work. …