Academic journal article Fordham Urban Law Journal

Just the (Unwieldy, Hard to Gather, but Nonetheless Essential) Facts, Ma'am: What We Know and Don't Know about Problem-Solving Courts

Article excerpt

   Policymakers often think, incorrectly, that an evaluation is like
   an "audit" or trial in which the results are usually clear cut and
   definitive. Either the funds were spent or they weren't; either
   the program served its intended beneficiaries at a reasonable
   cost per client or it didn't. Such "audit" questions are much easier
   to answer than the "evaluation" questions of cause and effect,
   often stretching out over a lifetime of the targets of crime
   prevention efforts. (1)

The expected value of any net impact assessment of any large scale social program is zero. (2)


Robert Martinson's seminal 1974 Public Interest article, What Works? Questions and Answers About Prison Reform, offered a bleak assessment of rehabilitative initiatives aimed at criminal offenders. (3) This literature review of prison-based treatment programs--from vocational training to psychotherapy--concluded that, "[w]ith few and isolated exceptions, the rehabilitative efforts that have been reported so far have had no appreciable effect on recidivism." (4) Martinson inferred the failure of treatment programs from a failure of research: the more than 200 studies he reviewed gave little evidence that the programs studied were linked to a reduction in crime. (5) Yet what seems to have most frustrated the author were the studies themselves, many of which left unclear whether the programs had not worked, or whether the system under which they were administered prevented successful implementation. (6)

Martinson's hugely influential article cast a pall over rehabilitative criminal justice programs for years. To this day, reformers in the field find themselves grappling with the suspicion--held by many academics, policymakers, and citizens--that "nothing works." The 1996 report, Preventing Crime: What Works, What Doesn't, What's Promising confronted this mindset head on:

   Merely because a program has not been evaluated properly does
   not mean that it is failing to achieve its goals. Previous reviews
   of crime prevention programs, especially in prison rehabilitation,
   have made that error, with devastating consequences for
   further funding for those efforts. In addressing the unevaluated
   programs, we must blame the lack of documented effectiveness
   squarely on the evaluation process, and not on the programs
   themselves. (7)

Sherman attempted to infuse hope into the field by throwing out the "nothing works" conclusion. (8) But the more accurate conclusion he offered in its place--that very little is known about what works (9)--comes with its own set of frustrations.

Problem-solving court research, (10) most of it conducted post-Sherman, faces a political climate that prefers definitive answers over cautious preliminary findings, and is still likely to mistake uncertainty for proof of failure. Yet, at present, cautious preliminary findings are the best tools. The research to date on these new judicial experiments does not offer many definitive conclusions. Projects that fall under the "problem-solving" umbrella have been around for a relatively short time--anywhere from a decade to a few months. It takes time and money to track recidivism over the long term, to meaningfully weigh program costs and benefits, and to compare new practices to one another, as well as to business as usual.

The best research designs use a random assignment model, splitting a single pool of defendants between an experimental track and normal case processing, but in most cases this "gold standard" is not feasible for studies of problem-solving courts. Quasi-experimental comparison groups range from the good (defendants in traditional court with carefully matched characteristics to drug court participants), to the not so good (less rigorously selected groups of defendants undergoing normal prosecution), to the bad (defendants who refused to participate in the problem-solving court). …
