When All Models Are Wrong: More Stringent Quality Criteria Are Needed for Models Used at the Science/Policy Interface, and Here Is a Checklist to Aid in the Responsible Development and Use of Models


"Beware the rise of the government scientists turned lobbyists," trumpeted the headline on an article by British journalist George Monbiot in the left-leaning newspaper The Guardian, adding that "From badgers to bees, government science advisers are routinely misleading us to support the politicians' agendas." The article, published on April 29, 2013, criticized the current chief scientist at the UK's environment department for his assessment of the desirability of culling badgers, and the British government's new chief scientist for his opposition to the European ban on the pesticides blamed for killing bees and other pollinators.

From the other side of the ocean and the political spectrum, Rep. Chris Stewart (R-UT) asked (rhetorically) during a U.S. congressional hearing in July 2013 whether the federal Environmental Protection Agency's study of shale-gas fracking "is a genuine, fact-finding, scientific exercise, or a witch-hunt to find a pretext to regulate."

Wherever one stands on these specific issues, such skepticism seems increasingly common, and increasingly independent of ideological position. Science is facing a question: Does the tone of these sorts of attacks reflect a collapse of trust in the scientific enterprise and in its social and institutional role?

Scientific leaders have long portrayed their enterprise as a self-regulating community bound by a higher ethical commitment to truth-telling than society as a whole. Yet the tone and intractability of controversies ranging from badgers to bees to fracking suggest that society may be less willing to accept such claims than in the past.

Perhaps with good reason. The October 19, 2013, cover article of The Economist, a nonspecialist periodical with a centrist, pro-science political stance, asserts: "Scientists like to think of science as self-correcting. To an alarming degree, it is not." It goes on to recommend that "checklists ... should be adopted widely, to help guard against the most common research errors. Budding scientists must be taught technical skills, including statistics, and must be imbued with skepticism towards their own results and those of others."

The scientific enterprise seems only slowly to be awakening to this problem and its dangers. In 2011, Science magazine published a special series of articles on reproducibility problems in several disciplines, while its counterpart Nature published an article by a pharmaceutical industry executive suggesting rules to spot suspect work in preclinical cancer papers published in top-tier journals. The journal Organic Syntheses accepts only papers whose syntheses can be reproduced by a member of the journal's editorial board, and Science Exchange, a commercial online portal, has launched a Reproducibility Initiative that matches scientists with experimental service providers. Meanwhile, the number of retractions of published scientific work continues to rise.

Against this background of declining trust and increasing problems with the reliability of scientific knowledge in the public sphere, the dangers for science become most evident when models, abstractions of more complex real-world problems generally rendered in mathematical terms, are used as policy tools. Evidence of poor modeling practice and of negative consequences for society abounds. Bestselling books by Nassim Taleb and Joseph Stiglitz have documented for public consumption the contributions of models to recent financial disasters; these are just two examples of what seems to be a proliferation of books, reports, and papers that lambast the role of economists and mathematicians in pricing a class of derivatives at the heart of the subprime mortgage crisis. Even the Queen of England got into the act, questioning the London School of Economics' economists on why they did not see the crisis coming.

The situation is equally serious in the field of environmental regulatory science. Orrin Pilkey and Linda Pilkey-Jarvis, in a stimulating small volume titled Useless Arithmetic: Why Environmental Scientists Can't Predict the Future, offer a particularly accessible series of horror stories about model misuse and consequent policy failure. …