Expert Political Judgment: How Good Is It? How Can We Know?


The intelligence failures surrounding the invasion of Iraq dramatically illustrate the necessity of developing standards for evaluating expert opinion. This book fills that need. Here, Philip E. Tetlock explores what constitutes good judgment in predicting future events, and looks at why experts are often wrong in their forecasts.

Tetlock first discusses arguments about whether the world is too complex for people to find the tools to understand political phenomena, let alone predict the future. He evaluates predictions from experts in different fields, comparing them to predictions by well-informed laity or those based on simple extrapolation from current trends. He goes on to analyze which styles of thinking are more successful in forecasting. Classifying thinking styles using Isaiah Berlin's prototypes of the fox and the hedgehog, Tetlock contends that the fox--the thinker who knows many little things, draws from an eclectic array of traditions, and is better able to improvise in response to changing events--is more successful in predicting the future than the hedgehog, who knows one big thing, toils devotedly within one tradition, and imposes formulaic solutions on ill-defined problems. He notes a perversely inverse relationship between the best scientific indicators of good judgment and the qualities that the media most prize in pundits--the single-minded determination required to prevail in ideological combat.

Clearly written and impeccably researched, the book fills a huge void in the literature on evaluating expert opinion. It will appeal across many academic disciplines as well as to corporations seeking to develop standards for judging expert decision-making.


Autobiographical exercises that explore why the researcher opted to go forward with one project rather than another have often struck me as self-dramatizing. What matters is the evidence, not why one collected it. Up to now, therefore, I have hewed to the just-the-facts conventions of my profession: state your puzzle, your methods, and your answers, and exit the stage.

I could follow that formula again. I have long been puzzled by why so many political disagreements—be they on national security or trade or welfare policy—are so intractable. I have long been annoyed by how rarely partisans admit error even in the face of massive evidence that things did not work out as they once confidently declared. And I have long wondered what we might learn if we approached these disputes in a more aggressively scientific spirit—if, instead of passively watching warring partisans score their own performance and duly pronounce themselves victorious, we presumed to take on the role of epistemological referees: soliciting testable predictions, scoring accuracy ourselves, and checking whether partisans change their minds when they get it wrong.

I initially implemented my research plan tentatively, in a trial-and-error fashion in small-scale forecasting exercises on the Soviet Union in the mid-1980s, and then gradually more boldly, in larger-scale exercises around the world over the next decade. My instinct was to adopt and, when necessary, adapt methods of keeping score from my home discipline of psychology: correspondence measures of how close political observers come to making accurate predictions and logical-process measures of the degree to which observers play fair with evidence and live up to reputational bets that require them to update their beliefs.

Without giving too much away, I can say that surprises are in store. We shall discover that the best forecasters and timeliest belief updaters shared a self-deprecating style of thinking that spared them some of the big mistakes to which their more ideologically exuberant colleagues were prone. There is often a curiously inverse relationship between how well forecasters thought they were doing and how well they did.

I could now exit the stage. But the project makes more sense when traced to its origins: my first close-up contact with the ingenuity and determination that political elites display in rendering their positions impregnable to evidence. The natural starting point is a 1984 meeting at . . .
