Psychonomic Bulletin & Review

Subjective Recalibration of Advisors' Probability Estimates

Are decision makers sensitive to the statistical properties (i.e., calibration) of probability estimates that they receive from advisors? After specifying the ideal use of such estimates, we derive the roughly ideal forecast consumer (RIFC) and generalize it to account for how humans might use the estimates. We report an experiment in which participants first experienced various advisors by seeing their probability estimates and the associated outcomes and then provided confidence judgments in the presence of the advisors' estimates. The generalized model described the data well and showed that the participants were appropriately sensitive to the statistical properties of the advisors. Models of the individuals were better calibrated than the participants themselves, but still inferior to the RIFC. A detailed description of our model-testing procedure can be found in an appendix to the article.

Decision makers (DMs) often rely on probabilistic estimates from external sources to guide their behavior. Examples include investment decisions made on the basis of market forecasts and military or diplomatic actions guided by intelligence forecasts. This research explores how DMs use such estimates as a function of the estimates' quality.

The report is organized as follows: First, we review pertinent conceptual and methodological aspects of measuring the quality of the estimates. Then, we briefly summarize the evidence regarding how DMs use estimates provided by external sources. Finally, we develop a cognitively plausible model for using the estimates, which we call the roughly ideal forecast consumer (RIFC) because it yields close to optimal performance, and use it in an experiment as a baseline against which to assess human performance.1 We conclude with theoretical implications of the results.

Calibration of Probabilities

The correspondence between estimates and actual events is the topic of an extensive literature (e.g., Wallsten, Budescu, Erev, & Diederich, 1997). For our purposes, a discussion of calibration suffices. An estimate is well calibrated if it matches the relative frequency of the events conditional on the estimate. To illustrate, a precipitation forecast of 60% is well calibrated if rain is observed within the forecast area on 60 of 100 days on which the forecast is used.

We distinguish between the estimate and the observed relative frequency conditional on the estimate by referring to the former as the subjective probability (SP) and the latter as the objective probability (OP). The calibration measure is typically used to characterize the quality of a distribution of estimates, each associated with a different event. In this context, calibration is a measure of the extremity of the OPs relative to the extremity of the corresponding SPs (see, e.g., Erev, Wallsten, & Budescu, 1994; Wallsten et al., 1997; Yates, 1982, 1990). According to this definition, a set of SPs is well calibrated if (in the long run) OP equals SP for all SP; it is overconfident when SP is more extreme than OP; and it is underconfident when SP is less extreme than OP. The cases are depicted by three advisors' calibration curves in the left panel of Figure 1, which graphs OP (times 100) as a function of advisor estimate, SP (times 100). The main diagonal represents a well-calibrated advisor, for whom OP = SP for all values of SP. The curve that begins below the diagonal, crosses it at the middle, and continues above the diagonal represents an underconfident advisor, in that events occur (or fail to occur) with more extreme probabilities than he or she estimates. The remaining curve represents an overconfident advisor, in that events occur (or fail to occur) with less extreme probabilities than he or she estimates.
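
Stated more formally (in our notation; the article's own formulae are omitted from this excerpt, so the expressions below are a restatement of the verbal definition rather than the authors' equations), write OP(p) for the relative frequency of the event E conditional on the advisor stating SP = p:

\[ \mathrm{OP}(p) = \Pr(E \mid \mathrm{SP} = p). \]

A set of estimates is then well calibrated if \( \mathrm{OP}(p) = p \) for all p; it is overconfident if the stated probabilities are more extreme than the conditional relative frequencies, \( |p - .5| > |\mathrm{OP}(p) - .5| \); and it is underconfident if the reverse inequality holds.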

Ideal use of an advisor's estimate requires treating the advisor's SP, p, not as a probability, but simply as a datum that signals the event E with some OP (cf. …
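
To make this concrete (a sketch consistent with the preceding sentence, not a reconstruction of the omitted formula), the best response to an advisor who states SP = p is to report the objective probability of E conditional on that statement, estimated from the advisor's track record:

\[ q^{*}(p) = \Pr(E \mid \text{advisor states } p) = \mathrm{OP}(p). \]

For the overconfident advisor in Figure 1, this mapping pulls extreme stated probabilities back toward .5; for the underconfident advisor, it pushes moderate statements toward the extremes; for the well-calibrated advisor, it leaves the estimates unchanged.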
