Academic journal article: Agricultural and Resource Economics Review

Contingent Valuation, Hypothetical Bias, and Experimental Economics

Although the contingent valuation method has been widely used to value a diverse array of nonmarket environmental and natural resource commodities, recent empirical evidence suggests it may not accurately estimate real economic values. The hypothetical nature of environmental valuation surveys typically results in responses that are significantly greater than actual payments. Economists have had mixed success in developing techniques designed to control for this "hypothetical bias." This paper highlights the role of experimental economics in addressing hypothetical bias and identifies a gap in the existing literature concerning the underlying causes of this bias. Most of the calibration techniques used today lack a theoretical justification, and therefore these procedures need to be used with caution. We argue that future experimental research should investigate the reasons hypothetical bias persists. A better understanding of the causes should enhance the effectiveness of calibration techniques.

Key Words: contingent valuation, experiments, hypothetical bias, stated preference

Consider the challenge faced by a contingent valuation (CV) practitioner who is interested in estimating the economic value of a non-market good, such as visibility at a National Park or the protection of habitat for an endangered species. The CV survey is carefully designed and constructed (e.g., Mitchell and Carson, 1989; Champ, Brown, and Boyle, 2004) and the results are produced with the latest estimation techniques (Haab and McConnell, 2003). We now have an estimate for the economic value of the good, but is this value accurate?

The answer to this question has stirred considerable, and sometimes contentious, debate, as highlighted by litigation resulting from the 1989 Exxon Valdez oil spill in Prince William Sound [see Diamond and Hausman (1994); Hanemann (1994); and Portney (1994) for a synthesis of the debate]. Using only field CV data, we cannot be certain that value estimates are accurate. Why? Because CV surveys are hypothetical in both the payment for and the provision of the good in question, we do not know whether what an individual says she would do in a hypothetical setting matches what she would actually do when given the opportunity.1 And, without the ability to observe the latter, it is difficult to confirm whether the values elicited from a hypothetical survey accurately reflect the real economic value of the good. Some researchers have expressed concern that this lack of a consequential economic commitment in CV surveys often leads to hypothetical bias, in which economic values are overstated. For example, as Harrison and Rutström (forthcoming) assert: "As a matter of logic, if you do not have to pay for the good, but a higher verbal willingness-to-pay response increases the chance of its provision, then verbalize away to increase your expected utility!"

Economics experiments offer the potential to shed some light on the accuracy of responses to hypothetical CV questions. Experimental research has a well-established framework, one that was widely recognized when Vernon Smith became a co-recipient of the Nobel Memorial Prize in Economics "for having established laboratory experiments as a tool in empirical economic analysis." Two features distinguish experiments from other empirical techniques: control and replication. The ability to control the environment under which individuals make economic decisions is what gives experiments their power. The experimenter can vary treatments to test hypotheses about the effects of different explanatory variables on individual choices. Unlike a typical field CV survey, a carefully designed experiment can include both hypothetical and real payment scenarios. By comparing outcomes in these two settings, one can draw inferences about the existence of hypothetical bias, its causes, and ways to mitigate its effects. Moreover, other researchers can replicate, and perhaps extend, the experiment to test its robustness. …
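To make the hypothetical-versus-real comparison concrete, the following is a minimal sketch (not from the article; the simulated data, variable names, and the choice of Welch's t-test are illustrative assumptions) of how one might compare willingness-to-pay (WTP) responses across the two treatments, compute a simple mean-ratio calibration factor, and test whether stated and real WTP differ.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=42)

    # Simulated example data: stated WTP in the hypothetical treatment
    # tends to exceed payments in the real-payment treatment.
    wtp_hypothetical = rng.lognormal(mean=2.3, sigma=0.5, size=100)  # survey bids, in dollars
    wtp_real = rng.lognormal(mean=2.0, sigma=0.5, size=100)          # actual payments, in dollars

    # A simple calibration factor: ratio of mean hypothetical WTP to mean real WTP.
    calibration_factor = wtp_hypothetical.mean() / wtp_real.mean()

    # Welch's t-test for a difference in mean WTP between the two treatments.
    t_stat, p_value = stats.ttest_ind(wtp_hypothetical, wtp_real, equal_var=False)

    print(f"Mean hypothetical WTP: ${wtp_hypothetical.mean():.2f}")
    print(f"Mean real WTP:         ${wtp_real.mean():.2f}")
    print(f"Calibration factor:    {calibration_factor:.2f}")
    print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")

In an actual application, the two WTP arrays would come from the hypothetical and real-payment arms of the experiment rather than being simulated, and the calibration factor would typically be estimated with more defensible methods than a simple ratio of means.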
