When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying the probability of each possible state of nature by the utility of the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, by contrast, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where risk is usually measured as reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one theory or the other. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results from an implementation of the paradigm are discussed.
When choices under uncertainty satisfy certain basic rationality criteria, they can be thought of as maximizing a utility index that is obtained by multiplying the probabilities of the possible states by the utilities of the outcomes promised in each state. This equivalence between choices and maximization of an expected utility index was first demonstrated by von Neumann and Morgenstern (1947) and has since been proven to hold under quite general conditions of uncertainty (Savage, 1972) and under far less stringent rationality conditions that better reflect the properties of actual human choices (Tversky & Kahneman, 1992).
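In symbols (a standard textbook formulation, supplied here because the article's own formulae are omitted from this transcription), a gamble paying x_s in state s, which occurs with probability p_s, is assigned the index

```latex
U = \sum_{s} p_s \, u(x_s),
```

and choices are represented as maximizing U over the available gambles.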
The maximization of an expected utility index may be thought of not only as representing choices but also as a means of computing them. This view is implicit in much recent neuroeconomic work, in which attempts are made to find separate neural correlates of probabilities (Chandrasekhar, Capra, Moore, Noussair, & Berns, 2008), of utilities assigned to magnitudes (Tom, Fox, Trepel, & Poldrack, 2007), or of both (Tobler, O'Doherty, Dolan, & Schultz, 2007). But although expected utility theory does provide an effective way to compute choices under uncertainty, it is by no means the only way.
Indeed, in financial economics, it has long been the tradition to compute the value of risky gambles in terms of statistical moments: expected payoff, payoff variance, and so forth (Black & Scholes, 1973; Markowitz, 1952).
The approach is not unrelated to expected utility: A mathematical operation called Taylor series expansion shows that a finite number of moments suffices to approximate any smooth expected utility index well (see the Appendix). That said, we hasten to add that financial economists usually consider only the first two statistical moments (namely, expected payoff and payoff variance); two moments would, in general, provide only a very crude approximation of expected utility. Also, the square root of variance (that is, the standard deviation) is often used as a measure of risk instead of variance, because in many realistic cases (examples will follow), the standard deviation is of the same order of magnitude as the mean payoff and, hence, easily comparable with it.
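The connection can be sketched with the standard second-order Taylor expansion of a smooth utility function u around the mean payoff \mu = E[x] (a generic textbook derivation, offered here because the article's formulae are omitted; the Appendix presumably gives the full expansion):

```latex
E[u(x)] \;\approx\; u(\mu) + u'(\mu)\,E[x - \mu] + \tfrac{1}{2}\,u''(\mu)\,E\!\left[(x - \mu)^2\right]
        \;=\; u(\mu) + \tfrac{1}{2}\,u''(\mu)\,\sigma^2,
```

since E[x - \mu] = 0 and E[(x - \mu)^2] = \sigma^2. For a risk-averse agent (u'' < 0), value rises with expected payoff and falls with variance, which is precisely the mean-variance trade-off; the truncated higher-order terms are what make a two-moment summary only a crude approximation.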
Here, we raise the fundamental question of how the human brain computes choices. Does it follow the approach in classical decision theory, multiplying state probabilities by utilities of magnitudes to be received in each state, or does it rather opt for the financial approach, assessing expected reward and risk (measured as variance), to be integrated in a valuation signal that drives choices? …