decision analysis, preferences are quantified in terms of utilities, which can deal with a single attribute such as monetary payoff or with multiple attributes (e.g., cost, length of life, and quality of life in a medical decision). Probabilities, representing uncertainties, and utilities, representing preferences, are combined via the calculation of expected utilities.
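The combination of probabilities and utilities into expected utilities can be sketched in a few lines of code. All states, actions, probabilities, and payoffs below are hypothetical, chosen only to illustrate the calculation:

```python
# Illustrative expected-utility calculation; none of these numbers
# come from the text.

# Subjective probabilities over the possible states of the world.
probs = {"strong_market": 0.6, "weak_market": 0.4}

# Utility of each action in each state (e.g., scaled monetary payoffs).
utilities = {
    "launch": {"strong_market": 100.0, "weak_market": -40.0},
    "wait":   {"strong_market": 20.0,  "weak_market": 10.0},
}

def expected_utility(action):
    """Combine probabilities and utilities into an expected utility."""
    return sum(probs[s] * utilities[action][s] for s in probs)

# The decision maker chooses the action with the highest expected utility.
best_action = max(utilities, key=expected_utility)
```

Under these made-up numbers, "launch" has expected utility 0.6(100) + 0.4(-40) = 44, versus 16 for "wait", so "launch" would be preferred.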
Bayesian analysis is particularly suitable for decision modeling because it provides probabilities both for unobservable parameters (prior and posterior distributions) and for observable events or variables (predictive distributions). This permits the determination of any expected values of interest and enables the decision maker to understand the upside potential and downside risk associated with a course of action. Moreover, the key aspect of Bayesian statistics, probability revision, becomes especially important in dynamic decision-making problems, where sequences of decisions must be made and new information arrives over time. Finally, by anticipating the possible reactions to new sample information before it is actually obtained, the decision maker can determine the expected value of the sample information and can thus decide whether or not to seek it.
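The expected value of sample information can be computed by a preposterior analysis: revise the prior for each possible observation before any data are seen, and average the resulting best expected utilities over the predictive distribution of the observations. The sketch below uses a hypothetical binary signal with assumed likelihoods; every number is illustrative:

```python
# Hedged sketch of preposterior analysis for the expected value of
# sample information (EVSI); all numbers are illustrative assumptions.

probs = {"good": 0.5, "bad": 0.5}                  # prior over states
payoff = {"act":  {"good": 100.0, "bad": -60.0},   # utilities by action/state
          "pass": {"good": 0.0,   "bad": 0.0}}
lik_pos = {"good": 0.8, "bad": 0.3}                # P(positive signal | state)

def expected_value(p_good, action):
    return p_good * payoff[action]["good"] + (1 - p_good) * payoff[action]["bad"]

def best_value(p_good):
    """Expected utility of the best action given P(good) = p_good."""
    return max(expected_value(p_good, a) for a in payoff)

# Value of deciding now, under the prior alone.
prior_value = best_value(probs["good"])

# Predictive probability of a positive signal, then the posterior
# P(good | signal) for each possible signal, via Bayes' theorem.
p_pos = sum(probs[s] * lik_pos[s] for s in probs)
post_pos = probs["good"] * lik_pos["good"] / p_pos
post_neg = probs["good"] * (1 - lik_pos["good"]) / (1 - p_pos)

# Expected value with the signal, averaged over the signal's distribution.
preposterior_value = (p_pos * best_value(post_pos)
                      + (1 - p_pos) * best_value(post_neg))

evsi = preposterior_value - prior_value
```

With these assumed numbers the prior value is 20 and the preposterior value is 31, so the decision maker should pay up to 11 (in utility units) for the signal.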
A lengthy discussion of the modeling of decision-making problems under uncertainty would be inappropriate here. Many books are available for those who would like to pursue this topic further; examples include Raiffa (1968), Keeney and Raiffa (1976), Lindley (1985), Bell, Raiffa, and Tversky (1988), Smith (1988), and Clemen (1991). The point of interest for the purposes of this chapter is that Bayesian procedures are of great value in the modeling of decision-making problems under uncertainty.
Bayesian statistics provides a unified framework within which to approach problems of inference and decision making under uncertainty. It uses probability, the mathematical language of uncertainty, to represent uncertainty quantitatively. Moreover, it operates in an intuitively appealing manner, similar in nature to the way individuals react qualitatively to uncertainty. At any given point, the statistician's uncertainty is represented by probabilities for uncertain quantities. As new information is then obtained, these probabilities are revised just as an individual revises judgments upon seeing new evidence.
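This revision process can be made concrete with a small sketch. Consider two hypothetical hypotheses about the success probability of a Bernoulli process, with the probabilities revised after each observation; exact fractions keep the arithmetic transparent (the hypotheses, prior, and data are all made up for illustration):

```python
from fractions import Fraction

# Illustrative sequential probability revision via Bayes' theorem for a
# Bernoulli data-generating process; all numbers are assumptions.

# P(success | hypothesis) under two competing hypotheses.
theta = {"fair": Fraction(1, 2), "biased": Fraction(3, 4)}
prior = {"fair": Fraction(1, 2), "biased": Fraction(1, 2)}

def revise(current, success):
    """Return the posterior after one Bernoulli observation."""
    lik = {h: (theta[h] if success else 1 - theta[h]) for h in current}
    norm = sum(current[h] * lik[h] for h in current)   # predictive probability
    return {h: current[h] * lik[h] / norm for h in current}

# Revise the probabilities as each new observation arrives.
p = prior
for obs in [True, True, False, True]:
    p = revise(p, obs)
```

After three successes and one failure, the posterior odds of "fair" to "biased" are (1/2)^4 : (3/4)^3(1/4) = 16 : 27, so P(biased | data) = 27/43. The same posterior results whether the data are processed one at a time or all at once, a standard property of Bayesian updating.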
The revision of probabilities is accomplished formally via Bayes' theorem, which requires two sets of inputs: the initial probabilities and the likelihoods of the new information given the possible values for the uncertain parameters or variables. The assessment of likelihoods can be done subjectively, but it is typically accomplished by modeling the data-generating process (the process generating the new information). For instance, some models commonly used for dichotomous processes are Bernoulli and Markov models; for a process generat-