Bayesian Statistics: An Overview
Robert L. Winkler, Duke University and INSEAD, Fontainebleau, France
In the Bayesian approach to statistics, an attempt is made to utilize all available information in order to reduce the amount of uncertainty present in an inferential or decision-making problem. As new information is obtained, it is combined with any previous information to form the basis for making inferences or decisions. The formal mechanism used to combine the new information with the previously available information is known as Bayes' theorem; this explains why this general approach to statistics is known as Bayesian statistics. Bayes' theorem involves the use of probabilities, which is only natural, since probability can be thought of as the mathematical language of uncertainty. At any given point in time, the statistician's state of information about some uncertain quantity can be represented by a set of probabilities. When new information is obtained, these probabilities are revised so that they represent all of the available information.
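The revision mechanism described above can be sketched concretely. The following minimal example (hypothetical, not from the article) applies Bayes' theorem to a discrete set of hypotheses: each prior probability is multiplied by the likelihood of the observed data under that hypothesis, and the results are normalized by the total probability of the data.

```python
def bayes_update(priors, likelihoods):
    """Revise prior probabilities in light of new information.

    priors      -- dict mapping each hypothesis to its prior probability
    likelihoods -- dict mapping each hypothesis to P(data | hypothesis)
    Returns the posterior distribution over the same hypotheses.
    """
    # Unnormalized posterior: prior times likelihood for each hypothesis.
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    # P(data): the total probability of the observed data.
    evidence = sum(joint.values())
    return {h: joint[h] / evidence for h in joint}


# Hypothetical example: a coin is either fair or biased toward heads
# (P(heads) = 0.8), with equal prior probability on each possibility.
priors = {"fair": 0.5, "biased": 0.5}

# New information: a single toss lands heads.
likelihoods = {"fair": 0.5, "biased": 0.8}

posterior = bayes_update(priors, likelihoods)
# The revised probabilities now favor the biased coin, 0.8/1.3 to 0.5/1.3.
```

If a second toss were observed, the posterior above would serve as the prior for the next application of `bayes_update`, illustrating how probabilities are revised sequentially as information accumulates.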
An important feature of the Bayesian approach is the notion that any variable about which the statistician is uncertain can be treated as a random variable. This means, for example, that parameters of statistical models (e.g., a Bernoulli proportion, a normal mean, a regression coefficient) can be viewed as random variables instead of as fixed, unknown quantities. As a result, probability distributions for such parameters can be considered and can be updated as new information is obtained. Indeed, the primary inferential statement about a parameter is a probability distribution for the parameter, and other types of inferences, such as point estimates, interval estimates, and tests of hypotheses, if desired, are viewed as secondary and are based on this probability distribution. There-
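As a concrete illustration of treating a parameter as a random variable, the following sketch (a standard conjugate example, assumed here rather than taken from the article) places a Beta prior on a Bernoulli proportion p, updates it with observed data, and then derives a point estimate as a secondary inference from the posterior distribution.

```python
from math import sqrt


def beta_update(alpha, beta, successes, failures):
    """Beta(alpha, beta) prior + Bernoulli data -> Beta posterior.

    With a Beta prior on a Bernoulli proportion p, the posterior after
    observing the data is again Beta, with the counts simply added in.
    """
    return alpha + successes, beta + failures


# Uniform prior on p: Beta(1, 1), expressing little initial information.
alpha, beta = 1.0, 1.0

# New information: 7 successes and 3 failures in 10 Bernoulli trials.
alpha, beta = beta_update(alpha, beta, successes=7, failures=3)

# Secondary inferences derived from the posterior distribution:
# a point estimate (the posterior mean of p) ...
posterior_mean = alpha / (alpha + beta)
# ... and the posterior standard deviation, from the Beta distribution.
posterior_var = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))
posterior_sd = sqrt(posterior_var)
```

The point here is the order of operations: the posterior distribution for p is the primary result, and the mean and standard deviation (or an interval estimate) are read off from it afterward.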