Bayesian Binomial Regression: Predicting Survival at a Trauma Center


1. INTRODUCTION

The recent development of Monte Carlo methods has eliminated most of the difficulties historically associated with Bayesian analyses of nonlinear models. This paper illustrates the simplicity of a fully Bayesian approach to binomial regression models using data from the University of New Mexico Trauma Center. In particular, we discuss a prior specification that focuses on eliciting binomial probabilities, rather than specifications for the more esoteric regression coefficients. We use simple Monte Carlo methods for prediction, inferences on regression coefficients and probabilities, diagnostics, link selection, and sensitivity analysis of the prior. A complete analysis can be handled easily and accurately within this framework.

Most of the methods discussed have appeared elsewhere. Leonard (1972) discussed Bayesian hierarchical models for binomial data. Zellner and Rossi (1984) gave an overview of Bayesian methods for binomial regression models. Johnson (1985) introduced predictive case deletion diagnostics for binomial regression. We integrate their ideas along with Bedrick, Christensen, and Johnson's (1996) (hereafter referred to as BCJ) ideas on specifying priors to provide a variety of tools appropriate for analyzing binomial response data.

Consider regression data $(y_i, x_i')$, $i = 1, \ldots, n$, where the $y_i$s are success proportions from independent binomial $N_i$ random variables and the $x_i$s are known $k$-vectors of covariates. The probability of success for any single trial $y$ with covariate $x$ is $r(x'\beta)$, that is, $r(x'\beta) \equiv p(y = 1 \mid x, \beta) = p$, where $\beta$ is an unknown $k$-vector of regression coefficients. The function $r(\cdot)$ can be an arbitrary cdf, but we will assume without much loss of generality that $r(\cdot)$ corresponds to either the logistic, probit, or complementary log-log models, that is,

$$r(x'\beta) = \frac{e^{x'\beta}}{1 + e^{x'\beta}}, \qquad r(x'\beta) = \Phi(x'\beta), \qquad \text{or} \qquad r(x'\beta) = 1 - \exp(-e^{x'\beta}).$$

The link function is $r^{-1}(p) = x'\beta$, where $r^{-1}(p) = \log\{p/(1-p)\}$, $\Phi^{-1}(p)$, and $\log\{-\log(1-p)\}$ for the three models, respectively. The likelihood for data $Y = (y_1, \ldots, y_n)'$ is

$$L(\beta \mid Y) \propto \prod_{i=1}^{n} r(x_i'\beta)^{N_i y_i}\,\{1 - r(x_i'\beta)\}^{N_i(1 - y_i)}.$$
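To fix ideas, the following is a minimal Python sketch of the three inverse links and the resulting binomial log-likelihood. The function names inv_link and log_likelihood are our own illustration, not code from the paper.

    import numpy as np
    from scipy.stats import norm

    def inv_link(eta, link="logit"):
        # Success probability r(x'beta) for the logistic, probit,
        # and complementary log-log models.
        if link == "logit":
            return 1.0 / (1.0 + np.exp(-eta))
        if link == "probit":
            return norm.cdf(eta)
        if link == "cloglog":
            return 1.0 - np.exp(-np.exp(eta))
        raise ValueError("unknown link: " + link)

    def log_likelihood(beta, X, y, N, link="logit"):
        # y[i] is a success *proportion* from N[i] Bernoulli trials,
        # so N[i] * y[i] is the observed success count.
        p = inv_link(X @ beta, link)
        return np.sum(N * y * np.log(p) + N * (1.0 - y) * np.log(1.0 - p))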

For a prior $\pi(\beta)$, the posterior of $\beta$ is

$$\pi(\beta \mid Y) = \frac{L(\beta \mid Y)\,\pi(\beta)}{\int L(\beta \mid Y)\,\pi(\beta)\,d\beta}.$$

Most interesting aspects of a Bayesian analysis are obtained from various integrals involving $\pi(\beta \mid Y)$. Such integrals are intractable, so we use simulation methods to obtain approximations. Two popular approaches use importance sampling and Gibbs sampling (Zellner and Rossi 1984; Dellaportas and Smith 1993). Alternative approaches based on Laplace approximations (Tierney and Kadane 1986) and numerical integration (Smith, Skene, Shaw, Naylor, and Dransfield 1985) are available, but are less commonly used.

Simulation methods yield a discrete approximation to the posterior distribution taking values $\beta^i$ with probability $q_i$, $i = 1, \ldots, t$, as discussed in Section 3. Given a function $h(\beta)$, the posterior expectation $\theta_h \equiv E\{h(\beta) \mid Y\}$ is approximated by

(1.1) $\hat{\theta}_h = \sum_{i=1}^{t} h(\beta^i)\, q_i$.

Typically, the Strong Law of Large Numbers applies to give almost sure convergence as $t$ increases.
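To make (1.1) concrete: under importance sampling, the $\beta^i$ are drawn from a proposal density $g(\cdot)$ and $q_i \propto \pi(\beta^i) L(\beta^i \mid Y) / g(\beta^i)$, normalized to sum to one. The sketch below is our illustration rather than code from the paper; it reuses log_likelihood from the earlier sketch and assumes a multivariate normal proposal, with the name posterior_expectation and its arguments hypothetical.

    import numpy as np
    from scipy.stats import multivariate_normal

    def posterior_expectation(h, X, y, N, prop_mean, prop_cov, t=10000,
                              link="logit", log_prior=lambda b: 0.0, seed=0):
        # Approximate E{h(beta)|Y} as in (1.1) by importance sampling:
        # draw beta^1, ..., beta^t from the proposal g, then weight each
        # draw by pi(beta) L(beta|Y) / g(beta), normalized to sum to one.
        rng = np.random.default_rng(seed)
        draws = rng.multivariate_normal(prop_mean, prop_cov, size=t)
        log_g = multivariate_normal.logpdf(draws, prop_mean, prop_cov)
        log_w = np.array([log_likelihood(b, X, y, N, link) + log_prior(b)
                          for b in draws]) - log_g
        q = np.exp(log_w - log_w.max())   # stabilize before exponentiating
        q = q / q.sum()                   # normalized weights q_i
        return sum(q_i * h(b) for q_i, b in zip(q, draws))

In practice the proposal mean and covariance might come from a maximum likelihood fit, and heavier-tailed proposals such as a multivariate $t$ are often preferred to guard against unbounded weights.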

Section 2 discusses standard Bayesian inference, with emphasis on the predictive distribution. It includes influence measures and a procedure for selection of the appropriate link function. Section 3 discusses computational issues. Section 4 contains concluding remarks and other suggested source material.

2. BAYESIAN INFERENCE

2.1 Specifying the Prior

Several methods of specifying priors for binomial regression problems have been proposed. The standard approach has been to assume either a normal prior or the diffuse prior $\pi(\beta) = 1$ for the regression coefficients. …