Austin, P. C., and Tu, J. V. (2004), "Bootstrap Methods for Developing Predictive Models," The American Statistician, 58, 131-137: Comment by Sauerbrei, Royston, and Schumacher and Reply

In the article "Bootstrap Methods for Developing Predictive Models," Austin and Tu used bootstrap resampling in conjunction with automated methods of variable selection with the aim of developing parsimonious prediction models. As briefly mentioned in the article, they used a simplification of an approach proposed by two of us more than a decade ago (Sauerbrei and Schumacher 1992). As in our article, they used backward elimination within each of a large number of bootstrap samples to develop a predictive model. For a given variable X they determined the proportion of bootstrap samples in which that variable was selected; this quantity, which we termed the relative inclusion frequency or inclusion fraction, h(X), is then used to decide whether a specific variable X is included in the final model.
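For readers who wish to experiment with this procedure, the following minimal sketch outlines the inclusion-fraction computation in Python. It is an illustration under stated assumptions (a logistic regression setting, p-value-based backward elimination via statsmodels, and the variable and parameter names shown), not the code used by Austin and Tu or in our earlier work.

```python
# Sketch of the bootstrap inclusion-fraction idea, under the assumptions
# stated above. X is an (n, p) array of candidate predictors, y a binary
# outcome; alpha and n_boot are illustrative choices.
import numpy as np
import statsmodels.api as sm

def backward_eliminate(X, y, alpha=0.05):
    """Backward elimination for logistic regression: repeatedly drop the
    predictor with the largest p-value until all p-values fall below alpha.
    Returns the indices of the retained columns of X."""
    kept = list(range(X.shape[1]))
    while kept:
        fit = sm.Logit(y, sm.add_constant(X[:, kept])).fit(disp=0)
        pvals = fit.pvalues[1:]          # skip the intercept
        worst = int(np.argmax(pvals))
        if pvals[worst] < alpha:
            break                        # all remaining predictors significant
        kept.pop(worst)
    return kept

def inclusion_fractions(X, y, n_boot=1000, alpha=0.05, seed=0):
    """h(X_j): the proportion of bootstrap samples in which variable j
    survives backward elimination. Real data may need care with
    non-convergent fits; this sketch ignores that."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)      # resample rows with replacement
        counts[backward_eliminate(X[idx], y[idx], alpha)] += 1
    return counts / n_boot
```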

Although we are pleased that this sensible approach was taken and resulted in a useful prediction model, we would like to comment on some aspects of the analysis that we regard as weaknesses. We will also refer to some recent developments that may improve the modeling of continuous predictors.

The authors illustrated their approach in a case study to predict 30-day mortality for patients with acute myocardial infarction (AMI, or heart attack). They created a series of seven predictive models containing the variables selected in at least 100%, 80%, 60%, 50%, 40%, 20%, and 10% of bootstrap replicates, respectively. They also considered an eighth model, the full model containing all 30 candidate variables. Setting aside a validation dataset comprising one third of the original sample of 3,882 patients with complete data, they compared the models by assessing their performance in the validation dataset with respect to goodness of fit (the Hosmer-Lemeshow statistic) and discriminative ability (the c-index).
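The comparison step can be sketched as follows: for each inclusion threshold, keep the variables with h(X_j) at or above the threshold, refit on the derivation data, and score the held-out third. This is an illustration only; the threshold list, helper names, and use of the ROC AUC as the c-index for a binary outcome are assumptions, and the Hosmer-Lemeshow computation is omitted.

```python
# Illustrative evaluation of the threshold-indexed model series on a
# held-out validation set. h is the vector of inclusion fractions from
# the sketch above; names and thresholds are hypothetical.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def c_index_by_threshold(h, X_dev, y_dev, X_val, y_val,
                         thresholds=(1.0, .8, .6, .5, .4, .2, .1)):
    results = {}
    for t in thresholds:
        kept = np.where(h >= t)[0]       # variables selected often enough
        if kept.size == 0:
            continue
        fit = sm.Logit(y_dev, sm.add_constant(X_dev[:, kept])).fit(disp=0)
        pred = fit.predict(sm.add_constant(X_val[:, kept]))
        results[t] = roc_auc_score(y_val, pred)  # c-index = AUC for binary y
    return results
```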

Except for the simplest model, consisting only of the three variables selected in all 1,000 bootstrap replications, p-values of the goodness-of-fit test were all much larger than .10. This simplest model had a c-index of .771; the c-indices of the other seven models varied only slightly (.802-.824). The authors stated that their model based on the eight variables with h(X_i) >= .60 compares favorably with models reported in the literature, and that it is more parsimonious than most of them. They concluded that bootstrap resampling in conjunction with automated model selection methods could identify a parsimonious model with excellent predictive performance.

Their main result confirms our experience that simple models including only the "strong" predictors have discriminative ability similar to that of more complicated models using more variables (Sauerbrei 1999). Usually the linear predictors from simple and complex models are highly correlated. For example, in a study of patients with glioma, Pearson correlation coefficients were between .94 and .99 for the prognostic indices from the full model with 15 variables and three models with 9, 5, and 4 variables derived with backward elimination at nominal significance levels of .157, .05, and .01, respectively (Sauerbrei 1999). In addition to the high correlation between the prognostic indices, it should be borne in mind that "weak" predictors will be included only in models with many variables. Apart from some "strong" factors, several "weak" and uninfluential factors are typically considered as potential predictors. Because the inclusion of a variable depends on estimated regression coefficients rather than on the true (unknown) values, a weak predictor is more likely to be included in a model if the corresponding regression coefficient is overestimated (Copas and Long 1991). Clearly, in new data weak predictors will lose a substantial part of their "partial" predictive ability.
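The correlation check mentioned above can be reproduced in a few lines. This is a sketch under assumptions: it uses logistic regression for concreteness (the cited glioma study concerned prognostic models for survival), and the function and argument names are hypothetical.

```python
# Pearson correlation between the prognostic indices (linear predictors)
# of a full model and a reduced model fit to the same data. `kept` lists
# the column indices retained by the reduced model.
import numpy as np
import statsmodels.api as sm

def prognostic_index_correlation(X, y, kept):
    full = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    red = sm.Logit(y, sm.add_constant(X[:, kept])).fit(disp=0)
    lp_full = sm.add_constant(X) @ full.params           # full-model index
    lp_red = sm.add_constant(X[:, kept]) @ red.params    # reduced-model index
    return np.corrcoef(lp_full, lp_red)[0, 1]
```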

As discussed more than a decade ago (Chen and George 1985; Altman and Andersen 1989; Sauerbrei and Schumacher 1992), methods of variable selection will often exclude weaker factors in a bootstrap sample, resulting in smaller relative inclusion frequencies. …