Austin, P. C., and Tu, J. V. (2004), "Bootstrap Methods for Developing Predictive Models," The American Statistician, 58, 131-137: Comment by Sauerbrei, Royston, and Schumacher and Reply

By Sauerbrei, W., Royston, P., and Schumacher, M. | The American Statistician, February 2005



In the article "Bootstrap Methods for Developing Predictive Models," Austin and Tu used bootstrap resampling in conjunction with automated methods of variable selection with the intention of developing parsimonious prediction models. As briefly mentioned in the article, they used a simplification of an approach proposed by two of us more than a decade ago (Sauerbrei and Schumacher 1992). As in our article, they used backward elimination within each of a large number of bootstrap samples to develop a predictive model. For a given variable X they determined the proportion of bootstrap samples in which that variable was selected; this quantity, which we termed the relative inclusion frequency or inclusion fraction, h(X), is then used to decide whether a specific variable X is included in the final model.
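The procedure just described — backward elimination repeated in each bootstrap sample, with h(X) the fraction of samples in which X survives — can be sketched as follows. This is a minimal illustration on synthetic data using an ordinary least-squares model and a normal-approximation p-value, not the logistic model or AMI data of the article; the variable names, sample size, and significance level are all illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative only): y depends strongly on x0,
# weakly on x1, and not at all on x2..x4.
n, p = 500, 5
X = rng.normal(size=(n, p))
true_beta = np.array([1.0, 0.1, 0.0, 0.0, 0.0])
y = X @ true_beta + rng.normal(size=n)

def backward_elimination(X, y, alpha=0.05):
    """OLS backward elimination: repeatedly drop the predictor with the
    largest two-sided p-value (normal approximation) until all remaining
    predictors are significant at level alpha. Returns selected columns."""
    selected = list(range(X.shape[1]))
    while selected:
        Xs = X[:, selected]
        xtx_inv = np.linalg.inv(Xs.T @ Xs)
        b = xtx_inv @ Xs.T @ y
        resid = y - Xs @ b
        s2 = resid @ resid / (len(y) - len(selected))
        se = np.sqrt(s2 * np.diag(xtx_inv))
        # two-sided p-value: 2*(1 - Phi(|z|)) = erfc(|z| / sqrt(2))
        pvals = [math.erfc(abs(z) / math.sqrt(2)) for z in b / se]
        worst = int(np.argmax(pvals))
        if pvals[worst] <= alpha:
            break
        selected.pop(worst)
    return selected

# h(X_j): fraction of bootstrap samples in which variable j is selected.
B = 200
counts = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, size=n)  # resample rows with replacement
    counts[backward_elimination(X[idx], y[idx])] += 1
h = counts / B
print(h)  # strong predictor x0 is selected in (nearly) every sample
```

In this setup the strong predictor ends up with h near 1, the pure-noise predictors with h near the nominal level, and the weak predictor somewhere in between — exactly the gradation that the inclusion-fraction thresholds in the article exploit.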

Although we are pleased that this sensible approach was taken and resulted in a useful prediction model, we would like to comment on some aspects of the analysis that we see as weaknesses. We will also refer to some recent developments that may improve the modeling of continuous predictors.

The authors illustrated their approach in a case study to predict 30-day mortality for patients with acute myocardial infarction (AMI or heart attack). They created a series of seven predictive models containing variables which were selected in at least 100%, 80%, 60%, 50%, 40%, 20%, and 10% of bootstrap replicates. They also considered an eighth model, the full model containing all 30 candidate variables. Setting aside a validation dataset comprising one third of the original large sample of 3,882 patients with complete data, they compared the models by assessing their performance in the validation dataset with respect to goodness of fit (Hosmer-Lemeshow statistic) and discriminative ability (c-index).

Except for the simplest model, consisting only of the three variables selected in all 1,000 bootstrap replications, p values of the goodness-of-fit test were all much larger than .10. This simplest model had a c-index of .771; c-indices of the other seven models varied only slightly (.802-.824). The authors stated that their model based on eight variables with h(X_i) ≥ .60 for all variables X_i compares favorably with models reported in the literature, and that it is more parsimonious than most of them. They concluded that bootstrap resampling in conjunction with automated model selection methods could identify a parsimonious model with excellent predictive performance.
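For a binary outcome such as 30-day mortality, the c-index is the proportion of (event, non-event) patient pairs in which the event case received the higher predicted risk, with ties counting one half; it coincides with the area under the ROC curve. A minimal sketch, with made-up risks and outcomes:

```python
import numpy as np

def c_index(risk, outcome):
    """Concordance (c) index for a binary outcome: fraction of
    (event, non-event) pairs in which the event case has the higher
    predicted risk; ties count half. Equivalent to the ROC AUC."""
    risk = np.asarray(risk, dtype=float)
    outcome = np.asarray(outcome, dtype=bool)
    pos, neg = risk[outcome], risk[~outcome]
    diff = pos[:, None] - neg[None, :]  # all pairwise comparisons
    return float(np.mean(diff > 0) + 0.5 * np.mean(diff == 0))

# Hypothetical predicted risks and observed deaths for six patients.
risk = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
died = [1,   1,   0,   1,   0,   0]
print(round(c_index(risk, died), 3))  # → 0.889 (8 of 9 pairs concordant)
```

A c-index of .5 corresponds to risk scores no better than chance, and 1.0 to perfect separation of deaths from survivors, which puts the reported range of .771-.824 in context.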

Their main result confirms our experience that simple models including only the "strong" predictors have discriminative ability similar to that of more complicated models using more variables (Sauerbrei 1999). Usually the linear predictors from simple and complex models are highly correlated. For example, in a study of patients with glioma, Pearson correlation coefficients were between .94 and .99 for prognostic indices from the full model with 15 variables and three models with 9, 5, and 4 variables derived with backward elimination and nominal significance levels of .157, .05, and .01, respectively (Sauerbrei 1999). In addition to the high correlation between the prognostic indices, it should be borne in mind that "weak" predictors will be included only in models with many variables. Apart from some "strong" factors, several "weak" and uninfluential factors are considered as potential predictors. Because the inclusion of a variable depends on estimated regression coefficients rather than the true (unknown) values, a weak predictor is more likely to be included in a model if the corresponding regression coefficient is overestimated (Copas and Long 1991). Clearly, in new data weak predictors will lose a substantial part of their "partial" predictive ability.
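The near-equivalence of full and reduced models can be checked directly by correlating their prognostic indices (linear predictors), as in the glioma example above. A sketch on synthetic data — the coefficients and sample size are arbitrary, chosen so that two predictors dominate — assuming an ordinary least-squares fit for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 6))
# Two strong predictors, two weak ones, two pure noise variables.
beta = np.array([1.0, 0.8, 0.1, 0.05, 0.0, 0.0])
y = X @ beta + rng.normal(size=n)

def ols_coefs(Xs, y):
    """Least-squares coefficients for the given design matrix."""
    return np.linalg.lstsq(Xs, y, rcond=None)[0]

lp_full = X @ ols_coefs(X, y)                 # prognostic index, all 6 variables
lp_small = X[:, :2] @ ols_coefs(X[:, :2], y)  # strong predictors only
r = np.corrcoef(lp_full, lp_small)[0, 1]
print(round(r, 3))  # typically well above .98
```

Because the weak and noise variables contribute little to the variance of the linear predictor, the two indices rank patients almost identically, mirroring the .94-.99 correlations reported for the glioma study.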

As discussed more than a decade ago (Chen and George 1985; Altman and Andersen 1989; Sauerbrei and Schumacher 1992), methods of variable selection will often exclude weaker factors in a bootstrap sample, resulting in smaller relative inclusion frequencies. …
