Assessing Influence in Variable Selection Problems

1. INTRODUCTION

Variable selection techniques are widely used to determine which variables are "important" predictors, to find a reduced set of predictors, or to improve prediction by avoiding overfitting.

Given these goals, it is clearly undesirable for the final model to depend strongly on only a few observations. Measures of influence are thus very important for model building. In this article we examine the use of a "leave-one-out" measure of changes in predicted values to assess the influence of individual observations in model building.
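To make the idea concrete, the following is a minimal sketch of such a leave-one-out measure, not the authors' implementation. The routine select_and_fit is hypothetical: it stands in for any procedure (stepwise, best subsets, etc.) that runs the entire variable selection on the supplied data and returns a fitted model with a predict() method; the squared-distance metric is one simple choice.

    import numpy as np

    def loo_prediction_changes(X, y, select_and_fit):
        # For each case i, re-run the whole selection procedure with case i
        # deleted and measure how much the predicted values change relative
        # to the model selected from the full data.
        n = len(y)
        yhat_full = select_and_fit(X, y).predict(X)
        changes = np.empty(n)
        for i in range(n):
            keep = np.arange(n) != i
            fit_i = select_and_fit(X[keep], y[keep])
            yhat_i = fit_i.predict(X)  # predictions at all n design points
            changes[i] = np.sum((yhat_full - yhat_i) ** 2)
        return changes

Note that the selection is repeated from scratch for each deleted case, so the measure reflects both the change in the selected variables and the change in the fitted coefficients.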

A number of measures of influence have been suggested for ordinary multiple regression. Chatterjee and Hadi (1986), together with the discussion that follows it, provides a good review of methods for defining the influence of individual cases. One of the important points in that paper is that the idea of influence is very broad; Chatterjee and Hadi begin by asking "Influence on what?"

Our perspective in this article is that influence on the predicted values, rather than on the selected variables, is the most useful for the variable selection problem. This may seem paradoxical; the goal in variable selection is (to state a tautology) the selection of predictors. But multicollinearity among the independent variables considerably complicates the picture. It is well known that when the degree of multicollinearity is high, small perturbations of the data induce large fluctuations in the regression coefficients, so that many different models may have very similar fit. It is not surprising, then, that deletion of a single case may change the model without much change in fit. In Section 3 we see that the method suggested here can be used to flag changes in the model, including changes that have little effect on the fit. Also, the computer-intensive method we discuss allows the investigator to maintain a history of changes in the set of selected variables as individual data points are deleted from the data set.
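The bookkeeping just described might be sketched as follows (again a hypothetical illustration, not the authors' code), with select_variables standing in for any selection procedure that returns the chosen column indices:

    import numpy as np

    def selection_history(X, y, select_variables):
        # Record the subset of variables selected when each case is deleted,
        # so that changes in the selected model can be flagged even when the
        # fitted values barely change.
        n = len(y)
        full_model = frozenset(select_variables(X, y))
        history = {}
        for i in range(n):
            keep = np.arange(n) != i
            model_i = frozenset(select_variables(X[keep], y[keep]))
            if model_i != full_model:
                history[i] = sorted(model_i)  # case i changes the model
        return full_model, history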

A number of measures of influence on predicted values have been developed in the context of ordinary multiple regression (without selection), based on the idea of case deletion. The influence of the ith case is assessed by determining the distance between the predicted values estimated from the full data set, W, and those computed from W_{-i}, the data with case i omitted. Commonly used measures of this type include Cook's distance (Cook 1977a) and DFFITS (Belsley, Kuh, and Welsch 1980, p. 15). A case is declared to be influential if this distance is "large," where size is determined by comparison with some reference value. Although these methods often flag very different cases as being influential, the differences between them are essentially differences in metric (Cook and Weisberg 1982, pp. 122-124).
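For reference, these two measures can be written in the usual hat-matrix notation (a standard statement, not reproduced from this excerpt), with h_{ii} the ith leverage, e_i the ith residual, p the number of estimated coefficients, s^2 the residual mean square, and s_{(i)} its estimate with case i deleted:

    D_i = \frac{(\hat{y} - \hat{y}_{(i)})^{\top}(\hat{y} - \hat{y}_{(i)})}{p\,s^2}
        = \frac{e_i^2}{p\,s^2}\,\frac{h_{ii}}{(1 - h_{ii})^2},
    \qquad
    \mathrm{DFFITS}_i = \frac{\hat{y}_i - \hat{y}_{(i),i}}{s_{(i)}\sqrt{h_{ii}}}
        = \frac{e_i\sqrt{h_{ii}}}{s_{(i)}(1 - h_{ii})}.

Both quantify the same shift in predicted values under deletion of case i; they differ only in the metric and scale estimate used, which is the sense in which the differences between such measures are "differences in metric."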

On the other hand, there is little explicit advice in the literature on how to assess influence when the fitted model is chosen by a variable selection procedure. One approach is to compute diagnostics (conditionally) on the selected model (see, for example, Neter, Wasserman, and Kutner 1990, pp. 460-465, and Peña and Ruiz-Castillo 1984). Alternatively, Chatterjee and Hadi (1988) studied the impact of simultaneously omitting a case and a variable from the full model. Weisberg (1981) introduced a statistic for computing the contribution of each case to Mallows's C_p (Mallows 1973).

But none of these approaches directly addresses the model selection aspect of the problem. Computing diagnostics on the selected model flags potential problems with that model, but does not address which other models might have been selected under small perturbations of the data. The Weisberg statistic is useful for determining, among models with similar C_p, those models that are least influenced by individual cases, but it does not assist in determining which models might have been selected when cases that contribute disproportionately are omitted. (We demonstrate in Sec. …