This chapter is concerned with the common sense of multiple regression. One purpose is to indicate the potential of multiple regression analysis, together with some of its pitfalls. A second purpose is to provide a framework to understand published applications.
A new approach has been adopted. Formulas are largely omitted because few are needed to understand regression analysis. No attempt is made to teach regression calculations, which are better left to the computer. Instead, the discussion is concerned with conceptual understanding and practical applications.
Multiple regression is mainly useful for prediction, for example in personnel selection or medical diagnosis. Multiple regression has notable capabilities: It can pool the predictive power of multiple predictor variables that may be quite different in nature; it is not fooled by mere face validity; it does not suffer from the cognitive biases that afflict even experts; and it can learn from experience. In practice, multiple regression typically outpredicts expert judges.
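The pooling of quite different predictors can be sketched in a few lines of code. The example below is illustrative only and not from this chapter; the variable names (an aptitude test score and an interview rating predicting job performance) and the numerical values are hypothetical, chosen to mimic a personnel-selection setting. Ordinary least squares finds the weights that best combine the two predictors.

```python
# Illustrative sketch (hypothetical data): pooling two dissimilar predictors
# with ordinary least squares, using only NumPy.
import numpy as np

rng = np.random.default_rng(0)
n = 200
test_score = rng.normal(50, 10, n)   # e.g., a paper-and-pencil aptitude test
interview = rng.normal(5, 2, n)      # e.g., an interviewer's 1-10 rating

# Hypothetical criterion: depends on both predictors, plus noise.
performance = 0.6 * test_score + 2.0 * interview + rng.normal(0, 5, n)

# Design matrix with an intercept column; least squares pools the predictors.
X = np.column_stack([np.ones(n), test_score, interview])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)

# Prediction for a new applicant: intercept, test score 55, interview rating 7.
new_applicant = np.array([1.0, 55.0, 7.0])
prediction = new_applicant @ coef
```

The fitted weights are learned from past cases, which is the sense in which the method "learns from experience": each new batch of outcome data can be folded into the fit.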
A quite different use of regression analysis concerns interpretation and causal inference. One interpretational use is with controlled variables in experimental design. Multiple regression can have certain advantages over standard factorial Anova, especially in allowing simpler designs.
The other interpretational use of multiple regression is with uncontrolled variables. The hope is that observational data can be made to yield causal inferences by “controlling for” or “partialing out” effects of uncontrolled variables. Such applications are minefields (Section 16.2, pages 501 ff).
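What "partialing out" does mechanically can be shown with a small illustrative sketch (hypothetical variables, not from this chapter): the partial relation between x and y, controlling for z, is the relation between the residuals left after regressing each on z. The sketch also shows the lure of such analyses, since a correlation produced entirely by a background variable z vanishes once z is partialed out; the minefield is that the adjustment is only as good as the model and the measurement of z.

```python
# Illustrative sketch (hypothetical data): partialing out a background
# variable z from both x and y via least-squares residuals.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
z = rng.normal(size=n)             # uncontrolled background variable
x = 0.8 * z + rng.normal(size=n)   # x driven partly by z
y = 0.8 * z + rng.normal(size=n)   # y driven partly by z, not at all by x

def residuals(v, z):
    # Residuals of v after least-squares regression on z (with intercept).
    Z = np.column_stack([np.ones(len(z)), z])
    b, *_ = np.linalg.lstsq(Z, v, rcond=None)
    return v - Z @ b

r_raw = np.corrcoef(x, y)[0, 1]    # substantial, but entirely spurious
r_partial = np.corrcoef(residuals(x, z), residuals(y, z))[0, 1]  # near zero
```

Here the raw correlation of x and y is sizable even though neither affects the other, and it disappears when z is controlled. In observational data, however, z is rarely known, complete, or measured without error, which is why Section 16.2 treats such applications with caution.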