Empirical Economics? An Econometric Dilemma with Only a Methodological Solution
Stanley, T. D., Journal of Economic Issues
Conventional econometric practice begins with a general model or family of models. Various model specifications are explored, and the one that best fits the data is chosen. If, however, researchers regard the chosen model as merely a convenient summary of the data, what can be learned about economic theory or our explanation of economic phenomena? Nothing, unless theory is placed at genuine risk of rejection or revision by observation.
Rarely does applied econometrics begin with a theory (and when it does, it is utility theory). Rarer still are those studies in which a specific utility function is used to derive the researcher's econometric model. Because econometric models are not regarded as the theory under examination, researchers usually do not bother to accumulate evidence for or against a specific econometric representation. Rather, the development and application of new statistical techniques propel model evaluation and evolution.
Econometric inference is only as trustworthy as the underlying statistical assumptions. The necessary assumptions concern specific distributional properties of the errors in the associated econometric model.(1) Such errors receive little attention from economic theorists and are added grudgingly at the end of econometric models. Nonetheless, it is the proper specification of these error distributions, along with the exact structure of the interdependence, that sanctions our empirical economic inferences. The necessity of correctly specifying the underlying statistical model is always at issue in actual econometric applications. Questions of proper specification affect the interpretation of any empirical economic evidence. Unfortunately, we can never know whether our economic models are correctly specified.
The purpose of this essay is to reveal the epistemological constraint, the "Duhem-Quine thesis," that lies at the heart of econometrics and to offer a methodological solution.(2) The fundamental nature of this epistemological limitation prohibits a technical solution. Rather, a sound empirical economics requires an explicit methodological solution to the pretest/specification dilemma. The acceptance of this methodological proposal would change what economists deem "theory" and the way in which conventional "economic theory" is regarded. Following presentation of this philosophical argument, the proposed econometric methodology will be illustrated by reviewing efforts to explain the "consumption puzzle" - i.e., the difference between the short-run and long-run propensities to consume.
The Pretest/Specification Dilemma
Blind mechanical application of one particular criterion, or many criteria, is not a satisfactory strategy. All of the criteria suffer from the defects of preliminary-test estimation [Griffiths, Hill, and Judge 1993, 342].
Making errors is an inevitable consequence of statistical analysis. In fact, mastering error distributions is the basis of statistical models and reasoning. Both statistics and econometrics make fallibility their foundation and their strength. Although statistical methods adequately confront probability and error when the generating distributions of the relevant processes are known, these distributions are not generally known. Economists have been especially reluctant to regard probability and error distributions as part of the phenomenon about which they need to theorize.
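The "defects of preliminary-test estimation" quoted above can be made concrete with a small simulation. The sketch below is illustrative and not from the article: all numbers (sample size, coefficients, the collinear second regressor) are assumptions chosen for the demonstration. A researcher estimates a two-regressor model, drops the second regressor whenever its t-ratio is insignificant, and reports the coefficient on the first. Because the selection step is itself random, the resulting "pretest estimator" is biased even though each of the two candidate estimators would be well understood on its own.

```python
import numpy as np

# Hypothetical simulation of the preliminary-test (pretest) defect:
# choosing a specification by a significance test distorts the
# sampling distribution of the final reported estimate.
rng = np.random.default_rng(0)

n, reps = 50, 5000
beta1, beta2 = 1.0, 0.3          # true coefficients (assumed for illustration)
estimates = []

for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = 0.7 * x1 + rng.normal(scale=0.5, size=n)   # collinear regressor
    y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)

    # Full model: OLS of y on (x1, x2), no intercept
    X = np.column_stack([x1, x2])
    b_full, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Pretest: keep x2 only if its t-ratio is "significant" at the 5% level
    resid = y - X @ b_full
    s2 = resid @ resid / (n - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    t2 = b_full[1] / np.sqrt(cov[1, 1])
    if abs(t2) > 1.96:
        estimates.append(b_full[0])
    else:
        # re-estimate the restricted model omitting x2
        estimates.append((x1 @ y) / (x1 @ x1))

bias = np.mean(estimates) - beta1
print(f"pretest estimator bias for beta1: {bias:+.3f}")
```

Because the weak second regressor is frequently (and wrongly) discarded, the reported coefficient inherits an omitted-variable bias in a large share of samples, and no textbook standard error for either candidate model describes the mongrel estimator actually produced.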
Econometric techniques are only as good as the accuracy of the assumptions upon which they are based. A specific statistical test or econometric model will give valid implications, albeit probabilistic ones, to the extent that the underlying statistical and causal structures have been correctly identified and modeled - thus, the problem of specification. Unlike Friedman's "positive economics," which asserts that the assumptions of economic theory are irrelevant [Friedman 1953; Blaug 1980], econometricians fully acknowledge the dependence of empirical inference on statistical assumptions. …
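The dependence of inference on the assumed error distribution can also be shown directly. The following sketch is a constructed example, not drawn from the article: it assumes a simple regression through the origin in which the error variance grows with the regressor, violating the homoskedasticity assumption behind the classical OLS standard error. Nominal 95% confidence intervals then cover the true slope noticeably less often than advertised.

```python
import numpy as np

# Minimal sketch (assumed setup): OLS inference is valid only if the
# error assumptions hold. Here the errors are heteroskedastic
# (variance grows with x), so classical intervals undercover.
rng = np.random.default_rng(1)

n, reps, beta = 40, 4000, 2.0
covered = 0
for _ in range(reps):
    x = rng.uniform(0.1, 2.0, size=n)
    e = rng.normal(scale=x**2, size=n)               # variance depends on x
    y = beta * x + e
    b = (x @ y) / (x @ x)                            # OLS through the origin
    resid = y - b * x
    se = np.sqrt(resid @ resid / (n - 1) / (x @ x))  # classical standard error
    if abs(b - beta) < 1.96 * se:
        covered += 1

coverage = covered / reps
print(f"actual coverage of nominal 95% intervals: {coverage:.3f}")
```

The arithmetic of the test statistic is untouched; only an auxiliary assumption about the errors has failed, yet every probabilistic claim built on that statistic is now wrong by an unknown amount - which is precisely the specification problem.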