Academic journal article Review - Federal Reserve Bank of St. Louis

Model Fit and Model Selection/Commentary


This paper uses an example to show that a model that fits the available data perfectly may provide worse answers to policy questions than an alternative, imperfectly fitting model. The author argues that, in the context of Bayesian estimation, this result can be interpreted as being due to the use of an inappropriate prior over the parameters of shock processes. He urges the use of priors that are obtained from explicit auxiliary information, not from the desire to obtain identification. (JEL C11, E40, E60)

Federal Reserve Bank of St. Louis Review, July/August 2007, 89(4), pp. 349-60.

In an influential recent paper, Smets and Wouters (2003) construct a dynamic stochastic general equilibrium (DSGE) model with a large number of real and nominal frictions and estimate the unknown parameters of the model using sophisticated Bayesian techniques. They document that the estimated model has out-of-sample forecasting performance superior to that of an unrestricted vector autoregression. They write of their findings (p. 1125), "This suggests that the current generation of SDGE [stochastic dynamic general equilibrium] models with sticky prices and wages is sufficiently rich to capture the stochastics and the dynamics in the data, as long as a sufficient number of structural shocks is considered. These models can therefore provide a useful tool for monetary policy analysis" (italics added for emphasis). The European Central Bank (ECB) agrees. It is planning to begin using models with explicit micro-foundations for the first time in its analyses of monetary policy. In doing so, it is explicitly motivated by the Smets and Wouters (2003) analysis.1

Smets and Wouters and the ECB are adherents to what one might call the principle of fit. According to this principle, models that fit the available data well should be used for policy analysis; models that do not fit the data well should not be. The principle underlies much of applied economic analysis. It is certainly not special to sophisticated users of econometrics: Even calibrators who use little or no econometrics in their analyses believe in the principle of fit. Indeed, there are literally dozens of calibration papers concerned with figuring out what perturbation in a given model will lead it to fit one or two extra moments (like the correlation between hours and output or the equity premium).

In this paper, I demonstrate that the principle of fit does not always work. I construct a simple example economy that I treat as if it were the true world. In this economy, I consider an investigator who wants to answer a policy question of interest and estimates two models to do so. I show that model 1, which has a perfect fit to the available data, may actually provide worse answers than model 2, which has an imperfect fit.

The intuition behind this result is quite simple. The policy question of interest concerns how labor responds to a change in the tax rate. The answer depends on the elasticity of labor supply. In both models, the estimate of this parameter hinges on a particular non-testable assumption about how stochastic shocks to the labor-supply curve covary with tax rates. When model 2's identification restriction is closer to being correct than model 1's, model 2 provides a better answer to the policy question, even though its fit is always worse.
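The logic can be illustrated with a stylized simulation. This is not the paper's actual example economy; the functional form, parameter values, and the assumed covariances below are all hypothetical. The point is only that two models imposing different non-testable covariance assumptions recover different elasticity estimates from the same data, and the one whose assumption is closer to the truth wins even though the other fits the regression of labor on taxes exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
true_psi = 2.0  # true labor-supply elasticity (made-up value)
rho = 0.5       # true covariance of supply shocks with the tax rate

# Simulate tax rates and labor-supply shocks that covary with them.
tau = rng.normal(size=T)
eps = rho * tau + rng.normal(scale=0.5, size=T)

# Log labor supply: n = -psi * tau + eps.
n = -true_psi * tau + eps

cov_n_tau = np.cov(n, tau)[0, 1]
var_tau = np.var(tau)

# Model 1: assumes cov(eps, tau) = 0, so the OLS slope is taken at face
# value. This model "fits" the observed (n, tau) relationship perfectly
# but attributes all of the comovement to the elasticity.
psi_model1 = -cov_n_tau / var_tau

# Model 2: assumes cov(eps, tau) = 0.4 (wrong, but closer to the truth
# of 0.5) and adjusts the slope accordingly.
assumed_cov = 0.4
psi_model2 = -(cov_n_tau - assumed_cov) / var_tau

print(psi_model1, psi_model2)  # roughly 1.5 vs. roughly 1.9
```

Because the true covariance is 0.5, model 1's estimate is biased toward roughly 1.5, while model 2's estimate lands near 1.9: closer to the true value of 2.0, despite model 2's worse in-sample fit.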

In the second part of the paper, I consider a potential fix. I enrich the class of possible models by discarding the non-testable assumption mentioned above. The resultant class of models is, by construction, only partially identified; there is a continuum of possible parameter estimates that are consistent with the observed data. I argue that, from the Bayesian perspective, a user of model 1 essentially has an incorrect prior over the set of parameters of this richer third model. As a solution, I suggest using a prior that is carefully motivated from auxiliary information, so that it does not assign zero probability to positive-probability events. …
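The prior argument can also be made concrete with a toy calculation, again using made-up numbers. Suppose the data identify only the combination of the elasticity and the shock covariance (here, the regression slope b = psi - c), so psi itself is partially identified. A dogmatic prior that sets c = 0 with probability one, which is model 1's implicit assumption, collapses the posterior for psi to a single point and assigns zero probability to the true parameter; a prior over c built from auxiliary information instead spreads posterior mass over the identified set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Suppose the data pin down only the slope b = psi - c, estimated here
# (hypothetically) as 1.5, while the shock covariance c is unidentified.
b_hat = 1.5

# Dogmatic prior (model 1's implicit assumption): c = 0 exactly.
# The posterior for psi degenerates to a single point.
psi_point_dogmatic = b_hat

# Informative prior from auxiliary information: c ~ N(0.4, 0.1).
# The posterior for psi inherits this uncertainty: psi = b_hat + c.
c_draws = rng.normal(0.4, 0.1, size=10_000)
psi_posterior = b_hat + c_draws

print(psi_point_dogmatic)        # 1.5, with zero posterior uncertainty
print(psi_posterior.mean())      # roughly 1.9, with dispersion ~0.1
```

If the true elasticity were 2.0, the dogmatic prior would place zero posterior probability on it, whereas the auxiliary-information prior keeps it well inside the posterior's support. This is the sense in which model 1's user "essentially has an incorrect prior" over the richer model's parameters.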
