JOURNAL OF APPLIED ECONOMETRICS, VOL. 9, S123-S144 (1994)
STATISTICAL INFERENCE IN CALIBRATED MODELS
FABIO CANOVA
Department of Economics, Universitat Pompeu Fabra, Balmes 132, 08008 Barcelona, Spain; Department of Economics, Università di Catania, 95100 Catania, Italy; and CEPR
This paper describes a Monte Carlo procedure for assessing the performance of calibrated dynamic general equilibrium models. The procedure formalizes the choice of parameters and the evaluation of the model, and provides an efficient way to conduct a sensitivity analysis for perturbations of the parameters within a reasonable range. As an illustration, the methodology is applied to two problems: the equity premium puzzle and the question of how much of the variance of actual US output is explained by a real business cycle model.
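As a rough illustration of the procedure the abstract describes, the sketch below draws the free parameters of a toy model from a 'reasonable range', simulates the model for each draw, and locates a data statistic within the resulting distribution of simulated statistics. The model (an AR(1) for detrended output), the parameter ranges, and the data value are all illustrative assumptions, not the specification used in the paper.

```python
# Minimal sketch of a Monte Carlo sensitivity analysis: draw parameters
# from a plausible range, simulate the model for each draw, and examine
# the distribution of a statistic of interest. All numbers are made up.
import numpy as np

rng = np.random.default_rng(42)
n_draws, T = 1000, 200

stats = np.empty(n_draws)
for i in range(n_draws):
    rho = rng.uniform(0.85, 0.99)      # shock persistence, drawn from its range
    sigma = rng.uniform(0.005, 0.01)   # shock standard deviation
    e = rng.normal(0.0, sigma, T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + e[t]   # simulated detrended output
    stats[i] = y.var()                 # statistic of interest: output variance

# Where does an 'actual' variance fall in the simulated distribution?
# (The data value here is hypothetical.)
data_variance = 0.003
print(f"Model variance, 5th-95th percentiles: "
      f"{np.percentile(stats, 5):.5f}-{np.percentile(stats, 95):.5f}")
print(f"Fraction of simulations below the data value: "
      f"{(stats < data_variance).mean():.2f}")
```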
The current macroeconometrics literature has proposed two ways to confront general equilibrium rational expectations models with data. The first, an estimation approach, is the direct descendant of the econometric methodology proposed 50 years ago by Haavelmo (1944). The second, a calibration approach, finds its justification in the work of Frisch (1933) and is closely linked to the computable general equilibrium literature surveyed e.g. in Shoven and Whalley (1984).
The two methodologies share the same strategy in terms of model specification and solution. Both approaches start by formulating a fully specified dynamic general equilibrium model and selecting convenient functional forms for preferences, technology, and exogenous driving forces. They then proceed to find a decision rule for the endogenous variables in terms of the exogenous and predetermined variables (the states) and the parameters. When the model is nonlinear, closed-form expressions for the decision rules may not exist, and both approaches rely on recent advances in numerical methods to find an approximate solution which is valid either locally or globally (see e.g. the January 1990 issue of the Journal of Business and Economic Statistics for a survey of the methods and Christiano, 1990, and Dotsey and Mao, 1991, for a comparison of the accuracy of the approximations).
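As an illustration of this solution step, the sketch below computes a decision rule numerically by value function iteration, one global approximation method from this literature. The stochastic growth model used here (log utility, Cobb-Douglas technology, a two-state Markov productivity shock) and all parameter values are illustrative assumptions rather than a specification taken from the paper.

```python
# Minimal sketch: compute the decision rule k' = g(k, z) of a stochastic
# growth model by value function iteration on a grid. Functional forms
# and parameter values are illustrative assumptions.
import numpy as np

alpha, beta, delta = 0.36, 0.96, 0.10    # capital share, discount factor, depreciation
z_grid = np.array([0.97, 1.03])          # two-state productivity shock
P = np.array([[0.9, 0.1],                # Markov transition matrix for z
              [0.1, 0.9]])

k_grid = np.linspace(0.5, 8.0, 200)      # grid for the capital stock (the state)
nk, nz = len(k_grid), len(z_grid)

# Resources available at each (k, z): output plus undepreciated capital
resources = z_grid[None, :] * k_grid[:, None] ** alpha + (1 - delta) * k_grid[:, None]

V = np.zeros((nk, nz))                   # value function guess
policy = np.zeros((nk, nz), dtype=int)   # index of the chosen k'

for _ in range(1000):
    EV = V @ P.T                         # E[V(k', z') | z] for each (k', z)
    # Consumption for every (k, z, k') triple; -inf payoff if infeasible
    c = resources[:, :, None] - k_grid[None, None, :]
    util = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
    candidates = util + beta * EV.T[None, :, :]    # shape (nk, nz, nk')
    V_new = candidates.max(axis=2)
    policy = candidates.argmax(axis=2)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

# The decision rule: next period's capital as a function of the states
k_policy = k_grid[policy]
print("g(k, z) at the middle of the grid:", k_policy[nk // 2, :])
```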
It is when it comes to choosing the parameters used in the simulations and to evaluating the performance of the model that several differences emerge. The first procedure attempts to find the parameters of the decision rule that best fit the data, either by maximum likelihood (ML) (see e.g. Hansen and Sargent, 1979, or Altug, 1989) or by the generalized method of moments (GMM) (see e.g. Hansen and Singleton, 1983, or Burnside et al., 1993). The validity of the specification is examined by testing restrictions, by general goodness-of-fit tests, or by comparing the fit of two nested models. The second approach 'calibrates' the parameters using a set of alternative rules, which include matching long-run averages, appealing to previous microevidence, and a priori selection, and assesses the fit of the model with an informal distance criterion.
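To make the contrast concrete, the sketch below selects the parameters of a consumption Euler equation, E[β(c_{t+1}/c_t)^{-γ}R_{t+1} − 1] = 0, in both ways: by minimizing a GMM criterion in the spirit of Hansen and Singleton (1983), and by fixing γ a priori and choosing β to match a long-run average. The data are simulated and every number is an illustrative assumption, not a result from the paper.

```python
# Minimal sketch contrasting estimation and calibration of (beta, gamma)
# in the Euler equation E[beta * (c_{t+1}/c_t)^(-gamma) * R_{t+1} - 1] = 0.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T = 500
cons_growth = np.exp(rng.normal(0.02, 0.01, T))   # gross consumption growth
R = np.exp(rng.normal(0.04, 0.02, T))             # gross asset return

def gmm_objective(params):
    """Quadratic form in the sample Euler-equation errors (identity weights)."""
    beta, gamma = params
    u = beta * cons_growth ** (-gamma) * R - 1.0  # moment-condition residuals
    g = np.array([u.mean(), (u * R).mean()])      # instruments: constant and R
    return g @ g

# Estimation approach: choose (beta, gamma) to make the sample moments
# as close to zero as possible.
res = minimize(gmm_objective, x0=[0.95, 2.0], method="Nelder-Mead")
print("GMM estimates (beta, gamma):", res.x)

# Calibration approach: fix gamma a priori (e.g. from microevidence) and
# pick beta so that the Euler equation holds on average in the sample.
gamma_calibrated = 2.0
beta_calibrated = 1.0 / np.mean(cons_growth ** (-gamma_calibrated) * R)
print("Calibrated (beta, gamma):", (beta_calibrated, gamma_calibrated))
```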
These differences are tightly linked to the questions the two approaches ask. Roughly speaking, the estimation approach asks the question 'Given that the model is true, how false is it?', while the calibration approach asks 'Given that the model is false, how true is it?'