Federal Reserve Bank of Atlanta, Working Paper Series

General Aggregation of Misspecified Asset Pricing Models

1 Introduction

Stochastic discount factor (SDF) models are routinely rejected when confronted with data. We examine certain aggregations of these models when all are assumed to be misspecified and the true SDF process is not included in the choice set. To be sure, all models are misspecified by design, as they are constructed to be simple approximations to a complex 'Data Generation Process' (DGP). The DGP is a latent object, and models of it are simplified, directed, and partial maps of that object. This is especially true when these models are incompletely specified and are estimated by moment matching. Despite the obvious nature of this statement, its accommodation in practice remains inconsistent and even contradictory in many instances. We argue that the traditional inference objectives require more careful consideration when all models are expressly allowed to be misspecified.
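To fix notation (ours, not drawn from the excerpt), the moment matching referred to above typically imposes the canonical pricing restriction that a candidate SDF m_{t+1}(\theta) price an N-vector of gross test-asset returns R_{t+1}:

\[
E\!\left[\, m_{t+1}(\theta)\, R_{t+1} \,\right] \;=\; \mathbf{1}_N .
\]

Misspecification means that no value of \theta sets all N pricing errors to zero exactly.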

The analysis of misspecified moment condition models is still in its infancy; this is fertile ground for important future research (see Lars Hansen's Nobel lecture, Hansen, 2013). When there are several candidate models, their respective 'pseudo-true' objects, which may allow a misspecification-consistent analysis, are relative objects, specific to each model and even to the estimation criterion that quantifies them (GMM, Kullback-Leibler, likelihood). Model selection and model averaging, and certainly policy analysis, do not have clearly defined objectives in this setting.
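As an illustration of this relativity, in notation we introduce here only for exposition, a pseudo-true parameter under a GMM-type criterion can be defined as the minimizer of the population objective built from the model's moment function g(x_t, \theta) and a weighting matrix W:

\[
\theta^{*}(W) \;=\; \arg\min_{\theta}\; E\!\left[g(x_t,\theta)\right]'\, W\, E\!\left[g(x_t,\theta)\right] .
\]

Under misspecification, E[g(x_t,\theta^{*})] \neq 0, and \theta^{*} generally changes with both the candidate model and the choice of W, with analogous (and generally different) definitions under Kullback-Leibler or likelihood criteria.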

Partial effects, for instance, would refer to different conditional distributions and parameters, as provided by each model. This problem is only partially mitigated in some situations, as in the context of comparing misspecified asset pricing models using the Hansen-Jagannathan distance (Hansen and Jagannathan, 1991, 1997), which uses the inverse of the same second-moment matrix of the test assets to weight the pricing errors of all candidate models. But there is a larger problem here that is inherent to 'model selection', which is designed to choose only one of the candidate models and ignores the information in the remaining models. Model selection may be meaningful only if the 'true' DGP model were in the set of candidate models (the dictionary) and the procedure were consistent. This is a highly unrealistic situation, as all models are misspecified; indeed, 'consistency' in selection seems dubious when the true DGP is not included. A better alternative, one that has been favored for informal reasons and has recently received further theoretical justification, is "aggregation", which includes averaging and pooling.
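For concreteness, and in notation of our own rather than the excerpt's, the Hansen and Jagannathan (1997) distance of a candidate SDF m_t(\theta) can be written as

\[
\delta_{HJ} \;=\; \min_{\theta}\ \sqrt{\, e(\theta)'\, \big(E[R_t R_t']\big)^{-1}\, e(\theta) \,}, \qquad e(\theta) \;=\; E\!\left[m_t(\theta)\, R_t\right] - \mathbf{1}_N ,
\]

so that every candidate model's pricing errors are weighted by the inverse of the same second-moment matrix E[R_t R_t'], which is what makes the resulting distances comparable across models.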

Bernardo and Smith (1994) offer a characterization and a taxonomy of the different views regarding model comparison and selection. The first perspective, which includes Bayesian model averaging and frequentist model selection, is conditioned on one of the models being 'true'. In this approach, the ambiguity about the true model is resolved asymptotically: in the limit, the mixture that summarizes beliefs about the individual models assigns a weight of one to a single model. Diebold (1991) provides an illuminating example of this in the context of Bayesian forecast combination. Another possibility is to assume that a 'true model/DGP' exists but is too complicated or cumbersome to implement, so that all of the candidate models are viewed as approximations and hence misspecified. The third view dispenses completely with the self-contradictory notion of a 'true model' and treats the candidate models as genuinely misspecified, either because they are believed to represent different aspects of the underlying DGP or because the underlying structure is completely unknown. "If models are misspecified in an indeterminate manner, then we should not be aiming at the discovery of the 'true data generating process'" (Maasoumi, 1993). Reasonable models may be statistically consistent with aspects of the data emanating from the latent DGP.

Earlier attempts to accommodate misspecified models in econometrics date back to the mid- and late 1970s. …
