Academic journal article Federal Reserve Bank of Atlanta, Working Paper Series

Ambiguity Aversion and Variance Premium

Article excerpt

1. Introduction

Much attention has been paid to the equity premium puzzle: the high equity premium in the data requires an implausibly high degree of risk aversion in a standard rational representative-agent model to match its magnitude (Mehra and Prescott (1985)). More recently, researchers have realized that such a standard model typically predicts a negligible premium for higher moments such as the variance premium (defined as the difference between the expected stock market variance under the risk-neutral measure and under the objective measure), even with a high risk aversion coefficient. This result, however, is at odds with the sizable variance premium observed in the data, generating the so-called variance premium puzzle. (1)
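In the notation common in this literature, the variance premium just defined can be written as (the symbols below are illustrative, not necessarily the paper's own notation):

```latex
vp_t \;\equiv\; \mathbb{E}^{\mathbb{Q}}_t\!\left[\sigma^2_{t+1}\right] \;-\; \mathbb{E}^{\mathbb{P}}_t\!\left[\sigma^2_{t+1}\right],
```

where \(\sigma^2_{t+1}\) is the stock market return variance over the next period, \(\mathbb{Q}\) the risk-neutral measure, and \(\mathbb{P}\) the objective measure. A positive \(vp_t\) means investors are willing to pay a premium for assets that hedge against high-variance states.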

The goal of this paper is to provide an ambiguity-based explanation for the variance premium puzzle. The Ellsberg (1961) paradox and related experimental evidence point out the importance of distinguishing between risk and ambiguity--roughly speaking, risk refers to the situation where there is a known probability measure to guide choices, while ambiguity refers to the situation where no known probabilities are available. In this paper, we show that ambiguity aversion helps generate a sizable variance premium to closely match the magnitude in the data. In particular, it captures about 96 percent of the average variance premium whereas risk can only explain about 4 percent of it.

To capture ambiguity-sensitive behavior, we adopt the recursive smooth ambiguity model developed by Hayashi and Miao (2011) and Ju and Miao (2012), who generalize the model of Klibanoff, Marinacci and Mukerji (2009). The Hayashi-Ju-Miao model also includes the Epstein-Zin model as a special case in which the agent is ambiguity neutral. Ambiguity aversion is manifested through a pessimistic distortion of the pricing kernel, in the sense that the agent attaches more weight to low continuation values in recessions. This feature generates large countercyclical variation in the pricing kernel. (2) Ju and Miao (2012) show that this large countercyclical variation is important for the model to resolve the equity premium and risk-free rate puzzles and to explain the time variation of the equity premium and equity volatility observed in the data. The present paper shows that it is also important for understanding the variance premium puzzle.
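As a rough sketch, recursive smooth ambiguity utility in this class of models takes a form along the following lines (our notation here is generic and should not be read as the paper's exact specification; see Ju and Miao (2012) for the precise one):

```latex
V_t = \left[(1-\beta)\,c_t^{1-\rho} + \beta\,\big(\mathcal{R}_t(V_{t+1})\big)^{1-\rho}\right]^{\frac{1}{1-\rho}},
\qquad
\mathcal{R}_t(V_{t+1}) = \left\{\mathbb{E}_{\mu_t}\!\left[\Big(\mathbb{E}_{\pi_z}\!\left[V_{t+1}^{1-\gamma}\right]\Big)^{\frac{1-\eta}{1-\gamma}}\right]\right\}^{\frac{1}{1-\eta}},
```

where \(\beta\) is the discount factor, \(\rho\) governs intertemporal substitution, \(\gamma\) is the risk aversion parameter, \(\eta\) the ambiguity aversion parameter, \(\pi_z\) the distribution of consumption growth given hidden state \(z\), and \(\mu_t\) the posterior over \(z\). The case \(\eta = \gamma\) collapses the two expectations into one compound expectation and recovers Epstein-Zin utility (ambiguity neutrality); \(\eta > \gamma\) delivers the pessimistic weighting of low continuation values described above.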

The Hayashi-Ju-Miao model allows for a three-way separation among risk aversion, intertemporal substitution, and ambiguity aversion. This separation is important not only conceptually but also for quantitative applications. In particular, the separation between risk aversion and intertemporal substitution is important for matching the low risk-free rate observed in the data, as is well known in the Epstein-Zin model. In addition, it is important for long-run risks to be priced (Bansal and Yaron (2004)). The separation between risk aversion and ambiguity aversion allows us to decompose the equity premium into a risk premium component and an ambiguity premium component (Chen and Epstein (2002) and Ju and Miao (2012)). We can then fix the risk aversion parameter at a conventionally low value and use the ambiguity aversion parameter to match the mean equity premium in the data. This parameter plays an important role in amplifying and propagating the impact of uncertainty on asset returns and the variance premium.

Following Ju and Miao (2012), we assume that consumption growth follows a regime-switching process (Hamilton (1989)) and that the agent is averse to ambiguity about the hidden regimes. The agent learns about the hidden state from past data. Our adopted recursive smooth ambiguity model incorporates learning naturally. In this model, unlike in a standard Bayesian analysis, the posterior over the hidden state and the conditional distribution of the consumption process given a state cannot be reduced to a single compound predictive distribution in the utility function. …
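The consumption process and learning problem just described can be sketched in a few lines: log consumption growth switches between two hidden regimes, and the agent updates a posterior over the current regime by Bayes' rule each period. All parameter values below are hypothetical illustrations, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-regime parameters: state 0 = expansion, state 1 = recession.
mu = np.array([0.02, -0.01])      # mean log consumption growth by regime
sigma = 0.015                     # common volatility of growth shocks
P = np.array([[0.95, 0.05],       # regime transition matrix (rows = current state)
              [0.20, 0.80]])

def simulate(T):
    """Simulate hidden regimes and observed consumption growth."""
    z = np.zeros(T, dtype=int)
    g = np.zeros(T)
    for t in range(T):
        if t > 0:
            z[t] = rng.choice(2, p=P[z[t - 1]])
        g[t] = mu[z[t]] + sigma * rng.standard_normal()
    return z, g

def filter_beliefs(g):
    """Posterior probability of each regime given growth observed up to t."""
    post = np.full(2, 0.5)                  # flat prior over the hidden state
    path = np.zeros((len(g), 2))
    for t, gt in enumerate(g):
        prior = P.T @ post                  # predict: one-step-ahead state probs
        lik = np.exp(-0.5 * ((gt - mu) / sigma) ** 2)  # Gaussian likelihoods
        post = prior * lik                  # Bayes update ...
        post /= post.sum()                  # ... and normalize
        path[t] = post
    return path

z, g = simulate(200)
beliefs = filter_beliefs(g)
# With persistent, well-separated regimes the filtered belief should
# track the true regime most of the time.
accuracy = np.mean(beliefs.argmax(axis=1) == z)
print(f"filtered-regime accuracy: {accuracy:.2f}")
```

Under ambiguity neutrality the agent would simply average over `beliefs` to form one predictive distribution; under smooth ambiguity aversion, the posterior and the per-regime distributions enter the utility recursion separately, which is why this reduction fails.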
