Academic journal article Economic Perspectives

Value at Risk for a Mixture of Normal Distributions: The Use of Quasi-Bayesian Estimation Techniques

Article excerpt

Rapid globalization, innovations in the design of derivative securities, and examples of spectacular losses associated with derivatives over the past decade have made firms recognize the growing importance of risk management. This increased focus on risk management has led to the development of various methods and tools to measure the risks firms face.

One popular risk-measurement tool is value at risk (VaR), which is defined as the maximum loss expected on a portfolio of assets over a certain holding period at a given confidence level (probability). For example, consider a trader who is concerned about the risk, over the next 10 days, associated with holding a specific portfolio of assets. A statement that, at the 95 percent confidence level, the VaR of this portfolio is $100,000 implies that 95 percent of the time, losses over the 10-day holding period should not exceed $100,000 (equivalently, losses should exceed $100,000 only 5 percent of the time).
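To make the definition concrete, the sketch below (my illustration, not from the article; all parameters are hypothetical) computes a 10-day, 95 percent VaR as the negative of the fifth percentile of simulated profit-and-loss outcomes:

```python
import numpy as np

# Illustrative only: simulate 10-day portfolio profit and loss in dollars.
# The scale is a hypothetical choice that roughly reproduces the $100,000
# VaR in the example above; it is not taken from the article.
rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=60_000.0, size=100_000)

# The 95 percent VaR is the loss exceeded only 5 percent of the time,
# that is, the negative of the 5th percentile of the P&L distribution.
var_95 = -np.percentile(pnl, 5)
print(f"10-day 95% VaR: ${var_95:,.0f}")
```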

The use of value at risk techniques in risk management has exploded over the last few years. Financial institutions now routinely use VaR in managing their trading risk, and nonfinancial firms have begun adopting the technique for their own risk-management purposes. In addition, regulators are beginning to design new regulations around VaR; examples include bank capital standards for market risk and reporting requirements for the risks associated with derivatives used by corporations.

Proponents of VaR argue that the ability to summarize a firm's risk exposure in a single number is the technique's most powerful advantage.(1) Despite its simplicity, however, the technique is only as good as the inputs into the VaR model.(2) Many implementations of VaR assume that asset returns are normally distributed. This assumption simplifies the computation of VaR considerably, but it is inconsistent with the empirical evidence on asset returns, which shows that return distributions are fat tailed. That is, extreme events are much more likely to occur in practice than the normality assumption would predict. Take, for example, the stock market crash of October 1987. Under the assumption of normality, such an extreme market movement should occur only once in approximately 5,900 years. As we know, however, there have been worse stock crashes than that of October 1987 even in this century. This suggests that the normality assumption can produce VaR numbers that are inappropriate measures of the true risk faced by the firm.
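Under the normality assumption, VaR reduces to a closed-form quantile calculation, which also makes the fat-tail problem easy to see. The sketch below (my illustration; the return parameters and the Student's t with four degrees of freedom are hypothetical choices, not the article's) computes a parametric normal VaR and contrasts the probability each model assigns to a crash on the scale of October 1987:

```python
from scipy.stats import norm, t

# Hypothetical daily return parameters (not from the article).
mu, sigma = 0.0005, 0.01      # mean and standard deviation of daily returns
value = 1_000_000.0           # portfolio value in dollars

# Parametric normal VaR at the 95 percent level: the loss at the
# 5th percentile of the assumed normal return distribution.
z = norm.ppf(0.05)
var_95 = -(mu + z * sigma) * value
print(f"1-day 95% normal VaR: ${var_95:,.0f}")

# A one-day drop of roughly 20 percent (the scale of October 1987) is
# about a 20-standard-deviation event under these parameters. Compare
# the tail probability under normality with a fat-tailed Student's t:
crash = -0.20
p_normal = norm.cdf(crash, loc=mu, scale=sigma)
p_t = t.cdf((crash - mu) / sigma, df=4)
print(f"P(return <= -20%): normal = {p_normal:.2e}, t(4) = {p_t:.2e}")
```

The normal model treats such a crash as essentially impossible, while the fat-tailed alternative assigns it a small but non-negligible probability; this gap is exactly why normal-based VaR can understate risk.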

While alternative return distributions have been proposed that better reflect the empirical evidence, any replacement for the normality assumption must preserve the computational simplicity that is one of VaR's primary benefits. In this article, I examine one such alternative assumption that simultaneously allows for fat-tailed asset returns and for tractable estimation of VaR. This distribution, based on a mixture of normal densities, has also been proposed by Zangari (1996). First, I relate the mixture of distributions approach to alternatives that have been presented in the academic literature on the stochastic processes governing asset returns. Second, I use an estimation technique for the parameters of the mixture of distributions that is computationally simpler than the techniques suggested by Zangari - the quasi-Bayesian maximum likelihood estimation (QB-MLE) approach (first suggested by Hamilton, 1991).(3) Third, using simulated data, I show that QB-MLE combined with the mixture of normals assumption provides better measures of value at risk for fat-tailed distributions (like the Student's t) than the traditional normality assumption. I then establish that the technique does not suffer from the problems associated with the traditional maximum likelihood approach and that it is effective in recovering the parameters from simulated data. …
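Under the mixture assumption, each period's return is drawn from one normal density (say, a low-volatility "quiet" regime) with probability lambda and from a second, higher-volatility normal density with probability 1 - lambda, so the mixture cumulative distribution function is a probability-weighted sum of two normal CDFs. Because the mixture quantile has no closed form, the VaR must be found numerically. The sketch below is my illustration of that final step only, with hypothetical parameter values; it does not implement the QB-MLE estimator itself:

```python
from scipy.optimize import brentq
from scipy.stats import norm

# Hypothetical mixture parameters (in practice these would be estimated,
# for example by QB-MLE; the values here are purely illustrative).
lam = 0.95                    # probability of the quiet regime
mu1, sig1 = 0.0005, 0.008     # quiet regime mean and std dev
mu2, sig2 = -0.001, 0.030     # turbulent regime mean and std dev

def mixture_cdf(x):
    """CDF of the two-component normal mixture."""
    return lam * norm.cdf(x, mu1, sig1) + (1 - lam) * norm.cdf(x, mu2, sig2)

# The 95 percent VaR return quantile q solves mixture_cdf(q) = 0.05;
# brentq brackets the root since no closed-form inverse exists.
q = brentq(lambda x: mixture_cdf(x) - 0.05, -1.0, 1.0)
value = 1_000_000.0
print(f"1-day 95% mixture VaR: ${-q * value:,.0f}")
```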
