Risk Management and Financial Crises

Article excerpt

It is a universal truth that the existence of risk implies the existence of failure. Not all types of risk are the same, however, and not all failures are created equal.

In fact, failure can occur for several reasons, each of which teaches a different lesson in risk management.1 One involves the breakdown of management controls. Consider rogue traders such as Nick Leeson, whose losses single-handedly brought down the venerable firm of Baring Brothers. In such a case, the firm bears more risk than it intended and suffers the consequences. Perhaps losses by naive and unsophisticated investors should also be counted in this category. There certainly were investors, among them Gibson Greetings and Odessa College (the small Texas school that sank most of its funds in such risky derivatives as "inverse floaters" and "structured principal-only strips"), who testified in court that they had been misled.2

The second category comprises cases in which management knowingly takes a risk and loses: It assumes the intended level of risk but gets a bad draw. Think of the Hunt brothers, who were holding 200,000,000 ounces of silver in 1979, just before the price plummeted. Or the many well-known funds, such as Piper Jaffray, that were caught unawares by the interest rate spike of 1994.3

The third possibility is perhaps a bit more subtle: The firm bears an amount of risk that is privately optimal (that is, management understands and accepts the extent of its exposure), but that amount of risk is not socially optimal. The prime examples here are the Great Depression of the 1930s and the savings and loan crisis of the 1980s. This third possibility is particularly disconcerting because it defies standard notions of risk management. It is not like estimating a firm's expected monthly loss from interest rate movements, a task that, however difficult, at least rests on a fairly clear underlying concept.
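That "fairly clear underlying concept" can be made concrete. As a minimal sketch (every figure here is hypothetical and chosen only for illustration, not drawn from the article), one can simulate monthly interest rate changes and map them into portfolio losses with the standard first-order duration approximation, then read off an expected loss and a value-at-risk number:

```python
import random
import statistics

# Hypothetical inputs (illustrative only):
PORTFOLIO_VALUE = 100_000_000   # a $100 million bond portfolio
DURATION = 5.0                  # modified duration, in years
RATE_VOL_MONTHLY = 0.005        # std. dev. of monthly rate changes (50 basis points)

def simulate_monthly_pnl(n_draws=100_000, seed=42):
    """Draw random monthly rate changes and convert each to profit/loss
    using the first-order duration approximation: dV ~= -D * V * dr."""
    rng = random.Random(seed)
    return [-DURATION * PORTFOLIO_VALUE * rng.gauss(0.0, RATE_VOL_MONTHLY)
            for _ in range(n_draws)]

pnl = simulate_monthly_pnl()
losses = [-x for x in pnl if x < 0]          # losses expressed as positive numbers
expected_loss = statistics.mean(losses)      # expected monthly loss, given a loss occurs
var_95 = sorted(pnl)[int(0.05 * len(pnl))]  # 5th-percentile P&L (95% monthly VaR)

print(f"expected monthly loss (given a loss): ${expected_loss:,.0f}")
print(f"95% monthly VaR:                      ${-var_95:,.0f}")
```

The point of the sketch is not the particular numbers but the contrast the text draws: this calculation, however rough, rests on a well-defined quantity. No comparably crisp calculation exists for the gap between privately and socially optimal risk.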

The fact is that distinctions between the private and social aspects of risk management remain murky. The most basic questions (why firms hedge, or whether they even should) are unresolved. And uncertainty about the private versus the social benefits of risk reduction complicates the job of sorting out who is managing risk correctly. Admittedly, researchers have spun stories about smoothing taxes or avoiding bankruptcy costs, about differences in the costs of internal and external funds, and about information disparities between managers and owners.4 These stories differ as to how much hedging takes place and whether it is the socially correct amount.

For example, financial distress can carry high costs: a long, painful bankruptcy may entail extensive legal fees, destroy the manager's reputation, and generally eat up the firm's value. Prudent managers, wishing to avoid these costs, will be careful about the amount of risk their firm undertakes. To the extent that the bankrupt firm loses its value to society, this caution is socially prudent as well. But the picture changes (particularly from the social standpoint) if hedging is used merely to minimize corporate taxes. All this makes it hard to decide whether a firm is bearing the socially correct amount of risk, and harder still to assess quantitatively how much socially inappropriate risk it bears. Unfortunately, getting this wrong can be very expensive, not only for the firm involved but for the entire economy: witness the Great Depression and the S&L crisis.

Sad Examples

I cite these examples because they are two cases in which there is some consensus about why the social cost of the risk exceeded its private cost. Admittedly, many details remain controversial, and economic historians still debate the exact causes of the Depression and which S&L crook looted the most.

In the Great Depression, U.S. banks were made vulnerable by branching restrictions that forbade them to diversify geographically. A bank in Kansas, say, couldn't lend money to New York foundries or to farms in Florida. …