PART 1: The Catastrophe of Dumbing Down Catastrophe Models
Khadilkar, Jayant, Risk Management
The use of catastrophe models has, over the last 20 years, become the norm for disaster risk management. Although some cat models existed earlier, it was in 1992 after Hurricane Andrew that the landscape of property catastrophe risk management changed and the importance of having a probabilistically simulated, event-based approach to evaluating risk was better understood.
Since Andrew, cat models have evolved to encompass the latest scientific research for an increasing variety of perils. Nevertheless, many catastrophic events over the past decade (the multiple Florida hurricanes of 2004, Hurricane Katrina in 2005, Hurricane Ike in 2008, the swarm of tornadoes in 2011) have called attention to the fallibility of cat models and the insurance industry's overconfidence in their results. Cat models are unquestionably a valuable tool, but they are not the complete answer.
Cat models are built using limited data sources and scientific approximations and, as such, remain incomplete and carry an abundance of built-in uncertainty. For example, Atlantic hurricane statistics are available back to 1851, but complete data did not become available until the advent of weather satellites in the 1950s. Given that U.S. hurricane models are built on this very limited historical data, how much confidence can a user have in a cat model's accuracy in determining what might be a 1-in-100-year event?
As another example, there are a number of scientifically valid methods to describe how hurricanes weaken after making landfall. Each of these methods is an approximation of reality that can produce significantly different results. Cat model developers choose one of these methods for their model and make many more choices and approximations before they arrive at a finished product. Therefore, the results produced by a single cat model have a lot of uncertainty around them. That is why different cat models produce different answers; the developers used different (though equally valid) methods and approximations for their models.
Despite these shortcomings, cat models have represented a quantum leap for the insurance industry and how it considers catastrophic risk. Models provide a common framework for tying together hazard, vulnerability and exposure data. They allow information to be shared in a consistent and recognized format. Risk takers can evaluate potential catastrophe scenarios and roughly estimate the probabilities of different sizes of loss. Where cat models excel is not in the absolute measurement of risk, but in the relative evaluation of risk.
A Single Point of Failure
Virtually all cat models generate a loss distribution curve as part of their risk evaluation. A single point on the curve (though not always the same point), known as probable maximum loss (PML), has become the most commonly used measure of risk.
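To make the idea concrete, here is a minimal sketch of how a "1-in-100-year" PML is read off a simulated loss distribution. The loss figures are invented for illustration only (a toy lognormal draw standing in for a real model's event set), not the output of any actual cat model:

```python
import random

random.seed(42)

# Simulate 10,000 model years of annual loss (toy lognormal numbers,
# standing in for a cat model's simulated loss distribution).
annual_losses = sorted(random.lognormvariate(2.0, 1.0) for _ in range(10_000))

def pml(losses_sorted, return_period_years):
    """Loss exceeded with probability 1/return_period in any one year,
    estimated as an empirical quantile of the sorted simulated losses."""
    p = 1.0 - 1.0 / return_period_years        # e.g. 0.99 for 1-in-100
    idx = min(int(p * len(losses_sorted)), len(losses_sorted) - 1)
    return losses_sorted[idx]

print(f"1-in-100-year PML: {pml(annual_losses, 100):.1f}")
print(f"1-in-250-year PML: {pml(annual_losses, 250):.1f}")
```

The point of the sketch is that the PML is just one quantile of a full distribution; everything else the model produced is discarded when only that number is reported.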
Many practitioners are justifiably uncomfortable with reducing the entire output of a cat model down to a single point. The credibility of loss estimates is significantly reduced as focus is narrowed down to either a single location or a single point on a loss distribution.
Yet the use of PML remains popular for a variety of reasons. It is a simple way to express the results of a complex model. The models themselves make it easy to produce single-point estimates of PML. And regulators and rating agencies use it to assess the financial strength of a risk taker.
If you ask people within the same organization for their understanding of PML, more likely than not you will get multiple interpretations. The definition of PML varies from business unit to business unit and from company to company. One might consider the 1-in-100-year occurrence loss to be the PML; another might look at the 1-in-250-year aggregate loss.
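The two definitions named above can diverge substantially even on the same simulated event set. The toy sketch below (invented loss numbers, not real model output) takes each simulated year's largest single event as the "occurrence" loss and the sum of all events that year as the "aggregate" loss, then reads a different PML from each curve:

```python
import random

random.seed(1)

years = 10_000
occurrence, aggregate = [], []
for _ in range(years):
    n_events = random.randint(0, 4)                  # events per simulated year
    losses = [random.expovariate(1 / 50.0) for _ in range(n_events)]
    occurrence.append(max(losses, default=0.0))      # largest single event
    aggregate.append(sum(losses))                    # all events combined

occurrence.sort()
aggregate.sort()

def return_period_loss(sorted_losses, rp):
    """Empirical loss at a given return period (e.g. rp=100 for 1-in-100)."""
    idx = min(int((1 - 1 / rp) * len(sorted_losses)), len(sorted_losses) - 1)
    return sorted_losses[idx]

# The same simulated portfolio yields two different "PMLs":
print("1-in-100 occurrence loss:", round(return_period_loss(occurrence, 100), 1))
print("1-in-250 aggregate loss:", round(return_period_loss(aggregate, 250), 1))
```

Two business units quoting "the PML" from this same event set would report different numbers, which is exactly the ambiguity the text describes.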
Yet the broader questions remain unanswered by the PML. How can decisions be made based on a single point on the distribution? …