The stakes for risk management decisions are higher than ever, so those decisions need to be supported by a greater level of detail. Determining the appropriate level of detail is a matter of understanding and grappling with the probabilistic uncertainties that inevitably challenge the risk management profession. We need to be able to critically assess the applicability and credibility of every source of risk management information, from the numbers that form the base of all risk analysis to the intuitive judgment of risk decision makers.
A Lack of Data
To make sound judgments based on statistics, we need data. The more data the better. For most organizations, however, data on individual exposures is scarce. This is especially true for severe, yet infrequent events, which involve the most difficult and costly decisions.
The results of statistical study on losses for any particular exposure boil down to a chart like that shown in Figure 1. A probability number, here expressed as a percentage, is associated with various loss potentials. Likelihood is typically shown as a probability of exceeding a given annual aggregate loss.
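An exceedance curve of this kind can be read directly off a loss history before any model is fitted. The following minimal Python sketch uses made-up annual aggregate losses, not the figures behind the article's Figure 1, to show the calculation:

```python
# Hypothetical 15-year history of annual aggregate losses ($ millions).
# These numbers are illustrative only.
losses = [0.4, 0.6, 0.8, 0.9, 1.1, 1.2, 1.7, 2.1, 2.4, 2.8,
          3.0, 3.5, 4.2, 5.9, 7.3]

def exceedance_probability(history, threshold):
    """Fraction of observed years whose aggregate loss exceeded `threshold`."""
    return sum(1 for x in history if x > threshold) / len(history)

for t in (1.0, 3.0, 6.0):
    p = exceedance_probability(losses, t)
    print(f"P(annual loss > ${t}M) ~= {p:.2f}")
# -> roughly 0.73, 0.27, 0.07 for these illustrative figures
```

With only 15 observations, each empirical probability moves in steps of 1/15, which is exactly why small samples give such coarse answers for rare losses.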
The numbers are derived in various ways. The most common method fits theoretical equations to observed loss data. The fewer data points there are, the more candidate equations can plausibly fit them. Yet only one true equation exists. Which one is correct? The usual choice is the equation that minimizes the difference between its theoretical predictions and the actual data. If the actual data is subject to random fluctuations, however, the chance that the selected equation matches the true underlying loss distribution is slim.
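The fitting step can be sketched concretely. Below, two common candidate distributions (exponential and lognormal, chosen here for illustration; the article does not name specific models) are each fitted by maximum likelihood to the same small, hypothetical loss sample, and their fit quality is compared:

```python
import math

# Hypothetical 15-year loss sample ($ millions); illustrative only.
losses = [0.4, 0.6, 0.8, 0.9, 1.1, 1.2, 1.7, 2.1, 2.4, 2.8,
          3.0, 3.5, 4.2, 5.9, 7.3]
n = len(losses)

# Candidate 1: exponential. MLE rate is 1 / sample mean.
rate = n / sum(losses)
ll_expon = sum(math.log(rate) - rate * x for x in losses)

# Candidate 2: lognormal. MLE is the mean and std dev on the log scale.
logs = [math.log(x) for x in losses]
mu = sum(logs) / n
sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / n)
ll_lognorm = sum(
    -math.log(x * sigma * math.sqrt(2 * math.pi))
    - (math.log(x) - mu) ** 2 / (2 * sigma ** 2)
    for x in losses
)

print(f"exponential log-likelihood: {ll_expon:.2f}")
print(f"lognormal   log-likelihood: {ll_lognorm:.2f}")
# With so few points, the two fits score closely: the data alone
# barely distinguishes one candidate equation from the other.
```

The near-tie in likelihood is the article's point in miniature: several quite different equations can describe a sparse sample about equally well.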
We can take the analysis one step further and calculate the likelihood that the probability we come up with is itself correct: our statistical confidence intervals. Unfortunately, this can lead to an infinite regression of probabilities on probabilities, and it rests on certain theoretical assumptions. In this fast-moving and complex world, we can never be sure those assumptions hold.
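One standard way to put such an interval around a probability estimate is the bootstrap, sketched here with hypothetical loss figures: resample the observed history with replacement and see how much the estimate swings. Note that the bootstrap itself assumes the sample fairly represents the underlying process, precisely the kind of theoretical condition just discussed.

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

# Hypothetical 15-year loss history ($ millions); illustrative only.
losses = [0.4, 0.6, 0.8, 0.9, 1.1, 1.2, 1.7, 2.1, 2.4, 2.8,
          3.0, 3.5, 4.2, 5.9, 7.3]

def tail_prob(sample, threshold):
    """Empirical probability that an annual loss exceeds `threshold`."""
    return sum(1 for x in sample if x > threshold) / len(sample)

# Bootstrap: resample the history 2,000 times and recompute the
# estimated probability of exceeding $5M in a year.
estimates = sorted(
    tail_prob(random.choices(losses, k=len(losses)), 5.0)
    for _ in range(2000)
)
lo, hi = estimates[50], estimates[1949]  # rough 95% interval
print(f"point estimate:         {tail_prob(losses, 5.0):.2f}")
print(f"95% bootstrap interval: [{lo:.2f}, {hi:.2f}]")
```

For a sample this small the interval spans several multiples of the point estimate, which is the quantitative face of the uncertainty described above.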
Loss prediction precision fades at exactly the point where we need to make significant decisions. How reliable is a decision based on a number developed in a system so dependent on possibly flawed theoretical assumptions? To use formal risk assessments effectively, we must understand the numbers, the uncertainties surrounding them, and what we can do about those uncertainties.
Actuarial Limitations: The Ten Percent Rule
While we might specify probabilities accurately for frequent events, accuracy falls off as the frequency of loss decreases. Likewise, as the potential losses increase, we become less sure of the probability of exceeding these amounts. Estimates involve picking one of the many theoretical models that predict future loss, but that choice, beyond a certain point, becomes arbitrary (see Figure 2).
The probability number associated with a particular loss amount could be the result of a range of modeling options. The problem is compounded when the fitted models are used to extrapolate beyond the observed data. Such data deficiencies mean that the results of statistical risk assessments can be highly inaccurate for losses whose predicted probability falls below ten percent.
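The arbitrariness of extrapolation can be made concrete. In the sketch below (hypothetical loss figures, illustrative model choices), an exponential and a lognormal model are each fitted to the same small sample; they roughly agree within the observed range of losses, yet their implied exceedance probabilities drift far apart as the threshold moves beyond anything in the data:

```python
import math

# Hypothetical 15-year history of annual aggregate losses ($ millions).
losses = [0.4, 0.6, 0.8, 0.9, 1.1, 1.2, 1.7, 2.1, 2.4, 2.8,
          3.0, 3.5, 4.2, 5.9, 7.3]
n = len(losses)

# Exponential model: MLE rate = 1 / sample mean; P(X > t) = exp(-rate * t).
rate = n / sum(losses)
def exceed_expon(t):
    return math.exp(-rate * t)

# Lognormal model: MLE on the log scale; P(X > t) via the normal CDF.
logs = [math.log(x) for x in losses]
mu = sum(logs) / n
sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / n)
def exceed_lognorm(t):
    z = (math.log(t) - mu) / sigma
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

for t in (5, 10, 20, 40):
    a, b = exceed_expon(t), exceed_lognorm(t)
    print(f"P(loss > ${t}M): exponential {a:.5f}  lognormal {b:.5f}")
# Inside the observed range the two models roughly agree; far beyond it
# they disagree by a large factor, and the data cannot arbitrate.
```

Both models were selected by the same reasonable fitting procedure, yet a decision maker pricing coverage for a far-tail loss would get materially different answers depending on the pick.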
In the study shown in Figure 1, aggregate losses for our hypothetical firm are predicted to exceed $6.6 million 10 percent of the time (or once every ten years), so losses will stay under $6.6 million 90 percent of the time. Results at more remote thresholds, 5 percent or 1 percent, are highly questionable, regardless of the skill of the statistician. This means that if we are concerned about losses exceeding $8 million, or we want a comfort level in excess of 90 percent with our risk financing decisions, extra caution is warranted.
This problem manifests itself when we are dealing with more sophisticated risk management and risk financing issues, beyond buying insurance or selecting a maintenance deductible. …