Fig. 8.1 The core global risks: likelihood with severity by economic loss - World Economic Forum.

If only all risks were like this, but they are not. There may be no decent claims history, conditions in the future may not resemble the past, there may be the possibility of a few rare but extremely large losses, it may not be possible to reduce volatility by writing a lot of the same type of risk, and it may not be possible to diversify the risk portfolio. One or more of these circumstances can apply. For example, lines of business where we have little claims experience and doubt over the future include 'political risks' (protecting a financial asset against a political action such as confiscation). The examples most relevant to this chapter are 'catastrophe' risks, which typically combine low claims experience, large losses, and limited ability to reduce volatility. To understand how underwriters deal with these, we need to revisit what the pricing of risk, and indeed risk itself, are all about.

8.5 Pricing the risk

The primary challenge for underwriters is to set the premium to charge the customer - the price of the risk. In constructing this premium, an underwriter will usually consider the following elements (a simple numerical sketch follows the list):

1. Loss costs, being the expected cost of claims to the policy.

2. Acquisition costs, such as brokerage and profit commissions.

3. Expenses, being what it costs to run the underwriting operation.

4. Capital costs, being the cost of supplying the capital required by regulators to cover possible losses according to their criterion (e.g., the United Kingdom's FSA currently requires capital to meet a 1-in-200-year or rarer annual chance of loss).

5. Uncertainty cost, being an additional subjective charge in respect of the uncertainty of this line of business. In some lines of business, such as political risk, this can be the dominant factor.

6. Profit, being the profit margin required of the business. This can sometimes be set net of expected investment income from the cash flow of receiving premiums before having to pay out claims, which for 'liability' contracts can be many years.
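As a rough sketch of how these elements might be combined (the `premium` function and all figures below are hypothetical, for illustration only), note that elements quoted as a share of the gross premium have to be grossed up rather than simply added:

```python
def premium(loss_cost, acquisition_rate, expense_rate, capital, cost_of_capital,
            uncertainty_load, profit_margin, expected_investment_income=0.0):
    """Illustrative build-up of a gross premium from the six elements above."""
    # Elements fixed in absolute terms: expected claims, cost of holding
    # regulatory capital, and the subjective uncertainty charge.
    fixed = loss_cost + capital * cost_of_capital + uncertainty_load
    # Elements quoted as a share of the gross premium (acquisition, expenses,
    # profit) are recovered by grossing up: premium = fixed / (1 - share).
    share = acquisition_rate + expense_rate + profit_margin
    return (fixed - expected_investment_income) / (1.0 - share)

# Example: $600k expected losses, 15% acquisition, 10% expenses, $2m of
# regulatory capital at an 8% cost, a $100k uncertainty charge, 5% profit.
print(round(premium(600_000, 0.15, 0.10, 2_000_000, 0.08, 100_000, 0.05)))
```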

Usually the biggest element of price is the loss cost or 'pure technical rate'. We saw above how this was set for household building cover. The traditional method is to model the history of claims, suitably adjusted to current prices, with combinations of frequency and severity probability distributions, and then trend these forward in time - essentially a model of the past playing forward. The claims can be those either on the particular contract or on a large set of contracts with similar characteristics.
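A minimal frequency/severity sketch of the pure technical rate, assuming for illustration a Poisson claim count and a lognormal severity as if fitted to trended historical claims (the parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented parameters, as if fitted to inflation-adjusted historical claims.
claim_frequency = 0.05                  # expected claims per policy-year (Poisson)
severity_mu, severity_sigma = 8.5, 1.2  # lognormal severity, log-scale parameters

n_years = 100_000
annual_cost = np.zeros(n_years)
for i in range(n_years):
    n_claims = rng.poisson(claim_frequency)
    if n_claims:
        annual_cost[i] = rng.lognormal(severity_mu, severity_sigma, n_claims).sum()

# The pure technical rate is the expected annual claims cost per policy.
print("pure technical rate ~", round(annual_cost.mean(), 2))
```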

Setting prices from rates is - like much of the basic mathematics of insurance - essentially a linear model, even though non-linearity becomes pronounced when large losses happen. As an illustration of non-linearity, insurers made allowance for some inflation of rebuild/repair costs when underwriting US windstorm, yet the actual 'loss amplification' in Hurricane Katrina was far greater than had been anticipated. Another popular use of linearity has been linear regression in modelling risk correlations. In extreme cases, though, this assumption, too, can fail. A recent and expensive example was the bursting of the dotcom bubble in April 2000. The huge loss of stock value to millions of Americans triggered allegations of impropriety and legal actions against investment banks, class actions against the directors and officers of many high-technology companies whose stock price had collapsed, the collapse of Enron, WorldCom, and Global Crossing, and the demise of the accountants Arthur Andersen, discredited when the Enron story came to light. Each of these events has led to massive claims against the insurance industry. Instead of linear correlations, we now need to deploy the mathematics of copulas.3 This phenomenon is also familiar from physical damage, where a damaged asset can in turn enhance the damage to another asset, either directly, such as when debris from a collapsed building creates havoc among its neighbours ('collateral damage'), or indirectly, such as when loss of power hampers communications and recovery efforts ('dependency damage').
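To make the copula point concrete, here is a minimal sketch of simulating dependent losses on two lines of business with a Gaussian copula and lognormal marginals; the choice of copula, marginals, and parameters is purely illustrative:

```python
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(0)
rho = 0.7  # dependence parameter of the Gaussian copula (illustrative)

# 1. Draw correlated standard normals.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=100_000)

# 2. Transform to uniforms: this pair of uniforms is the copula sample.
u = norm.cdf(z)

# 3. Apply arbitrary marginals (here lognormal) through the inverse CDFs.
loss_a = lognorm.ppf(u[:, 0], s=1.0, scale=np.exp(10))
loss_b = lognorm.ppf(u[:, 1], s=1.5, scale=np.exp(9))

# The joint tail behaviour now comes from the copula, not from a linear
# regression of one loss on the other.
joint_tail = np.mean((loss_a > np.quantile(loss_a, 0.99)) &
                     (loss_b > np.quantile(loss_b, 0.99)))
print("P(both lines in their worst 1%) ~", joint_tail)  # ~0.0001 if independent
```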

As well as pricing risk, underwriters have to guard against accumulations of risk. In catastrophe risks the simplest measure of accumulation is called the 'aggregate' - the cost of total ruin when everything is destroyed. Aggregates represent the worst possible outcome and are an upper limit on an underwriter's exposure, but they have unusual arithmetic properties. (As an example, the aggregate exposure of California is typically lower than the sum of the aggregate exposures of each of the Cresta zones into which California is divided for earthquake assessment. The reason is that many insurance policies cover property in more than one zone but have a limit of loss across the zones. Conversely, for fine-grained geographical partitions such as postcodes, the aggregate computed across two postcodes together can be higher than the sum of the aggregates of each postcode separately. The reason is that risks typically have a per-location [per-policy, when dealing with reinsurance] deductible!)
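A toy numerical sketch of both effects, using made-up policies rather than real Cresta or postcode data:

```python
# Case 1: two policies cover property in Cresta-like zones A and B, each with
# a combined limit of loss across the zones (all figures invented).
policies = [
    {"A": 30, "B": 20, "limit": 40},
    {"A": 10, "B": 25, "limit": 30},
]
agg_a = sum(min(p["A"], p["limit"]) for p in policies)                    # 40
agg_b = sum(min(p["B"], p["limit"]) for p in policies)                    # 45
agg_both_zones = sum(min(p["A"] + p["B"], p["limit"]) for p in policies)  # 70
print(agg_a + agg_b, ">", agg_both_zones)  # 85 > 70: zone sums overstate the total

# Case 2: one policy covers two postcodes with a single (per-policy) deductible.
si_1, si_2, deductible = 50, 60, 10
agg_post_1 = max(si_1 - deductible, 0)             # deductible netted off here ...
agg_post_2 = max(si_2 - deductible, 0)             # ... and again here
agg_combined = max(si_1 + si_2 - deductible, 0)    # but only once when combined
print(agg_combined, ">", agg_post_1 + agg_post_2)  # 100 > 90
```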

8.6 Catastrophe loss models

For infrequent and large catastrophe perils such as earthquakes and severe windstorms, the claims history is sparse and, whilst useful for checking the results of models, offers insufficient data to support reliable claims analysis.

3. A copula is a function whose existence and uniqueness are guaranteed by Sklar's theorem, which states that a multivariate probability distribution can be represented uniquely by a function of the marginal probability functions.

Instead, underwriters have adopted computer-based catastrophe loss models, typically from proprietary expert suppliers such as RMS (Risk Management Solutions), AIR (Applied Insurance Research), and EQECAT.

The way these loss models work is well-described in several books and papers, such as the recent UK actuarial report on loss models (GIRO, 2006). From there we present Fig. 8.2, which shows the steps involved.

Quoting directly from that report:

Catastrophe models have a number of basic modules:

• Event module

A database of stochastic events (the event set), with each event defined by its physical parameters, location, and annual probability/frequency of occurrence.

• Hazard module

This module determines the hazard of each event at each location. The hazard is the consequence of the event that causes damage - for a hurricane it is the wind at ground level, for an earthquake, the ground shaking.

• Inventory (or exposure) module

A detailed exposure database of the insured systems and structures. As well as location this will include further details such as age, occupancy, and construction.

• Vulnerability module

Vulnerability can be defined as the degree of loss to a particular system or structure resulting from exposure to a given hazard (often expressed as a percentage of sum insured).

• Financial analysis module

This module uses a database of policy conditions (limits, excess, sub limits, coverage terms) to translate this loss into an insured loss.

Of these modules, two, the inventory and financial analysis modules, rely primarily on data input by the user of the models. The other three modules represent the engine of the catastrophe model, with the event and hazard modules being based on seismological and meteorological assessment and the vulnerability module on engineering assessment. (GIRO, 2006, p. 6)

The model simulates a catastrophic event such as a hurricane by giving it a geographical extent and peril characteristics so that it 'damages' - as would a real hurricane - the buildings according to 'damageability' profiles for occupancy, construction, and location. This causes losses, which are then applied to the insurance policies in order to calculate the accumulated loss to the insurer. The aim is to produce an estimate of the probability of loss in a year, the occurrence exceedance probability (OEP), which gives the chance of exceeding a given level of loss in any one year, as shown in Fig. 8.3. When the probability refers to the total of all losses in a given year, the graph is called an aggregate exceedance probability (AEP) curve.
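As a minimal illustration of how OEP and AEP estimates fall out of such a simulation (the event set, rates, and losses below are invented, and no vendor's actual method is reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stochastic event set: annual occurrence rate and insured loss per event.
event_rates = np.array([0.02, 0.05, 0.10, 0.30])   # expected occurrences per year
event_loss = np.array([50e6, 20e6, 5e6, 1e6])      # loss to the portfolio per event

n_years = 100_000
max_loss = np.zeros(n_years)   # largest single event loss in each year (drives OEP)
sum_loss = np.zeros(n_years)   # total loss in each year (drives AEP)
for y in range(n_years):
    counts = rng.poisson(event_rates)          # how often each event occurs this year
    losses = np.repeat(event_loss, counts)     # the individual event losses
    if losses.size:
        max_loss[y] = losses.max()
        sum_loss[y] = losses.sum()

threshold = 20e6
print("OEP(20m) ~", np.mean(max_loss >= threshold))  # chance the worst event >= $20m
print("AEP(20m) ~", np.mean(sum_loss >= threshold))  # chance the annual total >= $20m
```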

You will have worked out that just calculating the losses on a set of events does not yield a smooth curve. You might also have asked yourself how the 'annual' bit gets in. You might even have wondered how the damage is chosen because surely in real life there is a range of damage even for otherwise similar buildings. Well, it turns out that different loss modelling companies have different ways of choosing the damage percentages and of combining these events, which determine the way the exceedance probability distributions are calculated.4 Whatever their particular solutions, though, we end up with a two-dimensional estimate of risk through the 'exceedance probability (EP) curve'.

Applied Insurance Research (AIR), for example, stochastically samples the damage for each event on each property, and the simulation runs over a series of years. Risk Management Solutions (RMS), on the other hand, takes each event as independent, described by a Poisson arrival rate, and treats the range of damage as a parameterized beta distribution ('secondary uncertainty') in order to come up with the OEP curve, with some fancier mathematics for the AEP curve. The AIR method is the more general and conceptually the simpler, as it allows for non-independence of events in the construction of the events hitting a given year, and has built-in damage variability (the so-called secondary uncertainty).
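For comparison, a simplified sketch of the Poisson-rate construction of the OEP described above, ignoring secondary uncertainty (the damage range) entirely; the event set is the same invented one as before:

```python
import numpy as np

# Same invented event set: Poisson arrival rates and a fixed loss per event.
event_rates = np.array([0.02, 0.05, 0.10, 0.30])
event_loss = np.array([50e6, 20e6, 5e6, 1e6])

def oep(threshold):
    # With independent Poisson events and no damage variability, the number of
    # events per year with loss >= threshold is Poisson with rate equal to the
    # sum of the qualifying event rates, so P(at least one) = 1 - exp(-rate).
    rate = event_rates[event_loss >= threshold].sum()
    return 1.0 - np.exp(-rate)

print(round(oep(20e6), 4))   # ~0.0676, close to the simulated value above
```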


Fig. 8.3 Exceedance probability (EP) loss curve, with a companion aggregate exceedance probability curve. The shaded area shows 1% of cases, meaning there is a 1% chance in any one year of a $20m loss or more; put another way, this is a return period of 1 in 100 years for a loss of $20m or more.

Fig. 8.2 Generic components of a loss model.

8.7 What is risk?

[R]isk is either a condition of, or a measure of, exposure to misfortune - more concretely, exposure to unpredictable losses. However, as a measure, risk is not one-dimensional - it has three distinct aspects or 'facets' related to the anticipated values of unpredictable losses. The three facets are Expected Loss, Variability of Loss Values, and Uncertainty about the Accuracy of Mental Models intended to predict losses.

Ted Yellman, 2000

Although none of us can be sure whether tomorrow will be like the past or whether a particular insurable interest will respond to a peril as a representative of its type, the assumption of such external consistencies underlies the construction of rating models used by insurers. The parameters of these models can be influenced by past claims, by additional information on safety factors and construction, and by views on future medical costs and court judgments. Put together, these are big assumptions to make, and with catastrophe risks the level of uncertainty about the chance and size of loss is of primary importance, to the extent that such risks can be deemed uninsurable. Is there a way to represent this further level of uncertainty? How does it relate to the 'EP curves' we have just seen?

Kaplan and Garrick (1981) defined quantitative risk in terms of three elements -probability for likelihood, evaluation measure for consequence, and 'level 2 risk' for the uncertainty in the curves representing the first two elements. Yellman (see quote in this section) has taken this further by elaborating the 'level 2 risk' as uncertainty of the likelihood and adversity relationships.

We might represent these ideas by the EP curve in Fig. 8.4. When dealing with insurance, 'Likelihood' is taken as probability density and 'Adversity' as loss. Jumping further up the abstraction scale, these ideas can be extended to qualitative assessments, where instead of defined numerical measures of probability and loss we look at categoric (low, medium, high) measures of Likelihood and Adversity. The loss curves now look like those shown in Figs. 8.5 and 8.6. Putting these ideas together, we can represent these elements of risk in terms of fuzzy exceedance probability curves as shown in Fig. 8.7.

Another related distinction is made in many texts on risk (e.g., see Woo, 1999) between intrinsic or 'aleatory' (from the Latin for dice) uncertainty and avoidable or 'epistemic' (implying that it follows from our lack of knowledge) uncertainty. The classification of risk we are following looks at the way models predict outcomes in the form of a relationship between chance and loss. We can have many different parameterizations of a model and, indeed, many different models. These two types of risk are known in insurance as 'parameter risk' and 'model risk', respectively.

Fig. 8.4 Qualitative loss curve.

Fig. 8.5 Qualitative risk assessment chart - Treasury Board of Canada. The chart plots Likelihood (low, medium, high) against Impact (minor, moderate, significant) for a distribution of risks: economic and financial (interest rates, securities, cost of insurance), environmental (climate change, pollution, ozone depletion), legal (liabilities, human rights, international agreements), technological (nuclear power, biotechnology, genetic engineering), and safety and security (invasion, terrorism, organized crime).

These distinctions chime very much with the way underwriters in practice perceive risk and set premiums. There is a saying in catastrophe reinsurance that 'nothing is less than 1 on line', meaning the vagaries of life are such that you should never price high-level risk at less than the chance of a total loss once in a hundred years (1%). So, whatever the computer models might tell the underwriter, the underwriter will typically allow for the 'uncertainty' dimension of risk. In commercial property insurance this add-on factor has taken on a pseudo-scientific flavour, which well illustrates how intuition may find expression with whatever tools are available.

8.8 Price and probability

Armed with a recognition of the three dimensions of risk - chance, loss, and uncertainty - the question arises as to whether the price of an insurance contract, or indeed of some other financial instrument related to the future, is indicative of the probability of a particular outcome. In insurance it is common to 'layer' risks as 'excess of loss' to demarcate the various parts of the EP curve. When this is done, we can indeed generally say that the element of price due to loss costs (see above) represents the mean of the losses to that layer and that, for a given shape of curve, tells us the probability under the curve. The problem is whether that separation into a 'pure technical' price can be made, and generally it cannot be as we move into the extreme tail, because the third dimension - uncertainty - dominates the price. For some financial instruments, such as weather futures, this probability prediction is much easier to make, as the price is directly related to the chance of exceedance of some measure (such as degree days). For commodity prices, though, the relationship is generally too opaque to draw any such direct link between price and probability of event.
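As a sketch of the layering point, the loss cost of an excess-of-loss layer is the mean loss to the layer, which equals the area under the EP curve between the attachment and exhaustion points; the simulated losses and layer terms below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical simulated annual losses (e.g., output of a catastrophe model run).
annual_loss = rng.lognormal(mean=15.0, sigma=1.5, size=100_000)

attachment, limit = 20e6, 30e6   # a "$30m excess of $20m" layer (illustrative)

# Expected loss to the layer, taken directly from the simulated losses ...
direct = np.clip(annual_loss - attachment, 0.0, limit).mean()

# ... and, equivalently, the area under the exceedance-probability curve
# between the attachment and exhaustion points (simple Riemann sum).
xs = np.linspace(attachment, attachment + limit, 501)
ep = np.array([np.mean(annual_loss > x) for x in xs])
from_curve = ep.sum() * (xs[1] - xs[0])

print(round(direct), round(from_curve))   # the two estimates agree closely
```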

8.9 The age of uncertainty

We have seen that catastrophe insurance is expressed in a firmly probabilistic way through the EP curve, yet we have also seen that this misses many of the most important aspects of uncertainty.

Choice of model and choice of parameters can make a big difference to the probabilistic predictions of loss we use in insurance. In a game of chance, the only risk is process risk, so that the uncertainty resides solely with the probability distribution describing the process. It is often thought that insurance is like this, but it is not: it deals with the vagaries of the real world. We attempt to approach an understanding of that real world with models, and so for insurance there is additional uncertainty from incorrect or incorrectly configured models.

In practice, though, incorporating uncertainty will not be that easy to achieve. Modellers may not wish to move from the certainties of a single EP curve to the demands of sensitivity testing and the subjectivities of qualitative risk assessment. Underwriters in turn may find adding further levels of explicit uncertainty uncomfortable. Regulators, too, may not wish to have the apparent scientific purity of the loss curve cast into more doubt, giving insurers more, not less, latitude! On the plus side, though, this approach will align the tradition of expert underwriting, which allows for many risk factors and uncertainties, with the rigour of analytical models such as modern catastrophe loss models.


Fig. 8.6 Qualitative risk assessment chart - World Economic Forum 2006. The chart plots the top short-term risks with the highest severity ranking against likelihood; examples include an earthquake hitting Tokyo, simultaneous conventional terror attacks worldwide, spread of the pathogenic avian virus H5N1, rapid growth of HIV and TB infections in sub-Saharan Africa, new HIV and TB infections of 5 million people, a 20% fall of the US$, an earthquake causing a high death toll in Japan, and terrorist attacks continuing at 2004-05 frequency and intensity.

Fig. 8.7 Illustrative qualitative loss curves. The panels show Example 1, a property fire risk, and Example 2, an insurance portfolio (probability density against loss), alongside Example 3, catching a cold this year, and Example 4, being run over this year (likelihood against adversity).

One way to deal with 'parameter' risk is to find the dependence of the model on its source assumptions about damage, cost, and the chance of events. Sensitivity testing and subjective parameterizations would allow for a diffuse but more realistic EP curve.
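A hedged sketch of what such sensitivity testing might look like, perturbing assumed event rates (here by an arbitrary +/-50%) and reporting a band of OEP values rather than a single point:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented event set: central estimates of annual rates and losses per event.
base_rates = np.array([0.02, 0.05, 0.10, 0.30])
event_loss = np.array([50e6, 20e6, 5e6, 1e6])

def oep(rates, threshold):
    # Independent Poisson events, no secondary uncertainty (as in the earlier sketch).
    return 1.0 - np.exp(-rates[event_loss >= threshold].sum())

# Express parameter uncertainty by perturbing the assumed rates (here +/-50%).
samples = [oep(base_rates * rng.uniform(0.5, 1.5, size=base_rates.size), 20e6)
           for _ in range(10_000)]
print(f"central OEP(20m): {oep(base_rates, 20e6):.4f}")
print(f"5th-95th percentile band: {np.percentile(samples, 5):.4f} "
      f"to {np.percentile(samples, 95):.4f}")
```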

This leaves 'model' risk - what can we do about this? The common solution is to try multiple models and compare the results to get a feel for the spread caused by assumptions. The other way is to make an adjustment to reflect our opinion of the adequacy or coverage of the model, but this is today largely a subjective assessment.

There is a way we might treat parameter and model risk: construct adjusted EP curves to represent them. Suppose that we could run several different models and obtain several different EP curves. Suppose, moreover, that we could rank these different models with different weightings. That would allow us to create a revised EP curve, which is the 'convolution' of the various models.
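For example (with invented curves and weights), a weighted blend of EP curves from several models - strictly a mixture rather than a convolution - could be computed as:

```python
import numpy as np

loss_grid = np.array([1e6, 5e6, 10e6, 20e6, 50e6])

# Exceedance probabilities at those loss levels from three hypothetical models.
ep_curves = {
    "model_a": np.array([0.20, 0.08, 0.040, 0.010, 0.002]),
    "model_b": np.array([0.25, 0.10, 0.050, 0.015, 0.004]),
    "model_c": np.array([0.18, 0.06, 0.030, 0.008, 0.001]),
}
# Subjective weights expressing relative confidence in each model (they sum to 1).
weights = {"model_a": 0.5, "model_b": 0.3, "model_c": 0.2}

blended = sum(weights[m] * ep_curves[m] for m in ep_curves)
for loss, p in zip(loss_grid, blended):
    print(f"P(annual loss >= ${loss/1e6:.0f}m) ~ {p:.4f}")
```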

In the areas of emerging risk, parameter and model risk, not process risk, play a central role in the risk assessment as we have little or no evidential basis on which to decide between models or parameterizations.

But are we going far enough? Can we be so sure the future will be a repeat of the present? What about factors outside our domain of experience? Is it possible that for many risks we are unable to produce a probability distribution even allowing for model and parameter risk? Is insurance really faced with 'black swan' phenomena (the term refers to the failure of the inductive inference that all swans are white, once black swans were discovered in Australia), where factors outside our models are the prime driver of risk?

What techniques can we call upon to deal with these further levels of uncertainty?

8.10 New techniques

We have some tools at our disposal to deal with these challenges.
