Overview of loss models

Hurricane loss models have traditionally consisted of an input set of historical or synthetic storms that constitutes a frequency or occurrence model, plus additional meteorological, vulnerability, and actuarial components. In support of these components, databases on historical events and their detailed characteristics are necessary. For average annual loss cost estimation, probability distributions governing the stochastic generation of events are also necessary. For a given, fixed exposure, the hurricane loss model is then executed to simulate tens of thousands of years in order to produce loss cost estimates and attendant uncertainties in those estimates. This overview of hurricane loss model construction pertains both to the first loss model approved by the commission (the AIR model; Clark, 1986, 1997) and to the current public domain model (Powell et al., 2005), as well as to a model that has garnered ongoing funding from the Federal Emergency Management Agency's (FEMA's) HAZUS project (Vickery et al., 2000). The Risk Management Solutions, Inc. (RMS) and EQECAT models also fit into this framework, as evidenced by their public submissions to the commission.
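To make the simulation step concrete, the following is a minimal Python sketch of the loop just described, assuming a Poisson occurrence model and a placeholder lognormal event severity. The run length, landfall rate, and severity parameters are purely illustrative stand-ins for the full meteorological, vulnerability, and actuarial chain.

```python
import numpy as np

rng = np.random.default_rng(42)

N_YEARS = 50_000        # simulated years; typical runs use tens of thousands
ANNUAL_RATE = 0.55      # hypothetical Poisson landfall rate for one region

def event_loss(rng):
    # Placeholder severity: a lognormal draw stands in for running the
    # wind field, friction, and damage components for a single storm.
    return rng.lognormal(mean=16.0, sigma=1.5)

annual_losses = np.empty(N_YEARS)
for year in range(N_YEARS):
    n_events = rng.poisson(ANNUAL_RATE)             # occurrence model
    annual_losses[year] = sum(event_loss(rng) for _ in range(n_events))

aal = annual_losses.mean()                          # average annual loss
se = annual_losses.std(ddof=1) / np.sqrt(N_YEARS)   # sampling uncertainty
print(f"AAL: {aal:,.0f}  (std. error {se:,.0f})")
```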

Although in principle the hurricane loss models shown diagrammatically in Figure 11.1 are straightforward to construct, the "devil is in the details." In particular, databases on historical hurricanes (e.g., HURDAT (with updates); Schwerdt et al., 1979; Ho et al., 1987) must be mined to develop probability distributions of frequency of occurrence by location, of intensity by location, of forward speed, and so forth. A wind field model must be selected that can incorporate hurricane characteristics such as central pressure, radius of maximum winds, and forward speed. A parametric wind field provides the wind speed at the sites of the exposures of interest (e.g., the residential housing stock).
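As one concrete example of such a profile, the sketch below implements the Holland (1980) gradient wind model, a widely published parametric form; it is not necessarily the formulation used by any particular proprietary or public model, and the storm parameters in the example are illustrative. Storm motion asymmetry and boundary-layer reduction are omitted.

```python
import numpy as np

RHO_AIR = 1.15      # air density (kg m^-3)
OMEGA = 7.292e-5    # Earth's rotation rate (s^-1)

def holland_gradient_wind(r_km, p_central_hpa, p_far_hpa,
                          r_max_km, holland_b, lat_deg):
    """Gradient-level wind speed (m/s) at radius r from the storm center,
    following the Holland (1980) parametric profile."""
    r = np.asarray(r_km, dtype=float) * 1000.0
    r_max = r_max_km * 1000.0
    dp = (p_far_hpa - p_central_hpa) * 100.0          # pressure deficit (Pa)
    f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))     # Coriolis parameter
    a = (r_max / r) ** holland_b
    term = (holland_b / RHO_AIR) * dp * a * np.exp(-a) + (r * f / 2.0) ** 2
    return np.sqrt(term) - r * f / 2.0

# Example: Category-3-like storm; winds at 20, 40, and 80 km from the center
print(holland_gradient_wind([20, 40, 80], p_central_hpa=950.0,
                            p_far_hpa=1013.0, r_max_km=40.0,
                            holland_b=1.3, lat_deg=27.0))
```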

[Figure 11.1 omitted: a flow diagram in which exposure data and land cover/topography data feed a wind model and a friction model, whose output drives a damage function module that produces loss costs.]
Figure 11.1. Traditional structure of hurricane loss models.

In addition, some (but not all) published hurricane models further adjust the wind speed for frictional effects and overland weakening. Finally, the developer of a model must specify a damage function that converts wind forces on structures into physical damage (to the roof, cladding, windows, and so forth), expressed as a proportion of total structure value. Vulnerability functions produce "ground-up losses," which are then refined in the actuarial component of the model to emulate financial losses. The vulnerability functions have historically been the part of the model that distinguishes the various submissions, and they are considered highly proprietary. For vulnerability functions derived from insurance claims data (Friedman, 1984), an enormous amount of data processing, curve fitting, and engineering expert judgment takes place, especially to handle mitigation features (shutters, tie-downs, roof-to-wall connectors, and so forth). Loss data for major events must be obtained from insurance companies or other sources to develop vulnerability curves or to validate existing curves against more recent events.
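Published damage functions take many forms; the following is a minimal sketch of a generic sigmoidal vulnerability curve, with all parameter values (damage threshold, half-damage speed, shape exponent) invented for illustration rather than fitted to claims data.

```python
import numpy as np

def damage_ratio(v_ms, v_thresh=25.0, v_half=70.0, k=4.0):
    """Hypothetical vulnerability curve: fraction of structure value lost
    as a function of peak wind speed (m/s). v_thresh is the no-damage
    threshold, v_half the speed at roughly 50% loss, and k a shape
    exponent; all values here are illustrative, not fitted to claims."""
    v = np.maximum(np.asarray(v_ms, dtype=float) - v_thresh, 0.0)
    x = (v / (v_half - v_thresh)) ** k
    return x / (1.0 + x)        # saturates at 1.0 (total loss)

# Ground-up loss for a structure insured for $250,000:
print(damage_ratio([30.0, 50.0, 70.0, 90.0]) * 250_000)
```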

Watson and Johnson (2004) and Watson et al. (2004) investigated 324 combinations of wind field models, friction models, and damage functions drawn from the open literature, and they found that the resulting range of average annual loss costs bracketed those reported by the proprietary modelers. Perhaps this is not surprising, given that there is a finite set of parametric wind field models and damage functions. On the other hand, our implementations followed the descriptions given in the open literature and did not necessarily conform perfectly to the proprietary versions. The multi-model platform allowed examination of the sources of variation driving the loss costs among model combinations (Watson and Johnson, 2006). The choice of wind field model, the least sensitive (i.e., least proprietary) part of the model, was shown to be the main driver of the variation.
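Mechanically, assembling such a multi-model platform amounts to taking the Cartesian product over the component catalogs. The toy sketch below uses hypothetical placeholder names rather than the actual 324-model catalog.

```python
from itertools import product

# Hypothetical component catalogs standing in for the published models
# surveyed by Watson and Johnson (2004); names are placeholders.
WIND_MODELS = ["wind_1", "wind_2", "wind_3"]
FRICTION_MODELS = ["friction_1", "friction_2"]
DAMAGE_FUNCTIONS = ["damage_1", "damage_2", "damage_3"]

combos = list(product(WIND_MODELS, FRICTION_MODELS, DAMAGE_FUNCTIONS))
print(len(combos), "combinations")   # 3 x 2 x 3 = 18 in this toy catalog
for wind, friction, damage in combos[:3]:
    print(wind, friction, damage)
```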

Some politicians have gasped at the occasional factor-of-two to factor-of-three disparity in loss costs in specific counties among the proprietary models. They rightly ask how it is possible that individual models developed with great care and expense by teams of professionals produce such widely varying results. At first, one is tempted to despair at the state of the art of hurricane loss modeling as reflected in these apparent differences. Small changes in key underlying assumptions in a single model can produce wide swings in the generated loss costs in some locations. However, we have found that the median loss cost from the 324 combinations at each site provides stable results that change little in response to variations in otherwise key assumptions, such as far-field pressure or the radius of maximum winds, values that are inaccurate or incompletely known for historical storms. We refer to this scheme as an ensemble median approach. The approach was further analyzed in a technical report to the commission (Watson and Johnson, 2006). Several variants of the HURDAT dataset (Jarvinen et al., 1984, as updated annually by the Tropical Prediction Center) were considered, as well as various other model inputs such as land cover; while individual model results varied widely, the ensemble median results were again much more stable. Thus the ensemble median allows us to more fully explore the impact of alternative methodologies, such as GCM-based methods, for studying the underlying hurricane climatology without the risk that an interaction with a specific modeling technique (wind model, friction model, etc.) biases the results.
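In code, the ensemble median reduces to taking, at each site, the median across the loss costs produced by all model combinations. The sketch below uses synthetic loss costs purely to show the mechanics.

```python
import numpy as np

# loss_costs[i, j]: average annual loss cost from model combination i at
# site j. The values below are synthetic; in practice i would range over
# the 324 wind/friction/damage combinations described above.
rng = np.random.default_rng(7)
loss_costs = rng.lognormal(mean=0.0, sigma=0.8, size=(324, 5))

ensemble_median = np.median(loss_costs, axis=0)   # one stable value per site
print(ensemble_median)
```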
