7.3 Hierarchical holographic modelling and the theory of scenario structuring

7.3.1 Philosophy and methodology of hierarchical holographic modelling

Hierarchical holographic modelling (HHM) is a holistic philosophy/methodology aimed at capturing and representing the essence of the inherent diverse characteristics and attributes of a system - its multiple aspects, perspectives, facets, views, dimensions, and hierarchies. Central to the mathematical and systems basis of holographic modelling is the overlapping among various holographic models with respect to the objective functions, constraints, decision variables, and input-output relationships of the basic system. The term holographic refers to the desire to have a multi-view image of a system when identifying vulnerabilities (as opposed to a single view, or a flat image of the system). Views of risk can include but are not limited to (1) economic, (2) health, (3) technical, (4) political, and (5) social systems. In addition, risks can be geography-related and time-related. In order to capture a holographic outcome, the team that performs the analysis must provide a broad array of experience and knowledge.

The term hierarchical refers to the desire to understand what can go wrong at many different levels of the system hierarchy. HHM recognizes that for the risk assessment to be complete, one must recognize that there are macroscopic risks that are understood at the upper management level of an organization that are very different from the microscopic risks observed at lower levels. In a particular situation, a microscopic risk can become a critical factor in making things go wrong. To carry out a complete HHM analysis, the team that performs the analysis must include people who bring knowledge up and down the hierarchy.

HHM has turned out to be particularly useful in modelling large-scale, complex, and hierarchical systems, such as defence and civilian infrastructure systems. The multiple visions and perspectives of HHM add strength to risk analysis. It has been extensively and successfully deployed to study risks for government agencies such as the President's Commission on Critical Infrastructure Protection (PCCIP), the FBI, NASA, the Virginia Department of Transportation (VDOT), and the National Ground Intelligence Center, among others. (These cases are discussed as examples in Haimes [2004].) The HHM methodology/philosophy is grounded on the premise that in the process of modelling large-scale and complex systems, more than one mathematical or conceptual model is likely to emerge. Each of these models may adopt a specific point of view, yet all may be regarded as acceptable representations of the infrastructure system. Through HHM, multiple models can be developed and coordinated to capture the essence of many dimensions, visions, and perspectives of infrastructure systems.

7.3.2 The definition of risk

In the first issue of Risk Analysis, Kaplan and Garrick (1981) set forth the following 'set of triplets' definition of risk, R:

R = {⟨S_i, L_i, X_i⟩}

where S_i denotes the i-th 'risk scenario', L_i denotes the likelihood of that scenario, and X_i the 'damage vector' or resulting consequences. This definition has served the field of risk analysis well since then, and much early debate about how to quantify the L_i and X_i, and about the meaning of 'probability', 'frequency', and 'probability of frequency' in this connection, has been thoroughly resolved (Kaplan, 1993, 1996).

In Kaplan and Garrick (1981) the S_i themselves were defined, somewhat informally, as answers to the question, 'What can go wrong?' with the system or process being analysed. Subsequently, a subscript 'c' was added to the set of triplets by Kaplan (1991, 1993):

R = {⟨S_i, L_i, X_i⟩}_c

This denotes that the set of scenarios {S_i} should be 'complete', meaning it should include 'all the possible scenarios, or at least all the important ones'.
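As a concrete illustration, the set of triplets can be represented directly as a small data structure. The Python sketch below is purely illustrative; the scenario names, likelihoods, and damage values are hypothetical and are not taken from Kaplan and Garrick.

```python
# Illustrative sketch of the 'set of triplets' definition of risk.
# Scenario names, likelihoods, and damage vectors are hypothetical examples.
from dataclasses import dataclass
from typing import List

@dataclass
class RiskTriplet:
    scenario: str          # S_i: an answer to "What can go wrong?"
    likelihood: float      # L_i: likelihood of that scenario
    damages: List[float]   # X_i: damage vector of resulting consequences

# R = {<S_i, L_i, X_i>}; the subscript 'c' asserts that this list is complete,
# i.e. it covers all the possible scenarios, or at least all the important ones.
R = [
    RiskTriplet("pipeline rupture", 1.0e-4, [5.0e6, 2.0]),  # e.g. [repair cost, injuries]
    RiskTriplet("minor leak",       1.0e-2, [1.0e4, 0.0]),
]
```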

7.3.3 Historical perspectives

At about the same time that Kaplan and Garrick's (1981) definition of risk was published, so too was the first article on HHM (Haimes, 1981, 2004). Central to the HHM method is a particular diagram (see, for example, Fig. 7.1). This is particularly useful for analysing systems with multiple, interacting (perhaps overlapping) subsystems, such as a regional transportation or water supply system. The different columns in the diagram reflect different 'perspectives' on the overall system.

HHM can be seen as part of the theory of scenario structuring (TSS) and vice versa, that is, TSS as part of HHM. Under the sweeping generalization of the HHM method, the different methods of scenario structuring can lead to seemingly different sets of scenarios for the same underlying problem. This fact is a bit awkward from the standpoint of the 'set of triplets' definition of risk (Kaplan and Garrick, 1981).

The HHM approach divides the continuum of risk scenarios but does not necessarily partition it. In other words, it allows the set of subsets to be overlapping, that is, non-disjoint. It argues that disjointness is required only when we are going to quantify the likelihood of the scenarios, and even then, only if we are going to add up these likelihoods (in which case the overlapping areas would end up counted twice). Thus, if the risk analysis seeks mainly to identify scenarios rather than to quantify their likelihood, the disjointness requirement can be relaxed somewhat, so that it becomes a preference rather than a necessity.

To see how HHM and TSS fit within each other (Kaplan et al., 2001), one key idea is to view the HHM diagram as a depiction of the success scenario S_0. Each box in Fig. 7.1 may then be viewed as defining a set of actions or results required of the system, as part of the definition of 'success'. Conversely then, each box also defines a set of risk scenarios in which there is failure to accomplish one or more of the actions or results defined by that box. The union of all these sets contains all possible risk scenarios and is then 'complete'.

This completeness is, of course, a very desirable feature. However, the intersection of two of our risk scenario sets, corresponding to two different HHM boxes, may not be empty. In other words, our scenario sets may not be 'disjoint'.
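A small Python sketch makes the point concrete: the union of the scenario sets defined by two HHM boxes can be complete even though the sets overlap, while naively adding likelihoods across boxes double-counts the overlap. All scenario names and numbers below are hypothetical.

```python
# Risk scenario sets derived from two (hypothetical) HHM boxes; they overlap (non-disjoint).
box_a = {"pump failure", "power loss", "operator error"}   # scenarios violating box A's requirements
box_b = {"power loss", "sensor fault"}                      # scenarios violating box B's requirements

all_scenarios = box_a | box_b        # union: the 'complete' set of identified risk scenarios
overlap = box_a & box_b              # non-empty intersection: {"power loss"}

likelihood = {"pump failure": 0.02, "power loss": 0.01,
              "operator error": 0.03, "sensor fault": 0.005}

naive_total = sum(likelihood[s] for s in list(box_a) + list(box_b))  # counts "power loss" twice
correct_total = sum(likelihood[s] for s in all_scenarios)            # each scenario counted once

print(overlap, naive_total, correct_total)   # {'power loss'} 0.075 0.065
```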

7.4 Phantom system models for risk management of emergent multi-scale systems

No single model can capture all the dimensions necessary to adequately evaluate the efficacy of risk assessment and management activities of emergent multi-scale systems. This is because it is impossible to identify all relevant state variables and their sub-states that adequately represent large and multi-scale systems (Haimes, 1977, 1981, 2004). Indeed, there is a need for theory and methodology that will enable analysts to appropriately rationalize risk management decisions through a process that

1. identifies existing and potential emergent risks systemically,

2. evaluates, prioritizes, and filters these risks based on justifiable selection criteria,

3. collects, integrates, and develops appropriate metrics and a collection of models to understand the critical aspects of regions,

4. recognizes emergent risks that produce large impacts and risk management strategies that potentially reduce those impacts for various time frames,

5. optimally learns from implementing risk management strategies, and

6. adheres to an adaptive risk management process that is responsive to dynamic, internal, and external forced changes. To do so effectively, models must be developed to periodically quantify, to the extent possible, the efficacy of risk management options in terms of their costs, benefits, and remaining risks.

A risk-based, multi-model, systems-driven approach can effectively address these emergent challenges. Such an approach must be capable of maximally utilizing what is known now and of optimally learning, updating, and adapting through time as decisions are made and more information becomes available at various regional levels. The methodology must quantify risks as well as measure the extent of learning in order to quantify adaptability. This learn-as-you-go tactic results in continual re-evaluation and in risk management that evolves and learns over time.

Phantom system models (PSMs) (Haimes, 2007) enable research teams to effectively analyse major forced (contextual) changes on the characteristics and performance of emergent multi-scale systems, such as cyber and physical infrastructure systems, or major socio-economic systems. The PSM is aimed at providing a reasoned virtual-to-real experimental modelling framework with which to explore and thus understand the relationships that characterize the nature of emergent multi-scale systems. The PSM philosophy rejects a dogmatic approach to problem-solving that relies on a modelling approach structured exclusively on a single school of thinking. Rather, PSM attempts to draw on a pluri-modelling schema that builds on the multiple perspectives gained through generating multiple models. This leads to the construction of appropriate complementary models on which to deduce logical conclusions for future actions in risk management and systems engineering. Thus, we shift from only deciding what is optimal, given what we know, to answering questions such as (1) What do we need to know? (2) What are the impacts of having more precise and updated knowledge about complex systems from a risk reduction standpoint? and (3) What knowledge is needed for acceptable risk management decision-making? Answering this mandates seeking the 'truth' about the unknowable complex nature of emergent systems; it requires intellectually bias-free modellers and thinkers who are empowered to experiment with a multitude of modelling and simulation approaches and to collaborate for appropriate solutions.

The PSM has three important functions: (1) identify the states that would characterize the system, (2) enable modellers and analysts to explore cause-and-effect relationships in virtual-to-real laboratory settings, and (3) develop modelling and analyses capabilities to assess irreversible extreme risks; anticipate and understand the likelihoods and consequences of the forced changes around and within these risks; and design and build reconfigured systems to be sufficiently resilient under forced changes, and at acceptable recovery time and costs.

The PSM builds on and incorporates input from HHM, and by doing so seeks to develop causal relationships through various modelling and simulation tools; it imbues life and realism into phantom ideas for emergent systems that otherwise would never have been realized. In other words, with different modelling and simulation tools, PSM legitimizes the exploration and experimentation of out-of-the-box and seemingly 'crazy' ideas and ultimately discovers insightful implications that otherwise would have been completely missed and dismissed.

7.5 Risk of extreme and catastrophic events

7.5.1 The limitations of the expected value of risk

One of the most dominant steps in the risk assessment process is the quantification of risk, yet the validity of the approach most commonly used to quantify risk - its expected value - has received neither the broad professional scrutiny it deserves nor the hoped-for wider mathematical challenge that it mandates. One of the few exceptions is the conditional expected value of the risk of extreme events (among other conditional expected values of risks) generated by the partitioned multi-objective risk method (PMRM) (Asbeck and Haimes, 1984; Haimes, 2004).

Let p_X(x) denote the probability density function of the random variable X, where, for example, X is the concentration of the contaminant trichloroethylene (TCE) in a groundwater system, measured in parts per billion (ppb). The expected value of the concentration (the risk of the groundwater being contaminated by an average concentration of TCE) is E(X) ppb. If the probability density function is discretized into n regions over the entire universe of contaminant concentrations, then E(X) equals the sum of the products p_i x_i, where p_i is the probability that the i-th segment of the probability regime has a TCE concentration of x_i. Integration (instead of summation) is used for the continuous case. Note, however, that the expected-value operation commensurates contaminations (events) of low concentration and high frequency with contaminations of high concentration and low frequency. For example, events x_1 = 2 ppb and x_2 = 20,000 ppb that have the probabilities p_1 = 0.1 and p_2 = 0.00001, respectively, yield the same contribution to the overall expected value: (0.1)(2) + (0.00001)(20,000) = 0.2 + 0.2. However, to the decision maker in charge, the relatively low likelihood of a disastrous contamination of the groundwater system with 20,000 ppb of TCE cannot be equivalent to the contamination at the low concentration of 2 ppb, even with the very high likelihood of such contamination. Owing to the nature of mathematical smoothing, the averaging of the contaminant concentration in this example does not lend itself to prudent management decisions. This is because the expected value of risk does not accentuate catastrophic events and their consequences, thus misrepresenting what would be perceived as an unacceptable risk.
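The commensuration effect in this example is easy to reproduce numerically. The short Python sketch below simply recomputes the two contributions from the text; the helper function is illustrative, not part of any cited methodology.

```python
# Reproduces the TCE example: the expected-value operation gives identical weight to a
# high-frequency/low-concentration event and a low-frequency/catastrophic one.

def expected_value(probabilities, outcomes):
    """Discretized expected value E(X) = sum of p_i * x_i."""
    return sum(p * x for p, x in zip(probabilities, outcomes))

p = [0.1, 0.00001]          # p_1, p_2 from the text
x = [2.0, 20_000.0]         # x_1 = 2 ppb, x_2 = 20,000 ppb

contributions = [pi * xi for pi, xi in zip(p, x)]
print(contributions)                 # [0.2, 0.2] -- indistinguishable contributions
print(expected_value(p, x))          # 0.4 -- the catastrophic event is buried in the average
```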

7.5.2 The partitioned multi-objective risk method

Before the partitioned multi-objective risk method (PMRM) was developed, problems with at least one random variable were solved by computing and minimizing the unconditional expectation of the random variable representing damage. In contrast, the PMRM isolates a number of damage ranges (by specifying so-called partitioning probabilities) and generates conditional expectations of damage, given that the damage falls within a particular range. A conditional expectation is defined as the expected value of a random variable, given that this value lies within some pre-specified probability range. Clearly, the values of conditional expectations depend on where the probability axis is partitioned. The analyst subjectively chooses where to partition in response to the extreme characteristics of the decision-making problem. For example, if the decision-maker is concerned about the once-in-a-million-years catastrophe, the partitioning should be such that the expected catastrophic risk is emphasized.
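For the high-damage (catastrophic) range, this conditional expectation takes the familiar form below, where β denotes the damage-axis value onto which the upper partitioning probability is mapped and p(x) the probability density function of the damage; this is a generic sketch of a conditional expectation rather than a formula quoted from the sources cited here:

E[X | X ≥ β] = ( ∫_β^∞ x p(x) dx ) / ( ∫_β^∞ p(x) dx )

The conditional expectations for the low- and intermediate-damage ranges are defined analogously over their own damage intervals.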

The ultimate aim of good risk assessment and management is to suggest some theoretically sound and defensible foundations for regulatory agency guidelines for the selection of probability distributions. Such guidelines should help incorporate meaningful decision criteria, accurate assessments of risk in regulatory problems, and reproducible and persuasive analyses. Since these risk evaluations are often tied to highly infrequent or low-probability catastrophic events, it is imperative that these guidelines consider and build on the statistics of extreme events. Selecting probability distributions to characterize the risk of extreme events is a subject of emerging studies in risk management (Bier and Abhichandani, 2003; Haimes, 2004; Lambert et al., 1994; Leemis, 1995).

There is abundant literature that reviews the methods of approximating probability distributions from empirical data. Goodness-of-fit tests determine whether hypothesized distributions should be rejected as representations of empirical data. Approaches such as the method of moments and maximum likelihood are used to estimate distribution parameters. The caveat in directly applying accepted methods to natural hazards and environmental scenarios is that most deal with selecting the best matches for the 'entire' distribution. The problem is that these assessments and decisions typically address worst-case scenarios on the tails of distributions. The differences in distribution tails can be very significant even if the parameters that characterize the central tendency of the distribution are similar. A normal and a uniform distribution that have similar expected values can markedly differ on the tails. The possibility of significantly misrepresenting the tails, which are potentially the most relevant portion of the distribution, highlights the importance of considering extreme events when selecting probability distributions (see also Chapter 8, this volume).
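The point about tails can be illustrated with a short numerical comparison. The sketch below (assuming SciPy is available; the parameter values are arbitrary) matches a uniform distribution to a standard normal in mean and variance and then compares their exceedance probabilities.

```python
# Two distributions with the same mean and variance can differ dramatically in their tails.
import math
from scipy import stats

mean, std = 0.0, 1.0
normal = stats.norm(loc=mean, scale=std)

# A uniform distribution on [a, b] has variance (b - a)^2 / 12; choose [a, b] to match mean and variance.
half_width = math.sqrt(3.0) * std
uniform = stats.uniform(loc=mean - half_width, scale=2.0 * half_width)

for threshold in (1.0, 2.0, 4.0):
    print(threshold, normal.sf(threshold), uniform.sf(threshold))
# Beyond about 1.73 standard deviations the matched uniform assigns zero exceedance probability,
# while the normal still assigns positive probability to extreme damages.
```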

More time and effort should be spent to characterize the tails of distributions when modelling the entire distribution. Improved matching between extreme events and distribution tails provides policymakers with more accurate and relevant information. Major factors to consider when developing distributions that account for tail behaviours include (1) the availability of data, (2) the characteristics of the distribution tail, such as shape and rate of decay, and (3) the value of additional information in assessment.

The conditional expectations of a problem are found by partitioning the problem's probability axis and mapping these partitions onto the damage axis; the damage axis is thereby partitioned into corresponding ranges. Although no general rule exists to guide the partitioning, Asbeck and Haimes (1984) suggest that if three damage ranges are considered for a normal distribution, then the +1σ and +4σ partitioning values provide an effective rule of thumb. These values correspond to partitioning the probability axis at 0.84 and 0.999968; that is, the low-damage range would contain 84% of the damage events, the intermediate range would contain just under 16%, and the catastrophic range would contain about 0.0032% (a probability of 0.000032). In the literature, catastrophic events are generally said to be those with a probability of exceedance of 10^-5 (see, for instance, the National Research Council report on dam safety [National Research Council, 1985]). This probability corresponds approximately to events exceeding +4σ.
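The following Python sketch (assuming SciPy; the mean and standard deviation of the damage distribution are hypothetical) computes the partitioning probabilities for the +1σ/+4σ rule of thumb and the conditional expectation of the catastrophic range, and contrasts it with the unconditional expected value.

```python
# PMRM-style partitioning of a (hypothetical) normal damage distribution at +1 and +4
# standard deviations, and the conditional expectation of the catastrophic range.
import numpy as np
from scipy import stats, integrate

mu, sigma = 100.0, 20.0                 # hypothetical damage distribution parameters
dist = stats.norm(loc=mu, scale=sigma)

beta_1 = mu + 1.0 * sigma               # low/intermediate partition on the damage axis
beta_2 = mu + 4.0 * sigma               # intermediate/catastrophic partition

print(dist.cdf(beta_1), dist.cdf(beta_2))     # ~0.8413 and ~0.999968 on the probability axis

# Conditional expectation of the catastrophic range, E[X | X >= beta_2].
numerator, _ = integrate.quad(lambda x: x * dist.pdf(x), beta_2, np.inf)
f_catastrophic = numerator / dist.sf(beta_2)

print(dist.mean())        # unconditional expected value: 100.0
print(f_catastrophic)     # roughly mu + 4.2*sigma (~184) -- the extreme range is accentuated
```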

A continuous random variable X of damages has a cumulative distribution function (cdf) P(x) and a probability density function (pdf) p(x), which are defined by the following relationships:

P(x) = Prob[X ≤ x]

p(x) = dP(x)/dx

The cdf represents the non-exceedance probability of x. The exceedance probability of x is defined as the probability that X is observed to be greater than x and is equal to one minus the cdf evaluated at x. The expected value, average, or mean value of the random variable X is defined as

E(X) = ∫_{-∞}^{∞} x p(x) dx

For the discrete case, where the universe of events (sample space) of the random variable X is discretized into I segments, the expected value of damage E(X) can be written as

E(X) = Σ_{i=1}^{I} p_i x_i,   where p_i ≥ 0 and Σ_{i=1}^{I} p_i = 1
