6.2 Past-future asymmetry and risk inferences

One important selection effect in the study of GCRs arises from the breakdown of the temporal symmetry between past and future catastrophes when our existence at the present epoch, and the necessary conditions for it, are taken into account. In particular, some of the predictions derived from past records are unreliable due to observation selection, thus introducing an essential qualification to the general and often uncritically accepted gradualist principle that 'the past is a key to the future'. The resulting anthropic overconfidence bias is operative in a wide range of catastrophic events, and leads to potentially dangerous underestimates of the corresponding risk probabilities. After we demonstrate the effect on a toy model applied to a single catastrophic event in Section 6.2.1, we shall develop the argument in more detail in Section 6.2.2, while considering its applicability conditions for various types of GCRs in Section 6.2.3. Finally, in Section 6.2.4, we show that with the help of additional astrobiological information, we may do even better and constrain the probabilities of some very specific exogenous risks.

Fig. 6.1 A schematic presentation of the single-event toy model. The evidence E consists of our present-day existence.

3 For a summary of the vast literature on observation selection, anthropic principles, and anthropic reasoning in general, see Barrow and Tipler (1986); Balashov (1991); Bostrom (2002a).

6.2.1 A simplified model

Consider the simplest case of a single very destructive global catastrophe, for instance, a worse-than-Toba super-volcanic eruption (see Chapter 10, this volume). The evidence we take into account in a Bayesian manner is the fact of our existence at the present epoch; this, in turn, implies the existence of a complicated web of evolutionary processes upon which our emergence is contingent; we shall neglect this complication in the present binary toy model and return to it in the next subsection. The situation is schematically shown in Fig. 6.1. The a priori probability of the catastrophe is P, and the probability of human extinction (or of a sufficiently strong perturbation leading to divergence of evolutionary pathways from the morphological subspace containing humans) upon the catastrophic event is Q. We shall suppose that the two probabilities are (1) constant, (2) adequately normalized, and (3) applicable to a particular well-defined interval of past time. Event B2 is the occurrence of the catastrophe, and by E we denote the evidence of our present existence.

4 Parts of this section are loosely based upon Ćirković (2007).

The direct application of the Bayes formula for expressing conditional probabilities in the form

P(B2|E) = P(E|B2) P(B2) / [P(E|B1) P(B1) + P(E|B2) P(B2)],   (6.1)

using our notation, with P(B1) = 1 - P, P(B2) = P, P(E|B1) = 1, and P(E|B2) = Q, yields the a posteriori probability as

P(B2|E) = PQ / (1 - P + PQ).   (6.2)

By simple algebraic manipulation, we can show that

P(B2|E) ≤ P(B2) = P,   (6.3)

that is, we tend to underestimate the true catastrophic risk. It is intuitively clear why: the symmetry between past and future is broken by the existence of an evolutionary process leading to our emergence as observers at this particular epoch in time. We can expect a large catastrophe tomorrow, but we cannot - even without any empirical knowledge - expect to find traces of a large catastrophe that occurred yesterday, since it would have pre-empted our existence today. Note that

lim (Q → 0) P(B2) / P(B2|E) = ∞.

An obvious consequence is that absolutely destructive events, which humanity has no chance of surviving at all (Q = 0), completely annihilate our confidence in predicting from past occurrences: very destructive events destroy predictability altogether. This almost trivial conclusion is not, however, widely appreciated.
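The toy model can be checked numerically. The following sketch (with purely illustrative values of P and Q, not estimates of any real risk) compares the a posteriori probability PQ / (1 - P + PQ) with a direct Monte Carlo simulation over hypothetical planetary histories:

```python
import random

def posterior_analytic(P, Q):
    """A posteriori probability of a past catastrophe, PQ / (1 - P + PQ)."""
    return P * Q / (1.0 - P + P * Q)

def posterior_mc(P, Q, n=200_000, seed=1):
    """Fraction of observer-containing histories in which the catastrophe
    actually occurred; histories without observers are never sampled."""
    rng = random.Random(seed)
    occurred = observed = 0
    for _ in range(n):
        catastrophe = rng.random() < P  # a priori probability P
        # Observers certainly arise without the catastrophe, with chance Q otherwise.
        observers = (not catastrophe) or (rng.random() < Q)
        if observers:
            observed += 1
            occurred += catastrophe
    return occurred / observed

P, Q = 0.5, 0.1                  # illustrative values only
print(posterior_analytic(P, Q))  # well below the a priori P = 0.5
print(posterior_mc(P, Q))
```

As Q approaches 0, the ratio of the a priori P to the simulated posterior grows without bound, reproducing the loss of predictability for absolutely destructive events.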

The issue at hand is the possibility of vacuum phase transition (see Chapter 16, this volume). This is an example par excellence of a Q = 0 event: its ecological consequences are such that the extinction not only of humanity but also of the terrestrial biosphere is certain.5 However, the anthropic bias was noticed neither by Hut and Rees (1983) nor by many of the subsequent papers citing it. Instead, these authors suggested that the idea of high-energy experiments triggering vacuum phase transition can be rejected by comparison with the high-energy events occurring in nature. Since the energies of particle collisions taking place, for instance, in interactions between cosmic rays and the Earth's atmosphere or the solid mass of the Moon are still orders of magnitude higher than those achievable in human laboratories in the near future, and with plausible general assumptions on the scaling of the relevant reaction cross-sections with energy, Hut and Rees concluded that, in view of the fact that the Earth (and the Moon) survived the cosmic-ray bombardment for about 4.5 Gyr, we are safe for the foreseeable future. In other words, their argument consists of the claim that the absence of a catastrophic event of this type in our past light cone gives us the information that the probability P (or its rate per unit time p) has to be so extremely small that any fractional increase caused by human activities (like the building and operating of a new particle collider) is insignificant. If, for example, p is 10^-50 per year, then its doubling or even a 1000-fold increase by deliberate human activities is arguably unimportant. Thus, we can feel safe with respect to the future on the basis of our observations about the past. As we have seen, there is a hole in this argument: all observers everywhere will always find that no such disaster has occurred in their backward light cone - and this is true whether such disasters are common or rare.
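The 'fractional increase' step of the Hut-Rees argument is easy to quantify. A minimal sketch, using the purely illustrative rate p = 10^-50 per year quoted above:

```python
import math

def horizon_risk(p, T):
    """Probability of at least one event over T years for a Poisson rate p
    per year; expm1 keeps precision for extremely small rates."""
    return -math.expm1(-p * T)

p = 1e-50                          # illustrative annual rate from the argument above
T = 1e10                           # a horizon comparable to stellar lifetimes, in years
print(horizon_risk(p, T))          # about 1e-40: negligible
print(horizon_risk(1000 * p, T))   # a 1000-fold increase is still about 1e-37
```

The anthropic point, of course, is that the low rate itself cannot be validly inferred from the empty past record in the first place.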

5 For a more optimistic view of this possibility in a fictional context, see Egan (2002).

6.2.2 Anthropic overconfidence bias

In order to predict the future from records of the past, scientists use a wide variety of methods with one common feature: the construction of an empirical distribution function of events of a specified type (e.g., extraterrestrial impacts, supernova/gamma-burst explosions, or super-volcanic eruptions). In view of the Bayesian nature of our approach, we can dub this distribution function the a posteriori distribution function. Of course, deriving such a function from observed traces is often difficult and fraught with uncertainty. For instance, constructing the distribution function of asteroidal/cometary impactors from the sample of impact craters discovered on Earth (Earth Impact Database, 2005) requires making physical assumptions, rarely uncontroversial, about physical parameters of impactors such as their density, as well as astronomical (velocity distribution of Earth-crossing objects) and geological (the response of continental or oceanic crust to a violent impact, formation conditions of impact glasses) input.

However, the a posteriori distribution is not the end of the story. What we are interested in is the 'real' distribution of chances of events (or their causes), which is 'given by Nature' but not necessarily completely revealed by the historical record. This underlying objective characteristic of a system can be called its a priori distribution function. It reflects the (evolving) state of the system considered without reference to incidental spatio-temporal specifics. Notably, the a priori distribution function describes the stochastic properties of a chance-generating system in nature rather than the contingent outcomes of that generator in the particular history of a particular place (in this case, planet Earth). The relationship between the a priori and a posteriori distribution functions for several natural catastrophic hazards is shown in a simplified manner in Table 6.1. Only the a priori distribution is useful for predicting the future, since it is not constrained by observation selection effects.

The key insight is that the inference from the reconstructed empirical (a posteriori) distribution function to the inherent (a priori) distribution function must take account of an observation selection effect (Fig. 6.2).

Table 6.1 Examples of natural hazards potentially comprising GCRs and two types of their distribution functions

Type of Event            A Priori Distribution                       Empirical (A Posteriori) Distribution
Impacts                  Distribution of near-Earth objects          Distribution of impact craters,
                         and Earth-crossing comets                   shock glasses, and so on
Super-volcanism          Distribution of geophysical 'hot spots'     Distribution of calderas, volcanic
                         and their activity                          ash, ice cores, and so on
Supernovae and/or GRBs   Distribution of progenitors and their       Geochemical trace anomalies,
                         motions in the Solar neighbourhood          distribution of remnants

Note: Only the a priori distribution veritably describes nature and can serve as a source of predictions about future events.


Fig. 6.2 A sketch of the common procedure for deriving predictions about the future from past records. This applies to quite benign events as well as to GCRs, but only in the latter case do we need to apply the correction symbolically shown in the dashed-line box. The steps framed by the dashed line are - surprisingly enough - usually not performed in standard risk analysis; they are, however, necessary in order to obtain unbiased estimates of the magnitude of natural GCRs.


Global catastrophic risks exceeding some severity threshold eliminate all observers and are hence unobservable. Some types of catastrophes may also make the existence of observers on a planet impossible in a subsequent interval of time, whose size might be correlated with the magnitude of the catastrophe. Because of this observation selection effect, the events reflected in our historical record are not sampled from the full event space but rather from just the part of the event space that lies beneath the 'anthropic compatibility boundary' drawn on the time-severity diagram for each type of catastrophe. This biased sampling must be taken into account when we seek to infer the objective chance distribution from the observed empirical distribution of events. Amazingly, it is not taken into account in most real analyses, perhaps 'on naive ergodic grounds'.6

This observation selection effect is in addition to what we might call 'classical' selection effects applicable to any sort of event (e.g., the removal of traces of events in the distant past by erosion and other instances of natural entropy increase; see Woo, 1999). Even after these classical selection effects have been taken into account in the construction of an empirical (a posteriori) distribution, the observation selection effects remain to be corrected for in order to derive the a priori distribution function.
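The correction step can be sketched in code. Both the survival curve Q(s) and the observed counts below are hypothetical stand-ins, not real data; the point is only the reweighting of each severity bin by 1/Q(s):

```python
import math

def q_survival(s, s0=5.0):
    """Hypothetical chance that observers still arise after an event of severity s."""
    return math.exp(-s / s0)

def debias(observed_counts, severities):
    """Estimate the a priori severity distribution from a biased empirical
    (a posteriori) histogram by reweighting each bin by 1/Q(s), then renormalizing."""
    raw = [n / q_survival(s) for n, s in zip(observed_counts, severities)]
    total = sum(raw)
    return [r / total for r in raw]

severities = [1.0, 2.0, 4.0, 8.0]   # arbitrary severity scale
observed = [100, 40, 10, 1]         # the record under-represents severe events
print(debias(observed, severities))
```

After debiasing, the most severe bin carries a noticeably larger share of probability than in the raw record, as the anthropic compatibility boundary would lead us to expect.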

6.2.3 Applicability class of risks

It seems obvious that the reasoning sketched above applies to GCRs of natural origin since, with one partial exception, there is no unambiguous way of treating major anthropogenic hazards (like global nuclear war or the misuse of biotechnology) statistically. This is a necessary, but not yet sufficient, condition for the application of this argument. In order to establish the latter, we need natural catastrophic phenomena which are

• sufficiently destructive (at least in a part of the severity spectrum),

• sufficiently random (in the epistemic sense), and

• such that they leave traces in the terrestrial (or, in general, local) record allowing statistical inference.

There are many conceivable threats satisfying these broad desiderata. Some examples mentioned in the literature comprise the following:

1. asteroidal/cometary impacts (severity gauged by the Torino scale or the impact crater size)

2. super-volcanism episodes (severity gauged by the so-called volcanic explosivity index (VEI) or a similar measure)

6 I thank C.R. Shalizi for this excellent formulation.

3. supernovae/gamma-ray bursts (severity gauged by the distance/intrinsic power)

4. superstrong Solar flares (severity gauged by the spectrum/intrinsic power of electromagnetic and corpuscular emissions).

The crucial point here is to have events that sufficiently influenced our past, but without too much information obtainable externally to the terrestrial biosphere. Thus, there are differences between kinds of catastrophic events in this regard. For instance, the impact history of the Solar System (or at least of the part where the Earth is located) is, in theory, easier to obtain for the Moon, where erosion is orders of magnitude weaker than on Earth. In practice, in the current debates about the rates of cometary impacts, it is precisely the terrestrial cratering rates that are used as an argument for or against the existence of a large dark impactor population (see Napier, 2006; Chapter 11, this volume), thus offering a good model on which the anthropic bias can, at least potentially, be tested. In addition to the impact craters, there is a host of other traces one attempts to find in field work which contribute to building the empirical distribution function of impacts, notably chemical anomalies or shocked glasses (e.g., Schultz et al., 2004).

Supernova/gamma-ray burst frequencies are also inferred (albeit much less confidently!) from observations of distant regions, notably external galaxies similar to the Milky Way. On the one hand, finding local traces of such events in the form of geochemical anomalies (Dreschhoff and Laird, 2006) is exceedingly difficult and still very uncertain; the availability of such external evidence decreases the importance of the Bayesian probability shift. On the other hand, the destructive capacities of such events have been known and discussed for quite some time (see Chapter 12, this volume; Hunt, 1978; Ruderman, 1974; Schindewolf, 1962), and their appreciation has been particularly enhanced recently by the successful explanation of the hitherto mysterious gamma-ray bursts as explosions occurring in distant galaxies (Scalo and Wheeler, 2002). The possibility of such cosmic explosions causing a biotic crisis and possibly even a mass extinction episode has returned with a vengeance (Dar et al., 1998; Melott et al., 2004).

Super-volcanic episodes (see Chapter 10, this volume) - both explosive pyroclastic eruptions and non-explosive basaltic eruptions of longer duration - are perhaps the best example of global terrestrial catastrophes (which is the rationale for choosing one in the toy model above). They are interesting for two additional recently discovered reasons: (1) the super-volcanism that created the Siberian basaltic traps almost certainly triggered the end-Permian mass extinction (251.4 ± 0.7 Myr before present), killing up to 96% of terrestrial non-bacterial species (e.g., Benton, 2003; White, 2002); its global destructive potential is thus today beyond doubt. (2) Super-volcanism is perhaps the single almost-realized existential catastrophe: the Toba super-eruption probably reduced the human population to approximately 1000 individuals, nearly causing the extinction of humanity (Ambrose, 1998; Rampino and Self, 1992). In that light, we would do very well to consider this threat seriously; ironically, in view of historically well-known calamities like the destructions of Santorini, Pompeii, and Tambora, it has become an object of concern only very recently (e.g., McGuire, 2002; Roscoe, 2001).

As we have seen, one frequently cited argument in the debate on GCRs, that of Hut and Rees (1983), actually demonstrates how misleading (but comforting!) conclusions about risk probabilities can be reached when the anthropic overconfidence bias is not taken into account.

6.2.4 Additional astrobiological information

The bias affecting the conclusions of Hut and Rees (1983) can be at least partially corrected by using additional information coming from astrobiology, as has recently been done by Tegmark and Bostrom (2005). Astrobiology is the nascent and explosively developing discipline that deals with three canonical questions: How does life begin and develop? Does life exist elsewhere in the universe? What is the future of life on Earth and in space? One of the most interesting of the many astrobiological results of recent years has been the study by Lineweaver (2001), showing that Earth-like planets around other stars in the Galactic Habitable Zone (GHZ; Gonzalez et al., 2001) are, on average, 1.8 ± 0.9 Gyr older than our planet (see also the extension of this study by Lineweaver et al., 2004). His calculations are based on the tempo of chemical enrichment as the basic precondition for the existence of terrestrial planets. Moreover, Lineweaver's results enable the construction of a planetary age distribution, which can be used to constrain the rate of particularly destructive catastrophes, like vacuum decay or a strangelet catastrophe.

The central idea of the Tegmark and Bostrom study is that the planetary age distribution, as compared to the Earth's age, bounds the rate of many doomsday scenarios. If catastrophes that permanently destroy or sterilize a cosmic neighbourhood were very frequent, then almost all intelligent observers would arise much earlier than we did, since the Earth is a latecomer within the set of habitable planets. Using the Lineweaver data on planetary formation rates, it is possible to calculate the distribution of birth rates for intelligent species under different assumptions about the rate of sterilization by catastrophic events. Combining this with the information about our own temporal location enables the rather optimistic conclusion that the cosmic (permanent) sterilization rate is at most of the order of one per 10^9 years.
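The 'latecomer' premise is easy to illustrate. A minimal sketch, under the simplifying assumption (made here purely for illustration; the published distribution is not Gaussian) that the quoted 1.8 ± 0.9 Gyr figure describes a Gaussian population spread of formation-time leads over the Earth:

```python
from statistics import NormalDist

# Assumed, illustrative model: formation-time lead of Earth-like planets over
# Earth, in Gyr, taken as Gaussian with mean 1.8 and standard deviation 0.9.
lead = NormalDist(mu=1.8, sigma=0.9)

# Fraction of Earth-like planets that formed later than Earth under this model.
frac_younger_than_earth = lead.cdf(0.0)
print(frac_younger_than_earth)   # roughly 2%: Earth is a latecomer
```

Under this toy model, only a few per cent of Earth-like planets are younger than ours, which is what gives the observed lateness of our own emergence its constraining power on sterilization rates.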

How about catastrophes that do not permanently sterilize a cosmic neighbourhood (preventing habitable planets from surviving and forming in that neighbourhood)? Most catastrophes are obviously in this category. Is biological evolution on the other habitable planets in the Milky Way influenced more or less by catastrophes than it is on Earth? We cannot easily say, because the stronger the catastrophic stress (the larger the analogue of our probability 1 - Q is, on average), the less useful information we can extract about the proximity, or otherwise, of our particular historical experience to what is generally to be expected. However, future astrobiological studies could help us resolve this conundrum. Some data already exist. For instance, one recently well-studied case is the system of the famous nearby Sun-like star Tau Ceti, which contains both planets and a massive debris disc, analogous to the Solar System's Kuiper belt. Modelling of Tau Ceti's dust disc observations indicates, however, that the mass of the colliding bodies up to 10 km in size may total around 1.2 Earth masses, compared with 0.1 Earth masses estimated to be in the Solar System's Edgeworth-Kuiper belt (Greaves et al., 2004). Thus, Tau Ceti's dust disc may have around 10 times more cometary and asteroidal material than is currently found in the Solar System - in spite of the fact that Tau Ceti seems to be about twice as old as the Sun (and the amount of such material is conventionally expected to decrease with time). Why the Tau Ceti system would have a more massive cometary disc than the Solar System is not fully understood, but it is reasonable to conjecture that any hypothetical terrestrial planet in this extrasolar planetary system has been subjected to much more severe impact stress than the Earth has been during the course of its geological and biological history.7
