The Doomsday Argument (DA) is an anthropic argument purporting to show that we have systematically underestimated the probability that humankind will become extinct relatively soon. Originated by the astrophysicist Brandon Carter and developed at length by the philosopher John Leslie,8 DA purports to show that we have failed to take fully into account the indexical information residing in the fact of when in the history of the human species we exist. Leslie (1996) - in what can be considered the first serious study of GCRs facing humanity and their philosophical aspects - gives substantial weight to DA, arguing that it prompts immediate re-evaluation of the probabilities of extinction obtained through direct analysis of particular risks and their causal mechanisms.
The core idea of DA can be expressed through the following thought experiment. Place two large urns in front of you, one of which you know contains 10 balls, the other a million, but you do not know which is which.
7 Earth-like planets have not yet been discovered around Tau Ceti, but in view of the crude observational techniques employed so far, this is not unexpected; the new generation of planet-searching instruments currently in preparation (Darwin, Gaia, TPF, etc.) will settle this question.
8 Originally in Leslie (1989); for his most comprehensive treatment, see Leslie (1996). Carter did not publish on DA.
The balls in each urn are numbered 1, 2, 3, 4, ... Now take one ball at random from the left urn; it shows the number 7. This is clearly a strong indication that the left urn contains only 10 balls. If the odds originally were 50:50 (identical-looking urns), an application of Bayes' theorem gives the posterior probability that the left urn is the one with only 10 balls as P_post(n = 10) ≈ 0.99999. Now consider the case where instead of two urns you have two possible models of humanity's future, and instead of balls you have human individuals, ranked according to birth order. One model suggests that the human race will soon become extinct (or at least that the number of individuals will be greatly reduced), and as a consequence the total number of humans that ever will have existed is about 100 billion. Even vociferous optimists would not put the prior probability of such a development excessively low - certainly not lower than the probability of the largest certified natural disaster (the so-called 'asteroid test') of about 10^-8 per year. The other model indicates that humans will colonize other planets, spread through the Galaxy, and continue to exist for many future millennia; we consequently can take the number of humans in this model to be of the order of, say, 10^18. As a matter of fact, you happen to find that your rank is about 60 billion. According to Carter and Leslie, we should reason in the same way as we did with the urn balls. That you should have a rank of about 60 billion is much more likely if only 100 billion humans ever will have lived than if the number is 10^18. Therefore, by Bayes' theorem, you should update your beliefs about mankind's prospects and realize that an impending doomsday is much more probable than you previously thought.9
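The Bayesian update in both the urn case and the DA case can be sketched numerically. The following is a minimal illustration (the helper function and the 1% doom prior are assumptions chosen for the example, not figures from the text); it uses the chapter's own numbers of 10 versus 10^6 balls, and 10^11 versus 10^18 total humans with a birth rank of about 60 billion, which is consistent with both hypotheses.

```python
def posterior_small(prior_small, n_small, n_large):
    """Posterior probability of the 'small' hypothesis after observing a
    sample (ball number or birth rank) consistent with both hypotheses.
    Likelihood of any given rank is uniform: 1/N under a total of N."""
    like_small = 1.0 / n_small
    like_large = 1.0 / n_large
    evidence = prior_small * like_small + (1 - prior_small) * like_large
    return prior_small * like_small / evidence

# Urn case: 50:50 prior, urns of 10 vs 1,000,000 balls, ball no. 7 drawn.
p_urn = posterior_small(0.5, 10, 10**6)
print(round(p_urn, 5))  # 0.99999

# DA case: 'doom soon' (~1e11 humans total) vs 'colonization' (~1e18),
# birth rank ~6e10 observed. Even a modest 1% prior for doom is swamped
# by the ~1e7 likelihood ratio favouring the smaller total population.
p_doom = posterior_small(0.01, 10**11, 10**18)
print(round(p_doom, 5))  # 0.99999
```

This makes vivid why Leslie regards the argument as forcing a re-evaluation: the likelihood ratio between the two population models is so large that it overwhelms almost any non-negligible prior for early extinction.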
Its underlying idea is formalized by Bostrom (1999, 2002a) as the Self-sampling Assumption (SSA):
SSA: One should reason as if one were a random sample from the set of all observers in one's reference class.
In effect, it tells us that there is no structural difference between doing statistics with urn balls and doing it with intelligent observers. SSA has several seemingly paradoxical consequences, which are readily admitted by its supporters; for a detailed discussion, see Bostrom (2001). In particular, the reference class problem ('what counts as an observer?') has been plaguing the entire field of anthropic reasoning. A possible response to it is an improved version of SSA, 'Strong SSA' (SSSA):
SSSA: One should reason as if one's present observer-moment were a random sample from the set of all observer-moments in its reference class.
9 This is the original, Carter-Leslie version of DA. The version of Gott (1993) is somewhat different, since it deals not with the number of observers but with the intervals of time characterizing any phenomena (including humanity's existence). Where Gott does consider the number of observers, his argument is essentially temporal, depending on an (obviously quite speculative) choice of a particular population model for future humanity. It seems that a gradual consensus has been reached that this version is inferior to the Carter-Leslie one (see especially Caves, 2000; Olum, 2002), so we shall concentrate on the latter.
It can be shown that by taking more indexical information into account than SSA does (SSA considers only information about which observer you are, but you also have information about, for example, which temporal part of this observer - that is, which observer-moment - you currently occupy), it is possible to relativize your reference class so that it may contain different observers at different times, depending partly on your epistemic situation on each occasion. SSA, therefore, describes the correct way of assigning probabilities only in certain special cases; and revisiting the existing arguments for SSA, we find that this is all they establish. In particular, DA is inconclusive. It is shown to depend on particular assumptions about the part of one's subjective prior probability distribution that has to do with indexical information - assumptions that one is free to reject and, indeed, arguably ought to reject in light of their strongly counterintuitive consequences. Thus, applying the argument to our actual case may be a mistake; at least, a serious methodological criticism can be made of such an inference.