The Simulation Argument

A particular speculative application of the theory of observation selection leads to the so-called Simulation Argument of Bostrom (2003). If we accept the possibility that a future advanced human (post-human) civilization might have the technological capability of running 'ancestor-simulations' - computer simulations of people like our historical predecessors, sufficiently detailed for the simulated people to be conscious - we run into an interesting consequence. Starting from a rather simple expression for the fraction of observers living in simulations (f_sim),

f_sim = (number of observers in simulations) / (total number of observers)
     = (number of observers in simulations) / (number of observers in simulations + number of observers outside of simulations)   (6.5)

Bostrom reaches the intriguing conclusion that this commits one to the belief that either (1) we are living in a simulation, or (2) we are almost certain never to reach the post-human stage, or (3) almost all post-human civilizations lack individuals who run significant numbers of ancestor-simulations, that is, computer-emulations of the sort of human-like creatures from which they evolved. Disjunct (3) looks at first glance most promising, but it should be clear that it suggests a quite uniform or monolithic social organization of the future, which could be a hallmark of totalitarianism, a GCR in its own right (see Chapter 22, this volume). The conclusion of the Simulation Argument appears to be a pessimistic one, for it narrows down quite substantially the range of positive future scenarios that are tenable in light of the empirical information we now have. The Simulation Argument increases the probability that we are living in a simulation (which may in many subtle ways affect our estimates of how likely various outcomes are) and it decreases the probability that the post-human world would contain lots of free individuals who have large computational resources and human-like motives. But how does it threaten us right now?
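The logic of Eq. (6.5) can be made concrete with a minimal sketch. The observer counts below are purely illustrative assumptions, not estimates from the text; the point is only that if even one civilization runs many ancestor-simulations, simulated observers swamp unsimulated ones:

```python
def fraction_simulated(n_sim: float, n_real: float) -> float:
    """Eq. (6.5): fraction of all observers who live inside simulations."""
    return n_sim / (n_sim + n_real)

# Illustrative (assumed) counts: 10^12 simulated observers vs 10^10
# unsimulated ones. The ratio is then very close to unity:
print(fraction_simulated(n_sim=1e12, n_real=1e10))  # ≈ 0.99
```

Under these assumed numbers, an observer reasoning indifferently over all observers should assign a probability of about 0.99 to being in a simulation, which is the force behind disjunct (1).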

In a nutshell, the simulation risk lies in disjunct (1), which implies the possibility that the simulation we inhabit could be shut down. As Bostrom (2002b, p. 7) writes: 'While to some it may seem frivolous to list such a radical or "philosophical" hypothesis next to the concrete threat of nuclear holocaust, we must seek to base these evaluations on reasons rather than untutored intuition.' Until a refutation appears of the argument presented in Bostrom (2003), it would be intellectually dishonest to neglect to mention simulation-shutdown as a potential extinction mode.

A decision to terminate our simulation, taken by the post-human director (under which term we shall subsume any relevant agency), may be prompted by our actions or by any number of exogenous factors. Such exogenous factors may include generic properties of ancestor-simulations, such as a fixed temporal window or a fixed amount of allocated computational resources, or emergent issues, such as the realization of a GCR in the director's world. Since we cannot know much about these hypothetical possibilities, let us pick one that is rather straightforward, to illustrate how a risk could emerge: the energy cost of running an ancestor-simulation.

From human experience thus far, especially in sciences such as physics and astronomy, the cost of running large simulations may be very high, though it is still dominated by the capital cost of computer processors and human personnel, not the energy cost. However, as hardware becomes cheaper and more powerful and the simulating tasks more complex, we may expect that at some point in the future the energy cost will become dominant. Computers necessarily dissipate energy as heat, as shown in the classical studies of Landauer (1961) and Brillouin (1962), with a finite minimum amount of heat dissipation required per processing of 1 bit of information.15 Since the simulation of a complex human society will require processing a huge amount of information, the accompanying energy cost is necessarily huge. This could imply that the running of ancestor-simulations is, even in advanced technological societies, expensive and/or subject to strict regulation. This makes the scenario in which a simulation runs until it dissipates a fixed amount of energy allocated in advance (similar to the way supercomputer or telescope resources are allocated for today's research) more plausible. Under this assumption, the simulation must necessarily either end abruptly or enter a prolonged phase of gradual simplification and asymptotic dying-out. In the best possible case, the simulation is allocated a fixed fraction of the energy resources of the director's civilization. In that case, it is, in principle, possible to have a simulation of indefinite duration, linked only to the (much more remote) options for the ending of the director's world. On the other hand, our activities may make the simulation shorter by increasing the complexity of present entities and thus increasing the running cost of the simulation.16
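The Landauer bound mentioned above can be sketched in a back-of-the-envelope way: erasing one bit of information dissipates at least k_B T ln 2 of heat, so the minimum energy cost of a simulation scales with the number of irreversible bit operations it performs. The bit count used below is a placeholder assumption for illustration, not a figure from the text:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_min_energy(bits_erased: float, temperature_k: float = 300.0) -> float:
    """Minimum heat (in joules) dissipated when erasing `bits_erased` bits
    at the given temperature, per the Landauer bound E = k_B * T * ln(2)."""
    return bits_erased * K_B * temperature_k * math.log(2)

# Assumed placeholder: 1e35 irreversible bit operations for an
# ancestor-simulation (illustrative only).
energy_j = landauer_min_energy(1e35)
print(f"Landauer floor: {energy_j:.2e} J")
```

Even this thermodynamic floor, which real hardware exceeds by many orders of magnitude, grows linearly with the information processed, which is why a director's civilization might plausibly cap a simulation's energy budget in advance.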

15 There may be exceptions to this, related to the complex issue of reversible computing. In addition, if the Landauer-Brillouin bound holds, this may have important consequences for the evolution of advanced intelligent communities, as well as for our current SETI efforts, as shown by Cirkovic and Bradbury (2006).

16 The 'planetarium hypothesis', advanced by Baxter (2000) as a possible solution to Fermi's paradox, is actually very similar to the general simulation hypothesis; however, Baxter suggests exactly such 'risky' behaviour in order to try to force contact between us and the director(s)!
