Foreword

Martin J. Rees

In 1903, H.G. Wells gave a lecture at the Royal Institution in London, highlighting the risk of global disaster: 'It is impossible', proclaimed the young Wells, 'to show why certain things should not utterly destroy and end the human race and story; why night should not presently come down and make all our dreams and efforts vain ... something from space, or pestilence, or some great disease of the atmosphere, some trailing cometary poison, some great emanation of vapour from the interior of the earth, or new animals to prey on us, or some drug or wrecking madness in the mind of man.' Wells' pessimism deepened in his later years; he lived long enough to learn about Hiroshima and Nagasaki and died in 1946.

In that year, some physicists at Chicago started a journal called the Bulletin of the Atomic Scientists, aimed at promoting arms control. The logo on the Bulletin's cover is a clock, the closeness of whose hands to midnight indicates the editors' judgement on how precarious the world situation is. Every few years the minute hand is shifted, either forwards or backwards.

Throughout the decades of the Cold War, the entire Western world was at great hazard. The superpowers could have stumbled towards Armageddon through muddle and miscalculation. We are not very rational in assessing relative risk. In some contexts, we are absurdly risk-averse. We fret about statistically tiny risks: carcinogens in food, a one-in-a-million chance of being killed in a train crash, and so forth. But most of us were 'in denial' about the far greater risk of death in a nuclear catastrophe.

In 1989, the Bulletin's clock was put back to 17 minutes to midnight. There is now far less chance of tens of thousands of bombs devastating our civilization. But there is a growing risk of a few going off in a localized conflict. We are confronted by proliferation of nuclear weapons among more nations - and perhaps even the risk of their use by terrorist groups.

Moreover, the threat of global nuclear catastrophe could be merely in temporary abeyance. During the last century the Soviet Union rose and fell; there were two world wars. In the next hundred years, geopolitical realignments could be just as drastic, leading to a nuclear stand-off between new superpowers, which might be handled less adeptly (or less luckily) than the Cuba crisis, and the other tense moments of the Cold War era. The nuclear threat will always be with us - it is based on fundamental (and public) scientific ideas that date from the 1930s.

Despite the hazards, there are, today, some genuine grounds for being a techno-optimist. For most people in most nations, there has never been a better time to be alive. The innovations that will drive economic advance - information technology, biotechnology and nanotechnology - can boost the developing as well as the developed world. Twenty-first century technologies could offer lifestyles that are environmentally benign - involving lower demands on energy or resources than what we consider a good life today. And we could readily raise the funds - were there the political will - to lift the world's two billion most-deprived people from their extreme poverty.

But, along with these hopes, twenty-first century technology will confront us with new global threats - stemming from bio-, cyber- and environmental science, as well as from physics - that could be as grave as the bomb. The Bulletin's clock is now closer to midnight again. These threats may not trigger sudden worldwide catastrophe - the doomsday clock is not such a good metaphor - but they are, in aggregate, disquieting and challenging. The tensions between benign and damaging spin-offs from new technologies, and the threats posed by the Promethean power of science, are disquietingly real. Wells' pessimism might even have deepened further were he writing today.

One type of threat comes from humanity's collective actions; we are eroding natural resources, changing the climate, ravaging the biosphere and driving many species to extinction.

Climate change looms as the twenty-first century's number-one environmental challenge. The most vulnerable people - for instance, in Africa or Bangladesh - are the least able to adapt. Because of the burning of fossil fuels, the CO2 concentration in the atmosphere is already higher than it has ever been in the last half million years - and it is rising ever faster. The higher CO2 rises, the greater the warming - and, more important still, the greater will be the chance of triggering something grave and irreversible: rising sea levels due to the melting of Greenland's icecap and so forth. The global warming induced by the fossil fuels we burn this century could lead to sea level rises that continue for a millennium or more.

The science of climate change is intricate. But it is simple compared to the economic and political challenge of responding to it. The market failure that leads to global warming poses a unique challenge for two reasons. First, unlike the consequences of more familiar kinds of pollution, the effect is diffuse: the CO2 emissions from the UK have no more effect here than they do in Australia, and vice versa. That means that any credible framework for mitigation has to be broadly international. Second, the main downsides are not immediate but lie a century or more in the future: inter-generational justice comes into play; how do we rate the rights and interests of future generations compared to our own? The solution requires coordinated action by all major nations. It also requires far-sightedness - altruism towards our descendants. History will judge us harshly if we discount too heavily what might happen when our grandchildren grow old. It is deeply worrying that there is no satisfactory fix yet on the horizon that will allow the world to break away from dependence on coal and oil - or else to capture the CO2 that power stations emit. To quote Al Gore, 'We must not leap from denial to despair. We can do something and we must.'

The prognosis is indeed uncertain, but what should weigh most heavily - and motivate policy-makers most strongly - is the 'worst case' end of the range of predictions: a 'runaway' process that would render much of the Earth uninhabitable.

Our global society confronts other 'threats without enemies', apart from (although linked with) climate change. High among them is the threat to biological diversity. There have been five great extinctions in the geological past. Humans are now causing a sixth. The extinction rate is 1000 times higher than normal and is increasing. We are destroying the book of life before we have read it. There are probably upwards of 10 million species, most not even recorded - mainly insects, plants and bacteria.

Biodiversity is often proclaimed as a crucial component of human well-being. Manifestly it is: we are clearly harmed if fish stocks dwindle to extinction; there are plants in the rain forest whose gene pool might be useful to us. But for many of us these 'instrumental' - and anthropocentric - arguments are not the only compelling ones. Preserving the richness of our biosphere has value in its own right, over and above what it means to us humans.

But we face another novel set of vulnerabilities. These stem not from our collective impact but from the greater empowerment of individuals or small groups by twenty-first century technology.

The new techniques of synthetic biology could permit inexpensive synthesis of lethal biological weapons - on purpose, or even by mistake. Not even an organized network would be required: just a fanatic or a weirdo with the mindset of those who now design computer viruses - the mindset of an arsonist. Bio (and cyber) expertise will be accessible to millions. In our networked world, the impact of any runaway disaster could quickly become global.

Individuals will soon have far greater 'leverage' than present-day terrorists possess. Can our interconnected society be safeguarded against error or terror without having to sacrifice its diversity and individualism? This is a stark question, but I think it is a serious one.

We are kidding ourselves if we think that technical education leads to balanced rationality: it can be combined with fanaticism - not just the traditional fundamentalism that we are so mindful of today, but new age irrationalities too. There are disquieting portents - for instance, the Raelians (who claim to be cloning humans) and the Heaven's Gate cult (who committed collective suicide in hopes that a space-ship would take them to a 'higher sphere'). Such cults claim to be 'scientific' but have a precarious foothold in reality. And there are extreme eco-freaks who believe that the world would be better off if it were rid of humans. Can the global village cope with its village idiots - especially when even one could be too many?

These concerns are not remotely futuristic - we will surely confront them within the next 10-20 years. But what of the later decades of this century? It is hard to predict because some technologies could develop with runaway speed. Moreover, human character and physique themselves will soon be malleable, to an extent that is qualitatively new in our history. New drugs (and perhaps even implants into our brains) could change human character; the cyberworld has potential that is both exhilarating and frightening.

We cannot confidently guess lifestyles, attitudes, social structures or population sizes a century hence. Indeed, it is not even clear how much longer our descendants would remain distinctively 'human'. Darwin himself noted that 'not one living species will transmit its unaltered likeness to a distant futurity'. Our own species will surely change and diversify faster than any predecessor - via human-induced modifications (whether intelligently controlled or unintended), not by natural selection alone. The post-human era may be only centuries away. And what about Artificial Intelligence? A super-intelligent machine could be the last invention that humans need ever make. We should keep our minds open, or at least ajar, to concepts that seem on the fringe of science fiction.

These thoughts might seem irrelevant to practical policy - something for speculative academics to discuss in our spare moments. I used to think this. But humans are now, individually and collectively, so greatly empowered by rapidly changing technology that we can - by design or as unintended consequences - engender irreversible global changes. It is surely irresponsible not to ponder what this could mean; and it is real political progress that the challenges stemming from new technologies are higher on the international agenda and that planners seriously address what might happen more than a century hence.

We cannot reap the benefits of science without accepting some risks - that has always been the case. Every new technology is risky in its pioneering stages. But there is now an important difference from the past. Most of the risks encountered in developing 'old' technology were localized: when, in the early days of steam, a boiler exploded, it was horrible, but there was an 'upper bound' to just how horrible. In our evermore interconnected world, however, there are new risks whose consequences could be global. Even a tiny probability of global catastrophe is deeply disquieting.

We cannot eliminate all threats to our civilization (even to the survival of our entire species). But it is surely incumbent on us to think the unthinkable and study how to apply twenty-first century technology optimally, while minimizing the 'downsides'. If we apply to catastrophic risks the same prudent analysis that leads us to take everyday safety precautions, and sometimes to buy insurance - multiplying probability by consequences - we would surely conclude that some of the scenarios discussed in this book deserve more attention than they have received.

My background as a cosmologist, incidentally, offers an extra perspective - an extra motive for concern - with which I will briefly conclude.

The stupendous time spans of the evolutionary past are now part of common culture - except among some creationists and fundamentalists. But most educated people, even if they are fully aware that our emergence took billions of years, somehow think we humans are the culmination of the evolutionary tree. That is not so. Our Sun is less than halfway through its life. It is slowly brightening, but Earth will remain habitable for another billion years. However, even in that cosmic time perspective - extending far into the future as well as into the past - the twenty-first century may be a defining moment. It is the first in our planet's history where one species - ours - has Earth's future in its hands and could jeopardise not only itself but also life's immense potential.

The decisions that we make, individually and collectively, will determine whether the outcomes of twenty-first century sciences are benign or devastating. We need to contend not only with threats to our environment but also with an entirely novel category of risks - with seemingly low probability, but with such colossal consequences that they merit far more attention than they have hitherto had. That is why we should welcome this fascinating and provocative book. The editors have brought together a distinguished set of authors with formidably wide-ranging expertise. The issues and arguments presented here should attract a wide readership - and deserve special attention from scientists, policy-makers and ethicists.

Martin J. Rees


Acknowledgements v

Foreword vii

Martin J. Rees

1 Introduction 1

Nick Bostrom and Milan M. Cirkovic

1.2 Taxonomy and organization 2

1.3 Part I: Background 7

1.4 Part II: Risks from nature 13

1.5 Part III: Risks from unintended consequences 15

1.6 Part IV: Risks from hostile acts 20

1.7 Conclusions and future directions 27

Part I Background 31

2 Long-term astrophysical processes 33

Fred C Adams

2.1 Introduction: physical eschatology 33

2.2 Fate of the Earth 34

2.3 Isolation of the local group 36

2.4 Collision with Andromeda 36

2.5 The end of stellar evolution 38

2.6 The era of degenerate remnants 39

2.7 The era of black holes 41

2.8 The Dark Era and beyond 41

2.9 Life and information processing 43

2.10 Conclusion 44

Suggestions for further reading 45

References 45

3 Evolution theory and the future of humanity 48

Christopher Wills

3.1 Introduction 48

3.2 The causes of evolutionary change 49

3.3 Environmental changes and evolutionary changes 50

3.3.1 Extreme evolutionary changes 51

3.3.2 Ongoing evolutionary changes 53

3.3.3 Changes in the cultural environment 56

3.4 Ongoing human evolution 61

3.4.1 Behavioural evolution 61

3.4.2 The future of genetic engineering 63

3.4.3 The evolution of other species, including those on which we depend 64

3.5 Future evolutionary directions 65

3.5.1 Drastic and rapid climate change without changes in human behaviour 66

3.5.2 Drastic but slower environmental change accompanied by changes in human behaviour 66

3.5.3 Colonization of new environments by our species 67

Suggestions for further reading 68

References 69

4 Millennial tendencies in responses to apocalyptic threats 73

James J. Hughes

4.1 Introduction 73

4.2 Types of millennialism 74

4.2.1 Premillennialism 74

4.2.2 Amillennialism 75

4.2.3 Post-millennialism 76

4.3 Messianism and millenarianism 77

4.4 Positive or negative teleologies: utopianism and apocalypticism 77

4.5 Contemporary techno-millennialism 79

4.5.1 The singularity and techno-millennialism 79

4.6 Techno-apocalypticism 81

4.7 Symptoms of dysfunctional millennialism in assessing future scenarios 83

4.8 Conclusions 85

Suggestions for further reading 86

References 86

5 Cognitive biases potentially affecting judgement of global risks 91

Eliezer Yudkowsky

5.1 Introduction 91

5.2 Availability 92

5.3 Hindsight bias 93

5.4 Black Swans 94

5.5 The conjunction fallacy 95

5.6 Confirmation bias 98

5.7 Anchoring, adjustment, and contamination 101

5.8 The affect heuristic 104

5.9 Scope neglect 105

5.10 Calibration and overconfidence 107

5.11 Bystander apathy 109

5.12 A final caution 111

5.13 Conclusion 112

Suggestions for further reading 115

References 115

6 Observation selection effects and global catastrophic risks 120

Milan M. Cirkovic

6.1 Introduction: anthropic reasoning and global risks 120

6.2 Past-future asymmetry and risk inferences 121

6.2.1 A simplified model 122

6.2.2 Anthropic overconfidence bias 124

6.2.3 Applicability class of risks 126

6.2.4 Additional astrobiological information 128

6.3 Doomsday Argument 129

6.4 Fermi's paradox 131

6.4.1 Fermi's paradox and GCRs 134

6.4.2 Risks following from the presence of extraterrestrial intelligence 135

6.5 The Simulation Argument 138

6.6 Making progress in studying observation selection effects 140

Suggestions for further reading 141

References 141

7 Systems-based risk analysis 146

Yacov Y. Haimes

7.1 Introduction 146

7.2 Risk to interdependent infrastructure and sectors of the economy 148

7.3 Hierarchical holographic modelling and the theory of scenario structuring 150

7.3.1 Philosophy and methodology of hierarchical holographic modelling 150

7.3.2 The definition of risk 151

7.3.3 Historical perspectives 151

7.4 Phantom system models for risk management of emergent multi-scale systems 153

7.5 Risk of extreme and catastrophic events 155

7.5.1 The limitations of the expected value of risk 155

7.5.2 The partitioned multi-objective risk method 156

7.5.3 Risk versus reliability analysis 159

Suggestions for further reading 162

References 162

8 Catastrophes and insurance 164

Peter Taylor

8.1 Introduction 164

8.2 Catastrophes 166

8.3 What the business world thinks 168

8.4 Insurance 169

8.5 Pricing the risk 172

8.6 Catastrophe loss models 173

8.7 What is risk? 176

8.8 Price and probability 179

8.9 The age of uncertainty 179

8.10 New techniques 180

8.10.1 Qualitative risk assessment 180

8.10.2 Complexity science 181

8.10.3 Extreme value statistics 181

8.11 Conclusion: against the gods? 181

Suggestions for further reading 182

References 182

9 Public policy towards catastrophe 184

Richard A. Posner

References 200

Part II Risks from nature 203

10 Super-volcanism and other geophysical processes of catastrophic import 205

Michael R. Rampino

10.1 Introduction 205

10.2 Atmospheric impact of a super-eruption 206

10.3 Volcanic winter 207

10.4 Possible environmental effects of a super-eruption 209

10.5 Super-eruptions and human population 211

10.6 Frequency of super-eruptions 212

10.7 Effects of super-eruptions on civilization 213

10.8 Super-eruptions and life in the universe 214

Suggestions for further reading 216

References 216

11 Hazards from comets and asteroids 222

William Napier

11.1 Something like a huge mountain 222

11.2 How often are we struck? 223

11.2.1 Impact craters 223

11.2.2 Near-Earth object searches 226

11.2.3 Dynamical analysis 226

11.3 The effects of impact 229

11.4 The role of dust 231

11.5 Ground truth? 233

11.6 Uncertainties 234

Suggestions for further reading 235

References 235

12 Influence of supernovae, gamma-ray bursts, solar flares, and cosmic rays on the terrestrial environment 238

Arnon Dar

Radiation threats

Credible threats 238

Solar flares

Solar activity and global warming

Solar extinction 245

Radiation from supernova explosions

Gamma-ray bursts 246

Cosmic ray threats

Earth magnetic field reversals

Solar activity, cosmic rays, and global warming 250

Passage through the Galactic spiral arms

Cosmic rays from nearby supernovae

Cosmic rays from gamma-ray bursts

Origin of the major mass extinctions

The Fermi paradox and mass extinctions



Part III Risks from unintended consequences 263

13 Climate change and global risk 265

David Frame and Myles R. Allen

13.1 Introduction 265

13.2 Modelling climate change 266

13.3 A simple model of climate change 267

13.3.1 Solar forcing 268

13.3.2 Volcanic forcing 269

13.3.3 Anthropogenic forcing 271

13.4 Limits to current knowledge 273

13.5 Defining dangerous climate change 276

13.6 Regional climate risk under anthropogenic change 278

13.7 Climate risk and mitigation policy 279

13.8 Discussion and conclusions 281

Suggestions for further reading 282

References 283

14 Plagues and pandemics: past, present, and future 287

Edwin Dennis Kilbourne

14.1 Introduction 287

14.2 The baseline: the chronic and persisting burden of infectious disease 287

14.3 The causation of pandemics 289

14.4 The nature and source of the parasites 289

14.5 Modes of microbial and viral transmission 290

14.6 Nature of the disease impact: high morbidity, high mortality, or both 291

14.7 Environmental factors 292

14.8 Human behaviour 293

14.9 Infectious diseases as contributors to other natural catastrophes 293

14.10 Past plagues and pandemics and their impact on history 294

14.11 Plagues of historical note 295

14.11.1 Bubonic plague: the Black Death 295

14.11.2 Cholera 295

14.11.3 Malaria 296

14.11.4 Smallpox 296

14.11.5 Tuberculosis 297

14.11.6 Syphilis as a paradigm of sexually transmitted infections 297

14.11.7 Influenza 298

14.12 Contemporary plagues and pandemics 298

14.12.1 HIV/AIDS 298

14.12.2 Influenza 299

14.12.3 HIV and tuberculosis: the double impact of new and ancient threats 299

14.13 Plagues and pandemics of the future 300

14.13.1 Microbes that threaten without infection: the microbial toxins 300

14.13.2 Iatrogenic diseases 300

14.13.3 The homogenization of peoples and cultures 301

14.13.4 Man-made viruses 302

14.14 Discussion and conclusions 302

Suggestions for further reading 304

References 304

15 Artificial Intelligence as a positive and negative factor in global risk 308

Eliezer Yudkowsky

15.1 Introduction 308

15.2 Anthropomorphic bias 308

15.3 Prediction and design 311

15.4 Underestimating the power of intelligence 313

15.5 Capability and motive 314

15.5.1 Optimization processes 315

15.5.2 Aiming at the target 316

15.6 Friendly Artificial Intelligence 317

15.7 Technical failure and philosophical failure 318

15.7.1 An example of philosophical failure 319

15.7.2 An example of technical failure 320

15.8 Rates of intelligence increase 323

15.9 Hardware 328

15.10 Threats and promises 329

15.11 Local and majoritarian strategies 333

15.12 Interactions of Artificial Intelligence with other technologies 337

15.13 Making progress on Friendly Artificial Intelligence 338

15.14 Conclusion 341

References 343

16 Big troubles, imagined and real 346

Frank Wilczek

16.1 Why look for trouble? 346

16.2 Looking before leaping 347

16.2.1 Accelerator disasters 347

16.2.2 Runaway technologies 357

16.3 Preparing to Prepare 358

16.4 Wondering 359

Suggestions for further reading 361

References 361

17 Catastrophe, social collapse, and human extinction 363

Robin Hanson

17.1 Introduction 363

17.2 What is society? 363

17.3 Social growth 364

17.4 Social collapse 366

17.5 The distribution of disaster 367

17.6 Existential disasters 369

17.7 Disaster policy 372

17.8 Conclusion 375

References 376

Part IV Risks from hostile acts 379

18 The continuing threat of nuclear war 381

Joseph Cirincione

18.1 Introduction 381

18.1.1 US nuclear forces 384

18.1.2 Russian nuclear forces 385

18.2 Calculating Armageddon 386

18.2.1 Limited war 386

18.2.2 Global war 388

18.2.3 Regional war 390

18.2.4 Nuclear winter 390

18.3 The current nuclear balance 392

18.4 The good news about proliferation 396

18.5 A comprehensive approach 397

18.6 Conclusion 399

Suggestions for further reading 401

19 Catastrophic nuclear terrorism: a preventable peril 402

Gary Ackerman and William C. Potter

19.1 Introduction 402

19.2 Historical recognition of the risk of nuclear terrorism 403

19.3 Motivations and capabilities for nuclear terrorism 406

19.3.1 Motivations: the demand side of nuclear terrorism 406

19.3.2 The supply side of nuclear terrorism 411

19.4 Probabilities of occurrence 416

19.4.1 The demand side: who wants nuclear weapons? 416

19.4.2 The supply side: how far have terrorists progressed? 419

19.4.3 What is the probability that terrorists will acquire nuclear explosive capabilities in the future? 422

19.4.4 Could terrorists precipitate a nuclear holocaust by non-nuclear means? 426

19.5 Consequences of nuclear terrorism 427

19.5.1 Physical and economic consequences 427

19.5.2 Psychological, social, and political consequences 429

19.6 Risk assessment and risk reduction 432

19.6.1 The risk of global catastrophe 432

19.6.2 Risk reduction 436

19.7 Recommendations 437

19.7.1 Immediate priorities 437

19.7.2 Long-term priorities 440

19.8 Conclusion 441

Suggestions for further reading 442

References 442

20 Biotechnology and biosecurity 450

Ali Nouri and Christopher F. Chyba

20.1 Introduction 450

20.2 Biological weapons and risks 453

20.3 Biological weapons are distinct from other so-called weapons of mass destruction 454

20.4 Benefits come with risks 455

20.5 Biotechnology risks go beyond traditional virology, micro- and molecular biology 458

20.6 Addressing biotechnology risks 460

20.6.1 Oversight of research 460

20.6.2 'Soft' oversight 462

20.6.3 Multi-stakeholder partnerships for addressing biotechnology risks 462

20.6.4 A risk management framework for de novo DNA synthesis technologies 463

20.6.5 From voluntary codes of conduct to international regulations 464

20.6.6 Biotechnology risks go beyond creating novel pathogens 464

20.6.7 Spread of biotechnology may enhance biological security 465

20.7 Catastrophic biological attacks 466

20.8 Strengthening disease surveillance and response 469

20.8.1 Surveillance and detection 469

20.8.2 Collaboration and communication are essential for managing outbreaks 470

20.8.3 Mobilization of the public health sector 471


20.8.4 Containment of the disease outbreak 472


20.8.5 Research, vaccines, and drug development are essential components of a defence strategy 473

20.8.6 Biological security requires fostering collaborations 473

20.9 Towards a biologically secure future 474

Suggestions for further reading 475

References 476

21 Nanotechnology as global catastrophic risk 481

Chris Phoenix and Mike Treder

21.1 Nanoscale technologies 482

21.1.1 Necessary simplicity of products 482

21.1.2 Risks associated with nanoscale technologies 483

21.2 Molecular manufacturing 484

21.2.1 Products of molecular manufacturing 486

21.2.2 Nano-built weaponry 487

21.2.3 Global catastrophic risks 488

21.3 Mitigation of molecular manufacturing risks 496

21.4 Discussion and conclusion 498

Suggestions for further reading 499

References 502

22 The totalitarian threat 504

Bryan Caplan

22.1 Totalitarianism: what happened and why it (mostly) ended 504

22.2 Stable totalitarianism 506

22.3 Risk factors for stable totalitarianism 510

22.3.1 Technology 511

22.3.2 Politics 512

22.4 Totalitarian risk management 514

22.4.1 Technology 514

22.4.2 Politics 515

Suggestions for further reading 518

References 518

Authors' biographies 520

Index 531
