Threats To The Survival Of The Human Race

Estimating the probability that the human race will soon become extinct has become quite a popular activity. Many writers have considered such things as the dangers of nuclear war or of pollution. This book will make few claims to expertise about the details of such highly complex matters. What it will claim instead is that even non-experts can see that the risks aren't negligible. In view of how much is at stake, we have no right to disregard them.2 Besides, even if the 'total risk' (obtained by combining the individual risks) appeared to be fairly small, Carter's doomsday argument could suggest that it should be re-evaluated as large. To get it to look small once more, we should then need to make vigorous risk-reduction efforts.

All the same, the book will in due course settle down to some fairly detailed discussion of risks, particularly those which our efforts might reduce. For the moment let us simply list a large variety of them, with a few quick comments.

Risks already well recognized

1 Nuclear war. Knowledge of how to build nuclear bombs cannot be eradicated. Small nations, terrorists and rich criminals wanting to become still richer by holding the world to ransom can already afford very destructive bombs. Production costs are falling and the world has many multi-billionaires. The effects of large-scale nuclear destruction are largely unknown. Radiation poisoning of the entire globe? 'Nuclear winter' in which dust and soot block sunlight, so that temperatures everywhere fall very sharply? Death of trees and grasses? Of oceanic plankton?

2 Biological warfare or terrorism or criminality. Biological weapons could actually be more dangerous than nuclear ones: less costly, and with a field of destruction harder to limit because the weapons would be self-reproducing organisms.

3 Chemical warfare or terrorism or criminality.

4 Destruction of the ozone layer by chlorofluorocarbons or other things. Massive increase in the amount of ultraviolet light reaching the Earth's surface. Cancer runs riot? Death of trees, grasses, plankton?

5 'Greenhouse effect': a rise in Earth's surface temperature because incoming radiation is less easily re-radiated into space, owing to build-up of atmospheric carbon dioxide, methane and other gases. The effect might conceivably be a runaway one because of positive feedback: for example, frozen arctic soils melt and become wetlands, emitting much carbon dioxide and methane and so helping to melt more soils, which leads to still greater emissions. (A toy numerical sketch of such feedback appears at the end of this list.) After an increase—usually thought very unlikely—to a carbon dioxide level of 1 per cent, Earth could soon become rather like its neighbour Venus. On Venus, greenhouse-effect temperatures are sufficient to melt lead. On Earth they might approach the boiling point of water.

6 Poisoning by pollution. Already widespread, for instance in the form of acid rain, which can eat holes in clothing. Hundreds of new chemicals enter the environment each year. Their effects are often hard to predict. Who would have thought that the insecticide DDT would need to be banned or that spraying deodorant at your armpits could help destroy the ozone layer? Pollution could particularly affect sperm or produce cancers, from which many lake fish already suffer. Once again there is the danger of positive feedback: the rotting of a poisoned environment generates more poisons. And, at least in the short term, severe pollution seems almost inevitable when uncontrolled population growth is combined with demands for an acceptable standard of living.

7 Disease. As was shown by the Black Death of the Middle Ages, diseases can wipe out very large proportions of those exposed to them. They can now spread worldwide very quickly, thanks to air travel. Many remain incurable. Tuberculosis, already killing about three million people annually, has recently developed strains resistant to all known drugs, and antibiotics are useless against viral diseases.
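
The runaway worry in item 5 above is at bottom a claim about feedback gain. Here is a minimal toy sketch of the arithmetic, in Python; the 'gain' figures are assumptions chosen purely for illustration, not estimates from climate science:

    # Toy feedback model (figures purely illustrative): each round of
    # warming triggers extra emissions, which in turn cause 'gain' times
    # as much new warming as the round before.
    def total_warming(gain, first_increment=1.0, rounds=30):
        total, increment = 0.0, first_increment
        for _ in range(rounds):
            total += increment
            increment *= gain
        return total

    for gain in (0.5, 0.9, 1.1):
        print(f"gain {gain}: {total_warming(gain):.1f} units after 30 rounds")
    # gain below 1: the totals converge (about 2.0 and 9.6 units);
    # gain of 1.1: the total keeps growing without limit, the runaway case.

Whether feedback merely amplifies warming or runs away thus depends entirely on whether each round of warming regenerates more than itself.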

Risks often unrecognized

Group 1: Natural disasters

1 Volcanic eruptions. Sometimes blamed for the death of the dinosaurs. Eruption clouds might produce 'volcanic winter' instead of warfare's 'nuclear winter'.

2 Hits by asteroids and comets. The death of the dinosaurs was very probably caused by an asteroid. You may be far more likely to be killed by a continent-destroying impact than to win a major lottery: your chances of dying like this have been estimated as 1 in 20,000. (A back-of-envelope check of such a figure appears at the end of this list.) If there are many life-bearing planets in the universe, perhaps most of them suffer disastrous impacts before intelligent living beings can evolve on them.

3 An extreme ice age due to passage through an interstellar cloud? Not likely in the next few hundred thousand years, although changes to the 'solar wind' of charged particles might have drastic climatic effects even at cloud densities far too small to produce much direct reduction of the sunlight reaching Earth's surface.3

4 A nearby supernova—a stellar explosion perhaps equivalent to that of a one-hundred-thousand-trillion-trillion-megaton H-bomb.

5 Other massive astronomical explosions produced when black holes complete their evaporation (a phenomenon discovered by Stephen Hawking's theoretical studies) or by the merger of two black holes or two neutron stars, or of a black hole and a neutron star.

6 Essentially unpredictable breakdown of a complex system, as investigated by 'chaos theory'. The system in question might be Earth's biosphere: its air, its soil, its water and its living things interact in highly intricate ways. On a very long timescale it might be the solar system itself, because planetary motions could be chaotic.4

7 Something-we-know-not-what. It would be foolish to think we had foreseen all possible natural disasters.
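
The 1-in-20,000 figure quoted in item 2 above can be assembled from just two numbers. The following back-of-envelope sketch uses round figures that are my own assumptions, merely to show the shape of such a calculation:

    # Back-of-envelope lifetime risk from a giant impact (assumed figures).
    impact_interval_years = 400_000  # assumed: one continent-destroying hit
                                     # every 400,000 years on average
    fraction_killed = 0.25           # assumed: such a hit kills a quarter of us
    lifetime_years = 70              # assumed human lifespan

    p_death = (lifetime_years / impact_interval_years) * fraction_killed
    print(f"about 1 in {1 / p_death:,.0f}")  # -> about 1 in 22,857
    # The same order of magnitude as the quoted 1 in 20,000, and far
    # shorter odds than those of winning a major lottery.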

Group 2: Man-made disasters

1 Unwillingness to rear children? Although sometimes mentioned as a danger to the human race, it may be hard to take seriously. If only ten thousand people wanted children, their descendants could soon crowd the globe. Still, some of the rich nations are experiencing population shrinkage at present.

2 A disaster from genetic engineering. Perhaps a 'green scum' disaster in which a genetically engineered organism reproduces itself with immense efficiency, smothering everything? Or one involving organisms which invade the human body? On November 2, 1993, Toronto's The Globe and Mail reported on its front page—but without any mention of possible accidents—genetic alteration of salmonella bacteria at Washington University in St Louis so as to cause 'a harmless, temporary infection in the intestine that triggers antibodies against genetic components of sperm that have been spliced into the bacteria', making the recipient woman infertile. A single dose of this 'birth control vaccine', taken orally, 'might prevent conception for several months or longer'. The effect 'would be reversible': 'you don't get your booster, and within a year or so you can conceive again', the Washington University researcher was reported as saying. But what if one did get one's booster—or one's first dose—by being infected by other people? After all, salmonella bacteria are a major source of infection globally. Perhaps the original genetically altered bacteria couldn't cause this kind of problem, but what if they underwent evolutionary change? And what if any major proportion of the world's women then became permanently infected, through constantly reinfecting one another?

3 A disaster from nanotechnology. Very tiny self-reproducing machines—they could be developed fairly soon through research inspired by Richard Feynman—might perhaps spread worldwide within a month in a 'gray goo' calamity. (Simple doubling arithmetic, sketched at the end of this list, shows why such speed is conceivable.)

4 Disasters associated with computers. Computer-initiated nuclear war is the one most often discussed, but there might instead be breakdown of a computer network which had become vital to humanity's day-to-day survival. And, very speculatively, several writers have described computers replacing us, either (a) as an unintended result of competition between nation-states whose methods of production had become more and more computer-controlled; or (b) again unintendedly, after the task of designing computers had been given to computers themselves; or finally (c) through deliberate planning by scientists who viewed the life and intelligence of advanced computers as superior—possibly because death could be delayed for indefinitely long—to the life and intelligence of humans. (Whether the third of these possibilities would be 'a disaster' would depend, of course, on whether those scientists were correct. Whether it should count as 'the extinction of humankind' might itself be controversial if advanced computers inherited many human characteristics, maybe after an initial period during which brains and computers worked in close association.)

5 Some other disaster in a branch of technology, perhaps simply agriculture, which had become crucial to human survival. Modern agriculture is dangerously dependent on polluting fertilizers and pesticides, and on progressively fewer genetic varieties. Chaos theory warns us that any very complicated system, and in particular a system involving new technologies interacting in a complex manner, might break down in an essentially unpredictable fashion. Blackouts—failures of electrical power—and communication system failures in large regions of the United States have helped to illustrate the point.

6 Production of a new Big Bang in the laboratory? Physicists have investigated this possibility. It is commonly claimed that about twenty kilograms of matter—or its equivalent in energy—would need to be compressed into an impracticably small volume, but the cosmologist Andrei Linde has written to me that the correct figure is instead a hundred thousandth of a gram. Still, the compression would indeed have to be tremendous, and a Bang engineered in this fashion would very probably expand into a space of its own. To us, what we had produced would then look like nothing but a tiny black hole.

7 The possibility of producing an all-destroying phase transition, comparable to turning water into ice, could be much graver. In 1984, Edward Farhi and Robert Jaffe suggested that physicists might produce 'strange-quark matter' of a kind which attracted ordinary matter, changing it into more of itself until the entire Earth had been converted ('eaten'). It is thought, however, that strange-quark matter would instead repel ordinary matter. In contrast, there might be a very real vacuum metastability danger associated with experiments at extremely high energies. The space in which we live may be in a 'false vacuum' state, filled with a force field—technically speaking, it would be a scalar field—which is like a statue balancing upright: stable against small jolts but upsettable by large ones. If the jolt of a high-energy experiment produced a bubble of 'true vacuum', this would then expand at nearly the speed of light, destroying everything, rather as when a tiny ice crystal changes a large volume of supercooled water into more ice crystals. We might be safe only so long as our experiments kept below the energies already reached by colliding cosmic rays. Many people think such energies will never be attained by us. But David Schramm and Leon Lederman, Nobel-prizewinning former Director of the Fermi National Accelerator Laboratory near Chicago, wrote in 1989 that we might reach them as early as the year 2100 with radically new technology.

8 Annihilation by extraterrestrials, either deliberately or through the kind of vacuum metastability disaster discussed a moment ago, a disaster produced by their high-energy experiments? We haven't yet discovered a single extraterrestrial, but our searches to date have been so primitive that we'd have had no great chance of finding a civilization like Earth's, not even if it existed among the nearest stars. Still, in attempting to explain the 'Great Silence', the failure to detect broadcasts even from the enormously powerful transmitters of very advanced civilizations, several scientists have suggested that everyone is listening and nobody broadcasting, for fear of attracting hostile attention. Extraterrestrials might view us as a threat—for instance if there seemed a risk that we'd be the ones producing the vacuum metastability disaster. They might first become aware of our presence through detecting the impulses transmitted by the Arecibo radio telescope, which will reach sixty million stars during the next four thousand years, or those of our military early-warning radars. Perhaps, though, extraterrestrial intelligence evolves only very seldom, almost always then destroying itself quickly. (Note that there simply cannot have been, right out to as far as our telescopes can probe, any beings whose high-energy experiments had upset a metastable vacuum. Not, at any rate, unless this was so recently that light rays hadn't yet had time to carry the news to us, because all life would end when the news reached it.)

9 Something-we-know-not-what. We cannot possibly imagine every single danger which technological advances might bring with them.
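
The month-long timescale mentioned for 'gray goo' in item 3 above rests on nothing more exotic than repeated doubling. A simple sketch, every figure in it an assumption chosen only for illustration:

    import math

    # Doubling arithmetic for a hypothetical replicator (assumed figures).
    replicator_grams = 1e-12   # assumed: a picogram per machine
    biosphere_grams = 1e18     # rough order of magnitude of Earth's biomass
    hours_per_doubling = 1.0   # assumed time for a machine to copy itself

    doublings = math.log2(biosphere_grams / replicator_grams)
    days = doublings * hours_per_doubling / 24
    print(f"{doublings:.0f} doublings, about {days:.0f} days")  # ~100, ~4
    # Four days of unchecked hourly doubling would outweigh the biosphere,
    # so a month is, if anything, a generous allowance.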

Obviously many of the above-listed dangers are ones for which there is nothing like firm evidence. On the other hand, we can also lack firm evidence that they are absent. With respect to most of these matters we are just groping in the dark.

Risks from philosophy

Various risks in this category will be discussed in chapters 4 and 7.

1 Threats associated with religions can often count as 'threats based on philosophy', although sometimes it is very poor philosophy. It could be dangerous, for example, to choose as Secretary for the Environment some politician convinced that, no matter what anyone did, the world would end soon with a Day of Judgement. It could be just as bad to choose somebody who felt that God would keep the world safe for us for ever, or else would create any number of other worlds to replace ours if we destroyed it. This isn't an outright attack on religious world-models or on the idea that there exist numerous other worlds, otherwise called 'universes'. My Value and Existence was a lengthy defence of a neoplatonic picture of God as an abstract creative force, or perhaps a world-creating person whose own reason for existence was neoplatonic. Such a person would then exist because he ought to exist, i.e. because the divine existence possessed what can be called creative ethical requiredness. Alternatively, it would be the world itself which possessed such requiredness. Universes and other writings of mine5 defended these ideas once again, together with the notion that there exist many worlds, perhaps through divine action or perhaps because of blind physical mechanisms. Still, theories about God or about multiple worlds cannot be known to be right. We can't be at all sure that there would be other worlds to compensate for the mess we had made of ours.

2 Schopenhauerian pessimism. In attacking religion, many writers put such emphasis on the Problem of Evil—the existence of poisonous snakes, earthquakes, plagues, cancer, Nazi death camps, and so on—that they in effect agree with Schopenhauer, who wrote that it would have been better if our planet had remained like the moon, a lifeless mass. It is then only a short step to thinking that we ought to make it lifeless.

3 Ethical relativism, emotivism, prescriptivism and other doctrines deny that anything is really worth struggling for, in the sense of 'really' in which two and two really make four or in which Africa really is larger than Iceland. (a) Relativism maintains that, for example, burning people alive for fun is only bad relative to particular moral codes, somewhat as putting out your tongue is a polite greeting in Tibet but rude elsewhere. (b) Emotivism holds that to call burning people 'really bad' describes no fact about the practice of people-burning. Instead, it merely expresses real disgust. (c) Prescriptivism again agrees that it describes no fact. 'It's a fact that burning people is bad' just means 'I hereby prescribe that nobody is to burn anybody.' (d) A popular doctrine, recently, has been that the feeling of duty not to burn people results from 'internalizing' a system of socially prescribed rules. (Think of the Englishman who changes into a dinner-jacket for a solitary meal in the jungle.) Typically, the rules in such a system are ones which you hope others will obey during dealings with yourself. If you don't genuinely internalize the system, making it control your behaviour through actual preference, then your fellow humans will detect your lack of enthusiasm for it, shunning you. Once again it is standardly denied that there is a fact of the matter, 'out there in the world' regardless of whether anyone could prove it, that burning people really is wrong.

Might these be only caricatures of various philosophical doctrines? I am afraid not. They are the actual doctrines, very widely defended. True, their defenders are often enthusiastically kind individuals. (There is no psychological rule stating that people can be enthusiastic about a way of behaving only if they think it really good as a matter of fact, fact as much 'out there in the world' as the fact that two and two make four.) Yet if you accepted any of these doctrines, then it could be hard to say why you should endure mental or physical anguish, possibly resisting torture or great temptation, in order to remain a kind individual. For if you sent ten million people to their deaths to make your torments stop, what would be really wrong in that? Really wrong in the ordinary sense, instead of in such senses as 'being really of a sort whose avoidance I hereby prescribe'? Notice that the 'contractarian' position which stresses tit-for-tat internalization—you internalize such and such a code of behaviour in your dealings with me, and I'll do the same in my dealings with you—could seem in trouble when faced with a question asked by Robert Heilbroner,6 'What have people of the far future ever done to benefit me?'

4 'Negative utilitarianism' is concerned mainly or entirely with reducing evils rather than with maximizing goods. Now, there will be at least one miserable person per century, virtually inevitably, if the human race continues. It could seem noble to declare that such a person's sufferings 'shouldn't be used to buy happiness for others'—and to draw the conclusion that the moral thing would be to stop producing children. Much of the danger of this way of thinking may come from the impossibility of actually proving its wrongness.

5 Some philosophers attach ethical weight only to people who are already alive or whose births are more or less inevitable. So (a) it wouldn't be a duty to keep the human race in existence if having children were found troublesome, and (b) there would actually seem to be a moral need to let the race become extinct—because the duty not to produce miserable children couldn't now be counterbalanced by any duty to produce happy ones. 'Nobody', it is said, 'is being treated unfairly by being left unborn.'

6 Some philosophers speak of 'inalienable rights' which must always be respected, though this makes the heavens fall (the right, perhaps, for parents to have as many children as they want, regardless of whether overpopulation threatens to render the atmosphere unbreathable).

7 Prisoner's dilemma. Many seem to place too much confidence in a particular way of treating this dilemma, which you can meet when asking yourself whether to rely on somebody else's co-operation. (On the brink of nuclear war, for example, two nations may need to trust each other to remain inactive instead of striking first.) It has become fairly standard among philosophers to say that the advantages of uncooperative behaviour should always dominate the reasoning of anyone who had no inclination to be self-sacrificing. (The payoff structure behind this reasoning is sketched after this list.)

8 'Avenging justice' or 'Rational consistency'. Some philosophers argue that carrying out a threat of revenge, for instance for a nuclear attack, could be appropriate regardless of whether anyone could benefit from it.
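
For readers unfamiliar with the dilemma of item 7, here is its standard payoff structure, with hypothetical numbers of my own; only their ordering matters:

    # Classic prisoner's dilemma payoffs (numbers illustrative).
    # Each entry: (my payoff, their payoff); higher is better.
    payoffs = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }

    for theirs in ("cooperate", "defect"):
        mine = {me: payoffs[(me, theirs)][0] for me in ("cooperate", "defect")}
        print(f"if they {theirs}: {mine}")
    # Whatever the other side does, defecting pays more, so self-interested
    # reasoning drives both parties to mutual defection (1, 1) even though
    # mutual cooperation (3, 3) would leave each better off.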

It is sometimes suggested that the annihilation of all life on Earth would be no great tragedy. Other intelligent beings would soon evolve somewhere, it is said, and these would then spread throughout the galaxy. But this overlooks the fact that we have precious little idea of how often intelligent life could be expected to evolve, even on ideally suitable planets. Perhaps not just the Milky Way but the entire universe would be for ever afterwards lifeless, if life on Earth came to an end soon.

It can seem unlikely that our galaxy already contains many technological civilizations, for, as Enrico Fermi noted, we could in that case have expected definite signs of their presence, if not through their radio signals then through Earth's actually being colonized by them. After all, it could well be that in a few million years the human race will have colonized the entire galaxy, if it survives.7

Notice that observational selection effects might help to explain our survival to date. For instance, we couldn't observe that we were on a planet where disease or an asteroid impact had exterminated all intelligent life, even if such planets formed the vast majority of those on which such life had evolved. Intelligent living beings cannot find that they are in places devoid of living intelligence! Observational selection effects, of course, didn't cause us to be where we are. All the same, they could in a sense aid in accounting for it. They could make it unmysterious.

Now that we have seen what some of the risks might be, we can usefully return to Brandon Carter's 'doomsday argument' for thinking them more dangerous than we'd otherwise have thought. The doomsday argument, remember, is that we could hardly expect to be among the very earliest—among the first 0.01 per cent, for instance—of all humans who would ever have been born. On the other hand, it would be none too startling to be among something like the very last 10 per cent, which is where we'd be if the human race were to end shortly. (Of all humans who have yet been born, a fair proportion are alive at this very moment, thanks to recent population growth.) Now, suppose that you suddenly noticed all this. You should then be more inclined than before to forecast humankind's imminent extinction. This, at any rate, is what Carter is suggesting.
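
Carter's suggestion can be given a concrete Bayesian form. The sketch below uses figures that are mine rather than Carter's: an even prior between two stylized futures, and a birth rank of roughly sixty billion:

    # Bayesian form of the doomsday shift (all figures illustrative).
    my_birth_rank = 60e9            # assumed: humans born before me, roughly

    totals = {"doom soon": 200e9,   # assumed total humans if we end shortly
              "doom late": 200e12}  # assumed total if we long survive
    prior = {"doom soon": 0.5, "doom late": 0.5}

    # If N humans ever live, any particular birth rank has chance 1/N.
    likelihood = {h: (1 / n if my_birth_rank <= n else 0.0)
                  for h, n in totals.items()}

    evidence = sum(prior[h] * likelihood[h] for h in totals)
    posterior = {h: prior[h] * likelihood[h] / evidence for h in totals}
    print(posterior)  # doom soon: about 0.999; doom late: about 0.001
    # An early rank is a thousand times likelier on the smaller total, so
    # the even prior shifts overwhelmingly towards 'doom soon'.

On these assumptions, noticing one's birth rank does all the work; nothing about the risks themselves has changed, which is precisely why Carter treats the shift as a reason to re-evaluate them.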

Suppose that millions of intelligent races will evolve during the history of our universe. Of all intelligent beings, might not a fair proportion find themselves in rapidly growing races which were about to become extinct through poisoning their environments with new chemicals, developing new forms of warfare, or otherwise misusing the science which had made the rapid growth possible? Carter's reasoning provides us with additional grounds for taking such scenarios seriously.
