Introduction

How do human beings and their governments approach worst-case scenarios? Do they tend to neglect them or do they give them excessive weight? Whatever we actually do, how should we deal with unlikely risks of catastrophe?

In the aftermath of the attacks on 9/11, Vice President Dick Cheney set out what has become known as The One Percent Doctrine: "We have to deal with this new type of threat in a way we haven't yet defined . . . With a low-probability, high-impact event like this . . . if there's a one percent chance that Pakistani scientists are helping al Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response."1

For especially horrific outcomes, it is tempting to think that a 1 percent chance should be treated as a certainty. In so suggesting, Vice President Cheney took the same position as many people who are confronting a low probability of disaster. No less than environmentalists who focus on species loss, climate change, and genetic modification of food, Vice President Cheney urged that governments should identify, and attempt to prevent, the worst-case scenario. Indeed, another vice president, Al Gore, implied a related principle for climate change—because the risk of a terrible catastrophe is real, we ought to respond aggressively to it. Many environmentalists enthusiastically embrace the Precautionary Principle, which is specifically designed for situations in which we cannot know that harm will occur. According to the Precautionary Principle, threats to the environment need not be established with certainty. Even a small risk of a catastrophic or irreversible harm is enough to require a serious response.

But consider an obvious objection to this position. A 1 percent chance of a terrible outcome is a lot better than a certainty of a terrible outcome. In order to figure out what to do, you should multiply the probability of the outcome by its magnitude. If you face a 1 percent chance of losing $10,000, you should take fewer precautions than if you face a 90 percent chance of losing $10,000. Even with losses that do not involve money, and that are hard to turn into monetary equivalents, it is important to attend to both the probability of harm and the magnitude of harm. If you face a 1 percent chance of getting sick, you should act differently from how you would act if you faced a 90 percent chance of getting sick. People who are sensible, or even sane, do not treat a 1 percent risk of loss the same as a certainty of loss.
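
To put the comparison in the simplest arithmetic terms, the expected loss is the probability of the harm multiplied by its magnitude. Using the dollar figures above:

$$\text{expected loss} = p \times L, \qquad 0.01 \times \$10{,}000 = \$100, \qquad 0.90 \times \$10{,}000 = \$9{,}000.$$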

Suppose that you have a health problem of some kind—serious heart disease, a brain tumor, failing eyesight, severe and chronic back pain—and your doctor tells you that an operation is 99 percent likely to solve the problem and to have no bad side-effects. Will you decline the operation if the doctor emphasizes that in 1 percent of cases things go quite wrong? Probably not. Whatever you do, you are most unlikely to treat a small chance of a bad outcome as equivalent to a certainty of a bad outcome. You will focus not just on the nature of the worst case but on the probability that it will come about. Perhaps you will decide to create a special "margin of safety," or buffer zone, against the worst outcomes. But even if you do so, you will probably think a lot before deciding on the right margin of safety, and you will pay a great deal of attention to the probability of harm.

The same point holds for governments. For public officials no less than the rest of us, the probability of harm matters a great deal, and it is foolish to attend exclusively to the worst-case scenario. Suppose there is a 99 percent chance that a new law will increase national security, and a 1 percent chance that it will decrease national security; a 99 percent chance that a reform of the health care system will improve both health and the economy, and a 1 percent chance that reform will significantly increase unemployment; a 99 percent chance that a voucher system for education will make schools better, and a 1 percent chance that schools will get worse. If government initiatives are rejected whenever they entail a 1 percent chance of a bad outcome, we will have far too few initiatives. In many contexts, governments, just like ordinary people, take their chances on the worst-case scenario—and they are entirely right to do so.

These points create real problems for any one percent doctrine. Ordinarily it is a big mistake to ignore the difference between a 1 percent chance and a certainty. But pause over what it would mean if Al Qaeda were able to acquire nuclear weapons for use against the United States and its allies. For a truly catastrophic outcome, a 1 percent chance may not be so radically different from a much higher chance—and it is tempting to consider responding as if it were a certainty. To see the point, imagine a 1 percent chance that New York City, or the entire East Coast, would be completely destroyed. Or imagine a 1 percent risk of worldwide calamity from climate change—with hundreds of millions of deaths from malaria and other climate-related diseases, countless extinctions, the melting of the polar ice sheets, catastrophic flooding in Florida, New York, Paris, Munich, and London. Or imagine a 1 percent chance of a devastating collision between a large asteroid and our planet. If the worst-case scenario is awful enough, we might well treat a small probability as if it were much larger.

But on reflection, is this really wise? One problem is that responses to worst-case scenarios can be both burdensome and risky—and they can have worst-case scenarios of their own. We need to investigate the burdens and risks of the responses, not simply the scenarios. In the context of national security, an aggressive response to a 1 percent threat may create a new threat, perhaps higher than 1 percent, of producing its own disaster. A preemptive war, designed to eliminate a small risk of a terrible outcome, might create larger risks of a different but also terrible outcome. If the United States attacked an unfriendly nation to eliminate the (low probability?) danger that it poses, the attack would likely create a certainty of many deaths, and a (low?) probability of many more.

The Bush administration resisted significant steps to halt climate change, pointing to the burdens and costs of the regulatory actions that some people believe to be required. Suppose that climate change does, in fact, create a 1 percent risk of catastrophe at the very least—and that an aggressive response to climate change, calling for massive changes in energy policy, creates a significant chance of imposing serious hardships on many nations, including not just the United States but India and China as well. Perhaps those hardships would entail significant increases in unemployment and hence poverty. If the world devotes resources to climate change, perhaps it will not be able to use those resources to combat more serious problems.2 To take another example: We could easily imagine responses to the AIDS crisis, such as quarantines, that would impose unacceptable burdens on people who are infected or who are at risk of becoming infected. To know whether and how to respond, we must look at the consequences of possible responses—not only at the existence, probability, and size of the danger.

In this book, I try to make progress on issues of this kind. I have three specific goals. The first is to understand people's responses to worst-case scenarios and in particular their susceptibility to two opposite problems: excessive overreaction and utter neglect. As we shall see, both problems affect individuals and governments alike. The second goal is to consider how both individuals and public officials might think more sensibly about situations involving low-probability risks of disaster. Insisting on a wide viewscreen, one that emphasizes the likelihood and magnitude of the risks on all sides, would be a good place to start. The third goal is to explore the uses and limits of cost-benefit analysis, especially when dealing with harms that will not come to fruition in the near future. Cost-benefit analysis is at best a proxy for what really matters, which is well-being rather than money; but sometimes proxies can be helpful.

Throughout I shall use climate change as a defining case, both because it has immense practical importance and because it provides a valuable illustration of the underlying principles. But I shall also refer to other sources of very bad worst-case scenarios, including terrorism, depletion of the ozone layer, genetic modification of food, hurricanes, and avian flu. My hope is that the basic analysis can be adapted to diverse problems, including many not yet on the horizon. The discussion is organized around five general themes.

Intuition and analysis People's reactions to risks, and to worst-case scenarios, come through two different routes.3 The first is intuitive; the second is more analytical. Our intuitive reactions tend to be rapid and based on our personal experience. Those who have had a recent encounter with violent crime, an automobile accident, or a serious health scare will often fear a bad outcome in a roughly similar situation, whether or not their fear has any objective basis. By contrast, those who have no relevant experience may believe that an unlikely event is unworthy of attention. If a risk is perceived to fall below a certain threshold, it might not affect our behavior at all.

Our intuitions can lead to both too much and too little concern with low-probability risks. When judgments are based on analysis, they are more likely to be accurate, certainly if the analysis can be trusted. But for most people, intuitions rooted in actual experience are a much more powerful motivating force. An important task, for individuals and institutions alike, is to ensure that error-prone intuitions do not drive behavior.

Overreaction and neglect Intuitive judgments about probability often depend on whether a bad outcome can be readily imagined—whether it is cognitively "available." Before the attacks of 9/11, almost no one had imagined hijackers turning airplanes into flying bombs. The absence of recent airplane hijackings made people feel far more secure than they ought to have—the phenomenon of "unavailability bias." As a result, the terrorist threat was badly neglected. On the other hand, in the aftermath of a highly publicized event people are often far more fearful than they ought to be—the phenomenon of "availability bias." An available incident can lead to excessive fixation on worst-case scenarios—just as the absence of such an incident can lead to an unjustified sense of security.

This problem is compounded by the fact that when people's emotions are engaged, they tend to ignore the question of probability altogether. They focus on the bad outcome or on the worst that might happen, without thinking enough about how unlikely it is. (Best-case thinking—which is the curse, or blessing, of unrealistic optimists—is a related problem.) When governments impose excessive precautions in the face of some risks, they are often falling victim to "probability neglect"—wrongly treating highly improbable dangers as if they were certainties. By contrast, people often believe that they are "safe" in the sense that the worst-case scenario may be dismissed or need not be considered at all. This too is a form of probability neglect, because the assumption is that when a situation can sensibly be described as "safe," there is no risk at all. In fact, human beings face risks of different degrees of probability. It is a pervasive and damaging mistake to think that "safety" comes with an on-off switch.

Worst cases everywhere In some contexts, people are acutely aware of the burdens imposed by attempting to eliminate worst-case scenarios; but in other contexts, they are not attuned to those burdens at all. In regulatory policy, for example, those who urge extensive precautions against the worst cases often disregard the possibility that those very precautions can inflict losses and even create risks of their own. Risks, and bad worst cases, may be on all sides. Preemptive wars—designed to eliminate threats before they can materialize, including the 2003 Iraq invasion and Israel's 2006 attack on Hezbollah—have been defended on the ground that delays may increase the magnitude of the relevant threat. If a nation waits while its adversary plots and prepares, a delay might be deadly. But preemptive wars will ensure many deaths—and such wars can increase, rather than decrease, overall dangers to national security.

Often people, and nations, take undue precautions against worst-case scenarios simply because they disregard the burdens and risks of those precautions. But often people, and nations, neglect worst-case scenarios because they are unduly attentive to the burdens imposed by precautions. It is important to look at both sides of the ledger.

Risk and uncertainty With these points in mind, we can make a good deal of progress in understanding why, and when, people fail to respond sensibly to worst-case scenarios—and what might be done about these problems.

The first step is, of course, to specify the bad outcomes and try to assess their probability. By multiplying outcomes by their probability, we can produce the "expected value" of various courses of action. Sometimes science enables us to identify both outcomes and probabilities within a sharply restricted range. Public health officials might have reason to believe, for example, that the probability of a worst-case outbreak of the avian flu is above 0 percent but below 5 percent; climatologists might conclude that the probability of catastrophic climate change is above 1 percent but below 5 percent. With such information in hand, we are in a position to establish the magnitude and likelihood of gains and losses from various courses of action, including staying the current course ("business as usual"). Perhaps a margin of safety should be created to protect against the worst cases; but as we shall see, it is important to know what we lose by creating that margin of safety. If a patient, suffering from life-threatening cancer, learns that surgery and chemotherapy are 75 percent likely to produce at least a decade of additional life, his choice will probably be easy. Sometimes public officials are in a similar position. Equipped with a sense of the imaginable outcomes, of their probability, and of the burdens and risks of responding to them, we will often be in a good position to know what to do.
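
When the science yields only a range of probabilities, the same multiplication produces upper and lower bounds on the expected loss, which can then be weighed against the cost of precautions. A stylized sketch, using the 1 to 5 percent range mentioned above and a purely hypothetical loss figure rather than an estimate offered in this book:

$$p_{\min} L \;\le\; \text{expected loss} \;\le\; p_{\max} L$$

$$\text{e.g., } p \in [0.01, 0.05],\ L = \$10\ \text{trillion (hypothetical)}: \quad \$100\ \text{billion} \;\le\; \text{expected loss} \;\le\; \$500\ \text{billion}.$$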

When probabilities cannot be assigned to the worst-case scenario, the analysis is harder. Suppose that officials or scientists have no idea about the likelihood of a terrible outcome, or that they are able to specify only a wide range—believing, for example, that the risk of a catastrophic climate change is over 1 percent but below 20 percent. Even when probabilities cannot be assigned, a great deal of progress can be made if we ask how much is lost by eliminating the worst-case scenario and by specifying the difference between the worst case and next-worst case. As we shall see, the problem of irreversibility raises distinctive challenges. The simplest point is that it makes sense to do an extra amount in order to maintain flexibility for the future.

Well-being, money, and consequences It is impossible to know how to handle worst-case scenarios without having a sense of the relevant consequences. But once we identify consequences, we still need to do much more thinking before we can decide how to respond. Consequences do not speak for themselves; human beings have to evaluate them. Science may tell us what specific effects climate change is expected to have on human and animal life; but a moral evaluation of those effects is needed before we can decide what, exactly, should be done in response.

In offering guidance for handling hard problems, I emphasize the goal of increasing social well-being, usually without attempting to specify that controversial idea. (I use the term "well-being" interchangeably with the term "welfare.") Often we can make a lot of progress on worst-case scenarios without reaching the hardest or most foundational issues. In ordinary life, we do this all the time. If a really bad outcome will almost certainly not occur, and if we lose a lot by trying to prevent it, precautionary steps will not have much appeal. If the worst-case scenario cannot be ruled out, and if it is easy to ensure that it will not occur, we will take precautions. Sensible governments behave the same way, and many good decisions about risks—in such diverse domains as national security and environmental policy—are made on the basis of a fairly simple inquiry into the relevant variables. People who have different conceptions of well-being, or who are not sure how to think about the most serious controversies, can often agree that one course of action makes sense and that others do not. In short, an understanding of the nature of the potential outcomes and their likelihood may well make it possible for people to achieve what we might call an "incompletely theorized agreement" on the appropriate course of action—that is, an agreement about what to do without any kind of agreement on the theory that underlies our conclusion.4

Often, of course, life produces much harder questions. It is now standard practice for economists and other policy analysts to turn various effects, including risks to life and health, into monetary equivalents. As we shall see, cost-benefit analysis of this kind helped to spur extremely aggressive efforts to protect the ozone layer. For that problem, the United States turned out to be the most pro-regulatory nation on the face of the earth—under the leadership of President Ronald Reagan, not generally known for being pro-regulatory. But in the United States, at least, cost-benefit analysis has raised serious cautionary notes about some proposed efforts to protect against climate change.

The idea of cost-benefit analysis raises many questions, both technical and less technical, about whether and how to turn risks and worst cases into monetary equivalents. What matters is well-being, not money, and money is a poor proxy for well-being.5 Nonetheless, I shall offer a qualified defense of cost-benefit analysis—not with the preposterous suggestion that it always tells us what to do but with the more modest claim that in deciding what to do, cost-benefit analysis will often provide us with valuable information. Of course we need to know what, exactly, the monetary figures represent. Do they reflect more in the way of premature death and serious illness? Do they refer to higher prices for consumer products? To lower wages? Qualitative as well as quantitative information is important. But in deciding how to respond to worst-case scenarios, monetary equivalents can provide some valuable discipline.

We need to consider distributional questions too. Who is helped and who is hurt by any effort to eliminate worst-case scenarios? The question is especially pressing for the problem of climate change, where people in poor regions, above all India and Africa, are most vulnerable. But the distributional question bears on many other potential disasters as well, such as AIDS and avian flu. We also need to separate the question of regulation from the question of subsidy. It would not make sense to adopt regulations that force the citizens of a poor nation to pay $100 each to eliminate a risk of 1/500,000. But it might well make sense for wealthy nations to transfer resources to citizens of such a nation, to enable them to take more and better steps to eliminate the risks they face.

For advocates of cost-benefit analysis, a particularly thorny question is how to handle future generations when they are threatened by worst-case scenarios. According to standard practice, money that will come in the future must be "discounted"; a dollar twenty years hence is worth a fraction of a dollar today. (You would almost certainly prefer $1,000 now to $1,000 in twenty years.) Should we discount future lives as well? Is a life twenty years hence worth a fraction of a life today? I will argue in favor of a Principle of Intergenerational Neutrality—one that requires the citizens of every generation to be treated equally. This principle has important implications for many problems, most obviously climate change. Present generations are obliged to take the interests of their threatened descendants as seriously as they take their own.
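
The standard discounting arithmetic is simple; in the sketch below, the 5 percent rate is purely illustrative rather than a rate endorsed here. A benefit worth $V$ received $t$ years from now has a present value of $V/(1+r)^t$:

$$PV = \frac{V}{(1+r)^{t}}, \qquad \frac{\$1{,}000}{(1.05)^{20}} \approx \$377.$$

Whether future lives, and not merely future dollars, should be treated this way is precisely the question that the Principle of Intergenerational Neutrality is meant to address.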

But the Principle of Intergenerational Neutrality does not mean that the present generation should refuse to discount the future, or should impose great sacrifices on itself for the sake of those who will come later. If human history is any guide, the future will be much richer than the present; and it makes no sense to say that the relatively impoverished present should transfer its resources to the far wealthier future. And if the present generation sacrifices itself by forgoing economic growth, it is likely to hurt the future too, because long-term economic growth is likely to produce citizens who live healthier, longer, and better lives. I shall have something to say about what intergenerational neutrality actually requires, and about the complex relationship between that important ideal and the disputed practice of "discounting" the future.

The plan The organization of this book follows the themes just traced. Chapter 1 sets the stage by exploring risk perceptions, and our responses to worst-case scenarios, with close reference to two of the most important threats of our time: terrorism and climate change. With respect to terrorism, American reactions are greatly heightened by three identifiable mechanisms: availability, probability neglect, and outrage. With respect to climate change, American reactions are greatly dampened. Because most Americans lack personal experience with serious climate-related harms, and because those harms are perceived as likely to occur in the future and in distant lands, Americans have been unwilling to spend much money to prevent them. As we shall see, terrorism and climate change present instructive polar cases: The first is peculiarly likely to produce close attention to worst-case scenarios, while the second is especially unlikely to do so. An understanding of Americans' divergent reactions to these two risks illuminates worst-case thinking in general.

Chapter 2 offers a different kind of comparison. The Montreal Protocol, designed to protect the ozone layer, has been a sensational success story. It has largely eliminated ozone-depleting chemicals and ensured that the ozone layer will eventually return to its natural state. The United States, the world's largest emitter of such chemicals, took the lead in urging an aggressive response to the problem of ozone depletion. By contrast, the Kyoto Protocol, designed to protect against climate change, presents a mixed picture at best. It has been firmly rejected by the United States, the world's largest emitter of greenhouse gases. It does not impose restrictions on emissions from the developing world, even though China is likely to be the world's largest greenhouse gas emitter in the near future. And it is likely to be violated by many of its signatories, including several European nations. The task is to explain the radically different fates of these two efforts to come to terms with potentially catastrophic environmental problems.

Part of the explanation lies in the dramatically different approaches of the United States, based on a domestic assessment of the consequences of the two protocols. Part of the explanation lies in the different incentives of most of the planet's nations. The problem of ozone depletion could be handled relatively easily, in a way that promised to deliver huge benefits at low cost. The same could not be said for climate change. The United States and China have a lot to gain from the emission of greenhouse gases, and disproportionately little to fear from climate change. Taken together, chapters 1 and 2 offer concrete lessons about how a successful agreement for climate change might be made more probable, and how such an agreement might be structured. The discussion also offers some broader lessons about how and when societies are likely to respond to worst-case scenarios.

Chapters 3 and 4 turn to the question of how to deal with such scenarios. Chapter 3 explains that notwithstanding its international influence, the Precautionary Principle is incoherent; it condemns the very steps that it requires. To see the point, imagine if we adopted a universal One Percent Doctrine, forbidding any course of action that had a 1 percent chance of causing significant harm. The likely result would be paralysis, because so many courses of action would be forbidden. (Even doing nothing might be prohibited; human beings who do nothing—assuming we can agree what that means—will probably end up pretty unhealthy. Will they even eat? What will they eat?) But narrower and better precautionary principles can be devised. In particular, I identify a Catastrophic Harm Precautionary Principle, designed to provide guidance for dealing with extremely serious risks. This principle emphasizes the need to attend to both the magnitude and the probability of the harm. It also emphasizes that when catastrophic risks come to fruition, they often have even more serious consequences than we foresee, because of a process that has been called "social amplification." Properly understood, a Catastrophic Harm Precautionary Principle should counteract the dual problems of overreaction and neglect.

Chapter 4 explores the question of irreversibility and in particular the argument for an Irreversible Harm Precautionary Principle, suited to such problems as ozone depletion, protection of endangered species, destruction of cultural artifacts, and of course climate change. One problem here stems from the need to specify the idea of irreversibility, which can be understood in several different ways. Another problem is that precautionary steps may be irreversible too, even if they merely involve expenditures. But the central point remains: Sensible individuals and societies are willing to spend a great deal to preserve their own flexibility in the future.

Chapters 5 and 6 investigate costs and benefits. As Chapter 5 explains in detail, what matters is well-being, and even when the costs of regulation exceed the benefits, regulation might promote well-being. But cost-benefit analysis is useful nonetheless. Whether we care about individual autonomy or social welfare, there is good reason to consider people's "willingness to pay" to avoid risks, including those risks associated with worst-case scenarios—at least when regulation forces people to pay for the benefits they receive. But it is important to ask whether people are adequately informed and whether they suffer from the kinds of cognitive defects emphasized in Chapter 1. If people lack information, or if they process information poorly, we cannot rely on their willingness to pay to reduce statistical risks. At the same time, people's judgments as consumers differ from their judgments as citizens, and this difference complicates the economic case for cost-benefit analysis. An additional problem involves social deprivation: If deprivation has led people to adapt to serious risks, believing them to be an inevitable part of life, then we cannot defend a decision to expose those people to those risks by saying that they prefer the way that they now live. These various points help to clarify the current debate over risk regulation in poor countries. Of course it is preposterous to say that the inhabitants of one nation are "worth less" than the inhabitants of another. But as we shall see, it is not preposterous to say that a rich nation rightly spends more to reduce a mortality risk of 1 in 100,000 than a poor nation does.

Chapter 6 explores one of the most vexing questions of all: valuation of the future. I argue on behalf of a Principle of Intergenerational Neutrality that requires existing generations to give careful consideration to the consequences of their decisions for those who will follow. But this principle does not resolve the debate over whether to "discount" future events or their monetary equivalents. Some of the time, future generations are helped, not hurt, by a decision to discount, because the future is damaged if the present impoverishes itself. But for some problems, cost-benefit analysis with discounting can lead to severe violations of the Principle of Intergenerational Neutrality. As we will see, that form of analysis can impair well-being and cause serious problems of distributional unfairness. These problems should be addressed directly—a point that has implications for climate change in particular.

By way of conclusion, I emphasize the possibility of self-help and investigate some of the links between personal behavior and the judgments of government officials.
