Conclusion

Why should there be an organized body of thinking about existential risks? Falling asteroids are not like engineered superviruses; physics disasters are not like nanotechnological wars. Why not consider each of these problems separately?

If someone proposes a physics disaster, then the committee convened to analyze the problem must obviously include physicists. But someone on that committee should also know how terribly dangerous it is to have an answer in your mind before you finish asking the question. Someone on that committee should remember the reply of Enrico Fermi to Leo Szilard's proposal that a fission chain reaction could be used to build nuclear weapons. (The reply was "Nuts!" - Fermi considered the possibility so remote as to not be worth investigating.) Someone should remember the history of errors in physics calculations: the Castle Bravo nuclear test that produced a 15-megaton explosion, instead of the predicted 4 to 8 megatons, because of an unconsidered reaction in lithium-7. The physicists correctly solved the wrong equation, failed to think of all the terms that needed to be included, and at least one person in the expanded fallout radius died. Someone should remember Lord Kelvin's careful proof, using multiple, independent quantitative calculations from well-established theories, that the Earth could not possibly have existed for so much as forty million years. Someone should know that when an expert says the probability is "a million to one" without using actuarial data or calculations from a precise, precisely confirmed model, the calibration is probably more like twenty to one (though this is not an exact conversion).

Any existential risk evokes problems that it shares with all other existential risks, in addition to the domain-specific expertise required for the specific existential risk. Someone on the physics-disaster committee should know what the term "existential risk" means; should possess whatever skills the field of existential risk management has accumulated or borrowed. For maximum safety, that person should also be a physicist. The domain-specific expertise and the expertise pertaining to existential risks should combine in one person. I am skeptical that a scholar of heuristics and biases, unable to read physics equations, could check the work of physicists who knew nothing of heuristics and biases.

Once upon a time I made up overly detailed scenarios, without realizing that every additional detail was an extra burden. Once upon a time I really did think that I could say there was a ninety percent chance of Artificial Intelligence being developed between 2005 and 2025, with the peak in 2018. This statement now seems to me like complete gibberish. Why did I ever think I could generate a tight probability distribution over a problem like that? Where did I even get those numbers in the first place?

Skilled practitioners of, say, molecular nanotechnology or Artificial Intelligence, will not automatically know the additional skills needed to address the existential risks of their profession. No one told me, when I addressed myself to the challenge of Artificial Intelligence, that it was needful for such a person as myself to study heuristics and biases. I don't remember why I first ran across an account of heuristics and biases, but I remember that it was a description of an overconfidence result - a casual description, online, with no references. I was so incredulous that I contacted the author to ask if this was a real experimental result. (He referred me to the edited volume Judgment Under Uncertainty.)

I should not have had to stumble across that reference by accident. Someone should have warned me, as I am warning you, that this is knowledge needful to a student of existential risk. There should be a curriculum for people like ourselves; a list of skills we need in addition to our domain-specific knowledge. I am not a physicist, but I know a little - probably not enough - about the history of errors in physics, and a biologist thinking about superviruses should know it too.

I once met a lawyer who had made up his own theory of physics. I said to the lawyer: You cannot invent your own physics theories without knowing math and studying for years; physics is hard. He replied: But if you really understand physics you can explain it to your grandmother, Richard Feynman told me so. And I said to him: "Would you advise a friend to argue his own court case?" At this he fell silent. He knew abstractly that physics was difficult, but I think it had honestly never occurred to him that physics might be as difficult as lawyering.

One of many biases not discussed in this chapter describes the biasing effect of not knowing what we do not know. When a company recruiter evaluates his own skill, he recalls to mind the performance of candidates he hired, many of whom subsequently excelled; therefore the recruiter thinks highly of his skill. But the recruiter never sees the work of candidates not hired. Thus I must warn that this chapter touches upon only a small subset of heuristics and biases; for when you wonder how much you have already learned, you will recall the few biases this chapter does mention, rather than the many biases it does not. Brief summaries cannot convey a sense of the field, the larger understanding which weaves a set of memorable experiments into a unified interpretation. Many highly relevant biases, such as need for closure, I have not even mentioned. The purpose of this chapter is not to teach the knowledge needful to a student of existential risks, but to intrigue you into learning more.

Thinking about existential risks falls prey to all the same fallacies that prey upon thinking-in-general. But the stakes are much, much higher. A common result in heuristics and biases is that offering money or other incentives does not eliminate the bias. (Kachelmeier and Shehata (1992) offered subjects living in the People's Republic of China the equivalent of three months' salary.) The subjects in these experiments don't make mistakes on purpose; they make mistakes because they don't know how to do better. Even if you told them the survival of humankind was at stake, they still would not thereby know how to do better. (It might increase their need for closure, causing them to do worse.) It is a terribly frightening thing, but people do not become any smarter, just because the survival of humankind is at stake.

In addition to standard biases, I have personally observed what look like harmful modes of thinking specific to existential risks. The Spanish flu of 1918 killed 25-50 million people. World War II killed 60 million people. 10^8 is the order of the largest catastrophes in humanity's written history. Substantially larger numbers, such as 500 million deaths, and especially qualitatively different scenarios such as the extinction of the entire human species, seem to trigger a different mode of thinking - enter into a "separate magisterium". People who would never dream of hurting a child hear of an existential risk, and say, "Well, maybe the human species doesn't really deserve to survive."

There is a saying in heuristics and biases that people do not evaluate events, but descriptions of events - what is called non-extensional reasoning. The extension of humanity's extinction includes the death of yourself, of your friends, of your family, of your loved ones, of your city, of your country, of your political fellows. Yet people who would take great offense at a proposal to wipe the country of Britain from the map, to kill every member of the Democratic Party in the U.S., to turn the city of Paris to glass - who would feel still greater horror on hearing the doctor say that their child had cancer - these people will discuss the extinction of humanity with perfect calm. "Extinction of humanity", as words on paper, appears in fictional novels, or is discussed in philosophy books - it belongs to a different context than the Spanish flu. We evaluate descriptions of events, not extensions of events. The cliché phrase "end of the world" invokes the magisterium of myth and dream, of prophecy and apocalypse, of novels and movies. The challenge of existential risks to rationality is that, the catastrophes being so huge, people snap into a different mode of thinking. Human deaths are suddenly no longer bad, and detailed predictions suddenly no longer require any expertise, and whether the story is told with a happy ending or a sad ending is a matter of personal taste in stories.

But that is only an anecdotal observation of mine. I thought it better that this essay should focus on mistakes well-documented in the literature - the general literature of cognitive psychology, because there is not yet experimental literature specific to the psychology of existential risks. There should be.

In the mathematics of Bayesian decision theory there is a concept of information value - the expected utility of knowledge. The value of information emerges from the value of whatever it is information about; if you double the stakes, you double the value of information about the stakes. The value of rational thinking works similarly - the value of performing a computation that integrates the evidence is calculated much the same way as the value of the evidence itself. (Good 1952; Horvitz et al. 1989.)
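
To make that scaling claim concrete, here is a minimal sketch of my own (not from the chapter itself; the decision problem, payoffs, and function name are hypothetical) that computes the expected value of perfect information for a toy two-state, two-action problem, and checks that doubling every payoff doubles the value of information.

```python
# Minimal sketch: expected value of perfect information (VOI) for a toy
# decision problem. The scenario and numbers below are hypothetical,
# chosen only to illustrate that scaling the stakes scales the VOI.

def value_of_perfect_information(prior, utility):
    """prior: dict state -> probability; utility: dict (action, state) -> payoff."""
    actions = {a for (a, _) in utility}
    # Expected utility of the best action chosen with no information.
    eu_uninformed = max(
        sum(prior[s] * utility[(a, s)] for s in prior) for a in actions
    )
    # Expected utility if the true state is revealed before choosing.
    eu_informed = sum(
        prior[s] * max(utility[(a, s)] for a in actions) for s in prior
    )
    return eu_informed - eu_uninformed

prior = {"risk_real": 0.1, "risk_absent": 0.9}
utility = {
    ("mitigate", "risk_real"): -10,   ("mitigate", "risk_absent"): -10,
    ("ignore",   "risk_real"): -1000, ("ignore",   "risk_absent"): 0,
}
voi = value_of_perfect_information(prior, utility)
doubled = value_of_perfect_information(prior, {k: 2 * v for k, v in utility.items()})
print(voi, doubled)  # doubling every payoff exactly doubles the value of information
```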

No more than Albert Szent-Gyorgyi could multiply the suffering of one human by a hundred million can I truly understand the value of clear thinking about global risks. Scope neglect is the hazard of being a biological human, running on an analog brain; the brain cannot multiply by six billion. And the stakes of existential risk extend beyond even the six billion humans alive today, to all the stars in all the galaxies that humanity and humanity's descendants may some day touch. All that vast potential hinges on our survival here, now, in the days when the realm of humankind is a single planet orbiting a single star. I can't feel our future. All I can do is try to defend it.

Recommended Reading

Judgment under uncertainty: Heuristics and biases. (1982.) Edited by Daniel Kahneman, Paul Slovic, and Amos Tversky. This is the edited volume that helped establish the field, written with the outside academic reader firmly in mind. Later research has generalized, elaborated, and better explained the phenomena treated in this volume, but the basic results given are still standing strong.

Choices, Values, and Frames. (2000.) Edited by Daniel Kahneman and Amos Tversky. Heuristics and Biases. (2003.) Edited by Thomas Gilovich, Dale Griffin, and Daniel Kahneman. These two edited volumes overview the field of heuristics and biases in its current form. They are somewhat less accessible to a general audience.

Rational Choice in an Uncertain World: The Psychology of Intuitive Judgment by Robyn Dawes. First edition 1988 by Dawes and Kagan, second edition 2001 by Dawes and Hastie. This book aims to introduce heuristics and biases to an intelligent general audience. (For example: Bayes's Theorem is explained, rather than assumed, but the explanation is only a few pages.) A good book for quickly picking up a sense of the field.

Bibliography

Alpert, M. and Raiffa, H. 1982. A Progress Report on the Training of Probability Assessors. In Kahneman et al. 1982: 294-305.

Ambrose, S.H. 1998. Late Pleistocene human population bottlenecks, volcanic winter, and differentiation of modern humans. Journal of Human Evolution 34:623-651.

Baron, J. and Greene, J. 1996. Determinants of insensitivity to quantity in valuation of public goods: contribution, warm glow, budget constraints, availability, and prominence. Journal of Experimental Psychology: Applied, 2: 107-125.

Bostrom, N. 2001. Existential Risks: Analyzing Human Extinction Scenarios. Journal of Evolution and Technology, 9.

Brenner, L. A., Koehler, D. J. and Rottenstreich, Y. 2002. Remarks on support theory: Recent advances and future directions. In Gilovich et al. 2003: 489-509.

Buehler, R., Griffin, D. and Ross, M. 1994. Exploring the "planning fallacy": Why people underestimate their task completion times. Journal of Personality and Social Psychology, 67: 366-381.

Buehler, R., Griffin, D. and Ross, M. 1995. It's about time: Optimistic predictions in work and love. Pp. 1-32 in European Review of Social Psychology, Volume 6, eds. W. Stroebe and M. Hewstone. Chichester: John Wiley & Sons.

Buehler, R., Griffin, D. and Ross, M. 2002. Inside the planning fallacy: The causes and consequences of optimistic time predictions. In Gilovich et al. 2003: 250-270.

Burton, I., Kates, R. and White, G. 1978. Environment as Hazard. New York: Oxford University Press.

Carson, R. T. and Mitchell, R. C. 1995. Sequencing and Nesting in Contingent Valuation Surveys. Journal of Environmental Economics and Management, 28(2): 155-73.

Chapman, G.B. and Johnson, E.J. 2002. Incorporating the irrelevant: Anchors in judgments of belief and value. In Gilovich et al. 2003.

Christensen-Szalanski, J.J.J. and Bushyhead, J.B. 1981. Physicians' Use of Probabilistic Information in a Real Clinical Setting. Journal of Experimental Psychology: Human Perception and Performance, 7: 928-935.

Cialdini, R. B. 2001. Influence: Science and Practice. Boston, MA: Allyn and Bacon.

Combs, B. and Slovic, P. 1979. Causes of death: Biased newspaper coverage and biased judgments. Journalism Quarterly, 56: 837-843.

Dawes, R.M. 1988. Rational Choice in an Uncertain World. San Diego, CA: Harcourt, Brace, Jovanovich.

Desvousges, W.H., Johnson, F.R., Dunford, R.W., Boyle, K.J., Hudson, S.P. and Wilson, N. 1993. Measuring natural resource damages with contingent valuation: tests of validity and reliability. Pp. 91-159 in Contingent valuation: a critical assessment, ed. J. A. Hausman. Amsterdam: North Holland.

Fetherstonhaugh, D., Slovic, P., Johnson, S. and Friedrich, J. 1997. Insensitivity to the value of human life: A study of psychophysical numbing. Journal of Risk and Uncertainty, 14: 283-300.

Finucane, M.L., Alhakami, A., Slovic, P. and Johnson, S.M. 2000. The affect heuristic in judgments of risks and benefits. Journal of Behavioral Decision Making, 13(1): 1-17.

Fischhoff, B. 1982. For those condemned to study the past: Heuristics and biases in hindsight. In Kahneman et al. 1982: 332-351.

Fischhoff, B., and Beyth, R. 1975. I knew it would happen: Remembered probabilities of once-future things. Organizational Behavior and Human Performance, 13: 1-16.

Fischhoff, B., Slovic, P. and Lichtenstein, S. 1977. Knowing with certainty: The appropriateness of extreme confidence. Journal of Experimental Psychology: Human Perception and Performance, 3: 522-564.

Ganzach, Y. 2001. Judging risk and return of financial assets. Organizational Behavior and Human Decision Processes, 83: 353-370.

Garreau, J. 2005. Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What It Means to Be Human. New York: Doubleday.

Gilbert, D. T. and Osborne, R. E. 1989. Thinking backward: Some curable and incurable consequences of cognitive busyness. Journal of Personality and Social Psychology, 57: 940-949.

Gilbert, D. T., Pelham, B. W. and Krull, D. S. 1988. On cognitive busyness: When person perceivers meet persons perceived. Journal of Personality and Social Psychology, 54: 733-740.

Gilovich, T. 2000. Motivated skepticism and motivated credulity: Differential standards of evidence in the evaluation of desired and undesired propositions. Presented at the 12th Annual Convention of the American Psychological Society, Miami Beach, Florida.

Gilovich, T., Griffin, D. and Kahneman, D. eds. 2003. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge, U.K.: Cambridge University Press.

Good, I. J. 1952. Rational decisions. Journal of the Royal Statistical Society, Series B.

Griffin, D. and Tversky, A. 1992. The weighing of evidence and the determinants of confidence. Cognitive Psychology, 24: 411-435.

Harrison, G. W. 1992. Valuing public goods with the contingent valuation method: a critique of Kahneman and Knetsch. Journal of Environmental Economics and Management, 23: 248-57.

Horvitz, E.J., Cooper, G.F. and Heckerman, D.E. 1989. Reflection and Action Under Scarce Resources: Theoretical Principles and Empirical Study. Pp. 1121-27 in Proceedings of the Eleventh International Joint Conference on Artificial Intelligence. Detroit, MI.

Hynes, M. E. and Vanmarcke, E. K. 1976. Reliability of Embankment Performance Predictions. Proceedings of the ASCE Engineering Mechanics Division Specialty Conference. Waterloo, Ontario: Univ. of Waterloo Press.

Johnson, E., Hershey, J., Meszaros, J., and Kunreuther, H. 1993. Framing, Probability Distortions and Insurance Decisions. Journal of Risk and Uncertainty, 7: 35-51.

Kachelmeier, S.J. and Shehata, M. 1992. Examining risk preferences under high monetary incentives: Experimental evidence from the People's Republic of China. American Economic Review, 82: 1120-1141.

Kahneman, D. 1986. Comments on the contingent valuation method. Pp. 185-194 in Valuing environmental goods: a state of the arts assessment of the contingent valuation method, eds. R. G. Cummings, D. S. Brookshire and W. D. Schulze. Totowa, NJ: Rowman and Allanheld.

Kahneman, D. and Knetsch, J.L. 1992. Valuing public goods: the purchase of moral satisfaction. Journal of Environmental Economics and Management, 22: 57-70.

Kahneman, D., Ritov, I. and Schkade, D. A. 1999. Economic Preferences or Attitude Expressions?: An Analysis of Dollar Responses to Public Issues, Journal of Risk and Uncertainty, 19: 203-235.

Kahneman, D., Slovic, P., and Tversky, A., eds. 1982. Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.

Kahneman, D. and Tversky, A. 2000. eds. Choices, Values, and Frames. Cambridge, U.K.: Cambridge University Press.

Kamin, K. and Rachlinski, J. 1995. Ex Post ≠ Ex Ante: Determining Liability in Hindsight. Law and Human Behavior, 19(1): 89-104.

Kates, R. 1962. Hazard and choice perception in flood plain management. Research Paper No. 78. Chicago: University of Chicago, Department of Geography.

Knaup, A. 2005. Survival and longevity in the business employment dynamics data. Monthly Labor Review, May 2005.

Kunda, Z. 1990. The case for motivated reasoning. Psychological Bulletin, 108(3): 480-498.

Kunreuther, H., Hogarth, R. and Meszaros, J. 1993. Insurer ambiguity and market failure. Journal of Risk and Uncertainty, 7: 71-87.

Latane, B. and Darley, J. 1969. Bystander "Apathy", American Scientist, 57: 244-268.

Lichtenstein, S., Fischhoff, B. and Phillips, L. D. 1982. Calibration of probabilities: The state of the art to 1980. In Kahneman et al. 1982: 306-334.

Lichtenstein, S., Slovic, P., Fischhoff, B., Layman, M. and Combs, B. 1978. Judged Frequency of Lethal Events. Journal of Experimental Psychology: Human Learning and Memory, 4(6), November: 551-78.

McFadden, D. and Leonard, G. 1995. Issues in the contingent valuation of environmental goods: methodologies for data collection and analysis. In Contingent valuation: a critical assessment, ed. J. A. Hausman. Amsterdam: North Holland.

Newby-Clark, I. R., Ross, M., Buehler, R., Koehler, D. J. and Griffin, D. 2000. People focus on optimistic and disregard pessimistic scenarios while predicting their task completion times. Journal of Experimental Psychology: Applied, 6: 171-182.

Quattrone, G.A., Lawrence, C.P., Finkel, S.E. and Andrus, D.C. 1981. Explorations in anchoring: The effects of prior range, anchor extremity, and suggestive hints. Manuscript, Stanford University.

Rasmussen, N. C. 1975. Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants. NUREG-75/014, WASH-1400 (U.S. Nuclear Regulatory Commission, Washington, D.C.)

Rogers, W. et al. 1986. Report of the Presidential Commission on the Space Shuttle Challenger Accident. Presidential Commission on the Space Shuttle Challenger Accident. Washington, DC.

Sanchirico, C. 2003. Finding Error. Mich. St. L. Rev. 1189.

Schneier, B. 2005. Security lessons of the response to hurricane Katrina. http://www.schneier.com/blog/archives/2005/09/security_lesson.html. Viewed on January 23, 2006.

Sides, A., Osherson, D., Bonini, N., and Viale, R. 2002. On the reality of the conjunction fallacy. Memory & Cognition, 30(2): 191-8.

Slovic, P., Finucane, M., Peters, E. and MacGregor, D. 2002. Rational Actors or Rational Fools: Implications of the Affect Heuristic for Behavioral Economics. Journal of Socio-Economics, 31: 329-342.

Slovic, P., Fischhoff, B. and Lichtenstein, S. 1982. Facts Versus Fears: Understanding Perceived Risk. In Kahneman et al. 1982: 463-492.

Strack, F. and Mussweiler, T. 1997. Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology, 73: 437-446.

Taber, C.S. and Lodge, M. 2000. Motivated skepticism in the evaluation of political beliefs. Presented at the 2000 meeting of the American Political Science Association.

Taleb, N. 2001. Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets. Pp. 81-85. New York: Texere.

Taleb, N. 2005. The Black Swan: Why Don't We Learn that We Don't Learn? New York: Random House.

Tversky, A. and Kahneman, D. 1973. Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 4: 207-232.

Tversky, A. and Kahneman, D. 1974. Judgment under uncertainty: Heuristics and biases. Science, 185: 1124-1131.

Tversky, A. and Kahneman, D. 1982. Judgments of and by representativeness. In Kahneman et al. 1982: 84-98.

Tversky, A. and Kahneman, D. 1983. Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90: 293-315.

Wansink, B., Kent, R.J. and Hoch, S.J. 1998. An Anchoring and Adjustment Model of Purchase Quantity Decisions. Journal of Marketing Research, 35(February): 71-81.

Wason, P.C. 1960. On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12: 129-140.

Wilson, T.D., Houston, C., Etling, K.M. and Brekke, N. 1996. A new look at anchoring effects: Basic anchoring and its antecedents. Journal of Experimental Psychology: General, 4: 387-402.
