Contemporary technomillennialism

4.5.1 The singularity and techno-millennialism

Joel Garreau's (2006) recent book on the psychoculture of accelerating change, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies - and What It Means to Be Human, is structured in three parts: Heaven, Hell and Prevail. In the Heaven scenario he focuses on the predictions of inventor Ray Kurzweil, summarized in his 2005 book, The Singularity Is Near. The idea of a techno-millennial 'Singularity' was coined in a 1993 paper by mathematician and science fiction author Vernor Vinge. In physics, 'singularities' are the centres of black holes, within which we cannot predict how physical laws will work. In the same way, Vinge said, greater-than-human machine intelligence, multiplying exponentially, would make everything about our world unpredictable. Most Singularitarians, like Vinge and Kurzweil, have focused on the emergence of superhuman machine intelligence, but the even more fundamental concept is exponential technological progress, with the multiplier quickly leading to a point of radical social crisis. Vinge projected that self-willed artificial intelligence would emerge within the next 30 years, by 2023, with either apocalyptic or millennial consequences. Kurzweil predicts the Singularity for 2045.

The most famous accelerating trend is 'Moore's Law', articulated by Intel co-founder Gordon Moore in 1965, which is the observation that the number of transistors that can fit on a computer chip has doubled about every 18 months since their invention. Kurzweil goes to great lengths to document that these trends of accelerating change also occur in genetics, mechanical miniaturization, and telecommunications, and not just in transistors. Kurzweil projects that the 'law of accelerating returns' from technological change is 'so rapid and profound it represents a rupture in the fabric of human history'. For instance, Kurzweil predicts that we will soon be able to distribute trillions of nanorobots in our brains, and thereby extend our minds, and eventually upload our minds into machines. Since lucky humans will at that point merge with superintelligence or become superintelligent, some refer to the Singularity as the 'Techno-rapture', pointing out the similarity of the narrative to the Christian Rapture: those foresighted enough to be early adopters of life extension and cybernetics will live long enough to be uploaded and 'vastened' (given vastly expanded mental abilities) after the Singularity, while the rest of humanity may be left behind.
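To make the arithmetic behind such claims explicit (a gloss added here, not a formula Kurzweil states in this form): a quantity that doubles every 18 months grows as N(t) = N(0) x 2^(t/1.5), with t measured in years, so it multiplies roughly a hundredfold per decade (2^(10/1.5) is about 100) and roughly a millionfold over 30 years. It is this compounding, rather than any single technology, that Singularitarians extrapolate to a point of 'rupture'.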

This secular 'left behind' narrative is very explicit in the Singularitarian writings of computer scientist Hans Moravec (1990, 2000). For Moravec, the human race will be superseded by our robot children, among whom some of us may be able to expand to the stars. In his Robot: Mere Machine to Transcendent Mind, Moravec (2000, pp. 142-162) says 'Our artificial progeny will grow away from and beyond us, both in physical distance and structure, and similarity of thought and motive. In time their activities may become incompatible with the old Earth's continued existence... An entity that fails to keep up with its neighbors is likely to be eaten, its space, materials, energy, and useful thoughts reorganized to serve another's goals. Such a fate may be routine for humans who dally too long on slow Earth before going Ex.'

Here we have Tribulations and damnation for the late adopters, in addition to the millennial Utopian outcome for the elect.

Although Kurzweil acknowledges apocalyptic potentials - such as humanity being destroyed by superintelligent machines - inherent in these technologies, he is nonetheless uniformly Utopian and enthusiastic. Hence Garreau's labelling of Kurzweil's vision as the 'Heaven' scenario. While Kurzweil (2005) acknowledges his similarity to millennialists by, for instance, including a tongue-in-cheek picture in The Singularity Is Near of himself holding a sign with that slogan, referencing the classic cartoon image of the End Times street prophet, most Singularitarians angrily reject such comparisons, insisting that their expectations are based solely on rational, scientific extrapolation.

Other Singularitarians, however, embrace parallels with religious millennialism. John Smart, founder and director of the California-based Acceleration Studies Foundation, often notes the similarity between his own 'Global Brain' scenario and the eschatological writings of the Jesuit palaeontologist Teilhard de Chardin (2007). In the Global Brain scenario, all human beings are linked to one another and to machine intelligence in the emerging global telecommunications web, leading to the emergence of collective intelligence. This emergent collectivist form of Singularitarianism was also proposed by Peter Russell (1983) in The Global Brain, and Gregory Stock (1993) in Metaman. Smart (2007) argues that the scenario of an emergent global human-computer meta-mind is similar to Chardin's eschatological idea of humanity being linked in a global 'noosphere', or info-sphere, leading to a post-millennial 'Omega Point' of union with God. Computer scientist Juergen Schmidhuber (2006) has also adopted Chardin's 'Omega' to refer to the Singularity.

For most Singularitarians, as for most millennialists, the process of technological innovation is depicted as autonomous of human agency, and wars, technology bans, energy crises or simple incompetence are dismissed as unlikely to slow or stop the trajectory. Kurzweil (2006) insists, for instance, that the accelerating trends he documents have marched unhindered through wars, plagues and depressions. Other historians of technology (Lanier, 2000; Seidensticker, 2006; Wilson, 2007) argue that Kurzweil ignores techno-trends which did stall, due to design challenges and failures, and to human factors that slowed the diffusion of new technologies, factors which might also slow or avert greater-than-human machine intelligence. Noting that most predictions of electronic transcendence fall within the predictor's expected lifespan, technology writer Kevin Kelly (2007) suggests that people who make such predictions have a cognitive bias towards optimism.

The point of this essay is not to parse the accuracy or empirical evidence for exponential change or catastrophic risks, but to examine how the millennialism that accompanies their consideration biases assessment of their risks and benefits, and of the best courses of action to reduce the former and ensure the latter. There is of course an important difference between fear of a civilization-ending nuclear war, grounded in all-too-real possibility, and fear of the end of history from a prophesied supernatural event. I do not mean to suggest that all discussion of Utopian and catastrophic possibilities is merely millennialist fantasy, but rather that recognizing millennialist dynamics permits more accurate risk/benefit assessments and more effective prophylactic action.
