MARTIN REES
Astronomer Royal; former president, the Royal Society; emeritus professor of cosmology & astrophysics, University of Cambridge; author, From Here to Infinity: A Vision for the Future of Science
Those of us fortunate enough to live in the developed world fret too much about minor hazards of everyday life: improbable air crashes, carcinogens in food, and so forth. But we are less secure than we think. We should worry far more about scenarios that have thankfully not yet happened—but which, if they occurred, could cause such worldwide devastation that even once would be too often.
Much has been written about possible ecological shocks triggered by the collective impact on the biosphere of a growing and more demanding world population, and about the social and political tensions stemming from scarcity of resources or climate change. But even more worrying are the downsides of powerful new technologies: cyber-, bio-, and nano-. We’re entering an era when a few individuals could, via error or terror, trigger a societal breakdown with such extreme suddenness that palliative government actions would be overwhelmed.
Some would dismiss these concerns as an exaggerated jeremiad: After all, human societies have survived for millennia despite storms, earthquakes, and pestilence. But these human-induced threats are different: They are newly emergent, so we have a limited time base for exposure to them and can’t be sanguine about our chances of surviving them for long, nor about the ability of governments to cope if disaster strikes. And of course we have zero grounds for confidence that we can survive the worst that even more powerful future technologies could do.
The “Anthropocene” era, when the main global threats come from humans and not from nature, began with the mass deployment of thermonuclear weapons. Throughout the cold war, there were several occasions when the superpowers could have stumbled toward nuclear Armageddon through muddle or miscalculation. Those who lived anxiously through the Cuban missile crisis would have been not merely anxious but paralytically scared had they realized just how close the world then was to catastrophe. Only later did we learn that President Kennedy assessed the odds of nuclear war, at one stage, as “somewhere between one out of three and even.” And only when he was long retired did Robert McNamara state frankly that “[w]e came within a hair’s breadth of nuclear war without realizing it. It’s no credit to us that we escaped—Khrushchev and Kennedy were lucky as well as wise.”
It is now conventionally asserted that nuclear deterrence worked. In a sense, it did. But that doesn’t mean it was a wise policy. If you play Russian roulette with one or two bullets in the barrel, you are more likely to survive than not, but the stakes would need to be astonishingly high—or the value you place on your life inordinately low—for this to seem a wise gamble.
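To make the arithmetic of the analogy concrete, here is a minimal sketch; the six-chamber revolver and the round counts are illustrative assumptions, not figures from the essay:

```python
# Illustrative arithmetic for the Russian-roulette analogy (assumptions:
# a six-chamber revolver; each trigger pull is an independent round).

CHAMBERS = 6

def survival_per_round(bullets: int) -> float:
    """Probability of surviving a single pull of the trigger."""
    return (CHAMBERS - bullets) / CHAMBERS

def survival_after(bullets: int, rounds: int) -> float:
    """Probability of surviving `rounds` independent pulls."""
    return survival_per_round(bullets) ** rounds

for b in (1, 2):
    print(f"{b} bullet(s): {survival_per_round(b):.0%} per round, "
          f"{survival_after(b, 10):.0%} after 10 rounds")
# 1 bullet(s): 83% per round, 16% after 10 rounds
# 2 bullet(s): 67% per round, 2% after 10 rounds
```

Odds that favor survival in any single round collapse under repeated play, which is why decades of deterrence that happened to end well are weak evidence that the gamble was wise.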
But we were dragooned into just such a gamble throughout the cold war era. It would be interesting to know what level of risk other leaders thought they were exposing us to, and what odds most European citizens would have accepted, if they’d been asked to give informed consent. For my part, I would not have chosen to risk a one in three—or even one in six—chance of a disaster that would have killed hundreds of millions and shattered the physical fabric of all our cities, even if the alternative were a certainty of a Soviet invasion of Western Europe. And of course the devastating consequences of thermonuclear war would have spread far beyond the countries that faced a direct threat.
The threat of global annihilation involving tens of thousands of H-bombs is thankfully in abeyance—even though there is now more reason to worry that smaller nuclear arsenals might be used in a regional context, or even by terrorists. But when we recall the geopolitical convulsions of the last century—two world wars, the rise and fall of the Soviet Union, and so forth—we can’t rule out, later in the present century, a drastic global realignment leading to a standoff between new superpowers. So a new generation may face its own “Cuba”—and one that could be handled less well or less luckily than the Cuban missile crisis was.
We will always have to worry about thermonuclear weapons. But a new trigger for societal breakdown will be the environmental stresses consequent on climate change. Many still hope that our civilization can segue toward a low-carbon future without trauma and disaster. My pessimistic guess, however, is that global annual CO2 emissions won’t be turned around in the next twenty years. But by then we’ll know—perhaps from advanced computer modeling, but also from how much global temperatures have actually risen—whether or not the feedback from water vapor and clouds strongly amplifies the effect of CO2 itself in creating a greenhouse effect.
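The standard textbook way to express such amplification (my gloss, not the essay’s own formulation) is a dimensionless feedback factor $f$ that scales the no-feedback warming $\Delta T_0$:

$$\Delta T = \frac{\Delta T_0}{1 - f}$$

A net feedback of $f = 0.5$ doubles the direct warming from CO2 alone, and the response grows ever steeper as $f$ approaches 1, which is why pinning down the water-vapor and cloud terms matters so much.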
If these feedbacks are indeed important, and the world consequently seems on a rapidly warming trajectory because international efforts to reduce emissions haven’t been successful, there may be pressure for “panic measures.” These would have to involve a “Plan B”—being fatalistic about continuing dependence on fossil fuels but combating its effects by some form of geoengineering.
That would be a political nightmare: Not all nations would want to adjust the thermostat the same way, and the science would still not be reliable enough to predict what would actually happen. Even worse, techniques such as injecting dust into the stratosphere or “seeding” the oceans may become cheap enough that plutocratic individuals could finance and implement them. This is a recipe for dangerous and possibly runaway unintended consequences, especially if some want a warmer Arctic whereas others want to avoid further warming of the land at lower latitudes.
Nuclear weapons are the worst downside of 20th-century science. But there are novel concerns stemming from the effects of fast-developing 21st-century technologies. Our interconnected world depends on elaborate networks: electric power grids, air-traffic control, international finance, just-in-time delivery, and so forth. Unless these are highly resilient, their manifest benefits could be outweighed by catastrophic (albeit rare) breakdowns cascading through the system.
Moreover, a contagion of social and economic breakdown would spread worldwide via computer networks and “digital wildfire”—literally at the speed of light. The threat is terror as well as error. Concern about cyberattack, by criminals or by hostile nations, is rising sharply. Synthetic biology, likewise, offers huge potential for medicine and agriculture—but it could facilitate bioterror.
It is hard to make a clandestine H-bomb, but millions will have the capability and resources to misuse these “dual use” technologies. Freeman Dyson looks toward an era when children can design and create new organisms just as routinely as he, when young, played with a chemistry set. Were this to happen, our ecology (and even our species) would surely not survive unscathed for long. And should we worry about another sci-fi scenario—that a network of computers could develop a mind of its own and threaten us all?
In a media landscape oversaturated with sensational science stories, “end of the world” Hollywood productions, and Mayan apocalypse warnings, it may be hard to persuade the wider public that there are indeed things to worry about that could arise as unexpectedly as the 2008 financial crisis and have far greater impact. I’m worried that by 2050 desperate efforts to minimize or cope with a cluster of risks with low probability but catastrophic consequences may dominate the political agenda.