UNKNOWN UNKNOWNS

GARY MARCUS

Professor of psychology; director, NYU Center for Language & Music; author, Guitar Zero: The New Musician and the Science of Learning


There are known knowns and known unknowns, but what we should be worried about most is the unknown unknowns. Not because they are the most serious risks we face but because psychology tells us that unclear risks in the distant future are the risks we’re least likely to take seriously enough.

At least four distinct psychological mechanisms are at work. First, we are moved more by vivid information than by abstract information (even when the abstract information should in principle dominate). Second, we discount the future, rushing for the dollar now as opposed to the two dollars we could have a year later if we waited. Third, the focusing illusion (itself perhaps driven by the more general phenomenon of priming) tends to make us dwell on our most immediate problems even if more serious problems loom in the background. Fourth, we have a tendency to believe in a just world, in which nature naturally rights itself.

These four mechanisms likely derive from different sources, some stemming from systems that govern motivation (future discounting), others from systems that mediate pleasure (belief in a just world), others from the structure of our memory (the focusing illusion, and the bias from vividness). Whatever their source, the four together create a potent psychological drive for us to underweight distant future risks we cannot fully envision.

Climate change is a case in point. In 1975, the Columbia University geochemist Wallace S. Broecker published an important and prescient article in Science called “Climatic Change: Are We on the Brink of a Pronounced Global Warming?” but his worries were ignored for decades, in part because many people presumed, fallaciously, that nature would somehow automatically set itself right. (And in keeping with our tendency to draw inference primarily from vivid information, a well-crafted feature film on climate change played a significant role in gathering public attention, arguably far more so than the original Science article.)

Oxford philosopher Nick Bostrom has pointed out that the three greatest unknowns we should worry about are biotechnology, nanotechnology, and the rise of machines more intelligent than human beings. Each sounds like science fiction and has in fact been portrayed in science fiction, but each poses genuine threats. Bostrom posits “existential risks”—possible, if unlikely, calamities that would wipe out our entire species, much as an asteroid appears to have extinguished the dinosaurs. Importantly, many of these risks, in his judgment, exceed the existential risk of other concerns that occupy a considerably greater share of public attention. Climate change may be more likely, and certainly is more vivid, but is less apt to lead to the extinction of the human species (even though it could conceivably kill a significant fraction of humanity).

The truth is, we simply don’t know enough about the potential of biotechnology, nanotechnology, or future iterations of artificial intelligence to calculate what their risks are. Compelling arguments have been made that in principle any of the three could lead to human extinction. These risks may prove manageable, but I don’t think we can manage them if we don’t take them seriously. In the long run, biotech, nanotech, and AI are probably significantly more likely to help the species, by increasing productivity and limiting disease, than they are to destroy it. But we need to invest more in figuring out exactly what the risks are and preparing for them. Right now, the United States spends more than $2.5 billion a year studying climate change but (by my informal reckoning) less than 1 percent of that total studying the risk of biotech, nanotech, and AI.

What we really should be worried about is that we are not quite doing enough to prepare for the unknown.
