It is hardly possible to overrate the value… of placing human beings in contact with persons dissimilar to themselves, and with modes of thought and action unlike those with which they are familiar…. Such communication has always been, and is peculiarly in the present age, one of the primary sources of progress.
The manner in which some of the most important individual discoveries were arrived at reminds one more of a sleepwalker’s performance than an electronic brain’s.
In the spring of 1963, Geneva was swarming with diplomats. Delegations from eighteen countries had arrived for negotiations on the Nuclear Test Ban Treaty, and meetings were under way in scores of locations throughout the Swiss city. After one afternoon of discussions between the American and Russian delegations, a young KGB officer approached a forty-year-old American diplomat named David Mark. “I’m new on the Soviet delegation, and I’d like to talk to you,” he whispered to Mark in Russian, “but I don’t want to talk here. I want to have lunch with you.” After reporting the contact to colleagues at the CIA, Mark agreed, and the two men planned a meeting at a local restaurant the following day.
At the restaurant, the officer, whose name was Yuri Nosenko, explained that he’d gotten into a bit of a scrape. On his first night in Geneva, Nosenko had drunk too much and brought a prostitute back to his hotel room. When he awoke, to his horror, he found that his emergency stash of $900 in Swiss francs was missing—no small sum in 1963. “I’ve got to make it up,” Nosenko told Mark. “I can give you some information that will be very interesting to the CIA, and all I want is my money.” They set up a second meeting, to which Nosenko arrived in an obviously inebriated state. “I was snookered,” Nosenko admitted later—“very drunk.”
In exchange for the money, Nosenko promised to spy for the CIA in Moscow, and in January 1964 he met directly with CIA handlers to discuss his findings. This time, Nosenko had big news: He claimed to have handled the KGB file of Lee Harvey Oswald and said it contained nothing suggesting the Soviet Union had foreknowledge of Kennedy’s assassination, potentially ruling out Soviet involvement in the event. He was willing to share more of the file’s details with the CIA if he would be allowed to defect and resettle in the United States.
Nosenko’s offer was quickly transmitted to CIA headquarters in Langley, Virginia. It seemed like a potentially enormous break: Only months after Kennedy had been shot, determining who was behind his assassination was one of the agency’s top priorities. But how could they know if Nosenko was telling the truth? James Jesus Angleton, one of the lead agents on Nosenko’s case, was skeptical. Nosenko could be a trap—even part of a “master plot” to draw the CIA off the trail. After much discussion, the agents agreed to let Nosenko defect: If he was lying, it would indicate that the Soviet Union did know something about Oswald, and if he was telling the truth, he would be useful for counterintelligence.
As it turned out, they were wrong about both. Nosenko traveled to the United States in 1964, and the CIA collected a massive, detailed dossier on their latest catch. But almost as soon as he started the debriefing process, inconsistencies began to emerge. Nosenko claimed he’d graduated from his officer training program in 1949, but the CIA’s documents indicated otherwise. He claimed to have no access to documents that KGB officers of his station ought to have had. And why was this man with a wife and child at home in Russia defecting without them?
Angleton became more and more suspicious, especially after his drinking buddy Kim Philby was revealed to be a Soviet spy. Clearly, Nosenko was a decoy sent to dispute and undermine the intelligence the agency was getting from another Soviet defector. The debriefings became more intense. In 1964, Nosenko was thrown into solitary confinement, where for several years he endured harsh interrogation intended to break him and force him to confess. In one week, he was subjected to polygraph tests for twenty-eight and a half hours. Still, no break was forthcoming.
Not everyone at the CIA thought Nosenko was a plant. And as more details from his biography became clear, it came to seem more and more likely that the man they had imprisoned was no spymaster. Nosenko’s father was the minister of shipbuilding and a member of the Communist Party Central Committee who had buildings named after him. When young Yuri had been caught stealing at the Naval Preparatory School and was beaten up by his classmates, his mother had complained directly to Stalin; some of his classmates were sent to the Russian front as punishment. It was looking more and more as though Yuri was just “the spoiled-brat son of a top leader” and a bit of a mess. The reason for the discrepancy in graduation dates became clear: Nosenko had been held back a year in school for flunking his exam in Marxism-Leninism, and he was ashamed of it.
By 1968, the majority of senior CIA officials had come to believe that the agency was torturing an innocent man. They gave him $80,000 and set him up with a new identity somewhere in the American South. But the emotional debate over his veracity continued to haunt the CIA for decades, with “master plot” theorists sparring with those who believed he was telling the truth. In the end, six separate investigations were conducted into Nosenko’s case. When he passed away in 2008, the news of his death was relayed to the New York Times by a “senior intelligence official” who refused to be identified.
One of the officials most affected by the internal debate was an intelligence analyst by the name of Richards Heuer. Heuer had been recruited to the CIA during the Korean War, but he had always been interested in philosophy, and especially the branch known as epistemology—the study of knowledge. Although Heuer wasn’t directly involved in the Nosenko case, he was required to be briefed on it for other work he was doing, and he’d initially fallen for the “master plot” hypothesis. Years later, Heuer set out to analyze the analysts—to figure out where the flaws were in the logic that had led to Nosenko’s lost years in a CIA prison. The result is a slim volume called Psychology of Intelligence Analysis, whose preface is full of laudatory comments by Heuer’s colleagues and bosses. The book is a kind of Psychology and Epistemology 101 for would-be spooks.
For Heuer, the core lesson of the Nosenko debacle was clear: “Intelligence analysts should be self-conscious about their reasoning processes. They should think about how they make judgments and reach conclusions, not just about the judgments and conclusions themselves.”
Despite evidence to the contrary, Heuer wrote, we have a tendency to believe that the world is as it appears to be. Children eventually learn that a snack removed from view doesn’t disappear from the universe, but even as we mature we still tend to conflate seeing with believing. Philosophers call this view naïve realism, and it is as seductive as it is dangerous. We tend to believe we have full command of the facts and that the patterns we see in them are facts as well. (Angleton, the “master plot” proponent, was sure that Nosenko’s pattern of factual errors indicated that he was hiding something and was breaking under pressure.)
So what’s an intelligence analyst—or anyone who wants to get a good picture of the world, for that matter—to do? First, Heuer suggests, we have to realize that our idea of what’s real often comes to us secondhand and in a distorted form—edited, manipulated, and filtered through media, other human beings, and the many distorting elements of the human mind.
Nosenko’s case was riddled with these distorting factors, and the unreliability of the primary source was only the most obvious one. As voluminous as the set of data that the CIA had compiled on Nosenko was, it was incomplete in certain important ways: The agency knew a lot about his rank and status but had learned very little about his personal background and internal life. This led to a basic unquestioned assumption: “The KGB would never let a screw-up serve at this high level; therefore, he must be deceiving us.”
“To achieve the clearest possible image” of the world, Heuer writes, “analysts need more than information…. They also need to understand the lenses through which this information passes.” Some of these distorting lenses are outside of our heads. Like a biased sample in an experiment, a lopsided selection of data can create the wrong impression: For a number of structural and historical reasons, the CIA record on Nosenko was woefully inadequate when it came to the man’s personal history. And some of them are cognitive processes: We tend to convert “lots of pages of data” into “likely to be true,” for example. When several of them are at work at the same time, it becomes quite difficult to see what’s actually going on—a funhouse mirror reflecting a funhouse mirror reflecting reality.
This distorting effect is one of the challenges posed by personalized filters. Like a lens, the filter bubble invisibly transforms the world we experience by controlling what we see and don’t see. It interferes with the interplay between our mental processes and our external environment. In some ways, it can act like a magnifying glass, helpfully expanding our view of a niche area of knowledge. But at the same time, personalized filters limit what we are exposed to and therefore affect the way we think and learn. They can upset the delicate cognitive balance that helps us make good decisions and come up with new ideas. And because creativity is also a result of this interplay between mind and environment, they can get in the way of innovation. If we want to know what the world really looks like, we have to understand how filters shape and skew our view of it.
It’s become a bit in vogue to pick on the human brain. We’re “predictably irrational,” in the words of behavioral economist Dan Ariely’s bestselling book. Stumbling on Happiness author Dan Gilbert presents volumes of data to demonstrate that we’re terrible at figuring out what makes us happy. Like audience members at a magic show, we’re easily conned, manipulated, and misdirected.
All of this is true. But as Being Wrong author Kathryn Schulz points out, it’s only one part of the story. Human beings may be a walking bundle of miscalculations, contradictions, and irrationalities, but we’re built that way for a reason: The same cognitive processes that lead us down the road to error and tragedy are the root of our intelligence and our ability to cope with and survive in a changing world. We pay attention to our mental processes when they fail, but that distracts us from the fact that most of the time, our brains do amazingly well.
The mechanism for this is a cognitive balancing act. Without our ever thinking about it, our brains tread a tightrope between learning too much from the past and incorporating too much new information from the present. The ability to walk this line—to adjust to the demands of different environments and modalities—is one of human cognition’s most astonishing traits. Artificial intelligence has yet to come anywhere close.
In two important ways, personalized filters can upset this cognitive balance between strengthening our existing ideas and acquiring new ones. First, the filter bubble surrounds us with ideas with which we’re already familiar (and already agree), making us overconfident in our mental frameworks. Second, it removes from our environment some of the key prompts that make us want to learn. To understand how, we have to look at what’s being balanced in the first place, starting with how we acquire and store information.
Filtering isn’t a new phenomenon. It’s been around for millions of years—indeed, it was around before humans even existed. Even for animals with rudimentary senses, nearly all of the incoming information is meaningless, but a tiny sliver is important and sometimes life-preserving. One of the primary functions of the brain is to identify that sliver and decide what to do about it.
In humans, one of the first steps is to massively compress the data. As Nassim Nicholas Taleb says, “Information wants to be reduced,” and every second we reduce a lot of it—compressing most of what our eyes see and ears hear into concepts that capture the gist. Psychologists call these concepts schemata (one of them is a schema), and they’re beginning to be able to identify particular neurons or sets of neurons that correlate with each one—firing, for example, when you recognize a particular object, like a chair. Schemata ensure that we aren’t constantly seeing the world anew: Once we’ve identified something as a chair, we know how to use it.
We don’t do this only with objects; we do it with ideas as well. In a study of how people read the news, researcher Doris Graber found that stories were relatively quickly converted into schemata for the purposes of memorization. “Details that do not seem essential at the time and much of the context of a story are routinely pared,” she writes in her book Processing the News. “Such leveling and sharpening involves condensation of all features of a story.” Viewers of a news segment on a child killed by a stray bullet might remember the child’s appearance and tragic background, but not the reportage that overall crime rates are down.
Schemata can actually get in the way of our ability to directly observe what’s happening. In 1981, researcher Claudia Cohen instructed subjects to watch a video of a woman celebrating her birthday. Some were told that she was a waitress, while others were told she was a librarian. Later, the groups were asked to reconstruct the scene. The people who had been told she was a waitress remembered her having a beer; those told she was a librarian remembered her wearing glasses and listening to classical music (the video shows her doing all three). The information that didn’t jibe with her profession was more often forgotten. In some cases, schemata are so powerful they can even lead to information being fabricated: Doris Graber, the news researcher, found that up to a third of her forty-eight subjects had added details to their memories of twelve television news stories shown to them, based on the schemata those stories activated.
Once we’ve acquired schemata, we’re predisposed to strengthen them. Psychological researchers call this confirmation bias—a tendency to believe things that reinforce our existing views, to see what we want to see.
One of the first and best studies of confirmation bias comes from the end of the college football season in 1951—Princeton versus Dartmouth. Princeton hadn’t lost a game all season. Its quarterback, Dick Kazmaier, had just been on the cover of Time. Things started off pretty rough, but after Kazmaier was sent off the field in the second quarter with a broken nose, the game got really dirty. In the ensuing melee, a Dartmouth player ended up with a broken leg.
Princeton won, but afterward there were recriminations in both colleges’ papers. Princetonians blamed Dartmouth for starting the low blows; Dartmouth thought Princeton had an ax to grind once their quarterback got hurt. Luckily, there were some psychologists on hand to make sense of the conflicting versions of events.
They asked groups of students from both schools who hadn’t seen the game to watch a film of it and count how many infractions each side made. Princeton students, on average, saw 9.8 infractions by Dartmouth; Dartmouth students thought their team was guilty of only 4.3. One Dartmouth alumnus who received a copy of the film complained that his version must be missing parts—he didn’t see any of the roughhousing he’d heard about. Boosters of each school saw what they wanted to see, not what was actually on the film.
Philip Tetlock, a political scientist, found similar results when he invited a variety of academics and pundits into his office and asked them to make predictions about the future in their areas of expertise. Would the Soviet Union fall in the next ten years? In what year would the U.S. economy start growing again? For ten years, Tetlock kept asking these questions. He asked them not only of experts, but also of folks he’d brought in off the street—plumbers and schoolteachers with no special expertise in politics or history. When he finally compiled the results, even he was surprised: It wasn’t just that the normal folks’ predictions beat the experts’; the experts weren’t even close.
Why? Experts have a lot invested in the theories they’ve developed to explain the world. And after a few years of working on them, they tend to see them everywhere. For example, bullish stock analysts banking on rosy financial scenarios were unable to identify the housing bubble that nearly bankrupted the economy—even though the trends that drove it were pretty clear to anyone looking. It’s not just that experts are vulnerable to confirmation bias—it’s that they’re especially vulnerable to it.
No schema is an island: Ideas in our heads are connected in networks and hierarchies. Key isn’t a useful concept without lock, door, and a slew of other supporting ideas. If we change these concepts too quickly—altering our concept of door without adjusting lock, for example—we could end up removing or altering ideas that other ideas depend on and have the whole system come crashing down. Confirmation bias is a conservative mental force helping to shore up our schemata against erosion.
Learning, then, is a balance. Jean Piaget, one of the major figures in developmental psychology, describes it as a process of assimilation and accommodation. Assimilation happens when children adapt objects to their existing cognitive structures—as when an infant identifies every object placed in the crib as something to suck on. Accommodation happens when we adjust our schemata to new information—“Ah, this isn’t something to suck on, it’s something to make a noise with!” We modify our schemata to fit the world and the world to fit our schemata, and it’s in properly balancing the two processes that growth occurs and knowledge is built.
The filter bubble tends to dramatically amplify confirmation bias—in a way, it’s designed to. Consuming information that conforms to our ideas of the world is easy and pleasurable; consuming information that challenges us to think in new ways or question our assumptions is frustrating and difficult. This is why partisans of one political stripe tend not to consume the media of another. As a result, an information environment built on click signals will favor content that supports our existing notions about the world over content that challenges them.
During the 2008 presidential campaign, for example, rumors swirled persistently that Barack Obama, a practicing Christian, was a follower of Islam. E-mails circulated to millions, offering “proof” of Obama’s “real” religion and reminding voters that Obama spent time in Indonesia and had the middle name Hussein. The Obama campaign fought back on television and encouraged its supporters to set the facts straight. But even a front-page scandal about his Christian pastor, Rev. Jeremiah Wright, was unable to puncture the mythology. Fifteen percent of Americans stubbornly held on to the idea that Obama was a Muslim.
That’s not so surprising—Americans have never been very well informed about our politicians. What’s perplexing is that since the election, the percentage of Americans who hold that belief has nearly doubled, and the increase, according to data collected by the Pew Research Center, has been greatest among people who are college educated. People with some college education were more likely in some cases to believe the story than people with none—a strange state of affairs.
Why? According to the New Republic’s Jon Chait, the answer lies with the media: “Partisans are more likely to consume news sources that confirm their ideological beliefs. People with more education are more likely to follow political news. Therefore, people with more education can actually become mis-educated.” And while this phenomenon has always been true, the filter bubble automates it. In the bubble, the proportion of content that validates what you know goes way up.
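To make that mechanism concrete, here is a minimal sketch, in Python, of a ranker that learns only from clicks. The item labels, click probabilities, and update rule are all invented for illustration; no real service works exactly this way, but the feedback loop it demonstrates is the automation described above: confirming content gets clicked, so it gets shown, so it gets clicked again.

```python
import random

random.seed(0)

# Invented click probabilities: readers click confirming stories more often.
click_rate = {"confirming story": 0.7, "challenging story": 0.3}
scores = {name: 1.0 for name in click_rate}  # the ranker's learned scores

for _ in range(1000):
    # Mostly show whatever currently scores highest; occasionally explore.
    if random.random() < 0.9:
        shown = max(scores, key=scores.get)
    else:
        shown = random.choice(list(click_rate))
    clicked = random.random() < click_rate[shown]
    # Naive update: a click raises the score, a skip lowers it slightly.
    scores[shown] += 0.1 if clicked else -0.03

print(scores)  # the confirming story's score pulls far ahead of the other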
Which brings us to the second way the filter bubble can get in the way of learning: It can block what researcher Travis Proulx calls “meaning threats,” the confusing, unsettling occurrences that fuel our desire to understand and acquire new ideas.
Researchers at the University of California at Santa Barbara asked subjects to read one of two modified versions of “A Country Doctor,” a strange, dreamlike short story by Franz Kafka. “A seriously ill man was waiting for me in a village ten miles distant,” begins the story. “A severe snowstorm filled the space between him and me.” The doctor has no horse, but when he goes to the stable, it’s warm and there’s a horsey scent. A belligerent groom hauls himself out of the muck and offers to help the doctor. The groom calls two horses and attempts to rape the doctor’s maid, while the doctor is whisked to the patient’s house in a snowy instant. And that’s just the beginning—the weirdness escalates. The story concludes with a series of non sequiturs and a cryptic aphorism: “Once one responds to a false alarm on the night bell, there’s no making it good again—not ever.”
The Kafka-inspired version of the story includes meaning threats—incomprehensible events that threaten readers’ expectations about the world and shake their confidence in their ability to understand. But the researchers also prepared another version of the story with a much more conventional narrative, complete with a happily-ever-after ending and appropriate, cartoony illustrations. The mysteries and odd occurrences are explained. After reading one version or the other, the study’s participants were asked to switch tasks and identify patterns in a set of numbers. The group that read the version adapted from Kafka did nearly twice as well—a dramatic increase in the ability to identify and acquire new patterns. “The key to our study is that our participants were surprised by the series of unexpected events, and they had no way to make sense of them,” Proulx wrote. “Hence, they strived to make sense of something else.”
For similar reasons, a filtered environment could have consequences for curiosity. According to psychologist George Loewenstein, curiosity is aroused when we’re presented with an “information gap.” It’s a sensation of deprivation: A present’s wrapping deprives us of the knowledge of what’s in it, and as a result we become curious about its contents. But to feel curiosity, we have to be conscious that something’s being hidden. Because the filter bubble hides things invisibly, we’re not as compelled to learn about what we don’t know.
As University of Virginia media studies professor and Google expert Siva Vaidhyanathan writes in “The Googlization of Everything”: “Learning is by definition an encounter with what you don’t know, what you haven’t thought of, what you couldn’t conceive, and what you never understood or entertained as possible. It’s an encounter with what’s other—even with otherness as such. The kind of filter that Google interposes between an Internet searcher and what a search yields shields the searcher from such radical encounters.” Personalization is about building an environment that consists entirely of the adjacent unknown—the sports trivia or political punctuation marks that don’t really shake our schemata but feel like new information. The personalized environment is very good at answering the questions we have but not at suggesting questions or problems that are out of our sight altogether. It brings to mind the famous Pablo Picasso quotation: “Computers are useless. They can only give you answers.”
Stripped of the surprise of unexpected events and associations, a perfectly filtered world would provoke less learning. And there’s another mental balance that personalization can upset: the balance between open-mindedness and focus that makes us creative.
The drug Adderall is a mixture of amphetamines. Prescribed for attention deficit disorder, it’s become a staple for thousands of overscheduled, sleep-deprived students, allowing them to focus for long stretches on a single arcane research paper or complex lab assignment.
For people without ADD, Adderall also has a remarkable effect. On Erowid, an online forum for recreational drug users and “mind hackers,” there’s post after post of testimonials to the drug’s power to extend focus. “The part of my brain that makes me curious about whether I have new e-mails in my inbox apparently shut down,” author Josh Foer wrote in an article on Slate. “Normally, I can only stare at my computer screen for about 20 minutes at a time. On Adderall, I was able to work in hourlong chunks.”
In a world of constant interruptions, as work demands only increase, Adderall is a compelling value proposition. Who couldn’t use a little cognitive boost? Among the vocal class of neuroenhancement proponents, Adderall and drugs like it may even be the key to our economic future. “If you’re a fifty-five-year-old in Boston, you have to compete with a twenty-six-year-old from Mumbai now, and those kinds of pressures [to use enhancing drugs] are only going to grow,” Zack Lynch of the neurotech consulting firm NeuroInsights told a New Yorker correspondent.
But Adderall also has some serious side effects. It’s addictive. It dramatically boosts blood pressure. And perhaps most important, it seems to decrease associative creativity. After trying Adderall for a week, Foer was impressed with its powers, cranking out pages and pages of text and reading through dense scholarly articles. But, he wrote, “it was like I was thinking with blinders on.” “With this drug,” an Erowid experimenter wrote, “I become calculating and conservative. In the words of one friend, I think ‘inside the box.’” Martha Farah, the director of the University of Pennsylvania’s Center for Cognitive Neuroscience, has bigger worries: “I’m a little concerned that we could be raising a generation of very focused accountants.”
As with many psychoactive drugs, we still know little about why Adderall has the effects it has—or even entirely what the effects are. But the drug works in part by increasing levels of the neurotransmitter norepinephrine, and norepinephrine has some very particular effects: For one thing, it reduces our sensitivity to new stimuli. ADHD patients call the problem hyperfocus—a trancelike, “zoned out” ability to focus on one thing to the exclusion of everything else.
On the Internet, personalized filters could promote the same kind of intense, narrow focus you get from a drug like Adderall. If you like yoga, you get more information and news about yoga—and less about, say, bird-watching or baseball.
In fact, the search for perfect relevance and the kind of serendipity that promotes creativity push in opposite directions. “If you like this, you’ll like that” can be a useful tool, but it’s not a source for creative ingenuity. By definition, ingenuity comes from the juxtaposition of ideas that are far apart, and relevance comes from finding ideas that are similar. Personalization, in other words, may be driving us toward an Adderall society, in which hyperfocus displaces general knowledge and synthesis.
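As a toy illustration of that tension, consider a similarity-based recommender. The topic vectors and item names below are made up, and real systems are far more elaborate, but the basic behavior holds: ranking by similarity to what you already read surfaces near neighbors, not the far-apart ideas that bisociation needs.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented topic vectors over [yoga, baseball, bird-watching].
reading_history = [1.0, 0.1, 0.0]  # mostly yoga
items = {
    "advanced yoga poses":  [0.9, 0.0, 0.0],
    "yoga retreat roundup": [0.8, 0.1, 0.0],
    "baseball box scores":  [0.0, 1.0, 0.0],
    "spring warbler guide": [0.0, 0.0, 1.0],
}

# "If you like this, you'll like that": rank by similarity to past reading.
ranked = sorted(items, key=lambda name: cosine(reading_history, items[name]),
                reverse=True)
print(ranked)  # yoga first; the distant topics sink to the bottom
```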
Personalization can get in the way of creativity and innovation in three ways. First, the filter bubble artificially limits the size of our “solution horizon”—the mental space in which we search for solutions to problems. Second, the information environment inside the filter bubble will tend to lack some of the key traits that spur creativity. Creativity is a context-dependent trait: We’re more likely to come up with new ideas in some environments than in others; the contexts that filtering creates aren’t the ones best suited to creative thinking. Finally, the filter bubble encourages a more passive approach to acquiring information, which is at odds with the kind of exploration that leads to discovery. When your doorstep is crowded with salient content, there’s little reason to travel any farther.
In his seminal book The Act of Creation, Arthur Koestler describes creativity as “bisociation”—the intersection of two “matrices” of thought: “Discovery is an analogy no one has ever seen before.” Friedrich Kekule’s epiphany about the structure of a benzene molecule after a daydream about a snake eating its tail is an example. So is Larry Page’s insight to apply the technique of academic citation to search. “Discovery often means simply the uncovering of something which has always been there but was hidden from the eye by the blinkers of habit,” Koestler wrote. Creativity “uncovers, selects, re-shuffles, combines, synthesizes already existing facts, ideas, faculties, (and) skills.”
While we still have little insight into exactly where different words, ideas, and associations are located physically in the brain, researchers are beginning to be able to map the terrain abstractly. They know that when you feel as though a word is on the tip of your tongue, it usually is. And they can tell that some concepts are much further apart than others, in neural connections if not in actual physical brain space. Researcher Hans Eysenck has found evidence that the individual differences in how people do this mapping—how they connect concepts together—are the key to creative thought.
In Eysenck’s model, creativity is a search for the right set of ideas to combine. At the center of the mental search space are the concepts most directly related to the problem at hand, and as you move outward, you reach ideas that are more tangentially connected. The solution horizon delimits where we stop searching. When we’re instructed to “think outside the box,” the box represents the solution horizon, the limit of the conceptual area that we’re operating in. (Of course, solution horizons that are too wide are a problem, too, because more ideas means exponentially more combinations.)
Programmers building artificially intelligent chess masters learned the importance of the solution horizon the hard way. The early programs had the computer examine every possible combination of moves. This resulted in an explosion of possibilities, which in turn meant that even very powerful computers could only look a limited number of moves ahead. Only when programmers discovered heuristics that allowed the computers to discard some of the moves did they become powerful enough to win against the grand masters of chess. Narrowing the solution horizon, in other words, was key.
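A schematic sketch of that idea, with placeholder functions rather than a real chess engine: the search looks only a fixed number of moves ahead and, at each step, keeps only the handful of moves a rough evaluation rates most promising.

```python
def best_line_value(position, depth, evaluate, legal_moves, apply_move, beam=3):
    """Best evaluation reachable within `depth` moves.

    `evaluate`, `legal_moves`, and `apply_move` are placeholders to be
    supplied by the caller; the point is the structure, not chess itself.
    """
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    # Heuristic pruning: rate each move with a shallow evaluation and
    # discard all but the top few, instead of exploring every combination.
    moves.sort(key=lambda m: evaluate(apply_move(position, m)), reverse=True)
    return max(
        best_line_value(apply_move(position, m), depth - 1,
                        evaluate, legal_moves, apply_move, beam)
        for m in moves[:beam]
    )
```

Capping the lookahead at `depth` and the candidates at `beam` is what keeps the combinatorial explosion in check; it is also, quite literally, a narrowed solution horizon.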
In a way, the filter bubble is a prosthetic solution horizon: It provides you with an information environment that’s highly relevant to whatever problem you’re working on. Often, this’ll be highly useful: When you search for “restaurant,” it’s likely that you’re also interested in near synonyms like “bistro” or “café.” But when the problem you’re solving requires the bisociation of ideas that are indirectly related—as when Page applied the logic of academic citation to the problem of Web search—the filter bubble may narrow your vision too much.
What’s more, some of the most important creative breakthroughs are spurred by the introduction of the entirely random ideas that filters are designed to rule out.
The word serendipity originates with the fairy tale “The Three Princes of Serendip,” whose heroes are continually setting out in search of one thing and finding another. In what researchers call the evolutionary view of innovation, this element of random chance isn’t just fortuitous, it’s necessary. Innovation requires serendipity.
Since the 1960s, a group of researchers, including Donald Campbell and Dean Simonton, has been pursuing the idea that at a cultural level the process of developing new ideas looks a lot like the process of developing new species. The evolutionary process can be summed up in four words: “Blind variation, selective retention.” Blind variation is the process by which mutations and accidents change genetic code, and it’s blind because it’s chaotic—it’s variation that doesn’t know where it’s going. There’s no intent behind it, nowhere in particular that it’s headed—it’s just the random recombination of genes. Selective retention is the process by which some of the results of blind variation—the offspring—are “retained” while others perish. When problems become acute enough for enough people, the argument goes, the random recombination of ideas in millions of heads will tend to produce a solution. In fact, it’ll tend to produce the same solution in multiple different heads around the same time.
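Here is a minimal sketch of “blind variation, selective retention” written out as an algorithm, with an arbitrary toy fitness function standing in for whatever problem a culture is trying to solve; the specific numbers are invented.

```python
import random

def fitness(x):
    return -(x - 3.7) ** 2  # arbitrary toy objective with its peak at 3.7

random.seed(1)
population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(100):
    # Blind variation: random, undirected mutations with no intent behind them.
    offspring = [x + random.gauss(0, 0.5) for x in population]
    # Selective retention: only the fittest results survive to the next round.
    population = sorted(population + offspring, key=fitness, reverse=True)[:20]

print(round(max(population, key=fitness), 2))  # drifts toward 3.7
```

Run it several times with different seeds and it converges on much the same answer, which is the point of the “same solution in multiple heads” claim: undirected variation plus consistent selection is enough.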
The way we selectively combine ideas isn’t always blind: As Eysenck’s “solution horizon” suggests, we don’t try to solve our problems by combining every single idea with every other idea in our heads. But when it comes to really new ideas, innovation is in fact often blind. Aharon Kantorovich and Yuval Ne’eman are two historians of science whose work focuses on paradigm shifts, like the move from Newtonian to Einsteinian physics. They argue that “normal science”—the day-to-day process of experimentation and prediction—doesn’t benefit much from blind variation, because scientists tend to discard random combinations and strange data.
But in moments of major change, when our whole way of looking at the world shifts and recalibrates, serendipity is often at work. “Blind discovery is a necessary condition for scientific revolution,” they write, for a simple reason: The Einsteins and Copernicuses and Pasteurs of the world often have no idea what they’re looking for. The biggest breakthroughs are sometimes the ones that we least expect.
The filter bubble still offers the opportunity for some serendipity, of course. If you’re interested in football and local politics, you might still see a story about a play that gives you an idea about how to win the mayoral campaign. But overall, there will tend to be fewer random ideas around—that’s part of the point. For a quantified system like a personal filter, it’s nearly impossible to sort the usefully serendipitous and randomly provocative from the just plain irrelevant.
The second way in which the filter bubble can dampen creativity is by removing some of the diversity that prompts us to think in new and innovative ways. In one of the standard creativity tests developed by Karl Duncker in 1945, a researcher hands a subject a box of thumbtacks, a candle, and a book of matches. The subject’s job is to attach the candle to the wall so that, when lit, it doesn’t drip on the table below (or set the wall on fire). Typically, people try to tack the candle to the wall, to glue it in place by melting it, or to build elaborate structures of wax and tacks against the wall. But the solution (spoiler alert!) is quite simple: Tack the inside of the box to the wall, then place the candle in the box.
Duncker’s test gets at one of the key impediments to creativity, what early creativity researcher George Katona described as the reluctance to “break perceptual set.” When you’re handed a box full of tacks, you’ll tend to register the box itself as a container. It takes a conceptual leap to see it as a platform, but even a small change in the test makes that much more likely: If subjects receive the box separately from the tacks, they tend to see the solution much more quickly.
The process of mapping “thing with tacks in it” to the schema “container” is called coding; creative candle-platform-builders are those who are able to code objects and ideas in multiple ways. Coding, of course, is very useful: It tells you what you can do with the object; once you’ve decided that something fits in the “chair” schema, you don’t have to think twice about sitting on it. But when the coding is too narrow, it impedes creativity.
In study after study, creative people tend to see things in many different ways and put them in what researcher Arthur Cropley calls “wide categories.” The notes from a 1974 study in which participants were told to make groups of similar objects offer an amusing example of the trait in excess: “Subject 30, a writer, sorted a total of 40 objects…. In response to the candy cigar, he sorted the pipe, matches, cigar, apple, and sugar cubes, explaining that all were related to consumption. In response to the apple, he sorted only the wood block with the nail driven into it, explaining that the apple represented health and vitality (or yin) and that the wood block represented a coffin with a nail in it, or death (or yang). Other sortings were similar.”
It’s not just artists and writers who use wide categories. As Cropley points out in Creativity in Education and Learning, the physicist Niels Bohr famously demonstrated this type of creative dexterity during an exam at the University of Copenhagen in 1905. One of the questions asked students to explain how they would use a barometer (an instrument that measures atmospheric pressure) to measure the height of a building. Bohr clearly knew what the instructor was going for: Students were supposed to check the atmospheric pressure at the top and bottom of the building and do some math. Instead, he suggested a more original method: One could tie a string to the barometer, lower it, and measure the string—thinking of the instrument as a “thing with weight.”
The unamused instructor gave him a failing grade—his answer, after all, didn’t show much understanding of physics. Bohr appealed, this time offering four solutions: You could throw the barometer off the building and count the seconds until it hit the ground (barometer as mass); you could measure the length of the barometer and of its shadow, then measure the building’s shadow and calculate its height (barometer as object with length); you could tie the barometer to a string and swing it at ground level and from the top of the building to determine the difference in gravity (barometer as mass again); or you could use it to calculate air pressure. Bohr finally passed, and one moral of the story is pretty clear: Avoid smartass physicists. But the episode also explains why Bohr was such a brilliant innovator: His ability to see objects and concepts in many different ways made it easier for him to use them to solve problems.
The kind of categorical openness that supports creativity also correlates with certain kinds of luck. While science has yet to find that there are people whom the universe favors—ask people to guess a random number, and we’re all about equally bad at it—there are some traits that people who consider themselves to be lucky share. They’re more open to new experiences and new people. They’re also more distractible.
Richard Wiseman, a luck researcher at the University of Hertfordshire in England, asked groups of people who considered themselves to be lucky and unlucky to flip through a doctored newspaper and count the number of photographs in it. On the second page, a big headline said, “Stop counting—there are 43 pictures.” Another page offered 150 British pounds to readers who noticed it. Wiseman described the results: “For the most part, the unlucky would just flip past these things. Lucky people would flip through and laugh and say, ‘There are 43 photos. That’s what it says. Do you want me to bother counting?’ We’d say, ‘Yeah, carry on.’ They’d flip some more and say, ‘Do I get my 150 pounds?’ Most of the unlucky people didn’t notice.”
As it turns out, being around people and ideas unlike oneself is one of the best ways to cultivate this sense of open-mindedness and wide categories. Psychologists Charlan Nemeth and Julianne Kwan discovered that bilinguals are more creative than monolinguals—perhaps because they have to get used to the proposition that things can be viewed in several different ways. Even forty-five minutes of exposure to a different culture can boost creativity: When a group of American students was shown a slideshow about China as opposed to one about the United States, their scores on several creativity tests went up. In companies, the people who interface with multiple different units tend to be greater sources of innovation than people who interface only with their own. While nobody knows for certain what causes this effect, it’s likely that foreign ideas help us break open our categories.
But the filter bubble isn’t tuned for a diversity of ideas or of people. It’s not designed to introduce us to new cultures. As a result, living inside it, we may miss some of the mental flexibility and openness that contact with difference creates.
But perhaps the biggest problem is that the personalized Web encourages us to spend less time in discovery mode in the first place.
In Where Good Ideas Come From, science author Steven Johnson offers a “natural history of innovation,” in which he inventories and elegantly illustrates how creativity arises. Creative environments often rely on “liquid networks” where different ideas can collide in different configurations. They arrive through serendipity—we set out looking for the answer to one problem and find another—and as a result, ideas emerge frequently in places where random collision is more likely to occur. “Innovative environments,” he writes, “are better at helping their inhabitants explore the adjacent possible”—the bisociated area in which existing ideas combine to produce new ones—“because they expose a wide and diverse sample of spare parts—mechanical or conceptual—and they encourage novel ways of recombining those parts.”
His book is filled with examples of these environments, from primordial soup to coral reefs and high-tech offices, but Johnson continually returns to two: the city and the Web.
“For complicated historical reasons,” he writes, “they are both environments that are powerfully suited for the creation, diffusion, and adoption of good ideas.”
There’s no question that Johnson was right: The old, unpersonalized Web offered an environment of unparalleled richness and diversity. “Visit the ‘serendipity’ article in Wikipedia,” he writes, and “you are one click away from entries on LSD, Teflon, Parkinson’s disease, Sri Lanka, Isaac Newton, and about two hundred other topics of comparable diversity.”
But the filter bubble has dramatically changed the informational physics that determines which ideas we come in contact with. And the new, personalized Web may no longer be as well suited for creative discovery as it once was.
In the early days of the World Wide Web, when Yahoo was its king, the online terrain felt like an unmapped continent, and its users considered themselves discoverers and explorers. Yahoo was the village tavern where sailors would gather to swap tales about what strange beasts and distant lands they found out at sea. “The shift from exploration and discovery to the intent-based search of today was inconceivable,” an early Yahoo editor told search journalist John Battelle. “Now, we go online expecting everything we want to find will be there. That’s a major shift.”
This shift from a discovery-oriented Web to a search and retrieval–focused Web mirrors one other piece of the research surrounding creativity. Creativity experts mostly agree that it’s a process with at least two key parts: Producing novelty requires a lot of divergent, generative thinking—the reshuffling and recombining that Koestler describes. Then there’s a winnowing process—convergent thinking—as we survey the options for one that’ll fit the situation. The serendipitous Web attributes that Johnson praises—the way one can hop from article to article on Wikipedia—are friendly to the divergent part of that process.
But the rise of the filter bubble means that increasingly the convergent, synthetic part of the process is built in. Battelle calls Google a “database of intentions,” each query representing something that someone wants to do or know or buy. Google’s core mission, in many ways, is to transform those intentions into actions. But the better it gets at that, the worse it’ll be at providing serendipity, which, after all, is the process of stumbling across the unintended. Google is great at helping us find what we know we want, but not at finding what we don’t know we want.
To some degree, the sheer volume of information available mitigates this effect. There’s far more online content to choose from than there was in even the largest libraries. For an enterprising informational explorer, there’s endless terrain to cover. But one of the prices of personalization is that we become a bit more passive in the process. The better it works, the less exploring we have to do.
David Gelernter, a Yale professor and early supercomputing visionary, believes that computers will only serve us well when they can incorporate dream logic. “One of the hardest, most fascinating problems of this cyber-century is how to add ‘drift’ to the net,” he writes, “so that your view sometimes wanders (as your mind wanders when you’re tired) into places you hadn’t planned to go. Touching the machine brings the original topic back. We need help overcoming rationality sometimes, and allowing our thoughts to wander and metamorphose as they do in sleep.” To be truly helpful, algorithms may need to work more like the fuzzy-minded, nonlinear humans they’re supposed to serve.
In 1510, the Spanish writer Garci Rodriguez de Montalvo published a swashbuckling Odyssey-like novel, The Exploits of Esplandian, which included a description of a vast island called California:
On the right hand from the Indies exists an island called California very close to a side of the Earthly Paradise; and it was populated by black women, without any man existing there, because they lived in the way of the Amazons. They had beautiful and robust bodies, and were brave and very strong. Their island was the strongest of the World, with its cliffs and rocky shores. Their weapons were golden and so were the harnesses of the wild beasts that they were accustomed to domesticate and ride, because there was no other metal in the island than gold.
Rumors of gold propelled the legend of the island of California across Europe, prompting adventurers throughout the continent to set off in search of it. Hernán Cortés, the Spanish conquistador who led the conquest of Mexico, requested money from Spain’s king to lead a worldwide hunt. And when he landed in what we now know as Baja California in 1536, he was certain he’d found the place. It wasn’t until one of his navigators, Francisco de Ulloa, traveled up the Gulf of California to the mouth of the Colorado River that it became clear to Cortés that, gold or no, he hadn’t found the mythical island.
Despite this discovery, however, the idea that California was an island persisted for several more centuries. Other explorers discovered Puget Sound, near Vancouver, and were certain that it must connect to Baja. Dutch maps from the 1600s routinely show a long, distended fragment off the coast of America stretching half the length of the continent. It took Jesuit missionaries literally marching inland and never reaching the other side to fully repudiate the myth.
It may have persisted for one simple reason: There was no sign on the maps for “don’t know,” and so the distinction between geographic guesswork and sights that had been witnessed firsthand became blurred. One of history’s major cartographic errors, the island of California reminds us that it’s not what we don’t know that hurts us as much as what we don’t know we don’t know—what ex–secretary of defense Donald Rumsfeld famously called the unknown unknowns.
This is one other way that personalized filters can interfere with our ability to properly understand the world: They alter our sense of the map. More unsettling, they often remove its blank spots, transforming known unknowns into unknown unknowns.
Traditional, unpersonalized media often offer the promise of representativeness. A newspaper editor isn’t doing his or her job properly unless to some degree the paper is representative of the news of the day. This is one of the ways one can convert an unknown unknown into a known unknown. If you leaf through the paper, dipping into some articles and skipping over most of them, you at least know there are stories, perhaps whole sections, that you passed over. Even if you don’t read the article, you notice the headline about a flood in Pakistan—or maybe you’re just reminded that, yes, there is a Pakistan.
In the filter bubble, things look different. You don’t see the things that don’t interest you at all. You’re not even latently aware that there are major events and ideas you’re missing. Nor can you take the links you do see and assess how representative they are without an understanding of what the broader environment from which they were selected looks like. As any statistician will tell you, you can’t tell how biased the sample is from looking at the sample alone: You need something to compare it to.
As a last resort, you might look at your selection and ask yourself if it looks like a representative sample. Are there conflicting views? Are there different takes from different kinds of people? Even this is a blind alley, however, because with an information set the size of the Internet, you get a kind of fractal diversity: At any level, even within a very narrow information spectrum (atheist goth bowlers, say), there are lots of voices and lots of different takes.
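A small illustration with invented numbers makes the statistician’s point concrete: a sample drawn through a skewed filter can still contain plenty of internal variety, and nothing inside the sample itself reveals the skew; only a comparison with the wider population does.

```python
import random

random.seed(2)

# An invented "world" of stories, half agreeing with the reader, half dissenting.
world = ["agrees"] * 500 + ["disagrees"] * 500
# A hypothetical personalized feed that passes agreeing stories and only
# about one dissenting story in ten.
feed = [s for s in world if s == "agrees" or random.random() < 0.1]

sample = random.sample(feed, 20)
print("sample:", sample.count("disagrees"), "dissenting stories out of 20")
print("world: ", world.count("disagrees"), "dissenting stories out of", len(world))
# The sample still shows some variety, so it looks representative from the
# inside; only holding it up against the full population exposes the bias.
```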
We’re never able to experience the whole world at once. But the best information tools give us a sense of where we stand in it—literally, in the case of a library, and figuratively in the case of a newspaper front page. This was one of the CIA’s primary errors with Yuri Nosenko. The agency had collected a specialized subset of information about Nosenko without realizing how specialized it was, and thus despite the many brilliant analysts working for years on the case, it missed what would have been obvious from a whole picture of the man.
Because personalized filters usually have no Zoom Out function, it’s easy to lose your bearings, to believe the world is a narrow island when in fact it’s an immense, varied continent.