3. BELIEF

Alice laughed: “There’s no use trying,” she said; “one can’t believe impossible things.”

“I daresay you haven’t had much practice,” said the Queen. “When I was younger, I always did it for half an hour a day. Why, sometimes I’ve believed as many as six impossible things before breakfast.”

— LEWIS CARROLL, Through the Looking-Glass

“You have a need for other people to like and admire you, and yet you tend to be critical of yourself. While you have some personality weaknesses, you are generally able to compensate for them. You have considerable unused capacity that you have not turned to your advantage. Disciplined and self-controlled on the outside, you tend to be worrisome and insecure on the inside.”

Would you believe me if I told you that I wrote that description just for you? It’s actually a pastiche of horoscopes, constructed by a psychologist named Bertram Forer. Forer’s point was that we have a tendency to read too much into bland generalities, believing that they are (specifically) about us — even when they aren’t. Worse, we are even more prone to fall victim to this sort of trap if the bland description includes a few positive traits. Televangelists and late-night infomercials prey upon us in the same way — working hard to sound as if they are speaking to the individual listener rather than a crowd. As a species, we’re only too ready to be fooled. This chapter is, in essence, an investigation of why.

The capacity to hold explicit beliefs that we can talk about, evaluate, and reflect upon is, like language, a recently evolved innovation — ubiquitous in humans, rare or perhaps absent in most other species.[13] And what is recent is rarely fully debugged. Instead of an objective machine for discovering and encoding Truth with a capital T, our human capacity for belief is haphazard, scarred by evolution and contaminated by emotions, moods, desires, goals, and simple self-interest — and surprisingly vulnerable to the idiosyncrasies of memory. Moreover, evolution has left us distinctly gullible, which smacks more of evolutionary shortcut than good engineering. All told, though the systems that underlie our capacity for belief are powerful, they are also subject to superstition, manipulation, and fallacy. This is not trivial stuff: beliefs, and the imperfect neural tools we use to evaluate them, can lead to family conflicts, religious disputes, and even war.

In principle, an organism that trafficked in beliefs ought to have a firm grasp on the origins of its beliefs and how strongly the evidence supports them. Does my belief that Colgate is a good brand of toothpaste derive from (1) my analysis of a double-blind test conducted and published by Consumer Reports, (2) my enjoyment of Colgate’s commercials, or (3) my own comparisons of Colgate against the other “leading brands”? I should be able to tell you, but I can’t.

Because evolution built belief mainly out of off-the-shelf components that evolved for other purposes, we often lose track of where our beliefs come from — if we ever knew — and even worse, we are often completely unaware of how much we are influenced by irrelevant information.

Take, for example, the fact that students rate better-looking professors as teaching better classes. If we have positive feelings toward a given person in one respect, we tend to automatically generalize that positive regard to other traits, an illustration of what is known in psychology as the “halo effect.” The opposite applies too: see one negative characteristic, and you expect all of an individual’s traits to be negative, a sort of “pitchfork effect.” Take, for example, the truly sad study in which people were shown pictures of one of two children, one more attractive, the other less so. The subjects were then told that the child, let’s call him Junior, had just thrown a snowball, with a rock inside it, at another child; the test subjects then were asked to interpret the boy’s behavior. People who saw the unattractive picture characterized Junior as a thug, perhaps headed to reform school; those shown the more attractive picture delivered judgments that were rather more mild, suggesting, for example, that Junior was merely “having a bad day.” Study after study has shown that attractive people get better breaks in job interviews, promotions, admissions interviews, and so on, each one an illustration of how aesthetics creates noise in the channel of belief.

In the same vein, we are more likely to vote for candidates who (physically) “look more competent” than the others. And, as advertisers know all too well, we are more likely to buy a particular brand of beer if we see an attractive person drinking it, more likely to want a pair of sneakers if we see a successful athlete like Michael Jordan wearing them. And though it may be irrational for a bunch of teenagers to buy a particular brand of sneakers so they can “be like Mike,” the halo effect, ironically, makes it entirely rational for Nike to spend millions of dollars to secure His Airness’s endorsement. And, in a particularly shocking recent study, children ages three to five gave higher ratings to foods like carrots, milk, and apple juice if they came in McDonald’s packaging. Books and covers, carrots and Styrofoam packaging. We are born to be suckered.

The halo effect (and its devilish opposite) is really just a special case of a more general phenomenon: just about anything that hangs around in our mind, even a stray word or two, can influence how we perceive the world and what we believe. Take, for example, what happens if I ask you to memorize this list of words: furniture, self-confident, corner, adventuresome, chair, table, independent, and television. (Got that? What follows is more fun if you really do try to memorize the list.)

Now read the following sketch, about a man named Donald:

Donald spent a great amount of his time in search of what he liked to call excitement. He had already climbed Mt. McKinley, shot the Colorado rapids in a kayak, driven in a demolition derby, and piloted a jet-powered boat — without knowing very much about boats. He had risked injury, and even death, a number of times. Now he was in search of new excitement. He was thinking, perhaps, he would do some skydiving or maybe cross the Atlantic in a sailboat.

To test your comprehension, I ask you to sum up Donald in a single word. And the word that pops into your mind is… (see the footnote).[14] Had you memorized a slightly different list, say, furniture, conceited, corner, reckless, chair, table, aloof, and television, the first word that would have come to mind would likely be different — not adventuresome, but reckless. Donald may perfectly well be both reckless and adventuresome, but the connotations of each word are very different — and people tend to pick a characterization that relates to what was already on their mind (in this case, slyly implanted by the memory list). Which is to say that your impression of Donald is swayed by a bit of information (the words in the memory list) that ought to be entirely irrelevant.

Another phenomenon, called the “focusing illusion,” shows how easy it is to manipulate people simply by directing their attention to one bit of information or another. In one simple but telling study, college students were asked to answer two questions: “How happy are you with your life in general?” and “How many dates did you have last month?” One group heard the questions in exactly that order, while another heard them in the opposite order, second question first. In the group that heard the question about happiness first, there was almost no correlation between the people’s answers; some people who had few dates reported that they were happy, some people with many dates reported that they were sad, and so forth. Flipping the order of the questions, however, put people’s focus squarely on romance; suddenly, they could not see their happiness as independent of their love life. People with lots of dates saw themselves as happy, people with few dates viewed themselves as sad. Period. People’s judgments in the dates-first condition (but not in the happiness-first condition) were strongly correlated with the number of dates they’d had. This may not surprise you, but it ought to, because it highlights just how malleable our beliefs really are. Even our own internal sense of self can be influenced by what we happen to focus on at a given moment.

The bottom line is that every belief passes through the unpredictable filter of contextual memory. Either we directly recall a belief that we formed earlier, or we calculate what we believe based on whatever memories we happen to bring to mind.

Yet few people realize the extent to which beliefs can be contaminated by vagaries of memory. Take the students who heard the dating question first. They presumably thought that they were answering the happiness question as objectively as they could; only an exceptionally self-aware undergraduate would realize that the answer to the second question might be biased by the answer to the first. Which is precisely what makes mental contamination so insidious. Our subjective impression that we are being objective rarely matches the objective reality: no matter how hard we try to be objective, human beliefs, because they are mediated by memory, are inevitably swayed by minutiae that we are only dimly aware of.

From an engineering standpoint, humans would presumably be far better off if evolution had supplemented our contextually driven memory with a way of systematically searching our inventory of memories. Just as a pollster’s data are most accurate if taken from a representative cross section of a population, a human’s beliefs would be soundest if they were based on a balanced set of evidence. But alas, evolution never discovered the statistician’s notion of an unbiased sample.

Instead, we routinely take whatever memories are most recent or most easily remembered to be much more important than any other data. Consider, for example, an experience I had recently, driving cross-country and wondering what time I’d arrive at the next motel. When traffic was moving well, I’d think to myself, “Wow, I’m driving at 80 miles per hour on the interstate; I’ll be there in an hour.” When traffic slowed due to construction, I’d say, “Oh no, it’ll take me two hours.” What I was almost comically unable to do was to hold both data points in mind at once and take an average, saying, “Sometimes the traffic moves well, sometimes it moves poorly. I anticipate a mixture of good and bad, so I bet it will take an hour and a half.”
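The contrast between a pollster’s unbiased sample and a memory dominated by whatever happened last can be made concrete with a small, purely illustrative Python sketch. The trip data and the recency weighting below are invented for the purpose of the illustration; nothing here comes from an actual study.

```python
# Purely illustrative sketch: estimating how often traffic moves well
# from remembered trips. The data and the recency weighting are made up.

# 90 older trips, about half of them good, followed by 10 recent trips
# that were all slowed by construction.
older_trips = [1, 0] * 45        # 1 = traffic moved well, 0 = it crawled
recent_trips = [0] * 10
trips = older_trips + recent_trips

# Unbiased estimate: every memory counts equally, like a pollster's
# representative cross section of the data.
unbiased = sum(trips) / len(trips)

# Recency-biased estimate: each memory counts 0.9 times as much as the
# one after it, so the latest trips dominate the judgment.
weights = [0.9 ** (len(trips) - 1 - i) for i in range(len(trips))]
biased = sum(w * t for w, t in zip(weights, trips)) / sum(weights)

print(f"unbiased estimate of good traffic:         {unbiased:.2f}")  # about 0.45
print(f"recency-weighted estimate of good traffic: {biased:.2f}")    # much lower
```

Weighting every remembered trip equally yields an estimate near the true proportion; letting the most recent trips dominate drags the estimate toward whatever the last stretch of road happened to be like.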

Some of the world’s most mundane but common interpersonal friction flows directly from the same failure to reflect on how well our samples represent reality. When we squabble with our spouse or our roommate about whose turn it is to wash the dishes, we are probably (without realizing it) better able to remember the previous times when we ourselves took care of them (as compared to the times when our roommate or spouse did); after all, our memory is organized to focus primarily on our own experience. And we rarely compensate for that imbalance — so we come to believe we’ve done more work overall and perhaps end up in a self-righteous huff. Studies show that in virtually any collaborative enterprise, from taking care of a household to writing academic papers with colleagues, the sum of each individual’s perceived contribution exceeds the total amount of work done. We cannot remember what other people did as well as we recall what we did ourselves — which leaves everybody (even shirkers!) feeling that others have taken advantage of them. Realizing the limits of our own data sampling might make us all a lot more generous.

Mental contamination is so potent that even entirely irrelevant information can lead us by the nose. In one pioneering experiment, the psychologists Amos Tversky and Daniel Kahneman spun a wheel of fortune, marked with the numbers 1-100, and then asked their subjects a question that had nothing to do with the outcome of spinning the wheel: what percentage of African countries are in the United Nations? Most participants didn’t know for sure, so they had to estimate — fair enough. But their estimates were considerably affected by the number on the wheel. When the wheel registered 10, a typical response to the UN question was 25 percent, whereas when the wheel came up at 65, a typical response was 45 percent.[15]

This phenomenon, which has come to be known as “anchoring and adjustment,” occurs again and again. Try this one: Add 400 to the last three digits of your cell phone number. When you’re done, answer the following question: in what year did Attila the Hun’s rampage through Europe finally come to an end? The average guess of people whose phone number, plus 400, yields a sum less than 600 was A.D. 629, whereas the average guess of people whose phone number digits plus 400 came in between 1,200 and 1,399 was A.D. 979, 350 years later.[16]

What’s going on here? Why should a phone number or a spin on a wheel of fortune influence a belief about history or the composition of the UN? During the process of anchoring and adjustment, people begin at some arbitrary starting point and keep moving until they find an answer they like. If the number 10 pops up on the wheel, people start by asking themselves, perhaps unconsciously, “Is 10 a plausible answer to the UN question?” If not, they work their way up until they find a value (say, 25) that seems plausible. If 65 comes up, they may head in the opposite direction: “Is 65 a plausible answer? How about 55?” The trouble is, anchoring at a single arbitrarily chosen point can steer us toward answers that are just barely plausible: starting low leads people to the lowest plausible answer, but starting high leads them to the highest plausible answer. Neither strategy directs people to what might be the most sensible response — one in the middle of the range of plausible answers. If you think that the correct answer is somewhere between 25 and 45, why say 25 or 45? You’re probably better off guessing 35, but the psychology of anchoring means that people rarely do.
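The adjust-until-plausible procedure described above is simple enough to sketch in a few lines of Python. The plausible range of 25 to 45 and the step size are assumptions made for illustration, not parameters taken from the experiment.

```python
# Minimal sketch of anchoring and adjustment: start at an arbitrary
# anchor and nudge the estimate until it first lands in the range the
# reasoner considers plausible. The range and step size are invented.

def anchor_and_adjust(anchor, plausible_low=25, plausible_high=45, step=5):
    estimate = anchor
    while estimate < plausible_low:
        estimate += step    # anchored low: stop at the lowest plausible value
    while estimate > plausible_high:
        estimate -= step    # anchored high: stop at the highest plausible value
    return estimate

print(anchor_and_adjust(10))   # wheel lands on 10 -> estimate of 25
print(anchor_and_adjust(65))   # wheel lands on 65 -> estimate of 45
```

Neither run ever reaches 35, the middle of the plausible range; the stopping rule guarantees that the answer hugs whichever edge is nearer the anchor.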

Anchoring has gotten a considerable amount of attention in psychological literature, but it’s by no means the only illustration of how beliefs and judgments can be contaminated by peripheral or even irrelevant information. To take another example, people who are asked to hold a pen between their teeth gently, without letting it touch their lips, rate cartoons as more enjoyable than do people who hold a pen with pursed lips. Why should that be? You can get a hint if you try following these instructions while looking in a mirror: Hold a pen between your teeth “gently, without letting it touch the lips.” Now look at the shape of your lips. You’ll see that the corners are upturned, in the position of a smile. And thus, through the force of context-dependent memory, upturned lips tend to automatically lead to happy thoughts.

A similar line of experiments asked people to use their non-dominant hand (the left, for right-handed people) to write down names of celebrities as fast as they could while classifying them into categories (like, don’t like, neutral). They had to do this while either (1) pressing their dominant hand, palm down, against the top of a table or (2) pushing their dominant hand, palm upward, against the bottom of a table. Palms-up people listed more positive than negative names, while palms-down people produced more negative names than positive. Why? Palms-up people were positioned in a positive “approach” posture while palms-down people were positioned in an “avoid” posture. The data show that such subtle differences routinely affect our memories and, ultimately, our beliefs.

Another source of contamination is a kind of mental shortcut, the human tendency to believe that what is familiar is good. Take, for example, an odd phenomenon known as the “mere familiarity” effect: if you ask people to rate things like the characters in Chinese writing, they tend to prefer those that they have seen before to those they haven’t. Another study, replicated in at least 12 different languages, showed that people have a surprising attachment to the letters found in their own names, preferring words that contain those letters to words that don’t. One colleague of mine has even suggested, somewhat scandalously, that people may love famous paintings as much for their familiarity as for their beauty.

From the perspective of our ancestors, a bias in favor of the familiar may well have made sense; what great-great-great-grandma knew and didn’t kill her was probably a safer bet than what she didn’t know — which might do her in. Preference for the familiar may well have been adaptive in our ancestors, selected for in the usual ways: creatures with a taste for the well known may have had more offspring than creatures with too extreme a predilection for novelty. Likewise, our desire for comfort foods, presumably those most familiar to us, seems to increase in times of stress; again, it’s easy to imagine an adaptive explanation.

In the domain of aesthetics, there’s no real downside to preferring what I’m already used to — it doesn’t really matter whether I like this Chinese character better than that one. Likewise, if my love of 1970s disco stems from mere familiarity rather than the exquisite musicianship of Donna Summer, so be it.

But our attachment to the familiar can be problematic too, especially when we don’t recognize the extent to which it influences our putatively rational decision making. In fact, the repercussions can take on global significance. For example, people tend to prefer social policies that are already in place to those that are not, even if no well-founded data prove that the current policies are working. Rather than analyze the costs and benefits, people often use this simple heuristic: “If it’s in place, it must be working.”

One recent study suggested that people will do this even when they have no idea what policies are in place. A team of Israeli researchers decided to take advantage of the many policies and local ordinances that most people know little about. So little, in fact, that the experimenters could easily get the subjects to believe whatever they suggested; the researchers then tested how attached people had become to whatever “truth” they had been led to believe in. For example, subjects were asked to evaluate policies such as the feeding of alley cats — should it be okay, or should it be illegal? The experimenter told half the subjects that alley-cat feeding was currently legal and the other half that it wasn’t, and then asked people whether the policy should be changed. Most people favored whatever the current policy was and tended to generate more reasons to favor it over the competing policy. The researchers found similar results with made-up rules about arts-and-crafts instruction. (Should students have five hours of instruction or seven? The current policy is X.) The same sort of love-the-familiar reasoning applies, of course, in the real world, where the stakes are higher, which explains why incumbents are almost always at an advantage in an election. Even recently deceased incumbents have been known to beat their still-living opponents.[17]

The more we are threatened, the more we tend to cling to the familiar. Just think of the tendency to reach for comfort food. Other things being equal, people under threat tend to become more attached than usual to their own groups, causes, and values. Laboratory studies, for example, have shown that if you make people contemplate their own death (“Jot down, as specifically as you can, what you think will happen to you as you physically die…”), they tend to be nicer than normal to members of their own religious and ethnic groups, but more negative toward outsiders. Fears of death also tend to polarize people’s political and religious beliefs: patriotic Americans who are made aware of their own mortality are more appalled (than patriots in a control group) by the idea of using the American flag as a sieve; devout Christians who are asked to reflect upon their own death are less tolerant of someone using a crucifix as a substitute hammer. (Charities, take note: we also open up our wallets more when we’ve just thought about death.) Another study has shown that all people tend to become more negative toward minority groups in times of crisis; oddly enough, this holds true not just for members of the majority but even for members of the minority groups themselves.

People may even come to love, or at least accept, systems of government that profoundly threaten their self-interest. As the psychologist John Jost has noted, “Many people who lived under feudalism, the Crusades, slavery, communism, apartheid, and the Taliban believed that their systems were imperfect but morally defensible and [even sometimes] better than the alternatives they could envision.” In short, mental contamination can be very serious business.

Each of these examples of mental contamination — the focusing illusion, the halo effect, anchoring and adjustment, and the familiarity effect — underscores an important distinction that will recur throughout this book: as a rough guide, our thinking can be divided into two streams, one that is fast, automatic, and largely unconscious, and another that is slow, deliberate, and judicious.

The former stream, which I will refer to as the ancestral system, or the reflexive system, seems to do its thing rapidly and automatically, with or without our conscious awareness. The latter stream I will call the deliberative system, because that’s what it does: it deliberates, it considers, it chews over the facts — and tries (sometimes successfully, sometimes not) to reason with them.

The reflexive system is clearly older, found in some form in virtually every multicellular organism. It underlies many of our everyday actions, such as the automatic adjustment of our gait as we walk up and down an uneven surface, or our rapid recognition of an old friend. The deliberative system, which consciously considers the logic of our goals and choices, is a lot newer, found in only a handful of species, perhaps only humans.

As best we can tell, the two systems rely on fairly different neural substrates. Some of the reflexive system depends on evolutionarily old brain systems like the cerebellum and basal ganglia (implicated in motor control) and the amygdala (implicated in emotion). The deliberative system, meanwhile, seems to be based primarily in the forebrain, in the prefrontal cortex, which is present — but vastly smaller — in other mammals.

I describe the latter system as “deliberative” rather than, say, rational because there is no guarantee that the deliberative system will deliberate in genuinely rational ways. Although this system can, in principle, be quite clever, it often settles for reasoning that is less than ideal. In this respect, one might think of the deliberative system as a bit like the Supreme Court: its decisions may not always seem sensible, but there’s always at least an intention to be judicious.

Conversely, the reflexive system shouldn’t be presumed irrational; it is certainly more shortsighted than the deliberative system, but it likely wouldn’t exist at all if it were completely irrational. Most of the time, it does what it does well, even if (by definition) its decisions are not the product of careful thought. Similarly, although it might seem tempting, I would also caution against equating the reflexive system with emotions. Although many (such as fear) are arguably reflexive, emotions like schadenfreude — the delight one can take in a rival’s pain — are not. Moreover, a great deal of the reflexive system has little if anything to do with emotion; when we instinctively grab a railing as we stumble on a staircase, our reflexive system is clearly what kicks in to save us — but it may do so entirely without emotion. The reflexive system (really, perhaps a set of systems) is about making snap judgments based on experience, emotional or otherwise, rather than feelings per se.

Even though the deliberative system is more sophisticated, the latest in evolutionary technology, it has never really gained proper control, because its decisions are based on information that is almost invariably secondhand, courtesy of the less-than-objective ancestral system. We can reason as carefully as we like, but, as they say in computer science jargon, “garbage in, garbage out.” There’s no guarantee that the ancestral system will pass along a balanced set of data. Worse, when we are stressed, tired, or distracted, our deliberative system tends to be the first thing to go, leaving us at the mercy of our lower-tech reflexive system — just when we might need our deliberative system the most.

The unconscious influence of our ancestral system is so strong that when our conscious mind tries to get control of the situation, the effort sometimes backfires. For example, in one study, people were put under time pressure and asked to make rapid judgments. Those who were told to (deliberately) suppress sexist thoughts (themselves presumably the product of the ancestral reflexive system) actually became more likely than control subjects to have sexist thoughts. Even more pernicious is the fact that as evolution layered reason on top of contextually driven memory, it left us with the illusion of objectivity. Evolution gave us the tools to deliberate and reason, but it didn’t give us any guarantee that we’d be able to use them without interference. We feel as if our beliefs are based on cold, hard facts, but often they are shaped by our ancestral system in subtle ways that we are not even aware of.

No matter what we humans think about, we tend to pay more attention to stuff that fits in with our beliefs than stuff that might challenge them. Psychologists call this “confirmation bias.” When we have embraced a theory, large or small, we tend to be better at noticing evidence that supports it than evidence that might run counter to it.

Consider the quasi-astrological description that opened this chapter. A person who wants to believe in astrology might notice the parts that seem true (“you have a need for other people to like and admire you”) and ignore the parts that aren’t (maybe from the outside you don’t really look so disciplined after all). A person who wishes to believe in horoscopes may notice the one time that their reading seems dead-on and ignore (or rationalize) the thousands of times when their horoscopes are worded so ambiguously that they could mean anything. That’s confirmation bias.

Take, for example, an early experiment conducted by the British psychologist Peter Wason. Wason presented his subjects with a triplet of distinct numbers (for example, 2-4-6) and asked them to guess what rule might have generated it. Subjects were then asked to create new sequences and received feedback as to whether their new sequences conformed to the rule. A typical subject might guess “4-6-8,” be told yes, and proceed to try “8-10-12” and again be told yes; the subject might then conclude that the rule was something like “sequences of three even numbers with two added each time.” What most people failed to do, however, was consider potentially disconfirming evidence. For example, was 1-3-5 or 1-3-4 a valid sequence? Few subjects bothered to ask; as a consequence, hardly anybody guessed that the actual rule was simply “any sequence of three ascending numbers.” Put more generally, people all too often look for cases that confirm their theories rather than consider whether some alternative principle might work better.
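The logic of the task is easy to see in a small sketch (the candidate rules and the test triplets below are illustrative, not Wason’s actual materials): every confirming test satisfies both the subject’s hypothesis and the experimenter’s real rule, so only a triplet that the hypothesis forbids can tell the two apart.

```python
# Illustrative sketch of Wason's 2-4-6 task with two candidate rules.

def hypothesized_rule(a, b, c):
    """What a typical subject guesses: even numbers, each two more than the last."""
    return a % 2 == 0 and b == a + 2 and c == b + 2

def actual_rule(a, b, c):
    """The experimenter's real rule: any three ascending numbers."""
    return a < b < c

confirming_tests = [(4, 6, 8), (8, 10, 12), (20, 22, 24)]
disconfirming_tests = [(1, 3, 5), (1, 3, 4)]

for seq in confirming_tests:
    # Both rules say yes, so these tests cannot distinguish them.
    print(seq, hypothesized_rule(*seq), actual_rule(*seq))   # True True

for seq in disconfirming_tests:
    # The hypothesis says no, the real rule says yes: the informative case.
    print(seq, hypothesized_rule(*seq), actual_rule(*seq))   # False True
```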

In another, later study, less benign, two different groups of people saw a videotape of a child taking an academic test. One group of viewers was led to believe that the child came from a socioeconomically privileged background, the other to believe that the child came from a socioeconomically impoverished background. Those who thought the child was wealthier reported that the child was doing well and performing above grade level; the other group guessed that the child was performing below grade level.

Confirmation bias might be an inevitable consequence of contextually driven memory. Because we retrieve memory not by systematically searching for all relevant data (as computers do) but by finding things that match, we can’t help but be better at noticing things that confirm the notions we begin with. When you think about the O. J. Simpson murder trial, if you were predisposed to think he was guilty, you’re likely to find it easier to remember evidence that pointed toward his guilt (his motive, the DNA evidence, the lack of other plausible suspects) rather than evidence that cast doubt on it (the shoddy police work and that infamous glove that didn’t fit).

To consider something well, of course, is to evaluate both sides of an argument, but unless we go the extra mile of deliberately forcing ourselves to consider alternatives — not something that comes naturally — we are more prone to recall evidence consistent with an accepted proposition than evidence inconsistent with it. And since we most clearly remember information that seems consistent with our beliefs, it becomes very hard to let those beliefs go, even when they are erroneous.

The same, of course, goes for scientists. The aim of science is to take a balanced approach to evidence, but scientists are human beings, and human beings can’t help but notice evidence that confirms their own theories. Read any science texts from the past and you will stumble on not only geniuses, but also people who in hindsight seem like crackpots — flat-earthers, alchemists, and so forth. History is not kind to scientists who believed in such fictions, but a realist might recognize that in a species so dependent on memory driven by context, such slip-ups are always a risk.

In 1913 Eleanor Porter wrote one of the more influential children’s novels of the twentieth century, Pollyanna, a story of a girl who looked on the bright side of every situation. Over time, the name Pollyanna has become a commonly used term with two different connotations. It’s used in a positive way to describe eternal optimists and in a negative way to describe people whose optimism exceeds the rational bounds of reality. Pollyanna may have been a fictional character, but there’s a little bit of her in all of us, a tendency to perceive the world in positive ways that may or may not match reality. Generals and presidents fight on in wars that can’t be won, and scientists retain beliefs in pet theories long after the weight of evidence is stacked against them.

Consider the following study, conducted by the late Ziva Kunda. A group of subjects comes into the lab. They are told they’ll be playing a trivia game; before they play, they get to watch someone else, who, they are told, will play either on their team (half the subjects hear this) or on the opposite team (that’s what the other half are told). Unbeknownst to the subjects, the game is rigged; the person they’re watching proceeds to play a perfect game, getting every question right. The researchers want to know whether each subject is impressed by this. The result is straight out of Pollyanna: people who expect to play with the perfect-game-playing confederate are impressed; the guy must be great, they think. People who expect to play against the confederate are dismissive; they attribute his good performance to luck rather than skill. Same data, different interpretation: both groups of subjects observe someone play a perfect game, but what they make of that observation depends on the role they expect the observed man to play in their own life.

In a similar study, a bunch of college students viewed videos of three people having a conversation; they were asked to judge how likable each of the three was. The subjects were also told (prior to watching the video) that they would be going out on a date with one of those three people (selected at random for each subject). Inevitably, subjects tended to give their highest rating to the person they were told they would be dating — another illustration of how easily our beliefs (in this case, about someone’s likability) can be contaminated by what we wish to believe. In the words of a musical I loved as a child, Harry Nilsson’s The Point!, “You see what you want to see, and you hear what you want to hear. Dig?”

Our tendency to accept what we wish to believe (what we are motivated to believe) with much less scrutiny than what we don’t want to believe is a bias known as “motivated reasoning,” a kind of flip side to confirmation bias. Whereas confirmation bias is an automatic tendency to notice data that fit with our beliefs, motivated reasoning is the complementary tendency to scrutinize ideas more carefully if we don’t like them than if we do. Take, for example, a study in which Kunda asked subjects, half men, half women, to read an article claiming that caffeine was risky for women. In line with the notion that our beliefs — and reasoning — are contaminated by motivation, women who were heavy caffeine drinkers were more likely to doubt the conclusion than were women who were light caffeine drinkers; meanwhile, men, who thought they had nothing at stake, exhibited no such effect.

The same thing happens all the time in the real world. Indeed, one of the first scientific illustrations of motivated reasoning was not a laboratory experiment but a clever bit of real-world fieldwork conducted in 1964, just after the publication of the first Surgeon General’s report on smoking and lung cancer. The Surgeon General’s conclusion — that smoking appears to cause lung cancer — would hardly seem like news today, but at the time it was a huge deal, covered widely by the media. Two enterprising scientists went out and interviewed people, asking them to evaluate the Surgeon General’s conclusion. Sure enough, smokers were less persuaded by the report than were nonsmokers, who pretty much accepted what the Surgeon General had to say. Smokers, meanwhile, came up with all kinds of dubious counterarguments: “many smokers live a long time” (which ignored the statistical evidence that was presented), “lots of things are hazardous” (a red herring), “smoking is better than excessive eating or drinking” (again irrelevant), or “smoking is better than being a nervous wreck” (an assertion that was typically not supported by any evidence).

The reality is that we are just not born to reason in balanced ways; even sophisticated undergraduates at elite universities tend to fall prey to this weakness. One famous study, for example, asked students at Stanford University to evaluate a set of studies on the effectiveness of capital punishment. Some of the students had prior beliefs in favor of capital punishment, some against. Students readily found holes in studies that challenged what they believed but often missed equally serious problems with studies that led to conclusions that they were predisposed to agree with.

Put the contamination of belief, confirmation bias, and motivated reasoning together, and you wind up with a species prepared to believe, well, just about anything. Historically, our species has believed in a flat earth (despite evidence to the contrary), ghosts, witches, astrology, animal spirits, and the benefits of self-flagellation and bloodletting. Most of those particular beliefs are, mercifully, gone today, but some people still pay hard-earned money for psychic readings and séances, and even I sometimes hesitate before walking under a ladder. Or, to take a political example, some 18 months after the 2003 invasion of Iraq, 58 percent of people who voted for George W. Bush still believed there were weapons of mass destruction in Iraq, despite the evidence to the contrary.

And then there is President George W. Bush himself, who reportedly believes that he has a personal and direct line of communication with an omniscient being. Which, as far as his getting elected was concerned, was a good thing; according to a February 2007 Pew Research Center survey, 63 percent of Americans would be reluctant to vote for anyone who doesn’t believe in God.

To critics like Sam Harris (author of the book The End of Faith), that sort of thing seems downright absurd:

To see how much our culture currently partakes of… irrationality… just substitute the names of your favorite Olympian for “God” wherever this word appears in public discourse. Imagine President Bush addressing the National Prayer Breakfast in these terms: “Behind all of life and all history there is a dedication and a purpose, set by the hand of a just and faithful Zeus.” Imagine his speech to Congress (September 20, 2001) containing the sentence “Freedom and fear, justice and cruelty have always been at war and we know that Apollo is not neutral between them.”

Religion in particular enjoys the sway that it does in part because people want it to be true; among other things, religion gives people a sense that the world is just and that hard work will be rewarded. Such faith provides a sense of purpose and belonging, in both the personal and the cosmic realms; there can be no doubt that the desire to believe contributes to the capacity to do so. But none of that explains how people manage to cling to religious beliefs despite the manifest lack of direct evidence.[18] For that we must turn to the fact that evolution left us with the capacity to fool ourselves into believing what we want to believe. (If we pray and something good happens, we notice it; if nothing happens, we fail to notice the non-coincidence.) Without motivated reasoning and confirmation bias, the world might be a very different place.

As one can see in the study of cigarette smokers, biased reasoning has at least one benefit. It can help protect our self-esteem. (Of course it’s not just smokers; I’ve seen scientists do much the same thing, nitpicking desperately at studies that challenge beliefs to which they’re attached.)

The trouble, of course, is that self-deception often costs us down the road. When we fool ourselves with motivated reasoning, we may hold on to beliefs that are misguided or even delusional. They can cause social friction (when we abruptly dismiss the views of others), they can lead to self-destruction (when smokers dismiss the risks of their habit), and they can lead to scientific blunders (when scientists refuse to recognize data challenging their theories).

When people in power indulge in motivated reasoning, dismissing important signs of their own error, the results can be catastrophic. Such was probably the case, for example, in one of the great blunders in modern military history, in the spring of 1944, when Hitler, on the advice of his leading field marshal, Gerd von Rundstedt, chose to protect Calais rather than Normandy, despite the prescient lobbying of a lesser-ranked general, Erwin Rommel. Von Rundstedt’s bad advice, born of undue attachment to his own plans, cost Hitler France, and possibly the entire Western Front.[19]

Why does motivated reasoning exist in the first place? Here, the problem is not one of evolutionary inertia but a simple lack of foresight. While evolution gave us the gift of deliberate reasoning, it lacked the vision to make sure we used it wisely: nothing forces us to be evenhanded because there was no one there to foresee the dangers inherent in pairing powerful tools of reasoning with the risky temptations of self-deception. In consequence, by leaving it up to our conscious self to decide how much to use our mechanism of deliberate reasoning, evolution freed us — for better or for worse — to be as biased as we want to be.

Even when we have little at stake, what we already know — or think we know — often further contaminates our capacity to reason and form new beliefs. Take, for example, the classic form of logic known as the syllogism: a formal deductive argument consisting of major premise, minor premise, and conclusion — as stylized as a sonnet:

All men are mortal.

Socrates was a man.

Therefore, Socrates was mortal.

Nobody has trouble with this form of logic; we understand the abstract form and realize that it generalizes freely:

All glorks are frum.

Skeezer is a glork.

Therefore, Skeezer is frum.

Presto — a new way of forming beliefs: take what you know (the minor and major premises), insert them into the inferential schema (all X’s are Y, Q is an X, therefore Q is a Y), and deduce new beliefs. The beauty of the scheme is the way in which true premises are guaranteed, by the rules of logic, to lead to true conclusions.
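As a toy model, the schema can even be written as a few lines of Python, with sets standing in for categories; the categories and their members below are, of course, made up for illustration.

```python
# Toy model of the schema: all X's are Y, Q is an X, therefore Q is a Y.

men = {"Socrates", "Plato"}
mortals = {"Socrates", "Plato", "Fido"}
glorks = {"Skeezer"}
frum_things = {"Skeezer"}

def valid_deduction(x_set, y_set, q):
    """Major premise: every X is a Y. Minor premise: q is an X.
    When both premises hold, the conclusion 'q is a Y' cannot fail."""
    return x_set <= y_set and q in x_set

print(valid_deduction(men, mortals, "Socrates"))        # True: Socrates is mortal
print(valid_deduction(glorks, frum_things, "Skeezer"))  # True: Skeezer is frum
```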

The good news is that humans can do this sort of thing at all; the bad news is that, without a lot of training, we don’t do it particularly well. If the capacity to reason logically is the product of natural selection, it is also a very recent adaptation with some serious bugs yet to be worked out.

Consider, for example, this syllogism, which has a slight but important difference from the previous one:

All living things need water.

Roses need water.

Therefore, roses are living things.

Is this a valid argument? Focus on the logic, not the conclusion per se; we already know that roses are living things. The question is whether the logic is sound, whether the conclusion follows from the premises as the night follows the day. Most people think the argument is solid. But look carefully: the statement that all living things need water doesn’t preclude the possibility that some nonliving things might need water too. My car’s battery, for instance.

The poor logic of the argument becomes clearer if I simply change the words in question:

Premise 1: All insects need oxygen.

Premise 2: Mice need oxygen.

Conclusion: Therefore, mice are insects.

A creature truly noble in reason ought to see, instantaneously, that the rose and mouse arguments follow exactly the same formal structure (all X’s need Y, Z’s need Y, therefore Z’s are X’s) and ought to instantly reject all such reasoning as fallacious. But most of us need to see the two syllogisms side by side in order to get it. All too often we suspend a careful analysis of what is logical in favor of prior beliefs.

What’s going on here? In a system that was superlatively well engineered, belief and the process of drawing inferences (which soon become new beliefs) would be separate, with an iron wall between them; we would be able to distinguish what we had direct evidence for from what we had merely inferred. Instead, in the development of the human mind, evolution took a different path. Long before human beings began to engage in completely explicit, formal forms of logic (like syllogisms), creatures from fish to giraffes were probably making informal inferences, automatically, without a great deal of reflection; if apples are good to eat, pears probably are too. A monkey or a gorilla might make that inference without ever realizing that there is such a thing as an inference. Perhaps one reason people are so apt to confuse what they know with what they have merely inferred is that for our ancestors, the two were scarcely different, with much of inference arising automatically as part of belief, rather than via some separate, reflective system.

The capacity to codify the laws of logic — to recognize that “if P then Q; P; therefore Q” is valid whereas “if P then Q; Q; therefore P” is not — presumably evolved only recently, perhaps sometime after the arrival of Homo sapiens. And by that time, belief and inference were already too richly intertwined to allow the two to ever be fully separate in everyday reasoning. The result is very much a kluge: a perfectly sound system of deliberate reasoning, all too often pointlessly clouded by prejudice and prior belief.
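For readers who want to see the asymmetry spelled out, here is a small truth-table check, a purely illustrative sketch: an argument form is valid only if the conclusion is true in every case in which all the premises are true, and the second form fails that test.

```python
from itertools import product

def is_valid(premises, conclusion):
    """Check a propositional argument form by brute force over truth values."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False    # a counterexample: true premises, false conclusion
    return True

implies = lambda a, b: (not a) or b

# If P then Q; P; therefore Q (modus ponens): valid.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p],
               lambda p, q: q))    # True

# If P then Q; Q; therefore P (affirming the consequent): invalid.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q],
               lambda p, q: p))    # False, e.g. P false and Q true
```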

Studies of the brain bear this out: people evaluate syllogisms using two different neural circuits, one more closely associated with logic and spatial reasoning (bilateral parietal), the other more closely associated with prior belief (frontal-temporal). The former (logical and spatial) is effortful, the latter invoked automatically; getting the logic right is difficult.

In fact, truly explicit reasoning via logic probably isn’t something that evolved, per se, at all. When humans do manage to be rational, in a formal logical sense, it’s not because we are built that way, but because we are clever enough to learn the rules of logic (and to recognize their validity, once explained). While all normal human beings acquire language, the ability to use formal logic to acquire and reason about beliefs may be more of a cultural product than an evolutionary one, something made possible by evolution but not guaranteed by it. Formal reason seems to be present, if at all, primarily in literate cultures but difficult to discern in preliterate ones. The Russian psychologist Alexander Luria, for example, went to the mountains of central Asia in the late 1930s and asked the indigenous people to consider the logic of syllogisms like this one: “In a certain town in Siberia all bears are white. Your neighbor went to that town and he saw a bear. What color was that bear?” His respondents just didn’t get it; a typical response would be, in essence, “How should I know? Why doesn’t the professor go ask the neighbor himself?” Further studies later in the twentieth century essentially confirmed this pattern; people in nonliterate societies generally respond to queries about syllogisms by relying on the facts that they already know, apparently blind to the abstract logical relations that experimenters are inquiring about. This does not mean that people from those societies cannot learn formal logic — in general, at least the children can — but it does show that acquiring an abstract logic is not a natural, automatic phenomenon in the way that acquiring language is. This in turn suggests that formal tools for reasoning about belief are at least as much learned as they are evolved, not (as assumed by proponents of the idea that humanity is innately rational) standard equipment.

Once we decide something is true (for whatever reason), we often make up new reasons for believing it. Consider, for example, a study that I ran some years ago. Half my subjects read a report of a study that showed that good firefighting was correlated with high scores on a measure of risk-taking ability; the other half of the subjects read the opposite: they were told of a study that showed that good firefighting was negatively correlated with risk-taking ability, that is, that risk takers made poor firefighters. Each group was then further subdivided. Some people were asked to reflect on what they read, writing down reasons for why the study they read about might have gotten the results it did; others were simply kept busy with a series of difficult geometrical puzzles like those found on an IQ test.

Then, as social psychologists so often do, I pulled the rug out from under my subjects: “Headline, this news just in — the study you read about in the first part of the experiment was a fraud. The scientists who allegedly studied firefighting actually made their data up! What I’d like to know is what you really think — is firefighting really correlated with risk taking?”

Even after I told people that the original study was complete rubbish, people in the subgroups who got a chance to reflect (and create their own explanations) continued to believe whatever they had initially read. In short, if you give someone half a chance to make up their own reasons to believe something, they’ll take you up on the opportunity and start to believe it — even if their original evidence is thoroughly discredited. Rational man, if he (or she) existed, would only believe what is true, invariably moving from true premises to true conclusions. Irrational man, kluged product of evolution that he (or she) is, frequently moves in the opposite direction, starting with a conclusion and seeking reasons to believe it.

Belief, I would suggest, is stitched together out of three fundamental components: a capacity for memory (beliefs would be of no value if they came and went without any long-term hold on the mind), a capacity for inference (deriving new facts from old, as just discussed), and a capacity for, of all things, perception.

Superficially, one might think of perception and belief as separate. Perception is what we see and hear, taste, smell, or feel, while belief is what we know or think we know. But in terms of evolutionary history, the two are not as different as they initially appear. The surest path to belief is to see something. When my wife’s golden retriever, Ari, wags his tail, I believe him to be happy; mail falls through the slot, and I believe the mail has arrived. Or, as Chico Marx put it, “Who are you gonna believe, me or your own eyes?”

The trouble kicks in when we start to believe things that we don’t directly observe. And in the modern world, much of what we believe is not directly or readily observable. Our capacity to acquire new beliefs vicariously — from friends, teachers, or the media, without direct experience — is a key to what allows humans to build cultures and technologies of fabulous complexity. My canine friend Ari learns whatever he learns primarily through trial and error; I learn what I learn mainly through books, magazines, and the Internet. I may bring some skepticism to what I read. (Did journalist-investigator Seymour Hersh really have a well-placed, anonymous source? Did movie reviewer Anthony Lane really even see Clerks II?) But largely, for better or worse, I tend to believe what I read, and I learn much of what I know through that medium. Ari (also for better or worse) knows only what he sees, hears, feels, tastes, or smells.

In the early 1990s, the psychologist Daniel Gilbert, now well known for his work on happiness, tested a theory that he traced back to the seventeenth-century philosopher Baruch de Spinoza. Spinoza’s idea was that “all information is [initially] accepted during comprehension and… false information… unaccepted [only later].” As a test of Spinoza’s hypothesis, Gilbert presented subjects with true and false propositions — sometimes interrupting them with a brief, distracting tone (which required them to press a button). Just as Spinoza might have predicted, interruptions increased the chance that subjects would believe the false proposition;[20] other studies showed that people are more likely to accept falsehoods if they are distracted or put under time pressure. The ideas we encounter are, other things being equal, automatically believed — unless and until there is a chance to properly evaluate them.

This difference in order (between hearing, accepting, and evaluating versus hearing, evaluating, and then accepting) might initially seem trivial, but it has serious consequences. Take, for example, a case that was recently described on Ira Glass’s weekly radio show This American Life. A lifelong political activist who was the leading candidate for chair of New Hampshire’s Democratic Party was accused of possessing substantial amounts of child pornography. Even though his accuser, a Republican state representative, offered no proof, the accused was forced to step down, his political career essentially ruined. A two-month investigation ultimately found no evidence, but the damage was done — our legal system may be designed around the principle of “innocent until proven guilty,” but our mind is not.

Indeed, as every good lawyer knows intuitively, just asking about some possibility can increase the chance that someone will believe it. (“Isn’t it true you’ve been reading pornographic magazines since you were twelve?” “Objection — irrelevant!”) Experimental evidence bears this out: merely hearing something in the form of a question — rather than a declarative statement — is often enough to induce belief.

Why do we humans so often accept uncritically what we hear? Because of the way in which belief evolved: from machinery first used in the service of perception. And in perception, a high percentage of what we see is true (or at least it was before the era of television and Photoshop). When we see something, it’s usually safe to believe it. The cycle of belief works in the same way — we gather some bit of information, directly, through our senses, or perhaps more often, indirectly through language and communication. Either way, we tend to immediately believe it and only later, if at all, consider its veracity.

The trouble with extending this “Shoot first, ask questions later” approach to belief is that the linguistic world is much less trustworthy than the visual world. If something looks like a duck and quacks like a duck, we are licensed to think it’s a duck. But if some guy in a trenchcoat tells us he wants to sell us a duck, that’s a different story. Especially in this era of blogs, focus groups, and spin doctors, language is not always a reliable source of truth. In an ideal world, the basic logic of perception (gather information, assume true, then evaluate if there is time) would be inverted for explicit, linguistically transmitted beliefs; but instead, as is often the case, evolution took the lazy way out, building belief out of a progressive overlay of technologies, consequences be damned. Our tendency to accept what we hear and read with far too little skepticism is but one more consequence.

Yogi Berra once said that 90 percent of the game of baseball was half mental; I say, 90 percent of what we believe is half cooked. Our beliefs are contaminated by the tricks of memory, by emotion, and by the vagaries of a perceptual system that really ought to be fully separate — not to mention a logic and inference system that is as yet, in the early twenty-first century, far from fully hatched.

The dictionary defines the act of believing both as “accepting something as true” and as “being of the opinion that something exists, especially when there is no absolute proof.” Is belief about what we know to be true or what we want to be true? That it is so often difficult for members of our species to tell the difference is a pointed reminder of our origins.

Evolved from creatures that were often forced to act rather than think, Homo sapiens simply never evolved a proper system for keeping track of what we know and how we’ve come to know it, uncontaminated by what we simply wish were so.
