PART I The Two-Tiered Brain

CHAPTER 1
The New Unconscious
The hidden role of our subliminal selves … what it means when you don’t call your mother

The heart has its reasons of which reason knows nothing.

—BLAISE PASCAL

WHEN MY MOTHER was eighty-five she inherited, from my son, a pet Russian tortoise named Miss Dinnerman. It lived in her yard, in a large pen enclosing both shrubs and lawn, delineated by chicken wire. My mother’s knees were starting to go, so she’d had to curtail her traditional two-hour walks around the neighborhood. She was looking for a new friend, one she could easily access, and the tortoise got the job. She decorated the pen with rocks and pieces of wood and visited the animal every day, just like she used to visit the bank teller and the cashiers at Big Lots. On occasion she even brought Miss Dinnerman flowers, which she thought made the pen look pretty, but which the tortoise treated like a delivery from the local Pizza Hut.

My mother didn’t mind when the tortoise ate her bouquets. She thought it was cute. “Look how she enjoys it,” she’d say. But despite the cushy existence, the free room and board, and the freshly cut flowers, Miss Dinnerman’s main goal in life seemed to be escape. Whenever she wasn’t eating or sleeping, Miss Dinnerman would walk the perimeter, poking around for a hole in the chicken wire. She would even try to climb it, as awkward as a skateboarder trying to scale a spiral staircase. My mother saw this behavior, too, in human terms. To her, it was a heroic effort, like POW Steve McQueen plotting his breakout from a Nazi camp in The Great Escape. “Every creature wants freedom,” my mother told me. “Even if she has it good here, she doesn’t like being confined.” My mother believed that Miss Dinnerman recognized her voice and responded to it. She believed that Miss Dinnerman understood her. “You’re reading too much into her behavior,” I told my mother. “Tortoises are primitive creatures.” I would even demonstrate my point, waving my hands and hollering like a crazy person, then pointing out how the tortoise just ignored me. “So what?” she’d say. “Your kids ignore you, and you don’t call them primitive creatures.”

It can be difficult to distinguish willed, conscious behavior from that which is habitual or automatic. Indeed, as humans, our tendency to believe in consciously motivated behavior is so powerful that we read consciousness into not only our own behaviors but those of the animal kingdom as well. We do this with our pets, of course. It’s called anthropomorphizing. The tortoise is as brave as a POW, the cat peed on the suitcase because it was mad at us for going away, the dog must hate the mailman for some good reason. Simpler organisms, too, can appear to behave with humanlike thoughtfulness and intentionality. The lowly fruit fly, for example, goes through an elaborate mating ritual, which the male initiates by tapping the female with his foreleg and vibrating his wing in order to play her a courtship song.1 If the female accepts the advance, she will do nothing, and the male will take over from there. If she is not sexually receptive, she will either strike him with her wings or legs, or run away. Though I have elicited frighteningly similar responses from human females, this fruit fly mating ritual is completely programmed. Fruit flies don’t worry about issues such as where their relationship is headed; they simply exercise a routine that is hardwired within them. In fact, their actions are so directly related to their biological constitution that scientists have discovered a chemical that, when applied to a male of the species, will, within hours, convert a heterosexual fruit fly into one that is gay.2 Even the roundworm called C. elegans—a creature made of only about a thousand cells—can appear to act with conscious intent. For instance, it may slither past a bit of perfectly digestible bacteria and toward another tidbit that awaits it elsewhere on the petri dish. One might be tempted to conclude that the roundworm is exercising its free will, as we ourselves might do when rejecting an unappealing vegetable or a high-calorie dessert. But a roundworm does not think to itself, I’d better watch my diameter; it simply moves toward the nutrient it has been programmed to hunt down.3

Animals like fruit flies and tortoises are at the lower end on the brain-power scale, but the role of automatic processing is not limited to such primitive creatures. We humans also perform many automatic, unconscious behaviors. We tend to be unaware of them, however, because the interplay between our conscious and our unconscious minds is so complex. This complexity has its roots in the physiology of our brains. As mammals, we have new layers of cortex built upon the base of our more primitive reptilian brains; and as humans, we have yet more cerebral matter built upon those. We have an unconscious mind and, superimposed upon it, a conscious brain. How much of our feelings, judgments, and behavior is due to each can be very hard to say, as we are constantly shifting back and forth between them. For example, one morning we mean to stop at the post office on the way to work, but at the key intersection, we turn right, toward the office, because we are running on autopilot—that is, acting unconsciously. Then, when trying to explain to the police officer the reason for our subsequent illegal U-turn, our conscious mind calculates the optimal excuse, while our autopilot unconscious handles the proper use of gerunds, subjunctive verbs, and indefinite articles so that our plea is expressed in fine grammatical form. If asked to step out of the car, we will consciously obey, then instinctively stand about four feet from the officer, although when talking to friends we automatically adjust that separation to about two and a half feet. (Most of us follow these unspoken rules of interpersonal distance without ever thinking about them and can’t help feeling uncomfortable when they are violated.)

Once attention is called to them, it is easy to accept many of our simple behaviors (like making that right turn) as being automatic. The real issue is the extent to which more complex and substantive behaviors, with the potential to have a much greater impact on our lives, are also automatic—even though we may feel sure that they are carefully thought through and totally rational. How does our unconscious affect our attitude about questions like Which house should I buy? Which stock should I sell? Should I hire that person to take care of my child? Or: Are bright blue eyes into which I can’t stop staring a sufficient basis for a long-term loving relationship?

If it is difficult to recognize automatic behavior in animals, it is even more difficult to recognize habitual behavior in ourselves. When I was in graduate school, long before my mother’s tortoise stage, I used to phone her around eight every Thursday night. Then, one Thursday, I didn’t. Most parents would have concluded that I forgot, or maybe that I finally “got a life” and was out for the evening. But my mother had a different interpretation. Starting around nine she began to call my apartment, asking for me. My roommate apparently didn’t mind the first four or five calls, but after that, as I discovered the next morning, her reservoir of good will had dried up. Especially when my mother started accusing her of hiding the fact that I had been severely injured and hence was not calling because I was under sedation in the local hospital. By midnight, my mother’s imagination had goosed that scenario up a couple notches—she was now accusing my roommate of covering up my recent death. “Why lie about it?” my mother asked. “I am going to find out.”

Most children would be embarrassed to learn that their mother, a person who has known them intimately their whole life, would think it more plausible that they had been killed than that they had been out on a date. But I had seen my mother exhibit such behavior before. To outsiders, she appeared to be a perfectly normal individual, except for a few quirks, like believing in evil spirits and enjoying accordion music. Those were to be expected, remnants of the culture she grew up with in the old country, Poland. But my mother’s mind worked differently from that of anyone else I knew. Today I understand why, even though my mother herself does not recognize it: decades earlier, her psyche had been restructured to view situations within a context that most of us could never imagine. It all started in 1939, when my mother was sixteen. Her own mother had died from abdominal cancer after suffering at home in excruciating pain for an entire year. Then, a short while later, my mother came home from school one day and found that her father had been taken by the Nazis. My mother and her sister, Sabina, were soon also taken away, to a forced labor camp, which her sister did not survive. Virtually overnight, my mother’s life had been transformed from that of a well-loved and well-cared-for teenager in a well-to-do family to that of an orphaned, hated, and starving slave laborer. After her liberation my mother emigrated, married, settled in a peaceful neighborhood in Chicago, and had a stable and safe lower-middle-class family existence. She no longer had any rational reason to fear the sudden loss of everything dear to her, and yet that fear has driven her interpretation of everyday events ever since.

My mother interpreted the meanings of actions through a dictionary that was different from the one most of us use, and via her own unique rules of grammar. Her interpretations had become automatic to her, not consciously arrived at. Just as we all understand spoken language without any conscious application of linguistic rules, so too did she understand the world’s message to her without any awareness that her early experiences had forever reshaped her expectations. My mother never recognized that her perceptions were skewed by the ever-present fear that at any moment justice, probability, and logic could cease to have force or meaning. Whenever I’d suggest it to her, she’d scoff at the idea of seeing a psychologist and deny that her past had had any negative effect on her view of the present. “Oh no?” I’d reply. “How come none of my friends’ parents accuse their roommates of conspiring to cover up their death?”

We all have implicit frames of reference—with luck, less extreme—that produce habitual thinking and behavior. Our experiences and actions always seem to be rooted in conscious thought, and like my mother, we can find it difficult to accept that there are hidden forces at work behind the scenes. But though those forces may be invisible, they still exert a powerful pull. In the past there was a lot of speculation about the unconscious mind, but the brain was like a black box, its workings inaccessible to our understanding. The current revolution in thinking about the unconscious came about because, with modern instruments, we can watch as different structures and substructures in the brain generate feelings and emotions. We can measure the electrical output of individual neurons. We can map the neural activity that forms a person’s thoughts. Today scientists can go beyond talking to my mother and guessing how her experiences affected her; today they can actually pinpoint the brain alterations that result from traumatic early experiences like hers and understand how such experiences cause physical changes in stress-sensitive brain regions.4

The modern concept of the unconscious, based on such studies and measurements, is often called the “new unconscious,” to distinguish it from the idea of the unconscious that was popularized by a neurologist-turned-clinician named Sigmund Freud. Early on, Freud made several notable contributions to the fields of neurology, neuropathology, and anesthesia.5 For example, he introduced the use of gold chloride to stain nerve tissue and used the technique to study the neural interconnections between the medulla oblongata, in the brain stem, and the cerebellum. In that, Freud was far ahead of his time, because it would be many decades before scientists understood the importance of brain connectivity and developed the tools needed to study it in any depth. But Freud himself did not pursue that study for long. Instead, he became interested in clinical practice. In treating his patients, Freud came to the correct conclusion that much of their behavior was governed by mental processes of which they were unaware. Lacking the technical tools with which to explore that idea in any scientific way, however, he simply talked to his patients, tried to draw them out about what was going on in the furthest recesses of their minds, observed them, and made whatever inferences he deemed valid. As we’ll see, however, such methods are unreliable, and many unconscious processes can never be directly revealed through the kind of self-reflection encouraged by therapy, because they transpire in areas of the brain not open to the conscious mind. As a result, Freud was mainly off the mark.


HUMAN BEHAVIOR IS the product of an endless stream of perceptions, feelings, and thoughts, at both the conscious and the unconscious levels. The idea that we are not aware of the cause of much of our behavior can be difficult to accept. Although Freud and his followers believed in it, among research psychologists—the scientists within the field—the idea that the unconscious is important to our behavior was, until recent years, shunned as pop psychology. As one researcher wrote, “Many psychologists were reluctant to use the word ‘unconscious’ out of fear that their colleagues would think they had gone soft in the head.”6 John Bargh, a psychologist at Yale, recounts that when he started as a graduate student at the University of Michigan, in the late 1970s, it was almost universally assumed that not only our social perceptions and our judgments but also our behaviors were conscious and deliberate.7 Anything that threatened that assumption was greeted with derision, as when Bargh told a close relative, a successful professional, about some of the early studies showing that people did things for reasons they were unaware of. Using his own experience as evidence that the studies were wrong, Bargh’s relative insisted that he was unaware of even a single instance in which he’d done something for reasons he wasn’t aware of.8 Says Bargh, “We all hold dear the idea that we’re the captain of our own soul, and we’re in charge, and it’s a very scary feeling when we’re not. In fact, that’s what psychosis is—the feeling of detachment from reality and that you’re not in control, and that’s a very frightening feeling for anyone.”

Though psychological science has now come to recognize the importance of the unconscious, the internal forces of the new unconscious have little to do with the innate drives described by Freud, such as a boy’s desire to kill his father in order to marry his mom, or a woman’s envy of the male sexual organ.9 We should certainly credit Freud with understanding the immense power of the unconscious—this was an important achievement—but we also have to recognize that science has cast serious doubt on the existence of many of the specific unconscious emotional and motivational factors he identified as molding the conscious mind.10 As the social psychologist Daniel Gilbert wrote, the “supernatural flavor of Freud’s Unbewusst [unconscious] made the concept generally unpalatable.”11

The unconscious envisioned by Freud was, in the words of a group of neuroscientists, “hot and wet; it seethed with lust and anger; it was hallucinatory, primitive, and irrational,” while the new unconscious is “kinder and gentler than that and more reality bound.”12 In the new view, mental processes are thought to be unconscious because there are portions of the mind that are inaccessible to consciousness due to the architecture of the brain, rather than because they have been subject to motivational forces like repression. The inaccessibility of the new unconscious is not considered to be a defense mechanism, or unhealthy. It is considered normal.

If there are times when a phenomenon I discuss sounds vaguely Freudian, the modern understanding of that phenomenon and its causes won’t be. The new unconscious plays a far more important role than protecting us from inappropriate sexual desires (for our mothers or fathers) or from painful memories. Instead, it is a gift of evolution that is crucial to our survival as a species. Conscious thought is a great aid in designing a car or deciphering the mathematical laws of nature, but for avoiding snake bites or cars that swerve into your path or people who may mean to harm you, only the speed and efficiency of the unconscious can save you. As we’ll see, to ensure our smooth functioning in both the physical and the social world, nature has dictated that many processes of perception, memory, attention, learning, and judgment are delegated to brain structures outside conscious awareness.


SUPPOSE YOUR FAMILY vacationed in Disneyland last summer. Looking back, you might question the rationality of having braved the crowds and ninety-five-degree heat to watch your little daughter spin in a giant teacup. But then you might remember that when you planned the trip, you assessed all the possibilities and concluded that her big smile would be all the payoff you needed. We are usually confident that we know the causes of our behavior. And sometimes that confidence is warranted. Yet if forces outside our awareness play a great role in our judgment and behavior, then we must not know ourselves as well as we think we do. I took the job because I wanted a new challenge. I like that fellow because he has a great sense of humor. I trust my gastroenterologist because she lives and breathes intestines. Each day we ask and answer many questions about our feelings and our choices. Our answers usually seem to make sense, but nonetheless they are often dead wrong.

How do I love thee? Elizabeth Barrett Browning felt she could count the ways, but chances are, she couldn’t accurately list the reasons. Today we are beginning to be able to do just that, as you’ll see when you have a look at the following table. It shows who has been marrying whom in three states of the southeastern United States.13 One would think that both the who and the whom married for love, and no doubt they did. But what is love’s source? It can be the beloved’s smile, generosity, grace, charm, sensitivity—or the size of his biceps. The source of love has been pondered for eons by lovers, poets, and philosophers, but it is probably safe to say that none of them has ever waxed eloquent about this particular factor: the person’s name. This table, however, shows that a person’s name can subtly influence your heart—if the name matches your own.

Listed along the horizontal and vertical axes are the five most common U.S. surnames. The numbers in the table represent how many marriages occurred between a bride and a groom with the corresponding names. Note that the largest numbers, by far, occur along the diagonal—that is, Smiths marry other Smiths three to five times as often as they marry Johnsons, Williamses, Joneses, or Browns. In fact, Smiths marry other Smiths about as often as they marry people with all those other names, combined. And the Johnsons, Williamses, Joneses, and Browns behave similarly. What makes the effect even more striking is that these are the raw numbers—that is, since there are almost twice as many Smiths as Browns, if all else were equal, you’d expect Browns to marry the ubiquitous Smiths far more often than the rarer Browns—but even so, by far the greatest number of marriages among Browns is to other Browns.
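To make the base-rate point concrete, here is a rough back-of-the-envelope sketch (an illustration added here, using only the stated fact that Smiths are roughly twice as common as Browns): under purely random pairing, a Brown would be about twice as likely to marry a Smith as to marry another Brown,

$$P(\text{Brown marries Smith}) \approx 2\,P(\text{Brown marries Brown}), \quad \text{since } p_{\text{Smith}} \approx 2\,p_{\text{Brown}},$$

so a table in which Brown-Brown marriages nonetheless dominate is showing a pattern that chance alone cannot explain.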

What does this tell us? People have a basic desire to feel good about themselves, and we therefore have a tendency to be unconsciously biased in favor of traits similar to our own, even such seemingly meaningless traits as our names. Scientists have even identified a discrete area of the brain, called the dorsal striatum, as the structure that mediates much of this bias.14

Research suggests that when it comes to understanding our feelings, we humans have an odd mix of low ability and high confidence. You might feel certain you took a job because it presented a challenge, but perhaps you were really more interested in the greater prestige. You might swear you like that fellow for his sense of humor, but you might really like him for his smile, which reminds you of your mother’s. You might think you trust your gastroenterologist because she is a great expert, but you might really trust her because she is a good listener. Most of us are satisfied with our theories about ourselves and accept them with confidence, but we rarely see those theories tested. Scientists, however, are now able to test those theories in the laboratory, and they have proven astonishingly inaccurate.

An example: Imagine you are on your way into a movie theater when a person who appears to be an employee of the theater comes up to you and asks if you will answer a few questions about the theater and its concessions in exchange for a free tub of popcorn and a drink. What that person doesn’t tell you is that the popcorn you will be given comes in two sizes, one smaller than the other, but both so huge that you could not possibly finish the contents—and in two “flavors,” one that subjects will later describe as “good” and “high quality,” and another that will be described as “stale,” “soggy,” and “terrible.” Nor will you be told that you are actually participating in a scientific study to measure how much you eat of the popcorn and why. Now, here’s the question the researchers were studying: What will have a greater influence on the amount of popcorn you eat, its taste or the amount you are given? To address that question, they handed out four different popcorn-and-box combinations. Moviegoers were given either good popcorn in the smaller box, good popcorn in the larger box, bad popcorn in the smaller box, or bad popcorn in the larger box. The result? People seemed to “decide” how much to eat based on box size as much as taste. Other studies support this result, showing that doubling the size of a container of snack food increases consumption by 30 to 45 percent.15

I put quotation marks around “decide” above because that word often connotes a conscious action. It’s unlikely that these decisions fit that description. The subjects did not say to themselves, This free popcorn tastes awful, but there’s plenty of it, so I may as well gorge. Instead, research such as this supports what advertisers have long suspected—that “environmental factors” such as package design, package or portion size, and menu descriptions unconsciously influence us. What is most surprising is the magnitude of the effect—and of people’s resistance to the idea that they could have been manipulated. While we sometimes acknowledge that such factors can influence other people, we usually believe—wrongly—that they cannot possibly affect us.16

In truth, environmental factors have a powerful—and unconscious—influence not only on how much we choose to eat but also on how the food tastes. For example, suppose you don’t eat just in movie theaters but sometimes go to restaurants, sometimes even restaurants that provide more than just a menu board listing various types of hamburgers. These more elegant restaurants commonly offer menus peppered with terms like “crispy cucumbers,” “velvety mashed potatoes,” and “slow-roasted beets on a bed of arugula,” as if at other restaurants the cucumbers are limp, the mashed potatoes have the texture of wool, and the beets are flash-fried, then made to sit up in an uncomfortable chair. Would a crispy cucumber, by any other name, taste as crisp? Would a bacon cheeseburger, presented in Spanish, become Mexican food? Could poetic description convert macaroni and cheese from a limerick to a haiku? Studies show that flowery modifiers not only tempt people to order the lyrically described foods but also lead them to rate those foods as tasting better than the identical foods given only a generic listing.17 If someone were to ask about your taste in fine dining and you were to say, “I lean toward food served with vivid adjectives,” you’d probably get a pretty strange look; yet a dish’s description turns out to be an important factor in how it tastes. So the next time you have friends over for dinner, don’t serve them salad from the store down the street; go for the subliminal effect and serve them a mélange of local greens.

Let’s go a step further. Which would you enjoy more: velvety mashed potatoes, or the same velvety mashed potatoes listed on the menu in a hard-to-read font? Nobody has yet done a study on the effect of fonts on the taste of mashed potatoes, but a study has been done on the effect of font on attitudes toward preparing food. In that study, participants were asked to read a recipe for creating a Japanese lunch dish, then to rate the amount of effort and skill they thought the recipe would require and how likely they were to prepare the dish at home. Subjects who were presented with the recipe in a difficult-to-read font rated the recipe as more difficult and said they were less likely to attempt to make the dish. The researchers repeated the experiment, showing other subjects a one-page description of an exercise routine instead of a recipe, and found similar results: subjects rated the exercise as harder and said they were less likely to try it when the instructions were printed in a font that was hard to read. Psychologists call this the “fluency effect.” If the form of information is difficult to assimilate, that affects our judgments about the substance of that information.18

The science of the new unconscious is full of reports about phenomena such as these, quirks in our judgment and perception of people and events, artifacts that arise from the usually beneficial ways in which our brains automatically process information. The point is that we are not like computers that crunch data in a relatively straightforward manner and calculate results. Instead, our brains are made up of a collection of many modules that work in parallel, with complex interactions, most of which operate outside of our consciousness. As a consequence, the real reasons behind our judgments, feelings, and behavior can surprise us.


IF UNTIL RECENTLY academic psychologists have been reluctant to accept the power of the unconscious, so have others in the social sciences. Economists, for example, built their textbook theories on the assumption that people make decisions in their own best interests, by consciously weighing the relevant factors. If the new unconscious is as powerful as modern psychologists and neuroscientists believe it to be, economists are going to have to rethink that assumption. Indeed, in recent years a growing minority of maverick economists have had great success questioning the theories of their more traditional colleagues. Today, behavioral economists like Caltech’s Antonio Rangel are changing the way economists think by presenting strong evidence that the textbook theories are flawed.

Rangel is nothing like what most people think of when they picture economists—theorists who pore over data and build complex computer models to describe market dynamics. A portly Spaniard who is himself a great lover of the good things in life, Rangel works with real people, often student volunteers, whom he drags into his lab to study while they taste wine or stare at candy bars after having fasted all morning. In a recent experiment, he and his colleagues showed that people would pay 40 to 61 percent more for an item of junk food if, rather than choosing from a text or image display, they were presented with the actual item.19 The study also found that if the item is presented behind Plexiglas, rather than being available for you to simply grab, your willingness to pay sinks back down to the text and image levels. Sound weird? How about rating one detergent as being superior to another because it comes in a blue-and-yellow box? Or would you buy German wine rather than French because German beer hall music was playing in the background as you walked down the liquor aisle? Would you rate the quality of silk stockings as higher because you liked their scent?

In each of these studies, people were strongly influenced by the irrelevant factors—the ones that speak to our unconscious desires and motivations, which traditional economists ignore. Moreover, when quizzed about the reasons for their decisions, the subjects proved completely unaware that those factors had influenced them. For example, in the detergent study, subjects were given three different boxes of detergent and asked to try them all out for a few weeks, then report on which they liked best and why. One box was predominantly yellow, another blue, and the third was blue with splashes of yellow. In their reports, the subjects overwhelmingly favored the detergent in the box with mixed colors. Their comments included much about the relative merits of the detergents, but none mentioned the box. Why should they? A pretty box doesn’t make the detergent work better. But in reality it was just the box that differed—the detergents inside were all identical.20 We judge products by their boxes, books by their covers, and even corporations’ annual reports by their glossy finish. That’s why doctors instinctively “package” themselves in nice shirts and ties, and why it’s not advisable for attorneys to greet clients in Budweiser T-shirts.

In the wine study, four French and four German wines, matched for price and dryness, were placed on the shelves of a supermarket in England. French and German music were played on alternate days from a tape deck on the top shelf of the display. On days when the French music played, 77 percent of the wine purchased was French, while on the days of German music, 73 percent of the wine purchased was German. Clearly, the music was a crucial factor in which type of wine shoppers chose to buy, but when asked whether the music had influenced their choice, only one shopper in seven said it had.21 In the stocking study, subjects inspected four pairs of silk stockings that, unbeknownst to them, were absolutely identical, except that each had had a different and very faint scent applied to it. The subjects “found no difficulty in telling why one pair was the best” and reported perceiving differences in texture, weave, feel, sheen, and weight. Everything but scent. Stockings with one particular scent were rated highest much more often than the others, but the subjects denied using scent as a criterion, and only 6 of the 250 subjects even noticed that the stockings had been perfumed.22

“People think that their enjoyment of a product is based on the qualities of the product, but their experience of it is also very much based on the product’s marketing,” says Rangel. “For example, the same beer, described in different ways, or labeled as different brands, or with a different price, can taste very different. The same is true for wine, even though people like to believe it’s all in the grapes, and the winemaker’s expertise.” Studies have indeed shown that when wines are tasted blind, there is little correlation between a wine’s taste and its cost, but that there is a strong correlation when the wines are not sampled blind.23 Since people generally expect higher-priced wine to taste better, Rangel was not surprised when volunteers he recruited to sip a series of wines labeled only by price rated a $90 bottle as better than another wine in the series that was marked as costing just $10.24 But Rangel had cheated: those two wines, perceived as disparate, were actually identical—they were both from the $90 bottle. More important, the study had another twist: the wine tasting was conducted while the subjects were having their brains scanned in an fMRI machine. The resulting images showed that the price of the wine increased activity in an area of the brain behind the eyes called the orbitofrontal cortex, a region that has been associated with the experience of pleasure.25 So though the two wines were not different, their taste difference was real, or at least the subjects’ relative enjoyment of the taste was.

How can a brain conclude that one beverage tastes better than another when they are physically the same? The naive view is that sensory signals, such as taste, travel from the sense organ to the region of the brain where they are experienced in a more or less straightforward fashion. But as we’ll see, brain architecture is not that simple. Though you are unaware of it, when you run cool wine over your tongue, you don’t just taste its chemical composition; you also taste its price. The same effect has been demonstrated in the Coke-Pepsi wars, only with regard to brand. The effect was long ago dubbed the “Pepsi paradox,” referring to the fact that Pepsi consistently beats Coke in blind taste tests, although people seem to prefer Coke when they know what they are drinking. Over the years, various theories have been proposed to explain this. One obvious explanation is the effect of the brand name, but if you ask people whether it is all those uplifting Coke ads they’ve seen that they are really tasting when they slurp their beverage, they almost always deny it. In the early 2000s, however, new brain-imaging studies found evidence that an area of the brain that neighbors the orbitofrontal cortex, called the ventromedial prefrontal cortex, or VMPC, is the seat of warm, fuzzy feelings such as those we experience when we contemplate a familiar brand-name product.26 In 2007, researchers recruited a group of participants whose brain scans showed significant VMPC damage, and also a group whose VMPCs were healthy. As expected, both the normal and the brain-damaged volunteers preferred Pepsi to Coke when they did not know what they were drinking. And, as expected, those with healthy brains switched their preference when they knew what they were drinking. But those who had damage to their VMPC—their brain’s “brand-appreciation” module—did not change preferences. They liked Pepsi better whether or not they knew what they were drinking. Without the ability to unconsciously experience a warm and fuzzy feeling toward a brand name, there is no Pepsi paradox.

The real lesson here has nothing to do with either wine or Pepsi. It is that what is true of beverages and brands is also true of the other ways we experience the world. Both direct, explicit aspects of life (the drink, in this case) and indirect, implicit aspects (the price or brand) conspire to create our mental experience (the taste). The key word here is “create.” Our brains are not simply recording a taste or other experience; they are creating it. That’s a theme we’ll come back to again and again. We’d like to think that, when we pass up one guacamole in favor of another, it is because we have made a conscious choice based on taste, caloric content, price, our mood, the principle that guacamole should not contain mayonnaise, or any of a hundred other factors under our control. We believe that when we choose a laptop or a laundry detergent, plan a vacation, pick a stock, take a job, assess a sports star, make a friend, judge a stranger, and even fall in love, we understand the principal factors that influenced us. Very often nothing could be further from the truth. As a result, many of our most basic assumptions about ourselves, and society, are false.


IF THE INFLUENCE of the unconscious is so great, it shouldn’t just make itself known in the isolated situations of our private lives; it ought to have a demonstrable collective effect on our society as a whole. And it does—for instance, in the financial world. Since money is very important to us, each individual should be motivated to make financial decisions based exclusively on conscious and rational deliberation. That’s why the foundations of classical economic theory are built on the idea that people do just that—that they behave rationally, in accordance with the guiding principle of their self-interest. While no one has yet figured out how to devise a general economic theory that takes into account the fact that “rationally” is not how people act, plenty of economic studies have demonstrated the societal implications of our collective deviation from the cold calculations of the conscious mind.

Consider the fluency effect I mentioned earlier. If you were debating whether to invest in a stock, you’d certainly take a look at the industry, the business climate, and the financial details of a company before deciding if you should put your money behind it. Low on any rational thinker’s list, we probably agree, would be the ease with which you can pronounce the company’s name. If you let that affect your investment decision, you probably have relatives scheming to seize control of your nest egg on the grounds that you are mentally incompetent. Still, as we saw with typefaces, the ease with which a person can process information (such as the name of a stock) does exert an unconscious effect on people’s assessment of that information. While you may find it plausible that the fluency of information might affect people’s judgment of a recipe for a Japanese dish, could it really affect a decision as important as choosing an investment? Do companies with simple names do better than companies whose names are tongue twisters?

Think about a firm preparing for an initial public offering (IPO). Its leaders will make a pitch regarding the company’s wonderful future prospects, and they will back up that pitch with data. But privately held companies are usually far less familiar to prospective investors than companies that are already on the exchange, and since the newcomers have no long public track record, there is even more guessing than usual involved in this type of investment. To see whether savvy Wall Street traders making real investments are unconsciously prejudiced against companies with hard-to-pronounce names, researchers turned to data concerning actual IPOs. As the graph below indicates, they found that investors were indeed more likely to invest in the initial public offerings of companies whose name or ticker symbols were easy to pronounce than in companies with complicated names or symbols. Notice how the effect fades over time, which is to be expected, because with time firms develop both a track record and a reputation. (In case the effect also applies to books and authors, please take note of how easy it is to pronounce my name: Ma-lah-DI-nov.)

Performance of shares with pronounceable and unpronounceable ticker codes in the NYSE 1 day, 1 week, 6 months, and 1 year after entry into the market, from 1990 to 2004. A similar effect was found concerning IPOs on the American exchange.

Researchers have found other factors irrelevant to finance (but relevant to the human psyche) that affect stock performance. Take sunshine. Psychologists have long known that sunshine exerts subtly positive effects on human behavior. For example, one researcher recruited six waitresses at a restaurant in a shopping center in Chicago to keep track of their tips and the weather over thirteen randomly chosen spring days. Customers were probably unaware that the weather influenced them, but when it was sunny outside, they were significantly more generous.27 Another study produced a similar result concerning the gratuities received by a waiter delivering meals to guests’ rooms in an Atlantic City casino.28 Could the same effect that induces customers to give an extra buck to a waiter for bringing them curly fries also apply to sophisticated traders evaluating the future earnings prospects of General Motors? Again, the idea can be tested. Much of the trading on Wall Street is, of course, done on behalf of investors located across the country, far from New York, but the trading patterns of agents in New York City have a significant effect on overall New York Stock Exchange performance. For example, at least before the global financial crisis of 2007–8, much of Wall Street’s activity was due to proprietary trading—that is, big firms trading for their own accounts. As a result, plenty of money was traded by people who had occasion to know whether the sun was shining in New York—because they lived there. And so a finance professor at the University of Massachusetts decided to look into the relationship between local New York City weather and daily changes in the indices of stocks traded on Wall Street.29 Analyzing data from 1927 to 1990, he found that both very sunny and totally cloudy weather influenced stock prices.

You would be right to be skeptical of this. There are inherent dangers in what is called data mining, the wholesale sifting through data in the hope of discovering previously unrecognized patterns. According to the laws of chance, if you look around enough, you are bound to find something interesting. That “something interesting” may be an artifact of randomness or a real trend, and telling the difference between the two can require considerable expertise. The fool’s gold in data mining is the statistical correlation that appears surprising and profound, even though it is meaningless. In the case of the sunshine study, if the connection between stock price and weather were a coincidence, one would probably find no such correlation in the data regarding stock markets in other cities. And so another pair of researchers repeated the earlier study, looking at stock market indices in twenty-six countries from 1982 through 1997.30 They confirmed the correlation. According to their statistics, if a year had included only perfectly sunny days, the market return of the New York Stock Exchange would have averaged 24.8 percent, while if a year had been made up of completely overcast days, it would have averaged only 8.7 percent. (Unfortunately, they also found that there is little or nothing to be gained from buying and selling according to this observation, because the large number of trades required to keep up with the changing weather would eat up your profits in transaction costs.)

We all make personal, financial, and business decisions, confident that we have properly weighed all the important factors and acted accordingly—and that we know how we came to those decisions. But we are aware of only our conscious influences, and so have only partial information. As a result, our view of ourselves and our motivations, and of society, is like a jigsaw puzzle with most of the pieces missing. We fill in blanks and make guesses, but the truth about us is far more complex and subtle than that which can be understood as the straightforward calculation of conscious and rational minds.


WE PERCEIVE, WE remember our experiences, we make judgments, we act—and in all of these endeavors we are influenced by factors we aren’t aware of. We’ll run into many more examples of this in the pages that follow, as I describe the different aspects of the unconscious brain. We’ll see how our brains process information through two parallel tiers, one conscious, the other unconscious, and we’ll begin to recognize the power of the unconscious. The truth is that our unconscious minds are active, purposeful, and independent. Hidden they may be, but their effects are anything but, for they play a critical role in shaping the way our conscious minds experience and respond to the world.

To begin our tour of the hidden areas of the mind, let’s consider the way we receive sensory input, the conscious and unconscious pathways through which we absorb information about the physical world.

CHAPTER 2
Senses Plus Mind Equals Reality
The two-tier system of the brain … how you can see something without knowing it

The eye that sees is not a mere physical organ but a means of perception conditioned by the tradition in which its possessor has been reared.

—RUTH BENEDICT

THE DISTINCTION BETWEEN the conscious and the unconscious has been made in one form or another since the time of the Greeks.1 Among the most influential of the thinkers delving into the psychology of consciousness was the eighteenth-century German philosopher Immanuel Kant. During his time, psychology was not an independent subject but merely a catchall category for what philosophers and physiologists discussed when they speculated about the mind.2 Their laws concerning human thought processes were not scientific laws but philosophical pronouncements. Since these thinkers required little empirical basis for their theorizing, each one was free to favor his own purely speculative theory over his rival’s purely speculative theory. Kant’s theory was that we actively construct a picture of the world rather than merely documenting objective events, that our perceptions are not based just on what exists but, rather, are somehow created—and constrained—by the general features of the mind. That belief was surprisingly near the modern perspective, though today scholars generally take a more expansive view than Kant’s of the mind’s general features, especially with regard to biases arising from our desires, needs, beliefs, and past experiences. Today we believe that when you look at your mother-in-law, the image you see is based not only on her optical qualities but also on what is going on in your head—for example, your thoughts about her bizarre child-rearing practices or whether it was a good idea to agree to live next door.

Kant felt that empirical psychology could not become a science because you cannot weigh or otherwise measure the events that occur in your brain. In the nineteenth century, however, scientists took a stab at it. One of the first practitioners was the physiologist E. H. Weber, the man who, in 1834, performed the simple experiment on the sense of touch that involved placing a small reference weight at a spot on his subjects’ skin, then asking them to judge whether a second weight was heavier or lighter than the first.3 The interesting thing Weber discovered was that the smallest difference a person could detect was proportional to the magnitude of the reference weight. For example, if you were just barely able to sense that a six-gram weight was heavier than a reference object that weighed five grams, one gram would be the smallest detectable difference. But if the reference weight were ten times heavier, the smallest difference you’d be able to detect would be ten times as great—in this case, ten grams. This doesn’t sound like an earth-shattering result, but it was crucial to the development of psychology because it made a point: through experimentation one can uncover mathematical and scientific laws of mental processing.
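In modern notation (a standard statement of what came to be called Weber’s law, not Weber’s own formulation), the finding is that the just-noticeable difference ΔI is a roughly constant fraction of the reference magnitude I:

$$\frac{\Delta I}{I} \approx k.$$

With the numbers in the example above, k ≈ 1/5: a one-gram threshold on a five-gram reference, and a ten-gram threshold on a fifty-gram reference.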

In 1879 another German psychologist, Wilhelm Wundt, petitioned the Royal Saxon Ministry of Education for money to start the world’s first psychology laboratory.4 Though his request was denied, he established the laboratory anyway, in a small classroom he had already been using, informally, since 1875. That same year, a Harvard MD and professor named William James, who had taught Comparative Anatomy and Physiology, started teaching a new course called The Relations Between Physiology and Psychology. He also set up an informal psychology laboratory in two basement rooms of Lawrence Hall. In 1891 it attained official status as the Harvard Psychological Laboratory. In recognition of their pathbreaking efforts, a Berlin newspaper referred to Wundt as “the psychological Pope of the Old World” and James as “the psychological Pope of the New World.”5 It was through their experimental work, and that of others inspired by Weber, that psychology was finally put on a scientific footing. The field that emerged was called the “New Psychology.” For a while, it was the hottest field in science.6

The pioneers of the New Psychology each had his own views about the function and importance of the unconscious. The British physiologist and psychologist William Carpenter was one of the most prescient. In his 1874 book Principles of Mental Physiology, he wrote that “two distinct trains of Mental action are carried on simultaneously, one consciously, the other unconsciously,” and that the more thoroughly we examine the mechanisms of the mind, the clearer it becomes “that not only an automatic, but an unconscious action enters largely into all its processes.”7 This was a profound insight, one we continue to build on to this day.

Despite all the provocative ideas brewing in European intellectual circles after the publication of Carpenter’s book, the next big step in understanding the brain along the lines of Carpenter’s two-trains concept came from across the ocean, from the American philosopher and scientist Charles Sanders Peirce—the man who did the studies of the mind’s ability to detect what should have been undetectable differences in weight and brightness. A friend of William James’s at Harvard, Peirce had founded the philosophical doctrine of pragmatism (though it was James who elaborated on the idea and made it famous). The name was inspired by the belief that philosophical ideas or theories should be viewed as instruments, not absolute truths, and their validity judged by their practical consequences in our lives.

Peirce had been a child prodigy.8 He wrote a history of chemistry when he was eleven. He had his own laboratory when he was twelve. At thirteen, he studied formal logic from his older brother’s textbook. He could write with both hands and enjoyed inventing card tricks. He was also, in later life, a regular user of opium, which was prescribed to relieve a painful neurological disorder. Still, he managed to turn out twelve thousand printed pages of published works, on topics ranging from the physical sciences to the social sciences. His discovery of the fact that the unconscious mind has knowledge unknown to the conscious mind—which had its unlikely origin in the incident in which he was able to form an accurate hunch about the identity of the man who stole his gold watch—was the forerunner of many other such experiments. The process of arriving seemingly by chance at a correct answer you aren’t aware of knowing is now used in what is called a “forced choice” experiment, which has become a standard tool in probing the unconscious mind. Although Freud is the cultural hero associated with popularizing the unconscious, it is really to pioneers like Wundt, Carpenter, Peirce, Jastrow, and William James that we can trace the roots of modern scientific methodology and thought about the unconscious mind.


TODAY WE KNOW that Carpenter’s “two distinct trains of Mental action” are actually more like two entire railway systems. To update Carpenter’s metaphor, we would say that the conscious and unconscious railways each comprise a myriad of densely interconnected lines, and that the two systems are also connected to each other at various points. The human mental system is thus far more complex than Carpenter’s original picture, but we’re making progress in deciphering its map of routes and stations.

What has become abundantly clear is that within this two-tier system, it is the unconscious tier that is the more fundamental. It developed early in our evolution, to deal with the basic necessities of function and survival, sensing and safely responding to the external world. It is the standard infrastructure in all vertebrate brains, while the conscious tier can be considered an optional feature. In fact, while most nonhuman species of animals can and do survive with little or no capacity for conscious symbolic thought, no animal can exist without an unconscious.

According to a textbook on human physiology, the human sensory system sends the brain about eleven million bits of information each second.9 However, anyone who has ever taken care of a few children who are all trying to talk to you at once can testify that your conscious mind cannot process anywhere near that amount. The actual amount of information we can handle has been estimated to be somewhere between sixteen and fifty bits per second. So if your conscious mind were left to process all that incoming information, your brain would freeze like an overtaxed computer. Also, though we don’t realize it, we are making many decisions each second. Should I spit out my mouthful of food because I detect a strange odor? How shall I adjust my muscles so that I remain standing and don’t tip over? What is the meaning of the words that person across the table from me is uttering? And what kind of person is he, anyway?

Evolution has provided us with an unconscious mind because our unconscious is what allows us to survive in a world requiring such massive information intake and processing. Our sensory perception, our memory recall, our everyday decisions, judgments, and activities all seem effortless—but that is only because the effort they demand is expended mainly in parts of the brain that function outside awareness.

Take speech. Most people who read the sentence “The cooking teacher said the children made good snacks” instantly understand a certain meaning for the word “made.” But if you read, “The cannibal said the children made good snacks,” you automatically interpret the word “made” in a more alarming sense. Though we think that making these distinctions is easy, the difficulty in making sense of even simple speech is well appreciated by computer scientists who struggle to create machines that can respond to natural language. Their frustration is illustrated by a possibly apocryphal story of the early computer that was given the task of translating the homily “The spirit is willing but the flesh is weak” into Russian and then back into English. According to the story, it came out: “The vodka is strong but the meat is rotten.” Luckily, our unconscious does a far better job, and handles language, sense perception, and a teeming multitude of other tasks with great speed and accuracy, leaving our deliberative conscious mind time to focus on more important things, like complaining to the person who programmed the translation software. Some scientists estimate that we are conscious of only about 5 percent of our cognitive function. The other 95 percent goes on beyond our awareness and exerts a huge influence on our lives—beginning with making our lives possible.

One sign that there is a lot of activity going on in our brains of which we are not aware comes from a simple analysis of energy consumption.10 Imagine yourself sprawled on the couch watching television; you are subject to few demands on your body. Then imagine yourself doing something physically demanding—say, racing down a street. When you run fast, the energy consumption in your muscles is multiplied by a factor of one hundred compared to the energy you use as a couch potato. That’s because, despite what you might tell your significant other, your body is working a lot harder—one hundred times so—when you’re running than when you’re stretched out on the sofa. Let’s contrast this energy multiplier with the multiplier that is applicable when you compare two forms of mental activity: vegging out, in which your conscious mind is basically idle, and playing chess. Assuming that you are a good player with an excellent knowledge of all the possible moves and strategies and are concentrating deeply, does all that conscious thought tax your conscious mind to the same degree that running taxed your muscles? No. Not remotely. Deep concentration causes the energy consumption in your brain to go up by only about 1 percent. No matter what you are doing with your conscious mind, it is your unconscious that dominates your mental activity—and therefore uses up most of the energy consumed by the brain. Regardless of whether your conscious mind is idle or engaged, your unconscious mind is hard at work doing the mental equivalent of push-ups, squats, and wind sprints.


ONE OF THE most important functions of your unconscious is the processing of data delivered by your eyes. That’s because, whether hunting or gathering, an animal that sees better eats better and avoids danger more effectively, and hence lives longer. As a result, evolution has arranged it so that about a third of your brain is devoted to processing vision: to interpreting color, detecting edges and motion, perceiving depth and distance, deciding the identity of objects, recognizing faces, and many other tasks. Think of it—a third of your brain is busy doing all those things, yet you have little knowledge of or access to the processing. All that hard work proceeds outside your awareness, and then the result is offered to your conscious mind in a neat report, with the data digested and interpreted. As a result, you never have to bother figuring out what it means if these rods or those cones in your retinas absorb this or that number of photons, or to translate optic nerve data into a spatial distribution of light intensities and frequencies, and then into shapes, spatial positions, and meaning. Instead, while your unconscious mind is working feverishly to do all those things, you can relax in bed, recognizing, seemingly without effort, the lighting fixture on the ceiling—or the words in this book. Our visual system is not only one of the most important systems within our brain, it is also among the most studied areas in neuroscience. Understanding its workings can shed a lot of light on the way the two tiers of the human mind function together—and apart.

One of the most fascinating of the studies that neuroscientists have done on the visual system involved a fifty-two-year-old African man referred to in the literature as TN. A tall, strong-looking man, a doctor who, as fate would have it, was destined to become renowned as a patient, TN took the first step on his path to pain and fame one day in 2004 when, while living in Switzerland, he had a stroke that knocked out the left side of a part of his brain called the visual cortex.

The main part of the human brain is divided into two cerebral hemispheres, which are almost mirror images of each other. Each hemisphere is divided into four lobes, a division originally motivated by the bones of the skull that overlie them. The lobes, in turn, are covered by a convoluted outer layer about the thickness of a formal dinner napkin. In humans, this outer covering, the neocortex, forms the largest part of the brain. It consists of six thinner layers, five of which contain nerve cells, and the projections that connect the layers to one another. There are also input and output connections from the neocortex to other parts of the brain and nervous system. Though thin, the neocortex is folded in a manner that allows almost three square feet of neural tissue—about the size of a large pizza—to be packed into your skull.11 Different parts of the neocortex perform different functions. The occipital lobe is located at the very back of your head, and its cortex—the visual cortex—contains the main visual processing center of the brain.

A lot of what we know about the function of the occipital lobe comes from creatures in which that lobe has been damaged. You might look askance at someone who seeks to understand the function of the brakes on a car by driving one that doesn’t have any—but scientists selectively destroy parts of animals’ brains on the theory that one can learn what those parts do by studying animals in which they no longer do it. Since university ethics committees would frown on killing off parts of the brain in human subjects, researchers also comb hospitals seeking unfortunate people whom nature or an accident has rendered suitable for their study. This can be a tedious search because Mother Nature doesn’t care about the scientific usefulness of the injuries she inflicts. TN’s stroke was noteworthy in that it pretty cleanly took out just the visual center of his brain. The only drawback—from the research point of view—was that it affected only the left side, meaning that TN could still see in half his field of vision. Unfortunately for TN, that situation lasted for just thirty-six days. Then a tragic second hemorrhage occurred, freakishly destroying what was almost the mirror image of the first region.

After the second stroke, doctors did tests to see whether it had rendered TN completely blind, for some of the blind have a small measure of residual sight. They can see light and dark, for example, or read a word if it covers the side of a barn. TN, though, could not even see the barn. The doctors who examined him after his second stroke noted that he could not discern shapes or detect movement or colors, or even the presence of an intense source of light. An exam confirmed that the visual areas in his occipital lobe were not functioning. Though the optical part of TN’s visual system was still fully functional, meaning his eyes could gather and record light, his visual cortex lacked the ability to process the information that his retinas were sending it. Because of this state of affairs—an intact optical system, but a completely destroyed visual cortex—TN became a tempting subject for scientific research, and, sure enough, while he was still in the hospital a group of doctors and researchers recruited him.

There are many experiments one can imagine performing on a blind subject like TN. One could test for an enhanced sense of hearing, for example, or memory for past visual experiences. But of all possible questions, one that would probably not make your list would be whether a blind man can sense your mood by staring at your face. Yet that is what these researchers chose to study.12

They began by placing a laptop computer a couple feet in front of TN and showing him a series of black shapes—either circles or squares—presented on a white background. Then, in the tradition of Charles Sanders Peirce, they presented him with a forced choice: when each shape appeared, they asked him to identify it. Just take a stab at it, the researchers pleaded. TN obliged. He was correct about half the time, just what one would expect if he truly had no idea what he was seeing. Now comes the interesting part. The scientists displayed a new series of images—this time, a series of angry and happy faces. The game was essentially the same: to guess, when prompted, whether the face on the screen was angry or happy. But identifying a facial expression is a far different task from perceiving a geometric shape, because faces are much more important to us than black shapes.

Faces play a special role in human behavior.13 That’s why, despite men’s usual preoccupation, Helen of Troy was said to have “the face that launched a thousand ships,” not “the breasts that launched a thousand ships.” And it’s why, when you tell your dinner guests that the tasty dish they are savoring is cow pancreas, you pay attention to their faces and not their elbows—or their words—to get a quick and accurate report of their attitudes toward organ meat. We look to faces to quickly judge whether someone is happy or sad, content or dissatisfied, friendly or dangerous. And our honest reactions to events are reflected in facial expressions controlled in large part by our unconscious minds. Expressions, as we’ll see in Chapter 5, are a key way we communicate and are difficult to suppress or fake, which is why great actors are hard to find. The importance of faces is reflected in the fact that, no matter how strongly men are drawn to the female form, or women to a man’s physique, we know of no part of the human brain dedicated to analyzing the nuances of bulging biceps or the curves of firm buttocks or breasts. But there is a discrete part of the brain that is used to analyze faces. It is called the fusiform face area. To illustrate the brain’s special treatment of faces, look at the photos of President Barack Obama here.14

The photo on the left of the right-side-up pair looks horribly distorted, while the left member of the upside-down pair does not look very unusual. In reality the bottom pair is identical to the top pair, except that the top photos have been flipped. I know because I flipped them, but if you don’t trust me just rotate this book 180 degrees, and you’ll see that what is now the top pair will appear to have the bad photo, and what is now the bottom pair will look pretty good. Your brain devotes a lot more attention (and neural real estate) to faces than to many other kinds of visual phenomena because faces are more important—but not upside-down faces, since we rarely encounter those, except when performing headstands in a yoga class. That’s why we are far better at detecting the distortion on the face that is right side up than on the one that is flipped over.

www.moillusions.com. Used with permission.

The researchers studying TN chose faces as their second series of images in the belief that the brain’s special and largely unconscious focus on faces might allow TN to improve his performance, even though he’d have no conscious awareness of seeing anything. Whether he was looking at faces, geometric shapes, or ripe peaches ought to have been a moot point, given that TN was, after all, blind. But on this test TN identified the faces as happy or angry correctly almost two times out of three. Though the part of his brain responsible for the conscious sensation of vision had obviously been destroyed, his fusiform face area was receiving the images. It was influencing the conscious choices he made in the forced-choice experiment, but TN didn’t know it.

Having heard about the first experiment involving TN, a few months later another group of researchers asked him if he would participate in a different test. Reading faces may be a special human talent, but not falling on your face is even more special. If you suddenly notice that you are about to trip over a sleeping cat, you don’t consciously ponder strategies for stepping out of the way; you just do it.15 That avoidance is governed by your unconscious, and it is the skill the researchers wanted to test in TN. They proposed to watch as he walked, without his cane, down a cluttered hallway.16

The idea excited all those involved except the person not guaranteed to remain vertical. TN refused to participate.17 He may have had some success in the face test, but what blind man would consent to navigating an obstacle course? The researchers implored him, in effect, to just do it. And they kindly offered to have an escort trail him to make sure he didn’t fall. After some prodding, he changed his mind. Then, to the amazement of everyone, including himself, he zigged and zagged his way perfectly down the corridor, sidestepping a garbage can, a stack of paper, and several boxes. He didn’t stumble once, or even collide with any objects. When asked how he’d accomplished this, TN had no explanation and, one presumes, requested the return of his cane.

The phenomenon exhibited by TN—in which individuals with intact eyes have no conscious sensation of seeing but can nevertheless respond in some way to what their eyes register—is called “blindsight.” This important discovery “elicited disbelief and howls of derision” when first reported and has only recently come to be accepted.18 But in a sense it shouldn’t have been surprising: it makes perfect sense that blindsight would result when the conscious visual system is rendered nonfunctional but a person’s eyes and unconscious system remain intact. Blindsight is a strange syndrome—a particularly dramatic illustration of the two tiers of the brain operating independently of each other.


THE FIRST PHYSICAL indication that vision occurs through multiple pathways came from a British Army doctor named George Riddoch in 1917.19 In the late nineteenth century, scientists had begun to study the importance of the occipital lobe in vision by creating lesions in dogs and monkeys. But data on humans was scarce. Then came World War I. Suddenly the Germans were turning British soldiers into promising research subjects at an alarming pace. This was partly because British helmets tended to dance atop the soldiers’ heads, which might have looked fashionable but didn’t cover them very well, especially in the back. Also, the standard in that conflict was trench warfare. As it was practiced, a soldier’s job was to keep all of his body protected by the solid earth except for his head, which he was instructed to stick up into the line of fire. As a result, 25 percent of all penetrating wounds suffered by British soldiers were head wounds, especially of the lower occipital lobe and its neighbor the cerebellum.

The same path of bullet penetration today would turn a huge swath of the brain into sausage meat and almost certainly kill the victim. But in those days bullets were slower and more discrete in their effects. They tended to bore neat tunnels through the gray matter without disturbing the surrounding tissue very much. This left the victims alive and in better condition than you might imagine given that their heads now had the topology of a doughnut. One Japanese doctor who worked under similar conditions in the Russo-Japanese War saw so many patients injured in that manner that he devised a method for mapping the precise internal brain injury—and the deficits expected—based on the relation of the bullet holes to various external landmarks on the skull. (His official job had been to determine the size of the pension owed the brain-damaged soldiers.)20

Dr. Riddoch’s most interesting patient was a Lieutenant Colonel T., who had a bullet sail through his right occipital lobe while he was leading his men into battle. After taking the hit he bravely brushed himself off and proceeded to continue leading his men. When asked how he felt, he reported being dazed but said he was otherwise just fine. He was wrong. Fifteen minutes later, he collapsed. When he woke up it was eleven days later, and he was in a hospital in India.

Although he was now conscious again, one of the first signs that something was amiss came at dinner, when Lieutenant Colonel T. noted that he had a hard time seeing bits of meat residing on the left side of his plate. In humans, the eyes are wired to the brain in such a way that visual information from the left side of your field of vision is transmitted to the right side of your brain, and vice versa, no matter which eye that information comes from. In other words, if you stare straight ahead, everything to your left is transmitted to the right hemisphere of your brain, which is where Lieutenant Colonel T. took the bullet. After he was transferred to a hospital in England, it was established that Lieutenant Colonel T. was totally blind on the left side of his visual field, with one bizarre exception. He could detect motion there. That is, he couldn’t see in the usual sense—the “moving things” had no shape or color—but he did know if something was moving. It was partial information, and of little use. In fact, it annoyed him, especially during train rides, when he would sense that things were moving past at his left but he couldn’t see anything there.

Since Lieutenant Colonel T. was consciously aware of the motion he detected, his wasn’t a case of true blindsight, as TN’s was, but still, the case was groundbreaking for its suggestion that vision is the cumulative effect of information traveling along multiple pathways, both conscious and unconscious. George Riddoch published a paper on Lieutenant Colonel T. and others like him, but unfortunately another British Army doctor, one far better known, derided Riddoch’s work. With that it virtually disappeared from the literature, not to resurface for many decades.


UNTIL RECENTLY, UNCONSCIOUS vision was difficult to investigate because patients with blindsight are exceedingly rare.21 But in 2005, Antonio Rangel’s Caltech colleague Christof Koch and a coworker came up with a powerful new way to explore unconscious vision in healthy subjects. Koch arrived at this discovery about the unconscious because of his interest in its flip side—the meaning of consciousness. If studying the unconscious was, until recently, not a good career move, Koch says that studying consciousness was, at least until the 1990s, “considered a sign of cognitive decline.” Today, however, scientists study the two subjects hand in hand, and one of the advantages of research on the visual system is that it is in some sense simpler than, say, memory or social perception.

The technique Koch’s group discovered exploits a visual phenomenon called binocular rivalry. Under the right circumstances, if one image is presented to your left eye while a different image is presented to your right eye, you won’t see both of them, somehow superimposed. Instead, you’ll perceive just one of the two images. Then, after a while, you’ll see the other image, and then the first again. The two images will alternate in that manner indefinitely. What Koch’s group found, however, was that if they present a changing image to one eye and a static one to the other, people will see only the changing image, and never the static one.22 In other words, if your right eye were exposed to a film of two monkeys playing Ping-Pong and your left to a photo of a hundred-dollar bill, you’d be unaware of the static photo even though your left eye had recorded the data and transmitted it to your brain. The technique provides a powerful tool for creating, in a sense, artificial blindsight—a new way to study unconscious vision without destroying any part of the brain.

Employing the new technique, another group of scientists performed an experiment on normal people analogous to the one the facial expression researchers performed on patient TN.23 They exposed each subject’s right eye to a colorful and rapidly changing mosaic-like image, and each subject’s left eye to a static photograph that pictured an object. That object was positioned near either the right edge of the photograph or the left, and it was their subjects’ task to guess where the object was, even though they did not consciously perceive the static photo. The researchers expected that, as in the case of TN, the subjects’ unconscious cues would be powerful only if the object pictured was of vital interest to the human brain. This led to an obvious category. And so when the researchers performed this experiment, they selected, for one of the static images, pornography—or, in their scientific jargon, a “highly arousing erotic image.” You can get erotica at almost any newsstand, but where do you get scientifically controlled erotica? It turns out that psychologists have a database for that. It is called the International Affective Picture System, a collection of 480 images ranging from sexually explicit material to mutilated bodies to pleasant images of children and wildlife, each categorized according to the level of arousal it produces.

As the researchers expected, when presented with unprovocative static images and asked whether the object was on the left- or the right-hand side of the photo, the subjects’ answers were correct about half the time, which is what you would expect from completely random, uninformed guesses, a rate comparable to TN’s when he was making guesses about circles versus squares. But when heterosexual male subjects were shown an image of a naked woman, they gained a significant ability to discern on which side of the image she was located, as did females who were shown images of naked men. That didn’t happen when men were shown naked men, or when women were shown naked women—with one exception, of course. When the experiment was repeated on homosexual subjects, the results flipped in the manner you might expect. The results mirrored the subjects’ sexual preferences.

Despite their successes, when asked afterward what they had seen, all the subjects described just the tedious progression of rapidly changing mosaic images the researchers had presented to their right eye. The subjects were clueless that while their conscious minds were looking at a series of snoozers, their unconscious minds were feasting on Girls (or Boys) Gone Wild. This means that although the erotic image was never delivered to consciousness, it registered powerfully enough in the unconscious to give the subjects a subliminal awareness of it. We are reminded again of the lesson Peirce learned: We don’t consciously perceive everything that registers in our brain, so our unconscious mind may notice things that our conscious mind doesn’t. When that happens we may get a funny feeling about a business associate or a hunch about a stranger and, like Peirce, not know the source.

I learned long ago that it is often best to follow those hunches. I was twenty, in Israel just after the Yom Kippur War, and went up to visit the Golan Heights, in Israeli-occupied Syria. While hiking along a deserted road I spotted an interesting bird in a farmer’s field, and being a bird-watcher, I resolved to get a closer look. The field was ringed by a fence, which doesn’t normally deter bird-watchers, but this fence had a curious sign on it. I pondered what the sign might say. It was in Hebrew, and my Hebrew wasn’t quite good enough to decipher it. The usual message would have been “No Trespassing,” but somehow this sign seemed different. Should I stay out? Something told me yes, a something I now imagine was very much like the something that told Peirce who had stolen his watch. But my intellect, my conscious deliberative mind, said, Go ahead. Just be quick. And so I climbed the fence and walked into the field, toward the bird. Soon I heard some yelling in Hebrew, and I turned to see a man down the road on a tractor, gesturing at me in a very animated fashion. I returned to the road. It was hard to understand the man’s loud jabbering, but between my broken Hebrew and his hand gestures, I soon figured out the issue. I turned to the sign, and now realized that I did recognize those Hebrew words. The sign said, “Danger, Minefield!” My unconscious had gotten the message, but I had let my conscious mind overrule it.

It used to be difficult for me to trust my instincts when I couldn’t produce a concrete, logical basis for them, but that experience cured me. We are all a bit like patient TN, blind to certain things, being advised by our unconscious to dodge to the left and right. That advice can often save us, if we are willing to open ourselves to the input.


PHILOSOPHERS HAVE FOR centuries debated the nature of “reality,” and whether the world we experience is real or an illusion. But modern neuroscience teaches us that, in a way, all our perceptions must be considered illusions. That’s because we perceive the world only indirectly, by processing and interpreting the raw data of our senses. That’s what our unconscious processing does for us—it creates a model of the world. Or as Kant said, there is Das Ding an sich, a thing as it is, and there is Das Ding für uns, a thing as we know it. For example, when you look around, you have the feeling that you are looking into three-dimensional space. But you don’t directly sense those three dimensions. Instead, your brain reads a flat, two-dimensional array of data from your retinas and creates the sensation of three dimensions. Your unconscious mind is so good at processing images that if you were fitted with glasses that turn the images in your eyes upside down, after a short while you would see things right side up again. If the glasses were then removed, you would see the world upside down again, but just for a while.24 Because of all that processing, when we say, “I see a chair,” what we really mean is that our brain has created a mental model of a chair.

Our unconscious doesn’t just interpret sensory data, it enhances it. It has to, because the data our senses deliver is of rather poor quality and must be fixed up in order to be useful. For example, one flaw in the data your eyes supply comes from the so-called blind spot, a spot on the back of your eyeball where the wire connecting your retina and your brain is attached. This creates a dead region in each eye’s field of vision. Normally you don’t even notice it because your brain fills in the picture based on the data it gets from the surrounding area. But it is possible to design an artificial situation in which the hole becomes visible. For example, close your right eye, look at the number 1 on the right side of the line below, and move the book toward you (or away from you) until the sad face disappears—it will then be in your blind spot. Keeping your head still, now look at the 2, the 3, and so on, still with your left eye. The sad face will reappear, probably around the number 4.

To help compensate for their imperfections, your eyes change position a tiny bit several times each second. These jiggling motions are called microsaccades, to distinguish them from ordinary saccades, the larger, more rapid patterns your eyes ceaselessly follow when you study a scene. Saccades happen to be the fastest movements executed by the human body, so rapid that the motion itself cannot be observed without special instruments. For example, as you read this text your eye is making a series of saccades along the line. And if I were talking to you, your gaze would bounce around my face, mostly near my eyes. All told, the six muscles controlling your eyeball move it some 100,000 times each day, about as many times as your heart beats.

If your eyes were a simple video camera, all that motion would make the video unwatchable. But your brain compensates by editing out the period during which your eye is in transit and filling in your perception in a way that you don’t notice. You can illustrate that edit quite dramatically, but you’ll need to enlist as your partner a good friend, or perhaps an acquaintance who has had a few glasses of wine. Here is what you do: Stand facing your partner, with about four inches separating your noses, then ask your partner to fixate midway between your eyes. Next, have your partner look toward your left ear and back. Repeat this a couple of times. Meanwhile, your job is to observe your partner’s eyes and note that you have no difficulty seeing them move back and forth. The question is, If you could stand nose to nose with yourself and repeat the procedure, would you see your own eyes move? If it is true that your brain edits out visual information received during eye movements, you would not. How can you perform this test? Stand facing a mirror, with your nose two inches from the mirror’s surface (this corresponds to four inches from a real person). Look first right between your eyes, then at your left ear, then back. Repeat a couple of times. Miraculously, you get the two views but never see your eye move between them.

Another gap in the raw data delivered by your eyes has to do with your peripheral vision, which is quite poor. In fact, if you hold your arm out and gaze at your thumbnail, the only part of your field of vision with good resolution will be the area within, and perhaps just bordering, your nail. Even if you have twenty-twenty vision, your visual acuity outside that central region will be roughly comparable to that experienced by a person who needs thick glasses and doesn’t have them. You can get a taste of that if you look at this page from a distance of a couple feet and stare at the central asterisk in the first line below (try not to cheat—it isn’t easy!). The F’s in that line are a thumbnail apart. You’ll probably be able to recognize the A and F just fine, but not many of the other letters at all. Now go down to the second line. Here, the increasing size of the letters gives you some help. But if you’re like me, you won’t be able to clearly read all the letters unless they are as large as they appear in the third line. The amount of magnification required for you to be able to see the letters at the periphery is an indication of the poor quality of your peripheral vision.

The blind spot, saccades, poor peripheral vision—all these issues should cause you severe problems. When you look at your boss, for example, the true retinal image would show a fuzzy, quivering person with a black hole in the middle of his or her face. However emotionally appropriate that may seem, it is not an image you’ll ever perceive, because your brain automatically processes the data, combining the input from both eyes, removing the effects of the jiggling, and filling in gaps on the assumption that the visual properties of neighboring locations are similar. The images below illustrate some of the processing your brain does for you. On the left is the scene as recorded by a camera. On the right is the same image as it would appear if recorded by a human retina with no additional processing. Fortunately for you, that processing gets done in the unconscious, making the images you see as polished and refined as those picked up by the camera.

Our hearing works in an analogous manner. For example, we unconsciously fill in gaps in auditory data. To demonstrate this, in one study experimenters recorded the sentence “The state governors met with their respective legislatures convening in the capital city,” then erased the 120-millisecond portion of the sentence containing the first “s” sound in “legislatures” and replaced it with a cough. They told twenty experimental subjects that they would hear a recording containing a cough and would be given printed text so they could circle the exact position in the text at which the cough occurred. The subjects were also asked if the cough had masked any of the circled sounds. All of the volunteers reported hearing the cough, but nineteen of the twenty said that there was no missing text. The only subject who reported that the cough had obscured any phonemes named the wrong one.25 What’s more, in follow-up work the researchers found that even practiced listeners couldn’t identify the missing sound. Not only could they not pinpoint the exact location of the cough—they couldn’t even come close. The cough didn’t seem to occur at any clear point within the sentence; rather, it seemed to coexist with the speech sounds without affecting their intelligibility.

Left: the original image, as recorded by a camera. Right: the same image as seen by a retina (right eye, fixation at the X).
Courtesy of Laurent Itti.

Even when the entire syllable “gis” in “legislatures” was obliterated by the cough, subjects could not identify the missing sound.26 The effect is called phonemic restoration, and it’s conceptually analogous to the filling in that your brain does when it papers over your retinal blind spot, and enhances the low resolution in your peripheral vision—or fills holes in your knowledge of someone’s character by employing clues based on their appearance, their ethnic group, or the fact that they remind you of your uncle Jerry. (About that, more later.)

Phonemic restoration has a striking property: because it is based on the context in which you hear words, what you think you heard at the beginning of a sentence can be affected by the words that come at the end. For example, letting an asterisk denote the cough, listeners in another famous study reported hearing the word “wheel” in the sentence “It was found that the *eel was on the axle.” But they heard “heel” when they listened to the sentence “It was found that the *eel was on the shoe.” Similarly, when the final word in the sentence was “orange” they heard “peel,” and when it was “table,” they heard “meal.”27 In each case the data provided to each subject’s brain included the same sound, “*eel.” Each brain patiently held the information, awaiting more clues as to the context. Then, after hearing the word “axle,” “shoe,” “orange,” or “table,” the brain filled in the appropriate consonant. Only at that time did it pass to the subject’s conscious mind, leaving the subject unaware of the alteration and quite confident of having accurately heard the word that the cough had partially obscured.

———

IN PHYSICS, SCIENTISTS invent models, or theories, to describe and predict the data we observe about the universe. Newton’s theory of gravity is one example; Einstein’s theory of gravity is another. Those theories, though they describe the same phenomenon, constitute very different versions of reality. Newton, for example, imagined that masses affect each other by exerting a force, while in Einstein’s theory the effects occur through a bending of space and time and there is no concept of gravity as a force. Either theory could be employed to describe, with great accuracy, the falling of an apple, but Newton’s would be much easier to use. On the other hand, for the calculations necessary for the satellite-based global positioning system (GPS) that helps you navigate while driving, Newton’s theory would give the wrong answer, and so Einstein’s must be used. Today we know that actually both theories are wrong, in the sense that both are only approximations of what really happens in nature. But they are also both correct, in that they each provide a very accurate and useful description of nature in the realms in which they do apply.

As I said, in a way, every human mind is a scientist, creating a model of the world around us, the everyday world that our brains detect through our senses. Like our theories of gravity, our model of the sensory world is only approximate and is based on concepts invented by our minds. And like our theories of gravity, though our mental models of our surroundings are not perfect, they usually work quite well.

The world we perceive is an artificially constructed environment whose character and properties are as much a result of unconscious mental processing as they are a product of real data. Nature helps us overcome gaps in information by supplying a brain that smooths over the imperfections, at an unconscious level, before we are even aware of any perception. Our brains do all of this without conscious effort, as we sit in a high chair enjoying a jar of strained peas or, later in life, on a couch, sipping a beer. We accept the visions concocted by our unconscious minds without question, and without realizing that they are only an interpretation, one constructed to maximize our overall chances of survival, but not one that is in all cases the most accurate picture possible.

That brings up a question to which we will return again and again, in contexts ranging from vision to memory to the way we judge the people we meet: If a central function of the unconscious is to fill in the blanks when there is incomplete information in order to construct a useful picture of reality, how much of that picture is accurate? For example, suppose you meet someone new. You have a quick conversation, and on the basis of that person’s looks, manner of dress, ethnicity, accent, gestures—and perhaps some wishful thinking on your part—you form an assessment. But how confident can you be that your picture is a true one?

In this chapter I focused on the realm of visual and auditory perception to illustrate the brain’s two-tier system of data processing and the ways in which it supplies information that does not come directly from the raw data in front of it. But sensory perception is just one of many arenas of mental processing in which portions of the brain that operate at the unconscious level perform tricks to fill in missing data. Memory is another, for the unconscious mind is actively involved in shaping your memory. As we are about to see, the unconscious tricks that our brains employ to create memories of events—feats of imagination, really—are as drastic as the alterations they make to the raw data received by our eyes and ears. And the way the tricks conjured up by our imaginations supplement the rudiments of memory can have far-reaching—and not always positive—effects.

CHAPTER 3 Remembering and Forgetting How the brain builds memories … why we sometimes remember what never happened

A man sets himself the task of portraying the world. Through the years he peoples a space with images of provinces, kingdoms, mountains, bays, ships, islands, fishes, rooms, instruments, stars, horses, and people. Shortly before his death, he discovers that the patient labyrinth of lines traces the image of his face.

—JORGE LUIS BORGES

JUST SOUTH OF the Haw River in central North Carolina lies the old mill town of Burlington. It’s a part of the country that is home to blue herons, tobacco, and hot, humid summer nights. The Brookwood Garden Apartments is a typical Burlington complex. A pleasant single-story building made of gray brick, it is situated a few miles east of Elon College, now Elon University, a private school that, with the decline of the mills, came to dominate the town. On one of those hot nights in July 1984, a twenty-two-year-old Elon student named Jennifer Thompson was asleep in bed when a man snuck up to her back door.1 It was three o’clock in the morning. As her air conditioner hummed and rattled, the man cut Jennifer Thompson’s phone line, busted the lightbulb outside her door, and broke in. The noise was not enough to rouse her from her sleep, but the man’s footsteps inside her apartment eventually did. She opened her eyes and made out the form of someone crouching in the darkness at her side. A moment later the man jumped on her, put a knife to her throat, and threatened to kill her if she resisted. Then, as the intruder raped her, she studied his face, focusing on being able to identify him if she survived.

Thompson eventually tricked the rapist into allowing her to turn on a light and fix him a drink, at which point she escaped, naked, out the back door. She frantically pounded on the door of the next unit. The sleeping occupants didn’t hear her, but the rapist did, and he came after her. Thompson raced across the lawn toward a brick house that had a light on. The rapist gave up and moved on to a nearby building, where he again broke in, and raped another woman. Thompson, meanwhile, was taken to Memorial Hospital, where the police obtained samples of her hair and vaginal fluid. Afterward, they took her to the station, where Thompson recounted her study of the rapist’s face for the police sketch artist.

The next day the tips started pouring in. One pointed to a man named Ronald Cotton, twenty-two, who worked at a restaurant near Thompson’s apartment. Cotton had a record. He had previously pleaded guilty to a charge of breaking and entering and, while a teenager, to sexual assault. Three days after the incident, Detective Mike Gauldin summoned Thompson to headquarters to look at six photos, which he lined up on a table. According to the police report, Thompson studied the photos for five minutes. “I can almost remember feeling like I was at an SAT test,” she said. One of the photos was a shot of Cotton. She picked him out. A few days later, Gauldin presented Thompson with a physical lineup of five men. Each man was asked to step forward, utter a line, then turn and step back. At first unsure whether the rapist was the fourth man or the fifth, Thompson eventually settled on the fifth. Cotton again. According to Thompson, when informed that this was the same man she had identified from the photo lineup, she thought to herself, “Bingo, I did it right.” In court Thompson pointed her finger at Cotton and once more identified him as her rapist. The jury reached a verdict in forty minutes, and the judge sentenced Cotton to life plus fifty years. Thompson said it was the happiest day of her life. She celebrated with champagne.

The first sign that something was amiss, other than the defendant’s denials, came after Cotton, working in the prison kitchen, encountered a man named Bobby Poole. Poole bore a resemblance to Cotton and, therefore, also to the face in the police sketch based on Thompson’s description. What’s more, Poole was in prison for the same crime, rape. Cotton confronted Poole about the Thompson case, but Poole denied any involvement. Luckily for Cotton, Bobby Poole blabbed to another inmate that he had indeed raped Thompson and the other woman. Ronald Cotton had by pure chance run into the actual rapist. As a result of the prison confession, Cotton won a new trial.

At the second trial Jennifer Thompson was asked again if she could identify her rapist. She stood fifteen feet from both Poole and Cotton and looked them over. Then she pointed at Cotton and reaffirmed that he was her rapist. Poole looked something like Cotton, but thanks to the experiences that she had had during the time after the rape—her identifying Cotton in a photo, then in a lineup, then in the courtroom—Cotton’s was the face forever burned into her memory of that night. Instead of becoming a free man, Cotton emerged from his second trial with an even harsher punishment: he got two life sentences.

Seven more years passed. What was left of the evidence from the ten-year-old crime, including a fragment of a single sperm from the perpetrator, languished on a shelf in the Burlington Police Department. Meanwhile, the new technology of DNA testing was making the news, thanks to the double-murder trial of O. J. Simpson. Cotton prodded his attorney to request that the sperm fragment be tested. Eventually, his attorney was able to get the test done. The result proved that Bobby Poole, not Ronald Cotton, had raped Jennifer Thompson.

In the Thompson case, all we know is that the victim misremembered her attacker. We’ll never know how accurately or inaccurately Thompson remembered the other details of her attack because no objective record of the crime exists. But it is difficult to imagine a witness more reliable than Jennifer Thompson. She was bright. She stayed relatively calm during the assault. She studied her attacker’s face. She focused on remembering it. She had no prior knowledge of or bias against Cotton. Yet she fingered the wrong man. That has to be disturbing, for if Jennifer Thompson was mistaken in her identification, perhaps no eyewitness can be trusted to reliably identify an unknown assailant. There’s plenty of evidence to suggest that this is the case—some of it from the very people who organize lineups like the one that resulted in Cotton’s arrest.

About seventy-five thousand police lineups take place each year, and statistics on those show that 20 to 25 percent of the time witnesses make a choice that the police know is incorrect. They can be sure of this because the witnesses have chosen one of the “known innocents” or “fillers” that the police inserted into the lineup simply to fill it out.2 These are often police detectives themselves, or inmates plucked from the local jail. Such false identifications don’t get anyone in trouble, but think about the implications: the police know that a fifth to a quarter of the time a witness will identify an individual who they are certain did not commit the crime, yet when a witness fingers the person who is their suspect, the police—and the courts—assume that that identification is reliable. As the above statistics reveal, it’s not. In fact, experimental studies in which people are exposed to mock crimes suggest that when the true culprit is not in the lineup, more than half the time eyewitnesses will do exactly what Jennifer Thompson did: they will choose someone anyway, selecting the person who best matches their memory of the criminal.3 As a result, false eyewitness identification seems to be the leading cause of wrongful conviction. An organization called the Innocence Project, for example, found that of the hundreds of people exonerated on the basis of postconviction DNA testing, 75 percent had been imprisoned because of inaccurate eyewitness identification.4

You would think that such findings would result in a massive overhaul of the process and the use of eyewitness identification. Unfortunately, the legal system is resistant to change, especially when the changes are fundamental—and inconvenient. As a result, to this day the magnitude and probability of memory error have gone virtually unnoticed. Certainly the law occasionally pays lip service to the fact that eyewitnesses can be mistaken, but most police departments still rely heavily on lineups, and you can still convict someone in court solely on the eyewitness testimony of a stranger. In fact, judges often prohibit the defense from introducing testimony about the scientific research on the flaws of eyewitness identification. “Judges say it’s either too complicated, abstract, and unconnected for jurors to understand, and other times they say it’s too simplistic,” says Brandon Garrett, the author of a book called Convicting the Innocent.5 The courts even discourage jurors who are deliberating from using the trial transcript to aid their memory of the testimony they heard in court. The state of California, for example, recommends that judges inform juries that “their memories should prevail over the written transcript.”6 Lawyers will tell you there are practical reasons for that policy—for instance, that deliberations would take too long if jurors pored over the trial transcripts. But to me, that seems outrageous, like saying we should believe someone’s testimony about an incident rather than a film of the incident itself. We’d never settle for such thinking in other areas of life. Imagine the American Medical Association telling doctors not to rely on patients’ charts. “Heart murmur? I don’t remember any heart murmur. Let’s take you off that medication.”


IT’S RARE TO have proof of what actually happened, so in most cases we’ll never know how accurate our memories really are. But there are exceptions. In fact, there is one example in which those who study memory distortion were provided with a record that couldn’t have been surpassed had they orchestrated the incident themselves. I’m referring to the Watergate scandal of the 1970s. That scandal concerned a break-in by Republican operatives at the headquarters of the Democratic National Committee and the subsequent cover-up by the administration of President Richard Nixon. A fellow named John Dean, the White House counsel to Nixon, was deeply involved in orchestrating the cover-up, which eventually led to Nixon’s resignation. Dean was said to have an extraordinary memory, and as millions around the world watched on live television, he testified at hearings held by the United States Senate. In his testimony, Dean recalled incriminating conversations with Nixon and other principals in such great detail that he became known as the “human tape recorder.” What endows Dean’s testimony with scientific importance is the fact that the Senate committee later discovered that there was also a real tape recorder listening in on the president: Nixon was secretly recording his conversations for his own later use. The human tape recorder could be checked against reality.

The psychologist Ulric Neisser did the checking. He painstakingly compared Dean’s testimony to the actual transcripts and cataloged his findings.7 John Dean, it turns out, was more like a historical novelist than a tape recorder. He was almost never right in his recollections of the content of the conversations, and he was usually not even close.

For example, on September 15, 1972—before the scandal engulfed the White House—a grand jury concluded its investigation by handing down indictments against seven men. They included the five Watergate burglars but only two of the people involved in planning the crime, and they were the “small fish”—Howard Hunt and Gordon Liddy. The Justice Department said it had no evidence on which to indict anyone higher up. That seemed to be a victory for Nixon. In his testimony, Dean had this to say about the president’s reaction:

Late that afternoon I received a call requesting me to come to the President’s Oval Office. When I arrived at the Oval Office I found Haldeman [Nixon’s chief of staff] and the President. The President asked me to sit down. Both men appeared to be in very good spirits and my reception was very warm and cordial. The President then told me that Bob—referring to Haldeman—had kept him posted on my handling of the Watergate case. The President told me I had done a good job and he appreciated how difficult a task it had been and the President was pleased that the case had stopped with Liddy. I responded that I could not take credit because others had done much more difficult things than I had done. As the President discussed the present status of the situation I told him that all I had been able to do was to contain the case and assist in keeping it out of the White House. I also told him there was a long way to go before this matter would end and that I certainly could make no assurances that the day would not come when this matter would start to unravel.

On comparing this meticulous account of the meeting to the transcript, Neisser found that hardly a word of it was true. Nixon didn’t make any of the statements Dean attributed to him. He didn’t ask Dean to sit down; he didn’t say that Haldeman had kept him posted; he didn’t say that Dean had done a good job; and he didn’t say anything about Liddy or the indictments. Nor did Dean say any of the things he attributed to himself. In fact, not only did Dean not say that he “could make no assurances” that the matter wouldn’t start to unravel, he actually said pretty much the opposite, reassuring Nixon that “nothing is going to come crashing down.” Of course, Dean’s testimony sounds self-serving, and he might have been intentionally lying about his role. But if he was lying, he did a poor job of it, because, on the whole, his Senate testimony is just as self-incriminating as the actual, though very different, conversations revealed by the transcripts. And in any case, what is most interesting are the little details, neither incriminating nor exonerating, about which Dean seemed so certain, and was so wrong.

Perhaps you are thinking that the distortions so frequent in the memories of those who were the victims of serious crimes (or those who, like Dean, were trying to cover up such crimes) don’t have much to do with your everyday life, with how well you remember the details of your personal interactions. But memory distortions occur in everyone’s life. Think, for example, about a business negotiation. The various parties to the negotiation go back and forth, over the course of some days, and you are sure that you remember both what you and what the others said. In constructing your memory, however, there is what you said, but there is also what you communicated, what the other participants in the process interpreted as your message, and, finally, what they recalled about those interpretations. It’s quite a chain, and so people often strongly disagree in their recollections of events. That’s why when they are having important conversations, lawyers take notes. Though this doesn’t eliminate the potential for memory lapses, it does minimize it. Unfortunately, if you go through life taking notes on all your interpersonal interactions, chances are you won’t have many.

Cases like those of John Dean and Jennifer Thompson raise the same questions that have been raised, over the years, in thousands of other court cases: What is it about the way human memory works that produces such distortions? And how much can we trust our own memories of day-to-day life?


THE TRADITIONAL VIEW of memory, and the one that persists among most of us, is that it is something like a storehouse of movies on a computer’s hard drive. This is a concept of memory analogous to the simple video camera model of vision I described in the last chapter, and it is just as misguided. In the traditional view, your brain records an accurate and complete record of events, and if you have trouble remembering, it is because you can’t find the right movie file (or don’t really want to) or because the hard drive has been corrupted in some way. As late as 1991, in a survey conducted by the psychologist Elizabeth Loftus, most people, including the great majority of psychologists, still held this traditional view of memory: that whether accessible or repressed, clear or faded, our memory is a literal recorder of events.8 Yet if memories were indeed like what a camera records, they could be forgotten or they could fade so that they were no longer clear and vivid, but it would be difficult to explain how people—like Thompson and Dean—could have memories that are both clear and vivid while also being wrong.

One of the first scientists to realize that the traditional view does not accurately describe the way human memory operates had his epiphany after a case of false testimony—his own. Hugo Münsterberg was a German psychologist.9 He hadn’t started out intending to study the human mind, but when he was a student at the University of Leipzig he attended a series of lectures by Wilhelm Wundt. That was in 1883, just a few years after Wundt had started his famous psychology lab. Wundt’s lectures not only moved Münsterberg, they changed his life. Two years later Münsterberg completed a PhD under Wundt in physiological psychology, and in 1891 he was appointed assistant professor at the University of Freiburg. That same year, while attending the First International Congress in Paris, Münsterberg met William James, who had been impressed by his work. James was then officially the director of the new Harvard Psychological Laboratory, but he wanted to resign from the post to focus on his interests in philosophy. He lured Münsterberg across the Atlantic as his replacement, despite the fact that although Münsterberg could read English he could not speak it.

The incident that inspired Münsterberg’s particular interest in memory occurred a decade and a half later, in 1907.10 While he was vacationing with his family at the seashore, his home in the city was burglarized. Informed of this by the police, Münsterberg rushed back and took stock of the condition of his house. Later, he was called to testify under oath about what he had found. He gave the court a detailed account of his survey, which included the trail of candle wax he had seen on the second floor, a large mantel clock the burglar had wrapped in paper for transport but then left on the dining room table, and evidence that the burglar had entered through a cellar window. Münsterberg testified with great certainty, for as a scientist and a psychologist, he was trained in careful observation, and he was known to have a good memory, at least for dry intellectual facts. “During the last eighteen years,” Münsterberg once wrote, “I have delivered about three thousand university lectures. For those three thousand coherent addresses I had not once a single written or printed line or any notes whatever on the platform…. My memory serves me therefore rather generously.” But this was no university lecture. In this case, each of the above statements proved to be false. His confident testimony, like Dean’s, was riddled with errors.

Those errors alarmed Münsterberg. If his memory could mislead him, others must be having the same problem. Maybe his errors were not unusual but the norm. He began to delve into reams of eyewitness reports, as well as some early pioneering studies of memory, in order to investigate more generally how human memory functions. In one case Münsterberg studied, after a talk on criminology in Berlin, a student stood up and shouted a challenge to the distinguished speaker, one Professor Franz von Liszt, a cousin of the composer Franz Liszt. Another student jumped to his feet to defend von Liszt. An argument ensued. The first student pulled a gun. The other student rushed him. Then von Liszt joined the fray. Amid the chaos, the gun went off. The entire room erupted into bedlam. Finally von Liszt shouted for order, saying it was all a ruse. The two enraged students weren’t really students at all but actors following a script. The altercation had been part of a grand experiment. The purpose of the exercise? To test everyone’s powers of observation and memory. Nothing like a fake shootout in psych class to liven things up.

After the event, von Liszt divided the audience into groups. One group was asked to immediately write an account of what they had seen, another was cross-examined in person, and others were asked to write reports a little later. In order to quantify the accuracy of the reports, von Liszt divided the performance into fourteen bite-sized components, some referring to people’s actions, others to what they said. He counted omissions, alterations, and additions as errors. The students’ error rates varied from 26 to 80 percent. Actions that never occurred were attributed to the actors. Other important actions were missed. Words were put into the arguing students’ mouths, and even into the mouths of students who had said nothing.

As you might imagine, the incident received a fair amount of publicity. Soon staged conflicts became the vogue among psychologists all over Germany. They often involved, as the original had, a revolver. In one copycat experiment, a clown rushed into a crowded scientific meeting, followed by a man wielding a gun. The man and the clown argued, then fought and, after the gun went off, ran out of the room—all in less than twenty seconds. Clowns are not unheard of in scientific meetings, but they rarely wear clown costumes, so it is probably safe to assume that the audience knew the incident was staged, and why. But although the observers were aware that a quiz would follow, their reports were grossly inaccurate. Among the inventions that appeared in the reports were a wide variety of different costumes attributed to the clown and many details describing the fine hat on the head of the man with the gun. Hats were common in those days, but the gunman had not worn one.

From the nature of these memory errors, and those documented in many other incidents he studied, Münsterberg fashioned a theory of memory. He believed that none of us can retain in memory the vast quantity of details we are confronted with at any moment in our lives and that our memory mistakes have a common origin: they are all artifacts of the techniques our minds employ to fill in the inevitable gaps. Those techniques include relying on our expectations and, more generally, on our belief systems and our prior knowledge. As a result, when our expectations, beliefs, and prior knowledge are at odds with the actual events, our brains can be fooled.

For example, in his own case, Münsterberg had overheard police conversations about the burglar entering through the cellar window and, without realizing it, incorporated that information into his memory of the crime scene. But there was no such evidence, for, as the police later discovered, their initial speculation had been wrong. The burglar had actually entered by removing the lock on the front door. The clock Münsterberg remembered packed in paper for transport had actually been packed in a tablecloth, but, as Münsterberg wrote, his “imagination gradually substituted the usual method of packing with wrapping paper.” As for the candle wax he so clearly remembered having seen on the second floor, it was actually in the attic. When he spotted it, he wasn’t aware of its importance, and by the time the issue came up, he was focused on the strewn papers and other disorder on the second floor, apparently causing him to recall having seen the candle wax there.

Münsterberg published his ideas about memory in a book that became a best seller, On the Witness Stand: Essays on Psychology and Crime.11 In it, he elaborated on a number of key concepts that many researchers now believe correspond to the way memory really does work: first, people have a good memory for the general gist of events but a bad one for the details; second, when pressed for the unremembered details, even well-intentioned people making a sincere effort to be accurate will inadvertently fill in the gaps by making things up; and third, people will believe the memories they make up.

Hugo Münsterberg died on December 16, 1916, at age fifty-three, after suffering a cerebral hemorrhage and collapsing while delivering a lecture to a class at Radcliffe.12 His ideas on memory, and his pioneering work in applying psychology to law, education, and business, had made him famous, and he’d counted among his friends notables like President Theodore Roosevelt and the philosopher Bertrand Russell. But one person Münsterberg did not consider to be a friend in his later years was his onetime sponsor and mentor, William James.13 For one, James had become fascinated with psychics, communication with the dead, and other mystical activities, which Münsterberg and many others considered to be pure quackery. For another, James, if not a convert to psychoanalysis, had at least followed Freud’s work with interest and saw value in it. Münsterberg, on the other hand, was blunt about his view of the unconscious, writing, “The story of the subconscious mind can be told in three words: there is none.”14 In fact, when Freud visited Boston in 1909 to speak—in German—at Harvard, Münsterberg showed his disapproval by remaining conspicuously absent.

Between them, Freud and Münsterberg had come up with theories of mind and memory that were of great importance, but unfortunately the men had little impact on each other: Freud understood much better than Münsterberg did the immense power of the unconscious, but he thought that repression, rather than a dynamic act of creation on the part of the unconscious, was the reason for the gaps and inaccuracies in our memory; while Münsterberg understood much better than Freud did the mechanics and the reasons for memory distortion and loss—but had no sense at all of the unconscious processes that created them.


HOW COULD A memory system that discards so much of our experience have survived the rigors of evolution? Human memory is subject to the distortions of reconstruction, but if those subliminal distortions had proved seriously detrimental to our ancestors’ survival, our memory system, or perhaps our species, would not have survived. Though our memory system is far from perfect, it is, in most situations, exactly what evolution requires: it is good enough. In fact, in the big picture, human memory is wonderfully efficient and accurate—sufficient to have enabled our ancestors to generally recognize the creatures they should avoid and those they should hunt down, where the best trout streams are, and the safest way back to camp. In modern terms, the starting point in understanding how memory works is Münsterberg’s realization that the mind is continuously bombarded by a quantity of data so vast that it cannot possibly handle all of it—the roughly eleven million bits per second I mentioned in the last chapter. And so we have traded perfect recall for the ability to handle and process that staggering amount of information.

When we hold a baby’s birthday party in the park, we experience two intense hours of sights and sounds. If we crammed all of them into memory, we’d soon have a huge warehouse of smiles, frosting mustaches, and poopy diapers. Important aspects of the experience would be stored amid irrelevant clutter, such as the patterns of color on each mother’s blouse, the small talk made by each dad, the cries and screams of every child present, and the steadily growing number of ants on the picnic table. The truth is, you don’t care about the ants or the small talk, and you don’t want to remember everything. The challenge that the mind faces, and that the unconscious meets, is to be able to sift through this inventory of data in order to retain the parts that actually do matter to you. If the sifting doesn’t occur, you just get lost in the data dump. You see the trees but not the forest.

There is, in fact, a famous study that illustrates the downside of an unfiltered memory, a case study of an individual who had such a memory. The study was performed over the course of thirty years, starting in the 1920s, by the Russian psychologist A. R. Luria.15 The man who couldn’t forget was a famed mnemonist named Solomon Shereshevsky. Shereshevsky apparently remembered in great detail everything that happened to him. Once Luria asked Shereshevsky to recount their initial meeting. Shereshevsky recalled that they were in Luria’s apartment and described exactly what the furniture looked like and what Luria was wearing. Then he recited without error the list of seventy words that—fifteen years earlier—Luria had read aloud and asked him to repeat.

The downside of Shereshevsky’s flawless memory was that the details often got in the way of understanding. For instance, Shereshevsky had great trouble recognizing faces. Most of us store in memory the general features of the faces we remember, and when we see someone we know, we identify the person by matching the face we’re looking at to a face in that limited catalog. But Shereshevsky’s memory housed a great many versions of every face he had ever seen. To Shereshevsky, each time a face changed its expression or was seen in different lighting, it was a new face, and he remembered them all. So any given person had not one face but dozens, and when Shereshevsky encountered someone he knew, matching that person’s face to the faces stored in his memory meant performing a search of a vast inventory of images to try to find an exact equivalent to what he was seeing.

Shereshevsky had similar problems with language. If you spoke to him, though he could always play back your exact words, he had trouble understanding your point. His trouble with language was another trees-and-forest problem. Linguists recognize two types of language structure: surface structure and deep structure. Surface structure refers to the specific way an idea is expressed, such as the words used and their order. Deep structure refers to the gist of the idea.16 Most of us avoid the problems of clutter by retaining the gist but freely discarding details. As a result, although we can retain deep structure—the meaning of what was said—for long periods of time, we can accurately remember surface structure—the words in which it was said—for just eight to ten seconds.17 Shereshevsky apparently had an exact and long-lasting memory of all the details of the surface structure, but those details interfered with his ability to extract the gist of what was being said. His inability to forget the irrelevant became so frustrating that at times he would write things down on paper and then burn the page, hoping his memory of them would also go up in flames. It didn’t work.

Read the following list of words, and please pay careful attention: candy, sour, sugar, bitter, good, taste, tooth, nice, honey, soda, chocolate, heart, cake, eat, and pie. If you read only the first few words carefully and then skimmed the rest because you lack patience and feel silly allowing yourself to be ordered around by a book, please reconsider—it is important. Please read through the list. Study it for half a minute. Now cover the list so you can’t see the words, and keep it covered while you read the next paragraph.

If you are a Shereshevsky, you’ll have no trouble recalling all the words on the list, but chances are, your memory works a bit differently. In fact, I have given the little exercise that follows to a dozen groups over the years, and the result is always the same. I’ll tell you the punch line after I explain the exercise. It is simple: just identify which of the following three words appeared on the above list: taste, point, sweet. Your answer doesn’t have to be just one word. Perhaps all of them were listed? Or none of them? Please give this some thought. Assess each word carefully. Can you picture seeing it on the list? Are you confident? Don’t choose a word as being on the list unless you are sure of it and can picture it there. Please settle on your answer. Now please uncover the list in the previous paragraph and see how you did.

The vast majority of people recall with great confidence that “point” was not on the list. The majority also recall that “taste” was. The punch line of the exercise has to do with the other word: “sweet.” If you recalled seeing that word, it is an illustration of the fact that your memory is based on your recollection of the gist of the list you saw and not the actual list: the word “sweet” was not on the list, but most of the words on the list were related thematically to the concept of sweetness. The memory researcher Daniel Schacter wrote that he gave tests like this to many audiences and the great majority of people claimed that “sweet” was on the list, even though it was not.18 I have also given this test to many large groups, and while I did not find a great majority remembering that “sweet” was on the list, I did consistently get about half of my audience claiming it was—about the same number who correctly recalled that “taste” was on it. That result was consistent across many cities and countries. The difference between my results and Schacter’s may stem from the way I phrase the question—for I always stress that people should not designate a word unless they are sure, unless they can picture the list and vividly see that the word is on it.

Our process of remembering is analogous to the way computers store images, except that our memories have the added complexity that the data we store changes over time; we’ll get to that later. In computers, to save storage space, images are often highly “compressed,” meaning that only certain key attributes of the original image are kept; this technique can reduce the file size from megabytes to kilobytes. When the image is viewed, the computer predicts, from the limited information in the compressed file, what the original image looked like. If we view a small “thumbnail”-sized image made from a highly compressed data file, it usually looks very much like the original. But if we blow the image up, if we look closely at the details, we see many errors—blocks and bands of solid color where the software guessed wrong and the missing details were incorrectly filled in.
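For readers who want to see the analogy made concrete, here is a minimal Python sketch. It is purely my own illustration, not code from any study described in this chapter, and everything in it, from the block size to the tiny made-up image, is an arbitrary assumption. It stores only the average brightness of each block of pixels, the “gist,” and then reconstructs the picture from those averages, filling in the missing detail with guesses.

# A toy illustration of lossy "gist" compression. The block size and the
# tiny 8x8 "image" below are assumptions invented for this example.

def compress(image, block=4):
    """Keep only the average brightness of each block of pixels: the 'gist'."""
    gist = []
    for r in range(0, len(image), block):
        row = []
        for c in range(0, len(image[0]), block):
            cells = [image[r + dr][c + dc]
                     for dr in range(block) for dc in range(block)]
            row.append(sum(cells) / len(cells))
        gist.append(row)
    return gist

def reconstruct(gist, block=4):
    """'Recall' the image by filling each block with its stored average.
    The fine detail is gone; every missing pixel is filled in with a guess."""
    height = len(gist) * block
    width = len(gist[0]) * block
    return [[gist[r // block][c // block] for c in range(width)]
            for r in range(height)]

# An 8x8 image containing a single bright diagonal line.
original = [[255 if r == c else 0 for c in range(8)] for r in range(8)]
recalled = reconstruct(compress(original))

# The sharp diagonal comes back as uniform gray blocks: the gist (roughly
# where the brightness was) survives, but the details are filled in wrongly.
for row in recalled:
    print(" ".join("%3d" % v for v in row))

Shrunk to a thumbnail, the toy reconstruction would still look roughly right; blown up, it shows exactly the blocks of solid color described above.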

That’s how both Jennifer Thompson and John Dean got fooled, and it’s essentially the process Münsterberg envisioned: remember the gist, fill in the details, believe the result. Thompson recalled the “gist” of her rapist’s face, and when she saw a man in the lineup of photographs who fit the general parameters of what she remembered, she filled in the details of her memory with the face of the man in front of her, working off the expectation that the police wouldn’t show her a set of pictures unless they had reason to believe the rapist’s photo was among them (though as it turned out, it wasn’t). Similarly, Dean remembered few of the details of his individual conversations, but when he was pressed, his mind filled them in, using his expectations about what Nixon would have said. Neither Thompson nor Dean was aware of those fabrications. And both had them reinforced by repeatedly being asked to relive the events they were remembering, for when we are repeatedly asked to re-create a memory, we reinforce it each time, so that in a way we are remembering the memory, not the event.

You can easily see how this happens in your own life. Your brain, for example, might have recorded in its neurons the feeling of being embarrassed when you were teased by a fourth-grade boy because you brought your favorite teddy bear to school. You probably wouldn’t have retained a picture of the teddy bear, or the boy’s face, or the look on that face when you threw your peanut butter sandwich at him (or was it ham and cheese?). But suppose that years later you had reason to relive the moment. Those details might then have come to mind, filled in by your unconscious. If, for some reason, you returned to the incident again and again—perhaps because in retrospect it had become a funny story about your childhood that people always enjoyed hearing—you most likely created a picture of the incident so indelibly vivid and clear to yourself that you would believe totally in the accuracy of all the details.

If this is so, you may be wondering, then why have you never noticed your memory mistakes? The problem is that we rarely find ourselves in the position that John Dean was in—the position of having an accurate recording of the events we claim to remember. And so we have no reason to doubt our memories. Those who have made it their business to investigate memory in a serious fashion, however, can provide you with plenty of reasons for doubting. For example, the psychologist Dan Simons, ever the scientist, became so curious about his own memory errors that he picked an episode from his own life—his experiences on September 11, 2001—and did something few of us would ever make the effort to do.19 He investigated, ten years later, what had actually happened. His memory of that day seemed very clear. He was in his lab at Harvard with his three graduate students, all named Steve, when they heard the news, and they spent the rest of the day together, watching the coverage. But Simons’s investigation revealed that only one of the Steves was actually present—another was out of town with friends, and the third was giving a talk elsewhere on campus. As Münsterberg might have predicted, the scene Simons remembered was the scene he’d have expected, based on prior experience, since those three students were usually in the lab—but it wasn’t an accurate picture of what happened.


THROUGH HIS LOVE of case studies and real-life interactions, Hugo Münsterberg advanced the frontiers of our understanding of how we store and retrieve memories. But Münsterberg’s work left open a major issue: How does memory change over time? As it turned out, at about the same period when Münsterberg was writing his book, another pioneer, a laboratory scientist who, like Münsterberg, swam against the Freudian tide, was studying the evolution of memory. The son of a shoemaker from the tiny country town of Stow-on-the-Wold in England, the young Frederic Bartlett had to take over his own education when the town’s equivalent of a high school closed.20 That was in 1900. He did the job well enough that he ended up an undergraduate at Cambridge University, where he remained for graduate school; he eventually became the institution’s first professor in the new field of experimental psychology. Like Münsterberg, Bartlett did not go into academia planning to study memory. He came to it through an interest in anthropology.

Bartlett was curious about the way culture changes as it is passed from person to person, and through the generations. The process, he thought, must be similar to the evolution of an individual’s personal memories. For example, you might remember a crucial high school basketball game in which you scored four points, but years later, you might remember that number as being fourteen. Meanwhile, your sister might swear you spent the game in a beaver costume, dressed as the team’s mascot. Bartlett studied how time and social interactions among people with differing recollections of events change the memory of those events. He hoped, through that work, to gain an understanding of how “group memory,” or culture, develops.

Bartlett imagined that the evolution of both cultural and personal memories resembles the whisper game (also called the telephone game). You probably recall the process: the first person in a chain whispers a sentence or two to the next person in the chain, who whispers it to the next person, and so on. By the end, the words bear little resemblance to what was said at the beginning. Bartlett used the whisper game paradigm to study how stories evolve as they pass from one person’s memory to the next. But his real breakthrough was to adapt the procedure to study how the story can evolve over time within an individual’s memory. Essentially, he had his subjects play the whisper game with themselves. In his most famous work, Bartlett read his subjects the Native American folktale “The War of the Ghosts.” The story is about two boys who leave their village to hunt seals at the river. Five men in a canoe come along and ask the boys to accompany them in attacking some people in a town upriver. One of the boys goes along, and, during the attack, he hears one of the warriors remark that he—the boy—had been shot. But the boy doesn’t feel anything, and he concludes that the warriors are ghosts. The boy returns to his village and tells his people about his adventure. The next day, when the sun rises, he falls over, dead.

After reading the story to his subjects, Bartlett asked them to remind themselves of the tale after fifteen minutes, and then at irregular intervals after that, sometimes over a period of weeks or months. Bartlett studied the way that his subjects recounted the stories over time, and he noted an important trend in the evolution of memory: there wasn’t just memory loss; there were also memory additions. That is, as the original reading of the story faded into the past, new memory data was fabricated, and that fabrication proceeded according to certain general principles. The subjects maintained the story’s general form but dropped some details and changed others. The story became shorter and simpler. With time, supernatural elements were eliminated. Other elements were added or reinterpreted so that “whenever anything appeared incomprehensible, it was either omitted or explained” by adding content.21 Without realizing it, people seemed to be trying to alter the strange story into a more understandable and familiar form. They provided the story with their own organization, making it seem to them more coherent. Inaccuracy was the rule, and not the exception. The story, Bartlett wrote, “was robbed of all its surprising, jerky and inconsequential form.”

This figurative “smoothing out” of memories is strikingly similar to a literal smoothing out that Gestalt psychologists in the 1920s had noted in studies of people’s memory for geometric shapes: if you show someone a shape that is irregular and jagged, and quiz them about it later, they’ll recall the shape as being far more regular and symmetrical than it actually was.22 In 1932, after nineteen years of research, Bartlett published his results. The process of fitting memories into a comfortable form “is an active process,” he wrote, and depends on the subject’s own prior knowledge and beliefs about the world, the “preformed tendencies and bias which the subject brings to the task” of remembering.23

For many years Bartlett’s work on memory was forgotten, though he went on to an illustrious career in which he helped train a generation of British researchers to work in experimental psychology. Today Bartlett’s memory research has been rediscovered, and replicated in a modern setting. For example, the morning after the explosion of the space shuttle Challenger, Ulric Neisser, the man who did the John Dean study, asked a group of Emory University students how they’d first heard the news. The students all wrote clear accounts of their experiences. Then, about three years later, he asked the forty-four students who were still on campus to again recall that experience.24 Not one of the accounts was entirely correct, and about one-quarter of them were entirely wrong. The act of hearing the news became less random and more like the dramatic stories or clichés you might expect someone to tell, just as Bartlett might have predicted. For example, one subject, who’d heard the news while chatting with friends at the cafeteria, later reported how “some girl came running down the hall screaming ‘the space shuttle just blew up.’” Another, who’d heard it from various classmates in her religion class, later remembered, “I was sitting in my freshman dorm room with my roommate and we were watching TV. It came on a news flash and we were both totally shocked.” Even more striking than the distortions were the students’ reactions to their original accounts. Many insisted that their later memories were more accurate. They were reluctant to accept their earlier description of the scene, even though it was in their own handwriting. Said one, “Yes, that’s my handwriting—but I still remember it the other way!” Unless all these examples and studies are just strange statistical flukes, they ought to give us pause regarding our own memories, especially when they conflict with someone else’s. Are we “often wrong but never in doubt”? We might all benefit from being less certain, even when a memory seems clear and vivid.


HOW GOOD AN eyewitness are you? The psychologists Raymond Nickerson and Marilyn Adams invented a neat challenge. Just think of—but don’t look at—an American penny. It’s an object you might have viewed thousands of times, but how well do you really know it? Can you draw one? Take a moment to try, or at least try to imagine one. What are the main features on each side? When you are done, have a look at the graphic on the following page and try this easier task: pick out the correct penny from among the beautiful sketches Nickerson and Adams kindly provided.25

If you picked A, you would have been in the minority of subjects who chose the correct coin in Nickerson and Adams’s experiment. And if your drawings or imaginings have all eight features of the penny—features such as the profile of Abraham Lincoln on one side, and phrases like IN GOD WE TRUST and E PLURIBUS UNUM—then you are in the top 5 percent in memory for detail. If you did poorly on this test, it doesn’t mean you have a bad memory. Your memory for general features might be excellent. In fact, most people can remember previously viewed photographs surprisingly well, even after a long interval. But they are remembering only general content, not precise form.26 To not store in memory the details of a penny is for most of us an advantage; unless we have to answer a question on a game show with a lot of money at stake, we have no need to remember what’s on a penny, and to do so could get in the way of our remembering more important things.

Reprinted from R. S. Nickerson and M. J. Adams, “Long-Term Memory for a Common Object,” Cognitive Psychology 11, 287–307, copyright 1979, with permission from Elsevier

One reason we don’t retain details of the images that our eyes pick up is that in order for us to remember them, the details first have to have captured our conscious attention. But while the eye delivers a multitude of details, our conscious mind doesn’t register most of them. The disparity between what we see and what we register, and therefore remember, can be dramatic.

The key to one experiment investigating that disparity was the fact that when you study an image with many objects in it, your eye will shift among the different objects displayed. For example, if an image shows two people seated at a table with a vase on it, you might look at one person’s face, then the vase, then the other person’s face, then perhaps the vase again, then the tabletop, and so on, all in rapid succession. But remember the experiment in the last chapter, in which you stood facing a mirror and noted that there were blanks in your perception during the time your eyes were moving? The researchers who performed this study cleverly realized that if, during the split second their subjects’ eyes were in motion, the image the subjects were looking at changed subtly, the subjects might not notice. Here is how it worked: Each subject started by looking at some initial image on a computer screen. The subject’s eyes would move from object to object, bringing different aspects of the scene into focus. After a while, during one of the subject’s numerous eye shifts, the experimenters would replace the image with one that was slightly altered. As a result, once the subject’s eyes settled on the new target object, certain details of the image were different—for example, the hats the two men in the scene had been wearing were exchanged. The great majority of subjects didn’t notice. In fact, only half the subjects noticed when the two people exchanged heads!27

It’s interesting to speculate how important a detail has to be to register with us. To test if memory gaps like this also happen when the objects that change from shot to shot are the focus of attention, Dan Simons and his fellow psychologist Daniel Levin created videos depicting simple events in which the actor playing a particular character changed from scene to scene.28 Then they recruited sixty Cornell University students, who agreed to watch the videos in exchange for candy. In a typical video, as depicted by the sample frames below, a person sitting at a desk hears a phone ring, gets up, and walks toward the door. The video then cuts to a view of the hallway, where a different actor walks to the telephone and answers it. The change is not as drastic as, say, replacing Brad Pitt with Meryl Streep. But neither were the two actors hard to tell apart. Would the students notice the switch?

Figure provided by Daniel Simons

After viewing the film, the students were asked to write a brief description of it. If they didn’t mention the actors’ change, they were asked directly, “Did you notice that the person who was sitting at the desk was a different person than the one who answered the phone?” About two-thirds of them admitted that they hadn’t noticed. Surely during each shot they were aware of the actor and her actions. But they didn’t retain in their memory the details of her identity. Emboldened by that startling find, the researchers decided to go a step further. They examined whether this phenomenon, called change blindness, also occurred in real-world interactions. This time they took their experiment outdoors, onto the Cornell University campus.29 There, a researcher carrying a campus map approached unsuspecting pedestrians to ask for directions to a nearby building. After the researcher and pedestrian had spoken for ten or fifteen seconds, two other men, each holding one end of a large door, rudely passed between them. As the door passed, it blocked the pedestrian’s view of the experimenter for about one second. During that time, a new researcher with an identical map stepped in to continue the direction-asking interaction while the original researcher walked off behind the door. The substitute researcher was two inches shorter, wore different clothing, and had a noticeably different voice than the original. The pedestrian’s conversational partner had suddenly morphed into someone else. Still, most of the pedestrians didn’t notice, and were quite surprised when told of the switch.

Figure provided by Daniel Simons

IF WE’RE NOT very good at noticing or remembering the details of scenes that occurred, an even more serious issue is recalling something that never happened at all. Remember the people in my audiences who reported seeing in their mind’s eye a vivid picture of the word “sweet” on the list I had presented to them? Those people were having a “false memory,” a memory that seemed real but wasn’t. False memories feel no different than memories that are based in reality. For example, in the many variations of the word list experiment researchers have performed over the years, people who “remembered” phantom words rarely felt they were taking a shot in the dark. They reported recalling them vividly, and with great confidence. In one of the more revealing experiments, two word lists were read to volunteers by two different readers, a man and a woman.30 After the readings, the volunteers were presented with another list, this one containing words they both had and had not heard. They were asked to identify which were which. For each word they remembered hearing, they were also asked whether it had been uttered by the male or the female speaker. The subjects were pretty accurate in recalling whether the man or woman had said the words they’d actually heard. But to the researchers’ surprise, the subjects almost always also expressed confidence in identifying whether it was the man or the woman who had spoken the words they were wrong about having heard. That is, even when the subjects were remembering a word that had not actually been uttered, their memory of its utterance was vivid and specific. In fact, when told in a postexperiment debriefing that they hadn’t really heard a word they thought they had heard, the subjects frequently refused to believe it. In many cases the experimenters had to replay the videotape of the session to convince them, and even then, some of the subjects, like Jennifer Thompson in Ronald Cotton’s second trial, refused to accept the evidence that they were mistaken—they accused the researchers of switching the tape.

The idea that we can remember events that never happened was a key plot element of the famous Philip K. Dick story “We Can Remember It for You Wholesale,” which begins with a man approaching a company to have the memory of an exciting visit to Mars implanted in his brain. As it turns out, planting simple false memories is not that hard, and requires no high-tech solution like the one Dick envisioned. Memories of events that supposedly happened long ago are particularly easy to implant. You might not be able to convince anyone that they have been to Mars, but if your child’s fantasy is a ride in a hot air balloon, research has shown that it is possible to supply that memory with none of the expense or bother of arranging the actual experience.31

In one study scientists recruited twenty subjects who had never been in a hot air balloon, as well as one accompanying family member. Each family member secretly provided the researchers with three photos depicting the subject in the midst of some moderately significant event that occurred when the subject was between four and eight years old. They also provided other shots, which the researchers used to create a bogus photo of the subject in a hot air balloon. The photos, both real and faked, were then presented to the subjects, who were not aware of the ruse. The subjects were asked to recall everything they could about the scene depicted by each photo and were given a few minutes to think about it, if needed. If nothing came to them, they were asked to close their eyes and try to picture themselves as they appeared in the photo. The process was repeated two more times, at intervals of three to seven days. When it was over, half the subjects recalled memories of the balloon trip. Some recounted sensory details of the ride. Said one subject after being told the photo was a phony, “I still feel in my head that I actually was there; I can sort of see images of it….”

False memories and misinformation are so easy to plant that they have been induced in three-month-old infants, gorillas, and even pigeons and rats.32 As humans, we are so prone to false memories that you can sometimes induce one simply by casually telling a person about an incident that didn’t really happen. Over time, that person may “remember” the incident but forget the source of that memory. As a result, he or she will confuse the imagined event with his or her actual past. When psychologists employ this procedure, they are typically successful with between 15 and 50 percent of their subjects. For example, in a recent study, subjects who had actually been to Disneyland were asked to repeatedly read and think about a fake print advertisement for the amusement park.33 The copy in the fake ad invited the reader to “imagine how you felt when you first saw Bugs Bunny with your own eyes up close…. Your mother pushing you in his direction so you would shake his hand, waiting to capture the moment with a picture. You needed no urging, but somehow the closer you got, the bigger he got…. He doesn’t look that big on TV, you thought…. And it hits you hard. Bugs, the character you idolized on TV, is only several feet away…. Your heart stops but that doesn’t stop your hands from sweating. You wipe them off just before reaching up to grab his hand….” Later, when asked in a questionnaire about their personal memories of Disneyland, more than a quarter of the subjects reported having met Bugs Bunny there. Of those, 62 percent remembered shaking his hand, 46 percent recalled hugging him, and one recalled that he was holding a carrot. It was not possible that such encounters really occurred, because Bugs Bunny is a Warner Brothers property, and Disney inviting Bugs to roam Disneyland is something like the king of Saudi Arabia hosting a Passover Seder.

In other studies people have been led to believe that they had once gotten lost in a shopping mall, been rescued by a lifeguard, survived a vicious animal attack, and been uncomfortably licked on the ear by Pluto.34 They have been made to believe that they once had a finger caught in a mousetrap,35 spilled a punch bowl at a wedding reception,36 and were hospitalized overnight for a high fever.37 But even when memories are entirely fabricated, they are usually based on something true. Kids might be induced into believing they took a ride on a hot air balloon—but the details the child fills in to explain the bogus balloon ride photo percolate from the child’s unconscious, from a body of stored sensory and psychological experiences and the expectations and beliefs that stem from those experiences.


THINK BACK ON your life. What do you remember? When I do that, I find that it is not enough. Of my father, for example, who died more than twenty years ago, my memory holds but meager scraps. Walking with him after his stroke, as he leans for the first time on a cane. Or his glittering eyes and warm smile at one of my then-infrequent visits home. Of my earlier years I recall even less. I remember his younger self beaming with joy at a new Chevrolet and erupting with anger when I threw away his cigarettes. And if I go back still further, trying to remember the earliest days of childhood, I have yet fewer, ever more out-of-focus snapshots: of my father hugging me sometimes, or my mother singing to me while she held me and stroked my hair.

I know, when I shower my children with my usual excess of hugs and kisses, that most of those scenes will not stay with them. They will forget, and for good reason. I would not wish upon them the unforgetting life of a Shereshevsky. But my hugs and kisses do not vanish without a trace. They remain, at least in aggregate, as fond feelings and emotional bonds. I know that my memory of my parents would overflow any tiny vessel formed from merely the concrete episodes that my consciousness recalls, and I hope that the same will be true of my children. Moments in time may be forever forgotten, or viewed through a hazy or distorting lens, yet something of them nonetheless survives within us, permeating our unconscious. From there, they impart to us a rich array of feelings that bubble up when we think about those who were dearest to our hearts—or when we think of others whom we’ve only met, or the exotic and ordinary places we’ve lived in and visited, or the events that shaped us. Though imperfectly, our brains still manage to communicate a coherent picture of our life experience.

In the last chapter we saw how our unconscious takes the incomplete data provided by our senses, fills in what’s missing, and passes the perception to our conscious minds. When we look at a scene we think we are seeing a sharp, well-defined picture, like a photograph, but we really see only a small part of the picture clearly, and our subliminal brains paint in the rest. Our brains use the same trick in memory. If you were designing the system for human memory, you probably would not choose a process that tosses out data wholesale and then, when asked to retrieve it, makes things up. But for the vast majority of us, the method works well, most of the time. Our species would not have survived if that weren’t so. Through evolution, perfection may be abandoned, but sufficiency must be achieved. The lesson that teaches me is to be both humble and grateful. Humble, because any great confidence I feel in any particular memory could well be misplaced; but grateful, both for the memories I retain and the ability to not retain all of them. Conscious memory and perception accomplish their miracles with a heavy reliance on the unconscious. In the chapter that follows, we’ll see that this same two-tier system affects what is most important to us: the way we function in our complex human societies.

CHAPTER 4 The Importance of Being Social The fundamental role of human social character … why Tylenol can mend a broken heart

Strange is our situation here on earth. Each of us comes for a short visit, not knowing why, yet sometimes seeming to divine a purpose. From the standpoint of daily life, however, there is one thing we do know: that we are here for the sake of others.

—ALBERT EINSTEIN

I CAME HOME FROM work late one evening, hungry and frustrated, and popped into my mother’s house, which was next door to mine. She was eating a frozen dinner and sipping from a mug of hot water. CNN blared on the TV in the background. She asked how my day had been. I said, “Oh, it was good.” She looked up from her black plastic food tray and, after a moment, said, “No, it wasn’t. What happened? Have some pot roast.” My mother was eighty-eight, hard of hearing, and half blind in her right eye—which was her good eye. But when it came to perceiving her son’s emotions, my mother’s X-ray vision was unimpaired.

As she read my mood with such fluency, I thought about the man who had been my coworker and partner in frustration that day—the physicist Stephen Hawking, who could hardly move a muscle, thanks to a forty-five-year struggle with motor neuron disease. By this stage in the progression of his illness, he could communicate only by painstakingly twitching the cheek muscle under his right eye. That twitch was detected by a sensor on his glasses and communicated to a computer in his wheelchair. In this manner, with the help of some special software, he managed to select letters and words from a screen, and eventually to type out what he wanted to express. On his “good” days, it was as if he were playing a video game where the prize was the ability to communicate a thought. On his “bad” days, it was as if he were blinking in Morse code but had to look up the dot and dash sequence between each letter. On the bad days—and this had been one of them—our work was frustrating for both of us. And yet, even when he could not form words to express his ideas about the wave function of the universe, I had little trouble detecting when his attention shifted from the cosmos to thoughts of calling it quits and moving on to a nice curry dinner. I always knew when he was content, tired, excited, or displeased, just from a glance at his eyes. His personal assistant had this same ability. When I asked her about it, she described a catalog of expressions she’d learned to recognize over the years. My favorite was “the steely-faced glint of glee” he displayed when composing a potent rejoinder to someone with whom he strongly disagreed. Language is handy, but we humans have social and emotional connections that transcend words, and are communicated—and understood—without conscious thought.

The experience of feeling connected to others seems to start very early in life. Studies on infants show that even six-month-olds make judgments about what they observe of social behavior.1 In one such study infants watched as a “climber,” which was nothing more than a disk of wood with large eyes glued onto its circular “face,” started at the bottom of a hill and repeatedly tried but failed to make its way to the top. After a while, a “helper,” a triangle with similar eyes glued on, would sometimes approach from farther downhill and help the climber with an upward push. On other attempts, a square “hinderer” would approach from uphill and shove the circular disk back down.

The experimenters wanted to know if the infants, unaffected and uninvolved bystanders, would cop an attitude toward the hinderer square. How does a six-month-old show its disapproval of a wooden face? The same way six-year-olds (or sixty-year-olds) express social displeasure: by refusing to play with it. That is, when the experimenters gave the infants a chance to reach out and touch the figures, the infants showed a definite reluctance to reach for the hinderer square, as compared to the helper triangle. Moreover, when the experiment was repeated with either a helper and a neutral bystander block or a hinderer and a neutral block, the infants preferred the friendly triangle to the neutral block, and the neutral block to the nasty square. Squirrels don’t set up foundations to cure rabies, and snakes don’t help strange snakes cross the road, but humans place a high value on kindness. Scientists have even found that parts of our brain linked to reward processing are engaged when we participate in acts of mutual cooperation, so being nice can be its own reward.2 Long before we can verbalize attraction or revulsion, we are attracted to the kind and repelled by the unkind.

One advantage of belonging to a cohesive society in which people help one another is that the group is often better equipped than an unconnected set of individuals to deal with threats from the outside. People intuitively realize that there is strength in numbers and take comfort in the company of others, especially in times of anxiety or need. Or, as Patrick Henry famously said, “United we stand, divided we fall.” (Ironically, Henry collapsed and fell into the arms of bystanders shortly after uttering the phrase.)

Consider a study performed in the 1950s. About thirty female students at the University of Minnesota, none of whom had previously met, were ushered into a room and asked not to speak to each other.3 In the room was a “gentleman of serious mien, horn-rimmed glasses, dressed in a white laboratory coat, stethoscope dribbling out of his pocket, behind him an array of formidable electrical junk.” Seeking to induce anxiety, he melodramatically introduced himself as “Dr. Gregor Zilstein of the Medical School’s Departments of Neurology and Psychiatry.” Actually, he was Stanley Schachter, a harmless professor of social psychology. Schachter told the students he had asked them there to serve as subjects in an experiment on the effects of electric shocks. He would be shocking them, he said, and studying their reactions. After going on for seven or eight minutes about the importance of the research, he concluded by saying,

“These shocks will hurt, they will be painful…. It is necessary that our shocks be intense…. [We will] hook you into apparatus such as this [motioning toward the scary equipment behind him], give you a series of shocks, and take various measures such as your pulse rate, blood pressure, and so on.”

Schachter then told the students that he needed them to leave the room for about ten minutes while he brought in still more equipment and set it all up. He noted that there were many rooms available, so they could wait either in a room by themselves or in one with other subjects. Later, Schachter repeated the scenario with a different group of about thirty students. But this time, he aimed to lull them into a state of relaxation. And so, instead of the scary part about intense shocks, he said,

“What we will ask each of you to do is very simple. We would like to give each of you a series of very mild electric shocks. I assure you that what you feel will not in any way be painful. It will resemble more a tickle or a tingle than anything unpleasant.”

He then gave these students the same choice about waiting alone or with others. In reality, that choice was the climax of the experiment; there would be no electric shocks for either group.

The point of the ruse was to see if, because of their anxiety, the group expecting a painful shock would be more likely to seek the company of others than the group not expecting one. The result: about 63 percent of the students who were made anxious about the shocks wanted to wait with others, while only 33 percent of those expecting tickly, tingly shocks expressed that preference. The students had instinctively created their own support groups. It’s a natural instinct. A quick look at a web directory of support groups in Los Angeles, for example, turned up groups focused on abusive behavior, acne, Adderall addiction, addiction, ADHD, adoption, agoraphobia, alcoholism, albinism, Alzheimer’s, Ambien users, amputees, anemia, anger management, anorexia, anxiety, arthritis, Asperger’s syndrome, asthma, Ativan addiction, and autism—and that’s just the A’s. Joining support groups is a reflection of the human need to associate with others, of our fundamental desire for support, approval, and friendship. We are, above all, a social species.

Social connection is such a basic feature of human experience that when we are deprived of it, we suffer. Many languages have expressions—such as “hurt feelings”—that compare the pain of social rejection to the pain of physical injury. Those may be more than just metaphors. Brain-imaging studies show that there are two components to physical pain: an unpleasant emotional feeling and a feeling of sensory distress. Those two components of pain are associated with different structures in the brain. Scientists have discovered that social pain is also associated with a brain structure called the anterior cingulate cortex—the same structure involved in the emotional component of physical pain.4

It’s fascinating that the pain of a stubbed toe and the sting of a snubbed advance share a space in your brain. The fact that they are roommates gave some scientists a seemingly wild idea: Could painkillers that reduce the brain’s response to physical pain also subdue social pain?5 To find out, researchers recruited twenty-five healthy subjects to take two tablets twice each day for three weeks. Half received extra-strength Tylenol (acetaminophen) tablets, the other half placebos. On the last day, the researchers invited the subjects, one by one, into the lab to play a computer-based virtual ball-tossing game. Each person was told they were playing with two other subjects located in another room, but in reality those roles were played by the computer, which interacted with the subjects in a carefully designed manner. In round 1, those reputedly human teammates played nicely with the subjects, but in round 2, after tossing the virtual ball to the subject a few times, the teammates started playing only with each other, rudely excluding the subject from the game, like soccer players who refuse to pass the ball to a peer. After the exercise, the subjects were asked to fill out a questionnaire designed to measure social distress. Compared to those who took the placebo, those who took the Tylenol reported a reduced level of hurt feelings.

There was also a twist. Remember Antonio Rangel’s experiment in which the subjects tasted wine while having their brains scanned in an fMRI machine? These researchers employed the same technique: their subjects played the virtual ball game while lying in an fMRI machine, so even as they were being snubbed by their teammates, their brains were being scanned. The scans showed that the subjects who’d taken Tylenol had reduced activity in the brain areas associated with social exclusion. Tylenol, it seems, really does reduce the neural response to social rejection.

When the Bee Gees long ago sang “How Can You Mend a Broken Heart?” they probably didn’t foresee that the answer was to take two Tylenols. That Tylenol would help really does sound far-fetched, so the brain researchers also performed a clinical test to see if Tylenol had the same effect outside the lab, in the real world of social rejection. They asked five dozen volunteers to fill out a “hurt feelings” survey, a standard psychological tool, every day for three weeks. Again, half the volunteers took a dose of Tylenol twice a day, while the other half took a placebo. The result? The volunteers on Tylenol did indeed report significantly reduced social pain over that time period.

The connection between social pain and physical pain illustrates the links between our emotions and the physiological processes of the body. Social rejection doesn’t just cause emotional pain; it affects our physical being. In fact, social relationships are so important to humans that a lack of social connection constitutes a major risk factor for health, rivaling even the effects of cigarette smoking, high blood pressure, obesity, and lack of physical activity. In one study, researchers surveyed 4,775 adults in Alameda County, near San Francisco.6 The subjects completed a questionnaire asking about social ties such as marriage, contacts with extended family and friends, and group affiliation. Each individual’s answers were translated into a number on a “social network index,” with a high number meaning the person had many regular and close social contacts and a low number representing relative social isolation. The researchers then tracked the health of their subjects over the next nine years. Since the subjects had varying backgrounds, the scientists employed mathematical techniques to isolate the effects of social connectivity from risk factors such as smoking and the others I mentioned above, and also from factors like socioeconomic status and reported levels of life satisfaction. They found a striking result. Over the nine-year period, those who’d placed low on the index were twice as likely to die as individuals who were similar with regard to other factors but had placed high on the social network index. Apparently, hermits are bad bets for life insurance underwriters.


SOME SCIENTISTS BELIEVE that the need for social interaction was the driving force behind the evolution of superior human intelligence.7 After all, it is nice to have the mental capacity to realize that we live in a curved four-dimensional space-time manifold, but unless the lives of early humans depended on having a GPS unit to locate the nearest sushi restaurant, the capability to develop such knowledge was not important to the survival of our species and, hence, did not drive our cerebral evolution. On the other hand, social cooperation and the social intelligence it requires seem to have been crucial to our survival. Other primates also exhibit social intelligence, but not nearly to the extent that we do. They may be stronger and faster, but we have the superior ability to band together and coordinate complex activities. Do you need to be smart to be social? Could the need for innate skill at social interaction have been the reason we developed our “higher” intelligence—and could what we usually think of as the triumphs of our intelligence, such as science and literature, be just a by-product?

Eons ago, having a sushi dinner involved skills a bit more advanced than saying, “Pass the wasabi.” It required catching a fish. Before about fifty thousand years ago, humans did not do that; nor did they eat other animals that were available but difficult to catch. Then, rather abruptly (on the evolutionary scale of time), humans changed their behavior.8 According to evidence uncovered in Europe, within the span of just a few millennia people started fishing, catching birds, and hunting down dangerous but tasty and nutritious large animals. At about the same time, they also started building structures for shelter and creating symbolic art and complex burial sites. Suddenly they had both figured out how to gang up on woolly mammoths and begun to participate in the rituals and ceremonies that are the rudiments of what we now call culture. In a brief period of time, the archaeological record of human activity changed more than it had in the previous million years. The sudden manifestation of the modern capacity for culture, ideological complexity, and cooperative social structure—without any change in human anatomy to explain it—is evidence that an important mutation may have occurred within the human brain, a software upgrade, so to speak, that enabled social behavior and thereby bestowed on our species a survival advantage.

When we think of humans versus dogs and cats, or even monkeys, we usually assume that what distinguishes us is our IQ. But if human intelligence evolved for social purposes, then it is our social IQ that ought to be the principal quality that differentiates us from other animals. In particular, what seems special about humans is our desire and ability to understand what other people think and feel. Called “theory of mind,” or “ToM,” this ability gives humans a remarkable power to make sense of other people’s past behavior and to predict how their behavior will unfold given their present or future circumstances. Though there is a conscious, reasoned component to ToM, much of our “theorizing” about what others think and feel occurs subliminally, accomplished through the quick and automatic processes of our unconscious mind. For example, if you see a woman racing toward a bus that pulls away before she can get on it, you know without giving it any thought that she was frustrated and possibly ticked off about not reaching the bus in time, and when you see a woman moving her fork toward and away from a piece of chocolate cake, you assume she’s concerned about her weight. Our tendency to automatically infer mental states is so powerful that we apply it not only to other people but to animals and even, as the six-month-olds did in the wooden disk study I described above, to inanimate geometrical shapes.9

It is difficult to overestimate the importance to the human species of ToM. We take the operation of our societies for granted, but many of our activities in everyday life are possible only as a result of group efforts, of human cooperation on a large scale. Building a car, for example, requires the participation of thousands of people with diverse skills, in diverse lands, performing diverse tasks. Metals like iron must be extracted from the ground and processed; glass, rubber, and plastics must be created from numerous chemical precursors and molded; batteries, radiators, and countless other parts must be produced; electronic and mechanical systems must be designed; and it all must come together, coordinated from far and wide, in one factory so that the car can be assembled. Today, even the coffee and bagel you might consume while driving to work in the morning is the result of the activities of people all over the world—wheat farmers in one state, bakers most likely in another, dairy farmers yet elsewhere; coffee plantation workers in another country, and roasters hopefully closer to you; truckers and merchant marines to bring it all together; and all the people who make the roasters, tractors, trucks, ships, fertilizer, and whatever other devices and ingredients are involved. It is ToM that enables us to form the large and sophisticated social systems, from farming communities to large corporations, upon which our world is based.

Scientists are still debating whether nonhuman primates use ToM in their social activities, but if they do, it seems to be at only a very basic level.10 Humans are the only animal whose relationships and social organization make high demands on an individual’s ToM. Pure intelligence (and dexterity) aside, that’s why fish can’t build boats and monkeys don’t set up fruit stands. Pulling off such feats makes human beings unique among the animals. In our species, rudimentary ToM develops in the first year. By age four, nearly all human children have gained the ability to assess other people’s mental states.11 When ToM breaks down, as in autism, people can have difficulty functioning in society. In his book An Anthropologist on Mars, the clinical neurologist Oliver Sacks profiled Temple Grandin, a high-functioning autistic woman. She had told him about what it was like to go to the playground when she was a child, observing the other children’s responses to social signals she could not herself perceive. “Something was going on between the other kids,” he described her as thinking, “something swift, subtle, constantly changing—an exchange of meanings, a negotiation, a swiftness of understanding so remarkable that sometimes she wondered if they were all telepathic.”12

One measure of ToM is called intentionality.13 An organism that is capable of reflecting about its own state of mind, about its own beliefs and desires, as in I want a bite of my mother’s pot roast, is called “first-order intentional.” Most mammals fit in that category. But knowing about yourself is a far different skill from knowing about someone else. A second-order intentional organism is one that can form a belief about someone else’s state of mind, as in I believe my son wants a bite of my pot roast. Second-order intentionality is the most rudimentary level of ToM, and all healthy humans have it, at least after their morning coffee. If you have third-order intentionality you can go a step further, reasoning about what one person thinks a second person thinks, as in I believe my mom thinks that my son wants a bite of her pot roast. And if you are capable of going a level beyond that, of thinking I believe my friend Sanford thinks that my daughter Olivia thinks that his son Johnny thinks she is cute or I believe my boss, Ruth, knows that our CFO, Richard, thinks that my colleague John doesn’t believe her budgets and revenue projections can be trusted, then you’re engaging in fourth-order intentionality, and so on. Fourth-order thinking makes for a pretty complicated sentence, but if you ponder these examples for a minute, you’ll probably realize you engage in it quite frequently, for it is typical of what is involved in human social relationships.

Fourth-order intentionality is required to create literature, for writers must make judgments based on their own experiences of fourth-order intentionality, such as I think that the cues in this scene will signal to the reader that Horace thinks that Mary intends to dump him. It is also necessary for politicians and business executives, who could easily be outmaneuvered without that skill. For example, I knew a newly hired executive at a computer games company—call her Alice—who used her highly developed ToM to get out of a touchy situation. Alice felt certain that an outside company that had a long-term contract for programming services with her new employer was guilty of certain financial improprieties. Alice had no proof, and the outside company had an airtight long-term contract that required a $500,000 payment for early termination. But: Alice knew that Bob (the CEO of the outside company) knew that Alice, being new on the job, was afraid to make a misstep. That’s third-order intentionality. Also: Alice knew that Bob knew that she knew that Bob was not afraid of a fight. That’s fourth-order thinking. Understanding this, Alice considered a ploy: What if she made a bluff that she had proof of the impropriety and used that to force Bob to let them out of the contract? How would Bob react? She used her ToM analysis to look at the situation from Bob’s point of view. Bob saw her as someone who was hesitant to take chances and who knew that he was a fighter. Would such a person make a grand claim she couldn’t back up? Bob must have thought not, for he agreed to let Alice’s employer out of the contract for a small fraction of the contractually obligated sum.

The evidence on nonhuman primates seems to show that they fall somewhere between first- and second-order thinking. A chimp may think to itself, I want a banana or even I believe George wants my banana, but it wouldn’t go as far as thinking, I believe George thinks that I want his banana. Humans, on the other hand, commonly engage in third- and fourth-order intentionality and are said to be capable of sixth-order. Tackling those higher-order ToM sentences taxes the mind in a way that, to me, feels analogous to the thinking required when doing research in theoretical physics, in which one must be able to reason about long chains of interrelated concepts.

If ToM both enables social connection and requires extraordinary brain power, that may explain why scientists have discovered a curious connection between brain size and social group size among mammals. To be precise, the size of a species’ neocortex—the most recently evolved part of the brain—as a percentage of that species’ whole brain seems to be related to the size of the social group in which members of that species hang out.14 Gorillas form groups of under ten, spider monkeys closer to twenty, and macaques more like forty—and these group sizes rise in step with the neocortex-to-whole-brain ratio of each species.

Suppose we use the mathematical relationship between group size and relative neocortex size in nonhuman primates to predict the size of human social networks. Does it work? Does the neocortex-to-whole-brain ratio predict the size of human groups, too?

To answer that question, we first have to come up with a way of defining group size among humans. Group size in nonhuman primates is defined by the typical number of animals in what are called grooming cliques. These are social alliances like the cliques our kids form in school or those adults have been known to form at the PTA. In primates, clique members regularly clean each other, removing dirt, dead skin, insects, and other objects by stroking, scratching, and massaging. Individuals are particular about both whom they groom and whom they are groomed by, because these alliances act as coalitions to minimize harassment from others of their kind. Group size in humans is harder to define in any precise way because humans relate to one another in many different types of groups, with different sizes, different levels of mutual understanding, and different degrees of bonding. In addition, we have developed technologies designed specifically to aid large-scale social communication, and we have to be careful to exclude from group size measurements people such as e-mail contacts we hardly know. In the end, when scientists look at groups that seem to be the cognitive equivalent of nonhuman grooming cliques—the clans among Australian aboriginals, the hair-care networks of female bushmen, or the number of individuals to whom people send Christmas cards—the human group size comes out to about 150, just about what the neocortex size model predicts.15

Why should there be a connection between brain power and the number of members in a social network? Think about human social circles, circles consisting of friends and relatives and work associates. If these are to remain meaningful, they can’t get too big for your cognitive capacities, or you won’t be able to keep track of who is who, what they all want, how they relate to one another, who can be trusted, who can be counted on to help out with a favor, and so on.16

To explore just how connected we humans are, in the 1960s the psychologist Stanley Milgram selected about 300 people at random in Nebraska and Boston and asked each of them to start a chain letter.17 The volunteers were sent a packet of materials with a description of the study, including the name of a “target person”—a randomly chosen man in Sharon, Massachusetts, who worked as a stockbroker in Boston. They were instructed to forward the packet to the target person if they knew him or, if they didn’t, to send it to whichever of their acquaintances they deemed most likely to know him. The intention was that the acquaintance, upon receiving the packet, would also follow the instructions and send it along, until eventually someone would be found who did know the target person and would send it directly to him.

Many people along the way didn’t bother, and broke the chain. But out of the initial 300 or so individuals, 64 did generate chains that ultimately found the man in Sharon, Massachusetts. How many intermediaries did it take until someone knew someone who knew someone who knew someone … who knew the target? The median number was only about 5. The study led to the coining of the term “six degrees of separation,” based on the idea that six links of acquaintanceship are enough to connect any two people in the world. The same experiment, made much easier by the advent of e-mail, was repeated in 2003.18 This time the researchers started with 24,000 e-mail users in more than 100 countries, and 18 different target people spread far and wide. Of the 24,000 e-mail chains those subjects started, only about 400 reached their target. But the result was similar: the target was contacted in a median of five to seven steps.

We give out Nobel Prizes in scientific fields like physics and chemistry, but the human brain also deserves a gold medallion for its extraordinary ability to create and maintain social networks, such as corporations, government agencies, and basketball teams, in which people work smoothly together to accomplish a common goal with a minimum of miscommunication and conflict. Perhaps 150 is the natural group size for humans in the wild, unaided by formal organizational structures or communications technology, but given those innovations of civilization, we have blasted through the natural barrier of 150 to accomplish feats that only thousands of humans working together could possibly attain. Sure, the physics behind the Large Hadron Collider, a particle accelerator in Switzerland, is a monument to the human mind. But so are the scale and complexity of the organization that built it—one LHC experiment alone required more than 2,500 scientists, engineers, and technicians in 37 countries to work together, solving problems cooperatively in an ever-changing and complex environment. The ability to form organizations that can create such achievements is as impressive as the achievements themselves.


THOUGH HUMAN SOCIAL behavior is clearly more complex than social behavior in other species, there are also striking commonalities in certain fundamental aspects of the way all mammals connect with others of their species. One of the interesting aspects of most nonhuman mammals is that they are “small-brained.” By that, scientists mean the part of the brain that in humans is responsible for conscious thought is, in nonhuman mammals, relatively small compared to the part of the brain involved in unconscious processes.19 Of course, no one is quite sure exactly how conscious thought arises, but it seems to be centered mainly in the frontal lobe of the neocortex, in particular in a region called the prefrontal cortex. In other animals, these regions of the brain are either much smaller or nonexistent. In other words, animals react more and think less, if at all. So a human’s unconscious mind might raise an alarm at the sight of Uncle Matt stabbing his arm with a shish kebab skewer, only to have the conscious mind remind that human that Uncle Matt thinks it is funny to perform shocking magic tricks. Your pet rabbit’s reaction, in contrast, would probably not be mitigated by such conscious, rational considerations. The rabbit’s reaction would be automatic. It would follow its gut instincts and simply flee Uncle Matt and his skewer. But although a rabbit just can’t take a joke, the brain regions responsible for a rabbit’s unconscious processing are not that different from ours.

In fact, the organization and chemistry of the unconscious brain are shared across mammal species, and many automatic neural mechanisms in apes and monkeys and even lower mammals are similar to our own, and produce startlingly humanlike behavior.20 So although other animals can’t teach us much about ToM, they can provide insights into some of the other automatic and unconscious aspects of our social tendencies. That’s why, while other people read books like Men Are from Mars, Women Are from Venus to learn about male and female social roles, I turn to sources like “Mother-Infant Bonding and the Evolution of Mammalian Social Relationships”—which, some say, serves to minimize the mammalian social relationships in my own life.

Consider this quote from that work:

Reproductive success in males is generally determined by competing with other males to mate with as many females as possible. Hence, males rarely form strong social bonds and male coalitions are typically hierarchical with an emphasis on aggressive rather than affiliative behavior.21

That sounds like something you’d observe hanging out at a sports bar, but scientists are discussing the behavior of nonhuman mammals. Perhaps the difference between human males and bulls, tomcats, and male sheep is not that nonhuman mammals don’t have sports bars but that, to nonhuman mammals, the whole world is a sports bar. Of females, those same researchers write:

The female reproductive strategy is one of investing in the production of a relatively few offspring … and success is determined by the quality of care and the ability to enable infant survival beyond the weaning age. Females therefore form strong social bonds with their infants and female-female relationships are also strongly affiliative.

That, too, sounds familiar. One has to be careful about reading too much into mammalian behavior “in general,” but this does seem to explain why it is mostly women who have slumber parties and form book clubs, and why, despite my promises to be affiliative rather than aggressive, they have never let me into either. The fact that on some level human and nonhuman mammals seem to behave similarly does not mean that a cow would enjoy a candlelight dinner, that a mother sheep wants nothing more than to see her babies grow up happy and well-adjusted, or that rodents aspire to retiring in Tuscany with their soul mates. What it does suggest is that although human social behavior is far more complex than that of other animals, the evolutionary roots of our behaviors can be found in those animals, and we can learn something about ourselves by studying them.

Just how programmed is the social behavior of nonhuman mammals? Take sheep, for example.22 A female sheep—a ewe—is by disposition rather nasty to baby sheep (or, as the meat industry likes us to call them, lambs). If a lamb approaches, wishing to suckle, the ewe will scream at it with a high-pitched bleat, and maybe throw in a head butt or two. However, the birthing process transforms the mother. It seems magical, that transformation from shrew to nurturer. But it doesn’t seem to be due to conscious, maternal thoughts of her child’s love. It’s chemical, not magical. The process is instigated by the stretching of the birth canal, which causes a simple protein called oxytocin to be released in the ewe’s brain. This opens a window of a couple hours’ duration in which the ewe is open to bonding. If a lamb approaches her while that window is open, the ewe will bond with it, whether it is her baby, her neighbor’s, or a baby from the farm down the street. Then, once the oxytocin window has closed, she’ll stop bonding with new lambs. After that, if she has bonded with a lamb, she’ll continue to suckle it and to speak soothingly to it—which in sheep talk means low-pitched bleats. But she’ll be her nasty old self to all other lambs, even to her own if it didn’t approach her during the bonding window. Scientists, however, can open and close this bonding window at will, by injecting the ewe with oxytocin or inhibiting her from producing it herself. It’s like flicking a switch on a robot.

Another famous series of studies in which scientists have been able to program mammalian behavior by chemical manipulation concerns the vole, a small rodent that resembles a mouse and encompasses about 150 different species. One of those species, the prairie vole, would be a model citizen in human society. Prairie voles mate for life. They are loyal—among prairie voles whose partner disappears, for example, fewer than 30 percent will shack up with someone else.23 And they make responsible fathers—the males stick around to guard the nest and share in the parenting. Scientists study prairie voles because they are a fascinating contrast with two related species of voles, the montane vole and the meadow vole. In contrast to prairie voles, montane and meadow voles form societies of sexually promiscuous loners.24 The males of those species are, in human terms, ne’er-do-wells. They will mate with whatever female is around, then wander off and leave her to take care of the kids. If placed randomly in a large room, they avoid others of their species, preferring to crawl off to some isolated corner. (Prairie voles, on the other hand, will cluster in little chat groups.)

What is amazing about these creatures is that scientists have been able to identify the specific brain characteristic responsible for the behavioral differences among vole species, and to use that knowledge to change their behavior from that of one species to that of another. The chemical involved is again oxytocin. To have an effect on brain cells, oxytocin molecules first have to bind to receptors—specific molecules on the surface membrane of a cell. Monogamous prairie voles have many receptors for oxytocin and a related hormone called vasopressin in a particular region of the brain. A similarly high concentration of oxytocin and vasopressin receptors is found in that region of the brain in other monogamous mammals. But in promiscuous voles, there is a dearth of those receptors. And so, for example, when scientists manipulate a meadow vole’s brain to increase the number of receptors, the loner meadow vole suddenly becomes outgoing and sociable like its cousin the prairie vole.25

Unless you’re an exterminator, I’ve probably now supplied more than you need to know about prairie voles, and as for lambs, most of us never come into contact with them except those accompanied by mint jelly. But I’ve gone into detail about oxytocin and vasopressin because they play an important role in the modulation of social and reproductive behavior in mammals, including ourselves. In fact, related compounds have played a role in organisms for at least seven hundred million years, and are at work even in invertebrates such as worms and insects.26 Human social behavior is obviously more advanced and more nuanced than that of voles and sheep. Unlike them, we have ToM, and we are far more capable of overruling unconscious impulses through conscious decisions. But in humans, too, oxytocin and vasopressin regulate bonding.27 In human mothers, as in ewes, oxytocin is released during labor and delivery. It is also released in a woman when her nipples or cervix are stimulated during sexual intimacy and in both men and women when they reach sexual climax. And in both men and women, the oxytocin and vasopressin that are released into the brain after sex promote attraction and love. Oxytocin is even released during hugs, especially in women, which is why mere casual physical touch can lead to feelings of emotional closeness even in the absence of a conscious, intellectual connection between the participants.

In the broader social environment, oxytocin also promotes trust, and is produced when people have positive social contact with others.28 In one experiment, two strangers played a game in which they could cooperate to earn money. But the game was designed so that each contestant could also gain at the expense of the other. As a result, trust was an issue, and as the game progressed the players gauged each other’s character. Each assessed whether his or her partner tended to play fairly, so both players could benefit equally, or selfishly, to reap a greater benefit at his or her expense.

The unique aspect of this study was that the researchers monitored the players’ oxytocin levels by taking blood samples after they made their decisions. They found that when a player’s partner played in a manner that indicated trust, the player’s brain responded to that show of trust by releasing oxytocin. In another study, in which subjects played an investment game, investors who inhaled an oxytocin nose spray were much more likely to show trust in their partners, by investing more money with them. And when asked to categorize faces based on their expression, volunteers who were given oxytocin rated strangers as appearing more trustworthy and attractive than did other subjects not administered the drug. (Not surprisingly, oxytocin sprays are now available over the Internet, though they are not very effective unless the oxytocin is sprayed directly into the target person’s nostril.)

One of the most striking pieces of evidence of our automatic animal nature can be seen in a gene that governs vasopressin receptors in human brains. Scientists discovered that men who have two copies of a certain form of this gene have fewer vasopressin receptors, which makes them analogous to promiscuous voles. And, indeed, they exhibit the same sort of behavior: men with fewer vasopressin receptors are twice as likely to have experienced marital problems or the threat of divorce and half as likely to be married as men who have more vasopressin receptors.29 So although we are much more complex in our behaviors than sheep and voles, people, too, are hardwired to certain unconscious social behaviors, a remnant of our animal past.


SOCIAL NEUROSCIENCE IS a new field, but the debate over the origin and nature of human social behavior is probably as old as human civilization itself. Philosophers of centuries past didn’t have access to studies like those of the lambs and voles; however, as long as they have speculated about the mind, they have debated the degree to which we are in conscious control of our lives.30 They used different conceptual frameworks, but observers of human behavior from Plato to Kant usually found it necessary to distinguish between direct causes of behavior—those motivations we can be in touch with through introspection—and hidden internal influences that could only be inferred.

In modern times, as I mentioned, it was Freud who popularized the unconscious. But though his theories had great prominence in clinical applications and popular culture, Freud influenced books and films more than he influenced experimental research in psychology. Through most of the twentieth century, empirical psychologists simply neglected the unconscious mind.31 Odd as it may sound today, in the first half of that century, which was dominated by those in the behaviorist movement, psychologists even sought to do away with the concept of mind altogether. They not only likened the behavior of humans to that of animals, they considered both humans and animals to be merely complex machines that responded to stimuli in predictable ways. However, though the introspection elicited by Freud and his followers is unreliable, and the inner workings of the brain were, at the time, unobservable, the idea of completely disregarding the human mind and its thought processes struck many as absurd. By the end of the 1950s the behaviorist movement had faded, and two new movements grew in its place, and flourished. One was cognitive psychology, inspired by the computer revolution. Like behaviorism, cognitive psychology generally rejected introspection. But cognitive psychology did embrace the idea that we have internal mental states such as beliefs. It treated people as information systems that process those mental states much in the way a computer processes data. The other movement was social psychology, which aimed to understand how people’s mental states are affected by the presence of others.

With these movements, psychology once again embraced the study of the mind, but both movements remained dubious about the mysterious unconscious. After all, if people are unaware of subliminal processes, and if one cannot trace them within the brain, what evidence do we have that such mental states are even real? In both cognitive and social psychology, the term “unconscious” was thus usually avoided. Still, like the therapist who doggedly brings you back again and again to the subject of your father, a handful of scientists kept doing experiments whose outcomes suggested that such processes had to be investigated, because they played such an important role in social interactions. By the 1980s, a number of now-classic experiments offered powerful evidence of the unconscious, automatic components of social behavior.

Some of those early studies of behavior drew directly on Frederic Bartlett’s memory theories. Bartlett believed that the distortions he had observed in people’s recall could be accounted for by assuming that their minds followed certain unconscious mental scripts, which were aimed at filling in gaps and making information consistent with the way they thought the world to be. Wondering whether our social behavior might also be influenced by some unconscious playbook, cognitive psychologists postulated the idea that many of our daily actions proceed according to predetermined mental “scripts”32—that they are, in fact, mindless.

In one test of that idea, an experimenter sat in a library and kept an eye on the copier. When someone approached it, the experimenter rushed up and tried to cut in front, saying, “Excuse me, I have five pages. May I use the Xerox machine?” Sure, sharing is caring, but unless the subject was making a great many more than five copies, the experimenter has provided no justification for the intrusion, so why yield? Apparently a good number of people felt that way: 40 percent of the subjects gave the equivalent of that answer, and refused. The obvious way to increase the likelihood of compliance is to offer a valid and compelling reason why someone should let you go first. And indeed, when the experimenter said, “Excuse me, I have five pages. May I use the Xerox machine, because I’m in a rush?” the rate of refusals fell radically, from 40 percent to just 6 percent. That makes sense, but the researchers suspected that something else might be going on; maybe people weren’t consciously assessing the reason and deciding it was a worthy one. Maybe they were mindlessly—automatically—following a mental script.

That script might go something like this: Someone asks a small favor with zero justification: say no; someone asks a small favor but offers a reason, any reason: say yes. Sounds like a robot or computer program, but could it apply to people? The idea is easy to test. Just walk up to people approaching a photocopier and to each of them say something like “Excuse me, I have five pages. May I use the Xerox machine, because xxx,” where “xxx” is a phrase that, though parading as the reason for the request, really provides no justification at all. The researchers chose as “xxx” the phrase “because I have to make some copies,” which merely states the obvious and does not offer a legitimate reason for butting in. If the people making copies consciously weighed this nonreason against their own needs, one would expect them to refuse in the same proportion as in the case in which no reason was offered—about 40 percent. But if the very act of giving a reason was important enough to trigger the “yes” aspect of the script, regardless of the fact that the excuse itself had no validity, only about 6 percent should refuse, as occurred in the case in which the reason provided—“I’m in a rush”—was compelling. And that’s exactly what the researchers found. When the experimenter said, “Excuse me, I have five pages. May I use the Xerox machine, because I have to make some copies?” only 7 percent refused, virtually the same number as when a valid and compelling reason was given. The lame reason swayed as many people as the legitimate one.

In their research report, those who conducted this experiment wrote that to unconsciously follow preset scripts “may indeed be the most common mode of social interaction. While such mindlessness may at times be troublesome, this degree of selective attention, of tuning the external world out, may be an achievement.” Indeed, in evolutionary terms, here is the unconscious performing its usual duty, automating tasks so as to free us to respond to other demands of the environment. In modern society, that is the essence of multitasking—the ability to focus on one task while, with the aid of automatic scripts, performing others.

Throughout the 1980s, study after study seemed to show that, because of the influence of the unconscious, people did not realize the reasons for their feelings, behavior, and judgments of other people, or how they communicated nonverbally with others. Eventually psychologists had to rethink the role of conscious thought in social interactions. And so the term “unconscious” was resurrected, though also sometimes replaced by the untainted “nonconscious,” or more specific terms like “automatic,” “implicit,” or “uncontrolled.” But these experiments were mainly clever behavioral studies, and psychologists could still only guess at the mental processes that caused the participants’ reactions. You can tell a lot about a restaurant’s recipes by sitting at a table and sampling the food, but to really know what is going on, you have to look in the kitchen, and the human brain remained hidden behind the closed doors of the skull, its inner workings virtually as inaccessible as they had been a century earlier.


THE FIRST SIGN that the brain could be observed in action came in the nineteenth century when scientists noted that nerve activity causes changes in blood flow and oxygen levels. By monitoring those levels, one could, in theory, watch a reflection of the brain at work. In his 1890 book The Principles of Psychology, William James references the work of the Italian physiologist Angelo Mosso, who recorded the pulsation of the brain in patients who had gaps in their skull following brain surgery.33 Mosso observed that the pulsation in certain regions increased during mental activity, and he speculated, correctly, that the changes were due to neuronal activity in those regions. Unfortunately, with the technology of that day, one could make such observations and measurements only if the skull was physically cut away, making the brain accessible.34 That’s not a viable strategy for studying the human brain, but that is exactly what scientists at Cambridge University did in 1899—to dogs, cats, and rabbits. The Cambridge scientists employed electric currents to stimulate various nerve pathways in each animal, then measured the brain’s response with tools applied directly to the living tissue. They showed a link between brain circulation and metabolism, but the method was both crude and cruel, and it didn’t catch on. Nor did the invention of X-rays provide an alternative, for X-rays can detect only the physical structures of the brain, not its dynamic, ever-changing electrical and chemical processes. And so for another century the working brain remained off-limits. Then, in the late 1990s, about a hundred years after Freud’s book The Interpretation of Dreams, fMRI suddenly became widely available.

As I mentioned in the Prologue, fMRI, or functional magnetic resonance imaging, is a twist on the ordinary MRI machine your doctor uses. The nineteenth-century scientists had concluded, correctly, that the key to identifying which part of the brain is at work at any given moment is blood flow: when nerve cells become active, they consume more oxygen, and circulation to that region increases. With fMRI, scientists can map those changes in blood oxygenation from outside the skull, because oxygenated and deoxygenated blood interact differently with the scanner’s magnetic field. Thus fMRI allows the noninvasive three-dimensional exploration of the normal human brain in operation. It not only provides a map of the structures in the brain but indicates which among them are active at any given moment, and allows scientists to follow how the active areas change over time. In that way, mental processes can now be associated with specific neural pathways and brain structures.

On many occasions in the past pages I’ve said that an experimental subject’s brain had been imaged, or remarked that a particular part of the brain was or was not active in some circumstance. For example, I said that patient TN’s occipital lobe was not functioning, explained that it is the orbitofrontal cortex that is associated with the experience of pleasure, and reported that brain-imaging studies show the existence of two centers of physical pain. All these statements were made possible by the technology of fMRI. There have been other new and exciting technologies developed in recent years, but the advent of fMRI changed the way scientists study the mind, and this advance continues to play a role of unparalleled importance in basic research.

Were we sitting in front of a computer housing your fMRI data, we could make a slice of any section of your brain, in any orientation, and view it almost as if we had dissected the brain itself. The image above, for example, displays a slice along the brain’s central plane, taken while the subject engages in daydreaming. The shaded areas on the left and right indicate activity in the medial prefrontal cortex and the posterior cingulate cortex, respectively.

Courtesy of Mike Tyszka

Neuroscientists today commonly divide the brain into three crude regions, based on their function, physiology, and evolutionary development.35 In that categorization, the most primitive region is the “reptilian brain,” responsible for basic survival functions such as eating, breathing, and heart rate, and also for primitive versions of the emotions of fear and aggression that drive our fight-or-flight instincts. All vertebrate creatures—birds, reptiles, amphibians, fish, and mammals—have the reptilian brain structures.

The second region, the limbic system, is more sophisticated, the source of our unconscious social perception. It is a complex system whose definition can vary a bit from researcher to researcher, because although the original designation was anatomical, the limbic system has come to be defined instead by its function as the system in the brain instrumental in the formation of social emotions. In humans, the limbic system is often defined as a ring of structures, some of which we have already run into, including the ventromedial prefrontal cortex, dorsal anterior cingulate cortex, amygdala, hippocampus, hypothalamus, components of the basal ganglia, and, sometimes, the orbitofrontal cortex.36 The limbic system augments the reflexive reptilian emotions and is important in the genesis of social behaviors.37 Many of the structures in this second region are sometimes grouped together into what is called the “old mammalian brain,” which all mammals have, as opposed to the third region—the neocortex, or “new” mammalian brain—whose structures the more primitive mammals generally lack.

The neocortex lies above most of the limbic system.38 You may recall from Chapter 2 that it is divided into lobes and is oversized in humans. It is this gray matter that people usually think of when they talk about the brain. In Chapter 2, I talked about the occipital lobe, which is located at the back of your head and contains your primary visual processing centers. In this chapter, I’ve talked about the frontal lobe, which is, as the name indicates, located at the front.

The genus Homo, of which humans, Homo sapiens, are the only surviving species, first evolved about two million years ago. Anatomically, Homo sapiens reached its present form about two hundred thousand years ago, but as I’ve said, behaviorally, we humans did not take on our present characteristics, such as culture, until about fifty thousand years ago. In the time between the original Homo species and ourselves, the brain doubled in size. A disproportionate share of that growth occurred in the frontal lobe, and so it stands to reason that the frontal lobe is the location of some of the specific qualities that make humans human. What does this expanded structure do to enhance our survival ability to a degree that might have justified nature’s favoring it?

The frontal lobe contains regions governing the selection and execution of fine motor movements—especially of the fingers, hands, toes, feet, and tongue—that are clearly important for survival in the wild. It is interesting to note that control of the motor movements of the face is based in the frontal lobe, too. As we’ll see in Chapter 5, the fine nuances of facial expression are also crucial to survival because of the role they play in social communication. In addition to regions associated with motor movements, as I mentioned earlier, the frontal lobe contains a structure called the prefrontal cortex. “Prefrontal” means, literally, “in front of the front,” and that’s where the prefrontal cortex sits, just behind the forehead. It is in this structure that we most clearly see our humanity. The prefrontal cortex is responsible for planning and orchestrating our thoughts and actions in accordance with our goals, and integrating conscious thought, perception, and emotion; it is thought to be the seat of our consciousness.39 The ventromedial prefrontal cortex and the orbitofrontal cortex, parts of the limbic system, are subsystems within the prefrontal cortex.

Though this anatomical division of the brain into reptilian; limbic, or old mammalian; and neocortex, or new mammalian, is useful—and I’ll occasionally refer to it—it’s important to realize that it is a simplified picture. The full story is more complex. For example, the neat evolutionary steps it implies are not quite the way things happened; some so-called primitive creatures have neocortical-like tissue.40 As a result, the behavior of those animals may not be as completely instinct-driven as once thought. Also, the three discrete areas are described as almost independent, but in reality they are integrated and work in concert, with numerous neural interconnections among them. The complexity of the brain is reflected by the fact that the hippocampus alone, a tiny structure deep in the brain, is the subject of a textbook several inches thick. Another recent work, an academic article that described research on a single type of nerve cell in the hypothalamus, was over one hundred pages long and cited seven hundred intricate experiments. That’s why, despite all the research, the human mind, both conscious and unconscious, still holds enormous mystery, and why tens of thousands of scientists worldwide are still working to elucidate the function of these regions, on the molecular, cellular, neural, and psychological levels, providing ever deeper insights into how the pathways interact to produce our thoughts, feelings, and behavior.

With the advent of fMRI and the growing ability of scientists to study how different brain structures contribute to thoughts, feelings, and behavior, the two movements that followed behaviorism began to join forces. Social psychologists realized they could untangle and validate their theories of psychological processes by connecting them to their sources in the brain. Cognitive psychologists realized they could trace the origins of mental states. Also, the neuroscientists who focused on the physical brain realized they could better understand its functioning if they learned about the mental states and psychological processes the different structures produce. And so the new field of social cognitive neuroscience, or, simply, social neuroscience, emerged. It is a ménage à trois, a “household of three”: social psychology, cognitive psychology, and neuroscience. I said earlier that the first ever social neuroscience meeting took place in April 2001. To get an idea of how fast the field exploded, consider this: The first ever academic publication employing fMRI came in 1991.41 In 1992, there were only four such publications in the entire year. Even as late as 2001, an Internet search using the words “social cognitive neuroscience” yielded just 53 hits. But an identical search performed in 2007 yielded more than 30,000.42 By then, neuroscientists were turning out fMRI studies every three hours.

Today, with researchers’ new ability to watch the brain at work and to understand the origins and depth of the unconscious, the dreams of Wundt, James, and the others in the New Psychology who wanted to make that field into a rigorous experimental science are finally being realized. And though Freud’s concept of the unconscious was flawed, his stress on the importance of unconscious thought is appearing ever more valid. Vague concepts like the id and the ego have now given way to maps of brain structure, connectivity, and function. What we’ve learned is that much of our social perception—like our vision, hearing, and memory—appears to proceed along pathways that are not associated with awareness, intention, or conscious effort. How this subliminal programming affects our lives, the way we present ourselves, the way we communicate with and judge people, the way we react to social situations, and the way we think of ourselves, is the territory we are about to explore.
