PART II The Social Unconscious

CHAPTER 5 Reading People How we communicate without speaking … how to know who’s the boss by watching her eyes

Your amicable words mean nothing if your body seems to be saying something different.

—JAMES BORG

IN THE LATE summer of 1904, just a few months before the start of Einstein’s “miracle year,” the New York Times reported on another German scientific miracle, a horse that “can do almost everything but talk.”1 The story, the reporter assured us, was not drawn from the imagination but was based on the observations of a commission appointed by the Prussian minister of education, as well as the observations of the reporter himself. The subject of the article was described as a stallion, later dubbed Clever Hans, who could perform arithmetic and intellectual tasks on the level of those performed in one of today’s third-grade classrooms. Since Hans was nine that would have been appropriate for his age, if not his species. In fact, rather like the average human nine-year-old, Hans had by then received four years of formal instruction, homeschooled by his owner, a Herr Wilhelm von Osten. Von Osten, who taught math at a local gymnasium—something like a high school—had a reputation for being an old crank, and also for not caring if he was viewed that way. Every day at a certain hour von Osten stood before Hans—in full view of his neighbors—and instructed the horse by employing various props and a blackboard, then rewarded him with a carrot or a piece of sugar.

Hans learned to respond to his master’s questions by stamping his right hoof. The New York Times reporter described how, on one occasion, Hans was told to stamp once for gold, twice for silver, and three times for copper, and then correctly identified coins made from those metals. He identified colored hats in an analogous manner. Using the sign language of hoof taps, he could also tell time; identify the month and the day of the week; indicate the number of 4’s in 8, 16, and 32; add 5 and 9; and even indicate the remainder when 7 was divided by 3. By the time the reporter witnessed this display, Hans had become something of a celebrity. Von Osten had been exhibiting him at gatherings throughout Germany—even at a command performance before the kaiser himself—and he never charged admission, because he was trying to convince the public of the potential for humanlike intelligence in animals. So much interest was there in the phenomenon of the high-IQ horse that a commission had been convened to assess von Osten’s claims, and it concluded that no trickery was involved in Hans’s feats. According to the statement issued by the commission, the explanation for the horse’s ability lay in the superior teaching methods employed by von Osten—methods that corresponded to those employed in Prussia’s own elementary schools. It’s not clear if the “superior teaching methods” referred to the sugar or the carrots, but according to one commission member, the director of the Prussian Natural History Museum, “Herr von Osten has succeeded in training Hans by cultivating in him a desire for delicacies.” He added, “I doubt whether the horse really takes pleasure in his studies.” Even more evidence, I suppose, of Hans’s startling humanity.

But not everyone was convinced by the commission’s conclusions. One telling indication that there might be more to Hans’s feats than an advance in equine teaching methodology was that Hans could sometimes answer von Osten’s questions even if von Osten didn’t verbalize them. That is, von Osten’s horse seemed to be able to read his mind. A psychologist named Oskar Pfungst decided to investigate. With von Osten’s encouragement, Pfungst conducted a series of experiments. He discovered that the horse could answer questions posed by people other than von Osten, but only if the questioners knew the answer, and only if they were visible to Hans during the hoof tapping.

It required a series of additional careful experiments, but Pfungst eventually found that the key to the horse’s intellectual feats lay in involuntary and unconscious cues displayed by the questioner. As soon as a problem was posed, Pfungst discovered, the questioner would involuntarily and almost imperceptibly bend forward, which prompted Hans to begin tapping. Then, as the correct answer was reached, another slight bit of body language would signal Hans to stop. It was a “tell,” as the poker crowd calls it, an unconscious change of demeanor that broadcasts a clue to a person’s state of mind. Every one of the horse’s questioners, Pfungst noted, made similar “minimal muscular movements” without being aware of doing so. Hans might not have been a racehorse, but he had the heart of a poker player.

In the end Pfungst demonstrated his theory with a flourish by playing the role of Hans and enlisting twenty-five experimental subjects to question him. None were aware of the precise purpose of the experiment, but all were aware they were being observed for clues that might give the answer away. Twenty-three of the twenty-five made such movements anyway, though all denied having done so. Von Osten, for the record, refused to accept Pfungst’s conclusions and continued to tour Germany with Hans, drawing large and enthusiastic crowds.

As anyone who has ever been on the receiving end of a fellow driver’s display of the middle finger knows, nonverbal communication is sometimes quite obvious and conscious. But then there are those times when a significant other says, “Don’t look at me like that,” and you respond, “Don’t look at you like what?,” knowing full well the nature of the feelings you were so sure you had hidden. Or you might smack your lips and proclaim that your spouse’s scallop-and-cheddar casserole is yummy but somehow still elicit the response “What, you don’t like it?” Don’t fret; if a horse can read you, why not your spouse?

Scientists attach great importance to the human capacity for spoken language. But we also have a parallel track of nonverbal communication, and those messages may reveal more than our carefully chosen words and sometimes be at odds with them. Since much, if not most, of the nonverbal signaling and reading of signals is automatic and performed outside our conscious awareness and control, through our nonverbal cues we unwittingly communicate a great deal of information about ourselves and our state of mind. The gestures we make, the position in which we hold our bodies, the expressions we wear on our faces, and the nonverbal qualities of our speech—all contribute to how others view us.


THE POWER OF nonverbal cues is particularly evident in our relationship with animals because, unless you live in a Pixar movie, nonhuman species have a limited understanding of human speech. Like Hans, though, many animals are sensitive to human gestures and body language.2 One recent study, for example, found that when trained properly, a wolf can be a decent acquaintance and respond to a human’s nonverbal signals.3 Though you wouldn’t want to name a wolf Fido and leave it to play with your one-year-old, wolves are actually very social animals, and one reason they can respond to nonverbal cues from humans is that they have a rich repertoire of such signals within their own community. Wolves engage in a number of cooperative behaviors that require skill in predicting and interpreting the body language of their peers. So if you’re a wolf, you know that when a fellow wolf holds its ears erect and forward and its tail vertical, it is signaling dominance. If it pulls its ears back and narrows its eyes, it is suspicious. If it flattens its ears against its head and tucks its tail between its legs, it is fearful. Wolves haven’t been explicitly tested, but their behavior seems to imply that they are capable of at least some degree of ToM. Still, wolves are not man’s best friend. Instead it is the dog, which originated from wolves, that is best at reading human social signals. At that task, dogs appear even more skilled than our primate relatives. That finding surprised a lot of people because primates are far superior at other typical human endeavors, like problem solving and cheating.4 This suggests that during the process of domestication, evolution favored those dogs who developed mental adaptations allowing them to be better companions to our species5—and hence to avail themselves of the benefits of home and hearth.

One of the most revealing studies of human nonverbal communication was performed using an animal with which humans rarely share their homes, at least not intentionally: the rat. In that study, students in an experimental psychology class were each given five of those creatures, a T-shaped maze, and a seemingly simple assignment.6 One arm of the T was colored white, the other gray. Each rat’s job was to learn to run to the gray side, at which time it would be rewarded with food. The students’ job was to give each rat ten chances each day to learn that the gray side of the maze was the one that led to food and to objectively record each rat’s learning progress, if any. But it was actually the students, not the rats, who were the guinea pigs in this experiment. The students were informed that through careful breeding it was possible to create strains of maze-genius and maze-dummy rats. Half the students were told that their rats were the Vasco da Gamas of maze explorers, while the other half were told that theirs had been bred to have no sense of direction at all. In reality, no such selective breeding had been performed, and the animals were effectively interchangeable, except perhaps to their mothers. The real point of the experiment was to compare the results obtained by the two distinct groups of humans, to see if their expectations would bias the results achieved by their rats.

The researchers found that the rats the students thought were brilliant performed significantly better than the rats believed to be on the dumb side. The researchers then asked each student to describe his or her behavior toward the rats, and an analysis showed differences in the manner in which students in each group related to the animals. For example, judging from their reports, those who believed their rats to be high achievers handled them more and were gentler, thereby communicating their attitude. Of course, that might have been intentional, and the cues we are interested in are those that are unintentional and difficult to control. Luckily, another pair of researchers shared that curiosity.7 They essentially repeated the experiment but added an admonishment to the students that a key part of their task was to treat each rat as they would if they had no prior knowledge about its breeding. Differences in handling, they were warned, could skew the results and, by implication, their grade. Despite these caveats, the researchers also found superior performance among the rats whose handlers expected it. The students attempted to act impartially, but they couldn’t. They unconsciously delivered cues, based on their expectations, and the rats responded.

It’s easy to draw analogies with how unconsciously communicated expectations might also affect human performance, but are they accurate? One of the researchers in the rat study, Robert Rosenthal, decided to find out.8 His plan was to again have his students conduct an experiment, but this time they would experiment on people, not rats. That, of course, involved altering the experiment to be better suited to human subjects. Rosenthal came up with this: he asked the student experimenters—who were themselves the true subjects of the experiment—to show their subjects photographs of people’s faces and request that they rate each face on the degree of success or failure they felt it reflected. Rosenthal had pretested a large set of photos, and he gave his students only those photos that had been judged as neutral. But that’s not what he told them. He said he was trying to duplicate an experiment that had already been performed, and he told half the experimenters that their stack of photos depicted faces that had been rated as successful, and the other half that theirs were rated as failures.

In order to make sure the student experimenters did not use any verbal language to communicate their expectations, Rosenthal gave them all a written script to follow and warned them not to deviate from it in any way or speak any other words. Their job was merely to present the photos to their subjects, read the instructions, and record their subjects’ responses. One could hardly take stronger precautions to discourage experimenter bias. But would their nonverbal communication nevertheless flag their expectations? Would the human subjects respond to these cues just as the rats had done?

On average, the students who expected their subjects to accord high success ratings to the photos obtained such ratings. What’s more, every single student who had been led to expect high ratings obtained higher ratings from his or her subjects than did any of those expecting low ratings. Somehow they were subliminally communicating their expectations. But how?

A year later, another set of researchers repeated Rosenthal’s study, with a twist.9 During the course of that study, they recorded the experimenters’ instructions to their subjects. Then they conducted another experiment, in which they eliminated the human experimenters and instead communicated the instructions to the subjects using the tape recordings, thus getting rid of all cues other than those that could be transmitted through the sound of the voice. Again the results were biased, but only about half as much. So one important way the experimenters’ expectations were communicated was through the inflection and tonal quality of their voices. But if that is just half the story, what’s the other half? No one knows for sure. Over the years, many scientists have tried to find out by doing variants of the experiment, but though they confirmed the effect, none was ever able to specify any more precisely just what the other nonverbal signals were. Whatever they were, they were subtle and unconscious and probably varied considerably among the individuals.

The lesson learned has obvious applications in our personal and professional lives, with regard to our family, our friends, our employees, our employers and even the subjects being interviewed in a marketing focus group: whether or not we wish to, we communicate our expectations to others, and they often respond by fulfilling those expectations. You can probably think of expectations, whether stated or not, that you have regarding most people you interact with. And they have expectations of you. That’s one of the gifts I received from my parents: to be treated like the Vasco da Gama rats, to be made to feel as if I could navigate my way to success in whatever I set out to do. It’s not that my parents talked to me about their belief in me, but I somehow felt it, and it has always been a source of strength.

Rosenthal went on to study precisely that—what expectations mean for our children.10 In one line of research he showed that teachers’ expectations greatly affect their students’ academic performance, even when the teachers try to treat them impartially. For example, he and a colleague asked schoolkids in eighteen classrooms to complete an IQ test. The teachers, but not the students, were given the results. The researchers told the teachers that the test would indicate which children had unusually high intellectual potential.11 What the teachers didn’t know was that the kids named as gifted did not really score higher than average on the IQ test—they actually had average scores. Shortly afterward, the teachers rated those not labeled gifted as less curious and less interested than the gifted students—and the students’ subsequent grades reflected that.

But what is really shocking—and sobering—is the result of another IQ test, given eight months later. When you administer an IQ test a second time, you expect that each child’s score will vary some. In general, about half of the children’s scores should go up and half down, as a result of changes in the individual’s intellectual development in relation to his peers or simply of random variation. When Rosenthal administered the second test, he indeed found that about half the kids labeled “normal” showed a gain in IQ. But among those who’d been singled out as brilliant, he obtained a different result: about 80 percent had an increase of at least 10 points. What’s more, about 20 percent of the “gifted” group gained 30 or more IQ points, while only 5 percent of the other children gained that many. Labeling children as gifted had proved to be a powerful self-fulfilling prophecy. Wisely, Rosenthal hadn’t falsely labeled any kids as being below average. The sad thing is that such labeling does happen, and it is reasonable to assume that the self-fulfilling prophecy also works the other way: that branding a child a poor learner will contribute to making the child exactly that.


HUMANS COMMUNICATE VIA a rich linguistic system whose development was a defining moment in the evolution of our species, an innovation that remade the character of human society. It’s an ability that seems to be unique.12 In other animals, communication is limited to simple messages, such as identifying themselves or issuing warnings; there is little complex structure. Had Hans, for example, been required to answer in complete sentences, the jig would have been up. Even among primates, no species naturally acquires more than a few signals or combines them in anything but a rudimentary manner. The average human, on the other hand, is familiar with tens of thousands of words and can string them together according to complex rules, with hardly any conscious effort, and without formal instruction.

Scientists don’t understand yet how language evolved. Many believe that earlier human species, such as Homo habilis and Homo erectus, possessed primitive language-like or symbolic communication systems. But the development of language as we know it probably didn’t occur until modern humans came into the picture. Some say language originated one hundred thousand years ago, some later; but the need for sophisticated communication certainly became more urgent once “behaviorally modern” social humans developed, fifty thousand years ago. We’ve seen how important social interactions are to our species, and social interactions go hand in hand with the need to communicate. That need is so powerful that even deaf babies develop language-like gesture systems and, if taught sign language, will babble using their hands.13

Why did humans develop nonverbal communication? One of the first to seriously study the issue was an English fellow, spurred by his interest in the theory of evolution. By his own assessment, he was no genius. He had “no great quickness of apprehension or wit” or “power to follow a long and purely abstract train of thought.”14 On the many occasions when I share those feelings, I find it encouraging to review those words because that Englishman did okay for himself—his name was Charles Darwin. Thirteen years after publishing The Origin of Species, Darwin published another radical book, this one called The Expression of the Emotions in Man and Animals. In it, Darwin argued that emotions—and the ways they are expressed—provide a survival advantage and that they are not unique to humans but occur in many species. Clues to the role of emotions therefore can be found by examining the similarities and differences of nonverbal emotional expression across various species.

If Darwin didn’t consider himself brilliant, he did believe he possessed one great intellectual strength: his powers of careful and detailed observation. And, indeed, though he was not the first to suggest the universality of emotion and its expression,15 he spent several decades meticulously studying the physical manifestations of mental states. He watched his countrymen, and he observed foreigners, too, looking for cultural similarities and differences. He even studied domestic animals and those in the London Zoo. In his book, Darwin categorized numerous human expressions and gestures of emotion and offered hypotheses about their origin. He noted how lower animals, too, display intent and emotion through facial expression, posture, and gesture. Darwin speculated that much of our nonverbal communication might be an innate and automatic holdover from earlier phases of our evolution. For example, we can bite affectionately, as do other animals. We also sneer like other primates by flaring our nostrils and baring our teeth.

The smile is another expression we share with lower primates. Suppose you’re sitting in some public place and notice someone looking at you. If you return the gaze and the other person smiles, you’ll probably feel good about the exchange. But if the other person continues to stare without any hint of a smile, you’ll probably feel uncomfortable. Where do these instinctual responses come from? In trading the currency of smiles, we are sharing a feeling experienced by many of our primate cousins. In the societies of nonhuman primates, a direct stare is an aggressive signal. It often precedes an attack—and, therefore, can precipitate one. As a result, if, say, a submissive monkey wants to check out a dominant one, it will bare its teeth as a peace signal. In monkey talk, bared teeth means Pardon my stare. True, I’m looking, but I don’t plan to attack, so PLEASE don’t attack me first. In chimpanzees, the smile can also go the other way—a dominant individual may smile at a submissive one, saying, analogously, Don’t worry, I’m not going to attack you. So when you pass a stranger in the corridor and that person flashes a brief smile, you’re experiencing an exchange with roots deep in our primate heritage. There is even evidence that with chimps, as with humans, when a smile is exchanged, it can be a sign of friendship.16

You might think a smile is a rather shoddy barometer of true feelings because, after all, anyone can fake one. It’s true that we can consciously decide to exhibit a smile, or any other expression, by using the muscles in our faces in ways we are practiced at doing. Think about what you do when trying to make a good impression at a cocktail party, even though you are miserable about being there. But our facial expressions are also governed subliminally, by muscles over which we have no conscious control. So our real expressions cannot be faked. Sure, anyone can create a posed smile by contracting the zygomatic major muscles, which pull the corners of the mouth up toward the cheekbones. But a genuine smile involves contraction of an additional pair of actors, the orbicularis oculi muscles, which pull the skin surrounding the eye toward the eyeball, causing an effect that looks like crow’s-feet but can be very subtle. That was first pointed out by the nineteenth-century French neurologist Duchenne de Boulogne, who was an influence on Darwin and collected a large number of photographs of people smiling. There are two distinct neural pathways for these smile muscles: a voluntary one for the zygomatic major, and an involuntary one for the orbicularis oculi.17 So a smile-seeking photographer might implore us to say “cheese,” which nudges our mouths into the smile position, but unless you’re the kind who actually rejoices when asked to speak the word “cheese,” the smile won’t look genuine.

In viewing photographs of the two types of smiles given to him by Duchenne de Boulogne, Darwin remarked that though people could sense the difference, he found it very difficult to consciously pinpoint what that difference was, remarking, “It has often struck me as a curious fact that so many shades of expression are instantly recognized without any conscious process of analysis on our part.”18 No one paid much attention to such issues until recently, but modern studies have shown that, as Darwin observed, even people untrained in smile analysis have a good enough gut feeling to distinguish real smiles from phony ones when they can observe the same individual creating both.19 Smiles we intuitively recognize as fake are one reason used-car salesmen, politicians, and others who smile when they don’t mean it are often described as looking sleazy. Actors in the Method dramatic tradition try to get around this by training themselves to actually feel the emotion they are supposed to manifest, and many successful politicians are said to be talented at conjuring up genuine feelings of friendliness and empathy when talking to a roomful of strangers.

Darwin realized that if our expressions evolved along with our species, then many of the ways we express the basic emotions—happiness, fear, anger, disgust, sadness, and surprise—should be shared by humans from different cultures. And so in 1867 he arranged for a questionnaire to be circulated on five continents among indigenous people, some of whom had had little contact with Europeans.20 The survey asked questions like “Is astonishment expressed by the eyes and mouth being opened wide and by the eyebrows being raised?” On the basis of the answers he received, Darwin concluded that “the same state of mind is expressed throughout the world with remarkable uniformity.” Darwin’s study was biased in that his questionnaire asked such leading questions, and like so many other early contributions to psychology, his were overridden—in this case, by the idea that facial expressions are learned behavior, acquired during infancy, as a baby mimics its caretakers and others in the immediate environment. However, in recent years a substantial body of cross-cultural research has offered evidence that Darwin was right after all.21

In the first of a series of famous studies, the psychologist Paul Ekman showed photos of people’s expressions to subjects in Chile, Argentina, Brazil, the United States, and Japan.22 Within a few years, he and a colleague had shown such pictures to people in twenty-one countries. Their findings were the same as Darwin’s, demonstrating that people across a diversity of cultures had a similar understanding of the emotional meaning of a range of facial expressions. Still, such studies alone don’t necessarily mean that those expressions are innate, or even truly universal. Adherents of the “learned expressions” theory argued that Ekman’s results conveyed no deeper truth than the fact that people in the societies studied had all watched Gilligan’s Island, or other movies and television shows. So Ekman traveled to New Guinea, where an isolated Neolithic culture had recently been discovered.23 The natives there had no written language and were still using stone implements. Very few had seen a photograph, much less film or television. Ekman recruited hundreds of these subjects, who had never been previously exposed to outside cultures, and, through a translator, presented them with photographs of American faces illustrating the basic emotions.

The primitive foragers proved to be as nimble as those in the twenty-one literate countries at recognizing happiness, fear, anger, disgust, sadness, and surprise in the face of an emoting American. The scientists also reversed the research design. They photographed the New Guineans as they acted out how they would respond if they saw that their child had died, or found a dead pig that had been lying there for a long time, and so on. The expressions Ekman recorded were unequivocally recognizable.24

This universal capability to create and recognize facial expressions starts at or near birth. Young infants have been observed making nearly all the same facial muscle movements used by adults to signify emotion. Infants can also discriminate among the facial expressions of others and, like adults, modify their behavior based on what they see.25 It is doubtful that these are learned behaviors. In fact, congenitally blind young children, who have never seen a frown or a smile, express a range of spontaneous facial emotions that are almost identical to those of the sighted.26 Our catalog of facial expressions seems to be standard equipment—it comes with the basic model. And because it is a largely innate, unconscious part of our being, communicating our feelings comes naturally, while hiding them requires great effort.

———

IN HUMANS, BODY language and nonverbal communication are not limited to simple gestures and expressions. We have a highly complex system of nonverbal language, and we routinely participate in elaborate nonverbal exchanges, even when we are not consciously aware of doing so. For example, in the case of casual contact with the opposite sex, I’d have been willing to bet a year’s pass to a Manhattan cinema that if a male pollster type approached a guy’s date while they were standing in line to buy a ticket at said theater, few of the fellows approached would be so insecure that they’d consciously feel threatened by the pollster. And yet, consider this experiment, conducted over two mild autumn weekend evenings in an “upper-middle-class” neighborhood in Manhattan.27 The subjects approached were all couples, yes, waiting in line to buy tickets to a movie.

The experimenters worked in teams of two. One team member discreetly observed from a short distance while the other approached the female of the couple and asked if she would be willing to answer a few survey questions. Some of the women were asked neutral questions, such as “What is your favorite city and why?” Others were asked personal questions, such as “What is your most embarrassing childhood memory?” The researchers expected these more personal questions to be more threatening to the boyfriend, more invasive to his sense of intimate space. How did the boyfriends respond?

Unlike the male hamadryas baboon, who starts a fight when he sees another male sitting too close to a female in his group,28 the boyfriends didn’t do anything overtly aggressive; but they did display certain nonverbal cues. The scientists found that when the interviewer was nonthreatening—either a male who asked impersonal questions or a female—the man in the couple tended to just hang out. But when the interviewer was a male asking personal questions, the boyfriend would subtly inject himself into the powwow, flashing what are called “tie-signs,” nonverbal cues meant to convey a connection with the woman. These male smoke signals included orienting himself toward his partner and looking into her eyes as she interacted with the other man. It is doubtful that the men consciously felt the need to defend their relationship from the polite interviewer, but even though the tie-signs fell short of a baboonlike fist in the face, they were an indication of the men’s inner primate pushing its way to the fore.

Another, more complex mode of nonverbal “conversation” has to do with dominance. Nonhuman primates actually maintain fine distinctions along that dimension; they have precise dominance hierarchies, something like the ranks in the army. Without the pretty insignias, though, one might wonder how a chimp knows whom to salute. Dominant primates pound their chests and use voice and other signals to indicate their high rank. One way a chimp can signal its acknowledgment that it is lower in rank, as I said, is to smile. Another is to turn around, bend over, and moon its superior. Yes, that particular behavior, though still practiced by humans, seems to have changed its meaning somewhere along the road of evolution.

In modern human society, there are two kinds of dominance.29 One is physical dominance, based on aggression or the threat of aggression. Physical dominance in humans is similar to dominance in nonhuman primates, though we signal it differently: it is the rare chimpanzee who announces his dominance, as some humans do, by carrying around a switchblade or a .357 Magnum, or by wearing a tight muscle shirt. Humans, however, can also achieve another kind of dominance: social dominance.

Social dominance is based on admiration rather than fear and is acquired through social accomplishment rather than physical prowess. Signals of social dominance—like wearing a Rolex or driving a Lamborghini—can be just as clear and overt as the chest-pounding a male baboon might display. But they can also be subtle, such as declining any conspicuous display of affluence by showing up unexpectedly in torn, faded nondesigner jeans and an old Gap T-shirt, or by refusing to wear anything with a logo on it. (Take that, you silly Prada and Louis Vuitton bag toters!)

Humans have many ways indeed of signaling “I’m the general and you’re not” without mooning or wearing a shoulder patch with stars on it. As in other primate societies, gaze direction and stare are important signals of dominance in human society.30 For example, if a child looks away while the parent is scolding, the adult might say, “Look at me while I’m talking to you!” I’ve said that myself on occasion, though since you don’t hear with your eyes, the demand seems to serve no functional purpose. The interaction is really about the parent’s demand for respect—or in primate language, dominance. What the adult is really saying is Stand at attention. Salute. I am dominant, so when I speak, you must look at me!

We may not realize it, but we don’t just play the gaze game with our children; we play it with our friends and acquaintances, our superiors and subordinates, when we speak to a queen or a president, to a gardener or a store clerk, or to strangers we meet at a party. We automatically adjust the amount of time we spend looking into another’s eyes as a function of our relative social position, and we typically do it without being aware that we are doing it.31 That might sound counterintuitive, because some people like to look everyone in the eye, while others tend to always look elsewhere, whether they are speaking to a CEO or the guy dropping a pack of chicken thighs into their bag at the local grocery store. So how can gazing behavior be related to social dominance?

It is not your overall tendency to look at someone that is telling but the way in which you adjust your behavior when you switch between the roles of listener and speaker. Psychologists have been able to characterize that behavior with a single quantitative measure, and the data they produce using that measure is striking.

Here is how it works: take the percentage of time you spend looking into someone’s eyes while you are speaking and divide it by the percentage spent looking at that same person’s eyes while you are listening. For example, if, no matter which of you is talking, you spend the same amount of time looking away, your ratio would be 1.0. But if you tend to look away more often while you are speaking than when you are listening, your ratio will be less than 1.0. If you tend to look away less often when you are speaking than when you are listening, you have a ratio higher than 1.0. That quotient, psychologists discovered, is a revealing statistic. It is called the “visual dominance ratio,” and it reflects your position on the social dominance hierarchy relative to your conversational partner. A visual dominance ratio near 1.0, or larger, is characteristic of people with relatively high social dominance. A visual dominance ratio less than 1.0 is indicative of being lower on the dominance hierarchy. In other words, if your visual dominance ratio is around 1.0 or higher, you are probably the boss; if it is around 0.6, you are probably the bossed.
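To make the arithmetic concrete, here is a minimal sketch in Python of the calculation; the observation times below are hypothetical, invented purely for illustration rather than drawn from any of the studies cited.

# Visual dominance ratio: the fraction of speaking time spent looking at the
# other person, divided by the fraction of listening time spent looking.
# All numbers here are made up to illustrate the calculation.
look_while_speaking = 48.0   # seconds spent looking at partner while speaking
total_speaking = 80.0        # total seconds spent speaking
look_while_listening = 72.0  # seconds spent looking at partner while listening
total_listening = 90.0       # total seconds spent listening

ratio = (look_while_speaking / total_speaking) / (look_while_listening / total_listening)
print(round(ratio, 2))  # 0.75: below 1.0, the gaze pattern typical of the lower-status partner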

The unconscious mind provides us with many wonderful services and performs many awesome feats, but I can’t help being impressed by this one. What is so striking about the data is not just that we subliminally adjust our gazing behavior to match our place on the hierarchy but that we do it so consistently, and with numerical precision. Here is a sample of the data: when speaking to each other, ROTC officers exhibited ratios of 1.06, while ROTC cadets speaking to officers had ratios of 0.61;32 undergraduates in an introductory psychology course scored 0.92 when talking to a person they believed to be a high school senior who did not plan to go to college but 0.59 when talking to a person they believed to be a college chemistry honor student accepted into a prestigious medical school;33 expert men speaking to women about a subject in their own field scored 0.98, while men talking to expert women about the women’s field, 0.61; expert women speaking to nonexpert men scored 1.04, and nonexpert women speaking to expert men scored 0.54.34 These studies were all performed on Americans. The numbers probably vary among cultures, but the phenomenon probably doesn’t.

Whatever your culture, since people unconsciously detect these signals, it stands to reason that one can also adjust the impression one makes by consciously looking at or away from a conversational partner. For example, when applying for a job, talking to your boss, or negotiating a business deal, it might be advantageous to signal a certain level of submission—but how much would depend on the circumstances. In a job interview, if the job requires great leadership ability, a display of too much submissiveness would be a bad strategy. But if the interviewer seemed very insecure, a pleasing display of just the right amount of submissiveness could be reassuring and incline that person in the applicant’s favor. A highly successful Hollywood agent once mentioned to me that he made a point to negotiate only over the telephone so as to avoid being influenced—or inadvertently revealing anything—through eye contact with the opposite party.

My father learned both the power and the danger of a simple look when he was imprisoned in the Buchenwald concentration camp. Weighing under a hundred pounds, he was then little more than a walking corpse. In the camp, if you were not being spoken to, locking eyes with one of your captors could spur rage. Lower forms were not supposed to make uninvited eye contact with the master race. Sometimes when I think in terms of the dichotomy between humans and “lower primates,” I remember my father’s experience, and the thin margin of extra frontal lobe that distinguishes civilized human from brute animal. If the purpose of that extra brain matter is to elevate us, it sometimes fails. But my father also told me that with certain guards, the right kind of eye contact could bring a word, a conversation, even a minor kindness. He said that when that happened it was because the eye contact raised him to the status of being human. But I think that by eliciting a human response from a guard, what his eye contact really did was raise the level of humanity of his captor.


TODAY MOST HUMANS live in large, crowded cities. In many cities, a single neighborhood could encompass the entire world population at the time of the great human social transformation. We walk down sidewalks and through crowded malls and buildings with hardly a word, and no traffic signs, and yet we don’t bump into others or get into fights about who is going to step through the swinging door first. We hold conversations with people we don’t know or hardly know or wouldn’t want to know and automatically stand at a distance that is acceptable to both of us. That distance varies from culture to culture and from individual to individual, and yet, without a word, and usually without giving it any thought, we adjust to a distance of mutual comfort. (Or most of us do, anyway. We can all think of exceptions!) When we talk, we automatically sense when it is time to leave a pause for others to jump in. As we’re about to yield the floor, we typically lower our volume, stretch out our last word, cease gesturing, and look at the other person.35 Along with ToM, these skills aided our survival as a species, and it is still these skills that allow us to maneuver through the complex social world of the human.

Nonverbal communication forms a social language that is in many ways richer and more fundamental than our words. Our nonverbal sensors are so powerful that just the movements associated with body language—that is, minus the actual bodies—are enough to engender within us the ability to accurately perceive emotion. For example, researchers made video clips of participants who had about a dozen small lights or illuminated patches attached at certain key positions on their bodies, as in the picture here.36 The videos were shot in light so dim that only the patches were visible. In these studies, when the participants stood still, the patches gave the impression of a meaningless collection of points. But when the participants stirred, observers were able to decode a surprising amount of information from the moving lights. They were able to judge the participants’ sex, and even the identity of people with whom they were familiar, from their gait alone. And when the participants were actors, mimes, or dancers asked to move in a way that expressed the basic emotions, the observers had no trouble detecting the emotion portrayed.

Courtesy of A. P. Atkinson. From A. P. Atkinson et al., “Emotion Perception from Dynamic and Static Body Expressions in Point-Light and Full-Light Displays,” Perception 33, 724. Copyright 2004.

By the time children reach school age, there are some with full social calendars, while others spend their days shooting spitballs at the ceiling. One of the major factors in social success, even at an early age, is a child’s sense of nonverbal cues. For example, in a study of sixty kindergartners, the children were asked to identify which of their classmates they’d prefer to sit with at storytime, play a game with, or work with on a painting. The same children were judged on their ability to name the emotions exhibited in twelve photographs of adults and children with differing facial expressions. The two measures proved to be related. That is, the researchers found a strong correlation between a child’s popularity and his or her ability to read others.37

In adults, nonverbal ability bestows advantages in both personal and business life and plays a significant role in the perception of a person’s warmth,38 credibility,39 and persuasive power.40 Your uncle Stu might be the kindest man in the world, but if he tends to speak at length on subjects like the moss he observed in Costa Rica and never notices the moss beginning to grow on his listeners’ faces, he’s probably not the most popular guy to hang out with. Our sensitivity to other people’s signals regarding their thoughts and moods helps make social situations proceed smoothly, with a minimum of conflict. From early childhood on, those who are good at giving and receiving signals have an easier time forming social structures and achieving their goals in social situations.

In the early 1950s, many linguists, anthropologists, and psychiatrists attempted to classify nonverbal cues in much the same way we classify verbal language. One anthropologist even developed a transcription system, providing a symbol for virtually every possible human movement so that gestures could be written down like speech.41 Today social psychologists sometimes categorize our nonverbal communication into three basic types. One category concerns body movements: facial expression, gestures, posture, eye movements. Another is called paralanguage, which includes the quality and pitch of your voice, the number and duration of pauses, and nonverbal sounds such as clearing one’s throat or saying “uh.” And finally, there is proxemics, the use of personal space.

Many popular books claim to provide guides to the interpretation of these factors and advise how you can employ them to your benefit. They tell you that tensely folded arms mean you are closed to what someone is telling you, while if you like what you hear, you’ll probably adopt an open posture, maybe even lean forward a little. They’ll say that moving your shoulders forward signifies disgust, despair, or fear, and that maintaining a large interpersonal distance while you speak signals low social stature.42 There haven’t been a lot of studies on the efficacy of the hundred and one ways these books tell you to act, but it’s probably true that assuming those different postures can have at least a subtle effect on how people perceive you, and that understanding what nonverbal cues mean can bring to your consciousness clues about people that otherwise only your unconscious might pick up. Yet even without a conscious understanding, you are a storehouse of information about nonverbal cues. The next time you view a film in a language you don’t know, try blocking out the subtitles. You’ll be surprised by how much of the story you can comprehend without a single word to communicate what is happening.

CHAPTER 6 Judging People by Their Covers What we read into looks, voice, and touch … how to win voters, attract a date, or beguile a female cowbird

There is a road from the eye to the heart that does not go through the intellect.

—G. K. CHESTERTON

IF YOU ARE a man, being compared to a cowbird probably doesn’t sound like a compliment, and it probably isn’t. The male cowbird, you see, is a real slacker: he doesn’t stake out a territory, take care of the baby cowbirds, or bring home a paycheck (which scientists call “resources”). In cowbird society, as one research paper asserted, “females gain few direct benefits from males.”1 Apparently all a male cowbird is good for—or after—is one thing. But the one thing a male cowbird does have to offer is very desirable, so female cowbirds seek out male cowbirds, at least in mating season.

To an amorous female cowbird, the equivalent of a chiseled face or great pecs is the male cowbird’s song. Since it is hard to smile when you have a beak, when she hears a song she finds attractive, a female will often signal interest with her own seductive vocalization, called “chatter.” And, like an eager teenage girl of our own species, if a female cowbird is led to believe that other females find a certain male attractive, she will find that male attractive, too. In fact, suppose that prior to mating season a girl cowbird repeatedly hears recordings of a boy’s voice followed by the admiring chatter of other nubile females. Will that girl cowbird exercise the independent judgment our sober parents all urge? No. When mating season comes, upon hearing that male’s song, she will automatically respond with displays inviting him to mate with her. Why do I say her response is automatic, and not part of some thoughtful strategy aimed at wooing the fellow with whom she’d like to share birdseed in her golden years? Because upon hearing the male’s song, the female will commence her come-on behavior even if that song is coming not from a live bird but from a stereo speaker.2

We humans may share many behaviors with lower animals, but flirting with a stereo speaker is surely not one of them. Or is it? We’ve seen that people unintentionally express their thoughts and feelings even when they might prefer to keep them secret, but do we also react automatically to nonverbal social cues? Do we respond, like the smitten cowbird, even in situations in which our logical and conscious minds would deem the reaction inappropriate or undesirable?

A few years ago, a Stanford communications professor named Clifford Nass sat a couple hundred computer-savvy students in front of computers that spoke to them in prerecorded voices.3 The purpose of the exercise, the students were told, was to prepare for a test with the assistance of a computerized tutoring session. The topics taught ranged from “mass media” to “love and relationships.” After completing the tutoring and the test, the students received an evaluation of their performance, delivered either by the same computer that taught them or by another computer. Finally, the students themselves completed the equivalent of a course evaluation form, in which they rated both the course and their computer tutor.

Nass was not really interested in conducting a computer course on mass media or love and relationships. These earnest students were Nass’s cowbirds, and in a series of experiments he and some colleagues studied them carefully, gathering data on the way they responded to the lifeless electronic computer, gauging whether they would react to a machine’s voice as if the machine had human feelings, motivations, or even a human gender. It would be absurd, of course, to expect the students to say “Excuse me” if they bumped into the monitor. That would be a conscious reaction, and in their conscious ruminations, these students certainly realized that the machine was not a person. But Nass was interested in another level of their behavior, behavior the students did not purposely engage in, social behavior he describes as “automatic and unconscious.”

In one of the experiments, the researchers arranged for half their subjects to be tutored and evaluated by computers with male voices, and half by computers with female voices. Other than that, there was no difference in the sessions—the male computers presented the same information in the same sequence as the females, and the male and female computers delivered identical assessments of the students’ performance. As we’ll see in Chapter 7, if the tutors had been real people, the students’ evaluations of their teachers would probably reflect certain gender stereotypes. For example, consider the stereotype that women know more about relationship issues than men. Ask a woman what bonds couples together, and you might expect her to respond, “Open communication and shared intimacy.” Ask a guy, and you might expect him to say, “Huh?” Studies show that as a result of this stereotype, even when a woman and a man have equal ability in that area, the woman is often perceived as more competent. Nass sought to discover whether the students would apply those same gender stereotypes to the computers.

They did. Those who had female-voiced tutors for the love-and-relationships material rated their teachers as having more sophisticated knowledge of the subject than did those who had male-voiced tutors, even though the two computers had given identical lessons. But the “male” and “female” computers got equal ratings when the topic was a gender-neutral one, like mass media. Another unfortunate gender stereotype suggests that forcefulness is desirable in men, but unseemly in women. And sure enough, students who heard a forceful male-voiced computer tutor rated it as being significantly more likable than those who heard a forceful female-voiced tutor, even though, again, both the male and the female voices had uttered the same words. Apparently, even when coming from a computer, an assertive personality in a female is more likely to come off as overbearing or bossy than the same personality in a male.

The researchers also investigated whether people will apply the social norms of politeness to computers. For example, when put in a position where they have to criticize someone face-to-face, people often hesitate or sugarcoat their true opinion. Suppose I ask my students, “Did you like my discussion of the stochastic nature of the foraging habits of wildebeests?” Judging from my experience, I’ll get a bunch of nods and a few audible murmurs. But no one will be honest enough to say, “Wildebeests? I didn’t hear a word of your boring lecture. But the monotonic drone of your voice did provide a soothing background as I surfed the web on my laptop.” Not even those who sat in the front row and clearly were surfing the web on their laptops would be that blunt. Instead, students save that kind of critique for their anonymous course-evaluation forms. But what if the one asking for the input was a talking computer? Would the students have the same inhibition against delivering a harsh judgment “face-to-face” to a machine? Nass and his colleagues asked half the students to enter their course evaluation on the same computer that had tutored them, and the other half to enter it on a different machine, a machine that had a different voice. Certainly the students would not consciously sugarcoat their words to avoid hurting the machine’s feelings—but as you probably guessed, they did indeed hesitate to criticize the computer to its “face.” That is, they rated the computer teacher as far more likable and competent when offering their judgment directly to that computer than when a different computer was gathering the input.4

Having social relations with a prerecorded voice is not a trait you’d want to mention in a job application. But, like the cowbirds, these students did treat it as if it were a member of their species, even though there was no actual person attached. Hard to believe? It was for the actual subjects. When, after some of the studies had been concluded, the researchers informed the students of the experiment’s true purpose, they all insisted with great confidence that they would never apply social norms to a computer.5 The research shows they were wrong. While our conscious minds are busy thinking about the meaning of the words people utter, our unconscious is busy judging the speaker by other criteria, and the human voice connects with a receiver deep within the human brain, whether that voice emanates from a human being or not.


PEOPLE SPEND A lot of time talking and thinking about how members of the opposite sex look but very little time paying attention to how they sound. To our unconscious minds, however, voice is very important. Our genus, Homo, has been evolving for a couple million years. Brain evolution happens over many thousands or millions of years, but we’ve lived in civilized society for less than 1 percent of that time. That means that though we may pack our heads full of twenty-first-century knowledge, the organ inside our skull is still a Stone Age brain. We think of ourselves as a civilized species, but our brains are designed to meet the challenges of an earlier era. Among birds and many other animals, voice seems to play a great role in meeting one of those demands—reproduction—and it seems to be similarly important in humans. As we’ll see, we pick up a great many sophisticated signals from the tone and quality of a person’s voice and from the cadence, but perhaps the most important way we relate to voice is directly analogous to the reaction of the cowbirds, for in humans, too, females are attracted to males by certain aspects of their “call.”

Women may disagree on whether they prefer dark-skinned men with beards, clean-shaven blonds, or men of any appearance sitting in the driver’s seat of a Ferrari—but when asked to rate men they can hear but not see, women miraculously tend to agree: men with deeper voices are rated as more attractive.6 Asked to guess the physical characteristics of the men whose voices they hear in such experiments, women tend to associate low voices with men who are tall, muscular, and hairy-chested—traits commonly considered sexy.

As for men, a group of scientists recently discovered that men unconsciously adjust the pitch of their voices higher or lower in accordance with their assessment of where they stand on the dominance hierarchy with respect to possible competitors. In that experiment, which involved a couple hundred men in their twenties, each man was told he’d be competing with another man for a lunch date with an attractive woman in a nearby room.7 The competitor, it was explained, was a man in a third room.

Each contestant communicated with the woman via a digital video feed, but when he communicated with the other man, he could only hear him, and not see him. In reality, both the competitor and the woman were confederates of the researchers, and they followed a fixed script. Each man was asked to discuss—with both the woman and his competitor—the reasons he might be respected or admired by other men. Then, after he had poured his heart out about his prowess on the basketball court, his potential for winning the Nobel Prize, or his recipe for asparagus quiche, the session ended, and he was asked to answer some questions assessing himself, his competitor, and the woman. The subjects were then dismissed. There would, alas, be no winners anointed.

The researchers analyzed a tape recording of the male contestants’ voices and scrutinized each man’s answers to the questionnaire. One issue the questionnaires probed was the contestant’s appraisal of his level of physical dominance as compared to that of his competitor. And the researchers found that when the participants believed they were physically dominant—that is, more powerful and aggressive—they lowered the pitch of their voices, and when they believed they were less dominant, they raised the pitch, all apparently without realizing what they were doing.

From the point of view of evolution, what’s interesting about all this is that a woman’s attraction to men with low voices is most pronounced when she is in the fertile phase of her ovulatory cycle.8 What’s more, not only do women’s voice preferences vary with the phases of their reproductive cycle, so do their own voices, in both pitch and smoothness, and research indicates that the greater a woman’s risk of conception, the sexier men find her voice.9 As a result, both women and men are especially attracted to each other’s voices during a woman’s fertile period. The obvious conclusion is that our voices act as subliminal advertisements for our sexuality. During a woman’s fertile phase, those ads flash brightly on both sides, tempting us to click the “Buy” button when we are most likely to obtain not only a mate but, for no extra (upfront) cost, also a child.

But there is still something to be explained. Why is it a deep voice, in particular, that attracts women? Why not a high, squeaky voice or one in mid-range? Was it just nature’s random choice, or does a deep voice correlate with male virility? We’ve seen that—in a woman’s eyes—a deep voice is considered indicative of men who are taller, hairier, and more muscular. The truth is, there is little or no correlation between a deep voice and any of those traits.10 However, studies show that what does correlate with a low-pitched voice is testosterone level. Men with lower voices tend to have higher levels of that male hormone.11

It is difficult to test whether nature’s plan works—whether men with more testosterone really produce more children—because modern birth control methods prevent us from judging a man’s reproductive potential by the number of children he fathers. Still, a Harvard anthropologist and some colleagues found a way. In 2007 they traveled to Africa to study the voices and family size of the Hadza people, a monogamous hunter-gatherer population of about one thousand in the savannah woodlands of Tanzania, where men are still men, tubers are plentiful, and no one uses birth control. In those savannahs, the baritones indeed beat the tenors. The researchers found that while the pitch of women’s voices was not a predictor of their reproductive success, men with lower-pitched voices on average fathered more children.12 A woman’s sexual attraction to a deep male voice does seem to have a neat evolutionary explanation. So if you’re a woman and you want a large family, follow your instincts and go for the Morgan Freeman type.


YOU’RE CERTAINLY MORE likely to satisfy an employee by saying, “I value you and will do everything I can to increase your salary” than by explaining, “I have to keep my budget down, and one of the easiest ways is to pay you as little as possible.” But you can also communicate either sentiment, though not the precise meaning, simply by the way you say it. That’s why some people can recount things like “He enjoyed chewing on plump grapes while speeding down a mountain in a monogrammed bobsled” and still give the impression of being profound, while others can say, “The large-scale geometry of the universe is determined by the density of the matter within it” and sound like they are whining. The pitch, timbre, volume, and cadence of your voice, the speed with which you speak, and even the way you modulate pitch and volume, are all hugely influential factors in how convincing you are, and how people judge your state of mind and your character.

Scientists have developed fascinating computer tools that allow them to determine the influence of voice alone, devoid of content. In one method they electronically scramble just enough syllables that the words cannot be deciphered. In another, they excise just the highest frequencies, which wreaks havoc with our ability to accurately identify consonants. Either way, the meaning is unintelligible while the feel of speech remains. Studies show that when people listen to such “content-free” speech, they still perceive the same impressions of the speaker and the same emotional content as do subjects who hear the unaltered speech.13 Why? Because as we are decoding the meaning of the utterances we call language, our minds are, in parallel, analyzing, judging, and being affected by qualities of voice that have nothing to do with words.
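
To give a feel for the second of those manipulations, here is a minimal sketch, assuming the SciPy library and 16-bit WAV input; the 400 Hz cutoff and the file names are illustrative assumptions, not values taken from the studies cited. It simply removes the high frequencies that carry most consonant detail, so the words blur while the pitch contour and rhythm of the voice survive.

```python
# Minimal sketch of "content-free" speech via low-pass filtering (an
# illustrative stand-in for the researchers' tools, not their actual method).
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

def low_pass_speech(in_path: str, out_path: str, cutoff_hz: float = 400.0) -> None:
    """Low-pass filter a WAV file, keeping prosody but blurring the words."""
    rate, samples = wavfile.read(in_path)          # 16-bit PCM assumed
    samples = samples.astype(np.float64)
    # 4th-order Butterworth low-pass; cutoff normalized to the Nyquist frequency.
    b, a = butter(4, cutoff_hz / (rate / 2), btype="low")
    filtered = filtfilt(b, a, samples, axis=0)     # zero-phase filtering
    wavfile.write(out_path, rate, np.clip(filtered, -32768, 32767).astype(np.int16))

# Hypothetical usage:
# low_pass_speech("answer.wav", "answer_content_free.wav")
```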

In one experiment scientists created recordings of a couple dozen speakers answering the same two questions, one political, one personal: “What is your opinion of college admissions designed to favor minority groups?” and “What would you do if you suddenly won or inherited a great sum of money?”14 Then they created four additional versions of each answer by electronically raising and lowering the speakers’ pitch by 20 percent, and by quickening and slowing their speech rate by 30 percent. The resulting speech still sounded natural, and its acoustic properties remained within the normal range. But would the alterations affect listeners’ perceptions?

The researchers recruited dozens of volunteers to judge the speech samples. The judges each heard and rated just one version of each speaker’s voice, randomly chosen from among the original and the altered recordings. Since the content of the speakers’ answers didn’t vary among the different versions but the vocal qualities did, differences in the listeners’ assessments would be due to the influence of those vocal qualities and not the content of the speech. The result: speakers with higher-pitched voices were judged to be less truthful, less emphatic, less potent, and more nervous than speakers with lower-pitched voices. Also, slower-talking speakers were judged to be less truthful, less persuasive, and more passive than people who spoke more quickly. “Fast-talking” may be a cliché description of a sleazy salesman, but chances are, a little speedup will make you sound smarter and more convincing. And if two speakers utter exactly the same words but one speaks a little faster and louder and with fewer pauses and greater variation in volume, that speaker will be judged to be more energetic, knowledgeable, and intelligent. Expressive speech, with modulation in pitch and volume and with a minimum of noticeable pauses, boosts credibility and enhances the impression of intelligence. Other studies show that, just as we signal the basic emotions through facial expressions, we also signal them through voice. For example, listeners instinctively detect that when we lower the usual pitch of our voice, we are sad, and that when we raise it, we are angry or fearful.15

If voice makes such a huge impression, the key question becomes, To what extent can someone consciously alter their voice? Consider the case of Margaret Hilda Roberts, who in 1959 was elected to the British Parliament as the Conservative member for a north London constituency. She had higher ambitions, but to those in her inner circle, her voice was an issue.16 “She had a schoolmarmish, very slightly bossy, slightly hectoring voice,” recalled Tim Bell, the mastermind of her party’s publicity campaigns. Her own publicity adviser, Gordon Reece, was more graphic. Her high notes, he said, were “dangerous to passing sparrows.” Proving that though her politics were fixed, her voice was pliable, Margaret Hilda Roberts took her confidants’ advice, lowered the pitch, and increased her social dominance. There is no way to measure exactly how much difference the change made, but she did pretty well for herself. After the Conservatives were defeated in 1974, Margaret Thatcher—she had married the wealthy businessman Denis Thatcher in 1951—became the party’s leader and, eventually, prime minister.


WHEN I WAS in high school, the few times I gathered the courage to approach a girl, the experience felt like I was administering a multiple-choice test and she kept answering, “None of the above.” I had more or less resigned myself to the fact that a boy who spent his free time reading books on non-Euclidean geometry was not likely to be voted “big man on campus.” Then one day when I was in the library looking for a math book, I took a wrong turn and stumbled upon a work whose title went something like How to Get a Date. I hadn’t realized people wrote instructional books on subjects like that. Questions raced through my mind: Didn’t the mere fact that I was interested in such a book mean it would never fulfill the promise of its title? Could a boy who’d rather talk about curved space-time than touchdown passes ever score himself? Was there really a bag of tricks?

The book emphasized that if a girl doesn’t know you very well—and that applied to every girl in my high school—you should not expect her to agree to a date, and you shouldn’t take the rejection personally. Instead, you should ignore the possibly enormous number of girls who turn you down and keep asking, because, even if the odds are low, the laws of mathematics say eventually your number will come up. Since mathematical laws are my kinds of laws, and I’ve always believed that persistence is a good life philosophy, I took the advice. I can’t say the results were statistically significant, but decades later, I was shocked to find that a group of French researchers essentially repeated the exercise the book had suggested. And they did it in a controlled scientific manner, achieving results that were statistically significant. Furthermore, to my surprise, they revealed a way I could have improved my chance of success.17

French culture is known for many great attributes, some of which probably have nothing to do with food, wine, and romance. But regarding the latter, the French are thought to especially excel, and in the experiment in question, they literally made a science of it. The scene was a particularly sunny June day in a pedestrian zone in the city of Vannes, a medium-sized town on the Atlantic coast of Brittany, in the west of France. Over the course of that day, three young and handsome French men randomly approached 240 young women they spotted walking alone and propositioned each and every one of them. To each, they would utter exactly the same words: “Hello. My name’s Antoine. I just want to say that I think you’re really pretty. I have to go to work this afternoon but I wonder if you would give me your phone number. I’ll phone you later and we can have a drink together someplace.” If the woman refused, they’d say, “Too bad. It’s not my day. Have a nice afternoon.” And then they’d look for another young woman to approach. If the woman handed over her number, they’d tell her the proposition was all in the name of science, at which time, according to the scientists, most of the women laughed. The key to the experiment was this: with half the women they propositioned, the young men added a light one-second touch to the woman’s forearm. The other half received no touch.

The researchers were interested in whether the men would be more successful when they touched the women than when they didn’t. How important is touch as a social cue? Over the course of the day, the young men collected three dozen phone numbers. When they didn’t touch the women, they had a success rate of 10 percent; when they touched them, their success rate was 20 percent. That light one-second touch doubled their popularity. Why were the touched women twice as likely to agree to a date? Were they thinking, This Antoine is a good toucher—it’d probably be fun to knock back a bottle of Bordeaux with him some night at Bar de l’Océan? Probably not. But on the unconscious level, touch seems to impart a subliminal sense of caring and connection.

Unlike non-Euclidean geometry, touch research has many obvious applications.18 For example, in an experiment involving eight servers and several hundred restaurant diners, the servers were trained to touch randomly selected customers briefly on the arm toward the end of the meal while asking if “everything was all right.” The servers received an average tip of about 14.5 percent from those they didn’t touch, but 17.5 percent from those they did. Another study found the same effect on tipping at a bar. And in another restaurant study, about 60 percent of diners took the server’s suggestion to order the special after being touched lightly on the forearm, compared with only about 40 percent of those who were not touched. Touching has been found to increase the fraction of single women in a nightclub who will accept an invitation to dance, the number of people agreeing to sign a petition, the chances that a college student will risk embarrassment by volunteering to go to the blackboard in a statistics class, the proportion of busy passersby in a mall willing to take ten minutes to fill out a survey form, the percentage of shoppers in a supermarket who purchase food they had sampled, and the odds that a bystander who had just provided someone with directions will help him pick up a bunch of computer disks he drops.

You might be skeptical of this. After all, some people recoil when a stranger touches them. And it is possible that some of the subjects in the studies I quoted did recoil but that their reactions were outweighed by those of the people who responded positively. Remember, though, these were all very subtle touches, not gropes. In fact, in studies in which the touched person was later debriefed about the experience, typically less than one-third of the subjects were even aware that they had been touched.19

So are touchy-feely people more successful at getting things done? There is no data on whether bosses who dole out the occasional pat on the head run a smoother operation, but a 2010 study by a group of researchers in Berkeley found a case in which a habit of congratulatory slaps to the skull really is associated with successful group interactions.20 The Berkeley researchers studied the sport of basketball, which both requires extensive second-by-second teamwork and is known for its elaborate language of touching. They found that the number of “fist bumps, high fives, chest bumps, leaping shoulder bumps, chest punches, head slaps, head grabs, low fives, high tens, half hugs, and team huddles” correlated significantly with the degree of cooperation among teammates, such as passing to those who are less closely defended, helping others escape defensive pressure by setting what are called “screens,” and otherwise displaying a reliance on a teammate at the expense of one’s own individual performance. The teams that touched the most cooperated the most, and won the most.

Touch seems to be such an important tool for enhancing social cooperation and affiliation that we have evolved a special physical route along which those subliminal feelings of social connection travel from skin to brain. That is, scientists have discovered a particular kind of nerve fiber in people’s skin—especially in the face and arms—that appears to have developed specifically to transmit the pleasantness of social touch. Those nerve fibers transmit their signal too slowly to be of much use in helping you do the things you normally associate with the sense of touch: determining what is touching you and telling you, with some precision, where you were touched.21 “They won’t help you distinguish a pear from pumice or your cheek from your chin,” says the pioneering social neuroscientist Ralph Adolphs. “But they are connected directly to areas of the brain such as the insular cortex, which is associated with emotion.”22

To primatologists, the importance of touch is no surprise. Nonhuman primates touch each other extensively during grooming. And while grooming is ostensibly about hygiene, it would take only about ten minutes of grooming a day for an animal to stay clean. Instead, some species spend hours on it.23 Why? Remember those grooming cliques? In nonhuman primates, social grooming is important for maintaining social relationships.24 Touch is our most highly developed sense when we are born, and it remains a fundamental mode of communication throughout a baby’s first year and an important influence throughout a person’s life.25


AT A QUARTER to eight on the evening of September 26, 1960, Democratic presidential candidate John F. Kennedy strode into the studio of the CBS affiliate WBBM in downtown Chicago.26 He appeared rested, bronzed, and fit. The journalist Howard K. Smith would later compare Kennedy to an “athlete come to receive his wreath of laurel.” Ted Rogers, the TV consultant to Kennedy’s Republican opponent, Richard Nixon, remarked, “When he came into the studio I thought he was Cochise, he was so tan.”

Nixon, on the other hand, looked haggard and pale. He had arrived fifteen minutes before Kennedy’s grand entrance. The two candidates were in Chicago for the first presidential debate in U.S. history. But Nixon had recently been hospitalized for a knee infection, which still plagued him. Then, ignoring advice to continue resting, he’d resumed a grueling cross-country campaign schedule and had lost considerable weight. As he climbed out of his Oldsmobile, he was suffering from a 102-degree fever, yet he insisted he was well enough to go through with the debate. When judged by the candidates’ words, Nixon was indeed destined to hold his own that night. But the debate would proceed on two levels, the verbal and the nonverbal.

The issues of the day included the conflict with communism, agriculture and labor problems, and the candidates’ experience. Since elections are high-stakes affairs and debates are about important philosophical and practical issues, the candidates’ words are all that should matter, right? Would you be swayed to vote against a candidate because a knee infection had made him look tired? Like voice and touch, posture, facial appearance, and expression exert a powerful influence on how we judge people. But would we elect a president based on demeanor?

CBS’s debate producer, Don Hewitt, took one look at Nixon’s gaunt face and immediately heard alarm bells. He offered both candidates the services of a makeup artist, but after Kennedy declined, so did Nixon. Then, while an aide rubbed an over-the-counter cosmetic called Lazy Shave over Nixon’s famously heavy five o’clock shadow, out of their view Kennedy’s people proceeded to give Kennedy a full cosmetic touch-up. Hewitt pressed Rogers, Nixon’s TV consultant, about his candidate’s appearance, but Rogers said he was satisfied. Hewitt then elevated his concern to his boss at CBS. He, too, approached Rogers but received the same response.

Some seventy million people watched the debate. When it was over, one prominent Republican in Texas was heard to say, “That son of a bitch just cost us the election.” That prominent Republican was in a good position to know. He was Henry Cabot Lodge Jr., Richard Nixon’s running mate. When the election was held, some six weeks later, Nixon and Lodge lost the popular vote by a hair, just 113,000 out of the 67,000,000 votes cast. That’s less than 1 vote in 500, so even if the debate had convinced just a small percentage of viewers that Nixon wasn’t up to the job, it would have been enough to swing the election.

What’s really interesting here is that, while viewers like Lodge were thinking that Nixon did horribly, a slew of other prominent Republicans had a completely different experience. For example, Earl Mazo, the national political correspondent for the New York Herald Tribune—and a Nixon supporter—attended a kind of debate party with eleven governors and members of their staffs, all in town for the Southern Governors Conference in Hot Springs, Arkansas.27 They thought Nixon did splendidly. Why was their experience so different from Lodge’s? They had listened to the debate over the radio, because the television broadcast was delayed by one hour in Arkansas.

Of the radio broadcast, Mazo said, “[Nixon’s] deep, resonant voice conveyed more conviction, command, and determination than Kennedy’s higher-pitched voice and his Boston-Harvard accent.” But when the television feed came, Mazo and the governors switched to it and watched the first hour again. Mazo then changed his mind about the winner, saying, “On television, Kennedy looked sharper, more in control, more firm.” A Philadelphia commercial research firm, Sindlinger & Co., later confirmed that analysis. According to an article in the trade journal Broadcasting, their research showed that among radio listeners, Nixon won by more than a two-to-one margin, but among the far greater number of television viewers, Kennedy beat him.

The Sindlinger study was never published in a scientific journal, and little niceties like sample size—and the methodology for accounting for demographic differences between radio and TV users—were not revealed. That’s how the issue stood for some forty years. Then, in 2003, a researcher enlisted 171 summer school students at the University of Minnesota to assess the debate, half after watching a video of it, half after listening to the audio only.28 As scientific subjects, these students had an advantage over any group that might have been assembled at the time of the actual debate: they had no vested interest in either candidate and little or no knowledge of the issues. To the voters in 1960, the name Nikita Khrushchev carried great emotional significance. To these students, he sounded like just another hockey player. But their impression of the debate was no different from that of the voters four decades earlier: those students who watched the debate were significantly more likely to think Kennedy won than those who only listened to it.


IT’S LIKELY THAT, like the voters in the 1960 U.S. presidential election, we have all at some time chosen one individual over another based on looks. We vote for political candidates, but we also select from among candidates for spouse, friend, auto mechanic, attorney, doctor, dentist, vendor, employee, boss. How strong an influence does a person’s appearance have on us? I don’t mean beauty—I mean something more subtle, a look of intelligence, or sophistication, or competence. Voting is a good stand-in for probing the effect of appearance in many realms because there is not only plenty of data available but plenty of money to study it.

In one pair of experiments, a group of researchers in California created campaign flyers for several fictional congressional elections.29 Each supposedly pitted a Republican against a Democrat. In reality, the “candidates” were models hired by the researchers to pose for the black-and-white photographs that would appear in the flyers. Half the models looked able and competent. The other half did not look very able. The researchers didn’t rely on their own judgment to determine that: they conducted a preliminary rating session in which volunteers rated each model’s visual appeal. Then, when the researchers made up the campaign flyers, in each case they pitted one of the more able-looking individuals against one of the less able-looking ones to see if the candidate with the better demeanor would get more votes.

In addition to each candidate’s (fake) name and picture, the flyers included substantive information such as party affiliation, education, occupation, political experience, and a three-line position statement on each of three campaign issues. To eliminate the effects of party preference, half the voters saw flyers in which the more able-looking candidate was a Republican, and half saw flyers in which he was a Democrat. In principle, it should have been only the substantive information that would be relevant to a voter’s choice.

The scientists recruited about two hundred volunteers to play the role of voters. The researchers told the volunteers that the campaign flyers were based on real information concerning real candidates. They also misled the volunteers about the purpose of the experiment, saying that they sought to examine how people vote when they have equal information—such as that on the flyers—on all of the candidates. The volunteers’ job, the scientists explained to them, was merely to look over the flyers and vote for the candidate of their choice in each of the elections presented. The “face effect” proved to be large: the candidate with the better demeanor, on average, won 59 percent of the vote. That’s a landslide in modern politics. In fact, the only American president since the Great Depression to have won by that big a margin was Lyndon Johnson, when he beat Goldwater with 61 percent of the vote in 1964. And that was an election in which Goldwater was widely portrayed as a man itching to start a nuclear war.

In the second study, the researchers’ methodology was similar, except this time the pool of people whose photos were used to portray the candidates was chosen differently. In the first study, the candidates were all men who’d been judged by a voting committee as looking either more or less competent. In this study, the candidates were all women whose appearance had been assessed by a committee as being neutral. The scientists then employed a Hollywood-style makeup specialist and a photographer to create two photographic versions of each candidate: one in which she appeared more competent, and another in which she appeared less competent. In this mock election, a competent version of one woman was always pitted against an incompetent version of another. The result: on average, looking more like a leader equated to a vote swing of 15 percent at the polls. To get an idea of the magnitude of that effect, consider that in one recent California congressional election, a swing of that size would have changed the outcome in fifteen of the fifty-three districts.

I found these studies astounding and alarming. They imply that before anyone even discusses the issues, the race may be over, since looks alone can give a candidate a huge head start. With all the important issues of the day, it’s hard to accept that a person’s face would really sway our vote. One obvious criticism of this research is that these were mock elections. The studies might show that a competent appearance can give a candidate a boost, but they don’t address the issue of how “soft” that preference may be. Certainly one would expect that voters with strong ideological preferences would not be easily swayed by appearance. Swing voters ought to be more easily affected, but is the phenomenon strong enough to affect elections in the real world?

In 2005, researchers at Princeton gathered black-and-white head shots of all the winners and runners-up in ninety-five races for the U.S. Senate and six hundred races for the House of Representatives from 2000, 2002, and 2004.30 Then they assembled a group of volunteers to evaluate the candidates’ competence based on just a quick look at the photographs, discarding the data on any of the faces a volunteer recognized. The results were astonishing: the candidate the volunteers perceived as more competent had won in 72 percent of the Senate races and 67 percent of the House races, even higher success rates than in the California laboratory experiment. Then, in 2006, the scientists performed an experiment with even more astonishing—and, when you think about it, depressing—results. They conducted the face evaluations before the elections in question and predicted the winners based solely on the candidates’ appearance. They were strikingly accurate: the candidate voted as more competent-looking went on to win in 69 percent of the gubernatorial races and 72 percent of the Senate races.

I’ve gone into detail regarding these political studies not just because they are important in themselves but because, as I said earlier, they shed light on our broader social interactions. In high school, our vote for class president might be based on looks. It would be nice to think that we outgrow those primitive ways, but it’s not easy to graduate from our unconscious influences.

In his autobiography, Charles Darwin reported that he was almost denied the chance to make his historic voyage on the Beagle on account of his looks, in particular, because of his nose, which was large and somewhat bulbous.31 Darwin himself later used his nose, facetiously, as an argument against intelligent design, writing, “Will you honestly tell me … whether you believe that the shape of my nose was ordained and ‘guided by an intelligent cause’?”32 The Beagle’s captain wanted to keep Darwin off the ship because he had a personal belief that you could judge character by the shape of the nose, and a man with Darwin’s, he felt, could not possibly “possess sufficient energy and determination for the voyage.” In the end, of course, Darwin got the job. Of the captain, Darwin later wrote, “I think he was afterwards well-satisfied that my nose had spoken falsely.”33


TOWARD THE END of The Wizard of Oz, Dorothy and company approach the great Wizard, offering him the broomstick of the Wicked Witch of the West. They can see only fire, smoke, and a floating image of the Wizard’s face as he responds in booming, authoritative tones that have Dorothy and her cohorts trembling with fear. Then Dorothy’s dog, Toto, tugs aside a curtain, revealing that the ominous Wizard is just an ordinary-looking man speaking into a microphone and pulling levers and twisting dials to orchestrate the fireworks. He yanks the curtain closed and admonishes, “Pay no attention to that man behind the curtain,” but the jig is up, and Dorothy discovers that the Wizard is just a genial old man.

There is a man or woman behind the curtain of everybody’s persona. Through our social relationships we get to know a small number of beings with the level of intimacy that allows us to peel back the curtain—our friends, close neighbors, family members, and perhaps the family dog (though certainly not the cat). But we don’t get to pull the curtain very far back on most of the people we meet, and it is usually drawn fully closed when we encounter someone for the first time. As a result, certain superficial qualities, such as voice, face and expression, posture, and the other nonverbal characteristics I’ve been talking about, mold many of the judgments we make about people—the nice or nasty people we work with, our neighbors, our doctors, our kids’ teachers, the politicians we vote for or against or simply try to ignore. Every day we meet people and form judgments like I trust that babysitter, This lawyer knows what she is doing, or That guy seems like the type who would gently stroke my back while reciting Shakespeare sonnets by candlelight. If you are a job applicant, the quality of your handshake can affect the outcome of your employment interview. If you are a salesperson, your degree of eye contact can influence the satisfaction ratings your customers report. If you are a doctor, the tone of your voice can have an impact on not only your patients’ assessment of their visit but their propensity to sue if something goes wrong. We humans are superior to cowbirds in our conscious understanding. But we also have a deep inner cowbird mind that reacts to nonverbal cues, uncensored by those logical judgments of consciousness. The expression “to be a real human being” means to act with compassion. Other languages have similar expressions, such as the German “ein Mensch sein.” A human being, by nature, cannot help but pick up on the emotions and intentions of others. That ability is built into our brains, and there is no off switch.

CHAPTER 7 Sorting People and Things Why we categorize things and stereotype people … what Lincoln, Gandhi, and Che Guevara had in common

We would be dazzled if we had to treat everything we saw, every visual input, as a separate element, and had to figure out the connections anew each time we opened our eyes.

—GARY KLEIN

IF YOU READ someone a list of ten or twenty items that could be bought at a supermarket, that person will remember only a few. If you recite the list repeatedly, the person’s recall will improve. But what really helps is if the items are mentioned within the categories they fall into—for example, vegetables, fruits, and cereals. Research suggests that we have neurons in our prefrontal cortex that respond to categories, and the list exercise illustrates the reason: categorization is a strategy our brains use to more efficiently process information.1 Remember Shereshevsky, the man with the flawless memory who had great trouble recognizing faces? In his memory, each person had many faces: faces as viewed from different angles, faces in varying lighting, faces for each emotion and for each nuance of emotional intensity. As a result, the encyclopedia of faces on the bookshelf of Shereshevsky’s brain was exceptionally thick and difficult to search, and the process of identifying a new face by matching it to one previously seen—which is the essence of what categorization is—was correspondingly cumbersome.

Every object and person we encounter in the world is unique, but we wouldn’t function very well if we perceived them that way. We don’t have the time or the mental bandwidth to observe and consider each detail of every item in our environment. Instead, we employ a few salient traits that we do observe to assign the object to a category, and then we base our assessment of the object on the category rather than the object itself. By maintaining a set of categories, we thus expedite our reactions. If we hadn’t evolved to operate that way, if our brains treated everything we encountered as an individual, we might be eaten by a bear while still deciding whether this particular furry creature is as dangerous as the one that ate Uncle Bob. Instead, once we see a couple bears eat our relatives, the whole species gets a bad reputation. Then, thanks to categorical thinking, when we spot a huge, shaggy animal with large, sharp incisors, we don’t hang around gathering more data; we act on our automatic hunch that it is dangerous and move away from it. Similarly, once we see a few chairs, we assume that if an object has four legs and a back, it was made to sit on; or if the driver in front of us is weaving erratically, we judge that it is best to keep our distance.

Thinking in terms of generic categories such as “bears,” “chairs,” and “erratic drivers” helps us to navigate our environment with great speed and efficiency; we understand an object’s gross significance first and worry about its individuality later. Categorization is one of the most important mental acts we perform, and we do it all the time. Even your ability to read this book depends on your ability to categorize: mastering reading requires placing similar-looking symbols, like b and d, in different letter categories, while recognizing that renderings as disparate as a boldface b, an italic b, and an ornate script b all represent the same letter.

Classifying objects isn’t easy. Mixed fonts aside, it is easy to underestimate the complexity of what is involved in categorization because we usually do it quickly and without conscious effort. When we think of food types, for example, we automatically consider an apple and a banana to be in the same category—fruit—though they appear quite different, but we consider an apple and a red billiard ball to be in different categories, even though they appear quite similar. An alley cat and a dachshund might both be brown and of roughly similar size and shape, while an Old English sheepdog is far different—large, white, and shaggy—but even a child knows that the alley cat is in the category feline and the dachshund and sheepdog are canines. To get an idea of just how sophisticated that categorization is, consider this: it was just a few years ago that computer scientists finally learned how to design a computer vision system that could accomplish the task of distinguishing cats from dogs.

As the above examples illustrate, one of the principal ways we categorize is by maximizing the importance of certain differences (the orientation of d versus b, or the presence of whiskers) while minimizing the relevance of others (the typeface in which a b is printed, or the color of the animal). But the arrow of our reasoning can also point the other way. If we conclude that a certain set of objects belongs to one group and a second set of objects to another, we may then perceive those within the same group as more similar than they really are—and those in different groups as less similar than they really are. Merely placing objects in groups can affect our judgment of those objects. So while categorization is a natural and crucial shortcut, like our brain’s other survival-oriented tricks, it has its drawbacks.

One of the earliest experiments investigating the distortions caused by categorization was a simple study in which subjects were asked to estimate the lengths of a set of eight line segments. The longest of those lines was 5 percent longer than the next in the bunch, which, in turn, was 5 percent longer than the third longest, and so on. The researchers asked half their subjects to estimate the lengths of each of the lines, in centimeters. But before asking the other subjects to do the same, they artificially grouped the lines into two sets—the longer four lines were labeled “Group A,” the shorter four labeled “Group B.” The experimenters found that once the lines were thought of as belonging to a group, the subjects perceived them differently. They judged the lines within each group as being closer in length to one another than they really were, and the length difference between the two groups as being greater than it actually was.2

Analogous experiments have since shown the same effect in many other contexts. In one experiment, the judgment of length was replaced by a judgment of color: volunteers were presented with letters and numbers that varied in hue and asked to judge their “degree of redness.” Those who were given the color samples with the reddest characters grouped together judged those to be more alike in color and more different from the other group than did volunteers who appraised the same samples presented without being grouped.3 In another study, researchers found that if you ask people in a given city to estimate the difference in temperature between June 1 and June 30, they will tend to underestimate it; but if you ask them to estimate the difference in temperature between June 15 and July 15, they will overestimate it.4 The artificial grouping of days into months skews our perception: we see two days within a month as being more similar to each other than equally distant days that occur in two different months, even though the time interval between them is identical.

In all these examples, when we categorize, we polarize. Things that for one arbitrary reason or another are identified as belonging to the same category seem more similar to each other than they really are, while those in different categories seem more different than they really are. The unconscious mind transforms fuzzy differences and subtle nuances into clear-cut distinctions. Its goal is to erase irrelevant detail while maintaining information on what is important. When that’s done successfully, we simplify our environment and make it easier and faster to navigate. When it’s done inappropriately, we distort our perceptions, sometimes with results harmful to others, and even ourselves. That’s especially true when our tendency to categorize affects our view of other humans—when we view the doctors in a given practice, the attorneys in a given law firm, the fans of a certain sports team, or the people in a given race or ethnic group as more alike than they really are.


A CALIFORNIA ATTORNEY wrote about the case of a young Salvadoran man who had been the only nonwhite employee at a box-manufacturing plant in a rural area. He had been denied a promotion, then fired for habitual tardiness and for being “too easy-going.” The man claimed that the same could be said of others but that their tardiness went unnoticed. With them, he said, the employer seemed to understand that sometimes a sickness in the family, a problem with a child, or trouble with the car can lead to being late. But with him, lateness was automatically attributed to laziness. His shortcomings were amplified, he said, and his achievements went unrecognized. We’ll never know whether his employer really did overlook the Salvadoran man’s individual traits, lumping him into the general category “Hispanic” and then interpreting his behavior in terms of a stereotype. The employer certainly disputed that accusation. And then he added, “Mateo’s being a Mexican didn’t make any difference to me. It’s like I didn’t even notice.”5

The term “stereotype” was coined in 1794 by the French printer Firmin Didot.6 It referred to a type of printing process by which cookie-cutter-like molds could be used to produce duplicate metal plates of hand-set type. With these duplicate plates, newspapers and books could be printed on several presses at once, enabling mass production. The term was first used in its current sense by the American journalist and intellectual Walter Lippmann in his 1922 book Public Opinion, a critical analysis of modern democracy and the role of the public in determining its course. Lippmann was concerned with the ever-growing complexity of the issues facing the voters and the manner in which they developed their views on those issues. He was particularly worried about the role of the mass media. Employing language that sounds as if it was pulled from a recent scholarly article on the psychology of categories, Lippmann wrote, “The real environment is altogether too big, too complex, and too fleeting for direct acquaintance…. And although we have to act in that environment, we have to reconstruct it on a simpler model before we can manage with it.”7 That simpler model was what he called the stereotype.

Lippmann recognized that the stereotypes people use come from cultural exposure. His was an era in which mass-circulation newspapers and magazines, as well as the new medium of film, were distributing ideas and information to audiences larger and more far-flung than had ever before been possible. They made available to the public an unprecedentedly wide array of experiences of the world, yet without necessarily providing an accurate picture. The movies, in particular, conveyed a vivid, real-looking portrait of life, but one often peopled by stock caricatures. In fact, in the early days of film, filmmakers combed the streets looking for “character actors,” easily identifiable social types, to play in their movies. As Lippmann’s contemporary Hugo Münsterberg wrote, “If the [producer] needs the fat bartender with his smug smile, or the humble Jewish peddler, or the Italian organ grinder, he does not rely on wigs and paint; he finds them all ready-made on the East Side [of New York].” Stock character types were (and still are) a convenient shorthand—we recognize them at once—but their use amplifies and exaggerates the character traits associated with the categories they represent. According to the historians Elizabeth Ewen and Stuart Ewen, by noting the analogy between social perception and a printing process capable of generating an unlimited number of identical impressions, “Lippmann had identified and named one of the most potent features of modernity.”8

People, categorized according to the animal they resemble. Courtesy of the National Library of Medicine.

Though categorizations due to race, religion, gender, and nationality get the most press, we categorize people in many other ways as well. We can probably all think of cases in which we lumped athletes with athletes, or bankers with bankers, in which we and others have categorized people we’ve met according to their profession, appearance, ethnicity, education, age, or hair color or even by the cars they drive. Some scholars in the sixteenth and seventeenth centuries even categorized people according to the animal they best resembled, as pictured on the facing page, in images from De Humana Physiognomonia, a kind of field guide to human character written in 1586 by the Italian Giambattista della Porta.9

A more modern illustration of categorization by appearance played out early one afternoon in an aisle of a large discount department store in Iowa City. There, an unshaven man in soiled, patched blue jeans and a blue workman’s shirt shoved a small article of clothing into the pocket of his jacket. A customer down the aisle looked on. A little later, a well-groomed man in pressed dress slacks, a sports jacket, and a tie did the same, observed by a different customer who happened to be shopping nearby. Similar incidents occurred again and again that day, well into the evening, over fifty more times, and there were a hundred more such episodes at other nearby stores. It was as if a brigade of shoplifters had been dispatched to rid the town of cheap socks and tacky ties. But the occasion wasn’t National Kleptomaniacs’ Day; it was an experiment by two social psychologists.10 With the full cooperation of the stores involved, the researchers’ aim was to study how the reactions of bystanders would be affected by the social category of the offender.

The shoplifters were all accomplices of the researchers. Immediately after each shoplifting episode, the thief walked out of hearing distance of the customer but remained within eyesight. Then another research accomplice, dressed as a store employee, stepped to the vicinity of the customer and began rearranging merchandise on the shelves. This gave the customer an easy opportunity to report the crime. The customers all observed the identical behavior, but they did not all react to it in the same way. Significantly fewer of the customers who saw the well-dressed man commit the crime reported it, as compared to those who had watched the scruffy individual. Even more interesting were the differences in attitude the customers had when they did alert the employee to the crime. Their analysis of events went beyond the acts they had observed—they seemed to form a mental picture of the thief based as much on his social category as on his actions. They were often hesitant when reporting the well-dressed criminal but enthusiastic when informing on the unkempt perpetrator, spicing up their accounts with utterances along the lines of “that son of a bitch just stuffed something down his coat.” It was as if the unkempt man’s appearance was a signal to the customers that shoplifting must be the least of his sins, an indicator of an inner nature as soiled as his clothes.

We like to think we judge people as individuals, and at times we consciously try very hard to evaluate others on the basis of their unique characteristics. We often succeed. But if we don’t know a person well, our minds can turn to his or her social category for the answers. Earlier we saw how the brain fills in gaps in visual data—for instance, compensating for the blind spot where the optic nerve attaches to the retina. We also saw how our hearing fills gaps, such as when a cough obliterated a syllable or two in the sentence “The state governors met with their respective legislatures convening in the capital city.” And we saw how our memory will add the details of a scene we remember only in broad strokes and provide a vivid and complete picture of a face even though our brains retained only its general features. In each of these cases our subliminal minds take incomplete data, use context or other cues to complete the picture, make educated guesses, and produce a result that is sometimes accurate, sometimes not, but always convincing. Our minds also fill in the blanks when we judge people, and a person’s category membership is part of the data we use to do that.

The realization that perceptual biases of categorization lie at the root of prejudice is due largely to the psychologist Henri Tajfel, the brain behind the line-length study. The son of a Polish businessman, Tajfel would likely have become a forgotten chemist rather than a pioneering social psychologist were it not for the particular social category to which he himself was assigned. Tajfel was a Jew, a category identification that meant he was banned from enrolling in college, at least in Poland. So he moved to France. There he studied chemistry, but he had no passion for it. He preferred partying—or, as one colleague put it, “savoring French culture and Parisian life.”11 His savoring ended when World War II began, and in November 1939, he joined the French army. Even less savory was where he ended up: in a German POW camp. There Tajfel was introduced to the extremes of social categorization that he would later say led him to his career in social psychology.

The Germans demanded to know the social group to which Tajfel belonged. Was he French? A French Jew? A Jew from elsewhere? If the Nazis thought of Jews as less than human, they nevertheless distinguished between pedigrees of Jew, like vintners distinguishing between the châteaus of origin of soured wine. To be French meant to be treated as an enemy. To be a French Jew meant to be treated as an animal. To admit being a Polish Jew meant swift and certain death. No matter what his personal characteristics or the quality of his relationship with his German captors, as he would later point out, if his identity were discovered, it would be his classification as a Polish Jew that would determine his fate.12 But there was also danger in lying. So, from the menu of stigmatization, Tajfel chose the middle dish: he spent the next four years pretending to be a French Jew.13 He was liberated in 1945 and in May of that year, as he put it, was “disgorged with hundreds of others from a special train arriving at the Gare d’Orsay in Paris … [soon to discover] that hardly anyone I knew in 1939—including my family—was left alive.”14 Tajfel spent the next six years working with war refugees, especially children and adolescents, and mulling over the relationships between categorical thinking, stereotyping, and prejudice. According to the psychologist William Peter Robinson, today’s theoretical understanding of those subjects “can almost without exception be traced back to Tajfel’s theorizing and direct research intervention.”15

Unfortunately, as was the case with other pioneers, it took the field many years to catch up with Tajfel’s insights. Even well into the 1980s, many psychologists viewed discrimination as a conscious and intentional behavior, rather than one commonly arising from normal and unavoidable cognitive processes related to the brain’s vital propensity to categorize.16 In 1998, however, a trio of researchers at the University of Washington published a paper that many see as providing smoking-gun evidence that unconscious, or “implicit,” stereotyping is the rule rather than the exception.17 Their paper presented a computerized tool called the “Implicit Association Test,” or IAT, which has become one of social psychology’s standard tools for measuring the degree to which an individual unconsciously associates traits with social categories. The IAT has helped revolutionize the way social scientists look at stereotyping.


IN THEIR ARTICLE, the IAT pioneers asked their readers to “consider a thought experiment.” Suppose you are shown a series of words naming male and female relatives, such as “brother” or “aunt.” You are asked to say “hello” when presented with a male relative and “good-bye” when shown a female. (In the computerized version you see the words on a screen and respond by pressing letters on the keyboard.) The idea is to respond as quickly as possible while not making too many errors. Most people who try this find that it is easy and proceed rapidly. Next, the researchers ask that you repeat the game, only this time with male and female names, like “Dick” or “Jane” instead of relatives. The names are of unambiguous gender, and again, you can fly through them. But this is just an appetizer.

The real experiment starts now: in phase 1, you are shown a series of words that can be either a name or a relative. You are asked to say “hello” for the male names and relatives and “good-bye” for the female names and relatives. It’s a slightly more complex task than before, but still not taxing. What’s important is the time it takes you to make each selection. Try it with the following word list; you can say “hello” or “good-bye” to yourself if you are afraid of scaring away your own relatives who may be within earshot (hello = male name or relative; good-bye = female name or relative):

John, Joan, brother, granddaughter, Beth, daughter, Mike, niece, Richard, Leonard, son, aunt, grandfather, Brian, Donna, father, mother, grandson, Gary, Kathy.

Now for phase 2. In phase 2 you see a list of the names and relatives again, but this time you are asked to say “hello” when seeing a male name or female relative and “good-bye” when you see a female name or male relative. Again, what’s important is the time it takes you to make your selections. Try it (hello = male name or female relative; good-bye = female name or male relative):

John, Joan, brother, granddaughter, Beth, daughter, Mike, niece, Richard, Leonard, son, aunt, grandfather, Brian, Donna, father, mother, grandson, Gary, Kathy.

The phase 2 response times are typically far greater than those for phase 1: perhaps three-fourths of a second per word, as opposed to just half a second. To understand why, let’s look at this as a task in sorting. You are being asked to consider four categories of objects: male names, male relatives, female names, and female relatives. But these are not independent categories. The categories male names and male relatives are associated—they both refer to males. Likewise, the categories female names and female relatives are associated. In phase 1 you are asked to label the four categories in a manner consistent with that association—to label all males in the same manner, and all females in the same manner. In phase 2, however, you are asked to ignore your association, to label males one way if you see a name but the other way if you see a relative, and to also label female terms differently depending upon whether the term is a name or a relative. That is complicated, and the complexity eats up mental resources, slowing you down.

That is the crux of the IAT: when the labeling you are asked to do follows your mental associations, it speeds you up, but when it mixes across associations, it slows you down. As a result, by examining the difference in speed between the two ways you are asked to label, researchers can probe how strongly a person associates traits with a social category.
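
To make the logic of that speed comparison concrete, here is a minimal sketch of an IAT-style task run at the keyboard, using a shortened version of the word list above. The “h” and “g” keys, the console timing, and the simple averaging of correct-response times are assumptions made for illustration; the published test uses its own stimuli and scoring procedure.

```python
# Minimal sketch of an IAT-style timing task (illustrative only).
import time

# (word, gender, kind) triples; a shortened version of the list in the text.
WORDS = [
    ("John", "male", "name"), ("Joan", "female", "name"),
    ("brother", "male", "relative"), ("granddaughter", "female", "relative"),
    ("Mike", "male", "name"), ("niece", "female", "relative"),
    ("son", "male", "relative"), ("Kathy", "female", "name"),
]

def run_phase(rule) -> float:
    """Present each word, time the keypress, return the mean time for correct answers."""
    times = []
    for word, gender, kind in WORDS:
        start = time.perf_counter()
        answer = input(f"{word} -> type h (hello) or g (good-bye): ").strip().lower()
        elapsed = time.perf_counter() - start
        if answer == rule(gender, kind):           # keep only correct responses
            times.append(elapsed)
    return sum(times) / len(times) if times else float("nan")

# Phase 1: the labels follow the male/female association.
phase1 = run_phase(lambda gender, kind: "h" if gender == "male" else "g")
# Phase 2: the labels cut across it (hello = male name or female relative).
phase2 = run_phase(lambda gender, kind:
                   "h" if (gender, kind) in {("male", "name"), ("female", "relative")} else "g")

print(f"Phase 1 mean: {phase1:.2f} s   Phase 2 mean: {phase2:.2f} s")
print(f"Slowdown from crossing your associations: {phase2 - phase1:.2f} s")
```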

For example, suppose that instead of words denoting male and female relatives, I showed you terms related to either science or the arts. If you had no mental association linking men and science or women and the arts, it wouldn’t matter if you had to say “hello” for men’s names and science terms and “good-bye” for women’s names and arts terms, or “hello” for men’s names and arts terms and “good-bye” for women’s names and science terms. Hence there would be no difference between phase 1 and phase 2. But if you had strong associations linking women and the arts and linking men and science—as most people do—the exercise would be very similar to the original task, with male and female relatives and male and female names, and there would be a considerable difference in your response times in phase 1 and phase 2.

When researchers administer tests analogous to this, the results are stunning. For example, they find that about half the public shows a strong or moderate bias toward associating men with science and women with the arts, whether they are aware of such links or not. In fact, there is little correlation between the IAT results and measures of “explicit,” or conscious, gender bias, such as self-reports or attitude questionnaires. Similarly, researchers have shown subjects images of white faces, black faces, hostile words (awful, failure, evil, nasty, and so on), and positive words (peace, joy, love, happy, and so on). If you have pro-white and anti-black associations, it will take you longer to sort words and images when you have to connect positive words to the black category and hostile words to the white category than when black faces and hostile words go in the same bin. About 70 percent of those who have taken the test exhibit this pro-white association, including many who are (consciously) appalled at learning that they hold such attitudes. Even many black people, it turns out, exhibit an unconscious pro-white bias on the IAT. It is difficult not to when you live in a culture that embodies negative stereotypes about African Americans.

Though your evaluation of another person may feel rational and deliberate, it is heavily informed by automatic, unconscious processes—the kind of emotion-regulating processes carried out within the ventromedial prefrontal cortex (VMPC). In fact, damage to the VMPC has been shown to eliminate unconscious gender stereotyping.18 As Walter Lippmann recognized, we can’t avoid mentally absorbing the categories defined by the society in which we live. They permeate the news, television programming, films, all aspects of our culture. And because our brains naturally categorize, we are vulnerable to acting on the attitudes those categories represent. But before you recommend incorporating VMPC obliteration into your company’s management training course, remember that the propensity to categorize, even to categorize people, is for the most part a blessing. It allows us to understand the difference between a bus driver and a bus passenger, a store clerk and a customer, a receptionist and a physician, a maître d’ and a waiter, and all the other strangers we interact with, without our having to pause and consciously puzzle out everyone’s role anew during each encounter. The challenge is not how to stop categorizing but how to become aware of when we do it in ways that prevent us from being able to see individual people for who they really are.


THE PSYCHOLOGY PIONEER Gordon Allport wrote that categories saturate all that they contain with the same “ideational and emotional flavor.”19 As evidence of that, he cited a 1948 experiment in which a Canadian social scientist wrote to 100 different resorts that had advertised in newspapers around the holidays.20 The scientist drafted two letters to each resort, requesting accommodations on the same date. He signed one letter with the name “Mr. Lockwood” and the other with the name “Mr. Greenberg.” Mr. Lockwood received a reply with an offer of accommodations from 95 of the resorts. Mr. Greenberg received such a reply from just 36. The decisions to spurn Mr. Greenberg were obviously not made on Mr. Greenberg’s own merits but on the religious category to which he presumably belonged.

Prejudging people according to a social category is a time-honored tradition, even among those who champion the underprivileged. Consider this quote by a famed advocate for equality:

Ours is one continued struggle against degradation sought to be inflicted upon us by the European, who desire to degrade us to the level of the raw Kaffir [black African] … whose sole ambition is to collect a certain number of cattle to buy a wife with, and then pass his life in indolence and nakedness.21

That was Mahatma Gandhi. Or consider the words of Che Guevara, a revolutionary who, according to Time magazine, left his native land “to pursue the emancipation of the poor of the earth” and helped overthrow the Cuban dictator Fulgencio Batista.22 What did this Marxist champion of poor oppressed Cubans think of the poor blacks in the United States? He said, “The Negro is indolent and lazy, and spends his money on frivolities, whereas the European is forward-looking, organized and intelligent.”23 And how about this famous advocate for civil rights:

I will say then that I am not, nor ever have been in favor of bringing about in any way the social and political equality of the white and black races … there is a physical difference between the white and black races which I believe will forever forbid the two races living together on terms of social and political equality … and I as much as any other man am in favor of having the superior position assigned to the white race.

That was Abraham Lincoln in a debate at Charleston, Illinois, in 1858. He was incredibly progressive for his time but still believed that social, if not legal, categorization would forever endure. We’ve made progress. Today in many countries it is difficult to imagine a serious candidate for national political office voicing views such as Lincoln’s—or if he did, at least he wouldn’t be considered the pro–civil rights candidate. Culture has evolved to the point where most people feel it is wrong to willfully cheat someone out of an opportunity because of character traits we infer from their category identity. But we are only beginning to come to grips with unconscious bias.

Unfortunately, if science has recognized unconscious stereotyping, the law has not. In the United States, for example, individuals claiming discrimination based on race, color, religion, sex, or national origin must prove not only that they were treated differently but that the discrimination was purposeful. No doubt discrimination often is purposeful. There will always be people like the Utah employer who consciously discriminated against women and was quoted in court as having said, “Fucking women, I hate having fucking women in the office.”24 It is relatively easy to address discrimination by people who preach what they practice. The challenge science presents to the legal community is to move beyond that, to address the more difficult issue of unconscious discrimination, of bias that is subtle and hidden even from those who exercise it.

We can all personally fight unconscious bias, for research has shown that our tendency to categorize people can be influenced by our conscious goals. If we are aware of our bias and motivated to overcome it, we can. For example, studies of criminal trials reveal one set of circumstances in which people’s bias regarding appearance is routinely overcome. In particular, it has long been known that people’s attributions of guilt and recommendations of punishment are subliminally influenced by the looks of the defendant.25 But typically, more attractive defendants receive more lenient treatment only when accused of minor crimes such as traffic infractions or swindles, not when accused of more serious crimes like murder. Our unconscious judgment, which relies heavily on the categories to which we assign people, is always competing with our more deliberative and analytical conscious thought, which may see them as individuals. As these two sides of our minds battle it out, the degree to which we view a person as an individual versus a generic group member can vary on a sliding scale. That’s what seems to be happening in criminal trials. Serious crimes usually involve longer, more detailed examination of the defendant, with more at stake, and the added conscious focus seems to outweigh the attractiveness bias.

The moral of the story is that if we wish to overcome unconscious bias, it requires effort. A good way to start is by taking a closer look at those we are judging, even if they are not on trial for murder but, instead, are simply asking for a job or a loan—or our vote. Our personal knowledge of a specific member of a category can easily override our category bias, but more important, over time repeated contact with category members can act as an antidote to the negative traits society assigns to people in that category.

I recently had my eyes opened to the way experience can trump bias. It happened after my mother moved into an assisted living center. Her fellow residents there are mainly around ninety. Since I have had little exposure to large numbers of people that age, I initially viewed all of them as alike: white hair, slouched posture, tethered to their walkers. I figured that if they’d ever held a job, it must have been building the pyramids. I saw them not as individuals but, rather, as exemplars of their social stereotype, assuming they were all (except my mother, of course) rather dim and feebleminded and forgetful.

My thinking changed abruptly one day in the dining room, when my mother remarked that on the afternoons when the hairdresser visited the assisted living center, she felt pain and dizziness as she leaned her head back to have her hair washed. One of my mother’s friends said that this was a very bad sign. My initial thoughts were dismissive: What does she mean by a bad sign? Is that an astrological prediction? But the friend went on to explain that my mother’s complaints were the classic symptoms of an occluded carotid artery, which could lead to a stroke, and urged that she see her physician about it. My mother’s friend wasn’t just a ninety-year-old; she was a doctor. And as I got to know others in the home, over time, I started to see ninety-year-olds as varied and unique characters, with many different talents, none of which related to the pyramids.

The more we interact with individuals and are exposed to their particular qualities, the more ammunition our minds have to counteract our tendency to stereotype, for the traits we assign to categories are products not just of society’s assumptions but of our own experience. I didn’t take the IAT before and after, but my guess is that my implicit prejudice concerning the very old has been considerably reduced.


IN THE 1980S, scientists in London studied a seventy-seven-year-old shopkeeper who had had a stroke in the lower part of his occipital lobe.26 His motor system and memory were unaffected, and he retained good speaking and visual skills. For the most part he seemed cognitively normal, but he did have one problem. If shown two objects that had the same function but were not identical—say, two different trains, two brushes, or two jugs—he could not recognize the connection between them. He could not tell, even, that the letters a and A meant the same thing. As a result, the patient reported great difficulty in everyday life, even when attempting simple tasks such as setting the table. Scientists say that without our ability to categorize we would not have survived as a species, but I’ll go further: without that ability, one could hardly survive even as an individual. In the previous pages, we’ve seen that categorization, like many of our unconscious mental processes, has both up- and downsides. In the next chapter, we’ll find out what happens when we categorize ourselves, when we define ourselves as being connected, by some trait, to certain other individuals. How does that affect the way we view and treat those within our group and those on the outside?

CHAPTER 8 In-Groups and Out-Groups The dynamics of us and them … the science behind Lord of the Flies

All groups… develop a way of living with characteristic codes and beliefs.

—GORDON ALLPORT

THE CAMP WAS in a densely wooded area in southeastern Oklahoma, about seven miles from the nearest town. Hidden from view by heavy foliage and ringed by a fence, it was situated in the midst of a state park called Robbers Cave. The park got its name because Jesse James had once used it as a hideout, and it was still an ideal place to hole up if being left undisturbed was a priority. There were two large cabins inside the perimeter, separated by rough terrain and out of sight and hearing both from any road and from each other. In the 1950s, before cell phones and before the Internet, this was enough to ensure their occupants’ isolation. At ten-thirty on the night of the raid, the inhabitants of one of those cabins darkened their faces and arms with dirt, then quietly made their way through the forest to the other cabin and, while its occupants slept, entered through the unlocked door. The intruders were angry and out for revenge. They were eleven years old.

For these kids, revenge meant ripping the mosquito netting off the beds, yelling insults, and grabbing a prized pair of blue jeans. Then, as their victims awoke, the invaders ran back to their own cabin as suddenly as they had arrived. They’d intended to inflict insult, not injury. Sounds like nothing more than a typical story of summer camp gone awry, but this camp was different. As these boys played and fought, ate and talked, planned and plotted, a corps of adults was secretly watching and listening, studying their every move with neither their knowledge nor their consent.

The boys at Robbers Cave that summer had been enrolled in a pioneering and ambitious—and, by today’s standards, unethical—field experiment in social psychology.1 According to a later report on the study, the experimental subjects had been carefully chosen for uniformity. A researcher laboriously screened each child before recruiting him, surreptitiously observing him on the playground and perusing his school records. The subjects were all middle-class, Protestant, Caucasian, and of average intelligence. All were well-adjusted boys who had just completed the fifth grade. None knew any of the others. After targeting two hundred prospects, the researchers had approached their parents offering a good deal. They could enroll their son in a three-week summer camp for a nominal fee, provided they agreed to have no contact with their child throughout that period. During that time, the parents were told, the researchers would study the boys and their “interactions in group activities.”

Twenty-two sets of parents took the bait. The researchers divided the boys into two groups of eleven, balanced for height, weight, athletic ability, popularity, and certain skills related to the activities they would be engaging in at camp. The groups were assembled separately, not told of each other’s existence, and kept isolated during their first week. In that week, there were really two boys’ camps at Robbers Cave, and the boys in each were kept unaware of the other.

As the campers engaged in baseball games, singing, and other normal camp activities, they were watched closely by their counselors, who in reality were all researchers studying them and secretly taking notes. One point of interest to the researchers was whether, how, and why each collection of boys would coalesce into a cohesive group. And coalesce they did, each group forming its own identity, choosing a name (the Rattlers and the Eagles), creating a flag, and coming to share “preferred songs, practices and peculiar norms” that were different from those of the other group. But the real point of the study was to investigate how and why, once the groups had coalesced, they would react to the presence of a new group. And so, after the first week, the Rattlers and the Eagles were introduced to each other.

Films and novels depicting either the distant past or the postapocalyptic future warn that isolated groups of Homo sapiens should always be approached with care, their members more likely to cut off your nose than offer you free incense. The physicist Stephen Hawking once famously endorsed that view, arguing that it would be better to beware of aliens than to invite them in for tea. Human colonial history seems to confirm this. When people from one nation land on the shores of another with a far different culture, they may say they come in peace, but they soon start shooting. In this case, the Rattlers and Eagles had their Christopher Columbus moment at the start of the second week. That’s when an observer-counselor separately told each group of the other’s existence. The groups had a similar reaction: let’s challenge the other to a sports tournament. After some negotiations, a series of events was arranged to take place over the following week, including baseball games, tug-of-war matches, tent-pitching contests, and a treasure hunt. Camp counselors agreed to provide trophies, medals, and prizes for the winners.

It didn’t take long for the Rattlers and the Eagles to settle into the dynamics of the countless other warring factions that had preceded them. On the first day of competition, after losing at tug-of-war, the Eagles, on their way back to their cabin, happened by the ball field where the Rattlers had hung their flag high up on the backstop. A couple of Eagles, agitated about getting beaten, climbed up and took it down. They set it on fire, and when the fire went out, one of them climbed back up and rehung it. The counselors had no response to the flag burning, except to dutifully and surreptitiously take their notes. And then they arranged the next meeting of the members of the two groups, who were told that they would now compete at baseball and other activities.

After breakfast the following morning, the Rattlers were taken to the ball field, where, while they waited for the Eagles to arrive, they discovered their burnt flag. The researchers watched as the Rattlers plotted their retaliation, which resulted in a mass brawl when the Eagles did show up. The staff observed for a while, then intervened to stop the fighting. But the feud continued, with the Rattlers’ raid on the Eagles’ cabin the next night, and other events in the days that followed. The researchers had hoped that by setting up groups with competitive goals but no inherent differences, they could observe the generation and evolution of derogatory social stereotypes, genuine intergroup hostility, and all the other symptoms of intergroup conflict we humans are known for. They were not disappointed. Today, the boys of Robbers Cave are past retirement age, but the tale of their summer, and the researchers’ analysis of it, is still being cited in the psychological literature.

Humans have always lived in bands. If competing in a tug-of-war contest generated intergroup hostility, imagine the hostility between bands of humans with too many mouths to feed and too few elephant carcasses to dine on. Today we think of war as being at least in part based on ideology, but the desire for food and water is the strongest ideology. Long before communism, democracy, or theories of racial superiority were invented, neighboring groups of people regularly fought with and even massacred each other, inspired by the competition for resources.2 In such an environment, a highly evolved sense of “us versus them” would have been crucial to survival.

There was also a sense of “us versus them” within bands, for, as in other hominid species, prehistoric humans formed alliances and coalitions inside their own groups.3 While a talent for office politics is useful in the workplace today, twenty thousand years ago group dynamics might have determined who got fed, and the human resources department might have disciplined slackers with a spear through the back. So if the ability to pick up cues that signal political allegiances is important in contemporary work, in prehistoric times it was vital, for the equivalent of being fired was being dead.

Scientists call any group that people feel part of an “in-group,” and any group that excludes them an “out-group.” As opposed to the colloquial usage, the terms “in-group” and “out-group” in this technical sense refer not to the popularity of those in the groups but simply to the us-them distinction. It is an important distinction because we think differently about members of groups we are part of and those in groups we are not part of, and, as we shall see, we also behave differently toward them. And we do this automatically, regardless of whether or not we consciously intend to discriminate between the groups. In the last chapter I talked about how putting other people into categories affects our assessment of them. Putting ourselves into in- and out-group categories also has an effect—on the way we see our own place in the world and on how we view others. In what follows we’ll learn what happens when we use categorization to define ourselves, to differentiate “us” from others.


WE ALL BELONG to many in-groups. As a result, our self-identification shifts from situation to situation. At different times the same person might think of herself as a woman, an executive, a Disney employee, a Brazilian, or a mother, depending on which is relevant—or which makes her feel good at the time. Switching the in-group affiliation we’re adopting for the moment is a trick we all use, and it’s helpful in maintaining a cheery outlook, for the in-groups we identify with are an important component of our self-image. Both experimental and field studies have found, in fact, that people will make large financial sacrifices to establish a feeling of belonging to an in-group they aspire to be part of.4 That’s one reason, for example, that people pay so much to be members of exclusive country clubs, even if they don’t use the facilities. A computer games executive once shared with me a great example of the willingness to give up money for the prestige of a coveted in-group identity. One of his senior producers marched into his office after finding out that he had given another producer a promotion and raise. He told her he couldn’t give her a raise for a while yet, because of financial constraints. But she was insistent on being given a raise, now that she knew her colleague had gotten one. It was tough for this executive because his business was ultracompetitive, and other companies were always hovering in the background looking to steal good producers, yet he didn’t have the funds to hand out raises to all who deserved them. After discussing the matter for a while, he noticed that what really bothered his employee was not the lack of a raise but that the other producer, who was junior to her, now had the same title. And so they agreed on a compromise: he would promote her and give her a new title now, but the raise would come later. Like the country club, this executive had awarded her a high-status in-group membership in exchange for money. Advertisers are very much attuned to that dynamic. That’s why, for example, Apple spends hundreds of millions of dollars on marketing campaigns in an attempt to associate the Mac in-group with smarts, elegance, and hipness, and the PC in-group with loser qualities, the opposites of those.

Once we think of ourselves as belonging to an exclusive country club, executive rank, or class of computer users, the views of others in the group seep into our thinking, and color the way we perceive the world. Psychologists call those views “group norms.” Perhaps the purest illustration of their influence came from the man who engineered the Robbers Cave study. His name was Muzafer Sherif. A Turk who immigrated to America for graduate school, Sherif earned his PhD from Columbia University in 1935. His dissertation focused on the influence of group norms on vision. You’d think vision would arise through an objective process, but Sherif’s work showed that a group norm can affect something as basic as the way you perceive a point of light.

In his work, decades ahead of its time, Sherif brought subjects into a dark room and displayed a small illuminated dot on a wall. After a few moments, the dot would appear to move. But that was just an illusion. That appearance of motion was the result of tiny eye movements that caused the image on the retina to jiggle. As I mentioned in Chapter 2, under normal conditions the brain, detecting the simultaneous jiggling of all the objects in a scene, corrects for this jiggling, and you perceive the scene as motionless. But when a dot of light is viewed without context the brain is fooled and perceives the dot as moving in space. Moreover, since there are no other objects for reference, the magnitude of the motion is open to a wide degree of interpretation. Ask different people how far the dot has moved and you get widely different answers.

Sherif showed the dot to three people at a time and instructed them that whenever they saw the dot move, they should call out how far it had moved. An interesting phenomenon occurred: people in a given group would call out different numbers, some high and some low, but eventually their estimates would converge to within a narrow range, the “norm” for that group of three. Although the norm varied widely from group to group, within each group the members came to agree upon a norm, which they arrived at without discussion or prompting. Moreover, when individual group members were invited back a week later to repeat the experiment, this time on their own, they replicated the estimates arrived at by their group. The perception of the subjects’ in-group had become their perception.

———

SEEING OURSELVES AS a member of a group automatically marks everyone as either an “us” or a “them.” Some of our in-groups, like our family, our work colleagues, or our bicycling buddies, include only people we know. Others, like females, Hispanics, or senior citizens, are broader groups that society defines and assigns traits to. But whatever in-groups we belong to, they consist by definition of people we perceive as having some kind of commonality with us. This shared experience or identity causes us to see our fate as being intertwined with the fate of the group, and thus the group’s successes and failures as our own. It is natural, then, that we have a special place in our hearts for our in-group members.

We may not like people in general, but however little or much we like our fellow human beings, our subliminal selves tend to like our fellow in-group members more. Consider the in-group that is your profession. In one study, researchers asked subjects to rate the likability of doctors, lawyers, waiters, and hairdressers, on a scale from 1 to 100.5 The twist was, every subject in this experiment was him- or herself either a doctor, a lawyer, a waiter, or a hairdresser. The results were very consistent: those in three of the four professions rated the members of the other professions as average, with a likability around 50. But they rated those in their own profession significantly higher, around 70. There was only one exception: the lawyers, who rated both those in the other professions and other lawyers at around 50. That probably brings to mind several lawyer jokes, so there is no need for me to make any. However, the fact that lawyers do not favor fellow lawyers is not necessarily due to the circumstance that the only difference between a lawyer and a catfish is that one is a bottom-feeding scavenger and the other is a fish. Of the four groups assessed by the researchers, lawyers, you see, form the only one whose members regularly oppose others in their own group. So while other lawyers may be in a given lawyer’s in-group, they are also potentially in his or her out-group. Despite that anomaly, research suggests that, whether with regard to religion, race, nationality, computer use, or our operating unit at work, we generally have a built-in tendency to prefer those in our in-group. Studies show that common group membership can even trump negative personal attributes.6 As one researcher put it, “One may like people as group members even as one dislikes them as individual persons.”

This finding—that we find people more likable merely because we are associated with them in some way—has a natural corollary: we also tend to favor in-group members in our social and business dealings, and we evaluate their work and products more favorably than we might otherwise, even if we think we are treating everyone equally.7 For example, in one study researchers divided people into groups of three. Each group was paired with another, and then each of the paired groups was asked to perform three varied tasks: to use a children’s toy set to make a work of art, to sketch a plan for a senior housing project, and to write a symbolic fable that imparts a moral to the reader. For each task, one member of each group in the pair (the “nonparticipant”) was separated from his or her cohorts, and did not take part in the tasks. After each pair of groups had completed a task, the two nonparticipants were asked to rate the results of the efforts of both groups.

The nonparticipants had no vested interest in the products their in-group had turned out; nor had the groups been formed with regard to any distinctive shared qualities. If the nonparticipants had been objective, therefore, you’d think that on average they would have preferred the products of their out-group just as often as they preferred those of their in-group. But they didn’t. In two cases out of three, when they had a preference, it was for what their in-group had produced.

Another way the in- and out-group distinction affects us is that we tend to think of our in-group members as more variegated and complex than those in the out-group. For example, the researcher conducting the study involving doctors, lawyers, waiters, and hairdressers asked all of his subjects to estimate how much those in each profession vary with regard to creativity, flexibility, and several other qualities. They all rated those in the other professions as significantly more homogeneous than those in their own group. Other studies have come to the same conclusion with regard to groups that differ by age, nationality, gender, race, and even the college people attended and the sorority women belonged to.8 That’s why, as one set of researchers pointed out, newspapers run by the predominantly white establishment print headlines such as “Blacks Seriously Split on Middle East,” as if it is news when all African Americans don’t think alike, but they don’t run headlines like “White People Seriously Split on Stock Market Reform.”9

It might seem natural to perceive more variability in our in-groups because we often know their members better, as individuals. For instance, I know a great many theoretical physicists personally, and to me they seem to be quite a varied bunch. Some like piano music; others prefer the violin. Some read Nabokov; others, Nietzsche. Okay, maybe they’re not that varied. But now suppose I think of investment bankers. I know very few of those, but in my mind I see them as even less varied than theoretical physicists: I imagine they all read only the Wall Street Journal, drive fancy cars, and don’t listen to music at all, preferring to watch the financial news on television (unless the news is bad, in which case they just skip it and pop open a $500 bottle of wine). The surprise is that the feeling that our in-group is more varied than our out-group does not depend on having more knowledge of our in-group. Instead, the categorization of people into in-groups and out-groups alone is enough to trigger that judgment. In fact, as we’ll see in just a bit, our special feelings toward our in-group persist even when researchers artificially sort strangers into random in-groups and out-groups. When Mark Antony addressed the throngs after Caesar’s assassination, saying, in Shakespeare’s version of the events, “Friends, Romans, countrymen, lend me your ears,” he was really saying, “In-group members, in-group members, in-group members …” A wise appeal.


A FEW YEARS ago, three Harvard researchers gave dozens of Asian American women at Harvard a difficult math test.10 But before getting them started, the researchers asked them to fill out a questionnaire about themselves. These Asian American women were members of two in-groups with conflicting norms: they were Asians, a group identified with being good at math, and they were women, a group identified as being poor at it. One set of participants received a questionnaire asking about what languages they, their parents, and grandparents spoke and how many generations of their family had lived in America. These questions were designed to trigger the women’s identity as Asian Americans. Other subjects answered queries about coed dormitory policy, designed to trigger their identity as women. A third group, the control group, was quizzed about their phone and cable TV service. After the test, the researchers gave the participants an exit survey. Measured by the subjects’ self-reports in that exit questionnaire, the initial questionnaire had had no impact on their conscious assessment of either their ability or the test. Yet something had clearly affected them subliminally, because the women who had been manipulated to think of themselves as Asian Americans had done better on the test than did the control group, who, in turn, had done better than the women reminded of their female in-group. Your in-group identity influences the way you judge people, but it also influences the way you feel about yourself, the way you behave, and sometimes even your performance.

We all belong to multiple in-groups, and, like the groups Asian Americans and women, they can have conflicting norms. I’ve found that once we are conscious of this, we can use it to our advantage. For example, I occasionally smoke a cigar, and when I do I feel a certain in-group kinship with my best friend in college, my PhD adviser, and Albert Einstein, all fellow physicists who liked their cigars. But when I think my smoking is getting dangerously out of hand, I find I can kill the urge quickly by coaxing myself to focus instead on another in-group of smokers, one that includes my father, who suffered from lung problems, and my cousin, who had debilitating mouth cancer.

The conflicting norms of our in-groups can at times lead to rather curious contradictions in our behavior. For example, from time to time, the media will broadcast public service announcements aimed at reducing petty crimes like littering and pilfering relics from national parks. These ads often also decry the alarming frequency with which these crimes occur. In one such ad, a Native American dressed in traditional garb canoes across a debris-ridden river. After the Native American reaches the heavily littered opposite shore, a driver—John Q. Public—zooms down an adjacent road and tosses trash out of his car, strewing garbage at the Native American’s feet. The ad cuts to a close-up, showing a lone teardrop running down the man’s face. That ad explicitly preaches an anti-litter message to our conscious minds. But it also conveys a message to our unconscious: those in our in-group, our fellow parkgoers, do litter. So which message wins out, the ethical appeal or the group norm reminder? No one studied the effects of that particular ad, but in a controlled study done on public service announcements, another ad that simply denounced littering was successful in inhibiting the practice, while a similar ad that included the phrase “Americans will produce more litter than ever!” led to increased littering.11 It’s doubtful that anyone consciously interpreted “Americans will produce more litter than ever!” as an order rather than a criticism, but by identifying littering as a group norm, it had that result.

In a related study, researchers created a sign condemning the fact that many visitors steal the wood from Petrified Forest National Park.12 They placed the sign on a well-used pathway, along with some secretly marked pieces of wood. Then they watched to see what effect the sign would have. They found that in the absence of a sign, souvenir hunters stole about 3 percent of the wood pieces in just a ten-hour period. But with the warning sign in place, that number almost tripled, to 8 percent. Again, it is doubtful that many of the pilferers literally said to themselves, Everyone does it, so why not me? But that seems to be the message received by their unconscious. The researchers pointed out that messages that condemn yet highlight undesired social norms are common, and that they invite counterproductive results. So while a college administration may think it is warning students when it says, “Remember! You must cut down on binge drinking, which is prevalent on campus!” what sinks in may instead be a call to action: Remember! Binge drinking is prevalent on campus! When, as a child, I tried to use my friend’s habits to justify, say, playing baseball on Saturday instead of going to the synagogue, my mother would say something like “So, if Joey jumped into a volcano, would you do it, too?” Now, decades later, I realize I should have said, “Yeah, Mom. Studies show that I would.”


I’VE SAID THAT we treat our in-groups and out-groups differently in our thinking, whether or not we consciously intend to make the distinction. Over the years, curious psychologists have tried to determine the minimal requirements necessary for a person to feel a kinship with an in-group. They have found that there is no minimal requirement. It is not necessary for you to share any attitudes or traits with your fellow group members, or even for you to have met the other group members. It is the simple act of knowing that you belong to a group that triggers your in-group affinity.

In one study, researchers had subjects look at images of paintings by the Swiss artist Paul Klee and the Russian painter Wassily Kandinsky and then indicate which they preferred.13 The researchers labeled each subject as either a Kandinsky fan or a Klee fan. The two painters had distinctive styles, but unless the subjects happened to be fanatic art historians specializing in early-twentieth-century avant-garde European painters, they probably had no reason to feel any particular warmth for those who shared their opinion. For the vast majority of people, on the passion scale, Klee versus Kandinsky was not exactly Brazil versus Argentina or fur coat versus cloth coat.

After labeling their subjects, the researchers did something that may appear odd. They, in essence, gave each subject a bucket of money and told them to divide it among the other subjects in any way they saw fit. The division was carried out in private. None of the subjects knew any of the other subjects, or could even see them during the course of the experiment. Still, when passing out the money, they favored their in-group, those who shared their group label.

A large body of research replicates the finding that our group-based social identity is so strong that we will discriminate against them and favor us even if the rule that distinguishes them from us is akin to flipping a coin. That’s right: not only do we identify with a group based on the flimsiest of distinctions, we also look at group members differently—even if group membership is unrelated to any relevant or meaningful personal qualities. That’s not just important in our personal lives; it also affects organizations. For example, companies can gain by fostering their employees’ in-group identification, something that can be accomplished by creating and making salient a distinctive corporate culture, as was done very successfully by companies such as Disney, Apple, and Google. On the other hand, it can be dicey when a company’s internal departments or divisions develop a strong group identity, for that can lead to both in-group favoritism and out-group discrimination, and research suggests that hostility erupts more readily between groups than between individuals.14 But regardless of what kind of shared identity does or doesn’t exist within a company, many companies find it effective to use marketing to foster a group identity among their customers. That’s why in-groups based on Mac versus PC ownership, or Mercedes versus BMW versus Cadillac, are more than just computer clubs or car clubs: we treat such categorizations as meaningful in a far broader realm than they have any right to be.

Dog person versus cat person. Rare meat versus medium. Powdered detergent versus liquid. Do we really draw broad inferences from such narrow distinctions as these? The Klee/Kandinsky study, and literally dozens more like it, followed a classic experimental paradigm invented by Henri Tajfel, who conducted the line-length experiment.15 In this paradigm, subjects were assigned to one of two groups. They were told that their group assignments had been made on the basis of something they shared with other members of the group but which, objectively speaking, was really quite meaningless as a way of affiliating with a group—either the Klee/Kandinsky preference or whether they had overestimated or underestimated the number of dots that were quickly flashed on a screen.

As in the study I described earlier, Tajfel allowed his subjects to dole out awards to their fellow subjects. To be precise, he had them give out points that could later be cashed in for money. The subjects did not know the identities of the people they were giving points to. But in all cases they knew the group to which the person belonged. In Tajfel’s original study, the handing out of points was a bit complicated, but the crux of the experiment lies precisely in the way it was done, so it is worth describing.

The experiment consisted of over a dozen stages. At each stage, a subject (“awarder”) had to make a choice regarding how to dole out points to two other subjects (“recipients”), who, as I said, were anonymous. Sometimes the two recipients were both members of the subject’s own group or both members of the other group; sometimes one was a member of the subject’s own group and the other was a member of the other group.

The catch was that the choices offered to awarders did not represent a zero-sum game. That is, they did not entail simply deciding how to divide a fixed number of points. Rather, the options offered added up to varying point totals, as well as differing ways of splitting those points between the two recipients. At each stage, the awarder had to choose from among over a dozen alternative ways to award points. If the awarders felt no in-group favoritism, the logical action would be to choose whichever alternative bestowed upon the two recipients the greatest total number of points. But the awarders did that in only one circumstance: when they were dividing points between two members of their in-group. When awarding points to two members of the out-group, they chose options that resulted in awarding far fewer points. And what is really extraordinary is that when the options required awarders to divide points between one in-group member and one out-group member, they tended to make choices that maximized the difference between the rewards they gave to the two group members, even if that action resulted in a lesser reward for their own group member! (They might choose, say, an option giving their in-group member 7 points and the out-group member 1 over one giving them 19 and 25, respectively.)

That’s right: as a trend, over dozens of individual reward decisions, subjects sought not to maximize their own group’s reward but the difference between the reward their group would receive and that which the other group would be awarded. Remember, this experiment has been replicated many times, with subject pools of all ages and many different nationalities, and all have reached the same conclusion: we are highly invested in feeling different from one another—and superior—no matter how flimsy the grounds for our sense of superiority, and no matter how self-sabotaging that may end up being.

You may find it discouraging to hear that, even when group divisions are anonymous and meaningless, and even at a cost to their own group, people unambiguously choose to discriminate in favor of their in-group, rather than acting for the greatest good. But this does not doom us to a world of never-ending social discrimination. Like unconscious stereotyping, unconscious discrimination can be overcome. In fact, though it doesn’t take much to establish grounds for group discrimination, it takes less than we think to erase those grounds. In the Robbers Cave experiment, Sherif noted that mere contact between the Eagles and the Rattlers did not reduce the negative attitude each group had for the other. But another tactic did: he set up a series of difficulties that the groups had to work together to overcome.

In one of those scenarios, Sherif arranged for the camp water supply to be cut off. He announced the problem, said its cause was a mystery, and asked twenty-five volunteers to help check the water system. In reality, the researchers had turned off a key valve and shoved two boulders over it and had also clogged a faucet. The kids worked together for about an hour, found the problems, and fixed them. In another scenario, Sherif arranged for a truck that was supposed to get food for the boys not to start. The staff member who drove the truck “struggled and perspired” and got the truck to make all sorts of noises, as more and more of the boys gathered around to watch. Finally the boys figured out that the driver might be able to start the truck if they could just get it moving. But the truck was on an uphill slope. So twenty of the boys, from both groups, tied a tug-of-war rope to the truck and pulled it until it started.

These and several other scenarios that gave the groups common goals and required cooperative intergroup actions, the researchers noted, sharply reduced the intergroup conflict. Sherif wrote, “The change in behavior patterns of interaction between the groups was striking.”16 The more that people in different traditionally defined in-groups, such as race, ethnicity, class, gender, or religion, find it advantageous to work together, the less they discriminate against one another.17

As one who lived near the World Trade Center in New York City, I experienced that personally on September 11, 2001, and in the months that followed. New York is called a melting pot, but the different elements tossed into the pot often don’t melt, or even blend very well with one another. The city is perhaps more like a stew made of diverse ingredients—bankers and bakers, young and old, black and white, rich and poor—that may not mingle and sometimes distinctly clash. As I stood beneath the north tower of the World Trade Center at 8:45 a.m. on that September 11, among the bustling crowd of immigrant street vendors, suited Wall Street types, and Orthodox Jews in their traditional garb, the city’s class and ethnic divisions were amply apparent. But at 8:46 a.m., as that first plane hit the north tower and chaos erupted, as the fiery debris fell toward us and a horrific sight of death unfolded above us, something subtle and magical also transpired. All those divisions seemed to evaporate, and people began to help other people, regardless of who they were. For a few months, at least, we were all first and foremost New Yorkers. With thousands dead, and tens of thousands of every profession, race, and economic status suddenly homeless, or jobless because their place of work had been shut down, and with millions of us in shock over what those in our midst had suffered, we New Yorkers of all kinds pulled together as I had never before experienced. As entire blocks continued to smolder, as the corrosive smell of the destruction filled the air we breathed, and as the photos of the missing looked down on us from buildings and lampposts, subway stations and cyclone fences, we showed a kindness to one another, in acts large and small, that was probably unprecedented. It was the best of our human social nature at work, a vivid exhibition of the positive healing power of our group instinct.

CHAPTER 9 Feelings The nature of emotions … why the prospect of falling hundreds of feet onto large boulders has the same effect as a flirtatious smile and a black silk nightgown

Each of us is a singular narrative, which is constructed, continually, unconsciously, by, through, and in us.

—OLIVER SACKS

IN THE EARLY 1950s, a twenty-five-year-old woman named Chris Costner Sizemore walked into a young psychiatrist’s office complaining of severe and blinding headaches.1 These, she said, were sometimes followed by blackouts. Sizemore appeared to be a normal young mother, in a bad marriage but with no major psychological problems. Her doctor would later describe her as demure and constrained, circumspect, and meticulously truthful. He and she discussed various emotional issues, but nothing that occurred over the next few months of treatment indicated that Sizemore had actually lost consciousness or that she suffered from any serious mental condition. Nor was her family aware of any unusual episodes. Then one day during therapy she mentioned that she had apparently gone on a recent trip but had no memory of it. Her doctor hypnotized her, and the amnesia cleared. Several days later, the doctor received an unsigned letter. From the postmark and the familiar penmanship, he knew it had come from Sizemore. In the letter, Sizemore said she was disturbed by the recovered memory—how could she be sure she remembered everything, and how could she know the memory loss wouldn’t happen again? There was also another sentence scrawled at the bottom of the letter, in a different handwriting that was difficult to decipher.

On her next visit Sizemore denied having sent the letter, though she recalled having begun one that, she said, she had never completed. Then she began to exhibit signs of stress and agitation. Suddenly she asked—with obvious embarrassment—if hearing an imaginary voice meant she was insane. As the therapist thought about it, Sizemore altered her posture, crossed her legs, and took on a “childishly daredevil air” he had never before seen in her. As he later described it, “A thousand minute alterations of manner, gesture, expression, posture, of nuances in reflex or instinctive reaction, of glance, of eyebrow tilting and eye movement, all argued that this could only be another woman.” Then that “other woman” began to speak of Chris Sizemore and her problems in the third person, using “she” or “her” in every reference.

When asked her identity, Sizemore now replied with a different name. It was she, this person who suddenly had a new name, she said, who had found the unfinished letter, added a sentence, and mailed it. In the coming months Sizemore’s doctor administered psychological personality tests while Sizemore took on each of her two identities. He submitted the tests to independent researchers, who were not told that they’d come from the same woman.2 The analysts concluded that the two personalities had markedly different self-images. The woman who’d originally entered therapy saw herself as passive, weak, and bad. She knew nothing of her other half, a woman who saw herself as active, strong, and good. Sizemore was eventually cured. It took eighteen years.3

Chris Sizemore’s was an extreme case, but we all have many identities. Not only are we different people at fifty than we are at thirty, we also change throughout the day, depending on circumstances and our social environment, as well as on our hormonal levels. We behave differently when we are in a good mood than when we are in a bad one. We behave differently having lunch with our boss than when having lunch with our subordinates. Studies show that people make different moral decisions after seeing a happy film,4 and that women, when ovulating, wear more revealing clothing, become more sexually competitive, and increase their preference for sexually competitive men.5 Our character is not indelibly stamped on us but is dynamic and changing. And as the studies of implicit prejudice revealed, we can even be two different people at the same time, an unconscious “I” who holds negative feelings about blacks—or the elderly, or fat people, or gays, or Muslims—and a conscious “I” who abhors prejudice.

Despite this, psychologists have traditionally assumed that the way a person feels and behaves reflects fixed traits that form the core of that individual’s personality. They’ve assumed that people know who they are and that they act consistently, as a result of conscious deliberation.6 So compelling was this model that in the 1960s one researcher suggested that, rather than performing costly and time-consuming experiments, psychologists might collect reliable information by simply asking people to predict how they would feel and behave in certain circumstances of interest.7 Why not? Much of clinical psychotherapy is based on what is essentially the same idea: that through intense, therapeutically guided reflection we can learn our true feelings, attitudes, and motives.

But remember the statistics on Browns marrying Browns, and investors undervaluing the IPOs of companies with tongue-twister names? None of the Browns had consciously set out to choose a spouse who shared their name; nor did professional investors think their impressions of a new company had been influenced by the ease of pronouncing that company’s name. Because of the role of subliminal processes, the source of our feelings is often a mystery to us, and so are the feelings themselves. We feel many things we are not aware of feeling. To ask us to talk about our feelings may be valuable, but some of our innermost feelings will not yield their secrets to even the most profound introspection. As a result, many of psychology’s traditional assumptions about our feelings simply do not hold.


“I’VE GONE THROUGH years of psychotherapy,” a well-known neuroscientist told me, “to try to find out why I behave in certain ways. I think about my feelings, my motivations. I talk to my therapist about them, I finally come up with a story that seems to make sense, and it satisfies me. I need a story I can believe, but is it true? Probably not. The real truth lies in structures like my thalamus and hypothalamus, and my amygdala, and I have no conscious access to those no matter how much I introspect.” If we are to have a valid understanding of who we are and, therefore, of how we would react in various situations, we have to understand the reasons for our decisions and behavior, and—even more fundamentally—we have to understand our feelings and their origins. Where do they come from?

Let’s start with something simple: the feeling of pain. The sensory and emotional feeling of pain arises from distinct neural signals and has a well-defined and obvious role in our lives. Pain encourages you to put down that red-hot frying pan, punishes you for pounding your thumb with that hammer, and reminds you that when sampling six brands of single-malt Scotch, you should not make them doubles. A friend may have to draw you out before you understand your feelings toward that financial analyst who took you to the wine bar last night, but a pounding headache is a feeling you’d think you could get in touch with without anyone’s help. And yet it is not that simple, as evidenced by the famous placebo effect.

When we think of the placebo effect, we may imagine an inert sugar pill that relieves a mild headache as well as a Tylenol, as long as we believe we’ve taken the real thing. But the effect can be dramatically more powerful than that. For example, angina pectoris, a chronic malady caused by inadequate blood supply in the muscle of the heart wall, often causes very severe pain. If you have angina and attempt to exercise—which can mean simply walking to answer the door—nerves in your heart muscle act like a “check engine” sensor: they carry signals via your spinal cord to your brain to alert you that improper demands are being placed on your circulatory system. The result can be excruciating pain, a warning light that is hard to ignore. In the 1950s, it was common practice for surgeons to tie off certain arteries in the chest cavity as a treatment for patients with severe angina pain. They believed new channels would sprout in nearby heart muscle, improving circulation. The surgery was performed on a large number of patients with apparent success. Yet something was amiss: pathologists who later examined these patients’ cadavers never saw any of the expected new blood vessels.

Apparently the surgery was a success at relieving the patients’ symptoms but a failure at addressing their cause. In 1958, curious cardiac surgeons conducted an experiment that, for ethical reasons, would not be permitted today: they carried out sham operations. For five patients, surgeons cut through the skin to expose the arteries in question but then stitched each patient back together without actually tying off the arteries. They also performed the true operation on another group of thirteen patients. The surgeons told neither the patients nor their cardiologists which subjects had had the real operation. Among the patients who did receive the real operation, 76 percent saw an improvement in their angina pain. But so did all five in the sham group. Both groups, believing that a relevant surgical procedure had been performed, reported far milder pain than they had had before surgery. Since the surgery produced no physical changes in either group (in terms of the growth of new blood vessels to improve circulation to the heart), both groups would have continued to experience the same level of sensory input to the pain centers of their brains. Yet both groups had a greatly reduced conscious experience of pain. It seems our knowledge of our feelings—even physical ones—is so tenuous that we can’t even reliably know when we are experiencing excruciating pain.8

The view of emotion that is dominant today can be traced not to Freud—who believed that unconscious content was blocked from awareness via the mechanism of repression—but to William James, whose name has already come up in several other contexts. James was an enigmatic character. Born in New York City in 1842 to an extremely wealthy man who used some of his vast fortune to finance extensive travels for himself and his family, James had attended at least fifteen different schools in Europe and America by the time he was eighteen—in New York; Newport, Rhode Island; London; Paris; Boulogne-sur-Mer, in northern France; Geneva; and Bonn. His interests flitted similarly, from subject to subject, landing for a while on art, chemistry, the military, anatomy, and medicine. The flitting consumed fifteen years. At one point during those years he accepted an invitation from the famous Harvard biologist Louis Agassiz to go on an expedition to the Amazon River basin in Brazil, during which James was seasick most of the time and, in addition, contracted smallpox. In the end, medicine was the only course of study James completed, receiving an MD from Harvard in 1869, at the age of twenty-seven. But he never practiced or taught medicine.

It was an 1867 visit to mineral springs in Germany—where he traveled to recuperate from the health problems resulting from the Amazon trip—that led James to psychology. Like Münsterberg sixteen years later, James attended some of Wilhelm Wundt’s lectures and got hooked on the subject, in particular the challenge of turning psychology into a science. He began to read works of German psychology and philosophy, but he returned to Harvard to complete his medical degree. After his graduation from Harvard, he became deeply depressed. His diary from that time reveals little but misery and self-loathing. His suffering was so severe that he had himself committed to an asylum in Somerville, Massachusetts, for treatment; however, he credited his recovery not to the treatment he received but to his discovery of an essay on free will by the French philosopher Charles Renouvier. After reading it, he resolved to use his own free will to break his depression. In truth, it doesn’t seem to have been that simple, for he remained incapacitated for another eighteen months and suffered from chronic depression for the rest of his life.

William James self-portrait. By permission of the Houghton Library, Harvard University.

Still, by 1872 James was well enough to accept a teaching post in physiology at Harvard, and by 1875 he was teaching The Relations Between Physiology and Psychology, making Harvard the first university in the United States to offer instruction in experimental psychology. It was another decade before James put forth to the public his theory of emotions, providing the outline of that theory in an article he published in 1884 called “What Is an Emotion?” The article appeared in a philosophy journal called Mind, rather than in a psychology journal, because the first English-language journal of research psychology wouldn’t be established until 1887.

In his article, James addressed emotions such as “surprise, curiosity, rapture, fear, anger, lust, greed and the like,” which are accompanied by bodily changes such as quickened breath or pulse or movements of the body or the face.9 It may seem obvious that these bodily changes are caused by the emotion in question, but James argued that such an interpretation is precisely backward. “My thesis on the contrary,” James wrote, “is that the bodily changes follow directly the PERCEPTION of [an] exciting fact, and that our feeling of the same changes as they occur IS the emotion…. Without the bodily state following on the perception, the latter would be purely cognitive in form, pale, colorless, destitute of emotional warmth.” In other words, we don’t tremble because we’re angry or cry because we feel sad; rather, we are aware of feeling angry because we tremble, and we feel sad because we cry. James was proposing a physiological basis for emotion, an idea that has gained currency today—thanks in part to the brain-imaging technology that allows us to watch the physical processes involved in emotion as they are actually occurring in the brain.

Emotions, in today’s neo-Jamesian view, are like perceptions and memories—they are reconstructed from the data at hand. Much of that data comes from your unconscious mind, as it processes environmental stimuli picked up by your senses and creates a physiological response. The brain also employs other data, such as your preexisting beliefs and expectations, and information about the current circumstances. All of that information is processed, and a conscious feeling of emotion is produced. That mechanism can explain the angina studies—and, more generally, the effect of placebos on pain. If the subjective experience of pain is constructed from both our physiological state and contextual data, it’s no surprise that our minds can interpret the same physiological data—the nerve impulses signifying pain—in different ways. As a result, when nerve cells send a signal to the pain centers of your brain, your experience of pain can vary even if those signals don’t.10

James elaborated on his theory of emotion, among many other things, in his book The Principles of Psychology, which I mentioned in Chapter 4 regarding Angelo Mosso’s experiments on the brains of patients who had gaps in their skulls following surgery. James had been given a contract to write the book in 1878. He began it, with a flurry of work, on his honeymoon. But once the honeymoon was over, it took him twelve years to finish it. It became a classic, so revolutionary and influential that, in a 1991 survey of historians of psychology, James ranked second among psychology’s most important figures, behind only his early inspiration, Wundt.11

Ironically, neither Wundt nor James was pleased with the book. Wundt was dissatisfied because James's revolution had by then strayed from Wundt's brand of experimental psychology, in which everything must be measured. How, for instance, do you quantify and measure emotions? By 1890, James had decided that since one couldn't, psychology must move beyond pure experiment, and he derided Wundt's work as "brass instrument psychology."12 Wundt, on the other hand, wrote of James's book, "It is literature, it is beautiful, but it is not psychology."13

James had much more stinging criticism for himself. He wrote, “No one could be more disgusted than I at the sight of the book. No subject is worth being treated of in 1000 pages. Had I ten years more, I could rewrite it in 500; but as it stands it is this or nothing—a loathsome, distended, tumefied, bloated, dropsical mass, testifying to nothing but two facts: 1st, that there is no such thing as a science of psychology, and 2nd, that W. J. is an incapable.”14 After publication of the book, James decided to abandon psychology in favor of philosophy, leading him to lure Münsterberg from Germany to take over the lab. James was then forty-eight.


JAMES'S THEORY OF emotion dominated psychology for a while, but then gave way to other approaches. In the 1960s, as psychology took its cognitive turn, his ideas—now called the James-Lange theory—experienced a new popularity, for the notion that the brain processes different sorts of data to create emotions fit nicely with James's framework. But a nice theory does not necessarily equate to a correct theory, so scientists sought additional evidence. The most famous of the early studies was an experiment performed by Stanley Schachter, the famed Dr. Zilstein of the University of Minnesota experiment, who by then was at Columbia. He partnered in the research with Jerome Singer, who would later be called the "best second author in psychology" because he held that position on a number of famous research studies.15 If emotions are constructed from limited data rather than perceived directly, then, as with vision and memory, there must be circumstances in which the way the mind fills in the gaps in the data results in your "getting it wrong." The result would be "emotional illusions" analogous to optical and memory illusions.

For example, suppose you experience the physiological symptoms of emotional arousal for no apparent reason. The logical response would be to think, Wow, my body is experiencing unexplained physiological changes for no apparent reason! What’s going on? But suppose further that when you experience those sensations they occur in a context that encourages you to interpret your reaction as due to some emotion—say, fear, anger, happiness, or sexual attraction—even though there is no actual cause for that emotion. In that sense your experience would be an emotional illusion. To demonstrate this phenomenon, Schachter and Singer created two different artificial emotional contexts—one “happy,” one “angry”—and studied physiologically aroused volunteers who were placed in those situations. The researchers’ goal was to see whether those scenarios could be used to “trick” the volunteers into having an emotion that the psychologists themselves had chosen.

Here is how it worked. Schachter and Singer told all their experimental subjects that the purpose of the experiment they were participating in was to determine how the injection of a vitamin called “Suproxin” would affect their visual skills. Actually, the drug was adrenaline, which causes increased heart rate and blood pressure, a feeling of flushing, and accelerated breathing—all symptoms of emotional arousal. The subjects were divided into three groups. One group (the “informed”) was accurately told about the effects of the injection, explained as the “side effects” of the Suproxin. Another group (the “ignorant”) was told nothing. Its members would feel the same physiological changes but have no explanation for them. The third group, which acted as a control group, was injected with an inert saline solution. This group would feel no physiological effects and was not told that there would be any.

After administering the injection, the researcher excused himself and left each subject alone for twenty minutes with another supposed subject, who was actually a confederate of the scientists. In what was called the “happiness” scenario, this person acted strangely euphoric about the privilege of participating in the experiment, providing the artificial social context. Schachter and Singer also designed an “anger” scenario, in which the person the subjects were left alone with complained incessantly about the experiment and how it was being conducted. The experimenters hypothesized that, depending on which social context they’d been placed in, the “ignorant” subjects would interpret their physiological state as arising from either happiness or anger, while the “informed” subjects would not have any subjective experience of emotion because, even though they had been exposed to the same social context, they already had a good explanation for their physiological changes and would therefore have no need to attribute them to any kind of emotion. Schachter and Singer also expected that those in the control group, who did not experience any physiological arousal, would not experience any emotion, either.

The subjects’ reactions were judged in two ways. First, they were surreptitiously watched from behind a two-way mirror by impartial observers, who coded their behavior according to a prearranged rubric. Second, the subjects were later given a written questionnaire, in which they reported their level of happiness on a scale from 0 to 4. By both measures, all three groups reacted exactly as Schachter and Singer had expected.

Both the informed and the control subjects observed the apparent emotions—euphoria or anger—of the confederate who had been planted in their midst but felt no such emotion in themselves. The ignorant subjects, however, observed the fellow and, depending on whether he seemed to be expressing euphoria or anger about the experiment, drew the conclusion that the physical sensations they themselves were experiencing constituted either happiness or anger. In other words, they fell victim to an “emotional illusion,” mistakenly believing that they were reacting to the situation with the same “emotions” the fake subject was experiencing.

The Schachter and Singer paradigm has been repeated over the years in many other forms, employing means gentler than adrenaline to stimulate the physiological reaction and examining a number of different emotional contexts, one of which—the feeling of sexual arousal—has been particularly popular. Like pain, sex is an area in which we assume we know what we are feeling, and why. But sexual feelings turn out not to be so straightforward after all. In one study, researchers recruited male college students to participate in two back-to-back experiments, one ostensibly having to do with the effects of exercise, and a second in which they would rate a series of “short clips from a film.”16 In reality, both phases were part of the same experiment. (Psychologists never tell their subjects the truth about the point of their experiments, because if they did so the experiments would be compromised.) In the first phase, exercise played the role of the adrenaline injection to provide an unrecognized source of physiological arousal. It would be reasonable to wonder what kind of burnouts wouldn’t realize that their quickened pulse and breathing were due to their just having run a mile on the treadmill, but it turns out that there is a window of several minutes after exercise during which you feel that your body has calmed but it is actually still in an aroused state. It was during that window that the experimenters showed the “uninformed” group the film clips. The “informed” group, on the other hand, saw the clips immediately after exercising, and thus knew the source of their heightened physiological state. As in the Schachter-Singer experiment, there was also a control group, which did no exercise and, hence, experienced no arousal.

Now for the sex. As you may have guessed, in the second phase the "short clips from a film" weren't taken from a Disney movie. The film was an erotic French movie, The Girl on a Motorcycle, renamed, in America, Naked Under Leather. Both titles are descriptive. The original title relates to the plot: the film is a road movie about a newlywed who deserts her husband and takes off on her motorcycle to visit her lover in Heidelberg.17 That may sound like a compelling plot line to the French, but the American distributor apparently had a different idea about how to telegraph to an audience the nature of the film's appeal. And it is indeed the "naked under leather" aspect of the movie that inspired the researchers' choice of clips. On that score, however, the film did not seem to succeed. When asked to rate their degree of sexual arousal, the students in the control group gave the film a 31 on a scale of 100. The informed group agreed; its members rated their sexual stimulation at just 28. But the subjects in the ignorant group—who were aroused by their recent exercise but didn't know it—apparently mistook their arousal for something sexual. They gave the film a 52.

An analogous result was obtained by another group of researchers, who arranged for an attractive female interviewer to ask male passersby to fill out a questionnaire for a school project. Some of the subjects were intercepted on a solid wood bridge only ten feet above a small rivulet. Others were queried on a wobbly five-foot-wide, 450-foot-long bridge of wooden boards with a 230-foot drop to rocks below. After the interaction, the interviewer gave out her contact information in case the subject "had any questions." The subjects interviewed on the scary bridge presumably felt a quickened pulse and other effects of adrenaline. They must have been aware, to some extent, of their bodily reaction to the dangerous bridge. But would they mistake their reaction for sexual chemistry? To the men interviewed on the low, safe bridge, the woman's appeal was apparently limited: only 2 of the 16 later called her. But of those on the high-anxiety bridge, 9 of the 18 phoned her.18 To a significant number of the male subjects, the prospect of falling hundreds of feet onto an assemblage of large boulders apparently had the same effect as a flirtatious smile and a black silk nightgown.

These experiments illustrate how our subliminal brain combines information about our physical state with other data arising from social and emotional contexts to determine what we are feeling. I think there's a lesson here for everyday life. There is, of course, a direct analogue: walking up a few flights of stairs before evaluating a new business proposal may cause you to say "Wow" when you would normally have said "Hmm." But think, too, about stress. We all know that mental stress leads to unwanted physical effects, but what is less discussed is the other half of the feedback loop: physical tension causing or perpetuating mental stress. Say you have a conflict with a friend or colleague that results in an agitated physical state. Your shoulders and your neck feel tight, you have a headache, your pulse is elevated. If that state persists, and you find yourself having a conversation with someone who had nothing to do with the conflict that precipitated those sensations, it could cause you to misjudge your feelings about that person. For example, a book editor friend of mine told me of an instance in which she had an unexpectedly acrimonious exchange with an agent and concluded that the agent was a particularly belligerent sort, someone she'd try to avoid working with in the future. But in the course of our discussion it became clear that the anger she felt toward the agent had not arisen from the issue at hand but had been baggage she had unconsciously carried over from an unrelated but upsetting incident that had immediately preceded her conflict with the agent.

For ages, yoga teachers have been saying, "Calm your body, calm your mind." Social neuroscience now provides evidence to support that prescription. In fact, some studies go further and suggest that actively taking on the physical state of a happy person by, say, forcing a smile can cause you to actually feel happier.19 My young son Nicolai seemed to understand this intuitively: after breaking his hand in a freak accident while playing basketball, he suddenly stopped crying and started to laugh—and then explained that when he is in pain, laughing seems to make it feel better. The old "Fake it till you make it" idea, which Nicolai had rediscovered, is now also the subject of serious scientific research.


THE EXAMPLES I’VE talked about so far imply that we often don’t understand our feelings. Despite that, we usually think that we do. Moreover, when asked to explain why we feel a certain way, most of us, after giving it some thought, have no trouble supplying reasons. Where do we find those reasons, for feelings that may not even be what we think they are? We make them up.

In one interesting demonstration of that phenomenon, a researcher held out snapshots of two women’s faces, each about the size of a playing card, one in each hand. He asked his subject to choose the more attractive one.20 He then flipped both photos facedown, and slid the selected picture over to the participant. He asked the participant to pick up the card and explain the choice he or she had made. Then the researcher went on to another pair of photos, for about a dozen pairs in all. The catch is that in a few cases the experimenter made a switch: through a sleight of hand, he actually slid to his subjects the photograph of the woman they had found less attractive. Only about one-quarter of the time did the subjects see through the ruse. But what is really interesting is what happened the 75 percent of the time they did not see through it: when asked why they preferred the face they really hadn’t preferred, they said things like “She’s radiant. I would rather have approached her in a bar than the other one” or “I like her earrings” or “She looks like an aunt of mine” or “I think she seems nicer than the other one.” Time after time, they confidently described their reasons for preferring the face that, in reality, they had not preferred.

The research was no fluke—the scientists pulled a similar trick in a supermarket, with regard to shoppers’ preferences in taste tests of jam and tea.21 In the jam test, shoppers were asked which of two jams they preferred and were then supposedly given a second spoonful of the one they said they liked better so that they could analyze the reasons for their preference. But the jam jars had a hidden internal divider and a lid on both ends, allowing the deft researchers to dip the spoon into the nonpreferred jam for the second taste. Again, only about a third of the participants noticed the switch, while two-thirds had no difficulty explaining the reasons for their “preference.” A similar ruse, with a similar outcome, occurred in an experiment involving tea.

Sounds like a market researcher's nightmare: ask people their opinion about a product or its packaging to pick up insights about its appeal, and you get wonderful explanations that are sincere, detailed, and emphatic but happen to bear little relation to the truth. That's also a problem for political pollsters, who routinely ask people why they voted the way they did or why they will vote the way they plan to. It's one thing when people claim to have no opinion, but quite another when you can't even trust them to know what they think. Research suggests that, often, you can't.22

The best hints as to what is going on come from research on people with brain abnormalities—for example, a series of famous studies on split-brain patients.23 Recall that information presented to one side of such a patient’s brain is not available to the other hemisphere. When the patient sees something on the left side of his visual field, only the right hemisphere of his brain is aware of it, and vice versa. Similarly, it is the right hemisphere alone that controls the movement of the left hand, and the left hemisphere alone that controls the right hand. One exception to this symmetry is that (in most people) the speech centers are located in the left hemisphere, and so if the patient speaks, it is usually the left hemisphere talking.

Taking advantage of this lack of communication between brain hemispheres, researchers instructed split-brain patients, via their right hemisphere, to perform a task and then asked their left hemisphere to explain why they’d done it. For example, the researchers instructed a patient, via his right hemisphere, to wave. Then they asked the patient why he’d waved. The left hemisphere had observed the waving but was unaware of the instruction to wave. Nevertheless, the left hemisphere did not allow the patient to admit ignorance. Instead, the patient said he’d waved because he’d thought he’d seen someone he knew. Similarly, when researchers instructed the patient, through the right hemisphere, to laugh and then asked him why he was laughing, the patient said he’d laughed because the researchers were funny. Again and again, the left hemisphere responded as if it knew the answer. In these and similar studies, the left brain generated many false reports, but the right brain did not, leading the researchers to speculate that the left hemisphere of the brain has a role that goes beyond simply registering and identifying our emotional feelings, to trying to understand them. It’s as though the left hemisphere has mounted a search for a sense of order and reason in the world in general.

Oliver Sacks wrote about a patient with Korsakoff’s syndrome, a type of amnesia in which victims can lose the ability to form new memories.24 Such patients may forget what is said within seconds, or what they see within minutes. Still, they often delude themselves into thinking that they know what is going on. When Sacks walked in to examine the patient, a Mr. Thompson, Thompson would not remember him from his previous encounters. But Thompson wouldn’t realize he didn’t know. He would always latch onto some available hint and convince himself that he did remember Sacks. On one occasion, since Sacks was wearing a white coat and Thompson had been a grocer, Thompson remembered him as the butcher from down the street. Moments later he forgot that “realization” and altered his story, remembering Sacks as a particular customer. Thompson’s understanding of his world, his situation, his self, was in a constant state of change, but he believed in each of the rapidly changing explanations he evolved in order to make sense of what he was seeing. As Sacks put it, Thompson “must seek meaning, make meaning, in a desperate way, continually inventing, throwing bridges of meaning over abysses of meaninglessness.”

The term “confabulation” often signifies the replacement of a gap in one’s memory by a falsification that one believes to be true. But we also confabulate to fill in gaps in our knowledge about our feelings. We all have those tendencies. We ask ourselves or our friends questions like “Why do you drive that car?” or “Why do you like that guy?” or “Why did you laugh at that joke?” Research suggests that we think we know the answers to such questions, but really we often don’t. When asked to explain ourselves, we engage in a search for truth that may feel like a kind of introspection. But though we think we know what we are feeling, we often know neither the content nor the unconscious origins of that content. And so we come up with plausible explanations that are untrue or only partly accurate, and we believe them.25 Scientists who study such errors have noticed that they are not haphazard.26 They are regular and systematic. And they have their basis in a repository of social, emotional, and cultural information we all share.


IMAGINE YOU’RE BEING driven home from a cocktail party that was in the penthouse of a posh hotel. You remark that you had a lovely time, and your designated driver asks you what you liked about it. “The people,” you say. But did your joy really stem from the fascinating repartee with that woman who wrote the best seller about the virtues of a vegan diet? Or was it something far subtler, like the quality of the harp music? Or the scent of roses that filled the room? Or the expensive champagne you quaffed all night? If your response was not the result of true and accurate introspection, on what basis did you make it?

When you come up with an explanation for your feelings and behavior, your brain performs an action that would probably surprise you: it searches your mental database of cultural norms and picks something plausible. For example, in this case it might have looked up the entry “Why People Enjoy Parties” and chosen “the people” as the most likely hypothesis. That might sound like the lazy way, but studies suggest we take it: when asked how we felt, or will feel, we tend to reply with descriptions or predictions that conform to a set of standard reasons, expectations, and cultural and societal explanations for a given feeling.

If the picture I just painted is correct, there is an obvious consequence that can be tested by experiment. Accurate introspection makes use of our private knowledge of ourselves. Identifying a generic, social-and-cultural-norms explanation as the source of our feelings doesn’t. As a result, if we are truly in touch with our feelings, we should be able to make predictions about ourselves that are more accurate than predictions that others make about us; but if we merely rely on social norms to explain our feelings, outside observers should be just as accurate in predicting our feelings as we are, and ought to make precisely the same mistakes.

One context scientists used to examine this question is familiar to anyone involved in hiring.27 Hiring is difficult because it is an important decision, and it is hard to know someone from the limited exposure afforded by an interview and a résumé. If you’ve ever had to hire people, you might have asked yourself why you thought a particular individual was the right pick. No doubt you could always find justification, but in hindsight, are you sure you chose that person for the reasons you thought you did? Perhaps your reasoning went the other way—you got a feeling about someone, formed a preference, and then, retroactively, your unconscious employed social norms to explain your feelings about that person.

One doctor friend told me that he was certain he had gotten into the top-rated medical school he’d attended for only one reason: he had clicked with one of the professors who’d interviewed him; the man’s parents, like his, had immigrated from a certain town in Greece. After matriculating at the school he got to know that professor, who maintained that my friend’s scores, grades, and character—the criteria demanded by social norms—were the reasons their interview had gone so well. But my friend’s scores and grades were below that school’s average, and he still believes it was their shared family origin that really influenced the professor.

To explore why some people get the job and others don't, and whether those doing the hiring are aware of what drove their choices, researchers recruited 128 volunteers. The subjects—all of them female—were asked to study and assess an in-depth portfolio describing a woman applying for a job as a counselor in a crisis intervention center. The documents included a letter of recommendation and a detailed report of an interview the applicant had had with the center's director. After studying the portfolio, subjects were asked several questions regarding the applicant's qualifications, including How intelligent do you think she is? How flexible? How sympathetic would she be toward clients' problems? How much do you like her?

The key to the study is that the information given to different subjects differed in a number of details. For example, some subjects read portfolios showing that the applicant had finished second in her class in high school and was now an honor student in college, while others read that she had not yet decided whether to go to college; some saw a mention of the fact that the applicant was quite attractive, others learned nothing about her appearance; some read in the center director's report that the applicant had spilled a cup of coffee on the director's desk, while others saw no mention of such an incident; and some portfolios indicated that the applicant had been in a serious automobile accident, while others didn't. Some subjects were told they'd later meet the applicant, while others were not. These variable elements were shuffled in all possible combinations to create dozens of distinct scenarios. By studying the correlation between the facts the subjects were exposed to and the judgments they made, the researchers could compute mathematically the influence of each piece of information on the subjects' assessments. Their goal was to compare the actual influence of each factor to the subjects' perception of each factor's influence, and also to the predictions of outside observers who didn't know the subjects.
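For readers who'd like to see the arithmetic behind that kind of analysis, here is a minimal sketch in Python. It is not the researchers' actual procedure or data: the factor names, the 0-to-10 liking scale, and every number below are invented for illustration, and the study's real analysis may well have differed. The point is simply that regressing the ratings on indicator variables for each portfolio detail yields an estimate of each detail's influence on the judgments.

```python
# A hypothetical illustration of estimating each portfolio factor's influence
# on "liking" ratings via ordinary least squares. All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 128  # number of hypothetical subjects

# Binary indicators: did a subject's portfolio contain each detail?
factors = ["academic_honors", "attractive", "coffee_spill", "accident", "expect_to_meet"]
X = rng.integers(0, 2, size=(n, len(factors)))

# Invented "true" influences on a 0-10 liking scale: here the coffee spill and
# the expectation of meeting matter a lot; academic honors do not.
true_weights = np.array([0.0, 0.5, 1.5, 0.2, 1.0])
liking = 5 + X @ true_weights + rng.normal(0, 1, n)  # ratings with noise

# Recover each factor's influence by regressing ratings on the indicators.
design = np.column_stack([np.ones(n), X])  # add an intercept column
coef, *_ = np.linalg.lstsq(design, liking, rcond=None)

for name, w in zip(factors, coef[1:]):
    print(f"{name:>15}: estimated influence {w:+.2f}")
```

In the real study, it was influence estimates of this general kind that were then compared with what the subjects, and the outside observers, believed had swayed the judgments.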

In order to understand what the subjects thought had influenced them, after they assessed the applicant they were asked about each factor: Did you judge the applicant's intelligence by her academic credentials? Were you swayed in your assessment of her likability by her physical attractiveness? Did the fact that she spilled a cup of coffee over the interviewer's desk affect your assessment of how sympathetic she'd be? And so on. Also, in order to find out what an outside observer would guess the influence of each factor to be, another group of volunteers ("outsiders") was recruited; its members were not shown the portfolios but were simply asked to rate how much they thought each factor would influence a person's judgment.

The facts that were revealed about the applicant had been cleverly chosen. Some, such as the applicant's high grades, were factors that social norms dictate ought to exert a positive influence on those assessing the job application. The researchers expected both the subjects and the outsiders to name these factors as an influence. Others, such as the coffee-spilling incident and the anticipation of later meeting the applicant, were factors that social norms say nothing about in this regard. The researchers therefore expected the outsiders not to recognize their influence. However, the researchers had chosen those factors because studies show that, contrary to the expectations dictated by the norms, they do have an effect on our judgment of people: an isolated pratfall such as the coffee-spilling incident tends to increase the likability of a generally competent-seeming person, and the anticipation of meeting an individual tends to improve your assessment of that individual's personality.28 The crucial question was whether the subjects, upon self-reflection, would do better than the outsiders and recognize that they'd been swayed by those surprisingly influential factors.

When the researchers examined the subjects' and the outsiders' answers, they found that the two groups showed impressive agreement, and that both were way off the mark. Both groups appeared to draw their conclusions about which factors were influential from the social-norms explanations, while ignoring the actual reasons. For example, both the subjects and the outsiders said the coffee-spilling incident would not affect their liking of the applicant, yet it had the greatest effect of all the factors. Both groups expected that the academic factor would have a significant effect on their liking of the applicant, but its effect was nil. And both groups reported that the expectation of meeting the applicant would have no effect, but it did. In case after case, both groups were wrong about which factors would not affect them and which factors would. As psychological theory had predicted, the subjects had shown no greater insight into themselves than the outsiders had.


EVOLUTION DESIGNED THE human brain not to accurately understand itself but to help us survive. We observe ourselves and the world and make enough sense of things to get along. Some of us, interested in knowing ourselves more deeply—perhaps to make better life decisions, perhaps to live a richer life, perhaps out of curiosity—seek to get past our intuitive ideas of us. We can. We can use our conscious minds to study, to identify, and to pierce our cognitive illusions. By broadening our perspective to take into account how our minds operate, we can achieve a more enlightened view of who we are. But even as we grow to better understand ourselves, we should maintain our appreciation of the fact that, if our mind’s natural view of the world is skewed, it is skewed for a reason.

I walked into an antiques store while on a trip to San Francisco one day, meaning to buy a beautiful vase in the window that was reduced from $100 to just $50. I walked out carrying a $2,500 Persian rug. To be precise, I’m not sure it was a $2,500 Persian rug; all I know is that $2,500 is what I paid for it. I wasn’t in the market for a rug, I wasn’t planning to spend $2,500 on a San Francisco souvenir, and I wasn’t intending to lug home anything bigger than a bread box. I don’t know why I did it, and none of the introspection I performed in the ensuing days turned up anything. But then again, there are no social norms regarding the purchase of Persian rugs on vacation whims. What I do know is that I like the way the rug looks in my dining room. I like it because it makes the room feel cozy, and its colors go well with the table and the walls. Or does it actually make the room look like a breakfast nook in a cheap hotel? Maybe the true reason I like it is that I’m not comfortable thinking that I spent $2,500 on an ugly rug to lay over my beautiful hardwood floor. That realization doesn’t bother me; it gives me a greater appreciation of my unseen partner, my unconscious, always providing the support I need as I walk and stumble my way through life.

CHAPTER 10 Self How our ego defends its honor … why schedules are overly optimistic and failed CEOs feel they deserve golden parachutes

The secret of rulership is to combine a belief in one’s own infallibility with the power to learn from past mistakes.

—GEORGE ORWELL

IN 2005 HURRICANE Katrina devastated the Gulf Coast of Louisiana and Mississippi. More than a thousand people lost their lives, and hundreds of thousands of others were displaced. New Orleans was flooded, with some parts of the city covered by fifteen feet of water. The U.S. government’s response was, by all accounts, badly botched. Well, by almost all accounts. When Michael Brown, the head of the Federal Emergency Management Agency, was accused of mismanagement and a lack of leadership, and Congress convened a panel to investigate, did the inexperienced Brown admit to any shortcomings? No, he said the poor response was “clearly the fault of a lack of coordination and planning by Louisiana governor Kathleen Blanco and New Orleans mayor Ray Nagin.” In fact, in his own mind, Brown seemed to be some sort of tragic, Cassandra-like figure: “I predicted privately for several years,” he said, “that we were going to reach this point [of crisis] because of the lack of resources and the lack of attention being paid.”1

Perhaps in his heart Brown accepted more responsibility. Perhaps these public statements were simply an awkward attempt to plea-bargain the public accusations against him down from negligence to impotence. Disingenuousness is a little harder to argue in the case of O. J. Simpson, the former sports hero accused of murdering two people but acquitted in criminal court. Afterward, he couldn’t seem to stay out of trouble. Finally, in 2007 he and a couple of buddies burst into a Las Vegas hotel room and seized sports memorabilia from dealers at gunpoint. At his sentencing, O.J. had a chance to apologize and ask the judge for leniency. He would certainly have had strong motive for a bit of either honest or phony self-criticism. But did he do the self-serving thing and, in an attempt to cut a few years off his sentence, express regret for behaving as a criminal? No, he stood his ground. His answer was sincere. He was sorry for his actions, he said, but he believed he had done nothing wrong. Even with years of prison at stake, Simpson felt the need to justify himself.

The stronger the threat to feeling good about yourself, it seems, the greater the tendency to view reality through a distorting lens. In his classic book How to Win Friends and Influence People, Dale Carnegie described the self-images of famous mobsters of the 1930s.2 Dutch Schultz, who terrorized New York, wasn’t shy about murder—and he certainly wouldn’t have been diminished in the eyes of his colleagues in crime had he simply described himself as a man who had built a successful empire by killing people. Instead, he told a newspaper interviewer that he saw himself as a “public benefactor.” Similarly, Al Capone, a purveyor of bootleg alcohol who was responsible for hundreds of killings, said, “I’ve spent the best years of my life giving people the lighter pleasures, helping them have a good time, and all I get is abuse, the existence of a hunted man.” And when a notorious murderer named “Two Gun” Crowley was sentenced to the electric chair for killing a policeman who had asked for his driver’s license, he didn’t express sadness over taking another man’s life. Rather, he complained, “This is what I get for defending myself.”

Do we really believe the enhanced versions of ourselves that we offer up to our audiences? Do we manage to convince ourselves that our corporate strategy was brilliant even though revenue has plummeted, that we deserve our $50 million exit package when the company we led lost twenty times that amount in the three years we ran it, that we argued the case brilliantly, though our client got the chair, or that we are only social smokers, though we go through the same pack a day whether we see another human being or not? How accurately do we perceive ourselves?

Consider a survey of nearly one million high school seniors.3 When asked to judge their ability to get along with others, 100 percent rated themselves as at least average, 60 percent rated themselves in the top 10 percent, and 25 percent considered themselves in the top 1 percent. And when asked about their leadership skills, only 2 percent assessed themselves as being below average. Teachers aren’t any more realistic: 94 percent of college professors say they do above-average work.4

Psychologists call this tendency for inflated self-assessment the “above-average effect,” and they’ve documented it in contexts ranging from driving ability to managerial skills.5 In engineering, when professionals were asked to rate their performance, between 30 percent and 40 percent put themselves in the top 5 percent.6 In the military, officers’ assessments of their leadership qualities (charisma, intellect, and so on) are far rosier than assessments of them made by their subordinates and superiors.7 In medicine, doctors’ assessments of their interpersonal skill are far higher than the ratings they received from their patients and supervisors, and their estimates of their own knowledge are far higher than objective tests bear out.8 In one study, in fact, physicians who diagnosed their patients as having pneumonia reported an average of 88 percent confidence in that diagnosis but proved correct only 20 percent of the time.9

This kind of inflation is equally the rule in the corporate world. Most business executives think their company is more likely to succeed than the typical company in their business, because it’s theirs,10 and CEOs display overconfidence when entering into new markets or embarking on risky projects.11 One result of this is that when companies acquire other firms, they typically pay 41 percent more for the target firm’s stock than its current price, feeling they can run it more profitably, while the combined value of the merging firms usually falls, indicating that impartial observers feel otherwise.12

Stock pickers, too, are overly optimistic about their ability to choose winners. Overconfidence can even lead otherwise savvy and rational investors to think they can predict when a stock market move will occur despite the fact that, on an intellectual level, they believe otherwise. In fact, in a survey conducted by the economist Robert Shiller after the crash on Black Monday in October 1987, about one-third of investors claimed that they had a "pretty good idea when a rebound" would occur, though few, when asked, could offer an explicit theory to back up their confidence in predicting the market's future.13

Ironically, people tend to recognize that inflated self-assessment and overconfidence can be a problem—but only in others.14 That’s right, we even overestimate our ability to resist overestimating our abilities. What’s going on?


IN 1959, THE social psychologist Milton Rokeach gathered three psychiatric patients to live together in Ypsilanti State Hospital in Michigan.15 Each of the patients believed he was Jesus Christ. Since at least two of them had to be wrong, Rokeach wondered how they would process this idea. There were precedents. In a famous seventeenth-century case a fellow named Simon Morin was sent to a madhouse for making the same claim. There he met another Jesus and “was so struck with the folly of his companion that he acknowledged his own.” Unfortunately, he subsequently reverted to his original belief and, like Jesus, ended up being killed—in this case, burned at the stake for blasphemy. No one was burned in Ypsilanti. One patient, like Morin, relinquished his belief; the second saw the others as mentally ill, but not himself; and the third managed to dodge the issue completely. So in this case, two out of the three patients managed to hang on to a self-image at odds with reality. The disconnect may be less extreme, but the same could be said to be true even of many of us who don’t believe we can walk on water. If we probed—or, in many cases, simply bothered to pay attention—most of us would notice that our self-image and the more objective image that others have of us are not quite in sync.

By the time we were two, most of us had a sense of ourselves as social agents.16 Around the time we learned that diapers are not a desirable fashion statement, we began to actively engage with adults to construct visions of our own past experiences. By kindergarten, we were able to do that without adult help. But we had also learned that people’s behavior is motivated by their desires and beliefs. From that time onward, we’ve had to reconcile the person we would like to be with the person whose thoughts and actions we live with each moment of every day.

I’ve talked a lot about how research psychologists reject much of Freudian theory, but one idea Freudian therapists and experimental psychologists agree on today is that our ego fights fiercely to defend its honor. This agreement is a relatively recent development. For many decades, research psychologists thought of people as detached observers who assess events and then apply reason to discover truth and decipher the nature of the social world.17 We were said to gather data on ourselves and to build our self-images based on generally good and accurate inferences. In that traditional view, a well-adjusted person was thought to be like a scientist of the self, whereas an individual whose self-image was clouded by illusion was regarded as vulnerable to, if not already a victim of, mental illness. Today, we know that the opposite is closer to the truth. Normal and healthy individuals—students, professors, engineers, lieutenant colonels, doctors, business executives—tend to think of themselves as not just competent but proficient, even if they aren’t.

Doesn’t the business executive, noting that her department keeps missing its numbers, question her own abilities? Or the lieutenant colonel, noting that he can’t seem to shed that prefix, wonder whether he’s fit to be a colonel? How do we convince ourselves that we’ve got talent and that when the promotion goes to the other guy, it’s only because the boss was misguided?

As the psychologist Jonathan Haidt put it, there are two ways to get at the truth: the way of the scientist and the way of the lawyer. Scientists gather evidence, look for regularities, form theories explaining their observations, and test them. Attorneys begin with a conclusion they want to convince others of and then seek evidence that supports it, while also attempting to discredit evidence that doesn’t. The human mind is designed to be both a scientist and an attorney, both a conscious seeker of objective truth and an unconscious, impassioned advocate for what we want to believe. Together these approaches vie to create our worldview.

Believing in what you desire to be true and then seeking evidence to justify it doesn’t seem to be the best approach to everyday decisions. For example, if you’re at the races, it is rational to bet on the horse you believe is fastest, but it doesn’t make sense to believe a horse is fastest because you bet on it. Similarly, it makes sense to choose a job you believe is appealing, but it’s irrational to believe a job is appealing because you’ve accepted the offer. Still, even though in each case the latter approach doesn’t make rational sense, it is the irrational choice that would probably make you happier. And the mind generally seems to opt for happy. In both these instances, the research indicates, it is the latter choice that people are likely to make.18 The “causal arrow” in human thought processes consistently tends to point from belief to evidence, not vice versa.19

As it turns out, the brain is a decent scientist but an absolutely outstanding lawyer. The result is that in the struggle to fashion a coherent, convincing view of ourselves and the rest of the world, it is the impassioned advocate that usually wins over the truth seeker. We’ve seen in earlier chapters how the unconscious mind is a master at using limited data to construct a version of the world that appears realistic and complete to its partner, the conscious mind. Visual perception, memory, and even emotion are all constructs, made of a mix of raw, incomplete, and sometimes conflicting data. We use the same kind of creative process to generate our self-image. When we paint our picture of self, our attorney-like unconscious blends fact and illusion, exaggerating our strengths, minimizing our weaknesses, creating a virtually Picassoesque series of distortions in which some parts have been blown up to enormous size (the parts we like) and others shrunk to near invisibility. The rational scientists of our conscious minds then innocently admire the self-portrait, believing it to be a work of photographic accuracy.

Psychologists call the approach taken by our inner advocate “motivated reasoning.” Motivated reasoning helps us to believe in our own goodness and competence, to feel in control, and to generally see ourselves in an overly positive light. It also shapes the way we understand and interpret our environment, especially our social environment, and it helps us justify our preferred beliefs. Still, it isn’t possible for 40 percent to squeeze into the top 5 percent, for 60 percent to squeeze into the top decile, or for 94 percent to be in the top half, so convincing ourselves of our great worth is not always an easy task. Fortunately, in accomplishing it, our minds have a great ally, an aspect of life whose importance we’ve encountered before: ambiguity. Ambiguity creates wiggle room in what may otherwise be inarguable truth, and our unconscious minds employ that wiggle room to build a narrative of ourselves, of others, and of our environment that makes the best of our fate, that fuels us in the good times, and gives us comfort in the bad.


WHAT DO YOU see when you look at the figure below? On first glance, you will see it as either a horse or a seal, but if you keep looking, after a while you will see it as the other creature. And once you’ve seen it both ways, your perception tends to automatically alternate between the two animals. The truth is, the figure is both and it is neither. It is just a suggestive assemblage of lines, a sketch that, like your character, personality, and talents, can be interpreted in different ways.

Attention, Perception & Psychophysics 4, no. 3 (1968), p. 191, "Ambiguity of Form: Old and New," by Gerald H. Fisher, Fig. 3.2, copyright © 1968 by the Psychonomic Society. Reprinted with kind permission from Springer Science+Business Media B.V.

Earlier I said that ambiguity opened the door to stereotyping, to misjudging people we don’t know very well. It also opens the door to misjudging ourselves. If our talents and expertise, our personality and character were all defined by scientific measurement and carved into inalterable stone tablets, it would be difficult to maintain a biased image of who we are. But our characteristics are more like the horse/seal image, open to differing interpretations.

How easy is it for us to tailor reality to fit our desires? David Dunning has spent years pondering questions like that. A social psychologist at Cornell University, he has devoted much of his professional career to studying how and when people’s perception of reality is shaped by their preferences. Consider the horse/seal image. Dunning and a colleague loaded it onto a computer, recruited dozens of subjects, and provided motivation for them to see it as either a horse or a seal.20 Here is how it worked: The scientists told their subjects that they would be assigned to drink one of two liquids. One was a glass of tasty orange juice. The other was a “health smoothie” that looked and smelled so vile that a number of subjects dropped out rather than face the possibility of tasting it. The participants were told that the identity of the beverage they were to drink would be communicated to them via the computer, which would flash a figure—the image above—on the screen for one second. One second is generally not enough time for a person to see the image both ways, so each subject would see either just a horse or just a seal.21

That’s the key to the experiment, for half the subjects were told that if the figure was a “farm animal,” they were to drink the juice and if it was a “sea creature,” they were to drink the smoothie; the other half were told the reverse. Then, after the subjects had viewed the image, the researchers asked them to identify the animal they’d seen. If the students’ motivations biased their perceptions, the unconscious minds of the subjects who were told that farm animal equals orange juice would bias them toward seeing a horse. Similarly, the unconscious minds of those told that farm animal equals disgusting smoothie would bias them toward seeing the seal. And that’s just what happened: among those hoping to see a farm animal, 67 percent reported seeing a horse, while among those hoping to see a sea creature, 73 percent identified a seal.

Dunning's study was certainly persuasive about the impact of motivation on perception, but the ambiguity at hand was simple and well defined. Everyday life experiences, by contrast, present issues far more complex than deciding what animal you're looking at. Talent at running a business or a military unit, the ability to get along with people, the desire to act ethically, and myriad other traits that define us are all complicated qualities. As a result, our unconscious can choose from an entire smorgasbord of interpretations to feed our conscious mind. In the end we feel we are chewing on the facts, though we've actually been chomping on a preferred conclusion.

Biased interpretations of ambiguous events are at the heart of some of our most heated arguments. In the 1950s, a pair of psychology professors, one from Princeton, the other from Dartmouth, decided to see if, even a year after the event, Princeton and Dartmouth students would be capable of objectivity about an important football game.22 The game in question was a brutal match in which Dartmouth played especially rough but Princeton came out on top. The scientists showed a group of students from each school a film of the match and asked them to take note of every infraction they spotted, specifying which were “flagrant” or “mild.” Princeton students saw the Dartmouth team commit more than twice as many infractions as their own team, while Dartmouth students counted about an equal number on both sides. Princeton viewers rated most of the Dartmouth fouls as flagrant but few of their own as such, whereas the Dartmouth viewers rated only a few of their own infractions as flagrant but half of Princeton’s. And when asked if Dartmouth was playing intentionally rough or dirty, the vast majority of the Princeton fans said “yes,” while the vast majority of the Dartmouth fans who had a definite opinion said “no.” The researchers wrote, “The same sensory experiences emanating from the football field, transmitted through the visual mechanism to the brain … gave rise to different experiences in different people…. There is no such ‘thing’ as a game existing ‘out there’ in its own right which people merely ‘observe.’”

I like that last quote because, though it was written about football, it seems to be true about the game of life in general. Even in my field, science, in which objectivity is worshipped, it is often clear that people’s views of the evidence are highly correlated to their vested interests. For example, in the 1950s and ’60s a debate raged about whether the universe had had a beginning or whether it had always been in existence. One camp supported the big bang theory, which said that the cosmos began in a manner indicated by the theory’s name. The other camp believed in the steady state theory, the idea that the universe had always been around, in more or less the same state that it is in today. In the end, to any disinterested party, the evidence landed squarely in support of the big bang theory, especially after 1964, when the afterglow of the big bang was serendipitously detected by a pair of satellite communications researchers at Bell Labs. That discovery made the front page of the New York Times, which proclaimed that the big bang had won out. What did the steady state researchers proclaim? After three years, one proponent finally accepted it with the words “The universe is in fact a botched job, but I suppose we shall have to make the best of it.” Thirty years later, another leading steady state theorist, by then old and silver-haired, still believed in a modified version of his theory.23

The little research that has been done by scientists on scientists shows that it isn’t uncommon for scientists to operate as advocates rather than impartial judges, especially in the social sciences, in which there is greater ambiguity than in the physical sciences. For example, in one study, advanced graduate students at the University of Chicago were asked to rate research reports dealing with issues on which they already had an opinion.24 Unbeknownst to the volunteers, the research reports were all phony. For each issue, half the volunteers saw a report presenting data that supported one side, while the other half saw a report in which the data supported the opposite camp. But it was only the numbers that differed—the research methodology and presentation were identical in both cases.

When asked, most subjects denied that their assessment of the research depended on whether the data supported their prior opinion. But they were wrong. The researcher’s analysis showed that they had indeed judged the studies that supported their beliefs to be more methodologically sound and clearly presented than the otherwise identical studies that opposed their beliefs—and the effect was stronger for those with strong prior beliefs.25 I’m not saying that claims of truth in science are a sham—they aren’t. History has repeatedly shown that the better theory eventually wins. That’s why the big bang triumphed and the steady state theory died, and no one even remembers cold fusion. But it is also true that scientists with an investment in an established theory sometimes stubbornly cling to their old beliefs. Sometimes, as the economist Paul Samuelson wrote, “science advances funeral by funeral.”26

Because motivated reasoning is unconscious, people’s claims that they are unaffected by bias or self-interest can be sincere, even as they make decisions that are in reality self-serving. For example, many physicians think they are immune to monetary influence, yet recent studies show that accepting industry hospitality and gifts has a significant subliminal effect on patient-care decisions.27 Similarly, studies have shown that research physicians with financial ties to pharmaceutical manufacturers are significantly more likely than independent reviewers to report findings that support the sponsor’s drugs and less likely to report unfavorable findings; that investment managers’ estimates of the probabilities of various events are significantly correlated to the perceived desirability of those events; that auditors’ judgments are affected by the incentives offered; and that, at least in Britain, half the population believes in heaven, but only about a quarter believes in hell.28

Recent brain-imaging studies are beginning to shed light on how our brains create these unconscious biases. They show that when assessing emotionally relevant data, our brains automatically include our wants and dreams and desires.29 Our internal computations, which we believe to be objective, are not really the computations that a detached computer would make but, rather, are implicitly colored by who we are and what we are after. In fact, the motivated reasoning we engage in when we have a personal stake in an issue proceeds via a different physical process within the brain than the cold, objective analysis we carry out when we don’t. In particular, motivated reasoning involves a network of brain regions that are not associated with “cold” reasoning, including the orbitofrontal cortex and the anterior cingulate cortex—parts of the limbic system—and the posterior cingulate cortex and precuneus, which are also activated when one makes emotionally laden moral judgments.30 That’s the physical mechanism for how our brains manage to deceive us. But what is the mental mechanism? What techniques of subliminal reasoning do we employ to support our preferred worldviews?


OUR CONSCIOUS MINDS are not chumps. So if our unconscious minds distorted reality in some clumsy and obvious way, we would notice and we wouldn’t buy into it. Motivated reasoning won’t work if it stretches credulity too far, for then our conscious minds start to doubt and the self-delusion game is over. That there are limits to motivated reasoning is critically important, for it is one thing to have an inflated view of your expertise at making lasagna and it is quite another to believe you can leap tall buildings in a single bound. In order for your inflated self-image to serve you well, to have survival benefits, it must be inflated to just the right degree and no further. Psychologists describe this balance by saying that the resulting distortion must maintain the “illusion of objectivity.” The talent we are blessed with in this regard is the ability to justify our rosy images of ourselves through credible arguments, in a way that does not fly in the face of obvious facts. What tools do our unconscious minds use to shape our cloudy, ambiguous experience into the clear and distinctly positive vision of the self that we wish to see?

One method is reminiscent of an old joke about a Catholic and a Jew—both white—and a black man, all of whom die and approach the gates of heaven. The Catholic says, “I was a good man all my life, but I suffered a lot of discrimination. What do I have to do to get into heaven?”

“That’s easy,” says God. “All you have to do to enter heaven is spell one word.”

“What’s that?” the Catholic asks.

“God,” answers the Lord.

The Catholic spells it out, G-O-D, and is let in. Then the Jew approaches. He, too, says, “I was a good man.” And then he adds, “And it wasn’t easy—I had to deal with discrimination all my life. What do I have to do to get into heaven?”

God says, “That’s easy. All you have to do is spell one word.”

“What’s that?” the Jew asks.

“God,” answers the Lord.

The Jew says, “G-O-D,” and he, too, is let in. Then the black man approaches and says that he was kind to everyone, although he faced nasty discrimination because of the color of his skin.

God says, “Don’t worry, there is no discrimination here.”

“Thank you,” says the black man. “So how do I get into heaven?”

“That’s easy,” says God. “All you have to do is spell one word!”

“What’s that?” the black man asks.

“Czechoslovakia,” answers the Lord.

The Lord’s method of discrimination is classic, and our brains employ it often: when information favorable to the way we’d like to see the world tries to enter the gateway of our mind we ask that it spell “God,” but when unfavorable information comes knocking, we make it spell “Czechoslovakia.”

For example, in one study volunteers were given a strip of paper to test whether they had a serious deficiency of an enzyme called TAA, which would make them susceptible to a variety of serious pancreas disorders.31 The researchers told them to dip the strip of paper in a bit of their saliva and wait ten to twenty seconds to see if the paper turned green. Half the subjects were told that if the strip turned green it meant they had no enzyme deficiency, while the other half were told that if it turned green it meant they had the dangerous deficiency. In reality, no such enzyme exists, and the strip was ordinary yellow construction paper, so none of the subjects were destined to see it change color. The researchers watched as their subjects performed the test. Those who were motivated to see no change dipped the paper, and when nothing happened, they quickly accepted the happy answer and decided the test was complete. But those motivated to see the paper turn green stared at the strip for an extra thirty seconds, on average, before accepting the verdict. What’s more, over half of these subjects engaged in some sort of retesting behavior. One subject redipped the paper twelve times, like a child nagging its parents. Can you turn green? Can you? Please? Please?

Those subjects may seem silly, but we all dip and redip in an effort to bolster our preferred views. People find reasons to continue supporting their preferred political candidates in the face of serious and credible accusations of wrongdoing or ignorance but take thirdhand hearsay about an illegal left turn as evidence that the candidate of the other party ought to be banned from politics for life. Similarly, when people want to believe in a scientific conclusion, they’ll accept a vague news report of an experiment somewhere as convincing evidence. And when people don’t want to accept something, the National Academy of Sciences, the American Association for the Advancement of Science, the American Geophysical Union, the American Meteorological Society, and a thousand unanimous scientific studies can all converge on a single conclusion, and people will still find a reason to disbelieve.

That’s exactly what happened in the case of the inconvenient and costly issue of global climate change. The organizations I named above, plus a thousand academic articles on the topic, were unanimous in concluding that human activity is responsible, yet in the United States more than half the people have managed to convince themselves that the science of global warming is not yet settled.32 Actually, it would be difficult to get all those organizations and scientists to agree on anything short of a declaration stating that Albert Einstein was a smart fellow, so their consensus reflects the fact that the science of global warming is very much settled. It’s just not good news. To a lot of people, the idea that we are descended from apes is also not good news. So they have found ways not to accept that fact, either.

When someone with a political bias or vested interest sees a situation differently than we do, we tend to think that person is deliberately misinterpreting the obvious to justify their politics or to bring about some personal gain. But through motivated reasoning each side finds ways to justify its favored conclusion and discredit the other, while maintaining a belief in its own objectivity. And so those on both sides of important issues may sincerely think that theirs is the only rational interpretation. Consider the following research on the death penalty. People who supported capital punishment because they believed it deterred crime, and people who opposed it because they believed it didn’t, were shown two phony studies. Each study employed a different statistical method to prove its point. Let’s call them method A and method B. For half the subjects, the study that used method A concluded that capital punishment works as a deterrent, and the study that used method B concluded that it doesn’t. The other subjects saw studies in which the conclusions were reversed. If people were objective, those on both sides would agree that either method A or method B was the best approach regardless of whether it supported or undermined their prior belief (or they’d agree that it was a tie). But that’s not what happened. Subjects readily offered criticisms such as “There were too many variables,” “I don’t think they have complete enough collection of data,” and “The evidence given is relatively meaningless.” But both sides lauded whatever method supported their belief and trashed whatever method did not. Clearly, it was the reports’ conclusions, not their methods, that inspired these analyses.33

Exposing people to well-reasoned arguments both pro– and anti–death penalty did not engender understanding for the other point of view. Rather, because we poke holes in evidence we dislike and plug holes in evidence we like, the net effect in these studies was to amplify the intensity of the disagreement. A similar study found that, after viewing identical samples of major network television coverage of the 1982 massacre in Beirut, both pro-Israeli and pro-Arab partisans rated the programs, and the networks, as being biased against their side.34 There are critical lessons in this research. First, we should keep in mind that those who disagree with us are not necessarily duplicitous or dishonest in their refusal to acknowledge the obvious errors in their thinking. More important, it would be enlightening for all of us to face the fact that our own reasoning is often not so perfectly objective, either.


ADJUSTING OUR STANDARDS for accepting evidence to favor our preferred conclusions is but one instrument in the subliminal mind’s motivated reasoning tool kit. Other ways we find support for our worldviews (including our view of ourselves) include adjusting the importance we assign to various pieces of evidence and, sometimes, ignoring unfavorable evidence altogether. For example, ever notice how, after a win, sports fans crow about their team’s great play, but after a loss they often ignore the quality of play and focus on Lady Luck or the referees?35 Similarly, executives in public companies pat themselves on the back for good outcomes but suddenly recognize the importance of random environmental factors when performance is poor.36 It can be hard to tell whether those attempts to put a spin on a bad outcome are sincere, and the result of unconscious motivated reasoning, or are conscious and self-serving.

One situation in which that ambiguity is not an issue is scheduling. There is no good reason to offer unrealistic promises with regard to deadlines, because in the end you’ll be required to back up those promises by delivering the goods. Yet contractors and businesses often miss their deadlines even when there are financial penalties for doing so, and studies show that motivated reasoning is a major cause of those miscalculations. It turns out that when we calculate a completion date, the method we think we follow in arriving at it is to break the project down into the necessary steps, estimate the time required for each step, and put it all together. But research shows that, instead, our minds often work backward. That is, the desired target date exerts a great and unconscious influence on our estimate of the time required to complete each of the intermediate steps. In fact, studies show that our estimates of how long it will take to finish a task depend directly on how invested we are in the project’s early completion.37

If it’s important for a producer to get the new PlayStation game done in the next two months, her mind will find reasons to believe that the programming and quality-assurance testing will be more problem-free than ever before. Likewise, if we need to get three hundred popcorn balls made in time for Halloween, we manage to convince ourselves that having the kids help on our kitchen assembly line will go smoothly for the first time in the history of our family. It is because we make these decisions, and sincerely believe they are realistic, that all of us, whether we are throwing a dinner party for ten people or building a new jet fighter, regularly create overly optimistic estimates of when we can finish the project.38 In fact, the U.S. General Accounting Office estimated that when the military purchased equipment involving new technology, it was delivered on schedule and within budget just 1 percent of the time.39

In the last chapter I mentioned that research shows that employers often aren’t in touch with the real reasons they hire someone. An interviewer may like or dislike an applicant because of factors that have little to do with the applicant’s objective qualifications. They may both have attended the same school or both be bird-watchers. Or perhaps the applicant reminds the interviewer of a favorite uncle. For whatever reason, once the interviewer makes a gut-level decision, her unconscious often employs motivated reasoning to back that intuitive inclination. If she likes the applicant, without realizing her motivation she will tend to assign high importance to areas in which the applicant excels and take less seriously those in which the applicant falls short.

In one study, participants considered applications from a male and a female candidate for the job of police chief. That’s a stereotypically male position, so the researchers postulated that the participants would favor the male applicant and then unwittingly narrow the criteria by which they judged the applicants to those that would support that decision. Here is how the study worked: There were two types of résumés. The experimenters designed one to portray a streetwise individual who was poorly educated and lacking in administrative skills. They designed the other to reflect a well-educated and politically connected sophisticate who lacked street smarts. Some participants were given a pair of résumés in which the male applicant had the streetwise résumé and the female was the sophisticate. Others were given a pair of résumés in which the man’s and the woman’s strong points were reversed. The participants were asked not just to make a choice but to explain it.

The results showed that when the male applicant had the streetwise résumé, the participants decided street smarts were important for the job and selected him, but when the male applicant had the sophisticate’s résumé, they decided that street smarts were overrated and also chose the male. They were clearly making their decisions on the basis of gender, and not on the streetwise-versus-sophisticated distinction, but they were just as clearly unaware of doing so. In fact, when asked, none of the subjects mentioned gender as having influenced them.40

Our culture likes to portray situations in black and white. Antagonists are dishonest, insincere, greedy, evil. They are opposed by heroes who are the opposite in terms of those qualities. But the truth is, from criminals to greedy executives to the “nasty” guy down the street, people who act in ways we abhor are usually convinced that they are right.

The power of vested interest in determining how we weigh the evidence in social situations was nicely illustrated in a series of experiments in which researchers randomly assigned volunteers to the role of plaintiff or defendant in a mock lawsuit based on a real trial that occurred in Texas.41 In one of those experiments, the researchers gave both sides documents regarding the case, which involved an injured motorcyclist who was suing the driver of an automobile that had collided with him. The subjects were told that in the actual case, the judge awarded the plaintiff an amount between $0 and $100,000. They were then assigned to represent one side or the other in mock negotiations in which they were given a half hour to fashion their own version of a settlement. The researchers told the subjects they’d be paid based on their success in those negotiations. But the most interesting part of the study came next: the subjects were also told they could earn a cash bonus if they could guess—within $5,000—what the judge actually awarded the plaintiff.

In making their guesses, it was obviously in the subjects’ interest to ignore whether they were playing the role of plaintiff’s or defendant’s advocate. They’d have the greatest chance at winning the cash bonus if they assessed the payout that would be fair, based solely on the law and the evidence. The question was whether they could maintain their objectivity.

On average, the volunteers assigned to represent the plaintiff’s side estimated that the judge would dictate a settlement of nearly $40,000, while the volunteers assigned to represent the defendant put that number at only around $20,000. Think of it: $40,000 versus $20,000. If, despite the financial reward offered for accurately guessing the size of a fair and proper settlement, subjects artificially assigned to different sides of a dispute disagree by 100 percent, imagine the magnitude of sincere disagreement between actual attorneys representing different sides of a case, or opposing negotiators in a bargaining session. The fact that we assess information in a biased manner and are unaware we are doing so can be a real stumbling block in negotiations, even if both sides sincerely seek a fair settlement.

Another version of the experiment, created around the scenario of that same lawsuit, investigated the reasoning mechanism the subjects employed to reach their conflicting conclusions. In that study, at the end of the bargaining session, the researchers asked the volunteers to explicitly comment on each side’s arguments, to make concrete judgments on issues like Does ordering an onion pizza via cell phone affect one’s driving? Does a single beer an hour or two before getting on a motorcycle impair safety? As in the police chief résumé example, subjects on both sides tended to assign more importance to the factors that favored their desired conclusion than to the factors favoring their opponent. These experiments suggest that, as they were reading the facts of the case, the subjects’ knowledge that they would be taking one side or the other affected their judgment in a subtle and unconscious manner that trumped any motivation to analyze the situation fairly.

To further probe that idea, in another variant on the experiment, researchers asked volunteers to assess the accident information before being told which side they would be representing. Then the subjects were assigned their roles and asked to evaluate the appropriate award, again with the promise of a cash bonus if they came close. The subjects had thus weighed the evidence while still unbiased, but made their guess about the award after the cause for bias had been established. In this situation, the discrepancy in the assessments fell from around $20,000 to just $7,000, a reduction of nearly two-thirds. Moreover, the results showed that due to the subjects’ having analyzed the data before taking sides in the dispute, the proportion of times the plaintiff’s and defendant’s advocates failed to come to an agreement within the allotted half hour fell from 28 percent to just 6 percent. It’s a cliché, but the experience of walking in the other side’s shoes does seem to be the best way to understand their point of view.

As these studies suggest, the subtlety of our reasoning mechanisms allows us to maintain our illusions of objectivity even while viewing the world through a biased lens. Our decision-making processes bend but don’t break our usual rules, and we perceive ourselves as forming judgments in a bottom-up fashion, using data to draw a conclusion, while we are in reality deciding top-down, using our preferred conclusion to shape our analysis of the data. When we apply motivated reasoning to assessments about ourselves, we produce that positive picture of a world in which we are all above average. If we’re better at grammar than arithmetic, we give linguistic knowledge more weight in our view of what is important, whereas if we are good at adding but bad at grammar, we think language skills just aren’t that crucial.42 If we are ambitious, determined, and persistent, we believe that goal-oriented people make the most effective leaders; if we see ourselves as approachable, friendly, and extroverted, we feel that the best leaders are people-oriented.43

We even recruit our memories to brighten our picture of ourselves. Take grades, for example. A group of researchers asked ninety-nine college freshmen and sophomores to think back a few years and recall the grades they had received for high school classes in math, science, history, foreign language study, and English.44 The students had no incentive to lie because they were told that their recollections would be checked against their high school registrars’ records, and indeed all signed forms giving their permission. Altogether, the researchers checked on the students’ memories of 3,220 grades. A funny thing happened. You’d think that the handful of years that had passed would have had a big effect on the students’ grade recall, but it didn’t. The intervening years hardly seemed to affect the students’ memories at all—they remembered their grades from their freshman, sophomore, junior, and senior years of high school with the same accuracy, about 70 percent. And yet there were memory holes. What made the students forget? It was not the haze of years but the haze of poor performance: their accuracy of recall declined steadily from 89 percent for A’s to 64 percent for B’s, 51 percent for C’s, and 29 percent for D’s. So if you are ever depressed over being given a bad evaluation, cheer up. Chances are, if you just wait long enough, it’ll improve.


MY SON NICOLAI, now in tenth grade, received a letter the other day. The letter was from a person who used to live in my household but no longer exists. That is, the letter was written by Nicolai himself, but four years earlier. Though the letter had traveled very little in space, it had traveled very far in time, at least in the time of a young child’s life. He had written the letter in sixth grade as a class assignment. It was a message from the eleven-year-old Nicolai, who had been asked to write to the fifteen-year-old Nicolai of the future. The class’s letters had been collected and held those four years by his wonderful English teacher, who eventually mailed them to the adolescents her sixth-grade children had become.

What was striking about Nicolai’s letter was that it said, “Dear Nicolai … you want to be in the NBA. I look forward to playing basketball on the middle school seventh and eighth grade team, and then in high school, where you are now in your second year.” But Nicolai did not make the team in seventh grade; nor did he make it in eighth grade. Then, as his luck would have it, the coach who passed him over for those teams also turned up as the freshman coach in high school, and again declined to pick Nicolai for the team. That year, only a handful of the boys who tried out were turned away, making the rejection particularly bitter for Nicolai. What’s remarkable here is not that Nicolai wasn’t smart enough to know when to give up but that through all those years he maintained his dream of playing basketball, to the extent that he put in five hours a day one summer practicing alone on an empty court. If you know kids, you understand that if a boy continues to insist that someday he will be in the NBA but year after year fails to make even his local school team, it will not be a plus for his social life. Kids might like to tease a loser, but they love teasing a loser for whom winning would have been everything. And so, for Nicolai, maintaining his belief in himself came at some cost.

The story of Nicolai’s basketball career is not over. At the end of ninth grade, his school’s new junior varsity coach saw him practicing day after day, sometimes until it was so dark he could barely see the ball. He invited Nicolai to practice with the team that summer. This fall he finally made the team. In fact, he is the team captain.

I’ve mentioned the successes of Apple Computer a couple of times in this book, and much has been made of Apple cofounder Steve Jobs’s ability to create what has come to be called a “reality distortion field,” which allowed him to convince himself and others that they could accomplish whatever they set their minds to. But that reality distortion field was not just his creation; it is also Nicolai’s, and—to one degree or another—it is a gift of everyone’s unconscious mind, a tool built upon our natural propensity to engage in motivated reasoning.

There are few accomplishments, large or small, that don’t depend to some degree on the accomplisher believing in him- or herself, and the greatest accomplishments are the most likely to rely on that person being not only optimistic but unreasonably optimistic. It’s not a good idea to believe you are Jesus, but believing you can become an NBA player—or, like Jobs, come back from the humiliating defeat of being ejected from your own company, or be a great scientist or author or actor or singer—may serve you very well indeed. Even if it doesn’t end up turning out to be true in the details of what you accomplish, belief in the self is an ultimately positive force in life. As Steve Jobs said, “You can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future.”45 If you believe the dots will connect down the road, it will give you the confidence to follow your heart, even when it leads you off the well-worn path.

I’ve attempted, in writing this book, to illuminate the many ways in which a person’s unconscious mind serves them. For me, the extent to which my inner unknown self guides my conscious mind came as a great surprise. An even greater surprise was the realization of how lost I would be without it. But of all the advantages our unconscious provides, it is this one that I value most. Our unconscious is at its best when it helps us create a positive and fond sense of self, a feeling of power and control in a world full of powers far greater than the merely human. The artist Salvador Dalí once said, “Every morning upon awakening, I experience a supreme pleasure: that of being Salvador Dalí, and I ask myself, wonderstruck, what prodigious thing will he do today, this Salvador Dalí?”46 Dalí may have been a sweet guy or he may have been an insufferable egomaniac, but there is something wonderful about his unrestrained and unabashedly optimistic vision of his future.

The psychological literature is full of studies illustrating the benefits—both personal and social—of holding positive “illusions” about ourselves.47 Researchers find that when they induce a positive mood, by whatever means, people are more likely to interact with others and more likely to help others. Those feeling good about themselves are more cooperative in bargaining situations and more likely to find a constructive solution to their conflicts. They are also better problem solvers, more motivated to succeed, and more likely to persist in the face of a challenge. Motivated reasoning enables our minds to defend us against unhappiness, and in the process it gives us the strength to overcome the many obstacles in life that might otherwise overwhelm us. The more of it we do, the better off we tend to be, for it seems to inspire us to strive to become what we think we are. In fact, studies show that the people with the most accurate self-perceptions tend to be moderately depressed, suffer from low self-esteem, or both.48 An overly positive self-evaluation, on the other hand, is normal and healthy.49

I imagine that, fifty thousand years ago, anyone in their right mind looking toward the harsh winters of northern Europe would have crawled into a cave and given up. Women seeing their children die from rampant infections, men watching their women die in childbirth, human tribes suffering drought, flood, and famine must have found it difficult to keep courageously marching forward. But with so many seemingly insurmountable barriers in life, nature provided us with the means to create an unrealistically rosy attitude about overcoming them—which helps us do precisely that.

As you confront the world, unrealistic optimism can be a life vest that keeps you afloat. Modern life, like our primitive past, has its daunting obstacles. The physicist Joe Polchinski wrote that when he started to draft his textbook on string theory, he expected that the project would take one year. It took him ten. Looking back, had I had a sober assessment of the time and effort required to write this book, or to become a theoretical physicist, I would have shrunk before both endeavors. Motivated reasoning and motivated remembering and all the other quirks of how we think about ourselves and our world may have their downsides, but when we’re facing great challenges—whether it’s losing a job, embarking on a course of chemotherapy, writing a book, enduring a decade of medical school, internship, and residency, spending the thousands of practice hours necessary to become an accomplished violinist or ballet dancer, putting in years of eighty-hour weeks to establish a new business, or starting over in a new country with no money and no skills—the natural optimism of the human mind is one of our greatest gifts.

Before my brothers and I were born, my parents lived in a small flat on the North Side of Chicago. My father worked long hours sewing clothes in a sweatshop, but his meager income left my parents unable to make the rent. Then one night my father came home excited and told my mother they were looking for a new seamstress at work, and that he had gotten her the job. “You start tomorrow,” he said. It sounded like a propitious move, since this would almost double their income, keep them off beggars’ row, and give them the comfort of spending far more time together. There was only one drawback: my mother didn’t sew. Before Hitler invaded Poland, before she lost everyone and everything, before she became a refugee in a strange land, my mother had been a child of wealth. Sewing wasn’t anything a teenage girl in her family had needed to learn.

And so my future parents had a little discussion. My father told my mother he could teach her. They would work at it all night, and in the morning they’d take the train to the shop together and she’d do a passable job. Anyway, he was very fast and could cover for her until she got the hang of it. My mother considered herself clumsy and, worse, too timid to go through with such a scheme. But my father insisted that she was capable and brave. She was a survivor just as he was, he told her. And so they talked, back and forth, about which qualities truly defined my mother.

We choose the facts that we want to believe. We also choose our friends, lovers, and spouses not just because of the way we perceive them but because of the way they perceive us. Unlike phenomena in physics, in life, events can often obey one theory or another, and what actually happens can depend largely upon which theory we choose to believe. It is a gift of the human mind to be extraordinarily open to accepting the theory of ourselves that pushes us in the direction of survival, and even happiness. And so my parents did not sleep that night, while my father taught my mother how to sew.
