4. CHOICE

People behave sometimes as if they had two selves, one who wants clean lungs and long life and another who adores tobacco, one who yearns to improve himself by reading Adam Smith on self-command (in The Theory of Moral Sentiments) and another who would rather watch an old movie on television. The two are in continual contest for control.

— THOMAS SCHELLING

IN THE LATE 1960s and early 1970s, in the midst of the craze for the TV show Candid Camera (forerunner of YouTube, reality TV, and shows like America’s Funniest Home Videos), the psychologist Walter Mischel offered four-year-old preschoolers a choice: a marshmallow now, or two marshmallows if they could wait until he returned. And then, cruelly, he left them alone with nothing more than themselves, the single marshmallow, a hidden camera, and no indication of when he would return. A few of the kids ate the oh-so-tempting marshmallow the minute he left the room. But most kids wanted the bigger bonus and endeavored to wait. So they tried. Hard. But with nothing else to do in the room, the torture was visible. The kids did just about anything they could to distract themselves from the tempting marshmallow that stood before them: they talked to themselves, bounced up and down, covered their eyes, sat on their hands — strategies that more than a few adults might on occasion profitably adopt. Even so, for about half the kids, the 15 or 20 minutes until Mischel returned was just too long to wait.

Giving up after 15 minutes is a choice that could only really make sense under two circumstances: (1) the kids were so hungry that having the marshmallow now could stave off true starvation or (2) their prospects for a long and healthy life were so remote that the 20-minute future versions of themselves, which would get the two marshmallows, simply weren’t worth planning for. Barring these rather remote possibilities, the children who gave in were behaving in an entirely irrational fashion.

Toddlers, of course, aren’t the only humans who melt in the face of temptation. Teenagers often drive at speeds that would be unsafe even on the autobahn, and people of all ages have been known to engage in unprotected sex with strangers, even when they are aware of the risks. The preschoolers’ marshmallows have a counterpart in my raspberry cheesecake, which I know I’ll regret later but nevertheless want desperately now. If you ask people whether they’d rather have a certified check for $100 that they can cash now, or a check for twice as much that they can’t cash for three years, more than half will take the $100 now. (Curiously— and I will come back to this later — most people’s preferences reverse when the time horizon is lengthened, preferring $200 in nine years to $100 in six years.) Then there are the daily uncontrollable choices made by alcoholics, drug addicts, and compulsive gamblers. Not to mention the Rhode Island convict who attempted to escape from jail on day 89 of a 90-day prison sentence.

Collectively, the tendencies I just described exemplify what philosophers call “weakness of the will,” and they are our first hint that the brain mechanisms that govern our everyday choices might be just as klugey as those that govern memory and belief.

Wikipedia defines Homo economicus, or Economic man, as the assumption, popular in many economic theories, that man is “a rational and self-interested actor who desires wealth, avoids unnecessary labor, and has the ability to make judgments towards those ends.”

At first glance, this assumption seems awfully reasonable. Who among us isn’t self-interested? And who wouldn’t avoid unnecessary labor, given the chance? (Why clean your apartment unless you know that guests are coming?)

But as the architect Mies van der Rohe famously said, “God is in the details.” We are indeed good at dodging unnecessary labor, but true rationality is an awfully high standard, frequently well beyond our grasp. To be truly rational, we would need, at a minimum, to face each decision with clear eyes, uncontaminated by the lust of the moment, prepared to make every decision with appropriately dispassionate views of the relevant costs and benefits. Alas, as we’ll see in a moment, the weight of the evidence from psychology and neuroscience suggests otherwise. We can be rational on a good day, but much of the time we are not.

Appreciating what we as a species can and can’t do well — when we are likely to make sound decisions and when we are likely to make a hash of them — requires moving past the idealization of economic man and into the more sticky territory of human psychology. To see why some of our choices appear perfectly sensible and others perfectly foolish, we need to understand how our capacity for choice evolved.

I’ll start with good news. On occasion, human choices can be entirely rational. Two professors at NYU, for example, studied what one might think of as the world’s simplest touch-screen video game — and found that, within the parameters of that simple task, people were almost as rational (in the sense of maximizing reward relative to risk) as you could possibly imagine. Two targets appear (standing still) on a screen, one green, one red. In this task, you get points if you touch the green circle; you lose a larger number of points if you touch the red one. The challenge comes when the circles overlap, as they often do, and if you touch the intersection between the circles, you get both the reward and the (larger) penalty, thus accruing a net loss. Because people are encouraged to touch the screen quickly, and because nobody’s hand-eye coordination is perfect, the optimal thing to do is to point somewhere other than the center of the green circle. For example, if the green circle overlaps but is to the right of the red circle, pointing to the center of the green circle is risky business: an effort to point at the exact center of the green circle will sometimes wind up off target, left of center, smack in the point-losing region where the green and red circles overlap. Instead, it makes more sense to point somewhere to the right of the center of the green circle, keeping the probability of hitting the green circle high, while minimizing the probability of hitting the red circle. Somehow people figure all this out, though not necessarily in an explicit or conscious fashion. Even more remarkably, they do so in a manner that is almost perfectly calibrated to the specific accuracy of their own individual system of hand-eye coordination. Adam Smith couldn’t have asked for more.
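For readers who like to see the logic spelled out, here is a minimal one-dimensional sketch of the reward-versus-risk trade-off at work in that task. The circle positions, point values, and amount of motor noise below are invented for illustration (the actual experiment used two-dimensional targets and measured each subject’s own motor variability), but the moral is the same: with noisy aim, the score-maximizing aim point sits to the side of the green circle’s center, away from the penalty region.

```python
import random

# A much-simplified, one-dimensional sketch of the touch-screen task
# described above. The circle positions, point values, and motor-noise
# level are invented for illustration; the real experiment used
# two-dimensional targets and each person's own motor variability.
GREEN_CENTER, RED_CENTER, RADIUS = 0.0, -1.0, 1.0   # overlap region: -1.0 to 0.0
REWARD, PENALTY = 100, -500
MOTOR_NOISE = 0.4        # std. dev. of where the finger actually lands

def expected_score(aim, trials=50_000):
    """Monte Carlo estimate of the average score when aiming at `aim`."""
    total = 0
    for _ in range(trials):
        landing = random.gauss(aim, MOTOR_NOISE)
        if abs(landing - GREEN_CENTER) <= RADIUS:
            total += REWARD      # landed inside the green circle
        if abs(landing - RED_CENTER) <= RADIUS:
            total += PENALTY     # landed inside the red circle (the overlap earns both)
    return total / trials

# Sweep candidate aim points from left of the green center to well right of it.
best_score, best_aim = max((expected_score(a / 10), a / 10) for a in range(-5, 16))
print(f"best aim point: {best_aim:+.1f} (green center is at 0.0), "
      f"expected score per touch: {best_score:.1f}")
```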

The bad news is that such exquisite rationality may well be the exception rather than the rule. People are as good as they are at the pointing-at-circles task because it draws on a mental capacity — the ability to reach for things — that is truly ancient. Reaching is close to a reflex, not just for humans, but for every animal that grabs a meal to bring it closer to its mouth; by the time we are adults, our reaching system is so well tuned, we never even think about it. For instance, in a strict technical sense, every time I reach for my cup of tea, I make a set of choices. I decide that I want the tea, that the potential pleasure and the hydration offered by the beverage outweigh the risk of spillage. More than that, and even less consciously, I decide at what angle to send my hand. Should I use my left hand (which is closer) or my right hand (which is better coordinated)? Should I grab the cylindrical central portion of the mug (which holds the contents that I really want) or go instead for the handle, a less direct but easier-to-grasp means to the tea that is inside? My hands and muscles align themselves automatically, my fingers forming a pincer grip, my elbow rotating so that my hand is in perfect position. Reaching, central to life, involves many decisions, and evolution has had a long time to get them just right.

But economics is not supposed to be a theory of how people reach for coffee mugs; it’s supposed to be a theory of how they spend their money, allocate their time, plan for their retirement, and so forth — it’s supposed to be, at least in part, a theory about how people make conscious decisions.

And often, the closer we get to conscious decision making, a more recent product of evolution, the worse our decisions become. When the NYU professors reworked their grasping task to make it a more explicit word problem, most subjects’ performance fell to pieces. Our more recently evolved deliberative system is, in this particular respect, no match for our ancient system for muscle control. Outside that rarefied domain, there are loads of circumstances in which human performance predictably defies any reasonable notion of rationality.

Suppose, for example, that I give you a choice between participating in two lotteries. In one lottery, you have an 89 percent chance of winning $1 million, a 10 percent chance of winning $5 million, and a 1 percent chance of winning nothing; in the other, you have a 100 percent chance of winning $1 million. Which do you go for? Almost everyone takes the sure thing.

Now suppose instead your choice is slightly more complicated. You can take either an 11 percent chance at $1 million or a 10 percent chance of winning $5 million. Which do you choose? Here, almost everyone goes for the second choice, a 10 percent shot at $5 million.

What would be the rational thing to do? According to the theory of rational choice, you should calculate your “expected utility,” or expected gain, essentially averaging the amount you would win across all the possible outcomes, weighted by their probability. An 11 percent chance at $1 million works out to an expected gain of $110,000; 10 percent at $5 million works out to an expected gain of $500,000, clearly the better choice. So far, so good. But when you apply the same logic to the first set of choices, you discover that people behave far less rationally. The expected gain in the lottery that is split 89 percent/10 percent/1 percent is $1,390,000 (89 percent of $1 million plus 10 percent of $5 million plus 1 percent of $0), compared to a mere million for the sure thing. Yet nearly everyone goes for the million bucks — leaving close to half a million dollars on the table. Pure insanity from the perspective of “rational choice.”
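The arithmetic behind these comparisons is easy to check for yourself. A minimal sketch, using only the probabilities and prizes given above:

```python
# Expected value of a lottery: the sum of probability * payoff over its outcomes.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

# First pair: the mixed lottery versus the sure $1 million.
mixed = [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)]
sure  = [(1.00, 1_000_000)]

# Second pair: 11 percent at $1 million versus 10 percent at $5 million.
eleven_at_one = [(0.11, 1_000_000)]
ten_at_five   = [(0.10, 5_000_000)]

print(expected_value(mixed))          # 1,390,000
print(expected_value(sure))           # 1,000,000
print(expected_value(eleven_at_one))  #   110,000
print(expected_value(ten_at_five))    #   500,000
```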

Another experiment offered undergraduates a choice between two raffle tickets, one with 1 chance in 100 to win a $500 voucher toward a trip to Paris, the other, 1 chance in 100 to win a $500 voucher toward college tuition. Most people, in this case, prefer Paris. No big problem there; if Paris is more appealing than the bursar’s office, so be it. But when the odds increase from 1 in 100 to 99 out of 100, most people’s preferences reverse; given the near certainty of winning, most students suddenly go for the tuition voucher rather than the trip — sheer lunacy, if they’d really rather go to Paris.

To take an entirely different sort of illustration, consider the simple question I posed in the opening chapter: would you drive across town to save $25 on a $100 microwave? Most people would say yes, but hardly anybody would drive across town to save the same $25 on a $1,000 television. From the perspective of an economist, this sort of thinking too is irrational. Whether the drive is worth it should depend on just two things: the value of your time and the cost of the gas, nothing else. Either the value of your time and gas is less than $25, in which case you should make the drive, or your time and gas are worth more than $25, in which case you shouldn’t make the drive — end of story. Since the labor to drive across town is the same in both cases and the monetary amount is the same, there’s no rational reason why the drive would make sense in one instance and not the other.

On the other hand, to anyone who hasn’t taken a class in economics, saving $25 on $100 seems like a good deal (“I saved 25 percent!”), whereas saving $25 on $1,000 appears to be a stupid waste of time (“You drove all the way across town to get 2.5 percent off? You must have nothing better to do”). In the clear-eyed arithmetic of the economist, a dollar is a dollar is a dollar, but most ordinary people can’t help but think about money in a somewhat less rational way: not in absolute terms, but in relative terms.

What leads us to think about money in (less rational) relative terms rather than (more rational) absolute terms?

To start with, humans didn’t evolve to think about numbers, much less money, at all. Neither money nor numerical systems are omnipresent. Some cultures trade only by means of barter, and some have simple counting systems with only a few numerical terms, such as one, two, many. Clearly, both counting systems and money are cultural inventions. On the other hand, all vertebrate animals are built with what some psychologists call an “approximate system” for numbers, such that they can distinguish more from less. And that system in turn has the peculiar property of being “nonlinear”: the difference between 1 and 2 subjectively seems greater than the difference between 101 and 102. Much of the brain is built on this principle, known as Weber’s law. Thus, a 150-watt light bulb seems only a bit brighter than a 100-watt bulb, whereas a 100-watt bulb seems much brighter than a 50-watt bulb.
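One standard way to capture this nonlinearity is to treat subjective magnitude as roughly logarithmic, so that what matters is the ratio between two quantities rather than their absolute difference. The logarithmic form below is a textbook idealization of Weber’s law rather than anything this chapter commits to, but it reproduces the pattern just described:

```python
import math

# Under a logarithmic (Weber-Fechner style) scale, the subjective gap
# between two magnitudes depends on their ratio, not their difference.
def subjective_gap(a, b):
    return abs(math.log(b / a))

print(subjective_gap(1, 2))      # ~0.69 : 2 feels much bigger than 1
print(subjective_gap(101, 102))  # ~0.01 : barely distinguishable
print(subjective_gap(50, 100))   # ~0.69 : a 100-watt bulb seems much brighter than a 50
print(subjective_gap(100, 150))  # ~0.41 : a 150-watt bulb seems only somewhat brighter
```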

In some domains, following Weber’s law makes a certain amount of sense: a storehouse of an extra 2 kilos of wheat relative to a baseline of 100 kilos isn’t going to matter if everything beyond the first few kilos ultimately spoils; what really matters is the difference between starving and not starving. Of course, money doesn’t rot (except in times of hyperinflation), but our brain didn’t evolve to cope with money; it evolved to cope with food.

And so even today, there’s some remarkable crosstalk between the two. People are less likely to donate money to charities, for example, if they are hungry than if they are full; meanwhile, experimental subjects (excluding those who were dieting) who are put in a state of “high desire for money” eat more M&Ms during a taste test than do people who are in a state of “low desire for money.”[21] To the degree that our understanding of money is kluged onto our understanding of food, the fact that we think about money in relative terms may be little more than one more accident of our cognitive history.

“Christmas Clubs,” accounts into which people put away small amounts of money all year, with the goal of having enough money for Christmas shopping at the end of the year, provide another case in point. Although the goal is admirable, the behavior is (at least from the perspective of classical economics) irrational: Christmas Club accounts generally have low balances, so they tend to earn less interest than if the money were pooled with a person’s other funds. And in any event, that money, sitting idle, might be better spent paying down high-interest credit card debt. Yet people do this sort of thing all the time, establishing real or imaginary accounts for different purposes, as if the money weren’t all theirs.

Christmas Clubs and the like persist not because they are fiscally rational but because they are an accommodation to the idiosyncratic structure of our evolved brain: they provide a way of coping with the weakness of the will. If our self-control were better, we wouldn’t need such accommodations. We would save money all year long in a unified account that receives the maximum rate of return, and draw on it as needed; only because the temptation of the present so often outweighs the abstract reality of the future do we fail to do such a simple, fiscally sound thing. (The temptation of the present also tends to leave our future selves high and dry. According to one estimate, nearly two thirds of all Americans save too little for retirement.)

Rationality also takes a hit when we think about so-called sunk costs. Suppose, for instance, that you decide to see a play and plop down $20 for a ticket — only to find, as you enter the theater, that you’ve lost the ticket. Suppose, further, that you were to be seated in general admission (that is, you have no specific assigned seat), and there’s no way to get the ticket back. Would you buy another ticket? Laboratory data show that half the people say yes, while the other half give up and go home, a 50-50 split; fair enough. But compare that scenario with one that is only slightly different. Say you’ve lost cash rather than a prepurchased ticket. (“Imagine that you have decided to see a play, and the admission is $20 per ticket. As you enter the theater, ready to purchase one, you discover that you have lost a $20 bill. Would you still pay $20 for a ticket for the play?”) In this case, a whopping 88 percent of those tested say yes — even though the extra out-of-pocket expense, $20, is identical in the two scenarios.

Here’s an even more telling example. Suppose you spend $100 for a ticket to a weekend ski trip to Michigan. Several weeks later you buy a $50 ticket for another weekend ski trip, this time to Wisconsin, which (despite being cheaper) you actually think you’ll enjoy more. Then, just as you are putting your newly purchased Wisconsin ski-trip ticket in your wallet, you realize you’ve goofed: the two trips take place on the same weekend! And it’s too late to sell either one. Which trip do you go on? More than half of test subjects said they would choose (more expensive) Michigan — even though they knew they would enjoy the Wisconsin option more. With the money for both trips already spent (and unrecoverable), this choice makes no sense; a person would get more utility (pleasure) out of the trip to Wisconsin for no further expense, but the human fear of “waste” convinces him or her to select the less pleasurable trip.[22] On a global scale, the same kind of dubious reasoning can have massive consequences. Even presidents have been known to stick to policies long after it’s evident to everyone that those policies just aren’t working.

Economists tell us that we should assess the value of a thing according to its expected utility, or how much pleasure it will bring, buying only if the utility exceeds the asking price. But here again, human behavior diverges from economic rationality. If the first principle of how people determine value is that they do so in relative terms (as we saw in the previous section), the second is that people often have only the faintest idea of what something is truly worth.

Instead, we often rely on secondary criteria, such as how good a deal we think we’re getting. Consider, for example, the question posed in Bob Merrill’s classic sing-along: “How much is that doggie in the window?” How much is a well-bred doggie worth? Is a golden retriever worth a hundred times the price of a movie? A thousand times? Twice the value of a trip to Peru? A tenth of the price of a BMW convertible? Only an economist would ask.

But what people actually do is no less weird, often giving more attention to the salesperson’s jabber than the dog in question. If the breeder quotes a price of $600 and the customer haggles her down to $500, the customer buys the dog and counts himself lucky. If the salesperson starts at $500 and doesn’t budge, the customer may walk out in a huff. And, most likely, that customer is a fool. Assuming the dog is healthy, the $500 probably would have been well spent.[23]

To take another example, suppose you find yourself on a beach, on a hot day, with nothing to drink — but a strong desire for a nice cold beer. Suppose, furthermore, that a friend of yours kindly offers to get you a beer, provided that you front him the money. His only request is that you tell him — in advance — the most you are willing to pay; your friend doesn’t want to have the responsibility of deciding for you. People often set their limit according to where the beer is going to be purchased; you might go as high as $6 if the beer is to be purchased at a resort, but no more than $4 if the friend is going to a bodega at the end of the beach. From an economist’s perspective, that’s just loopy: the true measure should be “How much pleasure would that beer bring me?” and not “Is the shop/resort charging a price that is fair relative to other similar establishments?” Six dollars is $6, and if the beer would bring $10 of pleasure, $6 is a bargain, even if one spends it at the world’s most expensive bodega. In the dry language of one economist, “The consumption experience is the same.”

The psychologist Robert Cialdini tells a story of a shopkeeper friend of his who was having trouble moving a certain set of necklaces. About to go away for vacation, this shopkeeper left a note for her employees, intending to tell them to cut the price in half. Her employees, who apparently had trouble reading the note, instead doubled the price. If the necklaces didn’t budge at $100, you’d scarcely expect them to sell at $200. But that’s exactly what happened; by the time the shopkeeper had returned, the whole inventory was gone. Customers were more likely to buy a particular necklace if it had a high price than if it had a low price — apparently because they were using list price (rather than intrinsic worth) as a proxy for value. From the perspective of economics, this is madness.

What’s going on here? These last few examples should remind you of something we saw in the previous chapter: anchoring. When the value we set depends on irrelevancies like a shopkeeper’s starting price as much as it does on an object’s intrinsic value, anchoring has clearly cluttered our head.

Anchoring is such a basic part of human cognition that it extends not just to how we value puppies or material goods, but even to intangibles like life itself. One recent study, for example, asked people how much they would pay for safety improvements that would reduce the annual risk of automobile fatalities. Interviewers would start by asking interviewees whether they would be willing to pay some fairly low price, either £25 or £75. Perhaps because nobody wished to appear to be a selfish lout, answers were always in the affirmative. The fun came after: the experimenter would just keep pushing and pushing until he (or she) found a given subject’s upper limit. When the experimenter started with £25 per year, subjects could be driven up to, on average, £149. In contrast, when the experimenter started at £75 per year, subjects tended to go more than 50 percent higher, to an average maximum of £232.

Indeed, virtually every choice that we make, economic or not, is, in some way or another, influenced by how the problem is posed. Consider, for example, the following scenario:

Imagine that the nation is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

If Program A is adopted, 200 people will be saved.

If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.

Most people would choose Program A, not wanting to put all the lives at risk. But people’s preferences flip if the same choices are instead posed this way:

If Program A is adopted, 400 people will die.

If Program B is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.

“Saving 200 lives” for certain (out of 600) somehow seems like a good idea, whereas letting 400 die (out of the same 600) seems bad—even though they represent exactly the same outcome. Only the wording of the question, what psychologists call framing, has been changed.

Politicians and advertisers take advantage of our susceptibility to framing all the time. A death tax sounds far more ominous than an inheritance tax, and a community that is described as having a crime rate of 3.7 percent is likely to get more resources than one that is described as 96.3 percent crime free.

Framing has the power that it does because choice, like belief, is inevitably mediated by memory. And, as we have already seen, the memory that evolution equipped us with is inherently and inevitably shaped by momentary contextual details. Change the context (here, the actual words used), and you often change the choice. “Death tax” summons thoughts of death, a fate that we all fear, whereas “inheritance tax” may make us think only of the truly wealthy, suggesting a tax scarcely relevant to the average taxpayer. “Crime rates” makes us think of crime; “crime-free rates” triggers thoughts of safety. What we think of — what we summon into memory as we come to a decision — often makes all the difference.

Indeed, the whole field of advertising is based on that premise: if a product brings pleasant associations to mind, no matter how irrelevant, you’re more likely to buy it.[24]

One Chicago law firm recently put the power of memory and suggestion to the ultimate test, flogging not potato chips or beer but the dissolution of marriage. Their tool? A 48-foot-wide billboard made of three panels — the torso of an exceptionally attractive woman, breasts all but bursting out of her lacy black bra; the torso of a man no less handsome, shirtless, with his well-oiled muscles bulging; and, just above the law firm’s name and contact information, a slogan containing just five words: LIFE’S SHORT — GET A DIVORCE.

In a species less driven by contextual memory and spontaneous priming, I doubt that sign would have any impact. But in a species like ours, there’s reason to worry. To seek a divorce is, of course, to make one of the most difficult choices a human being can make. One must weigh hopes for the future against fears of loneliness, regret, financial implications, and (especially) concerns about children. Few people make such decisions lightly. In a rational world, a titillating billboard wouldn’t make a dime’s difference. In the real world of flesh-and-blood human beings governed by klugey brains, people who weren’t otherwise thinking of divorce might well be induced to start. What’s more, the billboard might frame how people think about divorce, leading them to evaluate their marriage not in terms of companionship, family, and financial security, but in terms of whether it includes enough ripped bodices and steamy sexual encounters.

If this seems speculative, that’s because the law firm took the sign down, under pressure, after just a couple of weeks, so there’s no direct evidence. But a growing literature of real-world marketing studies backs me up. One study, for example, asked people how likely they were to buy a car in the next six months. People who were asked whether they’d buy a car were almost twice as likely to actually do so as those who weren’t asked. (Small wonder that many car dealers ask not whether you are going to buy a car but when.) The parallel to a lawyer’s leading question is exact, the mechanism the same. Just as context influences belief by jostling the current contents of our thoughts, it also affects choice.

The cluster of phenomena I’ve just discussed — framing, anchoring, susceptibility to advertising, and the like — is only part of the puzzle; our choices are also contaminated by memories retrieved from within. Consider, for example, a study that examined how office workers, some feeling hungry, some not, would select which snack they’d like to have a week hence, in the late afternoon. Seventy-two percent of those who were hungry at the time of the decision (several days before they would be having the snack in question) chose unhealthful snacks, like potato chips or candy bars. Among the people who weren’t feeling hungry, only 42 percent chose the same unhealthful snacks; most instead committed themselves to apples and bananas. Everybody knows an apple is a better choice (consistent with our long-term goal of staying healthy), but when we feel hungry, memories of the joys of salt and refined sugar win out.

All of this is, of course, a function of evolution. Rationality, pretty much by definition, demands a thorough and judicious balancing of evidence, but the circuitry of mammalian memory simply isn’t attuned to that purpose. The speed and context-sensitivity of memory no doubt helped our ancestors, who had to make snap decisions in a challenging environment. But in modern times, this former asset has become a liability. When context tells us one thing, but rationality another, rationality often loses.

Evolutionary inertia made a third significant contribution to the occasional irrationality of modern humans by calibrating us to expect a degree of uncertainty that is largely (and mercifully) absent in contemporary life. Until very recently, our ancestors could not count on the success of next year’s harvest, and a bird in hand was certainly better than two, or even three, in the bush. Absent refrigerators, preservatives, and grocery stores, mere survival was far less assured than it is today — in the immortal words of Thomas Hobbes, life was “nasty, brutish, and short.”

As a result, over hundreds of millions of years, evolution selected strongly for creatures that lived largely in the moment. In every species that’s ever been studied, animals tend to follow what is known as a “hyperbolic discounting curve” — fancy words for the fact that organisms tend to value the present far more than the future. And the closer temptation is, the harder it is to resist. For example, at a remove of 10 seconds, a pigeon can recognize (so to speak) that it’s worth waiting 14 seconds to get four ounces of food rather than a single ounce in 10 seconds — but if you wait 9 seconds and let the pigeon change its choice at the last moment, it will. At the remove of just 1 second, the desire for food now overwhelms the desire for more food later; the pigeon refuses to wait an extra 4 seconds, like a hungry human noshing on chips while he waits for dinner to arrive.
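The standard form of a hyperbolic discounting curve values a reward of size A, delayed by D time units, at roughly A / (1 + kD), where k measures how steeply the organism discounts. The sketch below uses a made-up value of k, chosen purely so the numbers reproduce the pigeon’s reversal described above:

```python
# Hyperbolic discounting: a reward of size `amount`, delayed by `delay_seconds`,
# is valued at roughly amount / (1 + k * delay_seconds). The steepness k = 4.0
# is an invented, illustrative value, not one fitted to real pigeons.
def discounted_value(amount, delay_seconds, k=4.0):
    return amount / (1 + k * delay_seconds)

# Ten seconds out: four ounces in 14 s is worth more than one ounce in 10 s.
print(discounted_value(1, 10), discounted_value(4, 14))   # ~0.024 vs ~0.070

# Nine seconds later the delays have shrunk to 1 s and 5 s, and the
# preference flips: the single ounce right away now looks better.
print(discounted_value(1, 1), discounted_value(4, 5))     # 0.200 vs ~0.190
```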

Life is generally much more stable for humans than for the average pigeon, and human frontal lobes much larger, but still we humans can’t get over the ancestral tendency to live in the moment. When we are hungry, we gobble French fries as if driven to lard up on carbs and fat now, since we might not find any next week. Obesity is chronic not just because we routinely underexercise, but also because our brain hasn’t caught up with the relative cushiness of modern life.[25] We continue to discount the future enormously, even as we live in a world of all-night grocery stores and 24/7 pizza delivery.

Future discounting extends well beyond food. It affects how people spend money, why they fail to save enough for retirement, and why they so frequently rack up enormous credit card debt. One dollar now, for example, simply seems more valuable than $1.20 a year hence, and nobody seems to think much about how quickly compound interest rises, precisely because the subjective future is just so far away — or so we are evolved to believe. To a mind not evolved to think about money, let alone the future, credit cards are almost as serious a problem as crack. (Fewer than 1 in 50 Americans uses crack regularly, but nearly half carry regular credit card debt, almost 10 percent owing over $10,000.)
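To see how quickly compounding can run away from a mind that discounts the future, consider a small illustration; the balance and the interest rate here are invented for the example, not figures from this chapter:

```python
# How carried credit card debt compounds over time. The $10,000 balance
# and 20 percent annual rate are illustrative numbers only.
balance = 10_000.0
annual_rate = 0.20

for year in range(1, 6):
    balance *= 1 + annual_rate
    print(f"after year {year}: ${balance:,.0f}")

# By year 5 the balance is roughly $24,900 - about two and a half times
# what was originally borrowed, without a single new purchase.
```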

Our extreme favoritism toward the present at the expense of the future would make sense if our life span were vastly shorter, or if the world were much less predictable (as was the case for our ancestors), but in countries where bank accounts are federally insured and grocery stores reliably restocked, the premium we place on the present is often seriously counterproductive.

The more we discount the future, the more we succumb to short-term temptations like drugs, alcohol, and overeating. As one researcher, Howard Rachlin, sums it up: “In general, living a healthy life for a period of ten years, say, is intrinsically satisfying… Over a ten-year period, virtually all would prefer living a healthy life to being a couch potato. Yet we also (more or less) prefer to drink this drink than not to drink it, to eat this chocolate sundae than to forgo it, to smoke this cigarette than not smoke it, to watch this TV program than spend a half-hour exercising…” [emphasis added]

I don’t think it’s exaggerating to say that this tension between the short term and the long term defines much of contemporary Western life: the choice between going to the gym now and staying home to watch a movie, the joy of the French fries now versus the pain of winding up later with a belly the size of Buddha’s.

But the notion that we are shortsighted in our choices actually explains only half of this modern bourgeois conflict. The other half of the story is that we humans are the only species smart enough to appreciate the fact that there is another option. When the pigeon goes for the one ounce now, I’m not sure it feels any remorse at what has been lost. I, on the other hand, have shown myself perfectly capable of downing an entire bag of the ironically named Smartfood popcorn, even as I recognize that in a few hours I will regret it.

And that too is a sure sign of a kluge: when I can do something stupid even as I know at the time that it’s stupid, it seems clear that my brain is a patchwork of multiple systems working in conflict. Evolution built the ancestral reflexive system first and evolved systems for rational deliberation second — fine in itself. But any good engineer would have put some thought into integrating the two, perhaps largely or entirely turning over choices to the more judicious human forebrain (except possibly during time-limited emergencies, where we have to act without the benefit of reflection). Instead, our ancestral system seems to be the default option, our first recourse just about all the time, whether we need it or not. We eschew our deliberative system not just during a time crunch, but also when we are tired, distracted, or just plain lazy; using the deliberative system seems to require an act of will. Why? Perhaps it’s simply because the older system came first, and — in systems built through the progressive overlay of technology — what comes first tends to remain intact. And no matter how shortsighted the ancestral system may be, our deliberative system (if it manages to get involved at all) inevitably winds up contaminated by it. Small wonder that future discounting is such a hard habit to shake.

Choice slips a final cog when it comes to the tension between logic and emotion. The temptation of the immediate present is but one example; many alcoholics know that continued drink will bring them to ruin, but the anticipated pleasure in a drink at a given moment is often enough to overwhelm sensible choice. Emotion one, logic zero.

Perhaps it is only a myth that Menelaus declared war on the Trojans after Paris abducted the woman Menelaus loved, but there can be little doubt that some of the most significant decisions in history have been made for reasons more emotional than rational. This may well, for example, have been the case in the 2003 invasion of Iraq; only a few months earlier, President Bush was quoted as saying, in reference to Saddam Hussein, “After all, this is the guy who tried to kill my dad.” Emotion almost certainly plays a role when certain individuals decide to murder their spouse, especially one caught in flagrante delicto. Positive emotions, of course, influence many decisions too — the houses people buy, the partners they marry, the sometimes dubious individuals with whom they have short-term flings. As my father likes to say, “All sales” — and indeed all choices — “are emotional.” From the perspective developed in this book, what is klugey is not so much the fact that people sometimes rely on emotions but rather the way those emotions interact with the deliberative system. This is true not just in the obvious scenarios I mentioned — those involving jealousy, love, vengeance, and so forth — but even in cases that don’t appear to engage our emotions at all. Consider, for example, a study that asked people how much they would contribute toward various environmental programs, such as saving dolphins or providing free medical checkups to farm workers in order to reduce the incidence of skin cancer. When asked which effort they thought was more important, most people point to the farm workers (perhaps because they valued human lives more than those of dolphins). But when researchers asked people how much money they would donate to each cause, dolphins and farm workers, they gave more to the cuddly dolphins. Either choice on its own might make sense, but making the two together is as inconsistent as you can get. Why would someone spend more money on dolphins if that person thinks that human lives are more important? It’s one thing for our deliberative system to be out of sync with the ancestral system, another for the two to flip-flop arbitrarily in their bid for control.

In another recent study, people were shown a face — happy, sad, or neutral — for about a sixtieth of a second — and then were asked to drink a “novel lemon-lime beverage.” People drank more lemon-lime after seeing happy faces than after viewing sad ones, and they were willing to pay twice as much for the privilege. All this presumably shows that the process of priming affects our choices just as much as our beliefs: a happy face primes us to approach the drink as if it were pleasant, and a sad face primes us to avoid the drink (as if it were unpleasant). Is it any wonder that advertisers almost always present us with what the rock band REM once called “shiny, happy people”?

An even more disquieting study asked a group of subjects to play a game known as “prisoner’s dilemma,” which requires pairs of people to choose to either cooperate with each other or “defect” (act uncooperatively). You get the bigger payoff if you and the other person both cooperate (say, $10), an intermediate reward (say, $3) if you defect and your opponent cooperates, and no reward if you both defect. The general procedure is a staple in psychology research; the catch in this particular study was that before people began to play the game, they sat in a waiting room where an ostensibly unrelated news broadcast was playing in the background. Some subjects heard prosocial news (about a clergyman donating a kidney to a needy patient); others, by contrast, heard a broadcast about a clergyman committing murder. What happened? You guessed it: people who heard about the good clergyman were a lot more cooperative than those who heard about the bad clergyman.

In all these studies, emotions of one sort or another prime memories, and those memories in turn shape choice. A different sort of illustration comes from what economist George Loewenstein calls “the attraction of the visceral.” It’s one thing to turn down chocolate cheesecake in the abstract, another when the waiter brings in the dessert cart. College students who are asked whether they’d risk wasting 30 minutes in exchange for a chance to win all the freshly baked chocolate chip cookies they could eat are more likely to say yes if they actually see (and smell) the cookies than if they are merely told about them.

Hunger, however, is nothing compared to lust. A follow-up study exposed young men to either a written or a (more visceral) filmed scenario depicting a couple who had met earlier in the evening and are now discussing the possibility of (imminently) having sex. Both are in favor, but neither party has a condom, and there is no store nearby. The woman reports that she is taking a contraceptive pill and is disease-free; she leaves it up to the man to decide whether to proceed, unprotected. Subjects were then asked to rate their own probability of having unprotected sex if they were in the male character’s shoes. Guess which group of men — readers or video watchers — was more likely to throw caution to the wind? (Undergraduate men are also apparently able to persuade themselves that their risk of contracting a sexually transmitted disease goes down precisely as the attractiveness of their potential partner goes up.) The notion that men might think with organs below the brain is not new, but the experimental evidence highlights rather vividly the degree to which our choices don’t necessarily follow from purely “rational” considerations. Hunger, lust, happiness, and sadness are all factors that most of us would say shouldn’t enter into rational thought. Yet evolution’s progressive overlay of technology has guaranteed that each wields an influence, even when we insist otherwise.

The clumsiness of our decision-making ability becomes especially clear when we consider moral choices. Suppose, for example, that a runaway trolley is about to run over and kill five people. You (and you alone) are in a position such that you can hit a switch to divert the trolley onto a different set of tracks, where it would kill only one person instead of five. Do you hit the switch?

Now, suppose instead that you are on a footbridge, standing above the track that bears the runaway trolley. This time, saving the five people would require you to push a rather large person (considerably bigger than you, so don’t bother to volunteer yourself) off the footbridge and into the oncoming trolley. The large person in question would, should you toss him over, die, allowing the other five to survive. Would that be okay? Although most people answer yes to the scenario involving the switch, most people say no to pushing someone off the footbridge — even though in both cases five lives are saved at the cost of one.

Why the difference? Nobody knows for sure, but part of the answer seems to be that there is something more visceral about the second scenario; it’s one thing to flip a switch, which is inanimate and somewhat removed from the actual collision, and another to forcibly send someone to his death.

One historical example of how visceral feelings affect moral choice is the unofficial truce called by British and German soldiers during Christmas 1914, early in World War I. The original intention was to resume battle afterward, but the soldiers got to know one another during the truce; some even shared a Christmas meal. In so doing, they shifted from conceptualizing one another as enemies to seeing each other as flesh-and-blood individuals. The consequence was that after the Christmas truce, the soldiers were no longer able to kill one another. As the former president Jimmy Carter put it in his Nobel Peace Prize lecture (2002), “In order for us human beings to commit ourselves personally to the inhumanity of war, we find it necessary first to dehumanize our opponents.”

Both the trolley problem and the Christmas truce remind us that though our moral choices may seem to be the product of a single process of deliberative reasoning, our gut, in the end, often also plays a huge role, whether we are speaking of something mundane, like a new car, or making decisions with lives at stake.

The trolley scenarios illustrate the split by showing how we can get two different answers to essentially the same question, depending on which system we tap into. The psychologist Jonathan Haidt has tried to go a step further, arguing that we can have strong moral intuitions even when we can’t back them up with explicit reasons. Consider, for example, the following scenario:

Julie and Mark are brother and sister. They are traveling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other. What do you think about that? Was it okay for them to make love?

Every time I read this passage, I get the creeps. But why exactly is it wrong? As Haidt describes it, most people who hear the above story immediately say that it was wrong for the siblings to make love, and they then begin searching for reasons. They point out the dangers of inbreeding, only to remember that Julie and Mark used two forms of birth control. They argue that Julie and Mark will be hurt, perhaps emotionally, even though the story makes it clear that no harm befell them. Eventually, many people say something like “I don’t know, I can’t explain it, I just know it’s wrong.”

Haidt calls this phenomenon — where we feel certain that something is wrong but are at a complete loss to explain why — “moral dumbfounding.” I call it an illustration of how the emotional and the judicious can easily decouple. What makes moral dumbfounding possible is the split between our ancestral system — which looks at an overall picture without being analytical about the details — and a judicious system, which can parse things piece by piece. As is so often the case, where there is conflict, the ancestral system wins: even though we know we can’t give a good reason, our emotional queasiness lingers.

When you look inside the skull, using neuroimaging, you find further evidence that our moral judgments derive from two distinct sources: people’s choices on moral dilemmas correlate with how they use their brains. In experimental trials like those mentioned earlier, the subjects who chose to save five lives at the expense of one tended to rely on the regions of the brain known as the dorsolateral prefrontal cortex and the posterior parietal cortex, which are known to be important for deliberative reasoning. On the other hand, people who decided in favor of the single individual at the cost of five tended to rely more on regions of the limbic cortex, which are more closely tied to emotion.[26]

What makes the human mind a kluge is not the fact that we have two systems per se but the way in which the two systems interact. In principle, a deliberative reasoning system should be, well, deliberate: above the fray and unbiased by the considerations of the emotional system. A sensibly designed deliberative-reasoning machine would systematically search its memory for relevant data, pro and con, so that it could make systematic decisions. It would be attuned as much to disconfirmation as confirmation and utterly immune to patently irrelevant information (such as the opening bid of a salesperson whose interests are necessarily different from your own). This system would also be empowered to well and truly stifle violations of its master plan. (“I’m on a diet. No chocolate cake. Period.”) What we have instead falls between two systems — an ancestral, reflexive system that is only partly responsive to the overall goals of the organism, and a deliberative system (built from inappropriate old parts, such as contextual memory) that can act in genuinely independent fashion only with great difficulty.

Does this mean that our conscious, deliberate choices are always the best ones? Not at all. As Daniel Kahneman has observed, the reflexive system is better at what it does than the deliberative system is at deliberating. The ancestral system, for example, is exquisitely sensitive to statistical fluctuations — its bread and butter, shaped over eons, is to track the probabilities of finding food and predators in particular locations. And while our deliberative system can be deliberate, it takes a great deal of effort to get it to function in genuinely fair and balanced ways. (Of course, this is no surprise if you consider that the ancestral system has been shaped for hundreds of millions of years, but deliberative reasoning is still a bit of a newfangled invention.)

So, inevitably, there are decisions for which the ancestral system is better suited; in some circumstances it offers the only real option. For instance, when you have to make a split-second decision — whether to brake your car or swerve into the next lane — the deliberative system is just too slow. Similarly, where we have many different variables to consider, the unconscious mind — given suitable time — can sometimes outperform the conscious deliberative mind; if your problem requires a spreadsheet, there’s a chance that the ancestral, statistically inclined mind might be just the ticket. As Malcolm Gladwell said in his recent book Blink, “Decisions made very quickly can be every bit as good as decisions made consciously and deliberately.”

Still, we shouldn’t blindly trust our instincts. When people make effective snap decisions, it’s usually because they have ample experience with similar problems. Most of Gladwell’s examples, like that of an art curator who instantly recognizes a forgery, come from experts, not amateurs. As the Dutch psychologist Ap Dijksterhuis, one of the world’s leading researchers on intuition, noted, our best intuitions are those that are the result of thorough unconscious thought, honed by years of experience. Effective snap decisions (Gladwell’s “blinks”) often represent the icing on a cake that has been baking for a very long time. Especially when we face problems that differ significantly from those that we’ve faced before, deliberative reasoning can be our first and best hope.

It would be foolish to routinely surrender our considered judgment to our unconscious, reflexive system, vulnerable and biased as it often is. But it would be just as silly to abandon the ancestral reflexive system altogether: it’s not entirely irrational, just less reasoned. In the final analysis, evolution has left us with two systems, each with different capabilities: a reflexive system that excels in handling the routine and a deliberative system that can help us think outside the box.

Wisdom will come ultimately from recognizing and harmonizing the strengths and weaknesses of the two, discerning the situations in which our decisions are likely to be biased, and devising strategies to overcome those biases.
