2. MEMORY

Your memory is a monster; you forget — it doesn’t. It simply files things away. It keeps things for you, or hides things from you — and summons them to your recall with a will of its own. You think you have a memory; but it has you!

— JOHN IRVING

MEMORY IS, I BELIEVE, the mother of all kluges, the single factor most responsible for human cognitive idiosyncrasy.

Our memory is both spectacular and a constant source of disappointment: we can recognize photos from our high school yearbooks decades later — yet find it impossible to remember what we had for breakfast yesterday. Our memory is also prone to distortion, conflation, and simple failure. We can know a word but not be able to remember it when we need it (think of a word that starts with a, meaning “a counting machine with beads”),[5] or we can learn something valuable (say, how to remove tomato sauce stains) and promptly forget it. The average high school student spends four years memorizing dates, names, and places, drill after drill, and yet a significant number of teenagers can’t even identify the century in which World War I took place.

I’m one to talk. In my life, I have lost my house keys, my glasses, my cell phone, and even a passport. I’ve forgotten where I parked, left the house without remembering my keys, and on a particularly sad day, left a leather jacket (containing a second cell phone) on a park bench. My mother once spent an hour looking for her car in the garage at an unfamiliar airport. A recent Newsweek article claims that people typically spend 55 minutes a day “looking for things they know they own but can’t find.”

Memory can fail people even when their lives are at stake. Skydivers have been known to forget to pull the ripcord to open their parachute (accounting, by one estimate, for approximately 6 percent of skydiving deaths), scuba divers have forgotten to check their oxygen level, and more than a few parents have inadvertently left their babies in locked cars. Pilots have long known that there’s only one way to fly: with a checklist, relying on a clipboard to do what human memory can’t, which is to keep straight the things that we have to do over and over again. (Are the flaps down? Did I check the fuel gauge? Or was that last time?) Without a checklist, it’s easy to forget not just the answers but also the questions.

Why, if evolution is usually so good at making things work well, is our memory so hit-or-miss?

The question becomes especially pointed when we compare the fragility of our memory with the robustness of the memory in the average computer. Whereas my Mac can store (and retrieve) my complete address book, the locations of all the countries in Africa, the complete text of every email message I ever sent, and all the photographs I’ve taken since late 1999 (when I got my first digital camera), not to mention the first 3,000 digits of pi, all in perfect detail, I still struggle with the countries in Africa and can scarcely even remember whom I last emailed, let alone exactly what I said. And I never got past the first ten digits of pi (3.1415926535) — even though I was just the sort of nerd who’d try to memorize more.[6]

Human memory for photographic detail is no better; we can recognize the main elements of a photo we’ve seen before, but studies show that people often don’t notice small or even fairly large changes in the background.[7] And I for one could never ever recall the details of a photograph, no matter how long I sat and stared at it beforehand. I can still remember the handful of phone numbers I memorized as a child, when I had loads of free time, but it took me almost a year to learn my wife’s cell phone number by heart.

Worse, once we do manage to encode a memory, it’s often difficult to revise it. Take, for instance, the trouble I have with the name of my dear colleague Rachel. Five years after she got divorced and reverted to her maiden name (Rachel K.), I still sometimes stumble and refer to her by her former married name (Rachel C.) because the earlier habit is so strong. Whereas computer memory is precise, human memory is in many ways a recalcitrant mess.

Computer memory works well because programmers organize information into what amounts to a giant map: each item is assigned a specific location, or “address,” in the computer’s databanks. With this system, which I will call “postal-code memory,” when a computer is prompted to retrieve a particular memory, it simply goes to the relevant address. (A 64-megabyte memory card holds roughly 64 million such addresses, each containing a single “word” made up of a set of eight binary digits.)

Postal-code memory is as powerful as it is simple. Used properly, it allows computers to store virtually any information with near-perfect reliability, and it lets a programmer readily revise any particular memory: no more referring to Rachel K. as Rachel C. once she’s changed her name. It’s no exaggeration to say that postal-code memory is a key component of virtually every modern computer.
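In code, the whole scheme is almost trivially simple. Here is a minimal sketch in Python (the address and its contents are invented for illustration, not a claim about how any particular machine lays out its memory):

    memory = {}                       # address -> stored value

    def store(address, value):
        memory[address] = value       # write to one specific location

    def retrieve(address):
        return memory[address]        # read back from exactly that location

    store(0x2F10, "Rachel C.")        # the old, married name, filed at a known address
    memory[0x2F10] = "Rachel K."      # one precise revision at that address...
    print(retrieve(0x2F10))           # ...and the system never blurts out "Rachel C." again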

But not, alas, in humans. Having postal-code memory would have been terrifically useful for us, but evolution never discovered the right part of the mountain range. We humans rarely — if ever — know precisely where a piece of information is stored (beyond the extremely vague “somewhere inside the brain”), and our memory evolved according to an entirely different logic.

In lieu of postal-code memory we wound up with what I’ll call “contextual memory”: we pull things out of our memory by using context, or clues that hint at what we are looking for. It’s as if we say to ourselves, every time we need a particular fact, “Um, hello, brain, sorry to bother you, but I need a memory that’s about the War of 1812. Got anything for me?” Often, our brain obliges, quickly and accurately yielding precisely the information we want. For instance, if I ask you to name the director who made the movies E.T. and Schindler’s List, you might well come up with the answer within milliseconds — even though you may not have the foggiest idea where in your brain that information was stored.[8] In general, we pull what we need from memory by using various clues, and when things go well, the detail we need just “pops” into our mind. In this respect, accessing a memory is a bit like breathing — most of it comes naturally.

And what comes to mind most naturally often depends on context. We’re more likely to remember what we know about gardening when we are in the garden, more likely to remember what we know about cooking when we are in the kitchen. Context, sometimes for better and sometimes for worse, is one of the most powerful cues affecting our memory.

Contextual memory has a very long evolutionary history; it’s found not just in humans, but also in apes and monkeys, rats and mice, and even spiders and snails. Scientists picked up the first hints of the power of contextual cues almost a hundred years ago, in 1917, when Harvey Carr, a student of the famous behaviorist psychologist John Watson, was running a fairly routine study that involved training rats to run in a maze. Along the way, Carr discovered that the rats were quite sensitive to factors that had nothing to do with the maze itself. A rat that was trained in a room with electric light, for example, would run the maze better when tested in a room with electric light rather than natural light. The context in which the rat was tested — that is, the environment to which it had grown accustomed — affected its memory of how to run in the maze, even though lighting was not germane to the task. It has since become clear that just about every biological creature uses context, relevant or not, as a major guiding force in accessing memory.

Contextual memory may have evolved as a hack, a crude way of compensating for the fact that nature couldn’t work out a proper postal-code system for accessing stored information, but there are still some obvious virtues in the system we do have. For one thing, instead of treating all memories equally, as a computer might do, context-dependent memory prioritizes, bringing most quickly to mind things that are common, things that we’ve needed recently, and things that have previously been relevant in situations that are similar to our current circumstances — exactly the sort of information that we tend to need the most. For another thing, context-dependent memories can be searched rapidly in parallel, and as such may represent a good way of compensating for the fact that neurons are millions of times slower than the memory chips used by digital computers. What’s more, we (unlike computers) don’t have to keep track of the details of our own internal hardware; most of the time, finding what we need in our memory becomes a matter of asking ourselves the right question, not identifying a particular set of brain cells.[9]

Nobody knows for sure how this works, but my best guess is that each of our brain’s memories acts autonomously, responding on its own to whatever queries it might match, thereby eliminating the need for a central agent to keep a map of memory storage locations. Of course, when you rely on matches rather than specific locations that are known in advance, there’s no guarantee that the right memory will respond; the fewer the cues you provide, the more “hits” your memory will serve up, and as a consequence the memory that you actually want may get buried among those that you don’t want.
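To make the contrast with postal-code memory concrete, here is a toy sketch of cue-driven retrieval; it is purely an illustration of the idea, not anyone’s actual model of the brain, and the memories and cues are invented. Every stored memory scores itself against whatever cues are offered, and the fewer the cues, the more candidates answer the call:

    memories = [
        {"cues": {"breakfast", "tuesday", "kitchen"}, "content": "yogurt"},
        {"cues": {"breakfast", "wednesday", "kitchen"}, "content": "waffles"},
        {"cues": {"history class", "war", "1812"}, "content": "the burning of Washington"},
    ]

    def recall(cues):
        # no addresses and no central index: each memory simply reports
        # how many of the offered cues it happens to match
        hits = [(len(cues & m["cues"]), m["content"]) for m in memories]
        return sorted([h for h in hits if h[0] > 0], reverse=True)

    print(recall({"breakfast", "tuesday"}))   # the right memory comes out on top
    print(recall({"breakfast"}))              # a vaguer cue returns both breakfasts,
                                              # and Tuesday blurs into Wednesday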

Contextual memory has its price, and that price is reliability. Because human memory is so thoroughly driven by cues, rather than location in the brain, we can easily get confused. The reason I can’t remember what I had for breakfast yesterday is that yesterday’s breakfast is too easily confused with that of the day before, and the day before that. Was it yogurt Tuesday, waffles Wednesday, or the other way around? There are too many Tuesdays, too many Wednesdays, and too many near-identical waffles for a cue-driven system to keep straight. (Ditto for any pilot foolish enough to rely on memory instead of a checklist — one takeoff would blur together with the next. Sooner or later the landing gear would be forgotten.)

Whenever context changes, there’s a chance of a problem. I, for example, recently found myself at a party where I was awestruck by the sudden appearance of the luminescent and brilliantly talented actress who played the role of Claire Fisher in the television show Six Feet Under. I thought it would be fun to introduce myself. Ordinarily, I probably would have had little trouble remembering her name — I’d seen it in the credits dozens of times, but at that moment I drew a total blank. By the time I got a friend to remind me of her name, the actress was already leaving; I had missed my chance. In hindsight, it’s perfectly clear why I couldn’t remember her name: the context was all wrong. I was used to seeing the person in question on TV, in character, in a fictional show set in Los Angeles, not in real life, in New York, in the company of the mutual acquaintances who had brought me to the party. In the memory of a human being, context is all, and sometimes, as in this instance, context works against us.

Context exerts its powerful effect — sometimes helping us, sometimes not — in part by “priming” the pump of our memory; when I hear the word doctor, it becomes easier to recognize the word nurse. Had someone said “Lauren” (the first name of the actor in question), I probably could have instantly come up with her last name (Ambrose), but without the right cue, I could only draw a blank.

The thing about context is that it is always with us — even when it’s not really relevant to what we are trying to remember. Carr’s experiment with rats, for instance, has a parallel with humans in a remarkable experiment with scuba divers. The divers were asked to memorize a list of words while underwater. Like the rats that needed electric light to perform well, the scuba divers were better at remembering the words they studied underwater when they were tested underwater (relative to being tested on land) — a fact that strikes this landlubber as truly amazing. Just about every time we remember anything, context looms in the background.[10]

This is not always a good thing. As Merlin Mann of the blog “43 Folders” put it, the time when we tend to notice that we need toilet paper tends not to be the moment when we are in a position to buy it. Relying on context works fine if the circumstance in which we need some bit of information matches the circumstance in which we first stored it — but it becomes problematic when there is a mismatch between the original circumstance in which we’ve learned something and the context in which we later need to remember it.

Another consequence of contextual memory is the fact that nearly every bit of information that we hear (or see, touch, taste, or smell), like it or not, triggers some further set of memories — often in ways that float beneath our awareness. The novelist Marcel Proust, who coined the term “involuntary memory,” got part of the idea — the reminiscences in Proust’s famous (and lengthy) novel Remembrance of Things Past were all triggered by a single, consciously recognized combination of taste and smell.

But the reality of automatic, unconscious memory exceeds even that which Proust imagined; emotionally significant smells are only the tip of an astonishing iceberg. Take, for example, an ingenious study run by a former colleague of mine, John Bargh, when he was at New York University. His subjects, all undergraduates, were asked to unscramble a series of sentences. Quietly embedded within the scrambled lists were words related to a common theme, such as old, wise, forgetful, and Florida, designed to elicit the concept of the elderly. The subjects did as they were told, diligently making their way through the task. The real experiment, however, didn’t begin until afterward, when Bargh surreptitiously videotaped subjects as they departed after the test, walking to an elevator down the hall. Remarkably, the words people read affected their walking speed. The subjects all presumably had places to go and people to see, but those who unscrambled words like retired and Florida ambled more slowly than those who hadn’t.

Another lab studied people as they played a trivia game. Those briefly primed by terms like professor or intelligent outperformed those prepped with less lofty expressions, such as soccer hooligans and stupid. All the trash-talking that basketball players do might be more effective than we imagine.

At first, these studies may seem like mere fun and games — stupid pet tricks for humans — but the real-life consequences of priming can be serious. For example, priming can lead minority groups to do worse when cultural stereotypes are made especially salient, and, other things being equal, negative racial stereotypes tend to be primed automatically even in well-intentioned people who report feeling “exactly the same” about whites and blacks. Likewise, priming may reinforce depression, because being in a bad mood primes a person to think about negative things, and this in turn furthers depression. The context-driven nature of memory may also play a role in leading depressed people to seek out depressive activities, such as drinking or listening to songs of lost love, which presumably deepens the gloom as well. So much for intelligent design.

Anchoring our memories in terms of context and cues, rather than specific pre-identified locations, leads to another problem: our memories often blur together. In the first instance, this means that something I learn now can easily interfere with something I knew before: today’s strawberry yogurt can obscure yesterday’s raspberry. Conversely, something I already know, or once knew, can interfere with something new, as in my trouble with acclimating to Rachel K.’s change in surname.

Ultimately, interference can lead to something even worse: false memories. Some of the first direct scientific evidence to establish the human vulnerability to false memories came from a now classic cognitive-psychology study in which people were asked to memorize a series of random dot patterns.

Later, the experimenters showed various dot patterns to the same subjects and asked whether they had seen each one before. People were often tricked by a new pattern that was a sort of composite of the ones viewed previously, claiming they had seen it when in fact they had not.

We now know that these sorts of “false alarms” are common. Try, for example, to memorize the following list of words: bed, rest, awake, tired, dream, wake, snooze, blanket, doze, slumber, snore, nap, peace, yawn, drowsy, nurse, sick, lawyer, medicine, health, hospital, dentist, physician, ill, patient, office, stethoscope, surgeon, clinic, cure.

If you’re like most people, you’ll surely remember the categories of words I’ve just asked you to memorize, but you’ll probably find yourself fuzzy on the details. Do you recall the word dream or sleep (or both, or neither), snooze or tired (or both, or neither)? How about doctor or dentist? Experimental data show that most people are easily flummoxed, frequently falling for words they didn’t see (such as doctor). The same thing appears to happen even with so-called flashbulb memories, which capture events of considerable importance, like 9/11 or the fall of the Berlin Wall. As time passes, it becomes harder and harder to keep particular memories straight, even though we continue to believe, sometimes with great confidence, that they are accurate. Sadly, confidence is no measure of accuracy.

For most species, most of the time, remembering gist rather than detail is enough. If you are a beaver, you need to know how to build dams, but you don’t need to remember where each individual branch is. For most of evolution, the costs and benefits of context-dependent memory worked out fine: fast for gist, poor for detail; so be it.

If you are human, though, things are often different; societies and circumstances sometimes require of us a precision that wasn’t demanded of our ancestors. In the courtroom, for example, it’s not enough to know that some guy committed a crime; we need to know which guy did — which is often more than the average human can remember. Yet until the recent rise of DNA evidence, eyewitness testimony was often treated as the final arbiter; when an honest-looking witness appears confident, juries usually assume that this person speaks the truth.

Such trust is almost certainly misplaced — not because honest people lie, but because even the most honorable witness is just human, saddled with contextually driven memory. Oodles of evidence for this comes from the lab of the psychologist Elizabeth Loftus. In a typical study, Loftus shows her subjects a film of a car accident and asks them afterward what happened. Distortion and interference rule the day. For example, in one experiment, Loftus showed people slides of a car running a stop sign. Subjects who later heard mention of a yield sign would often blend what they saw with what they heard and misremember the car as driving past a yield sign rather than a stop sign.

In another experiment, Loftus asked several different groups of subjects (all of whom had seen a film of another car accident) slightly different questions, such as How fast were the cars going when they hit each other? or How fast were the cars going when they smashed into each other? All that varied from one version to the next was the final verb (hit, smashed, contacted, and so forth). Yet this slight difference in wording was enough to affect people’s memory: subjects who heard verbs like smashed estimated the crash as occurring at 40.8 miles per hour, a significantly greater speed than that reported by those who heard verbs with milder connotations, like hit (34.0) and contacted (31.8). The word smashed cues different memories than hit, subtly influencing people’s estimates.

Both studies confirm what most lawyers already know: questions can “lead witnesses.” This research also makes clear just how unreliable memory can be. As far as we can tell, this pattern holds just as strongly outside the lab. One recent real-world study, admittedly small, concerned people who had been wrongly imprisoned (and were subsequently cleared on the basis of DNA tests). Over 90 percent of their convictions had hinged on faulty eyewitness testimony.

When we consider the evolutionary origins of memory, we can start to understand this problem. Eyewitness testimony is unreliable because our memories are stored in bits and pieces; without a proper system for locating or keeping them together, context affects how well we retrieve them. Expecting human memory to have the fidelity of a video recorder (as juries often do) is patently unrealistic. Memories related to accidents and crimes are, like all memories, vulnerable to distortion.

A memorable line from George Orwell’s novel 1984 states that “Oceania had always been at war with Eurasia” — the irony being, of course, that until recently (in the time frame of the book) Oceania had not in fact been at war with Eurasia. (“As Winston well knew, it was only four years since Oceania had been at war with Eastasia and in alliance with Eurasia.”) The dictators of 1984 manipulate the masses by revising history. This idea is, of course, essential to the book, but when I read it as a smug teenager, I found the whole thing implausible: wouldn’t people remember that the battle lines only recently had been redrawn? Who was fooling whom?

Now I realize that Orwell’s conceit wasn’t so far-fetched. All memories — even those concerning our own history — are constantly being revised. Every time we access a memory, it becomes “labile,” subject to change, and this seems to be true even for memories that seem especially important and firmly established, such as those of political events or our own experiences.

A good, scientifically well documented illustration of how vulnerable autobiographical memory can be took place in 1992, courtesy of the ever-mercurial Ross Perot, an iconoclastic billionaire from Texas who ran for president as an independent candidate. Perot initially attracted a strong following, but suddenly, under fire, he withdrew from the race. At that point an enterprising psychologist named Linda Levine asked Perot followers how they felt about his withdrawal from the campaign. When Perot subsequently reentered the race, Levine had an unanticipated chance to collect follow-up data. Soon after election day, Levine asked people whom they voted for in the end, and how they felt about Perot earlier in the campaign, at the point when he had dropped out. Levine found that people’s memory of their own feelings shifted. Those who returned to Perot when Perot reentered the race tended to whitewash their negative memories of his withdrawal, forgetting how betrayed they had felt, while people who moved on from Perot and ultimately voted for another candidate whitewashed their positive memories of him, as if they had never intended to vote for him in the first place. Orwell would have been proud.[11]

Distortion and interference are just the tip of the iceberg. Any number of things would be a whole lot easier if evolution had simply vested us with postal-code memory. Take, for example, the seemingly trivial task of remembering where you last put your house keys. Nine times out of ten you may get it right, but if you should leave your keys in an atypical spot, all bets are off. An engineer would simply assign a particular memory location (known as a “buffer”) to the geographical coordinates of your keys, update the value whenever you moved them, and voilà: you would never need to search the pockets of the pants you wore yesterday or find yourself locked out of your own home.
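In code, the engineer’s fix takes only a few lines; this is a hypothetical sketch, with the locations invented for illustration:

    keys_location = None                      # the "buffer": one reserved slot for one fact

    def put_keys(place):
        global keys_location
        keys_location = place                 # overwrite; earlier locations cannot interfere

    put_keys("hook by the front door")
    put_keys("pocket of yesterday's jeans")   # an atypical spot, recorded just as faithfully
    print(keys_location)                      # no tug-of-war between recency and frequency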

Alas, precisely because we can’t access memories by exact location, we can’t straightforwardly update specific memories, and we can’t readily “erase” information about where we put our keys in the past. When we place them somewhere other than their usual spot, recency (their most recent location) and frequency (where they’re usually placed) come into conflict, and we may well forget where the keys are. The same problem crops up when we try to remember where we last put our car, our wallet, our phone; it’s simply part of human life. Lacking proper buffers, our memory banks are a bit like a shoebox full of disorganized photographs: recent photos tend on average to be closer to the top, but this is not guaranteed. This shoebox-like system is fine when we want to remember some general concept (say, reliable locations for obtaining food) — in which case, remembering any experience, be it from yesterday or a year ago, might do. But it’s a lousy system for remembering particular, precise bits of information.

The same sort of conflict between recency and frequency explains the near-universal human experience of leaving work with the intention of buying groceries, only to wind up at home, having completely forgotten to stop at the grocery store. The behavior that is common practice (driving home) trumps the recent goal (our spouse’s request that we pick up some milk).

Preventing this sort of cognitive autopilot should have been easy. As any properly trained computer scientist will tell you, driving home and getting groceries are goals, and goals belong on a stack. When a computer is busy with one task and a user presses a key, the first goal (analogous to driving home) is temporarily interrupted by a new goal (getting groceries); the new goal is placed on top of the stack (it becomes top priority) and, once it is completed, it is removed from the stack, returning the old goal to the top. Any number of goals can then be pursued in precisely the right priority sequence. No such luck for us human beings.
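A minimal sketch of that bookkeeping, with the goal names invented for illustration:

    goals = []                        # the stack of pending goals

    goals.append("drive home")        # the standing goal
    goals.append("buy milk")          # the new request goes on top and takes priority

    while goals:
        current = goals.pop()         # always finish whatever is on top first
        print("completed:", current)  # "buy milk" comes first; then "drive home" resurfaces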

Or consider another common quirk of human memory: the fact that our memory for what happened is rarely matched by memory for when it occurred. Whereas computers and videotapes can pinpoint events to the second (when a particular movie was recorded or a particular file was modified), we’re often lucky if we can guess the year in which something happened, even if, say, it was in the headlines for months. Most people my age, for example, were inundated a few years ago with a rather sordid story involving two Olympic figure skaters; the ex-husband of one skater hired a goon to whack the other skater on the knee, in order to ruin the latter skater’s chance at a medal. It’s just the sort of thing the media love, and for nearly six months the story was unavoidable. But if today I asked the average person when it happened, I suspect he or she would have difficulty recalling the year, let alone the specific month.[12]

For something that happened fairly recently, we can get around the problem by using a simple rule of thumb: the more recent the event, the more vivid the memory. But this vividness has its limits: events that have receded more than a couple of months into the past tend to blur together, frequently leaving us chronologically challenged. For example, when regular viewers of the weekly TV news program 60 Minutes were asked to recall when a series of stories aired, viewers could readily distinguish a story presented two months earlier from a story shown only a week before. But stories presented further in the past — say, two years versus four — all faded into an indistinct muddle.

Of course, there is always another workaround. Instead of simply trying to recall when something happened, we can try to infer this information. By a process known as “reconstruction,” we work backward, correlating an event of uncertain date with chronological landmarks that we’re sure of. To take another example ripped from the headlines, if I asked you to name the year in which O. J. Simpson was tried for murder, you’d probably have to guesstimate. As vivid as the proceedings were then, they are now (for me, anyway) beginning to get a bit hazy. Unless you are a trivia buff, you probably can’t remember exactly when the trial happened. Instead, you might reason that it took place before the Monica Lewinsky scandal but after Bill Clinton took office, or that it occurred before you met your significant other but after you went to college. Reconstruction is, to be sure, better than nothing, but compared to a simple time/date stamp, it’s incredibly clumsy.

A kindred problem is reminiscent of the sixth question every reporter must ask. Not who, what, when, where, or why, but how, as in How do I know it? What are my sources? Where did I see that somewhat frightening article about the Bush administration’s desire to invade Iran? Was it in The New Yorker? Or the Economist? Or was it just some paranoid but entertaining blog? For obvious reasons, cognitive psychologists call this sort of memory “source memory.” And source memory, like our memory for times and dates, is, for want of a proper postal code, often remarkably poor. One psychologist, for example, asked a group of test subjects to read aloud a list of random names (such as Sebastian Weisdorf). Twenty-four hours later he asked them to read a second list of names and to identify which ones belonged to famous people and which didn’t. Some were in fact the names of celebrities, and some were made up; the interesting thing is that some were made-up names drawn from the first list. If people had good source memory, they would have spotted the ruse. Instead, most subjects knew they had seen a particular name before, but they had no idea where. Recognizing a name like Sebastian Weisdorf but not recalling where they’d seen it, people mistook Weisdorf for the name of a bona fide celebrity whom they just couldn’t place. The same thing happens, with bigger stakes, when voters forget whether they heard some political rumor on Letterman or read it in the New York Times.

The workaround by which we “reconstruct” memory for dates and times is but one example of the many clumsy techniques that humans use to cope with the lack of postal-code memory. If you Google for “memory tricks,” you’ll find dozens more.

Take, for example, the ancient “method of loci.” If you have a long list of words to remember, you can associate each one with a specific room in a familiar large building: the first word with the vestibule, the second word with the living room, the third word with the dining room, the fourth with the kitchen, and so forth. This trick, which is used in adapted form by all the world’s leading mnemonists, works pretty well, since each room provides a different context for memory retrieval — but it’s still little more than a Band-Aid, one more solution we shouldn’t need in the first place.
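The bookkeeping behind the trick is easy to caricature in code; here is a sketch with an invented building and an invented list:

    rooms = ["vestibule", "living room", "dining room", "kitchen"]
    words_to_remember = ["milk", "stamps", "batteries", "candles"]

    loci = dict(zip(rooms, words_to_remember))   # mentally "place" one word in each room

    # later, a mental walk through the building retrieves the words in order,
    # each room serving as a distinct retrieval cue
    for room in rooms:
        print(room, "->", loci[room])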

Another classical approach, so prominent in rap music, is to use rhyme and meter as an aid to memorization. Homer had his hexameter, Tom Lehrer had his song “The Elements” (“There’s antimony, arsenic, aluminum, selenium, / And hydrogen and oxygen and nitrogen and rhenium…”), and the band They Might Be Giants have their cover of “Why Does the Sun Shine? (The Sun Is a Mass of Incandescent Gas).”

Actors often take these mnemonic devices one step further. Not only do they remind themselves of their next lines by using cues of rhythm, syntax, and rhyme; they also focus on their character’s motivations and actions, as well as those of other characters. Ideally, this takes place automatically. In the words of the actor Michael Caine, the goal is to get immersed in the story, rather than worry about specific lines. “You must be able to stand there not thinking of that line. You take it off the other actor’s face.” Some performers can do this rather well; others struggle with it (or rely on cue cards). The point is, memorizing lines will never be as easy for us as it would be for a computer. We retrieve memorized information not by reading files from a specific sector of the hard drive but by cobbling together as many clues as possible — and hoping for the best.

Even the oldest standby — simple rehearsal, repeating something over and over — is a bit of clumsiness that shouldn’t be necessary. Rote memorization works reasonably well because it exploits the brain’s attachment to memories based on frequently occurring events, but here too the solution is hardly elegant. An ideal memory system would capture information in a single exposure, so we wouldn’t have to waste time with flash cards or lengthy memorization sessions. (Yes, I’ve heard the rumors about the existence of photographic memory, but no, I’ve never seen a well-documented case.)

There’s nothing wrong with mnemonics and no end to the possibilities; any cue can help. But when they fail, we can rely on a different sort of solution — arranging our life to accommodate the limits of our memory. I, for example, have learned through long experience that the only way to deal with my congenital absent-mindedness is to develop habits that reduce the demands on my memory. I always put my keys in the same place, position anything I need to bring to work by the front door, and so forth. To a forgetful guy like me, a PalmPilot is a godsend. But the fact that we can patch together solutions doesn’t mean that our mental mechanisms are well engineered; it is a symptom of the opposite condition. It is only the clumsiness of human memory that necessitates these tricks in the first place.

Given the liabilities of our contextual memory, it’s natural to ask whether its benefits (speed, for example) outweigh the costs. I think not, and not just because the costs are so high, but because it is possible in principle to have the benefits without the costs. The proof is Google (not to mention a dozen other search engines). Search engines start with an underlying substrate of postal-code memory (the well-mapped information they can tap into) and build contextual memory on top. The postal-code foundation guarantees reliability, while the context on top hints at which memories are most likely needed at a given moment. If evolution had started with a system of memory organized by location, I bet that’s exactly what we’d have, and the advantages would be considerable. But our ancestors never made it to that part of the cognitive mountain; once evolution stumbled upon contextual memory, it never wandered far enough away to find another considerably higher peak. As a result, when we need precise, reliable memories, all we can do is fake it — kluging a poor man’s approximation of postal-code memory onto a substrate that doesn’t genuinely provide for it.
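A toy sketch of that layering, with invented documents and a made-up query (a real search engine is, of course, vastly more elaborate):

    documents = {       # the postal-code substrate: every document sits at an exact address
        0: "how to remove tomato sauce stains",
        1: "the countries of Africa, listed alphabetically",
        2: "the first 3000 digits of pi",
    }

    index = {}          # word -> addresses of the documents containing it
    for address, text in documents.items():
        for word in text.split():
            index.setdefault(word, set()).add(address)

    def search(query):
        # the contextual layer: score each address by how many query words it matches,
        # then fetch the winners from the exact store underneath
        scores = {}
        for word in query.split():
            for address in index.get(word, ()):
                scores[address] = scores.get(address, 0) + 1
        ranked = sorted(scores, key=scores.get, reverse=True)
        return [documents[address] for address in ranked]

    print(search("tomato stains"))    # reliable storage underneath, cues doing the ranking on top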

In the final analysis, we would be nowhere without memory; as Steven Pinker once wrote, “To a very great extent, our memories are ourselves.” Yet memory is arguably the mind’s original sin. So much is built on it, and yet it is, especially in comparison to computer memory, wildly unreliable.

In no small part this is because we evolved not as computers but as actors, in the original sense of the word: as organisms that act, entities that perceive the world and behave in response to it. And that led to a memory system attuned to speed more than reliability. In many circumstances, especially those requiring snap decisions, recency, frequency, and context are powerful tools for mediating memory. For our ancestors, who lived almost entirely in the here and now (as virtually all nonhuman life forms still do), quick access to contextually relevant memories of recent events or frequently occurring ones helped navigate the challenges of seeking food or avoiding danger. Likewise, for a rat or a monkey, it is often enough to remember related general information. Concerns about misattribution or bias in courtroom testimony simply don’t apply.

But today, courts, employers, and many other facets of everyday life make demands that our pre-hominid predecessors rarely faced, requiring us to remember specific details, such as where we last put our keys (rather than where we tend, in general, to put them), where we’ve gotten particular information, and who told us what, and when.

To be sure, there will always be those who see our limits as virtues. The memory expert Henry Roediger, for example, has implied that memory errors are the price we pay in order to make inferences. The Harvard psychologist Dan Schacter, meanwhile, has argued that the fractured nature of memory prepares us for the future: “A memory that works by piecing together bits of the past may be better suited to simulating future events than one that is a store of perfect records.” Another common suggestion is that we’re better off because we can’t remember certain things, as if faulty memory would spare us from pain.

These ideas sound nice on the surface, but I don’t see any evidence to support them. The notion that the routine failures of human memory convey some sort of benefit misses an important point: the things that we have trouble remembering aren’t the things we’d like to forget. It’s easy to glibly imagine some kind of optimal state wherein we’d remember only happy thoughts, a bit like Dorothy at the end of The Wizard of Oz. But the truth is that we generally can’t — contrary to Freud — repress memories that we find painful, and we don’t automatically forget them either. What we remember isn’t a function of what we want to remember, and what we forget isn’t a matter of what we want to forget; any war veteran or Holocaust survivor could tell you that. What we remember and what we forget are a function of context, frequency, and recency, not a means of attaining inner peace. It’s possible to imagine a robot that could automatically expunge all unpleasant memories, but we humans are just not built that way.

Similarly, there is no logical relation between having a capacity to make inferences and having a memory that is prone to errors. In principle, it is entirely possible to have both perfect records of past events and a capacity to make inferences about the future. That’s exactly how computer-based weather-forecasting systems work, for example; they extrapolate the future from a reliable set of data about the past. Degrading the quality of their memory wouldn’t improve their predictions, but rather it would undermine them. And there’s no evidence that people with an especially distortion-prone memory are happier than the rest of us, no evidence that they make better inferences or have an edge at predicting the future. If anything, the data suggest the opposite, since having an above-average memory is well correlated with general intelligence.

None of which is to say that there aren’t compensations. We can, for example, have a great deal of fun with what Freud called “free associations”; it’s entertaining to follow the chains of our memories, and we can put that to use in literature and poetry. If connecting trains of thought with chains of ought tickles your fancy, by all means, enjoy! But would we really and truly be better off if our memory was less reliable and more prone to distortion? It’s one thing to make lemonade out of lemons, another to proclaim that lemons are what you’d hope for in the first place.

In the final analysis, the fact that our ability to make inferences is built on rapid but unreliable contextual memory isn’t some optimal tradeoff. It’s just a fact of history: the brain circuits that allow us to make inferences make do with distortion-prone memory because that’s all evolution had to work with. To build a truly reliable memory, fit for the requirements of human deliberate reasoning, evolution would have had to start over. And, despite its power and elegance, that’s the one thing evolution just can’t do.
