FOUR. Paul Van Riper’s Big Victory: Creating Structure for Spontaneity

Paul Van Riper is tall and lean with a gleaming bald dome and wire-rimmed glasses. He walks with his shoulders square and has a gruff, commanding voice. His friends call him Rip. Once when he and his twin brother were twelve, they were sitting in a car with their father as he read a newspaper story about the Korean War. “Well, boys,” he said, “the war’s about to be over. Truman’s sending in the marines.” That’s when Van Riper decided that when he grew up, he would join the Marine Corps. In his first tour in Vietnam, he was almost cut in half by gunfire while taking out a North Vietnamese machine gun in a rice paddy outside Saigon. In 1968, he returned to Vietnam, and this time he was the commander of Mike Company (Third Battalion, Seventh Marines, First Marine Division) in the rice-paddy-and-hill country of South Vietnam between two treacherous regions the marines called Dodge City and the Arizona Territory. There his task was to stop the North Vietnamese from firing rockets into Danang. Before he got there, the rocket attacks in his patrol area were happening once or even twice a week. In the three months he was in the bush, there was only one.

“I remember when I first met him like it was yesterday,” says Richard Gregory, who was Van Riper’s gunnery sergeant in Mike Company. “It was between Hill Fifty-five and Hill Ten, just southeast of Danang. We shook hands. He had that crisp voice, low to middle tones. Very direct. Concise. Confident, without a lot of icing on the cake. That’s how he was, and he maintained that every day of the war. He had an office in our combat area—a hooch—but I never saw him in there. He was always out in the field or out near his bunker, figuring out what to do next. If he had an idea and he had a scrap of paper in his pocket, he would write that idea on the scrap, and then, when we had a meeting, he would pull out seven or eight little pieces of paper. Once he and I were in the jungle a few yards away from a river, and he wanted to reconnoiter over certain areas, but he couldn’t get the view he wanted. The bush was in the way. Damned if he didn’t take off his shoes, dive into the river, swim out to the middle, and tread water so he could see downstream.”

In the first week of November of 1968, Mike Company was engaged in heavy fighting with a much larger North Vietnamese regiment. “At one point we called in a medevac to take out some wounded. The helicopter was landing, and the North Vietnamese army was shooting rockets and killing everybody in the command post,” remembers John Mason, who was one of the company’s platoon commanders. “We suddenly had twelve dead marines. It was bad. We got out of there three or four days later, and we took a number of casualties, maybe forty-five total. But we reached our objective. We got back to Hill Fifty-five, and the very next day, we were working on squad tactics and inspection and, believe it or not, physical training. It had never dawned on me as a young lieutenant that we would do PT in the bush. But we did. It did not dawn on me that we would practice platoon and squad tactics or bayonet training in the bush, but we did. And we did it on a routine basis. After a battle, there would be a brief respite, then we would be back to training. That’s how Rip ran his company.”

Van Riper was strict. He was fair. He was a student of war, with clear ideas about how his men ought to conduct themselves in combat. “He was a gunslinger,” another of his soldiers from Mike Company remembers, “somebody who doesn’t sit behind a desk but leads the troops from the front. He was always very aggressive but in such a way that you didn’t mind doing what he was asking you to do. I remember one time I was out with a squad on a night ambush. I got a call from the skipper [what marines call the company commander] on the radio. He told me that there were one hundred twenty-one little people, meaning Vietnamese, heading toward my position, and my job was to resist them. I said, ‘Skipper, I have nine men.’ He said he would bring out a reactionary force if I needed one. That’s the way he was. The enemy was out there and there may have been nine of us and one hundred twenty-one of them, but there was no doubt in his mind that we had to engage them. Wherever the skipper operated, the enemy was put off by his tactics. He was not ‘live and let live.’”

In the spring of 2000, Van Riper was approached by a group of senior Pentagon officials. He was retired at that point, after a long and distinguished career. The Pentagon was in the earliest stages of planning for a war game that they were calling Millennium Challenge ’02. It was the largest and most expensive war game thus far in history. By the time the exercise was finally staged—in July and early August of 2002, two and a half years later—it would end up costing a quarter of a billion dollars, which is more than some countries spend on their entire defense budget. According to the Millennium Challenge scenario, a rogue military commander had broken away from his government somewhere in the Persian Gulf and was threatening to engulf the entire region in war. He had a considerable power base from strong religious and ethnic loyalties, and he was harboring and sponsoring four different terrorist organizations. He was virulently anti-American. In Millennium Challenge—in what would turn out to be an inspired (or, depending on your perspective, disastrous) piece of casting—Paul Van Riper was asked to play the rogue commander.

1. One Morning in the Gulf

The group that runs war games for the U.S. military is called the Joint Forces Command, or, as it is better known, JFCOM. JFCOM occupies two rather nondescript low-slung concrete buildings at the end of a curving driveway in Suffolk, Virginia, a few hours’ drive south and east of Washington, D.C. Just before the entrance to the parking lot, hidden from the street, is a small guard hut. A chain-link fence rings the perimeter. There is a Wal-Mart across the street. Inside, JFCOM looks like a very ordinary office building, with conference rooms and rows of cubicles and long, brightly lit carpetless corridors. The business of JFCOM, however, is anything but ordinary. JFCOM is where the Pentagon tests new ideas about military organization and experiments with new military strategies.

Planning for the war game began in earnest in the summer of 2000. JFCOM brought together hundreds of military analysts and specialists and software experts. In war game parlance, the United States and its allies are always known as Blue Team, and the enemy is always known as Red Team, and JFCOM generated comprehensive portfolios for each team, covering everything they would be expected to know about their own forces and their adversary’s forces. For several weeks leading up to the game, the Red and Blue forces took part in a series of “spiral” exercises that set the stage for the showdown. The rogue commander was getting more and more belligerent, the United States more and more concerned.

In late July, both sides came to Suffolk and set up shop in the huge, windowless rooms known as test bays on the first floor of the main JFCOM building. Marine Corps, air force, army, and navy units at various military bases around the country stood by to enact the commands of Red and Blue Team brass. Sometimes when Blue Team fired a missile or launched a plane, a missile actually fired or a plane actually took off, and whenever it didn’t, one of forty-two separate computer models simulated each of those actions so precisely that the people in the war room often couldn’t tell it wasn’t real. The game lasted for two and a half weeks. For future analysis, a team of JFCOM specialists monitored and recorded every conversation, and a computer kept track of every bullet fired and missile launched and tank deployed. This was more than an experiment. As became clear less than a year later—when the United States invaded a Middle Eastern state with a rogue commander who had a strong ethnic power base and was thought to be harboring terrorists—this was a full dress rehearsal for war.

The stated purpose of Millennium Challenge was for the Pentagon to test a set of new and quite radical ideas about how to go to battle. In Operation Desert Storm in 1991, the United States had routed the forces of Saddam Hussein in Kuwait. But that was an utterly conventional kind of war: two heavily armed and organized forces meeting and fighting in an open battlefield. In the wake of Desert Storm, the Pentagon became convinced that that kind of warfare would soon be an anachronism: no one would be foolish enough to challenge the United States head-to-head in pure military combat. Conflict in the future would be diffuse. It would take place in cities as often as on battlefields, be fueled by ideas as much as by weapons, and engage cultures and economies as much as armies. As one JFCOM analyst puts it: “The next war is not just going to be military on military. The deciding factor is not going to be how many tanks you kill, how many ships you sink, and how many planes you shoot down. The decisive factor is how you take apart your adversary’s system. Instead of going after war-fighting capability, we have to go after war-making capability. The military is connected to the economic system, which is connected to their cultural system, to their personal relationships. We have to understand the links between all those systems.”

With Millennium Challenge, then, Blue Team was given greater intellectual resources than perhaps any army in history. JFCOM devised something called the Operational Net Assessment, which was a formal decision-making tool that broke the enemy down into a series of systems—military, economic, social, political—and created a matrix showing how all those systems were interrelated and which of the links among the systems were the most vulnerable. Blue Team’s commanders were also given a tool called Effects-Based Operations, which directed them to think beyond the conventional military method of targeting and destroying an adversary’s military assets. They were given a comprehensive, real-time map of the combat situation called the Common Relevant Operational Picture (CROP). They were given a tool for joint interactive planning. They were given an unprecedented amount of information and intelligence from every corner of the U.S. government and a methodology that was logical and systematic and rational and rigorous. They had every toy in the Pentagon’s arsenal.
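
To give a feel for what such a tool computes, here is a minimal sketch, in Python, of the matrix idea behind the Operational Net Assessment: adversary systems, the links between them, and a ranking of which links are most vulnerable. Every system name, weight, and scoring rule below is an invented stand-in; the actual JFCOM tool is not described here in enough detail to reproduce.

    # A toy version of the Operational Net Assessment idea: break the
    # adversary into systems, record how strongly each pair of systems is
    # linked, and rank the links by vulnerability. All names, weights, and
    # the scoring rule are hypothetical illustrations, not JFCOM's method.

    # Hypothetical (dependency strength, ease of disruption) for each linked
    # pair of adversary systems, both on a 0-to-1 scale.
    links = {
        ("military", "economic"):  (0.9, 0.4),
        ("military", "political"): (0.7, 0.3),
        ("economic", "social"):    (0.8, 0.6),
        ("social", "political"):   (0.6, 0.7),
    }

    # Treat a link as most vulnerable when the adversary leans on it heavily
    # AND it is easy to disrupt: score = dependency * ease of disruption.
    vulnerability = {pair: dep * ease for pair, (dep, ease) in links.items()}

    for (a, b), score in sorted(vulnerability.items(), key=lambda kv: -kv[1]):
        print(f"{a} <-> {b}: vulnerability {score:.2f}")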

“We looked at the full array of what we could do to affect our adversary’s environment—political, military, economic, societal, cultural, institutional. All those things we looked at very comprehensively,” the commander of JFCOM, General William F. Kernan, told reporters in a Pentagon press briefing after the war game was over. “There are things that the agencies have right now that can interrupt people’s capabilities. There are things that you can do to disrupt their ability to communicate, to provide power to their people, to influence their national will . . . to take out power grids.” Two centuries ago, Napoleon wrote that “a general never knows anything with certainty, never sees his enemy clearly, and never knows positively where he is.” War was shrouded in fog. The point of Millennium Challenge was to show that, with the full benefit of high-powered satellites and sensors and supercomputers, that fog could be lifted.

This is why, in many ways, the choice of Paul Van Riper to head the opposing Red Team was so inspired, because if Van Riper stood for anything, it was the antithesis of that position. Van Riper didn’t believe you could lift the fog of war. His library on the second floor of his house in Virginia is lined with rows upon rows of works on complexity theory and military strategy. From his own experiences in Vietnam and his reading of the German military theorist Carl von Clausewitz, Van Riper became convinced that war was inherently unpredictable and messy and nonlinear. In the 1980s, Van Riper would often take part in training exercises, and, according to military doctrine, he would be required to perform versions of the kind of analytical, systematic decision making that JFCOM was testing in Millennium Challenge. He hated it. It took far too long. “I remember once,” he says, “we were in the middle of the exercise. The division commander said, ‘Stop. Let’s see where the enemy is.’ We’d been at it for eight or nine hours, and they were already behind us. The thing we were planning for had changed.” It wasn’t that Van Riper hated all rational analysis. It’s that he thought it was inappropriate in the midst of battle, where the uncertainties of war and the pressures of time made it impossible to compare options carefully and calmly.

In the early 1990s, when Van Riper was head of the Marine Corps University at Quantico, Virginia, he became friendly with a man named Gary Klein. Klein ran a consulting firm in Ohio and wrote a book called Sources of Power, which is one of the classic works on decision making. Klein studied nurses in intensive care units, firefighters, and other people who make decisions under pressure, and one of his conclusions is that when experts make decisions, they don’t logically and systematically compare all available options. That is the way people are taught to make decisions, but in real life it is much too slow. Klein’s nurses and firefighters would size up a situation almost immediately and act, drawing on experience and intuition and a kind of rough mental simulation. To Van Riper, that seemed to describe much more accurately how people make decisions on the battlefield.

Once, out of curiosity, Van Riper and Klein and a group of about a dozen Marine Corps generals flew to the Mercantile Exchange in New York to visit the trading floor. Van Riper thought to himself, I’ve never seen this sort of pandemonium except in a military command post in war—we can learn something from this. After the bell rang at the end of the day, the generals went onto the floor and played trading games. Then they took a group of traders from Wall Street across New York Harbor to the military base on Governors Island and played war games on computers. The traders did brilliantly. The war games required them to make decisive, rapid-fire decisions under conditions of high pressure and with limited information, which is, of course, what they did all day at work. Van Riper then took the traders down to Quantico, put them in tanks, and took them on a live-fire exercise. To Van Riper, it seemed clearer and clearer that these “overweight, unkempt, long-haired” guys and the Marine Corps brass were fundamentally engaged in the same business—the only difference being that one group bet on money and the other bet on lives. “I remember the first time the traders met the generals,” Gary Klein says. “It was at the cocktail party, and I saw something that really startled me. You had all these marines, these two- and three-star generals, and you know what a Marine Corps general is like. Some of them had never been to New York. Then there were all these traders, these brash, young New Yorkers in their twenties and thirties, and I looked at the room and there were groups of two and three, and there was not a single group that did not include members of both sides. They weren’t just being polite. They were animatedly talking to each other. They were comparing notes and connecting. I said to myself, These guys are soul mates. They were treating each other with total respect.”

Millennium Challenge, in other words, was not just a battle between two armies. It was a battle between two perfectly opposed military philosophies. Blue Team had their databases and matrixes and methodologies for systematically understanding the intentions and capabilities of the enemy. Red Team was commanded by a man who looked at a long-haired, unkempt, seat-of-the-pants commodities trader yelling and pushing and making a thousand instant decisions an hour and saw in him a soul mate.

On the opening day of the war game, Blue Team poured tens of thousands of troops into the Persian Gulf. They parked an aircraft carrier battle group just offshore of Red Team’s home country. There, with the full weight of its military power in evidence, Blue Team issued an eight-point ultimatum to Van Riper, the eighth point being the demand to surrender. They acted with utter confidence, because their Operational Net Assessment matrixes told them where Red Team’s vulnerabilities were, what Red Team’s next move was likely to be, and what Red Team’s range of options was. But Paul Van Riper did not behave as the computers predicted.

Blue Team knocked out his microwave towers and cut his fiber-optic lines on the assumption that Red Team would now have to use satellite communications and cell phones, which they could then monitor.

“They said that Red Team would be surprised by that,” Van Riper remembers. “Surprised? Any moderately informed person would know enough not to count on those technologies. That’s a Blue Team mind-set. Who would use cell phones and satellites after what happened to Osama bin Laden in Afghanistan? We communicated with couriers on motorcycles, and messages hidden inside prayers. They said, ‘How did you get your airplanes off the airfield without the normal chatter between pilots and the tower?’ I said, ‘Does anyone remember World War Two? We’ll use lighting systems.’”

Suddenly the enemy that Blue Team thought could be read like an open book was a bit more mysterious. What was Red Team doing? Van Riper was supposed to be cowed and overwhelmed in the face of a larger foe. But he was too much of a gunslinger for that. On the second day of the war, he put a fleet of small boats in the Persian Gulf to track the ships of the invading Blue Team navy. Then, without warning, he bombarded them in an hour-long assault with a fusillade of cruise missiles. When Red Team’s surprise attack was over, sixteen American ships lay at the bottom of the Persian Gulf. Had Millennium Challenge been a real war instead of just an exercise, twenty thousand American servicemen and women would have been killed before their own army had even fired a shot.

“As the Red force commander, I’m sitting there and I realize that Blue Team had said that they were going to adopt a strategy of preemption,” Van Riper says. “So I struck first. We’d done all the calculations on how many cruise missiles their ships could handle, so we simply launched more than that, from many different directions, from offshore and onshore, from air, from sea. We probably got half of their ships. We picked the ones we wanted. The aircraft carrier. The biggest cruisers. There were six amphibious ships. We knocked out five of them.”

In the weeks and months that followed, there were numerous explanations from the analysts at JFCOM about exactly what happened that day in July. Some would say that it was an artifact of the particular way war games are run. Others would say that in real life, the ships would never have been as vulnerable as they were in the game. But none of the explanations change the fact that Blue Team suffered a catastrophic failure. The rogue commander did what rogue commanders do. He fought back, yet somehow this fact caught Blue Team by surprise. In a way, it was a lot like the kind of failure suffered by the Getty when it came to evaluating the kouros: they had conducted a thoroughly rational and rigorous analysis that covered every conceivable contingency, yet that analysis somehow missed a truth that should have been picked up instinctively. In that moment in the Gulf, Red Team’s powers of rapid cognition were intact—and Blue Team’s were not. How did that happen?

2. The Structure of Spontaneity

One Saturday evening not long ago, an improvisation comedy group called Mother took the stage in a small theater in the basement of a supermarket on Manhattan’s West Side. It was a snowy evening just after Thanksgiving, but the room was full. There are eight people in Mother, three women and five men, all in their twenties and thirties. The stage was bare except for a half dozen white folding chairs. Mother was going to perform what is known in the improv world as a Harold. They would get up onstage, without any idea whatsoever of what character they would be playing or what plot they would be acting out, take a random suggestion from the audience, and then, without so much as a moment’s consultation, make up a thirty-minute play from scratch.

One of the group members called out to the audience for a suggestion. “Robots,” someone yelled back. In improv, the suggestion is rarely taken literally, and in this case, Jessica, the actress who began the action, said later that the thing that came to mind when she heard the word “robots” was emotional detachment and the way technology affects relationships. So, right then and there, she walked onstage, pretending to read a bill from the cable television company. There was one other person onstage with her, a man seated in a chair with his back to her. They began to talk. Did he know what character he was playing at that moment? Not at all; nor did she or anyone in the audience. But somehow it emerged that she was the wife, and the man was her husband, and she had found charges on the cable bill for porn movies and was distraught. He, in turn, responded by blaming their teenaged son, and after a spirited back-and-forth, two more actors rushed onstage, playing two different characters in the same narrative. One was a psychiatrist helping the family with their crisis. In another scene, an actor angrily slumped in a chair. “I’m doing time for a crime I didn’t commit,” the actor said. He was the couple’s son. At no time as the narrative unfolded did anyone stumble or freeze or look lost. The action proceeded as smoothly as if the actors had rehearsed for days. Sometimes what was said and done didn’t quite work. But often it was profoundly hilarious, and the audience howled with delight. And at every point it was riveting: here was a group of eight people up on a stage without a net, creating a play before our eyes.

Improvisation comedy is a wonderful example of the kind of thinking that Blink is about. It involves people making very sophisticated decisions on the spur of the moment, without the benefit of any kind of script or plot. That’s what makes it so compelling and—to be frank—terrifying. If I were to ask you to perform in a play that I’d written, before a live audience, even with a month of rehearsal, I suspect that most of you would say no. What if you got stage fright? What if you forgot your lines? What if the audience booed? But at least a conventional play has structure. Every word and movement has been scripted. Every performer gets to rehearse. There’s a director in charge, telling everyone what to do. Now suppose that I were to ask you to perform again before a live audience—only this time without a script, without any clue as to what part you were playing or what you were supposed to say, and with the added requirement that you were expected to be funny. I’m quite sure you’d rather walk on hot coals. What is terrifying about improv is the fact that it appears utterly random and chaotic. It seems as though you have to get up onstage and make everything up, right there on the spot.

But the truth is that improv isn’t random and chaotic at all. If you were to sit down with the cast of Mother, for instance, and talk to them at length, you’d quickly find out that they aren’t all the sort of zany, impulsive, free-spirited comedians that you might imagine them to be. Some are quite serious, even nerdy. Every week they get together for a lengthy rehearsal. After each show they gather backstage and critique each other’s performance soberly. Why do they practice so much? Because improv is an art form governed by a series of rules, and they want to make sure that when they’re up onstage, everyone abides by those rules. “We think of what we’re doing as a lot like basketball,” one of the Mother players said, and that’s an apt analogy. Basketball is an intricate, high-speed game filled with split-second, spontaneous decisions. But that spontaneity is possible only when everyone first engages in hours of highly repetitive and structured practice—perfecting their shooting, dribbling, and passing and running plays over and over again—and agrees to play a carefully defined role on the court. This is the critical lesson of improv, too, and it is also a key to understanding the puzzle of Millennium Challenge: spontaneity isn’t random. Paul Van Riper’s Red Team did not come out on top in that moment in the Gulf because they were smarter or luckier than their counterparts over at Blue Team. How good people’s decisions are under the fast-moving, high-stress conditions of rapid cognition is a function of training and rules and rehearsal.

One of the most important of the rules that make improv possible, for example, is the idea of agreement, the notion that a very simple way to create a story—or humor—is to have characters accept everything that happens to them. As Keith Johnstone, one of the founders of improv theater, writes: “If you’ll stop reading for a moment and think of something you wouldn’t want to happen to you, or to someone you love, then you’ll have thought of something worth staging or filming. We don’t want to walk into a restaurant and be hit in the face by a custard pie, and we don’t want to suddenly glimpse Granny’s wheelchair racing towards the edge of a cliff, but we’ll pay money to attend enactments of such events. In life, most of us are highly skilled at suppressing action. All the improvisation teacher has to do is to reverse this skill and he creates very ‘gifted’ improvisers. Bad improvisers block action, often with a high degree of skill. Good improvisers develop action.”

Here, for instance, is an improvised exchange between two actors in a class that Johnstone was teaching:

A: I’m having trouble with my leg.

B: I’m afraid I’ll have to amputate.

A: You can’t do that, Doctor.

B: Why not?

A: Because I’m rather attached to it.

B: (Losing heart) Come on, man.

A: I’ve got this growth on my arm too, Doctor.

The two actors involved in this scene quickly became very frustrated. They couldn’t keep the scene going. Actor A had made a joke—and a rather clever one (“I’m rather attached to it”)—but the scene itself wasn’t funny. So Johnstone stopped them and pointed out the problem. Actor A had violated the rule of agreement. His partner had made a suggestion, and he had turned it down. He had said, “You can’t do that, Doctor.” So the two started again, only this time with a renewed commitment to agreeing:

A: Augh!

B: Whatever is it, man?

A: It’s my leg, Doctor.

B: This looks nasty. I shall have to amputate.

A: It’s the one you amputated last time, Doctor.

B: You mean you’ve got a pain in your wooden leg?

A: Yes, Doctor.

B: You know what this means?

A: Not woodworm, Doctor!

B: Yes. We’ll have to remove it before it spreads to the rest of you.

(A’s chair collapses.)

B: My God! It’s spreading to the furniture!

Here are the same two people, with the same level of skill, playing exactly the same roles, and beginning almost exactly the same way. However, in the first case, the scene comes to a premature end, and in the second case, the scene is full of possibility. By following a simple rule, A and B became funny. “Good improvisers seem telepathic; everything looks pre-arranged,” Johnstone writes. “This is because they accept all offers made—which is something no ‘normal’ person would do.”

Here’s one more example, from a workshop conducted by Del Close, another of the fathers of improv. One actor is playing a police officer, the other a robber he’s chasing.

Cop: (Panting) Hey—I’m fifty years old and a little overweight. Can we stop and rest for a minute?

Robber: (Panting) You’re not gonna grab me if we rest?

Cop: Promise. Just for a few seconds—on the count of three. One, two, three.

Do you have to be particularly quick-witted or clever or light on your feet to play that scene? Not really. It’s a perfectly straightforward conversation. The humor arises entirely out of how steadfastly the participants adhere to the rule that no suggestion can be denied. If you can create the right framework, all of a sudden, engaging in the kind of fluid, effortless, spur-of-the-moment dialogue that makes for good improv theater becomes a lot easier. This is what Paul Van Riper understood in Millennium Challenge. He didn’t just put his team up onstage and hope and pray that funny dialogue popped into their heads. He created the conditions for successful spontaneity.

3. The Perils of Introspection

On Paul Van Riper’s first tour in Southeast Asia, when he was out in the bush, serving as an advisor to the South Vietnamese, he would often hear gunfire in the distance. He was then a young lieutenant new to combat, and his first thought was always to get on the radio and ask the troops in the field what was happening. After several weeks of this, however, he realized that the people he was calling on the radio had no more idea than he did about what the gunfire meant. It was just gunfire. It was the beginning of something—but what that something was was not yet clear. So Van Riper stopped asking. On his second tour of Vietnam, whenever he heard gunfire, he would wait. “I would look at my watch,” Van Riper says, “and the reason I looked was that I wasn’t going to do a thing for five minutes. If they needed help, they were going to holler. And after five minutes, if things had settled down, I still wouldn’t do anything. You’ve got to let people work out the situation and work out what’s happening. The danger in calling is that they’ll tell you anything to get you off their backs, and if you act on that and take it at face value, you could make a mistake. Plus you are diverting them. Now they are looking upward instead of downward. You’re preventing them from resolving the situation.”

Van Riper carried this lesson with him when he took over the helm of Red Team. “The first thing I told our staff is that we would be in command and out of control,” Van Riper says, echoing the words of the management guru Kevin Kelly. “By that, I mean that the overall guidance and the intent were provided by me and the senior leadership, but the forces in the field wouldn’t depend on intricate orders coming from the top. They were to use their own initiative and be innovative as they went forward. Almost every day, the commander of the Red air forces came up with different ideas of how he was going to pull this together, using these general techniques of trying to overwhelm Blue Team from different directions. But he never got specific guidance from me of how to do it. Just the intent.”

Once the fighting started, Van Riper didn’t want introspection. He didn’t want long meetings. He didn’t want explanations. “I told our staff that we would use none of the terminology that Blue Team was using. I never wanted to hear that word ‘effects,’ except in a normal conversation. I didn’t want to hear about Operational Net Assessment. We would not get caught up in any of these mechanistic processes. We would use the wisdom, the experience, and the good judgment of the people we had.”

This kind of management system clearly has its risks. It meant Van Riper didn’t always have a clear idea of what his troops were up to. It meant he had to place a lot of trust in his subordinates. It was, by his own admission, a “messy” way to make decisions. But it had one overwhelming advantage: allowing people to operate without having to explain themselves constantly turns out to be like the rule of agreement in improv. It enables rapid cognition.

Let me give you a very simple example of this. Picture, in your mind, the face of the waiter or waitress who served you the last time you ate at a restaurant, or the person who sat next to you on the bus today. Any stranger whom you’ve seen recently will do. Now, if I were to ask you to pick that person out of a police lineup, could you do it? I suspect you could. Recognizing someone’s face is a classic example of unconscious cognition. We don’t have to think about it. Faces just pop into our minds. But suppose I were to ask you to take a pen and paper and write down in as much detail as you can what your person looks like. Describe her face. What color was her hair? What was she wearing? Was she wearing any jewelry? Believe it or not, you will now do a lot worse at picking that face out of a lineup. This is because the act of describing a face has the effect of impairing your otherwise effortless ability to subsequently recognize that face.

The psychologist Jonathan W. Schooler, who pioneered research on this effect, calls it verbal overshadowing. Your brain has a part (the left hemisphere) that thinks in words, and a part (the right hemisphere) that thinks in pictures, and what happened when you described the face in words was that your actual visual memory was displaced. Your thinking was bumped from the right to the left hemisphere. When you were faced with the lineup the second time around, what you were drawing on was your memory of what you said the waitress looked like, not your memory of what you saw she looked like. And that’s a problem because when it comes to faces, we are an awful lot better at visual recognition than we are at verbal description. If I were to show you a picture of Marilyn Monroe or Albert Einstein, you’d recognize both faces in a fraction of a second. My guess is that right now you can “see” them both almost perfectly in your imagination. But how accurately can you describe them? If you wrote a paragraph on Marilyn Monroe’s face, without telling me whom you were writing about, could I guess who it was? We all have an instinctive memory for faces. But by forcing you to verbalize that memory—to explain yourself—I separate you from those instincts.

Recognizing faces sounds like a very specific process, but Schooler has shown that the implications of verbal overshadowing carry over to the way we solve much broader problems. Consider the following puzzle:

A man and his son are in a serious car accident. The father is killed, and the son is rushed to the emergency room. Upon arrival, the attending doctor looks at the child and gasps, “This child is my son!” Who is the doctor?

This is an insight puzzle. It’s not like a math or a logic problem that can be worked out systematically with pencil and paper. The only way you can get the answer is if it comes to you suddenly in the blink of an eye. You need to make a leap beyond the automatic assumption that doctors are always men. They aren’t always, of course. The doctor is the boy’s mother! Here’s another insight puzzle:

A giant inverted steel pyramid is perfectly balanced on its point. Any movement of the pyramid will cause it to topple over. Underneath the pyramid is a $100 bill. How do you remove the bill without disturbing the pyramid?

Think about this problem for a few moments. Then, after a minute or so, write down, in as much detail as you can, everything you can remember about how you were trying to solve the problem—your strategy, your approach, or any solutions you’ve thought of. When Schooler did this experiment with a whole sheet of insight puzzles, he found that people who were asked to explain themselves ended up solving 30 percent fewer problems than those who weren’t. In short, when you write down your thoughts, your chances of having the flash of insight you need in order to come up with a solution are significantly impaired—just as describing the face of your waitress made you unable to pick her out of a police lineup. (The solution to the pyramid problem, by the way, is to destroy the bill in some way—tear it or burn it.)

With a logic problem, asking people to explain themselves doesn’t impair their ability to come up with the answers. In some cases, in fact, it may help. But problems that require a flash of insight operate by different rules. “It’s the same kind of paralysis through analysis you find in sports contexts,” Schooler says. “When you start becoming reflective about the process, it undermines your ability. You lose the flow. There are certain kinds of fluid, intuitive, nonverbal kinds of experience that are vulnerable to this process.” As human beings, we are capable of extraordinary leaps of insight and instinct. We can hold a face in memory, and we can solve a puzzle in a flash. But what Schooler is saying is that all these abilities are incredibly fragile. Insight is not a lightbulb that goes off inside our heads. It is a flickering candle that can easily be snuffed out.

Gary Klein, the decision-making expert, once did an interview with a fire department commander in Cleveland as part of a project to get professionals to talk about times when they had to make tough, split-second decisions. The story the fireman told was about a seemingly routine call he had taken years before, when he was a lieutenant. The fire was in the back of a one-story house in a residential neighborhood, in the kitchen. The lieutenant and his men broke down the front door, laid down their hose, and then, as firemen say, “charged the line,” dousing the flames in the kitchen with water. Something should have happened at that point: the fire should have abated. But it didn’t. So the men sprayed again. Still, it didn’t seem to make much difference. The firemen retreated back through the archway into the living room, and there, suddenly, the lieutenant thought to himself, There’s something wrong. He turned to his men. “Let’s get out, now!” he said, and moments after they did, the floor on which they had been standing collapsed. The fire, it turned out, had been in the basement.

“He didn’t know why he had ordered everyone out,” Klein remembers. “He believed it was ESP. He was serious. He thought he had ESP, and he felt that because of that ESP, he’d been protected throughout his career.”

Klein is a decision researcher with a Ph.D., a deeply intelligent and thoughtful man, and he wasn’t about to accept that as an answer. Instead, for the next two hours, again and again he led the firefighter back over the events of that day in an attempt to document precisely what the lieutenant did and didn’t know. “The first thing was that the fire didn’t behave the way it was supposed to,” Klein says. Kitchen fires should respond to water. This one didn’t. “Then they moved back into the living room,” Klein went on. “He told me that he always keeps his earflaps up because he wants to get a sense of how hot the fire is, and he was surprised at how hot this one was. A kitchen fire shouldn’t have been that hot. I asked him, ‘What else?’ Often a sign of expertise is noticing what doesn’t happen, and the other thing that surprised him was that the fire wasn’t noisy. It was quiet, and that didn’t make sense given how much heat there was.”

In retrospect all those anomalies make perfect sense. The fire didn’t respond to being sprayed in the kitchen because it wasn’t centered in the kitchen. It was quiet because it was muffled by the floor. The living room was hot because the fire was underneath the living room, and heat rises. At the time, though, the lieutenant made none of those connections consciously. All of his thinking was going on behind the locked door of his unconscious. This is a beautiful example of thin-slicing in action. The fireman’s internal computer effortlessly and instantly found a pattern in the chaos. But surely the most striking fact about that day is how close it all came to disaster. Had the lieutenant stopped and discussed the situation with his men, had he said to them, let’s talk this over and try to figure out what’s going on, had he done, in other words, what we often think leaders are supposed to do to solve difficult problems, he might have destroyed his ability to jump to the insight that saved their lives.

In Millennium Challenge, this is exactly the mistake that Blue Team made. They had a system in place that forced their commanders to stop and talk things over and figure out what was going on. That would have been fine if the problem in front of them demanded logic. But instead, Van Riper presented them with something different. Blue Team thought they could listen to Van Riper’s communications. But he started sending messages by couriers on motorcycles. They thought he couldn’t launch his planes. But he borrowed a forgotten technique from World War II and used lighting systems. They thought he couldn’t track their ships. But he flooded the Gulf with little PT boats. And then, on the spur of the moment, Van Riper’s field commanders attacked, and all of a sudden what Blue Team thought was a routine “kitchen fire” was something they could not factor into their equations at all. They needed to solve an insight problem, but their powers of insight had been extinguished.

“What I heard is that Blue Team had all these long discussions,” Van Riper says. “They were trying to decide what the political situation was like. They had charts with up arrows and down arrows. I remember thinking, Wait a minute. You were doing that while you were fighting? They had all these acronyms. The elements of national power were diplomatic, informational, military, and economic. That gives you DIME. They would always talk about the Blue DIME. Then there were the political, military, economic, social, infrastructure, and information instruments, PMESII. So they’d have these terrible conversations where it would be our DIME versus their PMESII. I wanted to gag. What are you talking about? You know, you get caught up in forms, in matrixes, in computer programs, and it just draws you in. They were so focused on the mechanics and the process that they never looked at the problem holistically. In the act of tearing something apart, you lose its meaning.”

“The Operational Net Assessment was a tool that was supposed to allow us to see all, know all,” Major General Dean Cash, one of the senior JFCOM officials involved in the war game, admitted afterward. “Well, obviously it failed.”

4. A Crisis in the ER

On West Harrison Street in Chicago, two miles west of the city’s downtown, there is an ornate, block-long building designed and built in the early part of the last century. For the better part of one hundred years, this was the home of Cook County Hospital. It was here that the world’s first blood bank opened, where cobalt-beam therapy was pioneered, where surgeons once reattached four severed fingers, and where the trauma center was so famous—and so busy treating the gunshot wounds and injuries of the surrounding gangs—that it inspired the television series ER. In the late 1990s, however, Cook County Hospital started a project that may one day earn the hospital as much acclaim as any of those earlier accomplishments. Cook County changed the way its physicians diagnose patients coming to the ER complaining of chest pain, and how and why they did that offers another way of understanding Paul Van Riper’s unexpected triumph in Millennium Challenge.

Cook County’s big experiment began in 1996, a year after a remarkable man named Brendan Reilly came to Chicago to become chairman of the hospital’s Department of Medicine. The institution that Reilly inherited was a mess. As the city’s principal public hospital, Cook County was the place of last resort for the hundreds of thousands of Chicagoans without health insurance. Resources were stretched to the limit. The hospital’s cavernous wards were built for another century. There were no private rooms, and patients were separated by flimsy plywood dividers. There was no cafeteria or private telephone—just a payphone for everyone at the end of the hall. In one possibly apocryphal story, doctors once trained a homeless man to do routine lab tests because there was no one else available.

“In the old days,” says one physician at the hospital, “if you wanted to examine a patient in the middle of the night, there was only one light switch, so if you turned on the light, the whole ward lit up. It wasn’t until the mid-seventies that they got individual bed lights. Because it wasn’t air-conditioned, they had these big fans, and you can imagine the racket they made. There would be all kinds of police around because Cook County was where they brought patients from the jails, so you’d see prisoners shackled to the beds. The patients would bring in TVs and radios, and they would be blaring, and people would sit out in the hallways like they were sitting on a porch on a summer evening. There was only one bathroom for these hallways filled with patients, so people would be walking up and down, dragging their IVs. Then there were the nurses’ bells that you buzzed to get a nurse. But of course there weren’t enough nurses, so the bells would constantly be going, ringing and ringing. Try listening to someone’s heart or lungs in that setting. It was a crazy place.”

Reilly had begun his medical career at the medical center at Dartmouth College, a beautiful, prosperous state-of-the-art hospital nestled in the breezy, rolling hills of New Hampshire. West Harrison Street was another world. “The first summer I was here was the summer of ninety-five, when Chicago had a heat wave that killed hundreds of people, and of course the hospital wasn’t air-conditioned,” Reilly remembers. “The heat index inside the hospital was a hundred and twenty. We had patients—sick patients—trying to live in that environment. One of the first things I did was grab one of the administrators and just walk her down the hall and have her stand in the middle of one of the wards. She lasted about eight seconds.”

The list of problems Reilly faced was endless. But the Emergency Department (the ED) seemed to cry out for special attention. Because so few Cook County patients had health insurance, most of them entered the hospital through the Emergency Department, and the smart patients would come first thing in the morning and pack a lunch and a dinner. There were long lines down the hall. The rooms were jammed. A staggering 250,000 patients came through the ED every year.

“A lot of times,” says Reilly, “I’d have trouble even walking through the ED. It was one gurney on top of another. There was constant pressure about how to take care of these folks. The sick ones had to be admitted to the hospital, and that’s when it got interesting. It’s a system with constrained resources. How do you figure out who needs what? How do you figure out how to direct resources to those who need them the most?” A lot of those people were suffering from asthma, because Chicago has one of the worst asthma problems in the United States. So Reilly worked with his staff to develop specific protocols for efficiently treating asthma patients, and another set of programs for treating the homeless.

But from the beginning, the question of how to deal with heart attacks was front and center. A significant number of those people filing into the ED—on average, about thirty a day—were worried that they were having a heart attack. And those thirty used more than their share of beds and nurses and doctors and stayed around a lot longer than other patients. Chest-pain patients were resource-intensive. The treatment protocol was long and elaborate and—worst of all—maddeningly inconclusive.

A patient comes in clutching his chest. A nurse takes his blood pressure. A doctor puts a stethoscope on his chest and listens for the distinctive crinkling sound that will tell her whether the patient has fluid in his lungs—a sure sign that his heart is having trouble keeping up its pumping responsibilities. She asks him a series of questions: How long have you been experiencing chest pain? Where does it hurt? Is the pain worse when you exercise? Have you had heart trouble before? What’s your cholesterol level? Do you use drugs? Do you have diabetes (which has a powerful association with heart disease)? Then a technician comes in, pushing a small device the size of a desktop computer printer on a trolley. She places small plastic stickers with hooks on them at precise locations on the patient’s arms and chest. An electrode clipped to each sticker “reads” the electrical activity of his heart, and the device prints out the pattern on a sheet of pink graph paper. This is the electrocardiogram. In theory, a healthy patient’s heart will produce a distinctive—and consistent—pattern on the page that looks like the profile of a mountain range. And if the patient is having heart trouble, the pattern will be distorted. Lines that usually go up may now be moving down. Lines that once were curved may now be flat or elongated or spiked, and if the patient is in the throes of a heart attack, the ECG readout is supposed to form two very particular and recognizable patterns. The key words, though, are “supposed to.” The ECG is far from perfect. Sometimes someone with an ECG that looks perfectly normal can be in serious trouble, and sometimes someone with an ECG that looks terrifying can be perfectly healthy. There are ways to tell with absolute certainty whether someone is having a heart attack, but those involve tests for particular enzymes, and the results can take hours. And the doctor confronted in the emergency room with a patient in agony and another hundred patients in a line down the corridor doesn’t have hours. So when it comes to chest pain, doctors gather as much information as they can, and then they make an estimate.

The problem with that estimate, though, is that it isn’t very accurate. One of the things Reilly did early in his campaign at Cook, for instance, was to put together twenty perfectly typical case histories of people with chest pain and give the histories to a group of doctors—cardiologists, internists, emergency room docs, and medical residents—people, in other words, who had lots of experience making estimates about chest pain. The point was to see how much agreement there was about who among the twenty cases was actually having a heart attack. What Reilly found was that there really wasn’t any agreement at all. The answers were all over the map. The same patient might be sent home by one doctor and checked into intensive care by another. “We asked the doctors to estimate on a scale of zero to one hundred the probability that each patient was having an acute myocardial infarction [heart attack] and the odds that each patient would have a major life-threatening complication in the next three days,” Reilly says. “In each case, the answers we got pretty much ranged from zero to one hundred. It was extraordinary.”

The doctors thought they were making reasoned judgments. But in reality they were making something that looked a lot more like a guess, and guessing, of course, leads to mistakes. Somewhere between 2 and 8 percent of the time in American hospitals, a patient having a genuine heart attack gets sent home—because the doctor doing the examination thinks for some reason that the patient is healthy. More commonly, though, doctors correct for their uncertainty by erring heavily on the side of caution. As long as there is a chance that someone might be having a heart attack, why take even the smallest risk by ignoring her problem?

“Say you’ve got a patient who presents to the ER complaining of severe chest pain,” Reilly says. “He’s old and he smokes and he has high blood pressure. There are lots of things to make you think, Gee, it’s his heart. But then, after evaluating the patient, you find out his ECG is normal. What do you do? Well, you probably say to yourself, This is an old guy with a lot of risk factors who’s having chest pain. I’m not going to trust the ECG.” In recent years, the problem has gotten worse because the medical community has done such a good job of educating people about heart attacks that patients come running to the hospital at the first sign of chest pain. At the same time, the threat of malpractice has made doctors less and less willing to take a chance on a patient, with the result that these days only about 10 percent of those admitted to a hospital on suspicion of having a heart attack actually have a heart attack.

This, then, was Reilly’s problem. He wasn’t back at Dartmouth or over in one of the plush private hospitals on Chicago’s north side, where money wasn’t an issue. He was at Cook County. He was running the Department of Medicine on a shoestring. Yet every year, the hospital found itself spending more and more time and money on people who were not actually having a heart attack. A single bed in Cook County’s coronary care unit, for instance, cost roughly $2,000 a night—and a typical chest pain patient might stay for three days—yet the typical chest pain patient might have nothing, at that moment, wrong with him. Is this, the doctors at Cook County asked themselves, any way to run a hospital?

“The whole sequence began in 1996,” Reilly says. “We just didn’t have the number of beds we needed to deal with patients with chest pain. We were constantly fighting about which patient needs what.” Cook County at that time had eight beds in its coronary care unit, and another twelve beds in what’s called intermediate coronary care, which is a ward that’s a little less intensive and cheaper to run (about $1,000 a night instead of $2,000) and staffed by nurses instead of cardiologists. But that wasn’t enough beds. So they opened another section, called the observation unit, where they could put a patient for half a day or so under the most basic care. “We created a third, lower-level option and said, ‘Let’s watch this. Let’s see if it helps.’ But pretty soon what happened is that we started fighting about who gets into the observation unit,” Reilly went on. “I’d be getting phone calls all through the night. It was obvious that there was no standardized, rational way of making this decision.”

Reilly is a tall man with a runner’s slender build. He was raised in New York City, the product of a classical Jesuit education: Regis for high school, where he had four years of Latin and Greek, and Fordham University for college, where he read everything from the ancients to Wittgenstein and Heidegger and thought about an academic career in philosophy before settling on medicine. Once, as an assistant professor at Dartmouth, Reilly grew frustrated with the lack of any sort of systematic textbook on the everyday problems that doctors encounter in the outpatient setting—things like dizziness, headaches, and abdominal pain. So he sat down and, in his free evenings and weekends, wrote an eight-hundred-page textbook on the subject, painstakingly reviewing the available evidence for the most common problems a general practitioner might encounter. “He’s always exploring different topics, whether it’s philosophy or Scottish poetry or the history of medicine,” says his friend and colleague Arthur Evans, who worked with Reilly on the chest pain project. “He’s usually reading five books at once, and when he took a sabbatical leave when he was at Dartmouth, he spent the time writing a novel.”

No doubt Reilly could have stayed on the East Coast, writing one paper after another in air-conditioned comfort on this or that particular problem. But he was drawn to Cook County. The thing about a hospital that serves only the poorest and the neediest is that it attracts the kinds of nurses and doctors who want to serve the poorest and neediest—and Reilly was one of those. The other thing about Cook County was that because of its relative poverty, it was a place where it was possible to try something radical—and what better place to go for someone interested in change?

Reilly’s first act was to turn to the work of a cardiologist named Lee Goldman. In the 1970s, Goldman got involved with a group of mathematicians who were very interested in developing statistical rules for telling apart things like subatomic particles. Goldman wasn’t much interested in physics, but it struck him that some of the same mathematical principles the group was using might be helpful in deciding whether someone was suffering a heart attack. So he fed hundreds of cases into a computer, looking at what kinds of things actually predicted a heart attack, and came up with an algorithm—an equation—that he believed would take much of the guesswork out of treating chest pain. Doctors, he concluded, ought to combine the evidence of the ECG with three of what he called urgent risk factors: (1) Is the pain felt by the patient unstable angina? (2) Is there fluid in the patient’s lungs? and (3) Is the patient’s systolic blood pressure below 100?

For each combination of risk factors, Goldman drew up a decision tree that recommended a treatment option. For example, a patient with a normal ECG who was positive on all three urgent risk factors would go to the intermediate unit; a patient whose ECG showed acute ischemia (that is, the heart muscle wasn’t getting enough blood) but who had either one or no risk factors would be considered low-risk and go to the short-stay unit; someone with an ECG positive for ischemia and two or three risk factors would be sent directly to the cardiac care unit—and so on.
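For readers who think in code, the branching logic Goldman describes can be captured in a few lines. What follows is a minimal sketch in Python, grounded only in the three combinations named above; the function names, the unit labels, and the fallback branches for the combinations the text doesn't spell out are illustrative assumptions, not the published rule.

```python
# A minimal sketch of the Goldman-style decision tree described above.
# Only the three branches named in the text are grounded in the source;
# the remaining combinations are filled with assumed, conservative
# placements and should not be read as the published rule.

def urgent_risk_count(unstable_angina: bool,
                      fluid_in_lungs: bool,
                      systolic_bp: float) -> int:
    """Count how many of Goldman's three urgent risk factors are present."""
    return sum([unstable_angina, fluid_in_lungs, systolic_bp < 100])

def assign_unit(ecg_shows_ischemia: bool, risk_factors: int) -> str:
    """Map the ECG evidence plus the risk-factor count to a treatment unit."""
    if ecg_shows_ischemia:
        if risk_factors >= 2:
            return "coronary care unit"        # named in the text
        return "short-stay unit"               # one or no risk factors: low risk
    if risk_factors == 3:
        return "intermediate unit"             # named in the text
    return "short-stay / observation unit"     # assumed fallback: normal ECG
```

The striking thing is how little the function needs: one ECG reading and three yes-or-no questions.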

Goldman worked on his decision tree for years, steadily refining and perfecting it. But at the end of his scientific articles, there was always a plaintive sentence about how much more hands-on, real-world research needed to be done before the decision tree could be used in clinical practice. As the years passed, however, no one volunteered to do that research—not even at Harvard Medical School, where Goldman began his work, or at the equally prestigious University of California at San Francisco, where he completed it. For all the rigor of his calculations, it seemed that no one wanted to believe what he was saying, that an equation could perform better than a trained physician.

Ironically, a big chunk of the funding for Goldman’s initial research had come not from the medical community itself but from the navy. Here was a man trying to come up with a way to save lives and improve the quality of care in every hospital in the country and save billions of dollars in health care costs, and the only group that got excited was the Pentagon. Why? For the most arcane of reasons: If you are in a submarine at the bottom of the ocean, quietly snooping in enemy waters, and one of your sailors starts suffering from chest pain, you really want to know whether you need to surface (and give away your position) in order to rush him to a hospital or whether you can stay underwater and just send him to his bunk with a couple of Rolaids.

But Reilly shared none of the medical community’s qualms about Goldman’s findings. He was in a crisis. He took Goldman’s algorithm, presented it to the doctors in the Cook County ED and the doctors in the Department of Medicine, and announced that he was holding a bake-off. For the first few months, the staff would use their own judgment in evaluating chest pain, the way they always had. Then they would use Goldman’s algorithm, and the diagnosis and outcome of every patient treated under the two systems would be compared. For two years, data were collected, and in the end, the result wasn’t even close. Goldman’s rule won hands down in two directions: it was a whopping 70 percent better than the old method at recognizing the patients who weren’t actually having a heart attack. At the same time, it was safer. The whole point of chest pain prediction is to make sure that patients who end up having major complications are assigned right away to the coronary and intermediate units. Left to their own devices, the doctors guessed right on the most serious patients somewhere between 75 and 89 percent of the time. The algorithm guessed right more than 95 percent of the time. For Reilly, that was all the evidence he needed. He went to the ED and changed the rules. In 2001, Cook County Hospital became one of the first medical institutions in the country to devote itself full-time to the Goldman algorithm for chest pain, and if you walk into the Cook County ER, you’ll see a copy of the heart attack decision tree posted on the wall.

5. When Less Is More

Why is the Cook County experiment so important? Because we take it as a given that the more information decision makers have, the better off they are. If the specialist we are seeing says she needs to do more tests or examine us in more detail, few of us think that's a bad idea. In Millennium Challenge, Blue Team took it for granted that because they had more information at their fingertips than Red Team did, they had a considerable advantage. This was the second pillar of Blue Team's aura of invincibility. They were more logical and systematic than Van Riper, and they knew more. But what does the Goldman algorithm say? Quite the opposite: that all that extra information isn't actually an advantage at all; that, in fact, you need to know very little to find the underlying signature of a complex phenomenon. All you need is the evidence of the ECG, blood pressure, fluid in the lungs, and unstable angina.

That’s a radical statement. Take, for instance, the hypothetical case of a man who comes into the ER complaining of intermittent left-side chest pain that occasionally comes when he walks up the stairs and that lasts from five minutes to three hours. His chest exam, heart exam, and ECG are normal, and his systolic blood pressure is 165, meaning it doesn’t qualify as an urgent factor. But he’s in his sixties. He’s a hard-charging executive. He’s under constant pressure. He smokes. He doesn’t exercise. He’s had high blood pressure for years. He’s overweight. He had heart surgery two years ago. He’s sweating. It certainly seems like he ought to be admitted to the coronary care unit right away. But the algorithm says he shouldn’t be. All those extra factors certainly matter in the long term. The patient’s condition and diet and lifestyle put him at serious risk of developing heart disease over the next few years. It may even be that those factors play a very subtle and complex role in increasing the odds of something happening to him in the next seventy-two hours. What Goldman’s algorithm indicates, though, is that the role of those other factors is so small in determining what is happening to the man right now that an accurate diagnosis can be made without them. In fact—and this is a key point in explaining the breakdown of Blue Team that day in the Gulf—that extra information is more than useless. It’s harmful. It confuses the issues. What screws up doctors when they are trying to predict heart attacks is that they take too much information into account.

The problem of too much information also comes up in studies of why doctors sometimes make the mistake of missing a heart attack entirely—of failing to recognize when someone is on the brink of or in the midst of a major cardiac complication. Physicians, it turns out, are more likely to make this kind of mistake with women and minorities. Why is that? Gender and race are not irrelevant considerations when it comes to heart problems; blacks have a different overall risk profile than whites, and women tend to have heart attacks much later in life than men. The problem arises when the additional information of gender and race is factored into a decision about an individual patient. It serves only to overwhelm the physician still further. Doctors would do better in these cases if they knew less about their patients—if, that is, they had no idea whether the people they were diagnosing were white or black, male or female.

It is no surprise that it has been so hard for Goldman to get his ideas accepted. It doesn't seem to make sense that we can do better by ignoring what seems like perfectly valid information. "This is what opens the decision rule to criticism," Reilly says. "This is precisely what docs don't trust. They say, 'This process must be more complicated than just looking at an ECG and asking these few questions. Why doesn't this include whether the patient has diabetes? How old he is? Whether he's had a heart attack before?' These are obvious questions. They look at it and say, 'This is nonsense, this is not how you make decisions.'"

Arthur Evans says that there is a kind of automatic tendency among physicians to believe that a life-or-death decision has to be a difficult decision. "Doctors think it's mundane to follow guidelines," he says. "It's much more gratifying to come up with a decision on your own. Anyone can follow an algorithm. There is a tendency to say, 'Well, certainly I can do better. It can't be this simple and efficient; otherwise, why are they paying me so much money?'" The algorithm doesn't feel right.

Many years ago a researcher named Stuart Oskamp conducted a famous study in which he gathered together a group of psychologists and asked each of them to consider the case of a twenty-nine-year-old war veteran named Joseph Kidd. In the first stage of the experiment, he gave them just basic information about Kidd. Then he gave them one and a half single-spaced pages about his childhood. In the third stage, he gave each person two more pages of background on Kidd’s high school and college years. Finally, he gave them a detailed account of Kidd’s time in the army and his later activities. After each stage, the psychologists were asked to answer a twenty-five-item multiple-choice test about Kidd. Oskamp found that as he gave the psychologists more and more information about Kidd, their confidence in the accuracy of their diagnoses increased dramatically. But were they really getting more accurate? As it turns out, they weren’t. With each new round of data, they would go back over the test and change their answers to eight or nine or ten of the questions, but their overall accuracy remained pretty constant at about 30 percent.

“As they received more information,” Oskamp concluded, “their certainty about their own decisions became entirely out of proportion to the actual correctness of those decisions.” This is the same thing that happens with doctors in the ER. They gather and consider far more information than is truly necessary because it makes them feel more confident—and with someone’s life in the balance, they need to feel more confident. The irony, though, is that that very desire for confidence is precisely what ends up undermining the accuracy of their decision. They feed the extra information into the already overcrowded equation they are building in their heads, and they get even more muddled.

What Reilly and his team at Cook County were trying to do, in short, was provide some structure for the spontaneity of the ER. The algorithm is a rule that protects the doctors from being swamped with too much information—the same way that the rule of agreement protects improv actors when they get up onstage. The algorithm frees doctors to attend to all of the other decisions that need to be made in the heat of the moment: If the patient isn’t having a heart attack, what is wrong with him? Do I need to spend more time with this patient or turn my attention to someone with a more serious problem? How should I talk to and relate to him? What does this person need from me to get better?

“One of the things Brendan tries to convey to the house staff is to be meticulous in talking to patients and listening to them and giving a very careful and thorough physical examination—skills that have been neglected by many training programs,” Evans says. “He feels strongly that those activities have intrinsic value in terms of connecting you to another person. He thinks it’s impossible to care for someone unless you know about their circumstances—their home, their neighborhood, their life. He thinks that there are a lot of social and psychological aspects to medicine that physicians don’t pay enough attention to.” Reilly believes that a doctor has to understand the patient as a person, and if you believe in the importance of empathy and respect in the doctor-patient relationship, you have to create a place for that. To do so, you have to relieve the pressure of decision making in other areas.

There are, I think, two important lessons here. The first is that truly successful decision making relies on a balance between deliberate and instinctive thinking. Bob Golomb is a great car salesman because he is very good, in the moment, at intuiting the intentions and needs and emotions of his customers. But he is also a great salesman because he understands when to put the brakes on that process: when to consciously resist a particular kind of snap judgment. Cook County’s doctors, similarly, function as well as they do in the day-to-day rush of the ER because Lee Goldman sat down at his computer and over the course of many months painstakingly evaluated every possible piece of information that he could. Deliberate thinking is a wonderful tool when we have the luxury of time, the help of a computer, and a clearly defined task, and the fruits of that type of analysis can set the stage for rapid cognition.

The second lesson is that in good decision making, frugality matters. John Gottman took a complex problem and reduced it to its simplest elements: even the most complicated of relationships and problems, he showed, have an identifiable underlying pattern. Lee Goldman's research demonstrates that in picking up these sorts of patterns, less is more: overloading the decision makers with information makes that underlying signature harder to find, not easier. To be a successful decision maker, we have to edit.

When we thin-slice, when we recognize patterns and make snap judgments, we do this process of editing unconsciously. When Thomas Hoving first saw the kouros, the thing his eyes were drawn to was how fresh it looked. Federico Zeri focused instinctively on the fingernails. In both cases, Hoving and Zeri brushed aside a thousand other considerations about the way the sculpture looked and zeroed in on a specific feature that told them everything they needed to know. I think we get in trouble when this process of editing is disrupted—when we can’t edit, or we don’t know what to edit, or our environment doesn’t let us edit.

Remember Sheena Iyengar, who did the research on speed-dating? She once conducted another experiment in which she set up a tasting booth with a variety of exotic gourmet jams at the upscale grocery store Draeger’s in Menlo Park, California. Sometimes the booth had six different jams, and sometimes Iyengar had twenty-four different jams on display. She wanted to see whether the number of jam choices made any difference in the number of jams sold. Conventional economic wisdom, of course, says that the more choices consumers have, the more likely they are to buy, because it is easier for consumers to find the jam that perfectly fits their needs. But Iyengar found the opposite to be true. Thirty percent of those who stopped by the six-choice booth ended up buying some jam, while only 3 percent of those who stopped by the bigger booth bought anything. Why is that? Because buying jam is a snap decision. You say to yourself, instinctively, I want that one. And if you are given too many choices, if you are forced to consider much more than your unconscious is comfortable with, you get paralyzed. Snap judgments can be made in a snap because they are frugal, and if we want to protect our snap judgments, we have to take steps to protect that frugality.

This is precisely what Van Riper understood with Red Team. He and his staff did their analysis. But they did it first, before the battle started. Once hostilities began, Van Riper was careful not to overload his team with irrelevant information. Meetings were brief. Communication between headquarters and the commanders in the field was limited. He wanted to create an environment where rapid cognition was possible. Blue Team, meanwhile, was gorging on information. They had a database, they boasted, with forty thousand separate entries in it. In front of them was the CROP—a huge screen showing the field of combat in real time. Experts from every conceivable corner of the U.S. government were at their service. They were seamlessly connected to the commanders of the four military services in a state-of-the-art interface. They were the beneficiaries of a rigorous ongoing series of analyses about what their opponent’s next moves might be.

But once the shooting started, all of that information became a burden. “I can understand how all the concepts that Blue was using translate into planning for an engagement,” Van Riper says. “But does it make a difference in the moment? I don’t believe it does. When we talk about analytic versus intuitive decision making, neither is good or bad. What is bad is if you use either of them in an inappropriate circumstance. Suppose you had a rifle company pinned down by machine-gun fire. And the company commander calls his troops together and says, ‘We have to go through the command staff with the decision-making process.’ That’s crazy. He should make a decision on the spot, execute it, and move on. If we had had Blue’s processes, everything we did would have taken twice as long, maybe four times as long. The attack might have happened six or eight days later. The process draws you in. You disaggregate everything and tear it apart, but you are never able to synthesize the whole. It’s like the weather. A commander does not need to know the barometric pressure or the winds or even the temperature. He needs to know the forecast. If you get too caught up in the production of information, you drown in the data.”

Paul Van Riper’s twin brother, James, also joined the Marine Corps, rising to the rank of colonel before his retirement, and, like most of the people who know Paul Van Riper well, he wasn’t at all surprised at the way Millennium Challenge turned out. “Some of these new thinkers say if we have better intelligence, if we can see everything, we can’t lose,” Colonel Van Riper said. “What my brother always says is, ‘Hey, say you are looking at a chess board. Is there anything you can’t see? No. But are you guaranteed to win? Not at all, because you can’t see what the other guy is thinking.’ More and more commanders want to know everything, and they get imprisoned by that idea. They get locked in. But you can never know everything.” Did it really matter that Blue Team was many times the size of Red Team? “It’s like Gulliver’s Travels,” Colonel Van Riper says. “The big giant is tied down by those little rules and regulations and procedures. And the little guy? He just runs around and does what he wants.”

6. Millennium Challenge, Part Two

For a day and a half after Red Team’s surprise attack on Blue Team in the Persian Gulf, an uncomfortable silence fell over the JFCOM facility. Then the JFCOM staff stepped in. They turned back the clock. Blue Team’s sixteen lost ships, which were lying at the bottom of the Persian Gulf, were refloated. In the first wave of his attack, Van Riper had fired twelve theater ballistic missiles at various ports in the Gulf region where Blue Team troops were landing. Now, JFCOM told him, all twelve of those missiles had been shot down, miraculously and mysteriously, with a new kind of missile defense. Van Riper had assassinated the leaders of the pro-U.S. countries in the region. Now, he was told, those assassinations had no effect.

“The day after the attack, I walked into the command room and saw the gentleman who was my number two giving my team a completely different set of instructions,” Van Riper said. “It was things like—shut off the radar so Blue force are not interfered with. Move ground forces so marines can land without any interference. I asked, ‘Can I shoot down one V-twenty-two?’ and he said, ‘No, you can’t shoot down any V-twenty-two’s.’ I said, ‘What the hell’s going on in here?’ He said, ‘Sir, I’ve been given guidance by the program director to give completely different directions.’ The second round was all scripted, and if they didn’t get what they liked, they would just run it again.”

Millennium Challenge, the sequel, was won by Blue Team in a rout. There were no surprises the second time around, no insight puzzles, no opportunities for the complexities and confusion of the real world to intrude on the Pentagon’s experiment. And when the sequel was over, the analysts at JFCOM and the Pentagon were jubilant. The fog of war had been lifted. The military had been transformed, and with that, the Pentagon confidently turned its attention to the real Persian Gulf. A rogue dictator was threatening the stability of the region. He was virulently anti-American. He had a considerable power base from strong religious and ethnic loyalties and was thought to be harboring terrorist organizations. He needed to be replaced and his country restored to stability, and if they did it right—if they had CROP and PMESI and DIME—how hard could that be?
