2
GAME THEORY 101
WHEN PEOPLE TALK about science, subjects like chemistry and physics leap to mind. Political science certainly does not. But science is a method, not a subject. It is a method that relies on logical arguments and experimental evidence to figure out how the world of things—and of people—works. The scientific method certainly applies to politics just as it does to physics. Still, physics and politics are quite obviously entirely different subjects. One of the ways they differ is crucial for understanding everything that is to come. You see, the world of physics is pretty much about how particles interact. Now, the central feature of particle interactions is that photons, electrons, neutrons, or their constituent quarks never anticipate crashing into one another. Consequently, there is no strategizing behind the collision of particles.
Studying people is ever so much more complicated than studying inanimate particles. Just think how different interactions are between quarks and Quakers, electrons and electors, protons and protesters. People, and in fact just about every living thing, seem to have a survival instinct. Genes act as if they want to get passed on, bacteria find hosts, cockroaches flee my shoe, and ordinary people look out for what they think is good for them and try to avoid what they think is bad for them. That includes cooperating with friends and fighting with foes. Like the physicist’s particles, people interact, but unlike the physicist’s particles, people interact strategically. That is what game-theory thinking is all about.
To be a successful prognosticator, it is critical to think about how other people think about their problems. It is just as important to think about how other people think about how you think about your problems and theirs. The previous, tedious sentence, by the way, could be repeated ad infinitum to reflect on the information that gets ferreted out when thinking strategically. This and the next chapter—and the science of predictioneering—are about solving the problem of working out what others think, what they think you think, what you think they think, what you think they think you think… This is the kind of information that physicists rightly don’t give a moment’s thought to when studying the particles that capture their interest—but it is the foundation from which we can see when and how to turn situations to our own advantage.
WHERE WE ARE HEADED
In Game Theory 101 we’ll consider how to look at the world through the eyes of others. For starters, we’ll need to set aside, at least for argument’s sake, our natural optimism about human nature. Game theory urges us to take a cold, hard look at what it means to be a calculating, rational decision maker. Sure, there are some genuinely nice, altruistic people in the world—but that doesn’t mean they aren’t carefully calculating their actions. In fact, we’ll see that even as nice and altruistic a person as Mother Teresa can be scrutinized through the not-so-warm-and-fuzzy eyes of a game theorist. Doing so will help us understand how paths as different as hers and a suicide bomber’s can be equally rational and strategically sensible. It will also help us realize that even some of the most unquestioned received wisdom—such as the existence of something called the national interest—may be just a strategic fiction created by politicians for their own advantage instead of ours. Depressing? Yes. Accurate? You bet.
This chapter will provide us with a framework for the game theorist’s notions of interests, beliefs, and rationality; a sense of how to use logic to cut through the fog of language; and an understanding of strategic behavior that, in conjunction with the previous two concepts, leads to an ability to better map and anticipate the thinking and actions of others.
WHAT ARE THE OTHER GUY’S INTERESTS AND BELIEFS?
Game theory comes in two primary flavors. Cooperative game theory was invented by John von Neumann and Oskar Morgenstern.1 Their 1947 book on the subject drew a clear and compelling analogy between problems people (or nations) face and parlor games like charades or the name-in-the-hat game, a favorite in my family. These sorts of games deal with players who engage each other, trying to anticipate moves and countermoves, but only in a setting where what they say they will do is the same as what they actually do. That’s why it’s called cooperative game theory—a promise made is a promise kept. Because of this, one big limitation with cooperative game theory, especially in games that involve more than two players, is that it has far too optimistic a view of human nature. In this universe people make deals and keep them. They can be bought off, sure, but once they say they’ll do something, they do it. That means cooperative game theory works fine for zero-sum games where what one side loses equals what the other side wins, but not all that many interesting problems in the world are that cut and dried. When they are not, this original variety of game theory is not nearly as good for my purposes as what has replaced it.
By the early 1950s, the mathematician John Nash, the subject of A Beautiful Mind and the winner of the 1994 Nobel Prize in Economics, had invented a different kind of game theory.2 He drew attention to the propensity people have not to cooperate with one another. Poker players and diplomats use polite terms, like “bluffing,” for what ordinary people mean when they say someone is a liar. In noncooperative games, promises do not necessarily mean anything. Lies are a part of strategizing. Promises are kept when a player decides it’s in her interest to do what she promised. When promises and interests differ, people renege, they break their word, they cheat, they do whatever they think will benefit them most. Of course, they know that bluffing and cheating can be costly. Therefore, they take prospective costs as well as benefits into account. In fact, raising costs is one way, albeit a difficult and painful way, of encouraging people to be truthful. Indeed, that is exactly the purpose behind meeting and then raising someone’s bet in poker or calling car dealers instead of going in to see them.
The view of people as cold, ruthless, and self-interested is at the heart of game-theory thinking. There may be room for nice guys, but not much. Most of the time, nice guys really do finish last. Those who will throw themselves on a hand grenade to save their fellows, well, they do so and then, tragically, they are dead. They are out of the game of life. We remember them, we honor them, we extol them, but we just don’t compete with them, because they are not here to compete with us. Such good souls need not occupy much of our time. Or, if they do, we applied game theorists take a cynical view and look for how suicide might benefit them. There might be virgins in heaven, or, as with kamikaze pilots in World War II, Crusaders in the Middle Ages, and some suicide bombers today, there might be significant financial incentives such as cash payments and debt forgiveness to their families in exchange for their sacrifice.
Some may find this materialistic explanation of personal sacrifice offensive. The trouble is, it’s a lot costlier to believe mistakenly in other people’s goodwill than it is to be a cynic and assume they’re looking out for themselves (until and unless their actions say otherwise). It is hard to get burned in personal dealings if you remember Ronald Reagan’s dictum: Trust but verify. For those who are offended by this tough view of human nature, I urge you to consider some facts.
The United States operates the Concerned Local Citizens program in Iraq. Following the alphabet-soup tradition so beloved by the Pentagon, the Iraqis participating in this program are known as CLCs. CLCs help guard neighborhoods against insurgents. They are paid ten dollars a day for their service. It doesn’t seem as if there is anything crass or overly materialistic about that. But then we should pause to ask, who are these CLCs and what, exactly, are we buying for ten dollars a day?
These concerned Iraqis are not your ordinary neighborhood watch group. They are not the folks next door who give school kids a safe place to go when their parents are at work. They are not the friends who have your house key, water your plants, take in your mail, and feed your cat while you’re on vacation. No, they’re former anti-American insurgents, tens of thousands of them. Some of them, in fact, used to belong to al-Qaeda. It would seem that they were among the most fanatic of fanatics, the worst of the worst. And yet for a measly ten bucks a day these supposedly unshakable al-Qaeda terrorists now act like allies of the United States, serving as our very own paramilitaries, helping to keep violence down in mostly Sunni neighborhoods, defending the peace that they used to shatter for a living. How can this be? How can terrorists be so easily converted into our friends and protectors?
As it happens, being an ex-insurgent employed as a CLC is a very good job by Iraqi standards. At ten dollars a day, CLCs can earn a few thousand dollars a year from the United States, plus, of course, whatever extra they make on the side. The average Iraqi, despite that country’s huge oil wealth, earns only about six dollars a day, little more than half of what a CLC gets!3 Those who think that terrorists are irrational religious zealots who do not respond to monetary and personal incentives should remember that a daily dose of just ten dollars is enough to get such folks to become quasi-friends of the United States of America.
Of course, there is as much room for saints as for sinners in game theory. There’s no problem accommodating the (few) Mother Teresas of our world. Since game theory is about choosing actions given expected costs and benefits, it does encourage us to ask, perhaps obnoxiously, what benefits Mother Teresa might have expected in return for her life of sacrifice and good works. We cannot help but notice that she did not serve the poor as quietly as most nuns do, living out their lives in anonymous obscurity. The very publicness of Mother Teresa’s deeds reassures us of her rationality and her potential to help poor people on a large scale.
Whether we call on the Catholic understanding of a saintly life or the Talmudic view of a charitable life, we encounter a problem on Mother Teresa’s behalf. In doing her good works, she might have had to worry, as (Saint) Bernard of Clairvaux (1090-1153) did, that in obeying God’s commandments as faithfully as possible she could be committing the deadly sin of pride. Maybe she thought herself better than others, more deserving of heaven, even worthy of sainthood, exactly because of her personal sacrifice and good works. That, as we will see, does not seem to have been a major source of worry for her.
From the Talmudic perspective as expressed by Moses Maimonides (1135-1204), she would have had at least as big a problem. Maimonides, or Rambam as he was known in his day, concluded that charity given anonymously to anonymous recipients in order to help them become self-sufficient is the best kind. Mother Teresa’s giving did not rise to this standard, and she made sure it didn’t. She did not give anonymously; she knew to whom she was giving; and she did not strive particularly to make the beneficiaries of her kindness self-sufficient. In fact, she went out of her way to make herself and her acts recognizable. For instance, Mother Teresa carefully promoted herself, creating brand-name recognition—just like Cheerios, Coke, Xerox, or Vaseline—by always wearing the special habit of the order she founded (a white sari with blue trim and sandals) so that she could not be easily confused with just any nice old lady. Of course, anonymous giving could still be prideful, but for sure it could not lead to a Nobel Peace Prize in this world or to beatification and canonization in the next.
Could it be that Mother Teresa’s ambition for herself was tied to her faith in an eternal reward? It makes sense to pay the price of sacrifice for the short, finite time of a life span if the consequence is a reward that goes on for infinity in heaven. In fact, isn’t that exactly the explanation many of us give for the actions of suicide bombers, dying in their own prideful eyes as martyrs who will be rewarded for all eternity in heaven?
Or maybe, in Mother Teresa’s case, the rational, calculating motivation behind her deeds was more complex. We know now that she questioned her religious faith and the existence of God.4 Her doubts apparently began shortly after she started to minister to the poor and sick in Calcutta. By then maybe she felt locked into the religious life she chose for herself. Doubting God and ill-prepared for a life outside the Church, perhaps she found a perfect strategy for gaining the acclaim in life that she feared might not exist after death. Was she looking for an eternal reward, or for reward in the here and now? Only she could really know. We applied game theorists are content to observe that she acted as if being rewarded was her motivation. That is, she was not cold and materialistic; she was warm and materialistic. That is enough to make her a fine subject for analysis as a rational, strategic player in the game of life—and maybe enough to earn her sainthood as well.
Game theory draws our attention to important principles that shape what people say and do. First of all, just like Mother Teresa or a suicide bomber, all people are taken to be rational. That just means we assume they do what they believe is in their own best interest, whether that’s making as much money as they can or gaining entry to heaven or anything else. They may find out later that they made a poor choice, but in game-theory thinking we worry about what people know, believe, and value at the time they choose their actions, not what they find out later when it’s too late to do something else. Game theory has no place for Monday-morning quarterbacks. It’s all about what to do when decisions must be made, even if we cannot know for sure what the consequences of our actions will be.
This notion of rational action seems to trouble some people. Usually that’s because they mean something different from what an economist or political scientist means when talking about rationality. Words can have many meanings, so we must take care to define ideas precisely. As it happens, game theorists insist on a particular use of the word “rational.”
Some folks seem to think that rational people must be super smart, never making a mistake, looking over each and every possible thing that could happen to them, working out the exact costs and benefits of every conceivable course of action. That is nonsense. Nobody is that smart or diligent, nor should they be. Actually, checking out every possible course of action, working out everything that possibly could arise, is almost never rational, at least not as the term is used in my world. It is never rational to continue searching for more information, for example, when the cost of finding out more is greater than the expected benefits of knowing more. Rational people know when to stop searching—when enough is enough. (I try to impart this message to my students. When they tell me they want to make their term papers as good as possible, I plead with them not to. A paper that is worked on until it is as good as possible will never be finished.)
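The stopping rule can be stated in one line. Writing $c$ for the cost of one more round of search and $\Delta V$ for the improvement in the decision that the extra information might yield (my notation, not the chapter's), a rational searcher obeys

$$ \text{keep searching only while } \mathbb{E}[\Delta V] > c $$

The moment the expected gain from learning more drops below the cost of learning it, a rational searcher stops, which is exactly why a term paper that must be “as good as possible” never gets finished.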
Another way that people talk about rationality that has nothing to do with what “rational choice theorists” have in mind is to discuss whether what someone wants is rational or not. Distasteful as the fact may be, people with crazy ideas can be perfectly rational. Rationality is about choosing actions that are consistent with advancing personal interests, whatever those interests may be. It has nothing to do with whether you or I think what someone wants is a good idea, shows good taste or judgment, or even makes sense to want.
I certainly think what Adolf Hitler said he wanted and what he did to advance his heinous goals were evil, but I am reluctant to let him off the hook with an insanity plea by saying he was not rational. His actions were rational given his evil aims, and therefore it was perfectly right and proper to hold him and his henchmen accountable.
The same holds for modern-day terrorists. They’re not nuts. They are desperate, calculating, disgruntled people who are looking for ways to force others to pay attention to their real or perceived woes. Dismissing them as irrational misses the point and leads us to make wrongheaded choices about how to handle their threat. We do ourselves no service by labeling people as insane or irrational simply because we can’t understand their goals. Our attention is better fixed on what they do, since we probably can change or impede their actions even when we can’t alter what they want.
What exactly does rationality require? Actually it’s a simple idea. To be rational, a person must be able to state a preference among choices, including having no preference at all (that is, being truly indifferent). Also, their preferences must not go in circles. For instance, if I like chocolate ice cream better than vanilla—who doesn’t?—and vanilla better than strawberry, then I also presumably like chocolate ice cream better than strawberry. Finally, rational people act in accordance with their preferences, taking into account the impediments to doing so. For instance, one ice cream parlor might be sold out of chocolate more often than another. I might be willing to risk having to settle for vanilla if the place that runs out also has much better tasting chocolate. Taking calculated risks is part of being rational. I just need to think about the size of the risk, the value of the reward that comes with success, and the cost that comes with failure, and compare those to the risks, costs, and benefits of doing things differently.
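For readers who like to see the machinery, here is a minimal sketch in code of those two requirements: a consistent ordering plus a calculated risk. The flavors, the utility numbers, and the 30 percent chance of a sellout are all invented for illustration; nothing here comes from the chapter itself.

```python
from itertools import permutations

# A preference ordering: chocolate over vanilla over strawberry.
ranking = ["chocolate", "vanilla", "strawberry"]

def prefers(a, b):
    """True if flavor a is ranked at least as high as flavor b."""
    return ranking.index(a) <= ranking.index(b)

# Transitivity check: preferences must not go in circles.
assert all(prefers(x, z)
           for x, y, z in permutations(ranking, 3)
           if prefers(x, y) and prefers(y, z))

# A calculated risk: parlor A has better chocolate but sometimes runs out,
# leaving me with vanilla; parlor B always has merely decent chocolate.
utility = {"great_chocolate": 10, "ok_chocolate": 7, "vanilla": 5}
p_sold_out = 0.3  # assumed risk that parlor A is out of chocolate

ev_a = (1 - p_sold_out) * utility["great_chocolate"] + p_sold_out * utility["vanilla"]
ev_b = utility["ok_chocolate"]
print(ev_a, ev_b)  # 8.5 versus 7.0: with these numbers the gamble pays
```

With these made-up numbers the risky parlor is the rational choice; shrink the payoff gap or raise the odds of a sellout and the safe parlor wins instead.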
Since rational people take calculated risks, sometimes things turn out badly for them. Nobody gets everything they want. I sometimes end up drinking soda I don’t like or eating vanilla or strawberry ice cream despite my best efforts to obtain what I prefer. That’s what it means to take risks. We absolutely cannot conclude that someone was irrational or acted irrationally just because at the end of the day they got a rotten outcome, whether that means being stuck with strawberry ice cream, losing a war, or even worse.
Rational choices reflect not only thinking through risks but also trying to sort out costs and benefits. Costs and benefits can be tricky to work out. I could be unsure of what those costs or benefits are likely to be. That too can be an important impediment or constraint on my rational decisions. Sometimes we have to make decisions even though we are in the dark about the consequences. Fortunately, that doesn’t happen much with buying ice cream or soda, but it sure happens a lot when negotiating a big business deal or forging a new foreign policy. In those cases, we had better weigh the sources of our uncertainty carefully, and not plunge headlong into some dangerous endeavor with no more than rose-colored glasses to guide our way. We may not get the consequence we want, but we can be careful to manage the range of consequences that are likely to arise. (Just imagine how different the debacle in Iraq might have been, for example, had American leaders not thought that the Iraqi people would be dancing in the streets, kissing American soldiers after Saddam was overthrown the way Parisians did when Americans marched into Paris behind Charles de Gaulle on August 26, 1944.)
The question remains, however, as to when someone is actually irrational. In everyday usage, lots of behavior looks irrational even though on closer inspection it turns out not to be. Sometimes critics point to behavior like leaving tips in restaurants, giving gifts to friends, or—sorry, I don’t mean to be gross—flushing the toilet in public places like airports or museums as irrational acts. They argue that all of the benefit goes to someone else, not to the tipper, gift giver, or flusher. I say, not true.
Many rational acts impose short-term costs on the doer with the expectation of longer-term gains. That’s true of tipping, gift giving, flushing public toilets, not littering, and lots more. Sure, you might leave a tip even though you don’t expect to be in the particular restaurant again. Tipping, however, like gift giving, is a social norm that has arisen and taken hold because we have learned that its effects on the expectations of others (waiters, dinner party hosts) are important to making our own lives a little happier and easier. If waiters thought they weren’t going to get a tip and yet continued to be paid poorly, then it’s a good bet that service would be much worse in every restaurant. Studies show, for instance, that customer satisfaction with service does not help predict the restaurants people choose in southern China.5 Tipping is illegal in China (which is not to say that it never happens, but it isn’t expected). It is good to keep in mind that people act on expectations. It seems that the quality of service doesn’t vary much between restaurants in southern China, because the service ethic just isn’t guided by anticipated rewards for good service. Take away the expectation of tips, and the waitstaff is motivated by something other than the customers’ interests and the waiters’ rewards for satisfying those interests.
Tipping, gift giving, and, yes, flushing the toilet create good expectations that make each of us better off most of the time even if they cost a little at the moment. Sure, we could free-ride on the good acts of others, save a little money or the little bit of effort it takes to flush a toilet or throw litter in the garbage can instead of on the street, but most of us would feel bad about ourselves if we did that. The urge to feel good about ourselves—not to take the risk of offending others and not to bear the cost of their reaction—is sufficient to induce us to behave in a socially appropriate way. For the few misanthropes who prefer to save the money that a tip or a gift costs or the effort that flushing a toilet costs, well, they are behaving rationally too. They aren’t concerned about feeling like lowlifes. They value the savings from their poor behavior more than goodwill or long-term good results. That’s why there really is no accounting for taste. Rationality is, as I said, about doing what you believe is in your own interest; it doesn’t impose interests on us.
So what does constitute irrationality in an applied game theorist’s world? A person is irrational if, returning to the example of ice cream flavors, all of the following are true: she likes strawberry ice cream better than chocolate; strawberry ice cream costs no more than chocolate ice cream; strawberry ice cream is readily available for purchase; and still she goes and buys chocolate ice cream for herself. In such a case, I might wonder whether she had eaten so much strawberry ice cream recently that she wanted a change (a preference for variety over constancy, adding another dimension to the things preferred that was not included on my list) or something like that, but if those sorts of considerations are absent, then a strawberry lover is expected to eat strawberry ice cream when everything else is equal.
All of this is to say that, really, the only people who are ruled out by assuming rationality are very little children and perhaps schizophrenics. Little children—most especially two-year-olds—and schizophrenics sometimes act as if their preferences change every few seconds. One minute they want strawberry and the next it’s the worst thing in the world. That sort of flip-flopping in individual preferences is hazardous for those who want to predict or engineer people’s choices. Reasoning with people who flip-flop all the time is all but impossible. They’re not committed to being logically consistent in what they say, want, or do.
Nature may not abhor a vacuum, but game theory definitely abhors logical inconsistency. If you allow the possibility that what an individual really wants changes all the time, moment to moment, then you can claim that anything they do and anything they get fits in with (or contradicts) their interests. That certainly won’t lead to good predictions or good engineering, and besides, it just isn’t any fun. It takes all of the challenge out of working out what people are likely to do.
WHAT IS THE OTHER GUY’S LOGIC (NOT HIS LANGUAGE)?
By now it should be apparent that game theory alerts us to be careful in how we express and understand our interests and those of others. Logical mistakes are easy to make and hard to spot, and they can disguise or obscure the meaning behind what people think and do. That is why game theorists use mathematics to work out what people are likely to do.
Ordinary everyday language can be awfully vague and ambiguous. A friend of mine is a linguist. One of his favorite sentences goes like this: “I saw the man with a telescope.” Now that is one vague sentence. Did I look through a telescope and spot a man, or did I look over at a man who was carrying a telescope, or does the sentence mean something entirely different? You can see why linguists like this sentence. It gives them an interesting problem to work out. I don’t like sentences like that. I like sentences written with mathematics (and so do many linguists). They don’t produce poetic beauty or double entendres, which makes them boring, but it also gives them a great virtue. In English, saying things are equal often means “more or less”; in math, “equal” means just that, equal, not almost equal or usually equal, but plain simple equal.
We humans have devised all sorts of clever ways to cover up sloppy or slippery arguments. As I am fond of telling my students, my suspicions are aroused by sentences beginning with clauses like “It stands to reason that” or “It is a fact that…” Usually, what follows the statement “It stands to reason that” does not. The clause is being asked to substitute for the hard work of showing that a conclusion follows logically from the assumptions. Likewise, “It is a fact that” generally precedes an expression of opinion rather than a fact. Watch out for these. This sort of rhetoric can easily lead a person down a wrong line of thinking, accepting as true something that might be true and then again might not be.
Consider, for example, what policies you think our national leaders should follow to protect and enhance our national interest. When we think carefully about how to further the national interest, it becomes evident that sometimes things that seem obviously true are not, and that a little logic can go a long way to clarify our understanding.
It is commonplace to think that foreign policy should advance the national interest. This idea is so widespread that we accept it as an obvious truth, but is it? We hardly ever pause to ask how we know what is in the national interest. Most of the time, we seem to mean that policies benefiting the great majority of people are policies in the national interest. Secure borders to prevent foreign invasions or illegal immigration are thought to be in the national interest. Economic policies that make citizens more prosperous are thought to be in the national interest. Yet we also know that money spent on defending our national security is money that is not spent on building the economy. There is a trade-off between the two. What, then, is the right balance between national security and economic security that ensures the national interest?
Imagine that American citizens are divided into three equally sized groups. One group wants to spend more on national defense and to adopt more free-trade programs. Call these people Republicans. Another wants to cut defense spending and shift trade policy away from the status quo in order to better protect American industry against foreign competition. Call them Democrats. A third wants to spend more on national defense and also to greatly increase tariffs to keep our markets from being flooded with cheap foreign-made goods. Call this faction blue-collar independents. With all of these voters in mind, what defense and trade policy can rightfully call itself “the national interest”? The answer, as seen in figure 2.1, is that any policy can legitimately lay claim to being in—or against—the national interest.
Figure 2.1 places each of our three voting blocs—Republicans, Democrats, and blue-collar independents—at the policy outcomes they prefer when it comes to trade and defense spending. That’s why Republicans are found in the upper right-hand corner as you look at the figure, indicating their support for much freer trade and much higher defense spending. Democrats are on the far left-hand side just below the vertical center. That is consistent with their wanting much less spent on defense and a modest shift in trade policy. Blue-collar independents are found on the bottom right, consistent with their preference for trade protection and higher defense outlays. And, as you can see, there is a point labeled “Status Quo,” which denotes current defense spending and trade policy.
FIG. 2.1. Defense and Trade Policy in the National Interest
By putting the two issues together in the figure I am acknowledging that they are often linked in public debate. The debate generally revolves around how best to balance trade and defense given that there are inherent trade-offs between them. Free trade, for instance, can imply selling high-end computer technology, weapons technology, and other technologies that adversaries might use to threaten our national security. High tariffs might provoke trade wars or worse, thereby potentially harming national security and prompting arguments to spend more on national defense.
I assume that everyone prefers policies closer to their favored position (that’s where the black dots associated with the Republicans, Democrats, and independents are positioned) to policies that are farther away. For example, blue-collar independents would vote to change the status quo on defense and trade if they had the chance to choose a mix on these issues that was closer to the black dot associated with them—that is, closer to what they want.
To show the range of policy combinations that the blue-collar independents like better than the status quo, I drew a circle (showing only a part of it) whose center is their most desired policy combination and whose perimeter just passes through the status quo policy.6 Anything inside the arc whose center is what blue-collar independents most want is better for them than the prevailing approach to defense spending and trade. The same is true for the points inside the arcs centered on the Republicans and the Democrats that pass through the status quo.
By drawing these circles around each player’s preferred policy mix we learn something important. We see that these circles overlap. The areas of overlap show us policy combinations that improve on the status quo for a coalition of two of the three players. For instance, the lined oblong area tilting toward the upper left of the figure depicts policies that improve the well-being of Democrats and Republicans (ah, a bipartisan foreign policy opposed by independent blue-collar workers). The gray petal-shaped area improves the interests of Democrats and blue-collar independents (at the expense of Republicans), and the bricked-over area provides a mix of trade and defense spending that benefits the Republicans and blue-collar independents (to the chagrin of Democrats).
Because we assumed that each of the three voting blocs is equal in size, each overlapping area identifies defense and trade policies that command the support of two-thirds of the electorate. Here’s the rub, then, when it comes to talking about the national interest. One coalition wants more free trade and less defense spending. Another wants less free trade and less defense spending. The third wants less free trade and more defense spending. So, we can assemble a two-thirds majority for more defense spending and also for less. We can find a two-thirds coalition for more free trade or for higher tariffs or (in the politically charged rhetoric of trade debate) for more fair trade. In fact, there are loads of ways to allocate effort between defense spending and trade policy to make better off whichever coalition forms.7
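The geometry behind the figure is simple enough to mimic in a few lines of code. Here is a minimal sketch; the coordinates are numbers I invented purely for illustration, with defense spending on the horizontal axis and trade freedom on the vertical (my reading of the figure's layout). It tests three quite different proposals, each of which turns out to beat the status quo for a different two-thirds coalition.

```python
from math import dist

ideal = {                                    # each bloc's preferred policy mix
    "Republicans": (9.0, 9.0),               # more defense, much freer trade
    "Democrats": (1.0, 4.5),                 # much less defense, modest trade shift
    "Blue-collar independents": (9.0, 1.0),  # more defense, heavy protection
}
status_quo = (5.0, 5.0)

def supporters(proposal):
    """Blocs preferring the proposal: those for whom it lies inside their
    circle through the status quo, i.e., closer to their ideal point."""
    return [bloc for bloc, pt in ideal.items()
            if dist(pt, proposal) < dist(pt, status_quo)]

for proposal in [(4.2, 6.2), (6.8, 4.6), (4.4, 3.4)]:
    backers = supporters(proposal)
    if len(backers) >= 2:  # two of three equal blocs = two-thirds majority
        print(proposal, "beats the status quo for", backers)
```

Run it and each proposal attracts a different two-bloc majority, which is the whole point: “the national interest” depends on which majority happens to form.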
What, then, is the national interest? We might have to conclude that except under the direst circumstances there is no such thing as “the national interest,” even if the term refers to what a large majority favors. That is surprising, perhaps, but it follows logically from the idea that people will align themselves behind policies that are closer to what they want against policies that are farther from what they advocate. It just happens that any time there are trade-offs between alternative ways to spend money or to exert influence, there are likely to be many different spending or influence combinations that beat the prevailing view. None can be said to be a truer reflection of the national interest than another; that reflection is in the eyes of the beholder, not in some objective assessment of national well-being. So much for the venerable notion that our leaders pursue the national interest, or, for that matter, that business executives single-mindedly foster shareholder value. I suppose, freed as they are to build a coalition that wants whatever it is they also want, that our leaders really are free to pursue their own interests and to call that the national interest or the corporate interest.
WHAT IS THE OTHER GUY’S BEHAVIOR? (DOES HE HAVE GOOD CARDS OR NOT?)
However interests frame the questions at stake, game theory still requires that people behave in a logically consistent way in pursuit of those interests. That does not mean that people cannot behave in surprising ways, for surely they can. If you’ve ever played the game Mastermind, you’ve confronted the difficulties of logic directly. In Mastermind—a game I’ve used with students to teach them about really probing their beliefs—one player sets up four (or, in harder versions, more) colored pegs selected from among six colors in whatever order he or she chooses. The rest of the players cannot see the pegs. They propose color sequences of pegs and are told that yes, they got three colors right, or no, they didn’t get any right, or yes, they got one color in the right position but none of the others. In this way, information accumulates from round to round. By keeping careful track of the information about what is true and what is false, you gradually eliminate hypotheses and converge on a correct view of what order the colored pegs are in. This is the point behind a game like Mastermind, Battleship, or charades. It is also one point behind the forecasting games I designed and use to predict and engineer events.
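The elimination logic is mechanical enough to automate. Below is a minimal sketch of that belief-updating process; to keep it short I assume a simplified feedback rule that reports only how many pegs are exactly right (the real game's feedback is richer), and the two sample guesses are invented.

```python
from itertools import product

COLORS = "RGBYOP"  # six colors, four pegs, as in the standard game
CODE_LENGTH = 4

def exact_matches(guess, code):
    """Count positions where the guess is exactly right."""
    return sum(g == c for g, c in zip(guess, code))

# Start by believing every arrangement is possible: 6**4 = 1,296 hypotheses.
beliefs = set(product(COLORS, repeat=CODE_LENGTH))

def update(beliefs, guess, feedback):
    """Keep only the codes consistent with the feedback on this guess."""
    return {code for code in beliefs
            if exact_matches(guess, code) == feedback}

beliefs = update(beliefs, ("R", "G", "B", "Y"), 1)  # one peg exactly right
beliefs = update(beliefs, ("G", "G", "O", "O"), 2)  # two pegs exactly right
print(len(beliefs), "hypotheses remain consistent with what we know")
```

Each round of feedback throws away every hypothesis it contradicts, which is exactly the discipline urged here: hold on to beliefs only so long as they square with what you observe.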
The key to any of these games is sorting out the difference between knowledge and beliefs. Different players in any game are likely to start out with different beliefs because they don’t have enough information to know the true lay of the land. It is fine to sustain beliefs that could be consistent with what’s observed, but it’s not sensible to hold on to beliefs after they have been refuted by what is happening around us. Of course, sorting out when beliefs and actions are inconsistent requires working out the incentives people have to lie, mislead, bluff, and cheat.
In Mastermind this is easy to do because the game has rules that stipulate the order of guessing and that require the person who placed the pegs to respond honestly to proposed color sequences suggested by other players. There is no point to the game if the person placing the pegs lies to everyone else. But even when everyone tells the truth, it is easy to slip into serious lapses in logic that can lead to entirely wrong beliefs. That is something to be careful about.
Slipping into wrong beliefs is a problem for many of us. It is easy to look at facts selectively and to reach wrong conclusions. That is a major problem, for instance, with the alleged police practice of profiling, or some people’s judgment about the guilt or innocence of others based on thin evidence that is wrongly assessed. There are very good reasons why the police and we ordinary folk ought not to be too hasty in jumping to conclusions.
Let me give an example to help flesh out how easily we can slip into poor logical thinking. Baseball is beset by a scandal over performance-enhancing drugs. Suppose you know that the odds someone will test positive for steroids are 90 percent if they actually used steroids. Does that mean when someone tests positive we can be very confident that they used steroids? Journalists seem to think so. Congress seems to think so. But it just isn’t so. To formulate a policy we need an answer to the question, How likely is it that someone used steroids if they test positive? It is not enough to know how likely they are to test positive if they use steroids. Unfortunately, we cannot easily know the answer to the question we really care about. We can know whether someone tested positive, but that could be a terrible basis for deciding whether the person cheated. A logically consistent use of probabilities—working out the real risks—can help make that clear.
Imagine that out of every 100 baseball players (only) 10 cheat by taking steroids (game theory notwithstanding, I am an optimist) and that the tests are accurate enough that 9 out of every 10 cheaters test positive. To evaluate the likelihood of guilt or innocence we still need to know how many honest players test positive—that is, how often the tests give us a false positive answer. Tests are, after all, far from perfect. Just imagine that while 90 out of every 100 players do not cheat, 10 percent of the honest players nevertheless test (falsely) positive. Looking at these numbers it’s easy to think, well, hardly anyone gets a false positive (only 10 percent of the innocent) and almost every guilty party gets a true positive (90 percent of the guilty), so knowing whether a person tested positive must make us very confident of their guilt. Wrong!8
With the numbers just laid out, 9 out of 10 cheaters test positive and 9 out of 90 innocent ball players also test positive. So, 9 of the 18 positive test results come from cheaters and 9 come from absolutely innocent baseball players. In this example, the odds that a player testing positive actually uses steroids are fifty-fifty, just the flip of a coin. That is hardly enough to ruin a person’s career and reputation. Who would want to convict so many innocents just to get the guilty few? It is best to take seriously the dictum “innocent until proven guilty.”
The calculation we just did is an example of Bayes’ Theorem.9 It provides a logically sound way to avoid inconsistencies between what we thought was true (a positive test means a player uses steroids) and new information that comes our way (half of all players testing positive do not use steroids). Bayes’ Theorem compels us to ask probing questions about what we observe. Instead of asking, “What are the odds that a baseball player uses performance-enhancing drugs?” we ask, “What are the odds that a baseball player uses performance-enhancing drugs given that we know he tested positive for such drugs and we know the odds of testing positive under different conditions?”
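Written out formally, with the numbers from the example plugged in, the calculation is a textbook application of the theorem:

$$
P(\text{cheat} \mid \text{positive}) = \frac{P(\text{positive} \mid \text{cheat})\,P(\text{cheat})}{P(\text{positive} \mid \text{cheat})\,P(\text{cheat}) + P(\text{positive} \mid \text{honest})\,P(\text{honest})} = \frac{0.9 \times 0.1}{0.9 \times 0.1 + 0.1 \times 0.9} = \frac{0.09}{0.18} = 0.5
$$

The fifty-fifty answer falls straight out of the arithmetic: the guilty positives (9 per 100 players) and the innocent positives (also 9 per 100) exactly balance.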
Bayes’ Theorem provides a way to calculate how people digest new information. It assumes that everyone uses such information to check whether what they believe is consistent with their new knowledge. It highlights how our beliefs change—how they are updated, in game-theory jargon—in response to new information that reinforces or contradicts what we thought was true. In that way, the theorem, and the game theorists who rely on it, view beliefs as malleable rather than as unalterable biases lurking in a person’s head.
This idea of updating beliefs leads us to the next challenge. Suppose a baseball player who had a positive (guilty) test result is called to testify before Congress in the steroid scandal. Now suppose he knows the odds sketched above. Aware of these statistics, and knowing that any self-respecting congressperson is also aware of them, the baseball player knows that Congress, if citing only a positive test result as their evidence, in fact has little on him, no matter how much outrage they muster. The player, in other words, knows Congress is bluffing. But of course Congress knows this as well, so they have subpoenaed the player’s trainer, who is coming in to testify right after the player. Is this just another bluff by Congress, tightening the screws to elicit a confession with the threat of perjury looming? Whether the player is guilty or not, perhaps he shrugs off the move, in effect calling Congress’s raise. Now what? Does Congress actually have anything, or will they be embarrassed for going on a fishing expedition and dragging an apparently innocent man through the mud? Will the player adamantly profess innocence knowing he’s guilty (but maybe he really isn’t), and should we dismiss his declarations of innocence, as it seems so many of us do? Is Congress bluffing? Is the player bluffing? Is everyone bluffing? These are tough problems, and they are right up game theory’s alley!
In real life there are plenty of incentives for others (and for us) to lie. That is certainly true for athletes, corporate executives, national leaders, poker players, and all the rest of us. Therefore, to predict the future we have to reflect on when people are likely to lie and when they are most likely to tell the truth. In engineering the future, our task is to find the right incentives so that people tell the truth, or so that, when it helps our cause, they believe our lies.
One way of eliciting honest responses is to make repeated lying really costly. Bluffing at poker, for instance, can be costly exactly because other players sometimes don’t believe big bets, and don’t fold as a result. If their hand is better, the bluff comes straight out of the liar’s pocket. So the central feature of a game like five-card draw is not figuring out the probability of drawing an inside straight or three of a kind, although that’s certainly useful too. It’s about convincing others that your hand is stronger than it really is. Part of the key to accumulating bargaining chips, whether in poker or diplomacy, is engineering the future by exploiting leverage that really does not exist. Along with taking prudent risks, creating leverage is one of the most important features in changing outcomes. Of course, that is just a polite way of saying that it’s good to know when and how to lie.
Betting, whether with chips, stockholders’ money, perjury charges, or soldiers, can lead others to wrong inferences that benefit the bettor; but gambling always suffers from two limitations. First, it can be expensive to bet more than a hand is worth. Second, everyone has an interest in trying to figure out who is bluffing and who is being honest. Raising the stakes helps flush out the bluffers. The bigger the cumulative bet, the costlier it is to pretend to have the resolve to see a dispute through when the cards really are lousy. How much pain anyone is willing to risk on a bluff, and how similar their wagering is when they are bluffing and when they are really holding good cards, is crucial to the prospects of winning or of being unmasked. That, of course, is why diplomats, lawyers, and poker players need a good poker face, and it is why, for example, you take your broker’s advice more seriously if she invests a lot of her own money in a stock she’s recommending.
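A back-of-the-envelope calculation shows why raising the stakes flushes out bluffers. Suppose (these symbols are mine, purely for illustration) a bluff means betting $b$ into a pot of size $p$, winning the pot if the opponent folds, which happens with probability $q$, and losing the bet otherwise. Then

$$
EV(\text{bluff}) = q\,p - (1 - q)\,b > 0 \quad \Longleftrightarrow \quad q > \frac{b}{p + b}
$$

so the bigger the bet relative to the pot, the more often the opponent must fold for the bluff to break even. That is precisely why escalating the cumulative bet makes pretending so expensive.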
Getting the best results comes down to matching actions to beliefs. Gradually, under the right circumstances, exploiting information leads to consistency between what people see, what they think, and what they do, just as it does in Mastermind. Convergence in thinking facilitates deals, bargains, and the resolution of disputes.
With that, we’ve just completed the introductory course in game theory. Nicely done! Now we’re ready to go on to the more advanced course. In the next chapter we look in more depth at how the very fact of our being strategic changes everything going on around us. That will set the stage for working out how we can use strategy to change things to be better for ourselves and those we care about and, if we are altruistic enough, maybe even for almost everyone.