Testing the Simple Model of Rational Crime (SMORC)
Let me come right out and say it. They cheat. You cheat. And yes, I also cheat from time to time.
As a college professor, I try to mix things up a bit in order to keep my students interested in the material. To this end, I occasionally invite interesting guest speakers to class, which is also a nice way to reduce the time I spend on preparation. Basically, it’s a win-win-win situation for the guest speaker, the class, and, of course, me.
For one of these “get out of teaching free” lectures, I invited a special guest to my behavioral economics class. This clever, well-established man has a fine pedigree: before becoming a legendary business consultant to prominent banks and CEOs, he had earned his juris doctor and, before that, a bachelor’s at Princeton. “Over the past few years,” I told the class, “our distinguished guest has been helping business elites achieve their dreams!”
With that introduction, the guest took the stage. He was forthright from the get-go. “Today I am going to help you reach your dreams. Your dreams of MONEY!” he shouted with a thumping, Zumba-trainer voice. “Do you guys want to make some MONEY?”
Everyone nodded and laughed, appreciating his enthusiastic, non-buttoned-down approach.
“Is anybody here rich?” he asked. “I know I am, but you college students aren’t. No, you are all poor. But that’s going to change through the power of CHEATING! Let’s do it!”
He then recited the names of some infamous cheaters, from Genghis Khan through the present, including a dozen CEOs, Alex Rodriguez, Bernie Madoff, Martha Stewart, and more. “You all want to be like them,” he exhorted. “You want to have power and money! And all that can be yours through cheating. Pay attention, and I will give you the secret!”
With that inspiring introduction, it was now time for a group exercise. He asked the students to close their eyes and take three deep, cleansing breaths. “Imagine you have cheated and gotten your first ten million dollars,” he said. “What will you do with this money? You! In the turquoise shirt!”
“A house,” said the student bashfully.
“A HOUSE? We rich people call that a MANSION. You?” he said, pointing to another student.
“A vacation.”
“To the private island you own? Perfect! When you make the kind of money that great cheaters make, it changes your life. Is anyone here a foodie?”
A few students raised their hands.
“What about a meal made personally by Jacques Pépin? A wine tasting at Châteauneuf-du-Pape? When you make enough money, you can live large forever. Just ask Donald Trump! Look, we all know that for ten million dollars you would drive over your boyfriend or girlfriend. I am here to tell you that it is okay and to release the handbrake for you!”
By that time most of the students were starting to realize that they were not dealing with a serious role model. But having spent the last ten minutes sharing dreams about all the exciting things they would do with their first $10 million, they were torn between the desire to be rich and the recognition that cheating is morally wrong.
“I can sense your hesitation,” the lecturer said. “You must not let your emotions dictate your actions. You must confront your fears through a cost-benefit analysis. What are the pros of getting rich by cheating?” he asked.
“You get rich!” the students responded.
“That’s right. And what are the cons?”
“You get caught!”
“Ah,” said the lecturer, “There is a CHANCE you will get caught. BUT—here is the secret! Getting caught cheating is not the same as getting punished for cheating. Look at Bernie Ebbers, the ex-CEO of WorldCom. His lawyer whipped out the ‘Aw, shucks’ defense, saying that Ebbers simply did not know what was going on. Or Jeff Skilling, former CEO of Enron, who famously wrote an e-mail saying, ‘Shred the documents, they’re onto us.’ Skilling later testified that he was just being ‘sarcastic’! Now, if these defenses don’t work, you can always skip town to a country with no extradition laws!”
Slowly but surely, my guest lecturer—who in real life is a stand-up comedian named Jeff Kreisler and the author of a satirical book called Get Rich Cheating—was making a hard case for approaching financial decisions on a purely cost-benefit basis and paying no attention to moral considerations. Listening to Jeff’s lecture, the students realized that from a perfectly rational perspective, he was absolutely right. But at the same time they could not help but feel disturbed and repulsed by his endorsement of cheating as the best path to success.
At the end of the class, I asked the students to think about the extent to which their own behavior fit with the SMORC. “How many opportunities to cheat without getting caught do you have in a regular day?” I asked them. “How many of these opportunities do you take? How much more cheating would we see around us if everyone took Jeff’s cost-benefit approach?”
Setting Up the Testing Stage
Both Becker’s and Jeff’s approaches to dishonesty comprise three basic elements: (1) the benefit that one stands to gain from the crime; (2) the probability of getting caught; and (3) the expected punishment if one is caught. By comparing the first component (the gain) with the last two components (the costs), the rational human being can determine whether committing a particular crime is worth it or not.
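For readers who like to see the arithmetic spelled out, the three elements above can be written as a tiny expected-value calculation. This is only an illustrative sketch of the SMORC logic; the dollar amounts and probabilities below are invented for the example, not figures from any study.

```python
# A minimal sketch of the SMORC cost-benefit calculation.
# All numbers here are hypothetical illustrations.

def smorc_expected_value(gain, p_caught, punishment):
    """Expected payoff of a crime under the Simple Model of Rational Crime:
    the gain, weighted against the chance of getting caught times the penalty."""
    return (1 - p_caught) * gain + p_caught * (gain - punishment)

def should_cheat(gain, p_caught, punishment):
    """A purely rational actor cheats whenever the expected value is positive."""
    return smorc_expected_value(gain, p_caught, punishment) > 0

# Example: a $1,000 gain, a 10% chance of getting caught, a $5,000 fine.
# Expected value = 0.9 * 1000 + 0.1 * (1000 - 5000) = 500, so the model says cheat.
print(should_cheat(1000, 0.10, 5000))
```

On this view, Jeff's advice follows mechanically: raise the gain or lower the odds of punishment, and cheating becomes "rational."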
Now, it could be that the SMORC is an accurate description of the way people make decisions about honesty and cheating, but the uneasiness experienced by my students (and myself) with the implications of the SMORC suggests that it’s worth digging a bit further to figure out what is really going on. (The next few pages will describe in some detail the way we will measure cheating throughout this book, so please pay attention.)
My colleagues Nina Mazar (a professor at the University of Toronto) and On Amir (a professor at the University of California at San Diego) and I decided to take a closer look at how people cheat. We posted announcements all over the MIT campus (where I was a professor at the time), offering students a chance to earn up to $10 for about ten minutes of their time.* At the appointed time, participants entered a room where they sat in chairs with small desks attached (the typical exam-style setup). Next, each participant received a sheet of paper containing a series of twenty different matrices (structured like the example you see on the next page) and was told that the task was to find in each of these matrices two numbers that added up to 10 (we call this the matrix task, and we will refer to it throughout much of this book). We also told them that they had five minutes to solve as many of the twenty matrices as possible and that they would get paid 50 cents per correct answer (an amount that varied depending on the experiment). Once the experimenter said, “Begin!” the participants turned the page over and started solving these simple math problems as quickly as they could.
On the next page is a sample of what the sheet of paper looked like, with one matrix enlarged. How quickly can you find the pair of numbers that adds up to 10?
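If you'd rather let a computer do the searching, the matrix task reduces to a simple pair-sum check. The grid values below are made up for illustration (the real worksheets used similar-looking decimals), and the function name is mine, not anything from the original study.

```python
# A sketch of the matrix task: find the two cell values that sum to 10.
# The example grid is hypothetical, in the style of the book's matrices.
from itertools import combinations

def find_pair(matrix, target=10.0):
    """Return the first pair of values in the grid that adds up to the target,
    or None if no such pair exists. The grid is a flat list of the numbers."""
    for a, b in combinations(matrix, 2):
        if abs(a + b - target) < 1e-9:  # tolerate floating-point rounding
            return a, b
    return None

example = [1.69, 4.67, 5.82, 6.36, 5.19, 4.81, 3.05, 5.82, 5.06, 6.47, 2.91, 8.19]
print(find_pair(example))  # 5.19 + 4.81 = 10
```

Of course, the whole point of the experiment is that the participants, unlike the computer, got to decide how many pairs to *claim* they had found.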
This was how the experiment started for all the participants, but what happened at the end of the five minutes was different depending on the particular condition.
Imagine that you are in the control condition and you are hurrying to solve as many of the twenty matrices as possible. After a minute passes, you’ve solved one. Two more minutes pass, and you’re up to three. Then time is up, and you have four completed matrices. You’ve earned $2. You walk up to the experimenter’s desk and hand her your solutions. After checking your answers, the experimenter smiles approvingly. “Four solved,” she says and then counts out your earnings. “That’s it,” she says, and you’re on your way. (The scores in this control condition gave us the actual level of performance on this task.)
Now imagine you are in another setup, called the shredder condition, in which you have the opportunity to cheat. This condition is similar to the control condition, except that after the five minutes are up the experimenter tells you, “Now that you’ve finished, count the number of correct answers, put your worksheet through the shredder at the back of the room, and then come to the front of the room and tell me how many matrices you solved correctly.” If you were in this condition you would dutifully count your answers, shred your worksheet, report your performance, get paid, and be on your way.
If you were a participant in the shredder condition, what would you do? Would you cheat? And if so, by how much?
With the results for both of these conditions, we could compare the performance in the control condition, in which cheating was impossible, to the reported performance in the shredder condition, in which cheating was possible. If the scores were the same, we would conclude that no cheating had occurred. But if we saw that, statistically speaking, people performed “better” in the shredder condition, then we could conclude that our participants overreported their performance (cheated) when they had the opportunity to shred the evidence. And the degree of the group’s cheating would be the difference between the number of matrices participants claimed to have solved correctly in the shredder condition and the number of matrices participants actually solved correctly in the control condition.
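The measurement logic above is simple enough to write down in a few lines. This is a hypothetical sketch of the group-level comparison, with invented scores chosen to mirror the chapter's averages; it is not the actual analysis code or data.

```python
# A sketch of the cheating measure: average reported score in the shredder
# condition minus average actual score in the control condition.
# The score lists below are hypothetical illustrations.

def cheating_magnitude(control_scores, shredder_reports):
    """Estimate group-level cheating as the gap between what the shredder
    group claimed and what the control group actually solved."""
    actual = sum(control_scores) / len(control_scores)
    reported = sum(shredder_reports) / len(shredder_reports)
    return reported - actual

control = [4, 3, 5, 4, 4]    # verified scores, cheating impossible
shredder = [6, 5, 7, 6, 6]   # self-reported scores, evidence shredded
print(cheating_magnitude(control, shredder))  # 2.0
```

Note that this is a comparison of group averages: it tells us how much cheating occurred overall, not which individual participants inflated their scores.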
Perhaps somewhat unsurprisingly, we found that given the opportunity, many people did fudge their score. In the control condition, participants solved on average four out of the twenty matrices. Participants in the shredder condition claimed to have solved an average of six—two more than in the control condition. And this overall increase did not result from a few individuals who claimed to solve a lot more matrices, but from lots of people who cheated by just a little bit.
More Money, More Cheating?
With this basic quantification of dishonesty under our belts, Nina, On, and I were ready to investigate what forces motivate people to cheat more and less. The SMORC tells us that people should cheat more when they stand a chance of getting more money without being caught or punished. That sounds both simple and intuitively appealing, so we decided to test it next. We set up another version of the matrix experiment, only this time we varied the amount of money the participants would get for solving each matrix correctly. Some participants were promised 25 cents per question; others were promised 50 cents, $1, $2, or $5. At the highest level, we promised some participants a whopping $10 for each correct answer. What do you think happened? Did the amount of cheating increase with the amount of money offered?
Before I divulge the answer, I want to tell you about a related experiment. This time, rather than taking the matrix test themselves, we asked another group of participants to guess how many answers those in the shredder condition would claim to solve correctly at each level of payment. Their predictions were that the claims of correctly solved matrices would increase as the amount of money went up. Essentially, their intuitive theory was the same as the premise of the SMORC. But they were wrong. It turned out that when we looked at the magnitude of cheating, our participants added two questions to their scores on average, regardless of the amount of money they could make per question. In fact, the amount of cheating was slightly lower when we promised our participants the highest amount of $10 for each correct answer.
Why wouldn’t the level of cheating increase with the amount of money offered? Why was cheating slightly lower at the highest level of payment? This insensitivity to the amount of reward suggests that dishonesty is most likely not an outcome of a cost-benefit analysis. If it were, the increase in the benefit (the amount of money offered) would lead to more cheating. And why was the level of cheating lowest when the payment was greatest? I suspect that when the amount of money that the participants could make per question was $10, it was harder for them to cheat and still feel good about their own sense of integrity (we will come back to this later). At $10 per matrix, we’re not talking about cheating on the level of, say, taking a pencil from the office. It’s more akin to taking several boxes of pens, a stapler, and a ream of printer paper, which is much more difficult to ignore or rationalize.
To Catch a Thief
Our next experiment looked at what might happen if participants felt that there was a higher probability of getting caught cheating. Basically, we inserted the mental equivalent of a partially operating security camera into the experiment.
We asked one group of participants to shred one half of their worksheet—which meant that if they were dishonest, we might find some evidence of it. We asked a second group to shred the whole worksheet, meaning that they could get off scot-free. Finally, we asked a third group to shred the whole worksheet, leave the testing room, and pay themselves from a sizable bowl of money filled with more than $100 in small bills and coins. In this self-paying condition, participants could not only cheat and get away with it, but they could also help themselves to a lot of extra cash.
Again, we asked a different group to predict how many questions, on average, participants would claim to solve correctly in each condition. Once again, they predicted that the human tendency for dishonesty would follow the SMORC and that participants would claim to solve more matrices as the probability of getting caught decreased.
What did we find? Once again, lots of people cheated, but just by a bit, and the level of cheating was the same across all three conditions (shredding half, shredding all, shredding all and self-paying).
NOW, YOU MIGHT wonder if the participants in our experiments really believed that in our experimental setting, they could cheat and not get caught. To make it clear that this was indeed the case, Racheli Barkan (a professor at Ben-Gurion University of the Negev), Eynav Maharabani (a master’s candidate working with Racheli), and I carried out another study where either Eynav or a different research assistant, Tali, proctored the experiment. Eynav and Tali were similar in many ways—but Eynav is noticeably blind, which meant that it was easier to cheat when she was in charge. When it was time to pay themselves from the pile of money that was placed on the table in front of the experimenter, participants could grab as much of the cash as they wanted and Eynav would not be able to see them do so.
So did they cheat Eynav to a greater degree? They still took a bit more money than they deserved, but they cheated just as much when Tali supervised the experiments as they did when Eynav was in charge.
These results suggest that the probability of getting caught doesn’t have a substantial influence on the amount of cheating. Of course, I am not arguing that people are entirely uninfluenced by the likelihood of being caught—after all, no one is going to steal a car when a policeman is standing nearby—but the results show that getting caught does not have as great an influence as we tend to expect, and it certainly did not play a role in our experiments.
YOU MIGHT BE wondering whether the participants in our experiments were using the following logic: “If I cheat by only a few questions, no one will suspect me. But if I cheat by more than a small amount, it may raise suspicion and someone might question me about it.”
We tested this idea in our next experiment. This time, we told half of the participants that the average student in this experiment solves about four matrices (which was true). We told the other half that the average student solves about eight matrices. Why did we do this? Because if the level of cheating is based on the desire to avoid standing out, then our participants would cheat in both conditions by a few matrices beyond what they believed was the average performance (meaning that they would claim to solve around six matrices when they thought the average was four and about ten matrices when they thought the average was eight).
So how did our participants behave when they expected others to solve more matrices? They were not influenced even to a small degree by this knowledge. They cheated by about two extra answers (they solved four and reported that they had solved six) regardless of whether they thought that others solved on average four or eight matrices.
This result suggests that cheating is not driven by concerns about standing out. Rather, it shows that our sense of our own morality is connected to the amount of cheating we feel comfortable with. Essentially, we cheat up to the level that allows us to retain our self-image as reasonably honest individuals.
Into the Wild
Armed with this initial evidence against the SMORC, Racheli and I decided to get out of the lab and venture into a more natural setting. We wanted to examine common situations that one might encounter on any given day. And we wanted to test “real people” and not just students (though I have discovered that students don’t like to be told that they are not real people). Another component missing from our experimental paradigm up to that point was the opportunity for people to behave in positive and benevolent ways. In our lab experiments, the best our participants could do was not cheat. But in many real-life situations, people can exhibit behaviors that are not only neutral but are also charitable and generous. With this added nuance in mind, we looked for situations that would let us test both the negative and the positive sides of human nature.
IMAGINE A LARGE farmer’s market spanning the length of a street. The market is located in the heart of Be’er Sheva, a town in southern Israel. It’s a hot day, and hundreds of merchants have set out their wares in front of the stores that line both sides of the street. You can smell fresh herbs and sour pickles, freshly baked bread and ripe strawberries, and your eyes wander over plates of olives and cheese. The sound of merchants shouting praises of their goods surrounds you: “Rak ha yom!” (only today), “Matok!” (sweet), “Bezol!” (cheap).
Eynav and Tali entered the market and headed in different directions, Eynav using a white cane to navigate the market. Each of them approached a few vegetable vendors and asked each of the sellers to pick out two kilos (about 4.5 pounds) of tomatoes for them while they went on another errand. Once they made their request, they left for about ten minutes, returned to pick up their tomatoes, paid, and left. From there they took the tomatoes to another vendor at the far end of the market who had agreed to judge the quality of the tomatoes from each seller. By comparing the quality of the tomatoes that were sold to Eynav and to Tali, we could figure out who got better produce and who got worse.
Did Eynav get a raw deal? Keep in mind that from a purely rational perspective, it would have made sense for the seller to choose his worst-looking tomatoes for her. After all, she could not possibly benefit from their aesthetic quality. A traditional economist from, say, the University of Chicago might even argue that in an effort to maximize the social welfare of everyone involved (the seller, Eynav, and the other consumers), the seller should have sold her the worst-looking tomatoes, keeping the pretty ones for people who could also enjoy that aspect of the tomatoes. As it turned out, the visual quality of the tomatoes chosen for Eynav was not worse and, in fact, was superior to those chosen for Tali. The sellers went out of their way, and at some cost to their business, to choose higher-quality produce for a blind customer.
WITH THOSE OPTIMISTIC results, we next turned to another profession that is often regarded with great suspicion: cab drivers. In the taxi world, there is a popular stunt called “long hauling,” which is the official term for taking passengers who don’t know their way around to their destination via a lengthy detour, sometimes adding substantially to the fare. For example, a study of cab drivers in Las Vegas found that some cabbies drive from McCarran International Airport to the Strip by going through a tunnel to Interstate 215, which can amount to a fare of $92 for what should be a two-mile journey.1
Given the reputation that cabbies have, one has to wonder whether they cheat in general and whether they would be more likely to cheat those who cannot detect their cheating. In our next experiment we asked Eynav and Tali to take a cab back and forth between the train station and Ben-Gurion University of the Negev twenty times. The way the cabs on this particular route work is as follows: if you have the driver activate the meter, the fare is around 25 NIS (about $7). However, there is a customary flat rate of 20 NIS (about $5.50) if the meter is not activated. In our setup, both Eynav and Tali always asked to have the meter activated. Sometimes drivers would tell the “amateur” passengers that it would be cheaper not to activate the meter; regardless, both of them always insisted on having the meter activated. At the end of the ride, Eynav and Tali asked the cab driver how much they owed them, paid, left the cab, and waited a few minutes before taking another cab back to the place they had just left.
Looking at the charges, we found that Eynav paid less than Tali, despite the fact that they both insisted on paying by the meter. How could this be? One possibility was that the drivers had taken Eynav on the shortest and cheapest route and had taken Tali for a longer ride. If that were the case, it would mean that the drivers had not cheated Eynav but that they had cheated Tali to some degree. But Eynav had a different account of the results. “I heard the cab drivers activate the meter when I asked them to,” she told us, “but later, before we reached our final destination, I heard many of them turn the meter off so that the fare would come out close to twenty NIS.” “That certainly never happened to me,” Tali said. “They never turned off the meter, and I always ended up paying around twenty-five NIS.”
There are two important aspects to these results. First, it’s clear that the cab drivers did not perform a cost-benefit analysis in order to optimize their earnings. If they had, they would have cheated Eynav more by telling her that the meter reading was higher than it really was or by driving her around the city for a bit. Second, the cab drivers did better than simply not cheat; they took Eynav’s interest into account and sacrificed some of their own income for her benefit.
Making Fudge
Clearly there’s a lot more going on here than Becker and standard economics would have us believe. For starters, the finding that the level of dishonesty is not influenced to a large degree (to any degree in our experiments) by the amount of money we stand to gain from being dishonest suggests that dishonesty is not an outcome of simply considering the costs and benefits of dishonesty. Moreover, the results showing that the level of dishonesty is unaltered by changes in the probability of being caught makes it even less likely that dishonesty is rooted in a cost-benefit analysis. Finally, the fact that many people cheat just a little when given the opportunity to do so suggests that the forces that govern dishonesty are much more complex (and more interesting) than predicted by the SMORC.
What is going on here? I’d like to propose a theory that we will spend much of this book examining. In a nutshell, the central thesis is that our behavior is driven by two opposing motivations. On one hand, we want to view ourselves as honest, honorable people. We want to be able to look at ourselves in the mirror and feel good about ourselves (psychologists call this ego motivation). On the other hand, we want to benefit from cheating and get as much money as possible (this is the standard financial motivation). Clearly these two motivations are in conflict. How can we secure the benefits of cheating and at the same time still view ourselves as honest, wonderful people?
This is where our amazing cognitive flexibility comes into play. Thanks to this human skill, as long as we cheat by only a little bit, we can benefit from cheating and still view ourselves as marvelous human beings. This balancing act is the process of rationalization, and it is the basis of what we’ll call the “fudge factor theory.”
To give you a better understanding of the fudge factor theory, think of the last time you calculated your tax return. How did you make peace with the ambiguous and unclear decisions you had to make? Would it be legitimate to write off a portion of your car repair as a business expense? If so, what amount would you feel comfortable with? And what if you had a second car? I’m not talking about justifying our decisions to the Internal Revenue Service (IRS); I’m talking about the way we are able to justify our exaggerated level of tax deductions to ourselves.
Or let’s say you go out to a restaurant with friends and they ask you to explain a work project you’ve been spending a lot of time on lately. Having done that, is the dinner now an acceptable business expense? Probably not. But what if the meal occurred during a business trip or if you were hoping that one of your dinner companions would become a client in the near future? If you have ever made allowances of this sort, you too have been playing with the flexible boundaries of your ethics. In short, I believe that all of us continuously try to identify the line where we can benefit from dishonesty without damaging our own self-image. As Oscar Wilde once wrote, “Morality, like art, means drawing a line somewhere.” The question is: where is the line?
I THINK JEROME K. JEROME got it right in his 1889 novel, Three Men in a Boat (to Say Nothing of the Dog), in which he tells a story about one of the most famously lied-about topics on earth: fishing. Here’s what he wrote:
I knew a young man once, he was a most conscientious fellow and, when he took to fly-fishing, he determined never to exaggerate his hauls by more than twenty-five per cent.
“When I have caught forty fish,” said he, “then I will tell people that I have caught fifty, and so on. But I will not lie any more than that, because it is sinful to lie.”
Although most people haven’t consciously figured out (much less announced) their acceptable rate of lying like this young man, this overall approach seems to be quite accurate; each of us has a limit to how much we can cheat before it becomes absolutely “sinful.”
Trying to figure out the inner workings of the fudge factor—the delicate balance between the contradictory desires to maintain a positive self-image and to benefit from cheating—is what we are going to turn our attention to next.