Collaborative Cheating

Why Two Heads Aren’t Necessarily Better than One

If you’ve ever worked in just about any organization, you know that working in teams accounts for a lot of your time. A great deal of economic activity and decision making takes place through collaboration. In fact, the majority of U.S. companies depend on group-based work, and more than half of all U.S. employees currently spend at least part of their day working in a group setting.1 Try to count the number of meetings, project teams, and collaborative experiences you’ve had over the last six months, and you will quickly realize how many working hours these group activities consume. Group work also plays a prominent role in education. For example, the majority of MBA students’ assignments consist of group-based tasks, and many undergraduate classes also require group-based projects.

In general, people tend to believe that working in groups has a positive influence on outcomes and that it increases the overall quality of decisions.2 (In fact, much research has shown that collaboration can decrease the quality of decisions. But that’s a topic for another time.) The common belief is that there is little to lose and everything to gain from collaboration—encouraging a sense of camaraderie, increasing the level of fun at work, and benefiting from sharing and developing new ideas—all of which add up to more motivated and effective employees. What’s not to love?

A FEW YEARS ago, in one of my graduate classes, I lectured about some of my research related to conflicts of interest (see chapter 3, “Blinded by Our Own Motivations”). After class, a student (I’ll call her Jennifer) told me that the discussion had struck a chord with her. It reminded her of an incident that had taken place a few years earlier, when she was working as a certified public accountant (CPA) for a large accounting firm.

Jennifer told me that her job had been to produce the annual reports, proxy statements, and other documents that would inform shareholders about the state of their companies’ affairs. One day her boss asked her to have her team prepare a report for the annual shareholders’ meeting of one of their larger clients. The task involved going over all of the client’s financial statements and determining the company’s financial standing. It was a large responsibility, and Jennifer and her team worked hard to put together a comprehensive and detailed report that was honest and realistic. She did her best to prepare the report as accurately as possible, without, for example, overclaiming the company’s profits or delaying the reporting of losses until the next accounting year. She then left the draft of the report on her boss’s desk, looking forward (somewhat anxiously) to his feedback.

Later that day, Jennifer got the report back with a note from her boss. It read, “I don’t like these numbers. Please gather your team and get me a revised version by next Wednesday.” Now, there are many reasons why her boss might not have “liked” the numbers, and it wasn’t entirely clear to her what he meant. Moreover, not “liking” the numbers is an entirely different matter from the numbers being wrong—which was never implied. A multitude of questions ran through Jennifer’s head: “What exactly did he want? How different should I make the numbers? Half a percent? One percent? Five percent?” She also didn’t understand who was going to be accountable for any of the “improvements” she made. If the revisions turned out to be overly optimistic and someone was going to take the blame for it down the road, would it be her boss or her?

THE PROFESSION OF accounting is itself a somewhat equivocal trade. Sure, there are some clear-cut rules. But then there is a vaguely titled body of suggestions—known as Generally Accepted Accounting Principles (GAAP)—that accountants are supposed to follow. These guidelines afford accountants substantial leeway; they are so general that there’s considerable variation in how accountants can interpret financial statements. (And often there are financial incentives to “bend” the guidelines to some degree.) For instance, one of the rules, “the principle of sincerity,” states that the accountant’s report should reflect the company’s financial status “in good faith.” That’s all well and good, but “in good faith” is both excessively vague and extremely subjective. Of course, not everything (in life or accounting) is precisely quantifiable, but “in good faith” raises a few questions: Does it mean that accountants can act in bad faith?* And toward whom is this good faith directed? The people who run the company? Those who would like the books to look impressive and profitable (which would increase their bonuses and compensation)? Or should it be directed toward the people who have invested in the company? Or is it about those who want a clear idea of the company’s financial condition?

Adding to the inherent complexity and ambiguity of her original task, Jennifer was now put under additional pressure by her boss. She’d prepared the initial report in what seemed to her to be good faith, but she realized that she was being asked to bend the accounting rules to some degree. Her boss wanted numbers that reflected more favorably upon the client company. After deliberating for a while, she concluded that she and her team should comply with his request; after all, he was her boss, and he certainly knew a lot more than she did about accounting, how to work with clients, and the client’s expectations. In the end, although Jennifer started the process with every intention of being as accurate as possible, she wound up going back to the drawing board, reviewing the statements, reworking the numbers, and returning with a “better” report. This time, her boss was satisfied.

AFTER JENNIFER TOLD me her story, I continued to think about her work environment and the effect that working on a team with her boss and teammates had on her decision to push the accounting envelope a bit further. Jennifer was certainly in the kind of situation that people frequently face in the workplace, but what really stood out for me was that in this case the cheating took place in the context of a team, which was different from anything we had studied before.

In all of our earlier experiments on cheating, one person alone made the decision to cheat (even if he or she was spurred along by a dishonest act of another person). But in Jennifer’s case, more than one person was directly involved, as is frequently the case in professional settings. In fact, it was clear to Jennifer that in addition to herself and her boss, her teammates would be affected by her actions. At the end of the year, the whole team would be evaluated together as a group—and their bonuses, raises, and future prospects were intertwined.

I started to wonder about the effects of collaboration on individual honesty. When we are part of a group, are we tempted to cheat more? Less? In other words, is a group setting conducive or destructive to honesty? This question is related to a topic we discussed in the previous chapter (“Cheating as an Infection”): whether it’s possible that people can “catch” cheating from one another. But social contagion and social dependency are different. It’s one thing to observe dishonest behavior in others and, based on that, alter our perceptions of what acceptable social norms are; it’s quite another if the financial welfare of others depends on us.

Let’s say you’re working on a project with your coworkers. You don’t necessarily observe them doing anything shady, but you know that they (and you) will benefit if you bend the rules a bit. Will you be more likely to do so if you know that they too will get something out of it? Jennifer’s account suggests that collaboration can cause us to take a few extra liberties with moral guidelines, but is this the case in general?

Before we take a tour of some experiments examining the impact of collaboration on cheating, let’s take a step back and think about possible positive and negative influences of teams and collaboration on our tendency to be dishonest.



Altruistic Cheating: Possible Costs of Collaboration

Work environments are socially complex, with multiple forces at play. Some of those forces can turn collaboration into an opportunity for cheating, in which individuals cheat to a higher degree because they realize that their actions can benefit people they like and care about.

Think about Jennifer again. Suppose she was a loyal person and liked to think of herself that way. Suppose further that she really liked her supervisor and team members and sincerely wanted to help them. Based on such considerations, she might have decided to fulfill her boss’s request or even take her report a step further—not because of any selfish reasons but out of concern for her boss’s well-being and deep regard for her team members. In her mind, “bad” numbers might get her boss and team members to fall out of favor with the client and the accounting company—meaning that Jennifer’s concern for her team might lead her to increase the magnitude of her misbehavior.

Underlying this impulse is what social scientists call social utility. This term is used to describe the irrational but very human and wonderfully empathetic part of us that causes us to care about others and take action to help them out when we can—even at a cost to ourselves. Of course, we are all motivated to act in our own self-interest to some degree, but we also have a desire to act in ways that benefit those around us, particularly those we care about. Such altruistic feelings motivate us to help a stranger who is stuck with a flat tire, return a wallet we’ve found in the street, volunteer at a homeless shelter, help a friend in need, and so on.

This tendency to care about others can also make it easier to be dishonest in situations where acting unethically will benefit them. From this perspective, we can think about cheating when others are involved as altruistic—where, like Robin Hood, we cheat because we are good people who care about the welfare of those around us.



Watch Out: Possible Benefits of Collaboration

In Plato’s story of the Ring of Gyges, a shepherd named Gyges finds a ring that makes him invisible. With this newfound power, he decides to go on a crime spree: he travels to the king’s court, seduces the queen, and conspires with her to kill the king and take control of the kingdom. In telling the story, Plato wonders whether there is anyone alive who could resist taking advantage of the power of invisibility. The question, then, is whether the only force that keeps us from carrying out misdeeds is the fear of being seen by others (J. R. R. Tolkien elaborated on this theme a couple of millennia later in The Lord of the Rings). To me, Plato’s story offers a nice illustration of the notion that group settings can inhibit our propensity to cheat. When we work within a team, other team members can act informally as monitors, and, knowing that we are being watched, we may be less inclined to act dishonorably.

A CLEVER EXPERIMENT by Melissa Bateson, Daniel Nettle, and Gilbert Roberts (all of Newcastle University) illustrated the idea that the mere feeling of being watched can inhibit bad behavior. The experiment took place in the kitchen of the university’s psychology department, where tea, coffee, and milk were available to professors and staff. Over the tea-making area hung a sign saying that beverage drinkers should contribute some cash to the honesty box located nearby. For ten weeks the sign was decorated with images, but the type of image alternated every week: on five of the weeks the sign was decorated with images of flowers, and on the other five weeks with images of eyes that stared directly at the beverage drinkers. At the end of every week, the researchers counted the money in the honesty box. What did they find? There was some money in the box at the end of the flower weeks, but when the staring eyes were “watching,” the box contained almost three times more money.

As is the case with many findings in behavioral economics, this experiment produced a mix of good and bad news. On the negative side, it showed that even members of the psychology department—who you would think would know better—tried to sneak off without paying their share for a common good. On the positive side, it showed that the mere suggestion that they were being watched made them behave more honestly. It also showed that a full-blown Orwellian “Big Brother is watching” approach is not necessary and that much more subtle cues of being watched can be effective in increasing honesty. Who knows? Perhaps a warning sign, complete with watchful eyes, on Jennifer’s boss’s wall might have made a difference in his behavior.

IN PONDERING JENNIFER’S situation, Francesca Gino, Shahar Ayal, and I began to wonder how dishonesty operates in collaborative environments. Does monitoring help to reduce cheating? Do social connections in groups increase both altruism and dishonesty? And if both of these forces exert their influence in opposite directions, which of the two is more powerful? To shed light on these questions, we turned once again to our favorite matrix experiment. We included the basic control condition (in which cheating was not possible) and the shredder condition (in which cheating was possible), and we added a new condition that introduced a collaborative element to the shredder condition.

As our first step in exploring the effects of groups, we didn’t want the collaborators to have an opportunity to discuss their strategy or to become friends, so we came up with a collaboration condition that included no familiarity or connection between the two team members. We called it the distant-group condition. Let’s say you are one of the participants in the distant-group condition. As in the regular shredder condition, you sit at a desk and use a number 2 pencil to work on the matrices for five minutes. When the time is up, you walk to the shredder and destroy your test sheet.

Up to that point, the procedure is the same as in the basic shredder condition, but now we introduce the collaborative element. The experimenter tells you that you are part of a two-person team and that each of you will be paid half of the group’s total earnings. The experimenter points out that your collection slip is either blue or green and has a number printed in the top-right corner. The experimenter asks you to walk around the room and find the person whose collection slip is different in color but with the same number in the top-right corner. When you find your partner, you sit down together, and each of you writes the number of matrices you correctly solved on your collection slip. Next, you write the other person’s score on your collection slip. And finally, you combine the numbers for a total performance measure. Once that’s done, you walk over to the experimenter together and hand him both collection slips. Since your worksheets have been shredded, the experimenter has no way to check the validity of your reported earnings. So he takes your word for it, pays you accordingly, and you split the takings.

Do you think people in this situation would cheat more than they did in the individual shredder condition? Here’s what we found: when participants learned that someone else would also benefit from their dishonesty, they engaged in even higher levels of cheating, claiming to have solved three more matrices than when they were cheating just for themselves. This result suggests that we humans have a weakness for altruistic cheating, even if we barely know the person who might benefit from our misbehavior. Sadly, it seems that even altruism can have a dark side.

That’s the bad news, and it’s not all of it.

HAVING ESTABLISHED ONE negative aspect of collaboration—that people are more dishonest when others, even strangers, can benefit from their cheating—we wanted to turn our experimental sights on a possible positive aspect of collaboration and see what would happen when team members watch each other. Imagine that you’re in a room with a few other participants, and you’re randomly paired up with someone you have never met before. As luck would have it, you’ve ended up with a friendly-looking young woman. Before you have a chance to talk to her, you have to complete the matrix task in complete silence. You are player 1, so you start first. You tear into the first matrix, then the second, and then the third. All the while, your partner watches your attempts, successes, and failures. When the five minutes are up, you silently put your pencil down and your partner picks hers up. She starts working on her matrix task while you observe her progress. When the time is up, you walk to the shredder together and shred your worksheets. Then you each write down your own score on the same slip of paper, combine the two numbers for your joint performance score, and walk over to the experimenter’s desk to collect your payment—all without saying a word to each other.

What level of cheating did we find? None at all. Despite the general inclination to cheat that we observe over and over, and despite the increase in the propensity to cheat when others can benefit from such actions, being closely supervised eliminated cheating altogether.

SO FAR, OUR experiments on cheating in groups showed two forces at play: altruistic tendencies get people to cheat more when their team members can benefit from their dishonesty, but direct supervision can reduce dishonesty and even eliminate it altogether. Given the coexistence of these two forces, the next question is: which force is more likely to overpower the other in more standard group interactions?

To answer this question, we needed to create an experimental setting that was more representative of how group members interact in a normal, day-to-day environment. You probably noticed that in the first two experiments, our participants didn’t really interact with each other, whereas in daily life, group discussion and friendly chatter are an essential and inherent part of group-based collaborations. Hoping to add this important social element to our experimental setup, we devised our next experiment. This time, participants were encouraged to talk to each other, get to know each other, and become friendly. We even gave them lists of questions that they could ask each other in order to break the ice. They then took turns monitoring each other while each of them solved the matrices.

Sadly, we found that cheating reared its ugly head when we added this social element. With both socializing and supervision in play, participants claimed to have correctly solved about four extra matrices. So whereas altruism can increase cheating and direct supervision can decrease it, altruistic cheating overpowers the supervisory effect when people are put together in a setting where they have a chance to socialize and be observed.




LONG-TERM RELATIONSHIPS

Most of us tend to think that the longer we are in a relationship with our doctors, accountants, financial advisers, lawyers, and so on, the more deeply they will care about our well-being and, as a consequence, the more likely they will be to put our needs ahead of their own. For example, imagine that you just received a (nonterminal) diagnosis from your physician and you are faced with two treatment options. One is to start an aggressive, expensive therapy; the other is to wait awhile and see how your body deals with the problem and how it progresses (“watchful waiting” is the official term for this). There is no definitive answer as to which of the two options is better for you, but it is clear that the expensive, aggressive one is better for your physician’s pocket. Now imagine that your physician tells you that you should pick the aggressive treatment option and that you should schedule it for next week at the latest. Would you trust his advice? Or would you take into account what you know about conflicts of interest, discount his advice, and maybe go for a second opinion? When faced with such dilemmas, most people trust their service providers to a very high degree, and they are even more likely to trust them the longer they have known them. After all, if we have known our advisers for many years, wouldn’t they start caring about us more? Wouldn’t they see things from our perspective and give us better advice?

Another possibility, however, is that as the relationship extends and grows, our paid advisers—intentionally or not—become more comfortable recommending treatments that are in their own best interest. Janet Schwartz (the Tulane professor who, along with me, enjoyed dinner with the pharmaceutical reps), Mary Frances Luce (a professor at Duke University), and I tackled this question, sincerely hoping that as relationships between clients and service providers deepened, professionals would care more about their clients’ welfare and less about their own. What we found, however, was the opposite.

We examined this question by analyzing data from millions of dental procedures over twelve years. We looked at instances when patients received fillings and whether the fillings were made of silver amalgam or white composite. You see, silver fillings are more durable and cost less; white fillings, on the other hand, are more expensive and break more easily but are more aesthetically pleasing. So when it comes to our front teeth, aesthetics often reign over practicality, making white fillings the preferred option. But when it comes to our less visible back teeth, silver fillings are the way to go.3

What we found was that about a quarter of all patients received the attractive but expensive white fillings in their hidden teeth rather than the functionally superior silver fillings. In those cases, the dentists were most likely making decisions that favored their own interests (higher initial pay and more frequent repairs) over their patients’ interests (lower cost and longer-lasting treatment).

As if that weren’t bad enough, we also found that this tendency is more pronounced the longer the patient sees the same dentist (we found the same pattern of results for other procedures as well). What this suggests is that as dentists become more comfortable with their patients, they also more frequently recommend procedures that are in their own financial interest. And long-term patients, for their part, are more likely to accept the dentist’s advice based on the trust that their relationship has engendered.*

The bottom line: there are clearly many benefits to continuity of care and ongoing patient-provider relationships. Yet, at the same time, we should also be aware of the costs these long-term relationships can have.

HERE’S WHAT WE’VE learned about collaborative cheating so far: altruistic tendencies lead people to cheat more when others, even total strangers, stand to benefit from their dishonesty; direct supervision can reduce cheating and even eliminate it altogether; and when socializing and supervision are combined, the altruistic pull toward cheating wins out.


BUT WAIT, THERE’S MORE! In our initial experiments, both the cheater and the partner benefited from every additional exaggeration of their score. So if you were the cheater in the experiment and you exaggerated the number of your correct responses by one, you would get half of the additional payment and your partner would get the same. This is certainly less financially rewarding than snagging the whole amount for yourself, but you would still benefit from your exaggeration to some degree.

To look into purely altruistic cheating, we introduced a condition in which the fruit of each participant’s cheating would benefit only their partner. What did we find? As it turns out, altruism is indeed a strong motivator for cheating. When cheating was carried out for purely altruistic reasons and the cheaters themselves did not gain anything from their act, overclaiming increased to an even larger degree.

Why might this be the case? I think that when both we and another person stand to benefit from our dishonesty, we operate out of a mix of selfish and altruistic motives. In contrast, when other people, and only other people, stand to benefit from our cheating, we find it far easier to rationalize our bad behavior in purely altruistic ways and subsequently we further relax our moral inhibitions. After all, if we are doing something for the pure benefit of others, aren’t we indeed a little bit like Robin Hood?*

FINALLY, IT IS worthwhile to say something more explicit about performance in the many control conditions that we had in this set of experiments. For each of our cheating conditions (individual shredder, group with shredder, distant group with shredder, friendly group with shredder, altruistic payoff with shredder), we also had a control condition in which there was no opportunity to cheat (that is, no shredder). Looking across these many different control conditions allowed us to see if the nature of collaboration influenced the level of performance. What we found was that performance was the same across all of these control conditions. Our conclusion? It seems that performance doesn’t necessarily improve when people work in groups—at least not as much as we’ve all been led to believe.

OF COURSE, WE cannot survive without the help of others. Working together is a crucial element of our lives. But clearly, collaboration is a double-edged sword. On the one hand, it increases enjoyment, loyalty, and motivation. On the other hand, it carries with it the increased potential for cheating. In the end—and very sadly—it may be that the people who care the most about their coworkers end up cheating the most. Of course, I am not advocating that we stop working in groups, stop collaborating, or stop caring about one another. But we do need to recognize the potential costs of collaboration and increased affinity.



The Irony of Collaborative Work

If collaboration increases dishonesty, what can we do about it? One obvious answer is to increase monitoring. In fact, this seems to be the default response of government regulators to every instance of corporate misconduct. For example, the Enron fiasco brought about a large set of reporting regulations known as the Sarbanes-Oxley Act, and the financial crisis of 2008 ushered in an even larger set of regulations (largely emerging from the Dodd-Frank Wall Street Reform and Consumer Protection Act), which were designed to regulate and increase supervision of the financial industry.

To some degree, there is no question that monitoring can be helpful, but it is also clear from our results that increased monitoring alone is unlikely to completely overcome our ability to justify our own dishonesty—particularly when others stand to gain from our misbehavior (not to mention the high financial costs of compliance with such regulations).

In some cases, instead of adding layers and layers of rules and regulations, perhaps we could set our sights on changing the nature of group-based collaboration. An interesting solution to this problem was recently implemented in a large international bank by a former student of mine named Gino. To allow his team of loan officers to work together without risking increased dishonesty (for example, by recording the value of the loans as higher than they really were in an effort to show larger short-run profits), he set up a unique supervisory system. He told his loan officers that an outside group would review their processing and approval of loan applications. The outside group was socially disconnected from the loan-making team and had no loyalty or motivation to help out the loan officers. To make sure that the two groups were separated, Gino located them in different office buildings. And he ensured that they had no direct dealings with each other or even knew the individuals in the other group.

I tried to get the data from Gino in order to evaluate the success of his approach, but the lawyers of this large bank stopped us. So I don’t know whether this approach worked or how his employees felt about the arrangement, but I suspect that the mechanism had at least some downsides. It probably decreased the fun that the loan officers had during their meetings. It likely also increased the stress surrounding the group’s decisions, and it was certainly not cheap to implement. Nevertheless, Gino told me that overall, adding the objective and anonymous monitoring element seemed to have a positive effect on ethics, morals, and the bottom line.

CLEARLY, THERE ARE no silver bullets for the complex issue of cheating in group settings. Taken together, I think that our findings have serious implications for organizations, especially considering the predominance of collaborative work in our day-to-day professional lives. There is also no question that what we have learned about the extent and complexity of dishonesty in social settings is rather depressing. Still, by understanding the possible pitfalls involved in collaboration, we can take some steps toward curbing dishonest behavior.
