Fun with the Fudge Factor

Here’s a little joke for you:

Eight-year-old Jimmy comes home from school with a note from his teacher that says, “Jimmy stole a pencil from the student sitting next to him.” Jimmy’s father is furious. He goes to great lengths to lecture Jimmy and let him know how upset and disappointed he is, and he grounds the boy for two weeks. “And just wait until your mother comes home!” he tells the boy ominously. Finally he concludes, “Anyway, Jimmy, if you needed a pencil, why didn’t you just say something? Why didn’t you simply ask? You know very well that I can bring you dozens of pencils from work.”

If we smirk at this joke, it’s because we recognize the complexity of human dishonesty that is inherent to all of us. We realize that a boy stealing a pencil from a classmate is definitely grounds for punishment, but we are willing to take many pencils from work without a second thought.

To Nina, On, and me, this little joke suggested the possibility that certain types of activities can more easily loosen our moral standards. Perhaps, we thought, if we increased the psychological distance between a dishonest act and its consequences, the fudge factor would increase and our participants would cheat more. Of course, encouraging people to cheat more is not something we want to promote in general. But for the purpose of studying and understanding cheating, we wanted to see what kinds of situations and interventions might further loosen people’s moral standards.

To test this idea, we first tried a university version of the pencil joke: One day, I sneaked into an MIT dorm and seeded many communal refrigerators with one of two tempting baits. In half of the refrigerators, I placed six-packs of Coca-Cola; in the others, I slipped in a paper plate with six $1 bills on it. I went back from time to time to visit the refrigerators and see how my Cokes and money were doing—measuring what, in scientific terms, we call the half-life of Coke and money.

As anyone who has been to a dorm can probably guess, within seventy-two hours all the Cokes were gone, but what was particularly interesting was that no one touched the bills. Now, the students could have taken a dollar bill, walked over to the nearby vending machine and gotten a Coke and change, but no one did.

I must admit that this is not a great scientific experiment, since students often see cans of Coke in their fridge, whereas discovering a plate with a few dollar bills on it is rather unusual. But this little experiment suggests that we human beings are ready and willing to steal something that does not explicitly reference monetary value—that is, something that lacks the face of a dead president. However, we shy away from directly stealing money to an extent that would make even the most pious Sunday school teacher proud. Similarly, we might take some paper from work to use in our home printer, but it would be highly unlikely that we would ever take $3.50 from the petty-cash box, even if we turned right around and used the money to buy paper for our home printer.

To examine, in a more controlled way, how distance from money influences dishonesty, we set up another version of the matrix experiment, this time including a condition in which cheating was one step removed from money. As in our previous experiments, participants in the shredder condition had the opportunity to cheat by shredding their worksheets and lying about the number of matrices they’d solved correctly. When the participants finished the task, they shredded their worksheet, approached the experimenter, and said, “I solved X* matrices, please give me X dollars.”

The innovation in this experiment was the “token” condition. The token condition was similar to the shredder condition, except that the participants were paid in plastic chips instead of dollars. In the token condition, once participants finished shredding their worksheets, they approached the experimenter and said, “I solved X matrices, please give me X tokens.” Once they received their chips, they walked twelve feet to a nearby table, where they handed in their tokens and received cold, hard cash.

As it turned out, those who lied for tokens that a few seconds later became money cheated about twice as much as those who lied directly for money. I have to confess that, although I had suspected that participants in the token condition would cheat more, I was surprised by the increase in cheating that came with being just one small step removed from money. Evidently, people are more apt to be dishonest in the presence of nonmonetary objects—such as pencils and tokens—than in the presence of actual money.

From all the research I have done over the years, the idea that worries me the most is that the more cashless our society becomes, the more our moral compass slips. If being just one step removed from money can increase cheating to such a degree, just imagine what can happen as we become an increasingly cashless society. Could it be that stealing a credit card number is much less difficult from a moral perspective than stealing cash from someone’s wallet? Of course, digital money (such as a debit or credit card) has many advantages, but it might also separate us from the reality of our actions to some degree. If being one step removed from money liberates people from their moral shackles, what will happen as more and more banking is done online? What will happen to our personal and social morality as financial products become more obscure and less recognizably related to money (think, for example, about stock options, derivatives, and credit default swaps)?



Some Companies Already Know This!

As scientists, we took great care to document, measure, and examine the influence of being one step removed from money. But I suspect that some companies intuitively understand this principle and use it to their advantage. Consider, for example, this letter that I received from a young consultant:


Dear Dr. Ariely,

I graduated a few years ago with a BA degree in Economics from a prestigious college and have been working at an economic consulting firm, which provides services to law firms.

The reason I decided to contact you is that I have been observing and participating in a well-documented phenomenon of overstating billable hours by economic consultants. To avoid sugarcoating it, let’s call it cheating. From the most senior people all the way down to the lowest analyst, the incentive structure for consultants encourages cheating: no one checks to see how much we bill for a given task; there are no clear guidelines as to what is acceptable; and if we have the lowest billability among fellow analysts, we are the most likely to get axed. These factors create the perfect environment for rampant cheating.

The lawyers themselves get a hefty cut of every hour we bill, so they don’t mind if we take longer to finish a project. While lawyers do have some incentive to keep costs down to avoid enraging clients, many of the analyses we perform are very difficult to evaluate. Lawyers know this and seem to use it to their advantage. In effect, we are cheating on their behalf; we get to keep our jobs and they get to keep an additional profit.

Here are some specific examples of how cheating is carried out in my company:


• A deadline was fast approaching, and we were working extremely long hours. Budget didn’t seem to be an issue, and when I asked how much of my day I should bill, my boss (a midlevel project manager) told me to take the total amount of time I was in the office and subtract two hours, one for lunch and one for dinner. I said that I had taken a number of other breaks while the server was running my programs, and she said I could count those as mental health breaks that would promote higher productivity later.


• A good friend of mine in the office adamantly refused to overbill and consequently had an overall billing rate that was about 20 percent lower than the average. I admire his honesty, but when it was time to lay people off, he was the first to go. What kind of message does that send to the rest of us?


• One person bills every hour he is monitoring his email for a project, whether or not he receives any work to do. He is “on-call,” he says.


• Another guy often works from home and seems to bill a lot, but when he is in the office he never seems to have any work to do.


These kinds of examples go on and on. There is no doubt that I am complicit in this behavior, but seeing it more clearly makes me want to fix the problems. Do you have any advice? What would you do in my situation?

Sincerely yours,

Jonah

Unfortunately, the problems Jonah noted are commonplace, and they are a direct outcome of the way we think about our own morality. Here is another way to think about this issue: One morning I discovered that someone had broken the window of my car and stolen my portable GPS system. Certainly, I was very annoyed, but in terms of its economic impact on my financial future, this crime had a very small effect. On the other hand, think about how much my lawyers, stockbrokers, mutual fund managers, insurance agents, and others probably take from me (and all of us) over the years by slightly overcharging, adding hidden fees, and so on. Each of these actions by itself is probably not very financially significant, but together they add up to much more than a few navigation devices. At the same time, I suspect that unlike the person who took my GPS, those white-collar transgressors think of themselves as highly moral people because their actions are relatively small and, most important, several steps removed from my pocket.

The good news is that once we understand how our dishonesty increases when we are one or more steps removed from money, we can try to clarify and emphasize the links between our actions and the people they can affect. At the same time, we can try to shorten the distance between our actions and the money in question. By taking such steps, we can become more cognizant of the consequences of our actions and, with that awareness, increase our honesty.




LESSONS FROM LOCKSMITHS

Not too long ago, one of my students, Peter, told me a story that rather nicely captures our misguided efforts to decrease dishonesty.

One day, Peter locked himself out of his house, so he called around to find a locksmith. It took him a while to find one who was certified by the city to unlock doors. The locksmith finally pulled up in his truck and picked the lock in about a minute.

“I was amazed at how quickly and easily this guy was able to open the door,” Peter told me. Then he passed on a little lesson in morality he learned from the locksmith that day.

In response to Peter’s amazement, the locksmith told Peter that locks are on doors only to keep honest people honest. “One percent of people will always be honest and never steal,” the locksmith said. “Another one percent will always be dishonest and always try to pick your lock and steal your television. And the rest will be honest as long as the conditions are right—but if they are tempted enough, they’ll be dishonest too. Locks won’t protect you from the thieves, who can get in your house if they really want to. They will only protect you from the mostly honest people who might be tempted to try your door if it had no lock.”

After reflecting on these observations, I came away thinking that the locksmith was probably right. It’s not that 98 percent of people are immoral or will cheat anytime the opportunity arises; it’s more likely that most of us need little reminders to keep ourselves on the right path.



How to Get People to Cheat Less

Now that we had figured out how the fudge factor works and how to expand it, our next step was to see whether we could shrink the fudge factor and get people to cheat less. This idea, too, was spawned by a little joke:

A visibly upset man goes to see his rabbi one day and says, “Rabbi, you won’t believe what happened to me! Last week, someone stole my bicycle from synagogue!”

The rabbi is deeply upset by this, but after thinking for a moment, he offers a solution: “Next week come to services, sit in the front row, and when we recite the Ten Commandments, turn around and look at the people behind you. And when we get to ‘Thou shalt not steal,’ see who can’t look you in the eyes and that’s your guy.” The rabbi is very pleased with his suggestion, and so is the man.

At the next service, the rabbi is very curious to learn whether his advice panned out. He waits for the man by the doors of the synagogue, and asks him, “So, did it work?”

“Like a charm,” the man answers. “The moment we got to ‘Thou shalt not commit adultery,’ I remembered where I left my bike.”

What this little joke suggests is that our memory and awareness of moral codes (such as the Ten Commandments) might have an effect on how we view our own behavior.

Inspired by the lesson behind this joke, Nina, On, and I ran an experiment at the University of California, Los Angeles (UCLA). We recruited 450 participants and split them into two groups. We asked half of them to try to recall the Ten Commandments and then tempted them to cheat on our matrix task. We asked the other half to try to recall ten books they had read in high school before setting them loose on the matrices and the opportunity to cheat. Among the group that recalled the ten books, we saw the typical widespread but moderate cheating. In the group that was asked to recall the Ten Commandments, however, we observed no cheating whatsoever—and that was despite the fact that no one in the group was able to recall all ten.

This result was very intriguing. It seemed that merely trying to recall moral standards was enough to improve moral behavior. In another attempt to test this effect, we asked a group of self-declared atheists to swear on a Bible and then gave them the opportunity to claim extra earnings on the matrix task. What did the atheists do? They did not stray from the straight-and-narrow path.

These experiments with moral reminders suggest that our willingness and tendency to cheat could be diminished if we are given reminders of ethical standards. But although using the Ten Commandments and the Bible as honesty-building mechanisms might be helpful, introducing religious tenets into society on a broader basis as a means to reduce cheating is not very practical (not to mention the fact that doing so would violate the separation of church and state). So we began to think of more general, practical, and secular ways to shrink the fudge factor, which led us to test the honor codes that many universities already use.

To discover whether honor codes work, we asked half of a group of MIT and Yale students to sign such a code just before giving them a chance to cheat on the matrix tasks. The statement read, “I understand that this experiment falls under the guidelines of the MIT/Yale honor code.” The students who were not asked to sign cheated a little bit, but the students who signed the statement did not cheat at all—and that was despite the fact that neither university actually has an honor code (somewhat like the effect that swearing on the Bible had on the self-declared atheists).




STEALING PAPER

A few years ago I received a letter from a woman named Rhonda who attended the University of California at Berkeley. She told me about a problem she’d had in her house and how a little ethical reminder helped her solve it.

She was living near campus with several other people—none of whom knew one another. When the cleaning people came each weekend, they left several rolls of toilet paper in each of the two bathrooms. However, by Monday all the toilet paper would be gone. It was a classic tragedy-of-the-commons situation: because some people hoarded the toilet paper and took more than their fair share, the shared resource was quickly depleted for everyone else.

After reading about the Ten Commandments experiment on my blog, Rhonda put a note in one of the bathrooms asking people not to remove toilet paper, as it was a shared commodity. To her great satisfaction, one roll reappeared in a few hours, and another the next day. In the other note-free bathroom, however, there was no toilet paper until the following weekend, when the cleaning people returned.

This little experiment demonstrates how effective small reminders can be in helping us maintain our ethical standards and, in this case, a fully stocked bathroom.

We found that an honor code worked at universities that don’t have an honor code, but what about universities that have a strong honor code? Would their students cheat less all the time? Or would they cheat less only when they signed the honor code? Luckily, I was then spending some time at the Institute for Advanced Study in Princeton, a great petri dish in which to test this idea.

Princeton University has a rigorous honor system that’s been around since 1893. Incoming freshmen receive a copy of the Honor Code Constitution and a letter from the Honor Committee about the honor system, which they must sign before they can matriculate. They also attend mandatory talks about the importance of the Honor Code during their first week of school. Following the lectures, the incoming Princetonians further discuss the system with their dorm advising group. As if that weren’t enough, one of the campus music groups, the Triangle Club, performs its “Honor Code Song” for the incoming class.

For the rest of their time at Princeton, students are repeatedly reminded of the honor code: they sign an honor code at the end of every paper they submit (“This paper represents my own work in accordance with University regulations”). They sign another pledge for every exam, test, or quiz (“I pledge my honor that I have not violated the honor code during this examination”), and they receive biannual reminder e-mails from the Honor Committee.

To see whether Princeton’s crash course on morality had a long-term effect, I waited two weeks after the freshmen finished their ethics training before tempting them to cheat—giving them the same opportunities as the students at MIT and Yale (which have neither an honor code nor a weeklong course on academic honesty). Were the Princeton students, still relatively fresh from their immersion in the honor code, more honest when they completed the matrix task?

Sadly, they were not. When the Princeton students were asked to sign the honor code, they did not cheat at all (but neither did the MIT or Yale students). However, when they were not asked to sign the honor code, they cheated just as much as their counterparts at MIT and Yale. It seems that the crash course, the propaganda on morality, and the existence of an honor code did not have a lasting influence on the moral fiber of the Princetonians.

These results are both depressing and promising. On the depressing side, it seems that it is very difficult to alter our behavior so that we become more ethical and that a crash course on morality will not suffice. (I suspect that this ineffectiveness also applies to much of the ethics training that takes place in businesses, universities, and business schools.) More generally, the results suggest that it’s quite a challenge to create a long-term cultural change when it comes to ethics.

On the positive side, it seems that when we are simply reminded of ethical standards, we behave more honorably. Even better, we discovered that the “sign here” honor code method works both when there is a clear and substantial cost for dishonesty (which, in the case of Princeton, can entail expulsion) and when there is no specific cost (as at MIT and Yale). The good news is that people seem to want to be honest, which suggests that it might be wise to incorporate moral reminders into situations that tempt us to be dishonest.*

ONE PROFESSOR AT Middle Tennessee State University got so fed up with the cheating among his MBA students that he decided to employ a more drastic honor code. Inspired by our Ten Commandments experiment and its effect on honesty, Thomas Tang asked his students to sign an honor code stating that they would not cheat on an exam. The pledge also stated that they “would be sorry for the rest of their lives and go to Hell” if they cheated.

The students, who did not necessarily believe in Hell or agree that they were going there, were outraged. The pledge became very controversial, and, perhaps unsurprisingly, Tang caught a lot of heat for his effort (he eventually had to revert to the old, Hell-free pledge).

Still, I imagine that in its short existence, this extreme version of the honor code had quite an effect on the students. I also think the students’ outrage indicates how effective this type of pledge can be. The future businessmen and women must have felt that the stakes were very high, or they would not have cared so much. Imagine yourself confronted by such a pledge. How comfortable would you feel signing it? Would signing it influence your behavior? What if you had to sign it just before filling out your expense reports?




RELIGIOUS REMINDERS

The possibility of using religious symbols as a way to increase honesty has not escaped religious scholars. There is a story in the Talmud about a religious man who becomes desperate for sex and goes to a prostitute. His religion wouldn’t condone this, of course, but at the time he feels that he has more pressing needs. Once alone with the prostitute, he begins to undress. As he takes off his shirt, he sees his tzitzit, an undergarment with four pieces of knotted fringe. Seeing the tzitzit reminds him of the mitzvoth (religious obligations), and he quickly turns around and leaves the room without violating his religious standards.



Adventures with the IRS

Using honor codes to curb cheating at a university is one thing, but would moral reminders of this type also work for other types of cheating and in nonacademic environments? Could they help prevent cheating on, say, tax returns and insurance claims? That is what Lisa Shu (a PhD student at Harvard University), Nina Mazar, Francesca Gino (a professor at Harvard University), Max Bazerman (also a professor at Harvard University), and I set out to test.

We started by restructuring our standard matrix experiment to look a bit like tax reporting. After participants finished solving the matrices and shredding their worksheets, we asked them to write down the number of questions they had solved correctly on a form we modeled after the basic IRS 1040EZ tax form. To make it feel even more like a real tax form, the form stated clearly that this income would be taxed at a rate of 20 percent. In the first section of the form, the participants were asked to report their “income” (the number of matrices they had solved correctly). Next, the form included a section for travel expenses, where participants could be reimbursed at a rate of 10 cents per minute of travel time (up to two hours, or $12) and for the direct cost of their transportation (up to another $12). This part of the payment was tax exempt (like a business expense). The participants were then asked to add up all the numbers and come up with their final net payment.
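To make the arithmetic concrete, take a hypothetical participant (assuming, as in our other matrix experiments, a rate of $1 per matrix): reporting 10 solved matrices meant $10 of “income” and $2 of tax (20 percent of $10), while claiming sixty minutes of travel added a tax-exempt $6 (60 minutes at 10 cents each) plus, say, $4 in transportation costs. The final net payment would then be $10 − $2 + $6 + $4 = $18.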

There were two conditions in this experiment: Some of the participants filled out the entire form and then signed it at the bottom, as is typically done with official forms. In this condition, the signature acted as verification of the information on the form. In the second condition, participants signed the form first and only then filled it out. That was our “moral reminder” condition.

What did we find? The participants in the sign-at-the-end condition cheated by adding about four extra matrices to their score. And what about those who signed at the top? When the signature acted as a moral reminder, participants claimed only one extra matrix. I am not sure how you feel about “only” one added matrix—after all, it is still cheating—but given that the one difference between these two conditions was the location of the signature line, I see this outcome as a promising way to reduce dishonesty.

Our version of the tax form also allowed us to look at requests for travel reimbursement. We did not know how much time the participants really spent traveling, but if we assumed that, thanks to randomization, the average amount of travel time was basically the same in both conditions, we could see in which condition participants claimed higher travel expenses. The travel reimbursement requests followed the same pattern: those in the signature-at-the-bottom condition claimed travel expenses averaging $9.62, while those in the moral reminder (signature-at-the-top) condition claimed travel expenses averaging $5.27.

ARMED WITH OUR evidence that when people sign their names to some kind of pledge, it puts them into a more honest disposition (at least temporarily), we approached the IRS, thinking that Uncle Sam would be glad to hear of ways to boost tax revenues. The interaction with the IRS went something like this:


ME: By the time taxpayers finish entering all the data onto the form, it is too late. The cheating is done and over with, and no one will say, “Oh, I need to sign this thing, let me go back and give honest answers.” You see? If people sign before they enter any data onto the form, they cheat less. What you need is a signature at the top of the form, and this will remind everyone that they are supposed to be telling the truth.

IRS: Yes, that’s interesting. But it would be illegal to ask people to sign at the top of the form. The signature needs to verify the accuracy of the information provided.

ME: How about asking people to sign twice? Once at the top and once at the bottom? That way, the top signature will act as a pledge—reminding people of their patriotism, moral fiber, mother, the flag, homemade apple pie—and the signature at the bottom would be for verification.

IRS: Well, that would be confusing.

ME: Have you looked at the tax code or the tax forms recently?

IRS: [No reaction.]

ME: How about this? What if the first item on the tax form asked if the taxpayer would like to donate twenty-five dollars to a task force to fight corruption? Regardless of the particular answer, the question will force people to contemplate their standing on honesty and its importance for society! And if the taxpayer donates money to this task force, they not only state an opinion, but they also put some money behind their decision, and now they might be even more likely to follow their own example.

IRS: [Stony silence.]

ME: This approach may have another interesting benefit: You could flag the taxpayers who decide not to donate to the task force and audit them!

IRS: Do you really want to talk about audits?*

Despite the reaction from the IRS, we were not entirely discouraged and continued to look for other opportunities to test our “sign first” idea. We were finally (moderately) successful when we approached a large insurance company. The company confirmed our already well-substantiated theory that most people cheat, but only by a little bit. Its people suspected that very few customers cheat flagrantly (committing arson, faking a robbery, and so on) but that many people who suffer a loss of property seem comfortable exaggerating that loss by 10 to 15 percent: a 32-inch television becomes 40 inches, an 18k necklace becomes 22k, and so on.

I went to their headquarters and spent the day with the top folks at the company, trying to come up with ways to decrease dishonest reporting on insurance claims. We came up with lots of ideas. For instance, what if people had to declare their losses in highly concrete terms and provide more specific details (where and when they bought the items) in order to allow less moral flexibility? Or, if a couple lost their house in a flood, what if they had to agree on what was lost (although, as we will see in chapter 8, “Cheating as an Infection,” and chapter 9, “Collaborative Cheating,” this particular idea might backfire)? What if we played religious music when people were on hold? And of course, what if people had to sign at the top of the claim form, or even next to each reported item?

As is the way with such large companies, the people I met with took the ideas to their lawyers. We waited six months and then finally heard from the lawyers—who said that they were not willing to let us try any of these approaches.

A few days later, my contact person at the insurance company called me and apologized for not being able to try any of our ideas. He also told me that there was one relatively unimportant automobile insurance form that we could use for an experiment. The form asked people to record their current odometer reading so that the insurance company could calculate how many miles they had driven the previous year. Naturally, people who want their premium to be lower (I can think of many) might be tempted to lie and underreport the actual number of miles they drove.

The insurance company gave us twenty thousand forms, and we used them to test our sign-at-the-top versus sign-at-the-bottom idea. We kept half of the forms with the “I promise that the information I am providing is true” statement and signature line at the bottom of the page. For the other half, we moved the statement and signature line to the top. In all other respects, the two forms were identical. We mailed the forms to twenty thousand customers, waited a while, and when the forms came back, compared the amount of driving reported on the two types. What did we find?

When we estimated the amount of driving that had taken place over the previous year, those who signed the form first appeared to have driven an average of 26,100 miles, while those who signed at the end appeared to have driven an average of 23,700 miles—a difference of about 2,400 miles. Now, we don’t know how much those who signed at the top really drove, so we don’t know whether they were perfectly honest—but we do know that they cheated to a much lesser degree. It is also interesting to note that the magnitude of this decrease in cheating (about 10 percent of the total amount of driving reported) was similar to the rate of dishonesty we found in our lab experiments.

TOGETHER, THESE EXPERIMENTAL results suggest that although we commonly think about signatures as ways to verify information (and of course signatures can be very useful in fulfilling this purpose), signatures at the top of forms could also act as a moral prophylactic.




COMPANIES ARE ALWAYS RATIONAL!

Many people believe that although individuals might behave irrationally from time to time, large commercial companies that are run by professionals with boards of directors and investors will always operate rationally. I never bought into this sentiment, and the more I interact with companies, the more I find that they are actually far less rational than individuals (and the more I am convinced that anyone who thinks that companies are rational has never attended a corporate board meeting).

What do you think happened after we demonstrated to the insurance company that we could improve honesty in mileage reporting using their forms? Do you think the company was eager to amend its regular practices? It was not! Or do you think anyone asked (maybe begged) us to experiment with any of their other forms? No one did.



Some Lessons

When I ask people how we might reduce crime in society, they usually suggest putting more police on the streets and imposing harsher punishments on offenders. When I ask CEOs of companies what they would do to solve the problem of internal theft, fraud, overclaiming on expense reports, and sabotage (when employees do things to hurt their employer with no concrete benefit to themselves), they usually suggest stricter oversight and tough zero-tolerance policies. And when governments try to decrease corruption or create regulations for more honest behavior, they often push for transparency (also known as “sunshine policies”) as a cure for society’s ills. Of course, there is little evidence that any of these solutions work.

By contrast, the experiments described here show that doing something as simple as recalling moral standards at the time of temptation can work wonders to decrease dishonest behavior and potentially prevent it altogether. This approach works even if those specific moral codes aren’t a part of our personal belief system. In fact, it’s clear that moral reminders make it relatively easy to get people to be more honest—at least for a short while. If your accountant were to ask you to sign an honor code a moment before filing your taxes or if your insurance agent made you swear that you were telling the whole truth about that water-damaged furniture, chances are that tax evasion and insurance fraud would be less common.*

What are we to make of all this? First, we need to recognize that dishonesty is largely driven by a person’s fudge factor and not by the SMORC. The fudge factor suggests that if we want to take a bite out of crime, we need to find ways to change how we rationalize our actions. When our ability to rationalize our selfish desires increases, so does our fudge factor, making us more comfortable with our own misbehavior and cheating. The reverse is true as well: when our ability to rationalize our actions is reduced, our fudge factor shrinks, making us less comfortable with misbehaving and cheating. When you consider the range of undesirable behaviors in the world from this standpoint—from banking practices to backdating stock options, from defaulting on loans and mortgages to cheating on taxes—there’s a lot more to honesty and dishonesty than rational calculations.

Of course, this means that understanding the mechanisms involved in dishonesty is more complex and that deterring dishonesty is not an easy task—but it also means that uncovering the intricate relationship between honesty and dishonesty will be a more exciting adventure.
