Blinded by Our Own Motivations

Picture your next dental appointment. You walk in, exchange pleasantries with the receptionist, and begin leafing through some old magazines while waiting for your name to be called.

Now let’s imagine that since your last visit, your dentist went out and bought an innovative and expensive piece of dental equipment. It’s a dental CAD/CAM (short for computer-aided design/computer-aided manufacturing) machine, a cutting-edge device used to customize tooth restorations such as crowns and bridges. The device works in two steps. First it displays a 3D replica of the patient’s teeth and gums on a computer screen, allowing the dentist to trace the exact shape of the crown—or whatever the restoration may be—against the screen’s image. This is the CAD part. Then comes the CAM part: the device molds ceramic material into a crown according to the dentist’s blueprint. Altogether, this fancy machine comes with a hefty price tag.

But let’s get back to you. Just as you finish skimming an article about some politician’s marital troubles and are about to start a story about the next it-girl, the receptionist calls your name. “Second room to the left,” she says.

You situate yourself in the dentist’s chair and engage in a bit of small talk with the hygienist, who pokes around your mouth for a while and follows up with a cleaning. Before long, your dentist walks in.

The dentist repeats the same general poking procedure, and as he checks your teeth he tells the hygienist to mark teeth 3 and 4 for further observation and to mark tooth 7 as having craze lines.

“Huh? Caze wha?” you gurgle, with your mouth open wide and the suction tube pulling on the right side of your mouth.

The dentist stops, pulls the instruments out, carefully places them on the tray next to him, and sits back in his chair. He then starts explaining your situation: “Craze lines are what we call certain small cracks in the tooth enamel. But no problem, we have a great solution for this. We’ll just use the CAD/CAM to fit you with a crown, problem solved. How about it?” he asks.

You waver a little, but after you get his assurance that it won’t hurt one bit, you agree. After all, you have been seeing this dentist for a long time, and although some of his treatments over the years were rather unpleasant, you feel that he has generally treated you well.

Now, I should point out—because your dentist might not—that craze lines are basically very, very small cracks in the enamel of your teeth, and what’s more, they’re almost always completely asymptomatic; many people have them and aren’t bothered by them in the least. So, in effect, it’s usually unnecessary to target craze lines with any kind of treatment.

LET ME GIVE you one real-life story from my friend Jim, the former vice president of a large dental company. Over the years, Jim has encountered his fair share of oddball dental cases, but one CAD/CAM story he told me was particularly horrible.

A few years after the CAD/CAM equipment came onto the market, one particular dentist in Missouri invested in the equipment, and from that point on he seemed to start looking at craze lines differently. “He wanted to crown everything,” Jim told me. “He was excited and enthusiastic to use his brand-new gadget, so he recommended that many of his patients improve their smiles, using, of course, his state-of-the-art CAD/CAM equipment.”

One of his patients was a young law student with asymptomatic craze lines; still, he recommended that she get a crown. The young woman complied, because she was used to listening to her dentist’s advice, but guess what? Because of the crown, her tooth became symptomatic and then died, forcing her to go in for a root canal. But wait, it gets worse. The root canal failed and had to be redone, and that second root canal failed as well. As a result, the woman had no choice but to undergo more complex and painful surgery. So what began as a treatment for harmless craze lines ultimately resulted in a lot of pain and financial cost for this young woman.

After the woman graduated from law school, she did her homework and realized that (surprise!) she’d never needed that crown in the first place. As you can imagine, she wasn’t thrilled by this, so she went after the dentist with a vengeance, took him to court, and won.

NOW, WHAT CAN we make of this tale? As we’ve already learned, people don’t need to be corrupt in order to act in problematic and sometimes damaging ways. Perfectly well-meaning people can get tripped up by the quirks of the human mind, make egregious mistakes, and still consider themselves to be good and moral. It’s safe to say that most dentists are competent, caring individuals who approach their work with the best of intentions. Yet, as it turns out, biased incentives can—and do—lead even the most upstanding professionals astray.

Think about it. When a dentist decides to purchase a new device, he no doubt believes it will help him better serve his patients. But it can also be an expensive venture. He wants to use it to improve patient care, but he also wants to recover his investment by charging his patients for using this wonderful new technology. So, consciously or not, he looks for ways to do so, and voilà! The patient ends up with a crown—sometimes necessary, other times not.

To be clear, I don’t think dentists (or the vast majority of people, for that matter) carry out an explicit calculation of costs and benefits by weighing patients’ well-being against their own pockets and then deliberately choose their own self-interest over their patients’ best interest. Instead, I suspect that some dentists who purchase the CAD/CAM equipment are reacting to the fact that they have invested a great deal of money in the device and want to make the most of it. This information then colors the dentists’ professional judgment, leading them to make recommendations and decisions that are in their own self-interest rather than doing what is best for the patient.

You might think that instances like this, when a service provider is pulled in two directions (generally referred to as a conflict of interest), are rare. But the reality is that conflicts of interest are pervasive; they influence our behavior in all kinds of settings, both professional and personal.




Can I Tattoo Your Face?

Some time ago I ran smack into a rather strange conflict of interest. In this case I was the patient. As a young man in my midtwenties—about six or seven years after I was originally injured*—I went back to the hospital for a routine checkup. On that particular visit, I met with a few physicians, and they reviewed my case. Later, I met the head of the burn department, who seemed especially happy to see me.

“Dan, I have a fantastic new treatment for you!” he exclaimed. “You see, because you have thick, dark hair, when you shave, no matter how closely you try to shave, there will always be little black dots where your hair grows. But since the right side of your face is scarred, you don’t have any facial hair or small black dots on that side, making your face look asymmetrical.”

At that point, he launched into a short lecture on the importance of symmetry for aesthetic and social reasons. I knew how important symmetry was to him, because I was given a similar minilecture a few years earlier, when he convinced me to undergo a complex and lengthy operation in which he would take part of my scalp together with its blood supply and re-create the right half of my right eyebrow. (I’d undergone that complex twelve-hour operation and liked the results.)

Then came his proposal: “We have started tattooing little dots resembling stubble onto scarred faces much like yours, and our patients have been incredibly happy with the results.”

“That sounds interesting,” I said. “Can I talk to one of the patients that had this procedure?”

“Unfortunately you can’t—that would violate medical confidentiality,” he said. Instead, he showed me pictures of the patients—not of their whole faces, just the parts that were tattooed. And sure enough, it did look as though the scarred faces were covered with black stubblelike specks.

But then I thought of something. “What happens when I grow old and my hair turns gray?” I asked.

“Oh, that’s no problem,” he replied. “When that happens, we’ll just lighten up the tattoo with a laser.” Satisfied, he got up, adding, “Come back tomorrow at nine. Just shave the left side of your face as you usually do, with the same closeness of shave that you like to keep, and I’ll tattoo the right side of your face to look the same. I guarantee that by noon, you’ll be happier and more attractive.”

I mulled over the possible treatment on my drive home and for the rest of the day. I also realized that in order to get the full benefit from this treatment, I would have to shave in exactly the same way for the rest of my life. I walked into the department head’s office the next morning and told him that I was not interested in the procedure.

I did not expect what came next. “What is wrong with you?” he growled. “Do you like looking unattractive? Do you derive some kind of strange pleasure from looking asymmetrical? Do women feel sorry for you and give you sympathy sex? I’m offering you a chance to fix yourself in a very simple and elegant way. Why not just take it and be grateful?”

“I don’t know,” I said. “I’m just uncomfortable with the idea. Let me think about it some more.”

You may find it hard to believe that the department head could be so aggressive and harsh, but I assure you this is exactly what he told me. At the same time, it was not his usual manner with me, so I was puzzled by his unrelenting approach. In fact, he was a fantastic, dedicated doctor who treated me well and worked very hard to make me better. It was also not the first time I refused a treatment. Over many years of interacting with medical professionals, I had decided to have some treatments and not others. But none of my doctors, including the head of the burn department, had ever tried to guilt me into having a treatment.

In an attempt to solve this mystery, I went to his deputy, a younger doctor with whom I had a friendly rapport. I asked him to explain why the department head had put me under such pressure.

“Ah, yes, yes,” the deputy said. “He’s already performed this procedure on two patients, and he needs just one more in order to publish a scientific paper in one of the leading medical journals.”

This additional information certainly helped me better understand the conflict of interest I was up against. Here was a really good physician, someone I had known for many years and who had consistently treated me with compassion and great care. Yet, despite the fact that he cared a great deal about me in general, in this instance he was unable to see past his conflict of interest. It goes to show just how hard it is to overcome conflicts of interest once they fundamentally color our view of the world.

After years of experience publishing in academic journals myself, I now have a greater understanding of this physician’s conflict of interest (more about this later). Of course, I’ve never tried to coerce anyone into tattooing his face—but there’s still time for that.



The Hidden Cost of Favors

One other common cause of conflicts of interest is our inherent inclination to return favors. We humans are deeply social creatures, so when someone lends us a hand in some way or presents us with a gift, we tend to feel indebted. That feeling can in turn color our view, making us more inclined to try to help that person in the future.

One of the most interesting studies on the impact of favors was carried out by Ann Harvey, Ulrich Kirk, George Denfield, and Read Montague (at the time all were at the Baylor College of Medicine). In this study, Ann and her colleagues looked into whether a favor could influence aesthetic preferences.

When participants arrived at the neuroscience lab at Baylor, they were told that they would be evaluating art from two galleries, one called “Third Moon” and another called “Lone Wolfe.” The participants were informed that the galleries had generously provided their payment for participating in this experiment. Some were told that their individual payment was sponsored by Third Moon, while the others were told that their individual payment was sponsored by Lone Wolfe.

Armed with this information, the participants moved to the main part of the experiment. One by one, they were asked to remain as motionless as possible in a functional magnetic resonance imaging (fMRI) scanner, a large machine with a cylinder-shaped hole in the middle. Once they were situated inside the massive magnet, they viewed a series of sixty paintings, one at a time. All the paintings were by Western artists dating from the thirteenth through the twentieth century and ranged from representational to abstract art. But the sixty paintings were not all that they saw. Near the top-left corner of each painting was the handsome logo of the gallery where that particular picture could be purchased—which meant that some pictures were presented as if they came from the gallery that sponsored the participant, and some pictures were presented as if they came from the non-sponsoring gallery.

Once the scanning portion of the experiment was over, each participant was asked to take another look at each of the painting-logo combinations, but this time they were asked to rate each of the pictures on a scale that ranged from “dislike” to “like.”

With the rating information in hand, Ann and her colleagues could compare which paintings the participants liked more, the ones from Third Moon or the ones from Lone Wolfe. As you might suspect, when the researchers examined the ratings they found that participants gave more favorable ratings to the paintings that came from their sponsoring gallery.

You might think that this preference for the sponsoring gallery was due to a kind of politeness—or maybe just lip service, the way we compliment friends who invite us for dinner even when the food is mediocre. This is where the fMRI part of the study came in handy. Suggesting that the effects of reciprocity run deep, the brain scans showed the same effect; the presence of the sponsor’s logo increased the activity in the parts of the participants’ brains that are related to pleasure (particularly the ventromedial prefrontal cortex, a part of the brain that is responsible for higher-order thinking, including associations and meaning). This suggested that the favor from the sponsoring gallery had a deep effect on how people responded to the art. And get this: when participants were asked if they thought that the sponsor’s logo had any effect on their art preferences, the universal answer was “No way, absolutely not.”

What’s more, different participants were given varying amounts of money for their time in the experiment. Some received $30 from their sponsoring gallery, others received $100. At the highest level, participants were paid $300. It turned out that the favoritism toward the sponsoring gallery increased as the amount of earnings grew. The magnitude of brain activation in the pleasure centers of the brain was lowest when the payment was $30, higher when the payment was $100, and highest when the payment was $300.

These results suggest that once someone (or some organization) does us a favor, we become partial to anything related to the giving party—and that the magnitude of this bias increases as the magnitude of the initial favor (in this case the amount of payment) increases. It’s particularly interesting that financial favors could have an influence on one’s preferences for art, especially considering that the favor (paying for their participation in the study) had nothing at all to do with the art, which had been created independently of the galleries. It is also interesting to note that participants knew the gallery would pay their compensation regardless of their ratings of the paintings, and yet the payment (and its magnitude) established a sense of reciprocity that guided their preferences.



Fun with Pharma

Some people and companies understand this human propensity for reciprocity very well and consequently spend a lot of time and money trying to engender a feeling of obligation in others. To my mind, the profession that most embodies this type of operation—that is, the one that depends most on creating conflicts of interest—is, of course, that of governmental lobbyists. Lobbyists spend a small fraction of their time informing politicians about facts as reported by their employers, and the rest of their time trying to implant a feeling of obligation and reciprocity in politicians, who they hope will repay them by voting with the lobbyists’ interest in mind.

But lobbyists are not alone in their relentless pursuit of conflicts of interest, and some other professions could arguably give them a run for their well-apportioned money. For example, let’s consider the way representatives for drug companies (pharma reps) run their business. A pharma rep’s job is to visit doctors and convince them to purchase medical equipment and drugs to treat everything from A(sthma) to Z(ollinger-Ellison syndrome). First they may give a doctor a free pen with their logo, or perhaps a notepad, a mug, or maybe some free drug samples. Those small gifts can subtly influence physicians to prescribe a drug more often—all because they feel the need to give back.1

But small gifts and free drug samples are just a few of the many psychological tricks that pharma reps use as they set out to woo physicians. “They think of everything,” my friend and colleague (let’s call him MD) told me. He went on to explain that drug companies, especially smaller ones, train their reps to treat doctors as if they were gods. And they seem to have a disproportionately large reserve of attractive reps. The whole effort is coordinated with military precision. Every self-respecting rep has access to a database that tells them exactly what each doctor has prescribed over the last quarter (both that company’s drugs as well as their competitors’). The reps also make it their business to know what kind of food each doctor and their office staff likes, what time of day they are most likely to see reps, and also which type of rep gets the most face time with the doctors. If the doctor is noted to spend more time with a certain female rep, they may adjust that rep’s rotation so that she can spend more time in that office. If the doctor is a fan of the military, they’ll send him a veteran. The reps also make it a point to be agreeable with the doctor’s outer circles, so when the rep arrives they start by handing out candy and other small gifts to the nurses and the front desk, securing themselves in everyone’s good graces from the get-go.

One particularly interesting practice is the “dine-and-dash,” where, in the name of education, doctors can simply pull up at prespecified take-out restaurants and pick up whatever they want. Even medical students and trainees are pulled into some schemes. One particularly creative example of this strategy was the famous black mug. A black mug with the company’s logo was handed out to doctors and residents, and the company arranged it such that a doctor could take this mug to any location of a local coffee chain (which shall go unnamed) and get as much espresso or cappuccino as he or she wanted. The clamor for this mug was so great that it became a status symbol among students and trainees. As these practices became more extravagant, there was also more regulation from hospitals and the American Medical Association, limiting the use of these aggressive marketing tactics. Of course, as the regulations become more stringent, pharma reps continue to search for new and innovative approaches to influence physicians. And the arms race continues …*

A FEW YEARS AGO, my colleague Janet Schwartz (a professor at Tulane University) and I invited some pharmaceutical reps to dinner. We basically played the pharma reps at their own game; we took them to a nice restaurant and kept the wine flowing. Once we had them feeling happily lubricated, they were ready to tell us the tricks of their trade. And what we learned was fairly shocking.

Picture one of those pharma reps, an attractive, charming man in his early twenties. Not the kind of guy who would have any trouble finding a date. He told us how he had once persuaded a reluctant female physician to attend an informational seminar about a medication he was promoting—by agreeing to escort her to a ballroom dancing class. It was an unstated quid pro quo: the rep did a personal favor for the doctor, and the doctor took his free drug samples and promoted the product to her patients.

Another common practice, the reps told us, was to bring fancy meals to the doctor’s entire office (one of the perks of being a nurse or receptionist, I suppose). One doctor’s office even required alternating days of steak and lobster for lunch if the reps wanted access to the doctors. Even more shocking, we found out that physicians sometimes called the reps into the examination room (as an “expert”) to directly inform patients about the way certain drugs work.

Hearing stories from the reps who sold medical devices was even more disturbing. We learned that it’s common practice for device reps to peddle their medical devices in the operating room in real time and while a surgery is under way.

Janet and I were surprised at how well the pharmaceutical reps understood classic psychological persuasion strategies and how they employed them in a sophisticated and intuitive manner. Another clever tactic that they told us about involved hiring physicians to give a brief lecture to other doctors about a drug they were trying to promote. Now, the pharma reps really didn’t care about what the audience took from the lecture—what they were actually interested in was the effect that giving the lecture had on the speaker. They found that after giving a short lecture about the benefits of a certain drug, the speaker would begin to believe his own words and soon prescribe accordingly. Psychological studies show that we quickly and easily start believing whatever comes out of our own mouths, even when the original reason for expressing the opinion is no longer relevant (in the doctors’ case, that they were paid to say it). This is cognitive dissonance at play; doctors reason that if they are telling others about a drug, it must be good—and so their own beliefs change to correspond to their speech, and they start prescribing accordingly.

The reps told us that they employed other tricks too, turning into chameleons—switching various accents, personalities, and political affiliations on and off. They prided themselves on their ability to put doctors at ease. Sometimes a collegial relationship expanded into the territory of social friendship—some reps would go deep-sea fishing or play basketball with the doctors as friends. Such shared experiences allowed the physicians to more happily write prescriptions that benefited their “buddies.” The physicians, of course, did not see that they were compromising their values when they were out fishing or shooting hoops with the drug reps; they were just taking a well-deserved break with a friend with whom they just happened to do business. Of course, in many cases the doctors probably didn’t realize that they were being manipulated—but there is no doubt that they were.

DISGUISED FAVORS ARE one thing, but there are many cases when conflicts of interest are more easily recognizable. Sometimes a drug maker pays a doctor thousands of dollars in consulting fees. Sometimes the company donates a building or gives an endowment to a medical researcher’s department in the hope of influencing his views. This type of action creates immense conflicts of interest—especially at medical schools, where pharmaceutical bias can be passed from the medical professor to medical students and along to patients.

Duff Wilson, a reporter for The New York Times, described one example of this type of behavior. A few years ago, a Harvard Medical School student noticed that his pharmacology professor was promoting the benefits of cholesterol drugs and downplaying their side effects. When the student did some googling, he discovered that the professor was on the payroll of ten drug companies, five of which made cholesterol drugs. And the professor wasn’t alone. As Wilson put it, “Under the school’s disclosure rules, about 1,600 of 8,900 professors and lecturers at Harvard Medical School have reported to the dean that they or a family member had a financial interest in a business related to their teaching, research, or clinical care.”2 When professors publicly pass drug recommendations off as academic knowledge, we have a serious problem.



Fudging the Numbers

If you think that the world of medicine is rife with conflicts of interest, let’s consider another profession in which these conflicts may be even more widespread. Yes, I’m talking about the wonderland of financial services.

Say it’s 2007, and you’ve just accepted a fantastic banking job on Wall Street. Your bonus could be in the neighborhood of $5 million a year, but only if you view mortgage-backed securities (or some other new financial instrument) in a positive light. You’re being paid a lot of money to maintain a distorted view of reality, but you don’t notice the tricks that your big bonus plays on your perception of reality. Instead, you are quickly convinced that mortgage-backed securities are every bit as solid as you want to believe they are.

Once you’ve accepted that mortgage-backed securities are the wave of the future, you’re at least partially blind to their risks. On top of that, it’s notoriously hard to evaluate how much securities are really worth. As you sit there with your large and complex Excel spreadsheet full of parameters and equations, you try to figure out the real value of the securities. You change one of the discount parameters from 0.934 to 0.936, and right off the bat you see how the value of the securities jumps up. You continue to play around with the numbers, searching for parameters that provide the best representation of “reality,” but with one eye you also see the consequences of your parameter choices for your personal financial future. You continue to play with the numbers for a while longer, until you are convinced that the numbers truly represent the ideal way to evaluate mortgage-backed securities. You don’t feel bad because you are certain that you have done your best to represent the values of the securities as objectively as possible.
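To get a feel for how sensitive such a valuation can be, here is a purely illustrative sketch; the cash flows and discount parameters are invented for the example and are not a real mortgage-backed-security model:

```python
# Illustrative only: how a tiny change in one discount parameter
# shifts the stated value of a long stream of future cash flows.

def present_value(cash_flows, discount_factor):
    """Value a stream of annual cash flows with a constant per-year
    discount factor (e.g., 0.934 means each year's cash is worth
    93.4% of the previous year's)."""
    return sum(cf * discount_factor ** year
               for year, cf in enumerate(cash_flows, start=1))

# A made-up 30-year stream of $1 million per year.
cash_flows = [1_000_000] * 30

low = present_value(cash_flows, 0.934)
high = present_value(cash_flows, 0.936)

print(f"value at 0.934: ${low:,.0f}")
print(f"value at 0.936: ${high:,.0f}")
print(f"difference:     ${high - low:,.0f}")
```

Nudging the parameter by two-tenths of a percent moves the stated value by hundreds of thousands of dollars, which is exactly why "playing with the numbers" feels so consequential and yet so deniable.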

Moreover, you aren’t dealing with real cash; you are only playing with numbers that are many steps removed from cash. Their abstractness allows you to view your actions more as a game, and not as something that actually affects people’s homes, livelihoods, and retirement accounts. You are also not alone. You realize that the smart financial engineers in the offices next to yours are behaving more or less the same way as you, and when you compare your evaluations to theirs, you realize that a few of your coworkers have chosen even more extreme values than yours. Believing that you are a rational creature, and believing that the market is always correct, you are even more inclined to accept what you’re doing—and what everyone else is doing (we’ll learn more about this in chapter 8)—as the right way to go. Right?

Of course, none of this is actually okay (remember the financial crisis of 2008?), but given the amount of money involved, it feels natural to fudge things a bit. And it’s perfectly human to behave this way. Your actions are highly problematic, but you don’t see them as such. After all, your conflicts of interest are supported by the facts that you’re not dealing with real money; that the financial instruments are mind-bogglingly complex; and that every one of your colleagues is doing the same thing.

The riveting (and awfully distressing) Academy Award–winning documentary Inside Job shows in detail how the financial services industry corrupted the U.S. government, leading to a lack of oversight on Wall Street and to the financial meltdown of 2008. The film also describes how the financial services industry paid leading academics (deans, heads of departments, university professors) to write expert reports in the service of the financial industry and Wall Street. If you watch the film, you will most likely feel puzzled by the ease with which academic experts seemed to sell out, and think that you would never do the same.

But before you put a guarantee on your own standards of morality, imagine that I (or you) were paid a great deal to be on Giantbank’s audit committee. With a large part of my income depending on Giantbank’s success, I would probably not be as critical as I am currently about the bank’s actions. With a hefty enough incentive I might not, for example, repeatedly say that investments must be transparent and clear and that companies need to work hard to try to overcome their conflicts of interest. Of course, I’ve yet to be on such a committee, so for now it’s easy for me to think that many of the actions of the banks have been reprehensible.



Academics Are Conflicted Too

When I reflect on the ubiquity of conflicts of interest and how impossible they are to recognize in our own lives, I have to acknowledge that I’m susceptible to them as well.

We academics are sometimes called upon to use our knowledge as consultants and expert witnesses. Shortly after I got my first academic job, I was invited by a large law firm to be an expert witness. I knew that some of my more established colleagues provided expert testimonials as a regular side job for which they were paid handsomely (though they all insisted that they didn’t do it for the money). Out of curiosity, I asked to see the transcripts of some of their old cases, and when they showed me a few I was surprised to discover how one-sided their use of the research findings was. I was also somewhat shocked to see how derogatory they were in their reports about the opinions and qualifications of the expert witnesses representing the other side—who in most cases were also respectable academics.

Even so, I decided to try it out (not for the money, of course), and I was paid quite a bit to give my expert opinion.* Very early in the case I realized that the lawyers I was working with were trying to plant ideas in my mind that would buttress their case. They did not do it forcefully or by saying that certain things would be good for their clients. Instead, they asked me to describe all the research that was relevant to the case. They suggested that some of the less favorable findings for their position might have some methodological flaws and that the research supporting their view was very important and well done. They also paid me warm compliments each time that I interpreted research in a way that was useful to them. After a few weeks, I discovered that I rather quickly adopted the viewpoint of those who were paying me. The whole experience made me doubt whether it’s at all possible to be objective when one is paid for his or her opinion. (And now that I am writing about my lack of objectivity, I am sure that no one will ever ask me to be an expert witness again—and maybe that’s a good thing.)



The Drunk Man and the Data Point

I had one other experience that made me realize the dangers of conflicts of interest; this time it was in my own research. At the time, my friends at Harvard were kind enough to let me use their behavioral lab to conduct experiments. I was particularly interested in using their facility because they recruited residents from the surrounding area rather than relying only on students.

One particular week, I was testing an experiment on decision making, and, as is usually the case, I predicted that the performance level in one of the conditions would be much higher than the performance level in the other condition. That was basically what the results showed—aside from one person. This person was in the condition I expected to perform best, but his performance was much worse than everyone else’s. It was very annoying. As I examined his data more closely, I discovered that he was about twenty years older than everyone else in the study. I also remembered that there was one older fellow who was incredibly drunk when he came to the lab.

The moment I discovered that the offending participant was drunk, I realized that I should have excluded his data in the first place, given that his decision-making ability was clearly compromised. So I threw out his data, and instantly the results looked beautiful—showing exactly what I expected them to show. But a few days later, I began thinking about the process by which I had decided to eliminate the drunk guy. I asked myself: what would have happened if this fellow had been in the other condition—the one I expected to do worse? If that had been the case, I probably would not have noticed his individual responses to start with. And if I had, I probably would not have even considered excluding his data.

In the aftermath of the experiment, I could easily have told myself a story that would excuse me from using the drunk guy’s data. But what if he hadn’t been drunk? What if he had some other kind of impairment that had nothing to do with drinking? Would I have invented another excuse or logical argument to justify excluding his data? As we will see in chapter 7, “Creativity and Dishonesty,” creativity can help us justify following our selfish motives while still thinking of ourselves as honest people.

I decided to do two things. First, I reran the experiment to double-check the results, which worked out beautifully. Then I decided it was okay to create standards for excluding participants from an experiment (that is, we wouldn’t test drunks or people who couldn’t understand the instructions). But the rules for exclusion have to be made up front, before the experiment takes place, and definitely not after looking at the data.

What did I learn? When I was deciding to exclude the drunk man’s data, I honestly believed I was doing so in the name of science—as if I were heroically fighting to clear the data so that the truth could emerge. It didn’t occur to me that I might be doing it for my own self-interest, but I clearly had another motivation: to find the results I was expecting. More generally, I learned—again—about the importance of establishing rules that can safeguard ourselves from ourselves.



Disclosure: A Panacea?

So what is the best way to deal with conflicts of interest? For most people, “full disclosure” springs to mind. Following the same logic as “sunshine policies,” the basic assumption underlying disclosure is that as long as people publicly declare exactly what they are doing, all will be well. If professionals were to simply make their incentives clear and known to their clients, so the thinking goes, the clients can then decide for themselves how much to rely on their (biased) advice and then make more informed decisions.

If full disclosure were the rule of the land, doctors would inform their patients when they own the equipment required for the treatments they recommend. Or when they are paid to consult for the manufacturer of the drugs that they are about to prescribe. Financial advisers would inform their clients about all the different fees, payments, and commissions they get from various vendors and investment houses. With that information in hand, consumers should be able to appropriately discount the opinions of those professionals and make better decisions. In theory, disclosure seems to be a fantastic solution; it both exonerates the professionals who are acknowledging their conflicts of interest and it provides their clients with a better sense of where their information is coming from.

HOWEVER, IT TURNS out that disclosure is not always an effective cure for conflicts of interest. In fact, disclosure can sometimes make things worse. To explain how, allow me to run you through a study conducted by Daylian Cain (a professor at Yale University), George Loewenstein (a professor at Carnegie Mellon University), and Don Moore (a professor at the University of California, Berkeley). In this experiment, participants played a game in one of two roles. (By the way, what researchers call a “game” is not what any reasonable kid would consider a game.) Some of the participants played the role of estimators: their task was to guess the total amount of money in a large jar full of loose change as accurately as possible. These players were paid according to how close their guess was to the real value of the money in the jar. The closer their estimates were, the more money they received, and it didn’t matter if they missed by overestimating or underestimating the true value.

The other participants played the role of advisers, and their task was to advise the estimators on their guesses. (Think of someone akin to your stock adviser, but with a much simpler task.) There were two interesting differences between the estimators and the advisers. The first was that whereas the estimators were shown the jar from a distance for a few seconds, the advisers had more time to examine it, and they were also told that the amount of money in the jar was between $10 and $30. That gave the advisers an informational edge. It made them relative experts in the field of estimating the jar’s value, and it gave the estimators a very good reason to rely on their advisers’ reports when formulating their guesses (comparable to the way we rely on experts in many areas of life).

The second difference concerned the rule for paying the advisers. In the control condition, the advisers were paid according to the accuracy of the estimators’ guesses, so no conflicts of interest were involved. In the conflict-of-interest condition, the advisers were paid more as the estimators overguessed the value of the coins in the jar to a larger degree. So if the estimators overguessed by $1, it was good for the advisers—but it was even better if they overguessed by $3 or $4. The higher the overestimation, the less the estimator made but the more the adviser pocketed.

So what happened in the control condition and in the conflict-of-interest condition? You guessed it: in the control condition, advisers suggested an average value of $16.50, while in the conflict-of-interest condition, the advisers suggested an estimate that was over $20. They basically goosed the estimated value by almost $4. Now, you can look at the positive side of this result and tell yourself, “Well, at least the advice was not $36 or some other very high number.” But if that is what went through your mind, you should consider two things: first, that the adviser could not give clearly exaggerated advice because, after all, the estimator did see the jar. If the value had been dramatically too high, the estimator would have dismissed the suggestion altogether. Second, remember that most people cheat just enough to still feel good about themselves. In that sense, the fudge factor was an extra $4 (or about 25 percent of the amount).

The importance of this experiment, however, showed up in the third condition—the conflict-of-interest-plus-disclosure condition. Here the payment for the adviser was the same as it was in the conflict-of-interest condition. But this time the adviser had to tell the estimator that he or she (the adviser) would receive more money when the estimator overguessed. The sunshine policy in action! That way, the estimator could presumably take the adviser’s biased incentives into account and discount the advice of the adviser appropriately. Such a discount of the advice would certainly help the estimator, but what about the effect of the disclosure on the advisers? Would the need to disclose eliminate their biased advice? Would disclosing their bias stretch the fudge factor? Would they now feel more comfortable exaggerating their advice to an even greater degree? And the billion-dollar question is this: which of these two effects would prove to be larger? Would the discount that the estimator applied to the adviser’s advice be smaller or larger than the extra exaggeration of the adviser?

The results? In the conflict-of-interest-plus-disclosure condition, the advisers increased their estimates by another $4 (from $20.16 to $24.16). And what did the estimators do? As you can probably guess, they did discount the estimates, but only by $2. In other words, although the estimators did take the advisers’ disclosure into consideration when formulating their estimates, they didn’t subtract nearly enough. Like the rest of us, the estimators didn’t sufficiently recognize the extent and power of their advisers’ conflicts of interest.
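The arithmetic behind this result can be laid out explicitly. The following sketch simply recomputes the net effect of disclosure from the dollar figures reported above (it is an illustration of the bookkeeping, not the researchers' own analysis):

```python
# Net effect of disclosure on the estimator, using the average
# figures reported above (all values in dollars).
advice_no_disclosure = 20.16    # average advice in the conflict-of-interest condition
advice_with_disclosure = 24.16  # average advice once the conflict was disclosed
estimator_discount = 2.00       # how much estimators subtracted after disclosure

# How much more the advisers exaggerated once they had disclosed their bias
extra_exaggeration = round(advice_with_disclosure - advice_no_disclosure, 2)

# The discount only partially offset the extra exaggeration,
# so disclosure left the estimators worse off on net
net_bias_change = round(extra_exaggeration - estimator_discount, 2)

print(extra_exaggeration)  # advisers exaggerated by an extra $4.00
print(net_bias_change)     # estimators ended up $2.00 further from the truth
```

The point the numbers make is that disclosure helps only if the listener's discount at least matches the speaker's added exaggeration; here it covered just half of it.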

The main takeaway is this: disclosure created even greater bias in advice. With disclosure the estimators made less money and the advisers made more. Now, I am not sure that disclosure will always make things worse for clients, but it is clear that disclosure and sunshine policies will not always make things better.



So What Should We Do?

Now that we understand conflicts of interest a bit better, it should be clear what serious problems they cause. Not only are they ubiquitous, but we don’t seem to fully appreciate their degree of influence on ourselves and on others. So where do we go from here?

One straightforward recommendation is to try to eradicate conflicts of interest altogether, which of course is easier said than done. In the medical domain, that would mean, for example, that we would not allow doctors to treat or test their own patients using equipment that they own. Instead, we’d have to require that an independent entity, with no ties to the doctors or equipment companies, conduct the treatments and tests. We would also prohibit doctors from consulting for drug companies or investing in pharmaceutical stocks. After all, if we don’t want doctors to have conflicts of interest, we need to make sure that their income doesn’t depend on the number and types of procedures or prescriptions they recommend. Similarly, if we want to eliminate conflicts of interest for financial advisers, we should not allow them to have incentives that are not aligned with their clients’ best interests—no fees for services, no kickbacks, and no differential pay for success and failure.

Though it is clearly important to try to reduce conflicts of interest, it is not easy to do so. Take contractors, lawyers, and car mechanics, for example. The way these professionals are paid puts them into terrible conflicts of interest because they both make the recommendation and benefit from the service, while the client has no expertise or leverage. But stop for a few minutes and try to think about a compensation model that would not involve any conflicts of interest. If you are taking the time to try to come up with such an approach, you most likely agree that it is very hard—if not impossible—to pull off. It is also important to realize that although conflicts of interest cause problems, they sometimes happen for good reason. Take the case of physicians (and dentists) ordering treatments that use equipment they own. Although this is a potentially dangerous practice from the perspective of conflicts of interest, it also has some built-in advantages: professionals are more likely to purchase equipment that they believe in; they are likely to become experts in using it; it can be much more convenient for the patient; and the doctors might even conduct some research that could help improve the equipment or the ways in which it is used.

The bottom line is that it is no easy task to come up with compensation systems that don’t inherently involve—and sometimes rely on—conflicts of interest. Even if we could eliminate all conflicts of interest, the cost of doing so in terms of decreased flexibility and increased bureaucracy and oversight might not be worth it—which is why we should not overzealously advocate draconian rules and restrictions (say, that physicians can never talk to pharma reps or own medical equipment). At the same time, I do think it’s important for us to realize the extent to which we can all be blinded by our financial motivations. We need to acknowledge that situations involving conflicts of interest have substantial disadvantages and attempt to thoughtfully reduce them when their costs are likely to outweigh their benefits.

As you might expect, there are many straightforward instances where conflicts of interest should simply be eliminated. Consider financial advisers who receive side payments, auditors who serve as consultants to the same firms they audit, financial professionals who are paid handsome bonuses when their clients make money but lose nothing when their clients lose their shirts, rating agencies that are paid by the companies they rate, and politicians who accept money and favors from corporations and lobbyists in exchange for their votes. In all of these cases it seems to me that we must do our best to eradicate as many conflicts of interest as possible—most likely by regulation.

You’re probably skeptical that regulation of this sort could ever happen. When regulation by the government or by professional organizations does not materialize, we as consumers should recognize the danger that conflicts of interest bring with them and do our best to seek service providers who have fewer conflicts of interest (or, if possible, none). Through the power of our wallets we can push service providers to meet a demand for reduced conflicts of interest.

Finally, when we face serious decisions in which we realize that the person giving us advice may be biased—such as when a physician offers to tattoo our faces—we should spend just a little extra time and energy to seek a second opinion from a party that has no financial stake in the decision at hand.
