CHAPTER 21 REASON


Opposing reason is, by definition, unreasonable. But that hasn’t stopped a slew of irrationalists from favoring the heart over the head, the limbic system over the cortex, blinking over thinking, McCoy over Spock. There was the Romantic movement of the counter-Enlightenment, captured in Johann Herder’s avowal “I am not here to think, but to be, feel, live!” There’s the common veneration (not just by the religious) of faith, namely believing something without a good reason. There’s the postmodernist credo that reason is a pretext to exert power, reality is socially constructed, and all statements are trapped in a web of self-reference and collapse into paradox. Even members of my own tribe of cognitive psychologists often claim to have refuted what they take to be the Enlightenment belief that humans are rational agents, and hence to have undermined the centrality of reason itself. The implication is that it is futile even to try to make the world a more rational place.1

But all these positions have a fatal flaw: they refute themselves. They deny that there can be a reason for believing those very positions. As soon as their defenders open their mouths to begin their defense, they have lost the argument, because in that very act they are tacitly committed to persuasion—to adducing reasons for what they are about to argue, which, they insist, ought to be accepted by their listeners according to standards of rationality that both accept. Otherwise they are wasting their breath and might as well try to convert their audience by bribery or violence. In The Last Word, the philosopher Thomas Nagel drives home the point that subjectivity and relativism regarding logic and reality are incoherent, because “one can’t criticize something with nothing”:

The claim “Everything is subjective” must be nonsense, for it would itself have to be either subjective or objective. But it can’t be objective, since in that case it would be false if true. And it can’t be subjective, because then it would not rule out any objective claim, including the claim that it is objectively false. There may be some subjectivists, perhaps styling themselves as pragmatists, who present subjectivism as applying even to itself. But then it does not call for a reply, since it is just a report of what the subjectivist finds it agreeable to say. If he also invites us to join him, we need not offer any reason for declining, since he has offered us no reason to accept.2

Nagel calls this line of thinking Cartesian, because it resembles Descartes’s argument “I think, therefore I am.” Just as the very fact that one is wondering whether one exists demonstrates that one exists, the very fact that one is appealing to reasons demonstrates that reason exists. It may also be called a transcendental argument, one that invokes the necessary preconditions for doing what it is doing, namely making an argument.3 (In a way, it goes back to the ancient Liar’s Paradox, featuring the Cretan who says, “All Cretans are liars.”) Whatever you call the argument, it would be a mistake to interpret it as justifying a “belief” or a “faith” in reason, which Nagel calls “one thought too many.” We don’t believe in reason; we use reason (just as we don’t program our computers to have a CPU; a program is a sequence of operations made available by the CPU).4

Though reason is prior to everything else and needn’t (indeed cannot) be justified on first principles, once we start engaging in it we can stoke our confidence that the particular kinds of reasoning we are engaging in are sound by noting their internal coherence and their fit with reality. Life is not a dream, in which disconnected experiences appear in bewildering succession. And the application of reason to the world validates itself by granting us the ability to bend the world to our will, from curing infections to sending a man to the moon.

Despite its provenance in abstract philosophy, the Cartesian argument is not an exercise in logic-chopping. From the most recondite deconstructionist to the most anti-intellectual purveyor of conspiracy theories and “alternative facts,” everyone recognizes the power of responses like “Why should I believe you?” or “Prove it” or “You’re full of crap.” Few would reply, “That’s right, there’s no reason to believe me,” or “Yes, I’m lying right now,” or “I agree, what I’m saying is bullshit.” It’s in the very nature of argument that people stake a claim to being right. As soon as they do, they have committed themselves to reason—and the listeners they are trying to convince can hold their feet to the fire of coherence and accuracy.


By now many people have become aware of the research in cognitive psychology on human irrationality, explained in bestsellers like Daniel Kahneman’s Thinking, Fast and Slow and Dan Ariely’s Predictably Irrational. I’ve alluded to these cognitive infirmities in earlier chapters: the way we estimate probability from available anecdotes, project stereotypes onto individuals, seek confirming and ignore disconfirming evidence, dread harms and losses, and reason from teleology and voodoo resemblance rather than mechanical cause and effect.5 But as important as these discoveries are, it’s a mistake to see them as refuting some Enlightenment tenet that humans are rational actors, or as licensing the fatalistic conclusion that we might as well give up on reasoned persuasion and fight demagoguery with demagoguery.

To begin with, no Enlightenment thinker ever claimed that humans were consistently rational. Certainly not the über-rational Kant, who wrote that “from the crooked timber of humanity no truly straight thing can be made,” nor Spinoza, Hume, Smith, or the Encyclopédistes, who were cognitive and social psychologists ahead of their time.6 What they argued was that we ought to be rational, by learning to repress the fallacies and dogmas that so readily seduce us, and that we can be rational, collectively if not individually, by implementing institutions and adhering to norms that constrain our faculties, including free speech, logical analysis, and empirical testing. And if you disagree, then why should we accept your claim that humans are incapable of rationality?

Often the cynicism about reason is justified with a crude version of evolutionary psychology (not one endorsed by evolutionary psychologists) in which humans think with their amygdalas, reacting instinctively to the slightest rustle in the grass which may portend a crouching tiger. But real evolutionary psychology treats humans differently: not as two-legged antelopes but as the species that outsmarts antelopes. We are a cognitive species that depends on explanations of the world. Since the world is the way it is regardless of what people believe about it, there is a strong selection pressure for an ability to develop explanations that are true.7

Reasoning thus has deep evolutionary roots. The citizen scientist Louis Liebenberg has studied the San hunter-gatherers of the Kalahari Desert (the “Bushmen”), one of the world’s most ancient cultures. They engage in the oldest form of the chase, persistence hunting, in which humans, with their unique ability to dump heat through sweat-slicked skin, pursue a furry mammal in the midday sun until it collapses of heat stroke. Since most mammals are swifter than humans and dart out of sight as soon as they are spotted, persistence hunters track them by their spoor, which means inferring the animal’s species, sex, age, and level of fatigue, and thus its likely direction of flight, from the hoofprints, bent stems, and displaced pebbles it leaves behind. The San do not just engage in inference—deducing, for example, that agile springboks tread deeply with pointed hooves to get a good grip, whereas heavy kudus tread flat-footed to support their weight. They also engage in reasoning—articulating the logic behind their inferences to persuade their companions or be persuaded in their turn. Liebenberg observed that Kalahari trackers don’t accept arguments from authority. A young tracker can challenge the majority opinion of his elders, and if his interpretation of the evidence is convincing, he can bring them around, increasing the group’s accuracy.8

And if you’re still tempted to excuse modern dogma and superstition by saying that it’s only human, consider Liebenberg’s account of scientific skepticism among the San:

Three trackers, !Nate, /Uase and Boroh//xao, of Lone Tree in the central Kalahari, told me that the Monotonous Lark (Mirafra passerina) only sings after it has rained, because “it is happy that it rained.” One tracker, Boroh//xao, told me that when the bird sings, it dries out the soil, making the roots good to eat. Afterwards, !Nate and /Uase told me that Boroh//xao was wrong—it is not the bird that dries out the soil, it is the sun that dries out the soil. The bird is only telling them that the soil will dry out in the coming months and that it is the time of the year when the roots are good to eat. . . .

!Namka, a tracker from Bere in the central Kalahari, Botswana, told me the myth of how the sun is like an eland, which crosses the sky and is then killed by people who live in the west. The red glow in the sky when the sun goes down is the blood of the eland. After they have eaten it, they throw the shoulder blade across the sky back to the east, where it falls into a pool and grows into a new sun. Sometimes, it is said, you can hear the swishing noise of the shoulder blade flying through the air. After telling me the story in great detail, he told me that he thinks that the “Old People” lied, because he has never seen . . . the shoulder blade fly through the sky or heard the swishing noise.9

Of course, none of this contradicts the discovery that humans are vulnerable to illusions and fallacies. Our brains are limited in their capacity to process information and evolved in a world without science, scholarship, and other forms of fact-checking. But reality is a mighty selection pressure, so a species that lives by ideas must have evolved with an ability to prefer correct ones. The challenge for us today is to design an informational environment in which that ability prevails over the ones that lead us into folly. The first step is to pinpoint why an otherwise intelligent species is so easily led into folly.


The 21st century, an age of unprecedented access to knowledge, has also seen maelstroms of irrationality, including the denial of evolution, vaccine safety, and anthropogenic climate change, and the promulgation of conspiracy theories, from 9/11 to the size of Donald Trump’s popular vote. Fans of rationality are desperate to understand the paradox, but in a bit of irrationality of their own, they seldom look at data that might explain it.

The standard explanation of the madness of crowds is ignorance: a mediocre education system has left the populace scientifically illiterate, at the mercy of their cognitive biases, and thus defenseless against airhead celebrities, cable-news gladiators, and other corruptions from popular culture. The standard solution is better schooling and more outreach to the public by scientists on television, social media, and popular Web sites. As an outreaching scientist I’ve always found this theory appealing, but I’ve come to realize it’s wrong, or at best a small part of the problem.

Consider these questions about evolution:

During the Industrial Revolution of the 19th century, the English countryside got covered in soot, and the Peppered Moth became, on average, darker in color. How did this happen?

A. In order to blend in with their surroundings, the moths had to become darker in color.

B. The moths with darker color were less likely to get eaten and were more likely to reproduce.

After a year the average test score at a private high school increased by thirty points. Which explanation for this change is most analogous to Darwin’s explanation for the adaptation of species?

A. The school no longer admitted children of wealthy alumni unless they met the same standards as everyone else.

B. Since the last test, each returning student had grown more knowledgeable.

The correct answers are B and A. The psychologist Andrew Shtulman gave high school and university students a battery of questions like this which probed for a deep understanding of the theory of natural selection, in particular the key idea that evolution consists of changes in the proportion of a population with adaptive traits rather than a transformation of the population so that its traits would be more adaptive. He found no correlation between performance on the test and a belief that natural selection explains the origin of humans. People can believe in evolution without understanding it, and vice versa.10 In the 1980s several biologists got burned when they accepted invitations to debate creationists who turned out to be not Bible-thumping yokels but well-briefed litigators who cited cutting-edge research to sow uncertainty as to whether the science was complete.

Professing a belief in evolution is not a gift of scientific literacy, but an affirmation of loyalty to a liberal secular subculture as opposed to a conservative religious one. In 2010 the National Science Foundation dropped the following item from its test of scientific literacy: “Human beings, as we know them today, developed from earlier species of animals.” The reason for that change was not, as scientists howled, because the NSF had given in to creationist pressure to bowdlerize evolution from the scientific canon. It was that the correlation between performance on that item and on every other item on the test (such as “An electron is smaller than an atom” and “Antibiotics kill viruses”) was so low that it was taking up space in the test that could go to more diagnostic items. The item, in other words, was effectively a test of religiosity rather than scientific literacy.11 When the item was prefaced with “According to the theory of evolution,” so that scientific understanding was divorced from cultural allegiance, religious and nonreligious test-takers responded the same.12
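
The statistical logic behind dropping the item is the familiar psychometric one: an answer that barely correlates with answers to the rest of the test tells us little about the trait the test is supposed to measure. The sketch below, with made-up responses, illustrates the computation; it is an illustration of the principle, not a reconstruction of the NSF’s analysis.

```python
# An illustration of the item-total logic (made-up responses, not the NSF data):
# an item that barely correlates with the rest of the test is not telling us
# much about the trait the test is supposed to measure.
import numpy as np

def item_rest_correlation(responses, item):
    """responses: 0/1 array of test-takers x items; item: column to evaluate."""
    rest = responses.sum(axis=1) - responses[:, item]   # score on all the other items
    return np.corrcoef(responses[:, item], rest)[0, 1]

responses = np.array([
    [1, 1, 1, 0],   # last column: a hypothetical item that tracks something
    [1, 1, 0, 1],   # other than what the rest of the test measures
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
])
print(item_rest_correlation(responses, 3))  # weak (here slightly negative) correlation
```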

Or consider these questions:

Climate scientists believe that if the North Pole icecap melted as a result of human-caused global warming, global sea levels would rise. True or False?

What gas do most scientists believe causes temperatures in the atmosphere to rise? Is it carbon dioxide, hydrogen, helium, or radon?

Climate scientists believe that human-caused global warming will increase the risk of skin cancer in human beings. True or False?

The answer to the first question is “false”; if it were true, your glass of Coke would overflow as the ice cubes melted. It’s icecaps on land, such as Greenland and Antarctica, that raise sea levels when they melt. Believers in human-made climate change scored no better on tests of climate science, or of science literacy in general, than deniers. Many believers think, for example, that global warming is caused by a hole in the ozone layer and that it can be mitigated by cleaning up toxic waste dumps.13 What predicts the denial of human-made climate change is not scientific illiteracy but political ideology. In 2015, 10 percent of conservative Republicans agreed that the Earth is getting warmer because of human activity (57 percent denied that the Earth is getting warmer at all), compared with 36 percent of moderate Republicans, 53 percent of Independents, 63 percent of moderate Democrats, and 78 percent of liberal Democrats.14

In a revolutionary analysis of reason in the public sphere, the legal scholar Dan Kahan has argued that certain beliefs become symbols of cultural allegiance. People affirm or deny these beliefs to express not what they know but who they are.15 We all identify with particular tribes or subcultures, each of which embraces a creed on what makes for a good life and how society should run its affairs. These creeds tend to vary along two dimensions. One contrasts a right-wing comfort with natural hierarchy with a left-wing preference for forced egalitarianism (measured by agreement with statements like “We need to dramatically reduce inequalities between the rich and the poor, whites and people of color, and men and women”). The other is a libertarian affinity to individualism versus a communitarian or authoritarian affinity to solidarity (measured by agreement with statements like “Government should put limits on the choices individuals can make so they don’t get in the way of what’s good for society”). A given belief, depending on how it is framed and who endorses it, can become a touchstone, password, motto, shibboleth, sacred value, or oath of allegiance to one of these tribes. As Kahan and his collaborators explain:

The principal reason people disagree about climate change science is not that it has been communicated to them in forms they cannot understand. Rather, it is that positions on climate change convey values—communal concern versus individual self-reliance; prudent self-abnegation versus the heroic pursuit of reward; humility versus ingenuity; harmony with nature versus mastery over it—that divide them along cultural lines.16

The values that divide people are also defined by which demons are blamed for society’s misfortunes: greedy corporations, out-of-touch elites, meddling bureaucrats, lying politicians, ignorant rednecks, or, all too often, ethnic minorities.

Kahan notes that people’s tendency to treat their beliefs as oaths of allegiance rather than disinterested appraisals is, in one sense, rational. With the exception of a tiny number of movers, shakers, and deciders, a person’s opinions on climate change or evolution are astronomically unlikely to make a difference to the world at large. But they make an enormous difference to the respect the person commands in his or her social circle. To express the wrong opinion on a politicized issue can make one an oddball at best—someone who “doesn’t get it”—and a traitor at worst. The pressure to conform becomes all the greater as people live and work with others who are like them and as academic, business, or religious cliques brand themselves with left-wing or right-wing causes. For pundits and politicians with a reputation for championing their faction, coming out on the wrong side of an issue would be career suicide.

Given these payoffs, endorsing a belief that hasn’t passed muster with science and fact-checking isn’t so irrational after all—at least, not by the criterion of the immediate effects on the believer. The effects on the society and planet are another matter. The atmosphere doesn’t care what people think about it, and if it in fact warms by 4° Celsius, billions of people will suffer, no matter how many of them had been esteemed in their peer groups for holding the locally fashionable opinion on climate change along the way. Kahan concludes that we are all actors in a Tragedy of the Belief Commons: what’s rational for every individual to believe (based on esteem) can be irrational for the society as a whole to act upon (based on reality).17

The perverse incentives behind “expressive rationality” or “identity-protective cognition” help explain the paradox of 21st-century irrationality. During the 2016 presidential campaign, many political observers were incredulous at opinions expressed by Trump supporters (and in many cases by Trump himself), such as that Hillary Clinton had multiple sclerosis and was concealing it with a body double, or that Barack Obama must have had a role in 9/11 because he was never in the Oval Office around that time (Obama, of course, was not the president in 2001). As Amanda Marcotte put it, “These folks clearly are competent enough to dress themselves, read the address of the rally and show up on time, and somehow they continue to believe stuff that’s so crazy and so false that it’s impossible to believe anyone that isn’t barking mad could believe it. What’s going on?”18 What’s going on is that these people are sharing blue lies. A white lie is told for the benefit of the hearer; a blue lie is told for the benefit of an in-group (originally, fellow police officers).19 While some of the conspiracy theorists may be genuinely misinformed, most express these beliefs for the purpose of performance rather than truth: they are trying to antagonize liberals and display solidarity with their blood brothers. The anthropologist John Tooby adds that preposterous beliefs are more effective signals of coalitional loyalty than reasonable ones.20 Anyone can say that rocks fall down rather than up, but only a person who is truly committed to the brethren has a reason to say that God is three persons but also one person, or that the Democratic Party ran a child sex ring out of a Washington pizzeria.


The conspiracy theories of fervid hordes at a political rally represent an extreme case of self-expression trumping truth, but the Tragedy of the Belief Commons runs even deeper. Another paradox of rationality is that expertise, brainpower, and conscious reasoning do not, by themselves, guarantee that thinkers will approach the truth. On the contrary, they can be weapons for ever-more-ingenious rationalization. As Benjamin Franklin observed, “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do.”

Psychologists have long known that the human brain is infected with motivated reasoning (directing an argument toward a favored conclusion, rather than following it where it leads), biased evaluation (finding fault with evidence that disconfirms a favored position and giving a pass to evidence that supports it), and a My-Side bias (self-explanatory).21 In a classic experiment from 1954, the psychologists Al Hastorf and Hadley Cantril quizzed Dartmouth and Princeton students about a film of a recent bone-crushing, penalty-filled football game between the two schools, and found that each set of students saw more infractions by the other team.22

We know today that political partisanship is like sports fandom: testosterone levels rise or fall on election night just as they do on Super Bowl Sunday.23 And so it should not be surprising that political partisans—which include most of us—always see more infractions by the other team. In another classic study, the psychologists Charles Lord, Lee Ross, and Mark Lepper presented proponents and opponents of the death penalty with a pair of studies, one suggesting that capital punishment deterred homicide (murder rates went down the year after states adopted it), the other that it failed to do so (murder rates were higher in states that had capital punishment than in neighboring states that didn’t). The studies were fake but realistic, and the experimenters flipped the outcomes for half the participants just in case any of them found comparisons across time more convincing than comparisons across space or vice versa. The experimenters found that each group was momentarily swayed by the result they had just learned, but as soon as they had had a chance to read the details, they picked nits in whichever study was uncongenial to their starting position, saying things like “The evidence is meaningless without data about how the overall crime rate went up in those years,” or “There might be different circumstances between the two states even though they shared a border.” Thanks to this selective prosecution, the participants were more polarized after they had all been exposed to the same evidence than before: the antis were more anti, the pros more pro.24

Engagement with politics is like sports fandom in another way: people seek and consume news to enhance the fan experience, not to make their opinions more accurate.25 That explains another of Kahan’s findings: the better informed a person is about climate change, the more polarized his or her opinion.26 Indeed, people needn’t even have a prior opinion to be polarized by the facts. When Kahan exposed people to a neutral, balanced presentation of the risks of nanotechnology (hardly a hot button on the cable news networks), they promptly split into factions that aligned with their views on nuclear power and genetically modified foods.27

If these studies aren’t sobering enough, consider this one, described by one magazine as “The Most Depressing Discovery About the Brain, Ever.”28 Kahan recruited a thousand Americans from all walks of life, assessed their politics and numeracy with standard questionnaires, and asked them to look at some data to evaluate the effectiveness of a new treatment for an ailment. The respondents were told that they had to pay close attention to the numbers, because the treatment was not expected to work a hundred percent of the time and might even make things worse, while sometimes the ailment got better on its own, without any treatment. The numbers had been jiggered so that one answer popped out (the treatment worked, because a larger number of treated people showed an improvement) but the other answer was correct (the treatment didn’t work, because a smaller proportion of the treated people showed an improvement). The knee-jerk answer could be overridden by a smidgen of mental math, namely eyeballing the ratios. In one version, the respondents were told that the ailment was a rash and the treatment was a skin cream. Here are the numbers they were shown:

              Improved   Got Worse
Treatment        223        75
No Treatment     107        21

The data implied that the skin cream did more harm than good: the people who used it improved at a ratio of around three to one, while those not using it improved at a ratio of around five to one. (With half the respondents, the rows were flipped, implying that the skin cream did work.) The more innumerate respondents were seduced by the larger absolute number of treated people who got better (223 versus 107) and picked the wrong answer. The highly numerate respondents zoomed in on the difference between the two ratios (3:1 versus 5:1) and picked the right one. The numerate respondents, of course, were not biased for or against skin cream: whichever way the data went, they spotted the difference. And contrary to liberal Democrats’ and conservative Republicans’ worst suspicions about each other’s intelligence, neither faction did substantially better than the other.
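
For readers who want to check the arithmetic, here is a minimal sketch (using only the numbers from the table above) of the comparison the numerate respondents made: improvement rates rather than raw counts.

```python
# A few lines recomputing the comparison from the table above:
# compare improvement *rates*, not raw counts of people who improved.
improved = {"treatment": 223, "no treatment": 107}
got_worse = {"treatment": 75, "no treatment": 21}

for group in improved:
    rate = improved[group] / (improved[group] + got_worse[group])
    ratio = improved[group] / got_worse[group]
    print(f"{group}: {rate:.0%} improved (about {ratio:.0f} to 1)")

# treatment:    75% improved (about 3 to 1)
# no treatment: 84% improved (about 5 to 1) -- the cream did more harm than good
```

Whichever way the rows are flipped, the same two lines of arithmetic reveal the direction of the effect.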

But all this changed in a version of the experiment in which the treatment was switched from boring skin cream to incendiary gun control (a law banning citizens from carrying concealed handguns in public), and the outcome was switched from rashes to crime rates. Now the highly numerate respondents diverged from each other according to their politics. When the data suggested that the gun-control measure lowered crime, all the liberal numerates spotted it, and most of the conservative numerates missed it—they did a bit better than the conservative innumerates, but were still wrong more often than they were right. When the data showed that gun control increased crime, this time most of the conservative numerates spotted it, but the liberal numerates missed it; in fact, they did no better than the liberal innumerates. So we can’t blame human irrationality on our lizard brains: it was the sophisticated respondents who were most blinded by their politics. As two other magazines summarized the results: “Science Confirms: Politics Wrecks Your Ability to Do Math” and “How Politics Makes Us Stupid.”29

Researchers themselves are not immune. They often trip over their own biases when they try to show that their political adversaries are biased, a fallacy that can be called the bias bias (as in Matthew 7:3, “And why beholdest thou the mote that is in thy brother’s eye, but considerest not the beam that is in thine own eye?”).30 A recent study by three social scientists (members of a predominantly liberal profession) purporting to show that conservatives were more hostile and aggressive had to be retracted when the authors discovered that they had misread the labels: it was actually liberals who were more hostile and aggressive.31 Many studies that try to show that conservatives are temperamentally more prejudiced and rigid than liberals turn out to have cherry-picked the test items.32 Conservatives are indeed more prejudiced against African Americans, but liberals turn out to be more prejudiced against religious Christians. Conservatives are indeed more biased toward allowing Christian prayers in schools, but liberals are more biased toward allowing Muslim prayers in schools.

It would also be an error to think that bias about bias is confined to the left: that would be a bias bias bias. In 2010 the libertarian economists Daniel Klein and Zeljka Buturovic published a study aiming to show that left-liberals were economically illiterate, based on erroneous answers to Econ 101 items like these:33

Restrictions on housing development make housing less affordable. [True]

Mandatory licensing of professional services increases the prices of those services. [True]

A company with the largest market share is a monopoly. [False]

Rent control leads to housing shortages. [True]

(Another item was “Overall, the standard of living is higher today than it was 30 years ago,” which is true. Consistent with my claim in chapter 4 that progressives hate progress, 61 percent of the progressives and 52 percent of the liberals disagreed.) Conservatives and libertarians gloated, and the Wall Street Journal reported the study under the headline “Are You Smarter Than a Fifth Grader?” with the implication that left-wingers are not. But critics pointed out that the items on the quiz implicitly challenged left-wing causes. So the pair ran a follow-up with equally elementary Econ 101 items designed this time to get under the skin of conservatives:34

When two people complete a voluntary transaction, they both necessarily come away better off. [False]

Making abortion illegal would increase the number of black-market abortions. [True]

Legalizing drugs would give more wealth and power to street gangs and organized crime. [False]

Now it was the conservatives who earned the dunce caps. Klein, to his credit, retracted his swipe at the left in an article entitled “I Was Wrong, and So Are You.” As he noted,

More than 30 percent of my libertarian compatriots (and more than 40 percent of conservatives), for instance, disagreed with the statement “A dollar means more to a poor person than it does to a rich person”—c’mon, people!—versus just 4 percent among progressives. . . . A full tabulation of all 17 questions showed that no group clearly out-stupids the others. They appear about equally stupid when faced with proper challenges to their position.35


If the left and right are equally stupid in quizzes and experiments, we might expect them to be equally off the mark in making sense of the world. The data on human history presented in chapters 5 through 18 provide an opportunity to see which of the major political ideologies can explain the facts of human progress. I’ve been arguing that the main drivers were the nonpolitical ideals of reason, science, and humanism, which led people to seek and apply knowledge that enhanced human flourishing. Do right-wing or left-wing ideologies have anything to add? Do the seventy-odd graphs entitle either side to say, “Bias, shmias: we’re right; you’re wrong”? It seems that each side can take some credit while also missing big parts of the story.

Foremost is the conservative skepticism about the ideal of progress itself. Ever since the first modern conservative, Edmund Burke, suggested that humans were too flawed to think up schemes for improving their condition and were better off sticking with traditions and institutions that kept them from the abyss, a major stream of conservative thought has been skeptical about the best-laid plans of mice and men. The reactionary fringe of conservatism, recently disinterred by Trumpists and the European far right (chapter 23), believes that Western civilization has careened out of control since some halcyon century, having abandoned the moral clarity of traditional Christendom for a decadent secular fleshpot that, if left on its current course, will soon implode from terrorism, crime, and anomie.

Well, that’s wrong. Life before the Enlightenment was darkened by starvation, plagues, superstitions, maternal and infant mortality, marauding knight-warlords, sadistic torture-executions, slavery, witch hunts, and genocidal crusades, conquests, and wars of religion.36 Good riddance. The arcs in figures 5-1 through 18-4 show that as ingenuity and sympathy have been applied to the human condition, life has gotten longer, healthier, richer, safer, happier, freer, smarter, deeper, and more interesting. Problems remain, but problems are inevitable.

The left, too, has missed the boat in its contempt for the market and its romance with Marxism. Industrial capitalism launched the Great Escape from universal poverty in the 19th century and is rescuing the rest of humankind in a Great Convergence in the 21st. Over the same time span, communism brought the world terror-famines, purges, gulags, genocides, Chernobyl, megadeath revolutionary wars, and North Korea–style poverty before collapsing everywhere else of its own internal contradictions.37 Yet in a recent survey 18 percent of social science professors identified themselves as Marxist, and the words capitalist and free market still stick in the throats of most intellectuals.38 Partly this is because their brains autocorrect these terms to unbridled, unregulated, unfettered, or untrammeled free markets, perpetuating a false dichotomy: a free market can coexist with regulations on safety, labor, and the environment, just as a free country can coexist with criminal laws. And a free market can coexist with high levels of spending on health, education, and welfare (chapter 9)—indeed, some of the countries with the greatest amount of social spending also have the greatest amount of economic freedom.39

To be fair to the left, the libertarian right has embraced the same false dichotomy and seems all too willing to play the left’s straw man.40 Right-wing libertarians (in their 21st-century Republican Party version) have converted the observation that too much regulation can be harmful (by over-empowering bureaucrats, costing more to society than it delivers in benefits, or protecting incumbents against competition rather than consumers against harm) into the dogma that less regulation is always better than more regulation. They have converted the observation that too much social spending can be harmful (by creating perverse incentives against work and undermining the norms and institutions of civil society) into the dogma that any amount of social spending is too much. And they have translated the observation that tax rates can be too high into a hysterical rhetoric of “liberty” in which raising the marginal tax rate for income above $400,000 from 35 to 39.6 percent means turning the country over to jackbooted storm troopers. Often the refusal to seek the optimum level of government is justified by an appeal to Friedrich Hayek’s argument in The Road to Serfdom that regulation and welfare lay out a slippery slope along which a country will slide into penury and tyranny.

The facts of human progress strike me as having been as unkind to right-wing libertarianism as to right-wing conservatism and left-wing Marxism. The totalitarian governments of the 20th century did not emerge from democratic welfare states sliding down a slippery slope, but were imposed by fanatical ideologues and gangs of thugs.41 And countries that combine free markets with more taxation, social spending, and regulation than the United States (such as Canada, New Zealand, and Western Europe) turn out to be not grim dystopias but rather pleasant places to live, and they trounce the United States in every measure of human flourishing, including crime, life expectancy, infant mortality, education, and happiness.42 As we saw, no developed country runs on right-wing libertarian principles, nor has any realistic vision of such a country ever been laid out.

It should not be surprising that the facts of human progress confound the major -isms. The ideologies are more than two centuries old and are based on mile-high visions such as whether humans are tragically flawed or infinitely malleable, and whether society is an organic whole or a collection of individuals.43 A real society comprises hundreds of millions of social beings, each with a trillion-synapse brain, who pursue their well-being while affecting the well-being of others in complex networks with massive positive and negative externalities, many of them historically unprecedented. It is bound to defy any simple narrative of what will happen under a given set of rules. A more rational approach to politics is to treat societies as ongoing experiments and open-mindedly learn the best practices, whichever part of the spectrum they come from. The empirical picture at present suggests that people flourish most in liberal democracies with a mixture of civic norms, guaranteed rights, market freedom, social spending, and judicious regulation. As Pat Paulsen noted, “If either the right wing or the left wing gained control of the country, it would fly around in circles.”

It’s not that Goldilocks is always right and that the truth always falls halfway between extremes. It’s that current societies have winnowed out the worst blunders of the past, so if a society is functioning halfway decently—if the streets aren’t running with blood, if obesity is a bigger problem than malnutrition, if the people who vote with their feet are clamoring to get in rather than racing for the exits—then its current institutions are probably a good starting point (itself a lesson we can take from Burkean conservatism). Reason tells us that political deliberation would be most fruitful if it treated governance more like scientific experimentation and less like an extreme-sports competition.


Though examining data from history and social science is a better way of evaluating our ideas than arguing from the imagination, the acid test of empirical rationality is prediction. Science proceeds by testing the predictions of hypotheses, and we all recognize the logic in everyday life when we praise or ridicule barroom sages depending on whether events bear them out, when we use idioms that hold people responsible for their accuracy like to eat crow and to have egg on your face, and when we use sayings like “Put your money where your mouth is” and “The proof of the pudding is in the eating.”

Unfortunately the epistemological standards of common sense—we should credit the people and ideas that make correct predictions, and discount the ones that don’t—are rarely applied to the intelligentsia and commentariat, who dispense opinions free of accountability. Always-wrong prognosticators like Paul Ehrlich continue to be canvassed by the press, and most readers have no idea whether their favorite columnists, gurus, or talking heads are more accurate than a chimpanzee picking bananas. The consequences can be dire: many military and political debacles arose from misplaced confidence in the predictions of experts (such as intelligence reports in 2003 that Saddam Hussein was developing nuclear weapons), and a few percentage points of accuracy in predicting financial markets can spell the difference between gaining and losing a fortune.

A track record of predictions also ought to inform our appraisal of intellectual systems, including political ideologies. Though some ideological differences come from clashing values and may be irreconcilable, many hinge on different means to agreed-upon ends and should be decidable. Which policies will in fact bring about things that almost everyone wants, like lasting peace or economic growth? Which will reduce poverty, or violent crime, or illiteracy? A rational society should seek the answers by consulting the world rather than assuming the omniscience of a bloc of opinionators who have coalesced around a creed.

Unfortunately, the expressive rationality documented by Kahan in his experimental subjects also applies to editorialists and experts. The payoffs that determine their reputations don’t coincide with the accuracy of the predictions, since no one is keeping score. Instead, their reputations hinge on their ability to entertain, titillate, or shock; on their ability to instill confidence or fear (in the hopes that a prophecy might be self-fulfilling or self-defeating); and on their skill in galvanizing a coalition and celebrating its virtue.

Since the 1980s the psychologist Philip Tetlock has studied what distinguishes accurate forecasters from the many oracles who are “often mistaken but never in doubt.”44 He recruited hundreds of analysts, columnists, academics, and interested laypeople to compete in forecasting tournaments in which they were presented with possible events and asked to assess their likelihoods. Experts are ingenious at wordsmithing their predictions to protect them from falsification, using weaselly modal auxiliaries (could, might), adjectives (fair chance, serious possibility), and temporal modifiers (very soon, in the not-too-distant future). So Tetlock pinned them down by stipulating events with unambiguous outcomes and deadlines (for example, “Will Russia annex additional Ukrainian territory in the next three months?” “In the next year, will any country withdraw from the Eurozone?” “How many additional countries will report cases of the Ebola virus in the next eight months?”) and having them write down numerical probabilities.

Tetlock also avoided the common fallacy of praising or ridiculing a single probabilistic prediction after the fact, as when the poll aggregator Nate Silver of FiveThirtyEight came under fire for giving Donald Trump just a 29 percent chance of winning the 2016 election.45 Since we cannot replay the election thousands of times and count up the number of times that Trump won, the question of whether the prediction was confirmed or disconfirmed is meaningless. What we can do, and what Tetlock did, is compare the set of each forecaster’s probabilities with the corresponding outcomes. Tetlock used a formula which credits the forecaster not just for accuracy but for accurately going out on a limb (since it’s easier to be accurate by just playing it safe with 50-50 predictions). The formula is mathematically related to how much they would win if they put their money where their mouths were and bet on their predictions according to their own odds.
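
The passage does not spell the formula out; the proper scoring rule used in Tetlock’s tournaments, the Brier score, captures exactly this trade-off. The sketch below is an illustration of how such a rule rewards calibrated boldness, not a reconstruction of the tournaments’ exact scoring.

```python
# An illustrative Brier score: mean squared difference between the stated
# probabilities and what actually happened (1 = event occurred, 0 = it didn't).
# Lower is better; confident calls are rewarded only when they turn out right.
def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes      = [1,   0,   1,   0]
plays_it_safe = [0.5, 0.5, 0.5, 0.5]   # hedges every call at 50-50
out_on_a_limb = [0.9, 0.1, 0.8, 0.2]   # commits, and happens to be right

print(brier_score(plays_it_safe, outcomes))  # 0.25
print(brier_score(out_on_a_limb, outcomes))  # 0.025 -- the better (lower) score
```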

Twenty years and twenty-eight thousand predictions later, how well did the experts do? On average, about as well as a chimpanzee (which Tetlock described as throwing darts rather than picking bananas). Tetlock and the psychologist Barbara Mellers held a rematch between 2011 and 2015 in which they recruited several thousand contestants to take part in a forecasting tournament held by the Intelligence Advanced Research Projects Activity (the research organization of the federation of American intelligence agencies). Once again there was plenty of dart-throwing, but in both tournaments the couple could pick out “superforecasters” who performed not just better than chimps and pundits, but better than professional intelligence officers with access to classified information, better than prediction markets, and not too far from the theoretical maximum. How can we explain this apparent clairvoyance? (For a year, that is—accuracy declines with distance into the future, and it falls to the level of chance around five years out.) The answers are clear and profound.

The forecasters who did the worst were the ones with Big Ideas—left-wing or right-wing, optimistic or pessimistic—which they held with an inspiring (but misguided) confidence:

As ideologically diverse as they were, they were united by the fact that their thinking was so ideological. They sought to squeeze complex problems into the preferred cause-effect templates and treated what did not fit as irrelevant distractions. Allergic to wishy-washy answers, they kept pushing their analyses to the limit (and then some), using terms like “furthermore” and “moreover” while piling up reasons why they were right and others wrong. As a result, they were unusually confident and likelier to declare things “impossible” or “certain.” Committed to their conclusions, they were reluctant to change their minds even when their predictions clearly failed. They would tell us, “Just wait.”46

Indeed, the very traits that put these experts in the public eye made them the worst at prediction. The more famous they were, and the closer the event was to their area of expertise, the less accurate their predictions turned out to be. But the chimplike success of brand-name ideologues does not mean that “experts” are worthless and we should distrust elites. It’s that we need to revise our concept of an expert. Tetlock’s superforecasters were:

pragmatic experts who drew on many analytical tools, with the choice of tool hinging on the particular problem they faced. These experts gathered as much information from as many sources as they could. When thinking, they often shifted mental gears, sprinkling their speech with transition markers such as “however,” “but,” “although,” and “on the other hand.” They talked about possibilities and probabilities, not certainties. And while no one likes to say “I was wrong,” these experts more readily admitted it and changed their minds.47

Successful prediction is the revenge of the nerds. Superforecasters are intelligent but not necessarily brilliant, falling just in the top fifth of the population. They are highly numerate, not in the sense of being math whizzes but in the sense of comfortably thinking in guesstimates. They have personality traits that psychologists call “openness to experience” (intellectual curiosity and a taste for variety), “need for cognition” (pleasure taken in intellectual activity), and “integrative complexity” (appreciating uncertainty and seeing multiple sides). They are anti-impulsive, distrusting their first gut feeling. They are neither left-wing nor right-wing. They aren’t necessarily humble about their abilities, but they are humble about particular beliefs, treating them as “hypotheses to be tested, not treasures to be guarded.” They constantly ask themselves, “Are there holes in this reasoning? Should I be looking for something else to fill this in? Would I be convinced by this if I were somebody else?” They are aware of cognitive blind spots like the Availability and confirmation biases, and they discipline themselves to avoid them. They display what the psychologist Jonathan Baron calls “active open-mindedness,” with opinions such as these:48

People should take into consideration evidence that goes against their beliefs. [Agree]

It is more useful to pay attention to those who disagree with you than to pay attention to those who agree. [Agree]

Changing your mind is a sign of weakness. [Disagree]

Intuition is the best guide in making decisions. [Disagree]

It is important to persevere in your beliefs even when evidence is brought to bear against them. [Disagree]

Even more important than their temperament is their manner of reasoning. Superforecasters are Bayesian, tacitly using the rule from the eponymous Reverend Bayes on how to update one’s degree of credence in a proposition in light of new evidence. They begin with the base rate for the event in question: how often it is expected to occur across the board and over the long run. Then they nudge that estimate up or down depending on the degree to which new evidence portends the event’s occurrence or non-occurrence. They seek this new evidence avidly, and avoid both overreacting to it (“This changes everything!”) and underreacting to it (“This means nothing!”).

Take, for example, the prediction “There will be an attack by Islamist militants in Western Europe between 21 January and 31 March 2015,” made shortly after the Charlie Hebdo massacre in January of that year. Pundits and politicians, their heads spinning with the Availability heuristic, would play out the scenario in the theater of the imagination and, not wanting to appear complacent or naïve, answer Definitely Yes. That’s not how superforecasters work. One of them, asked by Tetlock to think aloud, reported that he began by estimating the base rate: he went to Wikipedia, looked up the list of Islamist terrorist attacks in Europe for the previous five years, and divided by 5, which predicted 1.2 attacks a year. But, he reasoned, the world had changed since the Arab Spring in 2011, so he lopped off the 2010 data, which brought the base rate up to 1.5. ISIS recruitment had increased since the Charlie Hebdo attacks, a reason to poke the estimate upward, but so had security measures, a reason to tug it downward. Balancing the two factors, an increase by about a fifth seemed reasonable, yielding a prediction of 1.8 attacks a year. There were 69 days left in the forecast period, so he divided 69 by 365 and multiplied the fraction by 1.8. That meant that the chance of an Islamist attack in Western Europe by the end of March was about one in three. A manner of forecasting very different from the way most people think led to a very different forecast.
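
The forecaster’s arithmetic can be replayed in a few lines. The sketch below follows the passage step by step; the raw attack count is back-calculated from the quoted rates, not taken from any actual dataset.

```python
# The forecaster's arithmetic, replayed step by step. The raw attack count (6)
# is implied by the rates quoted above (1.2/year over five years), not real data.
attacks_last_5_years = 6
base_rate = attacks_last_5_years / 5              # 1.2 attacks per year
post_arab_spring_rate = attacks_last_5_years / 4  # drop 2010: 1.5 per year
adjusted_rate = post_arab_spring_rate * 1.2       # nudge up about a fifth, on balance
days_left = 69
p_attack = adjusted_rate * days_left / 365
print(round(p_attack, 2))                         # 0.34 -- about one chance in three
```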

Two other traits distinguish superforecasters from pundits and chimpanzees. The superforecasters believe in the wisdom of crowds, laying their hypotheses on the table for others to criticize or amend and pooling their estimates with those of others. And they have strong opinions on chance and contingency in human history as opposed to necessity and fate. Tetlock and Mellers asked different groups of people whether they agreed with statements like the following:

Events unfold according to God’s plan.

Everything happens for a reason.

There are no accidents or coincidences.

Nothing is inevitable.

Even major events like World War II or 9/11 could have turned out very differently.

Randomness is often a factor in our personal lives.

They calculated a Fate Score by adding up the “Agree” ratings for items like the first three and the “Disagree” ratings for items like the last three. An average American is somewhere in the middle. An undergraduate at an elite university scores a bit lower; a so-so forecaster lower still; and the superforecasters lowest of all, with the most accurate superforecasters expressing the most vehement rejection of fate and acceptance of chance.
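
As an illustration of the tallying (the exact scoring in Tetlock and Mellers’s studies may differ in detail), a Fate Score of this kind could be computed from the six sample items above as follows.

```python
# An illustrative tally: agree with fate-flavored items, disagree with
# chance-flavored ones, and the Fate Score goes up.
fate_items = [
    "Events unfold according to God's plan.",
    "Everything happens for a reason.",
    "There are no accidents or coincidences.",
]
chance_items = [
    "Nothing is inevitable.",
    "Even major events like World War II or 9/11 could have turned out very differently.",
    "Randomness is often a factor in our personal lives.",
]

def fate_score(agrees):
    """agrees: dict mapping each item to True (agree) or False (disagree)."""
    score = sum(agrees[item] for item in fate_items)          # +1 per fate item agreed with
    score += sum(not agrees[item] for item in chance_items)   # +1 per chance item rejected
    return score  # 0 = pure chance and contingency, 6 = pure fate

fatalist = {**{i: True for i in fate_items}, **{i: False for i in chance_items}}
print(fate_score(fatalist))  # 6
```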

To my mind, Tetlock’s hardheaded appraisal of expertise by the ultimate benchmark, prediction, should revolutionize our understanding of history, politics, epistemology, and intellectual life. What does it mean that the wonkish tweaking of probabilities is a more reliable guide to the world than the pronouncements of erudite sages and narratives inspired by systems of ideas? Aside from smacking us upside the head with a reminder to be more humble and open-minded, it offers a glimpse into the workings of history on the time scale of years and decades. Events are determined by myriad small forces incrementing or decrementing their likelihoods and magnitudes rather than by sweeping laws and grand dialectics. Unfortunately for many intellectuals and for all political ideologues, this is not the way they are accustomed to thinking, but perhaps we had better get used to it. When Tetlock was asked at a public lecture to forecast the nature of forecasting, he said, “When the audience of 2515 looks back on the audience of 2015, their level of contempt for how we go about judging political debate will be roughly comparable to the level of contempt we have for the 1692 Salem witch trials.”49


Tetlock did not assign a probability to his whimsical prediction, and he gave it a long, safe deadline. It certainly would be unwise to forecast an improvement in the quality of political debate within the five-year window in which prediction is feasible. The major enemy of reason in the public sphere today—which is not ignorance, innumeracy, or cognitive biases, but politicization—appears to be on an upswing.

In the political arena itself, Americans have become increasingly polarized.50 Most people’s opinions are too shallow and uninformed to fit into a coherent ideology, but in a dubious form of progress, the percentage of Americans whose opinions are down-the-line liberal or down-the-line conservative doubled between 1994 and 2014, from 10 to 21 percent. The polarization has coincided with an increase in social segregation by politics: over those twenty years, the ideologues have become more likely to say that most of their close friends share their political views.

The parties have become more partisan as well. According to a recent Pew study, in 1994 about a third of Democrats were more conservative than the median Republican, and vice versa. In 2014 the figures were closer to a twentieth. Though Americans across the political spectrum drifted leftward through 2004, since then they have diverged on every major issue except gay rights, including government regulation, social spending, immigration, environmental protection, and military strength. Even more troublingly, each side has become more contemptuous of the other. In 2014, 38 percent of Democrats held “very unfavorable” views of the Republican Party (up from 16 percent in 1994), and more than a quarter saw it as “a threat to the nation’s well-being.” Republicans were even more hostile to Democrats, with 43 percent viewing the party unfavorably and more than a third seeing it as a threat. The ideologues on each side have also become more resistant to compromise.

Fortunately, a majority of Americans are more moderate in all these opinions, and the proportion who call themselves moderate has not changed in forty years.51 Unfortunately, it’s the extremists who are more likely to vote, donate, and pressure their representatives. There is little reason to think that any of this has improved since the survey was conducted in 2014, to put it mildly.

Universities ought to be the arena in which political prejudice is set aside and open-minded investigation reveals the way the world works. But just when we need this disinterested forum the most, academia has become more politicized as well—not more polarized, but more left-wing. Colleges have always been more liberal than the American population, but the skew has been increasing. In 1990, 42 percent of faculty were far left or liberal (11 percentage points more than the American population), 40 percent were moderate, and 18 percent were far right or conservative, for a left-to-right ratio of 2.3 to 1. In 2014 the proportions were 60 percent far left or liberal (30 percentage points more than the population), 28 percent moderate, and 12 percent conservative, a ratio of 5 to 1.52 The proportions vary by field: departments of business, computer science, engineering, and health science are evenly split, while the humanities and social sciences are decidedly on the left: the proportion of conservatives is in the single digits, and they are outnumbered by Marxists two to one.53 Professors in the physical and biological sciences are in between, with few radicals and virtually no Marxists, but liberals outnumber conservatives by a wide margin.

The liberal tilt of academia (and of journalism, commentary, and intellectual life) is in some ways natural.54 Intellectual inquiry is bound to challenge the status quo, which is never perfect. And verbally articulated propositions, intellectuals’ stock in trade, are more congenial to the deliberate policies typically favored by liberals than to the diffuse forms of social organization such as markets and traditional norms typically favored by conservatives.55 A liberal tilt is also, in moderation, desirable. Intellectual liberalism was at the forefront of many forms of progress that almost everyone has come to accept, such as democracy, social insurance, religious tolerance, the abolition of slavery and judicial torture, the decline of war, and the expansion of human and civil rights.56 In many ways we are (almost) all liberals now.57

But we have seen that when a creed becomes attached to an in-group, the critical faculties of its members can be disabled, and there are reasons to think that has happened within swaths of academia.58 In The Blank Slate (updated in 2016) I showed how leftist politics had distorted the study of human nature, including sex, violence, gender, childrearing, personality, and intelligence. In a recent manifesto, Tetlock, together with the psychologists José Duarte, Jarret Crawford, Charlotta Stern, Jonathan Haidt, and Lee Jussim, documented the leftward swing of social psychology and showed how it has compromised the quality of research.59 Quoting John Stuart Mill—“He who knows only his own side of the case, knows little of that”—they called for greater political diversity in psychology, the version of diversity that matters the most (as opposed to the version commonly pursued, namely people who look different but think alike).60

To the credit of academic psychology, Duarte et al.’s critique has been respectfully received.61 But the respect is far from universal. When the New York Times columnist Nicholas Kristof cited their article favorably and made similar points, the angry reaction confirmed their worst accusations (the most highly recommended comment was “You don’t diversify with idiots”).62 And a faction of academic culture composed of hard-left faculty, student activists, and an autonomous diversity bureaucracy (pejoratively called social justice warriors) has become aggressively illiberal. Anyone who disagrees with the assumption that racism is the cause of all problems is called a racist.63 Non-leftist speakers are frequently disinvited after protests or drowned out by jeering mobs.64 A student may be publicly shamed by her dean for a private email that considers both sides of a controversy.65 Professors are pressured to avoid lecturing on upsetting topics, and have been subjected to Stalinesque investigations for politically incorrect opinions.66 Often the repression veers into unintended comedy.67 A guideline for deans on how to identify “microaggressions” lists remarks such as “America is the land of opportunity” and “I believe the most qualified person should get the job.” Students mob and curse a professor who invited them to discuss a letter written by his wife suggesting that students chill out about Halloween costumes. A yoga course was canceled because yoga was deemed “cultural appropriation.” The comedians themselves are not amused: Jerry Seinfeld, Chris Rock, and Bill Maher, among others, are wary of performing at college campuses because inevitably some students will be enraged by a joke.68

For all the follies on campus, we can’t let right-wing polemicists indulge in a bias bias and dismiss any idea they don’t like that comes out of a university. The academic archipelago embraces a vast sea of opinions, and it is committed to norms such as peer review, tenure, open debate, and the demand for citation and empirical evidence that are engineered to foster disinterested truth-seeking, however imperfectly they do so in practice. Colleges and universities have fostered the heterodox criticisms reviewed here and elsewhere, while delivering immense gifts of knowledge to the world.69 And it’s not as if alternative arenas—the blogosphere, the Twittersphere, cable news, talk radio, Congress—are paragons of objectivity and rigor.

Of the two forms of politicization that are subverting reason today, the political is far more dangerous than the academic, for an obvious reason. It’s often quipped (no one knows who said it first) that academic debates are vicious because the stakes are so small.70 But in political debates the stakes are unlimited, including the future of the planet. Politicians, unlike professors, pull the levers of power. In 21st-century America, the control of Congress by a Republican Party that became synonymous with the extreme right has been pernicious, because it is so convinced of the righteousness of its cause and the evil of its rivals that it has undermined the institutions of democracy to get what it wants. The corruptions include gerrymandering, imposing voting restrictions designed to disenfranchise Democratic voters, encouraging unregulated donations from moneyed interests, blocking Supreme Court nominations until their party controls the presidency, shutting down the government when their maximal demands aren’t met, and unconditionally supporting Donald Trump over their own objections to his flagrantly antidemocratic impulses.71 Whatever differences in policy or philosophy divide the parties, the mechanisms of democratic deliberation should be sacrosanct. Their erosion, disproportionately by the right, has led many people, including a growing share of young Americans, to see democratic government as inherently dysfunctional and to become cynical about democracy itself.72

Intellectual and political polarization feed each other. It’s harder to be a conservative intellectual when American conservative politics has become steadily more know-nothing, from Ronald Reagan to Dan Quayle to George W. Bush to Sarah Palin to Donald Trump.73 On the other side, the capture of the left by identity politicians, political correctness police, and social justice warriors creates an opening for loudmouths who brag of “telling it like it is.” A challenge of our era is how to foster an intellectual and political culture that is driven by reason rather than tribalism and mutual reaction.


Making reason the currency of our discourse begins with clarity about the centrality of reason itself.74 As I mentioned, many commentators are confused about it. The discovery of cognitive and emotional biases does not mean that “humans are irrational” and so there’s no point in trying to make our deliberations more rational. If humans were incapable of rationality, we could never have discovered the ways in which they were irrational, because we would have no benchmark of rationality against which to assess human judgment, and no way to carry out the assessment. Humans may be vulnerable to bias and error, but clearly not all of us all the time, or no one would ever be entitled to say that humans are vulnerable to bias and error. The human brain is capable of reason, given the right circumstances; the problem is to identify those circumstances and put them more firmly in place.

For the same reason, editorialists should retire the new cliché that we are in a “post-truth era” unless they can keep up a tone of scathing irony. The term is corrosive, because it implies that we should resign ourselves to propaganda and lies and just fight back with more of our own. We are not in a post-truth era. Mendacity, truth-shading, conspiracy theories, extraordinary popular delusions, and the madness of crowds are as old as our species, but so is the conviction that some ideas are right and others are wrong.75 The same decade that has seen the rise of pants-on-fire Trump and his reality-challenged followers has also seen the rise of a new ethic of fact-checking. Angie Holan, the editor of PolitiFact, a fact-checking project begun in 2007, noted:

[Many of] today’s TV journalists . . . have picked up the torch of fact-checking and now grill candidates on issues of accuracy during live interviews. Most voters don’t think it’s biased to question people about whether their seemingly fact-based statements are accurate. Research published earlier this year by the American Press Institute showed that more than eight in 10 Americans have a positive view of political fact-checking.

In fact, journalists regularly tell me their media organizations have started highlighting fact-checking in their reporting because so many people click on fact-checking stories after a debate or high-profile news event. Many readers now want fact-checking as part of traditional news stories as well; they will vocally complain to ombudsmen and readers’ representatives when they see news stories repeating discredited factual claims.76

This ethic would have served us well in earlier decades when false rumors regularly set off pogroms, riots, lynchings, and wars (including the Spanish-American War in 1898, the escalation of the Vietnam War in 1964, the Iraq invasion of 2003, and many others).77 It was not applied rigorously enough to prevent Trump’s victory in 2016, but since then his fibs and those of his spokespeople have been mercilessly ridiculed in the media and popular culture, which means that the resources for favoring truth are in place even if they don’t always carry the day.

Over the long run, the institutions of reason can mitigate the Tragedy of the Belief Commons and allow the truth to prevail. For all of our current irrationality, few influential people today believe in werewolves, unicorns, witches, alchemy, astrology, bloodletting, miasmas, animal sacrifice, the divine right of kings, or supernatural omens in rainbows and eclipses. Moral irrationality can be outgrown as well. As recently as my childhood, the Virginia judge Leon Bazile upheld the conviction of Richard and Mildred Loving for their interracial marriage with an argument that not even the most benighted conservative would advance today:

The parties were guilty of a most serious crime. It was contrary to the declared public law, founded upon motives of public policy . . . upon which social order, public morality and the best interests of both races depend. . . . Almighty God created the races white, black, yellow, malay and red, and he placed them on separate continents. The fact that he separated the races shows that he did not intend for the races to mix.78

And presumably most liberals would not be persuaded by this defense of Castro’s Cuba by the intellectual icon Susan Sontag in 1969:

The Cubans know a lot about spontaneity, gaiety, sensuality and freaking out. They are not linear, desiccated creatures of print-culture. In short, their problem is almost the obverse of ours—and we must be sympathetic to their efforts to solve it. Suspicious as we are of the traditional Puritanism of left revolutions, American radicals ought to be able to maintain some perspective when a country known mainly for dance music, prostitutes, cigars, abortions, resort life and pornographic movies gets a little up-tight about sexual morals and, in one bad moment two years ago, rounds up several thousand homosexuals in Havana and sends them to a farm to rehabilitate themselves.79

In fact, these “farms” were forced labor camps, and they arose not as a correction to spontaneous gaiety and freaking out but as an expression of a homophobia that was deeply rooted in that Latin culture. Whenever we get upset about the looniness of public discourse today, we should remind ourselves that people weren’t so rational in the past, either.


What can be done to improve standards of reasoning? Persuasion by facts and logic, the most direct strategy, is not always futile. It’s true that people can cling to beliefs in defiance of all evidence, like Lucy in Peanuts who insisted that snow comes out of the ground and rises into the sky even as she was being slowly buried in a snowfall. But there are limits to how high the snow can pile up. When people are first confronted with information that contradicts a staked-out position, they become even more committed to it, as we’d expect from the theories of identity-protective cognition, motivated reasoning, and cognitive dissonance reduction. Feeling their identity threatened, belief holders double down and muster more ammunition to fend off the challenge. But since another part of the human mind keeps a person in touch with reality, as the counterevidence piles up the dissonance can mount until it becomes too much to bear and the opinion topples over, a phenomenon called the affective tipping point.80 The tipping point depends on the balance between how badly the opinion holder’s reputation would be damaged by relinquishing the opinion and how blatant and public the counterevidence has become, to the point of being common knowledge: a naked emperor, an elephant in the room.81 As we saw in chapter 10, that is starting to happen with public opinion on climate change. And entire populations can shift when a critical nucleus of persuadable influencers changes its mind and everyone else follows along, or when one generation is replaced by another that doesn’t cling to the same dogmas (progress, funeral by funeral).

Across the society as a whole the wheels of reason often turn slowly, and it would be nice to speed them up. The obvious places to apply this torque are in education and the media. For several decades fans of reason have pressured schools and universities to adopt curricula in “critical thinking.” Students are advised to look at both sides of an issue, to back up their opinions with evidence, and to spot logical fallacies like circular reasoning, attacking a straw man, appealing to authority, arguing ad hominem, and reducing a graded issue to black or white.82 Related programs called “debiasing” try to inoculate students against cognitive fallacies such as the Availability heuristic and confirmation bias.83

When they were first introduced, these programs had disappointing outcomes, which led to pessimism as to whether we could ever knock sense into the person on the street. But unless risk analysts and cognitive psychologists represent a superior breed of human, something in their education must have enlightened them about cognitive fallacies and how to avoid them, and there is no reason those enlightenments can’t be applied more widely. The beauty of reason is that it can always be applied to understand failures of reason. A second look at critical thinking and debiasing programs has shown what makes them succeed or fail.

The reasons are familiar to education researchers.84 Any curriculum will be pedagogically ineffective if it consists of a lecturer yammering in front of a blackboard, or a textbook that students highlight with a yellow marker. People understand concepts only when they are forced to think them through, to discuss them with others, and to use them to solve problems. A second impediment to effective teaching is that pupils don’t spontaneously transfer what they learned from one concrete example to others in the same abstract category. Students in a math class who learn how to arrange a marching band into even rows using the principle of a least common multiple are stymied when asked to arrange rows of vegetables in a garden. In the same way, students in a critical thinking course who are taught to discuss the American Revolution from both the British and American perspectives will not make the leap to consider how the Germans viewed World War I.

With these lessons about lessons under their belt, psychologists have recently devised debiasing programs that fortify logical and critical thinking curricula. They encourage students to spot, name, and correct fallacies across a wide range of contexts.85 Some use computer games that provide students with practice, and with feedback that allows them to see the absurd consequences of their errors. Other curricula translate abstruse mathematical statements into concrete, imaginable scenarios. Tetlock has compiled the practices of successful forecasters into a set of guidelines for good judgment (for example, start with the base rate; seek out evidence and don’t overreact or underreact to it; don’t try to explain away your own errors but instead use them as a source of calibration). These and other programs are demonstrably effective: students’ newfound wisdom outlasts the training session and transfers to new subjects.
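
To make the first of those guidelines concrete, here is a minimal sketch of a base-rate-first Bayesian update, written in Python purely for illustration; the ceasefire question, the 20 percent base rate, and the likelihoods are hypothetical numbers invented for this example, not figures reported by Tetlock.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of a hypothesis after one piece of evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical forecasting question: will a ceasefire hold for ninety days?
prior = 0.20  # start with the base rate: suppose similar ceasefires have held about 20 percent of the time
# A new piece of evidence (say, monitors are deployed) is judged more likely
# if the ceasefire will hold (0.7) than if it will collapse (0.3).
posterior = bayes_update(prior, 0.70, 0.30)
print(round(posterior, 2))  # 0.37: the forecast is revised upward, but nowhere near certainty

The shape of the update is the point: the forecaster neither clings to the base rate nor lurches to a confident prediction on a single clue, which is exactly the underreaction and overreaction the guidelines warn against.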

Despite these successes, and despite the fact that the ability to engage in unbiased, critical reasoning is a prerequisite to thinking about anything else, few educational institutions have set themselves the goal of enhancing rationality. (This includes my own university, where my suggestion during a curriculum review that all students should learn about cognitive biases fell deadborn from my lips.) Many psychologists have called on their field to “give debiasing away” as one of its greatest potential contributions to human welfare.86


Effective training in critical thinking and cognitive debiasing may not be enough to cure identity-protective cognition, in which people cling to whatever opinion enhances the glory of their tribe and their status within it. This is the disease with the greatest morbidity in the political realm, and so far scientists have misdiagnosed it, pointing to irrationality and scientific illiteracy instead of the myopic rationality of the Tragedy of the Belief Commons. As one writer noted, scientists often treat the public the way Englishmen treat foreigners: they speak more slowly and more loudly.87

Making the world more rational, then, is not just a matter of training people to be better reasoners and setting them loose. It also depends on the rules of discourse in workplaces, social circles, and arenas of debate and decision-making. Experiments have shown that the right rules can avert the Tragedy of the Belief Commons and force people to dissociate their reasoning from their identities.88 One technique was discovered long ago by rabbis: they forced yeshiva students to switch sides in a Talmudic debate and argue the opposite position. Another is to have people try to reach a consensus in a small discussion group; this forces them to defend their opinions to their groupmates, and the truth usually wins.89 Scientists themselves have hit upon a new strategy called adversarial collaboration, in which mortal enemies work together to get to the bottom of an issue, setting up empirical tests that they agree beforehand will settle it.90

Even the mere requirement to explicate an opinion can shake people out of their overconfidence. Most of us are deluded about our degree of understanding of the world, a bias called the Illusion of Explanatory Depth.91 Though we think we understand how a zipper works, or a cylinder lock, or a toilet, as soon as we are called upon to explain it we are dumbfounded and forced to confess we have no idea. That is also true of hot-button political issues. When people with die-hard opinions on Obamacare or NAFTA are challenged to explain what those policies actually are, they soon realize that they don’t know what they are talking about, and become more open to counterarguments. Perhaps most important, people are less biased when they have skin in the game and have to live with the consequences of their opinions. In a review of the literature on rationality, the cognitive scientists Hugo Mercier and Dan Sperber conclude, “Contrary to common bleak assessments of human reasoning abilities, people are quite capable of reasoning in an unbiased manner, at least when they are evaluating arguments rather than producing them, and when they are after the truth rather than trying to win a debate.”92

The way that the rules in particular arenas can make us collectively stupid or smart can resolve the paradox that keeps popping up in this chapter: why the world seems to be getting less rational in an age of unprecedented knowledge and tools for sharing it. The resolution is that in most arenas, the world has not been getting less rational. It’s not as if hospital patients are increasingly dying of quackery, or planes are falling out of the sky, or food is rotting on wharves because no one can figure out how to get it into stores. The chapters on progress have shown that our collective ingenuity has been increasingly successful in solving society’s problems.

Indeed, in one realm after another we are seeing the conquest of dogma and instinct by the armies of reason. Newspapers are supplementing shoe leather and punditry with statisticians and fact-checking squads.93 The cloak-and-dagger world of national intelligence is seeing farther into the future by using the Bayesian reasoning of superforecasters.94 Health care is being reshaped by evidence-based medicine (which should have been a redundant expression long ago).95 Psychotherapy has progressed from the couch and notebook to Feedback-Informed Treatment.96 In New York, and increasingly in other cities, violent crime has been reduced with the real-time data-crunching system called Compstat.97 The effort to aid the developing world is being guided by the Randomistas, economists who gather data from randomized trials to distinguish fashionable boondoggles from programs that actually improve people’s lives.98 Volunteering and charitable giving are being scrutinized by the Effective Altruism movement, which distinguishes altruistic acts that enhance the lives of beneficiaries from those that enhance the warm glow in benefactors.99 Sports has seen the advent of Moneyball, in which strategies and players are evaluated by statistical analysis rather than intuition and lore, allowing smarter teams to beat richer teams and giving fans endless new material for conversations over the hot stove.100 The blogosphere has spawned the Rationality Community, who urge people to be “less wrong” in their opinions by applying Bayesian reasoning and compensating for cognitive biases.101 And in the day-to-day functioning of governments, the application of behavioral insights (sometimes called Nudge) and evidence-based policy has wrung more social benefits out of fewer tax dollars.102 In area after area, the world has been getting more rational.

There is, of course, a flaming exception: electoral politics and the issues that have clung to it. Here the rules of the game are fiendishly designed to bring out the most irrational in people.103 Voters have a say on issues that don’t affect them personally, and never have to inform themselves or justify their positions. Practical agenda items like trade and energy are bundled with moral hot buttons like euthanasia and the teaching of evolution. Each bundle is strapped to a coalition with geographic, racial, and ethnic constituencies. The media cover elections like horse races, and analyze issues by pitting ideological hacks against each other in screaming matches. All of these features steer people away from reasoned analysis and toward perfervid self-expression. Some are products of the misconception that the benefits of democracy come from elections, whereas they depend more on having a government that is constrained in its powers, responsive to its citizens, and attentive to the results of its policies (chapter 14). As a result, reforms that are designed to make governance more “democratic,” such as plebiscites and direct primaries, may instead have made governance more identity-driven and irrational. The conundrums are inherent to democracy and have been debated since the time of Plato.104 They have no instant solution, but identifying the worst of the current problems and setting the goal of mitigating them is the place to start.

When issues are not politicized, people can be altogether rational. Kahan notes that “bitter public disputes over science are in fact the exception rather than the rule.”105 No one gets exercised over whether antibiotics work, or whether driving drunk is a good idea. Recent history proves the point in a natural experiment, complete with a neatly matched control group.106 The human papillomavirus (HPV) is sexually transmitted and a major cause of cervical cancer but can be neutralized with a vaccine. Hepatitis B is also sexually transmitted, also causes cancer, and also can be prevented by a vaccine. Yet HPV vaccination became a political firestorm, with parents protesting that the government should not be making it easier for teenagers to have sex, while hepatitis B vaccination is unexceptionable. The difference, Kahan suggests, lies in the way the two vaccines were introduced. Hep B was treated as a routine public health matter, like whooping cough or yellow fever. But the manufacturer of the HPV vaccine lobbied state legislatures to make vaccination mandatory, starting with adolescent girls, which sexualized the treatment and raised the dander of puritanical parents.

To make public discourse more rational, issues should be depoliticized as much as is feasible. Experiments have shown that when people hear about a new policy, such as welfare reform, they will like it if it is proposed by their own party and hate it if it is proposed by the other—all the while convinced that they are reacting to it on its objective merits.107 That implies that spokespeople should be chosen carefully. Several climate activists have lamented that by writing and starring in the documentary An Inconvenient Truth, Al Gore may have done the movement more harm than good, because as a former Democratic vice-president and presidential nominee he stamped climate change with a left-wing seal. (It’s hard to believe today, but environmentalism was once denounced as a right-wing cause, in which the gentry frivolously worried about habitats for duck-hunting and the views from their country estates rather than serious issues like racism, poverty, and Vietnam.) Recruiting conservative and libertarian commentators who have been convinced by the evidence and are willing to share their concern would be more effective than recruiting more scientists to speak more slowly and more loudly.108

Also, the factual state of affairs should be unbundled from remedies that are freighted with symbolic political meaning. Kahan found that people are less polarized in their opinion about the very existence of anthropogenic climate change when they are reminded of the possibility that it might be mitigated by geoengineering than when they are told that it calls for stringent controls on emissions.109 (This does not, of course, mean that geoengineering itself need be advocated as the primary solution.) Depoliticizing an issue can lead to real action. Kahan helped a compact of Florida businesspeople, politicians, and resident associations, many of them Republican, agree to a plan to adapt to rising sea levels that threatened coastal roads and freshwater supplies. The plan included measures to reduce carbon emissions, which under other circumstances would be politically radioactive. But as long as the planning was focused on problems they could see and the politically divisive backstory was downplayed, they acted reasonably.110

For their part, the media could examine their role in turning politics into a sport, and intellectuals and pundits could think twice about competing. Can we imagine a day in which the most famous columnists and talking heads have no predictable political orientation but try to work out defensible conclusions on an issue-by-issue basis? A day in which “You’re just repeating the left-wing [or right-wing] position” is considered a devastating gotcha? In which people (especially academics) will answer a question like “Does gun control reduce crime?” or “Does a minimum wage increase unemployment?” with “Wait, let me look up the latest meta-analysis” rather than with a patellar reflex predictable from their politics? A day when writers on the right and left abandon the Chicago Way of debating (“They pull a knife, you pull a gun. He sends one of yours to the hospital, you send one of his to the morgue”) and adopt the arms-controllers’ tactic of Graduated Reciprocation in Tension-Reduction (make a small unilateral concession with an invitation that it be reciprocated)?111

That day is a long way off. But the self-healing powers of rationality, in which flaws in reasoning are singled out as targets for education and criticism, take time to work. It took centuries for Francis Bacon’s observations on anecdotal reasoning and the confusion of correlation with causation to become second nature to scientifically literate people. It’s taken almost fifty years for Tversky and Kahneman’s demonstrations of Availability and other cognitive biases to make inroads into our conventional wisdom. The discovery that political tribalism is the most insidious form of irrationality today is still fresh and mostly unknown. Indeed, sophisticated thinkers can be as infected by it as anyone else. With the accelerating pace of everything, perhaps the countermeasures will catch on sooner.

However long it takes, we must not let the existence of cognitive and emotional biases or the spasms of irrationality in the political arena discourage us from the Enlightenment ideal of relentlessly pursuing reason and truth. If we can identify ways in which humans are irrational, we must know what rationality is. Since there’s nothing special about us, our fellows must have at least some capacity for rationality as well. And it’s in the very nature of rationality that reasoners can always step back, consider their own shortcomings, and reason out ways to work around them.
