I can calculate the motion of heavenly bodies but not the madness of people.
ENGINEERS WOULD PROBABLY build kluges more often if it were not for one small fact: that which is clumsy is rarely reliable. Kluges are often (though not always) designed to last for a moment, not a lifetime. On Apollo 13, with time running out and the nearest factory 200,000 miles away, making a kluge was essential. But the fact that some clever NASA engineers managed to build a substitute air-filter adapter using duct tape and a sock doesn’t mean that what they built was well built; the whole thing could have fallen apart at a moment’s notice. Even kluges designed to last for a while — like vacuum-powered windshield wipers — often have what engineers might call “narrow operating conditions.” (You wanted those wipers to work uphill too?)
There can be little doubt that the human brain too is fragile, and not just because it routinely commits the cognitive errors we’ve already discussed, but also because it is deeply vulnerable both to minor malfunctions and, in some cases, to severe breakdown. The mildest malfunctions are what chess masters call blunders and a Norwegian friend of mine calls “brain farts” — momentary lapses of reason and attention that cause chagrin (d’Oh!) and the occasional traffic accident. We know better, but for a moment we just plain goof. Despite our best intentions, our brain just doesn’t manage to do what we want it to. No one is immune to this. Even Tiger Woods occasionally misses an easy putt.
At the risk of stating the obvious, properly programmed computers simply don’t make these kinds of transient blunders. My laptop has never, ever forgotten to “carry the one” in the midst of a complicated sum, nor (to my chagrin) has it “spaced out” and neglected to protect its queen during a game of chess. Eskimos don’t really have 500 words for snow, but we English speakers sure have a lot of words for our cognitive short circuits: not just mistakes, blunders, and fingerfehlers (a hybrid of English and German that’s popular among chess masters) but also goofs, gaffes, flubs, and boo-boos, along with slips, howlers, oversights, and lapses. Needless to say, we have plenty of opportunities to use this vocabulary.
The fact that even the best of us are prone to the occasional blunder illustrates something important about the neural hardware that runs our mental software: consistency just isn’t our forte. Nearly everything we carbon-based units do runs some chance of error. Word-finding failures, moments of disorientation and forgetfulness, each in its own way points to the imperfection inherent in the nerve cells (neurons) from which brain circuits are made. If a foolish consistency is the hobgoblin of little minds, as the American writer Ralph Waldo Emerson (1803-1882) once said, a foolish inconsistency characterizes every single human mind. There’s no guarantee that any person’s mind will always fire on all cylinders.
Yet random gaffes and transient blunders are just a tiny piece of a larger, more serious puzzle: why do we humans so often fail to do what we set out to do, and what makes the mind so fragile that it can sometimes spiral out of control altogether?
Plenty of circumstances systematically increase the chance of making mental errors. The more that’s on our mind, for example, the more likely we are to fall back on our primitive ancestral system. Bye-bye, prefrontal cortex, signature of the noble human mind; hello, animal instinct, short-sighted and reactive. People committed to eating in a healthful way are more likely to turn to junk food if something else is on their mind. Laboratory studies show that as the demands on the brain, so-called cognitive load, increase, the ancestral system continues business as usual — while the more modern deliberative system gets left behind. Precisely when the cognitive chips are down, when we most need our more evolved (and theoretically sounder) faculties, they can let us down and leave us less judicious. When mentally (or emotionally) taxed, we become more prone to stereotyping, more egocentric, and more vulnerable to the pernicious effects of anchoring.
No system, of course, can cope with infinite demands, but if I had been hired to design this aspect of the mind, I would have started by letting the deliberative, “rational” system take priority over the reflexive whenever time permits. In giving precedence instead to the ancestral reflexive system — not necessarily because it is better but simply because it is older — evolution has squandered some of our most valuable intellectual resources.
Whether we are under cognitive strain or not, another banal but systematic failure hampers our ability to meet mental goals: most of us — at one time or another — “space out.” We have one thing that we nominally intend to accomplish (say, finishing a report before a deadline), and the next thing you know, our thoughts have wandered. An ideal creature would be endowed with an iron will, sticking, in all but the most serious emergencies, to carefully constructed goals. Humans, by contrast, are characteristically distractible, no matter what the task might be.
Even with the aid of Google, I can neither confirm nor deny the widespread rumor that one in four people is daydreaming about sex at any given moment,[48] but my hunch is that the number is not too far from the truth. According to a recent British survey, one in three office workers reportedly daydreams about sex during office meetings. An economist quoted in the UK’s Sunday Times estimates that this daydreaming may cost the British economy about £7.8 billion annually.
Statistics on daydreaming about sex might be amusing (at least if you’re not the boss), but “zoning out,” as it is known in the technical literature, is a real problem. For example, all told, nearly 100,000 Americans a year die in accidents of various sorts (in motor vehicles or otherwise); if even a third of those tragedies are due to lapses of attention, mind wandering is among the ten leading causes of death.[49]
My computer never zones out while downloading my email, but I find my mind wandering all the time, and not just during faculty meetings; to my chagrin, this also happens during those rare moments when I have time for pleasure reading. Attention-deficit disorder (ADD) gets all the headlines, but in reality, nearly everyone periodically finds it hard to stay on task.
What explains our species-wide tendency to zone out — even, sometimes, in the midst of important things? My guess is that our inherent distractibility is one more consequence of the sloppy integration between an ancestral, reflexive set of goal-setting mechanisms (perhaps shared with all mammals) and our evolutionarily more recent deliberative system, which, clever as it may be, isn’t always kept in the loop.
Even when we aren’t zoning out, we are often chickening out: putting off till tomorrow what we really ought to do today. As the eighteenth-century lexicographer and essayist Samuel Johnson put it (some 200 years before the invention of video games), procrastination is “one of the general weaknesses, which in spite of the instruction of moralists, and the remonstrances of reason, prevail to a greater or less degree in every mind.”
By one recent estimate, 80-95 percent of college students engage in procrastination, and two thirds of all students consider themselves to be (habitual) procrastinators. Another estimate says that 15-20 percent of all adults are chronically affected — and I can’t help but wonder whether the rest are simply lying. Most people are troubled by procrastination; most characterize it as bad, harmful, and foolish. And most of us do it anyway.
It’s hard to see how procrastination per se could be adaptive. The costs are often considerable, the benefits minuscule, and it wastes all the mental effort people put into making plans in the first place. Studies have shown that students who routinely procrastinate consistently get lower grades; businesses that miss deadlines due to the procrastination of their employees can sometimes lose millions of dollars. Yet many of us can’t help ourselves. Why, when so little good comes of procrastinating, do we persist in doing it so much?
I for one hope someone figures out the answer soon, and maybe even invents a magic pill that can keep us on task. Too bad no one’s gotten around to it just yet: tomorrow and tomorrow and tomorrow. In the meantime, the research that has been done suggests a diagnosis if not a solution: procrastination is, in the words of one psychologist, the “quintessential self-regulatory failure.” Nobody, of course, can at a given moment do all of the things that need to be done, but the essence of procrastination is the way in which we defer progress on our own most important goals.
The problem, of course, is not that we put things off, per se; if we need to buy groceries and do our taxes, we literally can’t accomplish both at the same time. If we do one now, the other must wait. The problem is that we often postpone the things that need to get done in favor of others — watching television or playing video games — that most decidedly don’t need to get done. Procrastination is a sign of our inner kluge because it shows how our top-level goals (spend more time with the children, finish that novel) are routinely undermined by goals of considerably less priority (if catching up on the latest episodes of Desperate Housewives can be counted as a “goal” at all).
People need downtime and I don’t begrudge them that, but procrastination does highlight a fundamental glitch in our cognitive “design”: the gap between the machinery that sets our goals (offline) and the machinery that chooses (online, in the moment) which goals to follow.
The tasks most likely to tempt us to procrastinate generally meet two conditions: we don’t enjoy doing them and we don’t have to do them now. Given half a chance, we put off the aversive and savor the fun, often without considering the ultimate costs. Procrastination is, in short, the bastard child of future discounting (that tendency to devalue the future in relation to the present) and the use of pleasure as a quick-and-dirty compass.
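That arithmetic can be made concrete. Below is a toy sketch, my own illustration rather than anything from the research described here; the values and the discount constant k are invented, though the hyperbolic form, value divided by (1 + k × delay), is a standard way of modeling future discounting. It shows how a report due in ten days can lose out to an evening of television, and how the ranking flips as the deadline closes in.

```python
# Toy model of future discounting (an illustration with made-up numbers):
# hyperbolic discounting makes preferences reverse as deadlines approach,
# which is one plausible mechanism behind procrastination.

def discounted_value(value, delay_days, k=0.5):
    """Subjective value of a reward that is delay_days away: V = A / (1 + k*D)."""
    return value / (1 + k * delay_days)

REPORT, TV = 100.0, 20.0  # finishing the report is "worth" far more than TV

for days_until_deadline in (10, 3, 0):
    report_now = discounted_value(REPORT, days_until_deadline)
    tv_now = discounted_value(TV, 0)  # television pays off immediately
    choice = "work" if report_now > tv_now else "procrastinate"
    print(f"{days_until_deadline:2d} days out: report feels like {report_now:5.1f}, "
          f"TV like {tv_now:5.1f} -> {choice}")

# Ten days out the report feels like 16.7 against TV's 20.0 (procrastinate);
# three days out it feels like 40.0 (work). Same stakes, reversed preference.
```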
We zone out, we chicken out, we deceive ourselves. To be human is to fight a lifelong uphill battle for self-control. Why? Because evolution left us clever enough to set reasonable goals but without the willpower to see them through.
Alas, zoning out and chickening out are among the least of our problems; the most serious are the psychological breakdowns that require professional help. From schizophrenia to obsessive-compulsive disorder and bipolar disorder (also called manic depression), nothing more clearly illustrates the vulnerability of the human mind than our susceptibility to chronic and severe mental disorders. What explains the madness of John Nash, the bipolar disorder of Vincent van Gogh and Virginia Woolf, the paranoia of Edgar Allan Poe, the obsessive-compulsive disorder of Howard Hughes, the depression that drove Ernest Hemingway, Jerzy Kosinski, Sylvia Plath, and Spalding Gray to suicide? Perhaps a quarter of all human beings at a given moment suffer from one clinical disorder or another. And, over the course of a lifetime, almost half the population will face bouts of one mental illness or another. Why is our mind so prone to breakdown?
Let’s start with a fact that is well known but perhaps not fully appreciated. For the most part, mental disorders aren’t random, unprecedented anomalies, completely unique to the individuals who suffer from them. Rather, they comprise clusters of symptoms that recur again and again. When things fall apart mentally, they tend to do so in recognizable ways, what engineers sometimes call “known failure modes.” A given make and model of a car, say, might have a fine engine but consistently suffer from electrical problems. The human mind is vulnerable to its own particular malfunctions, well documented enough to be classified in the human equivalent of Chilton’s Auto Repair: the DSM-IV (short for Diagnostic and Statistical Manual of Mental Disorders, fourth edition; a fifth edition is scheduled for 2011).
To be sure, symptoms vary among individuals, both in severity and in number. Just as no two colds are exactly alike, no two people diagnosed with a given mental illness experience it in precisely the same way. Some people with depression, for example, are dysfunctional, and some aren’t; some people with schizophrenia hear voices, and others don’t.
And diagnosis remains an inexact science. There are a few disorders (such as multiple personality syndrome) whose very existence is controversial, and a few “conditions” that used to be labeled as disorders but never should have been (such as homosexuality, removed from the DSM in 1973).[50] But by and large, there is an astonishing amount of consistency in the ways in which the human mind can break down, and certain symptoms, such as dysphoria (sadness), anxiety, panic, paranoia, delusions, obsessions, and unchecked aggression, recur again and again.
When we see the same basic patterns over and over, there has to be a reason for them. What is the mind, such that it breaks down in the ways that it does?
The standard tack in evolutionary psychiatry, the branch of evolutionary psychology that deals with mental disorders, is to explain particular disorders (or occasionally symptoms) in terms of hidden benefits.[51] We saw one example in the first chapter, the somewhat dubious suggestion that schizophrenia might have been favored by natural selection because of a purported benefit that visions conveyed to tribal shamans, but there are many others. Agoraphobia has been viewed as a “potentially adaptive consequence of repeated panic attacks,” and anxiety has been interpreted as a way of “altering our thinking, behavior, and physiology in advantageous ways.” Depression, meanwhile, allegedly evolved as a way of allowing individuals to “accept defeat… and accommodate to what would otherwise be unacceptable low social rank.”
If you’re like me, you won’t find these examples particularly compelling. Were schizophrenics really more likely than other people to become shamans? Were those who became shamans more successful than their non-schizophrenic counterparts in producing viable offspring? Even if they were, are shamans prevalent enough in history to explain why at least 1 in every 100 humans suffers from the disorder? The depression theory initially seems more promising; as the authors note, it might well be better for the low man on the totem pole to accede to the wishes of an alpha male than to fight a battle that can’t be won. Furthermore, depression often does stem from people’s sense that their status is low relative to that of some peer group. But does the rest of the social competition theory even fit the facts? Depression isn’t usually about accepting defeat; it’s about not accepting it. A friend of mine, we’ll call him T., has been clinically depressed for years. He’s not particularly low in social rank (he’s actually a man of considerable accomplishment). Yet although there is nothing objectively wrong with his life, he doesn’t accept it: he ruminates on it. Depression hasn’t mobilized him to improve his life, nor has it kept him out of trouble; instead, it’s paralyzed him, and it’s difficult to see how paralysis could be adaptive.
Of course, I don’t mean to suggest that one dubious theory is enough to rule out an entire line of work; certainly some physical disorders convey benefits, and there may well be analogous cases of mental disorders. The classic example of a physical disorder with a clear corresponding benefit is the gene that is associated with sickle cell anemia. Having two copies of the gene is harmful, but having a single copy of the gene alongside a normal copy can significantly reduce one’s chance of contracting malaria. In environments where malaria has been widespread (such as sub-Saharan Africa), the benefits outweigh the potential costs. And, accordingly, copies of such genes are far more widespread among people whose ancestors lived in parts of the world where malaria was prevalent.
But while some physical disorders do demonstrably bring about offsetting benefits, most probably don’t, and, with the possible exception of sociopathy,[52] I don’t think I’ve ever seen a case of mental illness offering advantages that might convincingly outweigh the costs. There are few, if any, concrete illustrations of offsetting advantages in mental illness, no mental sickle cell anemia that demonstrably protects against “mental malaria.” Depression, for example, doesn’t ward off anxiety (in the way that a propensity for sickling protects against malaria) — it co-occurs with it. Most of the literature on the alleged virtues of mental disorders simply seems fanciful. All too often, I am reminded of Voltaire’s Dr. Pangloss, who found adaptive virtue in everything: “Observe, for instance, the nose is formed for spectacles, therefore we wear spectacles. The legs are visibly designed for stockings, accordingly we wear stockings. Stones were made to be hewn and to construct castles.”
It’s true that many disorders have at least some compensation, but the reasoning is often backward. The fact that some disorders have some redeeming features doesn’t mean that those features offset the costs, nor does it necessarily explain why those disorders evolved in the first place. What happy person would volunteer to take a hypothetical depressant — call it “anti-Prozac” or “inverse Zoloft” — in order to accrue the benefits that allegedly accompany depression?
At the very least, it seems plausible that some disorders (or symptoms) may arise not as direct adaptations but simply from inadequate “design” or outright failure. Just as cars run out of gas, the brain can run out of (or run low on) neurotransmitters (or the molecules that traffic in them). We are born with coping mechanisms (or the capacity to acquire them), but nothing guarantees that those coping mechanisms will be all-powerful or infallible. A bridge that can withstand winds of 100 miles per hour but not 200 doesn’t collapse in gusts of 200 miles per hour because it is adaptive to fail in such strong winds; it falls apart because it was built to a lesser specification. Similarly, other disorders, especially those that are extremely rare, may result from little more than “genetic noise,” random mutations that convey no advantage whatsoever.
Even if we set aside possibilities like sheer genetic noise, it is a fallacy to assume that if a mental illness persists in a population, it must convey an advantage. The bitter reality is that evolution doesn’t “care” about our inner lives, only results. So long as people with disorders reproduce at reasonably high rates, deleterious genetic variants can and do persist in the species, without regard to the fact that they leave their bearers in considerable emotional pain.[53]
All this has been discussed in the professional literature, but another possibility has gotten almost no attention: could it be that some aspects of mental illness persist not because of any specific advantage, but simply because evolution couldn’t readily build us in any other way?
Take, for example, anxiety. An evolutionary psychologist might tell you that anxiety is like pain: both exist to motivate their bearers into certain kinds of action. Maybe so, but does that mean that anxiety is an inevitable component of motivation, which we would expect to see in any well-functioning organism? Not at all — anxiety might have goaded some of our prelinguistic, pre-deliberative-reasoning ancestors into action, but that doesn’t make it the right system for creatures like us, who do have the capacity to reason. Instead, if we humans were built from the ground up, anxiety might have no place at all: our higher-level reasoning capacities could handle planning by themselves. In a creature empowered to set and follow its own goals, it’s not clear that anxiety would serve any useful function.
One could make a similar argument about the human need for self-esteem, social approval, and rank — collectively, the source of much psychological distress. Perhaps in any world we could imagine, it would be to most creatures’ benefit to secure social approval, but it is not clear why a lack of social approval ought necessarily to result in emotional pain. Why not be like the Buddhist robots I conjured in the last chapter, always aware of (and responsive to) circumstances, but never troubled by them?
Science fiction? Who knows. What these thought experiments do tell us is that it is possible to imagine other ways in which creatures might live and breathe, and it’s not clear that the disorders we see would inevitably evolve in those creatures.
What I am hinting at, of course, is this: the possibility that mental illness might stem, at least in part, from accidents of our evolutionary history. Consider, for example, our species-wide vulnerability to addiction, be it to cigarettes, alcohol, cocaine, sex, gambling, video games, chat rooms, or the Internet. Addiction can arise when short-term benefits appear subjectively enormous (as with heroin, often described as being better than sex), when long-term benefits appear subjectively small (to people otherwise depressed, who see themselves as having little to live for), or when the brain fails to properly compute the ratio between the two. (The latter seems to happen in some patients with lesions in the ventromedial prefrontal cortex, who evidently can detect costs and benefits but seem indifferent as to their ratio.) In each case, addiction can be thought of as a particular case of a general problem: our species-wide difficulty in balancing ancestral and modern systems of self-control.
To be sure, other factors are at work, such as the amount of pleasure a given individual gets from a given activity; some people get a kick out of gambling, and others would rather just save their pennies. Different people are vulnerable to different addictions, and to different degrees. But we are all at least somewhat at risk. Once the balance between long-term and short-term was left to a rather unprincipled tug-of-war, humanity’s vulnerability to addiction may have become all but inevitable.
If the split in our systems of self-control represents one kind of fault line in the human mind, confirmation bias and motivated reasoning combine to form another: the relative ease with which humans can lose touch with reality. When we “lose it” or “blow things out of proportion,” we lose perspective, getting so angry, for example, that all traces of objectivity vanish. It’s not one of our virtues, but it is a part of being human; we are clearly a hotheaded species.
That said, most of the time, most of us get over it; we may lose touch in the course of an argument, but ultimately we take a deep breath or get a good night’s sleep, and move on. (“Yes, it was really lousy of you to stay out all night and not call, but I admit that when I said you never call I might have been exaggerating. Slightly.” Or, as Christine Lavin once sang, “I’m sorry, forgive me,… but I’m still mad at you.”)
What occasionally allows normal people to spiral out of control is a witch’s brew of cognitive kluges: (1) the clumsy apparatus of self-control (which in the heat of the moment all too often gives the upper hand to our reflexive system); (2) the lunacy of confirmation bias (which convinces us that we are always right, or nearly so); (3) its evil twin, motivated reasoning (which leads us to protect our beliefs, even those beliefs that are dubious); and (4) the contextually driven nature of memory (such that when we’re angry at someone, we tend to remember other things about them that have made us angry in the past). In short, this leaves “hot” systems dominating cool reason; carnage often ensues.
That same mix, minus whatever inhibitory mechanisms normal people use to calm down, may exacerbate, or maybe even spawn, several other aspects of mental illness. Take, for example, the common symptom of paranoia. Once someone starts down that path — for whatever reason, legitimate or otherwise — the person may never leave it, because paranoia begets paranoia. As the old saying puts it, even the paranoid have real enemies; for an organism with confirmation bias and the will to deny counterevidence (that is, motivated reasoning), all that is necessary is one true enemy, if that. The paranoid person notices and recalls evidence that confirms his or her paranoia, discounts evidence that contradicts it, and the cycle repeats itself.
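To see how little it takes for such a cycle to run away, here is a toy simulation, my own sketch with invented constants, not a clinical model: two agents watch the same stream of perfectly ambiguous events, but one of them discounts whatever disconfirms its suspicion.

```python
import random

# Toy "paranoid ratchet" (an illustration with made-up numbers, not a clinical
# model). Evidence is pure 50/50 noise; the biased agent gives confirming
# events full weight and discounts disconfirming ones, so suspicion ratchets up.

def final_suspicion(disconfirming_weight, steps=500, seed=1):
    rng = random.Random(seed)
    suspicion = 0.1  # weakly suspicious to start
    for _ in range(steps):
        looks_hostile = rng.random() < 0.5  # perfectly ambiguous evidence
        if looks_hostile:
            suspicion += 0.02 * (1 - suspicion)  # confirming: full weight
        else:
            suspicion -= 0.02 * suspicion * disconfirming_weight  # discounted
    return suspicion

print(f"even-handed agent: {final_suspicion(1.0):.2f}")  # settles near 0.5, agnostic
print(f"biased agent:      {final_suspicion(0.2):.2f}")  # climbs above 0.8
```

Identical evidence, radically different conclusions; the only thing that differs is how heavily each agent weighs what it doesn’t want to hear.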
Depressives too often lose touch with reality, but in different ways. Depressives don’t generally hallucinate (as, for example, many schizophrenics do), but they often distort their perception of reality by fixating on the negative aspects of their lives — losses, mistakes, missed opportunities, and so forth — leading to what I call a “ruminative cycle,” one of the most common symptoms of depression. An early, well-publicized set of reports suggested that depressives are more realistic than happy people, but today a more considered view is that depressives are disordered in part because they place undue focus on negative things, often creating a downward spiral that is difficult to escape. Mark Twain once wrote, in a rare but perceptive moment of seriousness, “Nothing that grieves us can be called little; by the eternal laws of proportion a child’s loss of a doll and a king’s loss of a crown are events of the same size.” Much, if not all, depression may begin with the magnification of loss, which in turn may stem directly from the ways in which memory is driven by context. Sad memories stoke sadder memories, and those generate more that are sadder still. To a person who is depressed, every fresh insult confirms a fundamental view that life is unfair or not worth living. Contextual memory thus stokes the memory of past injustices. (Meanwhile, motivated reasoning often leads depressives to discount evidence that would contradict their general view about the sadness of life.) Without some measure of self-control or a capacity to shift focus, the cycle may persist.
Such feedback cycles may even contribute a bit to bipolar disorder, not only in the “down” moments but also in the manic (“up”) phases. According to Kay Redfield Jamison, a top-notch psychologist who has herself battled manic depression, when one has bipolar disorder “there is a particular kind of pain, elation, loneliness, and terror involved in this kind of madness… When you’re high it’s tremendous. The ideas and feelings are fast and frequent like shooting stars… But, somewhere, this changes. The fast ideas are far too fast, and there are far too many; overwhelming confusion replaces clarity… madness carves its own reality.”
Without sufficient inherent capacity for cognitive and emotional control, a bipolar person in a manic state may spiral upward so far that he or she loses touch with reality. Jamison writes that in one of her early manic episodes she found herself “in that glorious illusion of high summer days, gliding, flying, now and again lurching through cloud banks and ethers, past stars, and across fields of ice crystals… I remember singing ‘Fly Me to the Moon’ as I swept past those of Saturn, and thinking myself terribly funny. I saw and experienced that which had been only in dreams, or fitful fragments of aspiration.” Manic moods beget manic thoughts, and the spiral intensifies.
Even the delusions common to schizophrenia may be exacerbated by — though probably not initially caused by — the effects of motivated reasoning and contextual memory. Many a schizophrenic, for example, has come to believe that he is Jesus and has then constructed a whole world around that notion, presumably “enabled” in part by the twin forces of confirmation bias and motivated reasoning. The psychiatrist Milton Rokeach once brought together three such patients, each of whom believed himself to be the Son of the Holy Father. Rokeach’s initial hope was that the three would recognize the inconsistency in their beliefs and each in turn would be dissuaded from his own delusions. Instead, the three patients simply became agitated. Each worked harder than ever to preserve his own delusions; each developed a different set of rationalizations. In a species that combines contextually driven memory with confirmation bias and a strong need to construct coherent-seeming life narratives, losing touch with reality may well be an occupational hazard.
Depression (and perhaps bipolar disorder) is probably also aggravated by another one of evolution’s glitches: the degree to which we depend on the somewhat quirky apparatus of pleasure. As we saw in the previous chapter, long before sophisticated deliberative reasoning arose, our pre-hominid ancestors presumably set their goals primarily by following the compass of pleasure (and avoiding its antithesis, pain). Even though modern humans have more sophisticated machinery for setting goals, pleasure and pain probably still form the core of our goal-setting apparatus. In depressives, this may yield a kind of double whammy; in addition to the immediate pain of depression, another symptom that often arises is paralysis. Why? Quite possibly because the internal compass of pleasure becomes nonresponsive, leaving sufferers with little motivation, nothing to steer toward. For an organism that kept its mood separate from its goals, the dysfunction often accompanying depression might simply not occur.
In short, many aspects of mental illness may be traced to, or at least intensified by, some of the quirks of our evolution: contextual memory, the distorting effects of confirmation bias and motivated reasoning, and the peculiar split in our systems of self-control. A fourth contributor may be our species’ thirst for explanation, which often leads us to build stories out of a sparse set of facts. Just as a gambler may seek to “explain” every roll of the dice, people afflicted with schizophrenia may use the cognitive machinery of explanation to piece together voices and delusions. This is not to say that people with disorders aren’t different from healthy folks, but rather that their disorders may well have their beginnings in neural vulnerabilities that we all share.
Perhaps it is no accident, then, that so much of the advice given by cognitive-behavioral therapists for treating depression consists of getting people to cope with ordinary human failures in reasoning. David Burns’s well-known Feeling Good Handbook, for example, suggests that ten basic cognitive errors, such as “overgeneralization” and “personalization,” are made by people who are anxious or depressed. Overgeneralization is the process of erroneously “seeing a single event as a part of a never-ending pattern of defeat”; personalization is the mistake of assuming that we (rather than external events) are responsible for anything bad that happens. Both errors probably stem in part from the human tendency to extrapolate excessively from small amounts of highly salient data. One setback does not a miserable life make, yet it’s human to treat the latest, worst news as an omen, as if a whole life of cyclical ups and downs were negated by a single vivid disaster. Such misapprehensions might simply not exist in a species capable of assigning equal mental weight to confirming and disconfirming evidence.
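The arithmetic of overgeneralization is easy to make concrete. In the toy calculation below (my own example; the daily scores and the tenfold salience weight are invented), a year of mostly decent days turns subjectively bleak once a single vivid disaster is overweighted.

```python
# A year of mixed days, each scored from -5 (awful) to +5 (great);
# the scores are invented for illustration, with one vivid disaster at the end.
days = [2, 1, 3, -1, 2, 4, 1, -2, 3, 2, 1, -5]

fair_view = sum(days) / len(days)

# Salience-weighted view: the single worst event gets, say, tenfold weight.
SALIENCE = 10
weights = [SALIENCE if d == min(days) else 1 for d in days]
salient_view = sum(w * d for w, d in zip(weights, days)) / sum(weights)

print(f"equal weighting:   {fair_view:+.2f}")    # about +0.9: a modestly good year
print(f"salience-weighted: {salient_view:+.2f}") # about -1.6: the disaster dominates
```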
I don’t mean to say that depression (or any disorder) is purely a byproduct of limitations in our abilities to objectively evaluate data, but the clumsy mechanics of our klugey mind very likely lay some of the shaky groundwork.
If disorders extend from fault lines, they certainly move beyond them too. To the extent that genes clearly play a role in mental disorders, evolution is in some way — adaptively or otherwise — implicated. But our mental fault lines sometimes give rise to earthquakes, at other times only to tiny temblors, scarcely felt. Evolution, however haphazard, can’t possibly be the whole story. Most common mental disorders seem to depend on a genetic component, shaped by evolution — but also on environmental causes that are not well understood. If one identical twin has, say, schizophrenia, the other is considerably more likely than average to have it too, but the so-called “concordance” rate, the chance that one twin will have the disorder if the other does, is only about 50 percent. For that reason alone it would clearly be overreaching to ascribe every aspect of mental illness to the idiosyncrasies of evolution.
But at the same time, it seems safe to say that no intelligent and compassionate designer would have built the human mind to be quite as vulnerable as it is. Our mental fragility provides yet another reason to doubt that we are the product of deliberate design rather than chance and evolution.
Which brings us to one last question, perhaps the most important of all: if the mind is a kluge, is there anything we can do about it?