THE TELL-TALE BRAIN
ALSO BY
V. S. RAMACHANDRAN
A Brief Tour of Human Consciousness
Phantoms in the Brain
The
TELL-TALE BRAIN
A Neuroscientist’s Quest for What Makes Us Human
V. S. RAMACHANDRAN
W. W. NORTON & COMPANY
NEW YORK LONDON
Copyright © 2011 by V. S. Ramachandran
All rights reserved
Figure 7.1: Illustration from Animal Architecture by Karl von Frisch and Otto von Frisch, illustrations copyright © 1974 by Turid Holldobler, reprinted by permission of Harcourt, Inc.
For information about permission to reproduce selections from this book, write to Permissions, W. W. Norton & Company, Inc., 500 Fifth Avenue, New York, NY 10110
Library of Congress Cataloging-in-Publication Data
Ramachandran, V. S.
The tell-tale brain: a neuroscientist’s quest for what makes us human/
V. S. Ramachandran.—1st ed.
p. cm.
Includes bibliographical references.
ISBN: 978-0-393-08058-2
1. Neurosciences—Popular works. 2. Neurology—Popular works. 3. Brain—Popular works. I. Title.
RC351.A45 2011
616.8—dc22
2010044913
W. W. Norton & Company, Inc.
500 Fifth Avenue, New York, N.Y. 10110
www.wwnorton.com
W. W. Norton & Company Ltd.
Castle House, 75/76 Wells Street, London W1T 3QT
For my mother, V. S. Meenakshi, and
my father, V. M. Subramanian
For Jaya Krishnan, Mani, and Diane
And for my ancestral sage Bharadhwaja,
who brought medicine down from the gods to mortals
CONTENTS
PREFACE
ACKNOWLEDGMENTS
INTRODUCTION NO MERE APE
CHAPTER 1 PHANTOM LIMBS AND PLASTIC BRAINS
CHAPTER 2 SEEING AND KNOWING
CHAPTER 3 LOUD COLORS AND HOT BABES: SYNESTHESIA
CHAPTER 4 THE NEURONS THAT SHAPED CIVILIZATION
CHAPTER 5 WHERE IS STEVEN? THE RIDDLE OF AUTISM
CHAPTER 6 THE POWER OF BABBLE: THE EVOLUTION OF LANGUAGE
CHAPTER 7 BEAUTY AND THE BRAIN: THE EMERGENCE OF AESTHETICS
CHAPTER 8 THE ARTFUL BRAIN: UNIVERSAL LAWS
CHAPTER 9 AN APE WITH A SOUL: HOW INTROSPECTION EVOLVED
EPILOGUE
GLOSSARY
NOTES
BIBLIOGRAPHY
ILLUSTRATION CREDITS
PREFACE
There is not, within the wide range of philosophical inquiry, a subject more intensely interesting to all who thirst for knowledge, than the precise nature of that important mental superiority which elevates the human being above the brute
—EDWARD BLYTH
FOR THE PAST QUARTER CENTURY I HAVE HAD THE MARVELOUS privilege of being able to work in the emerging field of cognitive neuroscience. This book is a distillation of a large chunk of my life’s work, which has been to unravel—strand by elusive strand—the mysterious connections between brain, mind, and body. In the chapters ahead I recount my investigations of various aspects of our inner mental life that we are naturally curious about. How do we perceive the world? What is the so-called mind-body connection? What determines your sexual identity? What is consciousness? What goes wrong in autism? How can we account for all of those mysterious faculties that are so quintessentially human, such as art, language, metaphor, creativity, self-awareness, and even religious sensibilities? As a scientist I am driven by an intense curiosity to learn how the brain of an ape—an ape!—managed to evolve such a godlike array of mental abilities.
My approach to these questions has been to study patients with damage or genetic quirks in different parts of their brains that produce bizarre effects on their minds or behavior. Over the years I have worked with hundreds of patients afflicted (though some feel they are blessed) with a great diversity of unusual and curious neurological disorders. For example, people who “see” musical tones or “taste” the textures of everything they touch, or the patient who experiences himself leaving his body and viewing it from above near the ceiling. In this book I describe what I have learned from these cases. Disorders like these are always baffling at first, but thanks to the magic of the scientific method we can render them comprehensible by doing the right experiments. In recounting each case I will take you through the same step-by-step reasoning—occasionally navigating the gaps with wild intuitive hunches—that I went through in my own mind as I puzzled over how to render it explicable. Often when a clinical mystery is solved, the explanation reveals something new about how the normal, healthy brain works, and yields unexpected insights into some of our most cherished mental faculties. I hope that you, the reader, will find these journeys as interesting as I did.
Readers who have assiduously followed my whole oeuvre over the years will recognize some of the case histories that I presented in my previous books, Phantoms in the Brain and A Brief Tour of Human Consciousness. These same readers will be pleased to see that I have new things to say about even my earlier findings and observations. Brain science has advanced at an astonishing pace over the past fifteen years, lending fresh perspectives on—well, just about everything. After decades of floundering in the shadow of the “hard” sciences, the age of neuroscience has truly dawned, and this rapid progress has directed and enriched my own work.
The past two hundred years saw breathtaking progress in many areas of science. In physics, just when the late nineteenth-century intelligentsia were declaring that physical theory was all but complete, Einstein showed us that space and time were infinitely stranger than anything formerly dreamed of in our philosophy, and Heisenberg pointed out that at the subatomic level even our most basic notions of cause and effect break down. As soon as we moved past our dismay, we were rewarded by the revelation of black holes, quantum entanglement, and a hundred other mysteries that will keep stoking our sense of wonder for centuries to come. Who would have thought the universe is made up of strings vibrating in tune with “God’s music”? Similar lists can be made for discoveries in other fields. Cosmology gave us the expanding universe, dark matter, and jaw-dropping vistas of endless billions of galaxies. Chemistry explained the world using the periodic table of the elements and gave us plastics and a cornucopia of wonder drugs. Mathematics gave us computers—although many “pure” mathematicians would rather not see their discipline sullied by such practical uses. In biology, the anatomy and physiology of the body were worked out in exquisite detail, and the mechanisms that drive evolution finally started to become clear. Diseases that had literally plagued humankind since the dawn of history were at last understood for what they really were (as opposed to, say, acts of witchcraft or divine retribution). Revolutions occurred in surgery, pharmacology, and public health, and human life spans in the developed world doubled in the space of just four or five generations. The ultimate revolution was the deciphering of the genetic code in the 1950s, which marks the birth of modern biology.
By comparison, the sciences of the mind—psychiatry, neurology, psychology—languished for centuries. Indeed, until the last quarter of the twentieth century, rigorous theories of perception, emotion, cognition, and intelligence were nowhere to be found (one notable exception being color vision). For most of the twentieth century, all we had to offer in the way of explaining human behavior was two theoretical edifices—Freudianism and behaviorism—both of which would be dramatically eclipsed in the 1980s and 1990s, when neuroscience finally managed to advance beyond the Bronze Age. In historical terms that isn’t a very long time. Compared with physics and chemistry, neuroscience is still a young upstart. But progress is progress, and what a period of progress it has been! From genes to cells to circuits to cognition, the depth and breadth of today’s neuroscience—however far short of an eventual Grand Unified Theory it may be—is light-years beyond where it was when I started working in the field. In the last decade we have even seen neuroscience becoming self-confident enough to start offering ideas to disciplines that have traditionally been claimed by the humanities. So we now for instance have neuroeconomics, neuromarketing, neuroarchitecture, neuroarcheology, neurolaw, neuropolitics, neuroesthetics (see Chapters 4 and 8), and even neurotheology. Some of these are just neurohype, but on the whole they are making real and much-needed contributions to many fields.
As heady as our progress has been, we need to stay completely honest with ourselves and acknowledge that we have only discovered a tiny fraction of what there is to know about the human brain. But the modest amount that we have discovered makes for a story more exciting than any Sherlock Holmes novel. I feel certain that as progress continues through the coming decades, the conceptual twists and technological turns we are in for are going to be at least as mind bending, at least as intuition shaking, and as simultaneously humbling and exalting to the human spirit as the conceptual revolutions that upended classical physics a century ago. The adage that fact is stranger than fiction seems to be especially true for the workings of the brain. In this book I hope I can convey at least some of the wonder and awe that my colleagues and I have felt over the years as we have patiently peeled back the layers of the mind-brain mystery. Hopefully it will kindle your interest in what the pioneering neurosurgeon Wilder Penfield called “the organ of destiny” and Woody Allen, in a less reverential mood, referred to as man’s “second favorite organ.”
Overview
Although this book covers a wide spectrum of topics, you will notice a few important themes running through all of them. One is that humans are truly unique and special, not “just” another species of primate. I still find it a little bit surprising that this position needs as much defense as it does—and not just against the ravings of antievolutionists, but against no small number of my colleagues who seem comfortable stating that we are “just apes” in a casual, dismissive tone that seems to revel in our lowliness. I sometimes wonder: Is this perhaps the secular humanists’ version of original sin?
Another common thread is a pervasive evolutionary perspective. It is impossible to understand how the brain works without also understanding how it evolved. As the great biologist Theodosius Dobzhansky said, “Nothing in biology makes sense except in the light of evolution.” This stands in marked contrast to most other reverse-engineering problems. For example when the great English mathematician Alan Turing cracked the code of the Nazis’ Enigma machine—a device used to encrypt secret messages—he didn’t need to know anything about the research and development history of the device. He didn’t need to know anything about the prototypes and earlier product models. All he needed was one working sample of the machine, a notepad, and his own brilliant brain. But in biological systems there is a deep unity between structure, function, and origin. You cannot make very much progress understanding any one of these unless you are also paying close attention to the other two.
You will see me arguing that many of our unique mental traits seem to have evolved through the novel deployment of brain structures that originally evolved for other reasons. This happens all the time in evolution. Feathers evolved from scales whose original role was insulation rather than flight. The wings of bats and pterodactyls are modifications of forelimbs originally designed for walking. Our lungs developed from the swim bladders of fish which evolved for buoyancy control. The opportunistic, “happenstantial” nature of evolution has been championed by many authors, most notably Stephen Jay Gould in his famous essays on natural history. I argue that the same principle applies with even greater force to the evolution of the human brain. Evolution found ways to radically repurpose many functions of the ape brain to create entirely new functions. Some of them—language comes to mind—are so powerful that I would go so far as to argue they have produced a species that transcends apehood to the same degree by which life transcends mundane chemistry and physics.
And so this book is my modest contribution to the grand attempt to crack the code of the human brain, with its myriad connections and modules that make it infinitely more enigmatic than any Enigma machine. The Introduction offers perspectives and history on the uniqueness of the human mind, and also provides a quick primer on the basic anatomy of the human brain. Drawing on my early experiments with the phantom limbs experienced by many amputees, Chapter 1 highlights the human brain’s amazing capacity for change and reveals how a more expanded form of plasticity may have shaped the course of our evolutionary and cultural development. Chapter 2 explains how the brain processes incoming sensory information, visual information in particular. Even here, my focus is on human uniqueness: Although our brains employ the same basic sensory-processing mechanisms as those of other mammals, we have taken these mechanisms to a new level. Chapter 3 deals with an intriguing phenomenon called synesthesia, a strange blending of the senses that some people experience as a result of unusual brain wiring. Synesthesia opens a window into the genes and brain connectivity that make some people especially creative, and may hold clues about what makes us such a profoundly creative species to begin with.
The next triad of chapters investigates a type of nerve cell that I argue is especially crucial in making us human. Chapter 4 introduces these special cells, called mirror neurons, which lie at the heart of our ability to adopt each other’s point of view and empathize with one another. Human mirror neurons achieve a level of sophistication that far surpasses that of any lower primate, and appear to be the evolutionary key to our attainment of full-fledged culture. Chapter 5 explores how problems with the mirror-neuron system may underlie autism, a developmental disorder characterized by extreme mental aloneness and social detachment. Chapter 6 explores how mirror neurons may have also played a role in humanity’s crowning achievement, language. (More technically, protolanguage, which is language minus syntax.)
Chapters 7 and 8 move on to our species’ unique sensibilities about beauty. I suggest that there are laws of aesthetics that are universal, cutting across cultural and even species boundaries. On the other hand, Art with a capital A is probably unique to humans.
In the final chapter I take a stab at the most challenging problem of all, the nature of self-awareness, which is undoubtedly unique to humans. I don’t pretend to have solved the problem, but I will share the intriguing insights that I have managed to glean over the years based on some truly remarkable syndromes that occupy the twilight zone between psychiatry and neurology, for example, people who leave their bodies temporarily, see God during seizures, or even deny that they exist. How can someone deny his own existence? Doesn’t the denial itself imply existence? Can he ever escape from this Gödelian nightmare? Neuropsychiatry is full of such paradoxes, which cast their spell on me when I wandered the hospital corridors as a medical student in my early twenties. I could see that these patients’ troubles, deeply saddening as they were, were also rich troves of insight into the marvelously unique human ability to apprehend one’s own existence.
Like my previous books, The Tell-Tale Brain is written in a conversational style for a general audience. I presume some degree of interest in science and curiosity about human nature, but I do not presume any sort of formal scientific background or even familiarity with my previous works. I hope this book proves instructive and inspiring to students of all levels and backgrounds, to colleagues in other disciplines, and to lay readers with no personal or professional stake in these topics. Thus in writing this book I faced the standard challenge of popularization, which is to tread the fine line between simplification and accuracy. Oversimplification can draw ire from hard-nosed colleagues and, worse, can make readers feel like they are being talked down to. On the other hand, too much detail can be off-putting to nonspecialists. The casual reader wants a thought-provoking guided tour of an unfamiliar subject—not a treatise, not a tome. I have done my best to strike the right balance.
Speaking of accuracy, let me be the first to point out that some of the ideas I present in this book are, shall we say, on the speculative side. Many of the chapters rest on solid foundations, such as my work on phantom limbs, visual perception, synesthesia, and the Capgras delusion. But I also tackle a few elusive and less well-charted topics, such as the origins of art and the nature of self-awareness. In such cases I have let educated guesswork and intuition steer my thinking wherever solid empirical data are spotty. This is nothing to be ashamed of: Every virgin area of scientific inquiry must first be explored in this way. It is a fundamental element of the scientific process that when data are scarce or sketchy and existing theories are anemic, scientists must brainstorm. We need to roll out our best hypotheses, hunches, and hare-brained, half-baked intuitions, and then rack our brains for ways to test them. You see this all the time in the history of science. For instance, one of the earliest models of the atom likened it to plum pudding, with electrons nested like plums in the thick “batter” of the atom. A few decades later physicists were thinking of atoms as miniature solar systems, with orderly electrons that orbit the nucleus like planets around a star. Each of these models was useful, and each got us a little bit closer to the final (or at least, the current) truth. So it goes. In my own field my colleagues and I are making our best effort to advance our understanding of some truly mysterious and hard-to-pin-down faculties. As the biologist Peter Medawar pointed out, “All good science emerges from an imaginative conception of what might be true.” I realize, however, that in spite of this disclaimer I will probably annoy at least some of my colleagues. But as Lord Reith, the first director-general of the BBC, once pointed out, “There are some people whom it is one’s duty to annoy.”
Boyhood Seductions
“You know my methods, Watson,” says Sherlock Holmes before explaining how he has found the vital clue. And so before we journey any further into the mysteries of the human brain, I feel that I should outline the methods behind my approach. It is above all a wide-ranging, multidisciplinary approach, driven by curiosity and a relentless question: What if? Although my current interest is neurology, my love affair with science dates back to my boyhood in Chennai, India. I was perpetually fascinated by natural phenomena, and my first passion was chemistry. I was enchanted by the idea that the whole universe is based on simple interactions between elements in a finite list. Later I found myself drawn to biology, with all its frustrating yet fascinating complexities. When I was twelve, I remember reading about axolotls, which are basically a species of salamander that has evolved to remain permanently in the aquatic larval stage. They manage to keep their gills (rather than trading them in for lungs, like other salamanders or frogs) by shutting down metamorphosis and becoming sexually mature in the water. I was completely flabbergasted when I read that by simply giving these creatures the “metamorphosis hormone” (thyroid extract) you could make the axolotl revert into the extinct, land-dwelling, gill-less adult ancestor that it had evolved from. You could go back in time, resurrecting a prehistoric animal that no longer exists anywhere on Earth. I also knew that for some mysterious reason adult frogs don’t regenerate amputated legs but their tadpoles do. My curiosity took me one step further, to the question of whether an axolotl—which is, after all, an “adult tadpole”—would retain its ability to regenerate a lost leg just as a modern frog tadpole does. And how many other axolotl-like beings exist on Earth, I wondered, that could be restored to their ancestral forms by simply giving them hormones? Could humans—who are after all apes that have evolved to retain many juvenile qualities—be made to revert to an ancestral form, perhaps something resembling Homo erectus, using the appropriate cocktail of hormones? My mind reeled out a stream of questions and speculations, and I was hooked on biology forever.
I found mysteries and possibilities everywhere. When I was eighteen, I read a footnote in some obscure medical tome that when a person with a sarcoma, a malignant cancer that affects soft tissues, develops high fever from an infection, the cancer sometimes goes into complete remission. Cancer shrinking as a result of fever? Why? What could explain it, and might it just possibly lead to a practical cancer therapy?1 I was enthralled by the possibility of such odd, unexpected connections, and I learned an important lesson: Never take the obvious for granted. Once upon a time, it was so obvious that a four-pound rock would plummet earthward twice as fast as a two-pound rock that no one ever bothered to test it. That is, until Galileo Galilei came along and took ten minutes to perform an elegantly simple experiment that yielded a counterintuitive result and changed the course of history.
I had a boyhood infatuation with botany too. I remember wondering how I might get ahold of my own Venus flytrap, which Darwin had called “the most wonderful plant in the world.” He had shown that it snaps shut when you touch two hairs inside its trap in rapid succession. The double trigger makes it much more likely that it will be responding to the motions of insects as opposed to inanimate detritus falling or drifting in at random. Once it has clamped down on its prey, the plant stays shut and secretes digestive enzymes, but only if it has caught actual food. I was curious. What defines food? Will it stay shut for amino acids? Fatty acids? Which acids? Starch? Pure sugar? Saccharin? How sophisticated are the food detectors in its digestive system? Too bad I never did manage to acquire one as a pet at that time.
My mother actively encouraged my early interest in science, bringing me zoological specimens from all over the world. I remember particularly well the time she gave me a tiny dried seahorse. My father also approved of my obsessions. He bought me a Carl Zeiss research microscope when I was still in my early teens. Few things could match the joy of looking at paramecia and volvox through a high-power objective lens. (Volvox, I learned, is the only biological creature on the planet that actually has a wheel.) Later, when I headed off to university, I told my father my heart was set on basic science. Nothing else stimulated my mind half as much. Wise man that he was, he persuaded me to study medicine. “You can become a second-rate doctor and still make a decent living,” he said, “but you can’t be a second-rate scientist; it’s an oxymoron.” He pointed out that if I studied medicine I could play it safe, keeping both doors open and deciding after graduation whether I was cut out for research or not.
All my arcane boyhood pursuits had what I consider to be a pleasantly antiquated, Victorian flavor. The Victorian era ended over a century ago (technically in 1901) and might seem remote from twenty-first-century neuroscience. But I feel compelled to mention my early romance with nineteenth-century science because it was a formative influence on my style of thinking and conducting research.
Simply put, this “style” emphasizes conceptually simple and easy-to-do experiments. As a student I read voraciously, not only about modern biology but also about the history of science. I remember reading about Michael Faraday, the lower-class, self-educated man who discovered the principle of electromagnetism. In the early 1800s he placed a bar magnet behind a sheet of paper and threw iron filings on the sheet. The filings instantly aligned themselves into arcing lines. He had rendered the magnetic field visible! This was about as direct a demonstration as possible that such fields are real and not just mathematical abstractions. Next Faraday moved a bar magnet to and fro through a coil of copper wire, and lo and behold, an electric current started running through the coil. He had demonstrated a link between two entirely separate areas of physics: magnetism and electricity. This paved the way not only for practical applications—such as hydroelectric power, electric motors, and electromagnets—but also for the deep theoretical insights of James Clerk Maxwell. With nothing more than bar magnets, paper, and copper wire, Faraday had ushered in a new era in physics.
I remember being struck by the simplicity and elegance of these experiments. Any schoolboy or -girl can repeat them. It was not unlike Galileo dropping his rocks, or Newton using two prisms to explore the nature of light. For better or worse, stories like these made me a technophobe early in life. I still find it hard to use an iPhone, but my technophobia has served me well in other respects. Some colleagues have warned me that this phobia might have been okay in the nineteenth century when biology and physics were in their infancy, but not in this era of “big science,” in which major advances can only be made by large teams employing high-tech machines. I disagree. And even if it is partly true, “small science” is much more fun and can often turn up big discoveries. It still tickles me that my early experiments with phantom limbs (see Chapter 1) required nothing more than Q-tips, glasses of warm and cold water, and ordinary mirrors. Hippocrates, Sushruta, my ancestral sage Bharadwaja, or any other physicians between ancient times and the present could have performed these same basic experiments. Yet no one did.
Or consider Barry Marshall’s research showing that ulcers are caused by bacteria—not acid or stress, as every doctor “knew.” In a heroic experiment to convince skeptics of his theory, he actually swallowed a culture of the bacterium Helicobacter pylori and showed that his stomach lining became studded with painful ulcers, which he promptly cured by consuming antibiotics. He and others later went on to show that many other disorders, including stomach cancer and even heart attacks, might be triggered by microorganisms. In just a few weeks, using materials and methods that had been available for decades, Dr. Marshall had ushered in a whole new era of medicine. Two decades later he won a Nobel Prize.
My preference for low-tech methods has both strengths and drawbacks, of course. I enjoy it—partly because I’m lazy—but it isn’t everyone’s cup of tea. And this is a good thing. Science needs a variety of styles and approaches. Most individual researchers need to specialize, but the scientific enterprise as a whole is made more robust when scientists march to different drumbeats. Homogeneity breeds weakness: theoretical blind spots, stale paradigms, an echo-chamber mentality, and cults of personality. A diverse dramatis personae is a powerful tonic against these ailments. Science benefits from its inclusion of the abstraction-addled, absent-minded professors, the control-freak obsessives, the cantankerous bean-counting statistics junkies, the congenitally contrarian devil’s advocates, the hard-nosed data-oriented literalists, and the starry-eyed romantics who embark on high-risk, high-payoff ventures, stumbling frequently along the way. If every scientist were like me, there would be no one to clear the brush or demand periodic reality checks. But if every scientist were a brush-clearing, never-stray-beyond-established-fact type, science would advance at a snail’s pace and would have a hard time unpainting itself out of corners. Getting trapped in narrow cul-de-sac specializations and “clubs” whose membership is open only to those who congratulate and fund each other is an occupational hazard in modern science.
When I say I prefer Q-tips and mirrors to brain scanners and gene sequencers, I don’t mean to give you the impression that I eschew technology entirely. (Just think of doing biology without a microscope!) I may be a technophobe, but I’m no Luddite. My point is that science should be question driven, not methodology driven. When your department has spent millions of dollars on a state-of-the-art liquid-helium-cooled brain-imaging machine, you come under pressure to use it all the time. As the old saying goes, “When the only tool you have is a hammer, everything starts to look like a nail.” But I have nothing against high-tech brain scanners (nor against hammers). Indeed, there is so much brain imaging going on these days that some significant discoveries are bound to be made, if only by accident. One could justifiably argue that the modern toolbox of state-of-the-art gizmos has a vital and indispensable place in research. And indeed, my low-tech-leaning colleagues and I often do take advantage of brain imaging, but only to test specific hypotheses. Sometimes it works, sometimes it doesn’t, but we are always grateful to have the high technology available—if we feel the need.
ACKNOWLEDGMENTS
ALTHOUGH IT IS LARGELY A PERSONAL ODYSSEY, THIS BOOK RELIES heavily on the work of many of my colleagues who have revolutionized the field in ways we could not have imagined even just a few years ago. I cannot overstate the extent to which I have benefited from reading their books. I will mention just a few of them here: Joe LeDoux, Oliver Sacks, Francis Crick, Richard Dawkins, Stephen Jay Gould, Dan Dennett, Pat Churchland, Gerry Edelman, Eric Kandel, Nick Humphrey, Tony Damasio, Marvin Minsky, Stanislas Dehaene. If I have seen further, it is by standing on the shoulders of these giants. Some of these books resulted from the foresight of two enlightened agents—John Brockman and Katinka Matson—who have created a new scientific literacy in America and the world beyond. They have successfully reignited the magic and awe of science in the age of Twitter, Facebook, YouTube, sound-bite news, and reality TV—an age when the hard-won values of the Enlightenment are sadly in decline.
Angela von der Lippe, my editor, suggested major reorganization of chapters and provided valuable feedback throughout every stage of revision. Her suggestions improved the clarity of presentation enormously.
Special thanks to four people who have had a direct influence on my scientific career: Richard Gregory, Francis Crick, John D. Pettigrew, and Oliver Sacks.
I would also like to thank the many people who either goaded me on to pursue medicine and science as a career or influenced my thinking over the years. As I intimated earlier, I would not be where I am were it not for my mother and father. When my father was convincing me to go into medicine, I received similar advice from Drs. Rama Mani and M. K. Mani. I have never once regretted letting them talk me into it. As I often tell my students, medicine gives you a certain breadth of vision while at the same time imparting an intensely pragmatic attitude. If your theory is right, your patient gets better. If your theory is wrong—no matter how elegant or convincing it may be—she gets worse or dies. There is no better test of whether you are on the right track or not. And this no-nonsense attitude then spills over into your research as well.
I also owe an intellectual debt to my brother V. S. Ravi, whose vast knowledge of English and Telugu literature (especially Shakespeare and Thyagaraja) is unsurpassed. When I had just entered medical school (premed), he would often read me passages from Shakespeare and Omar Khayyam’s Rubaiyat, which had a deep impact on my mental development. I remember hearing him quote Macbeth’s famous “sound and fury” soliloquy and thinking, “Wow, that pretty much says it all.” It impressed on me the importance of economy of expression, whether in literature or in science.
I thank Matthew Blakeslee, who did a superb job in helping edit the book. Over fifteen years ago, as my student, he also assisted me in constructing the very first crude but effective prototype of the “mirror box” which inspired the subsequent construction of elegant, ivory-inlaid mahogany ones at Oxford (and which are now available commercially, although I have no personal financial stake in them). Various drug companies and philanthropic organizations have distributed thousands of such boxes to war veterans from Iraq and amputees in Haiti.
I also owe a debt of gratitude to the many patients who cooperated with me over the years. Many of them were in depressing situations, obviously, but most of them were unselfishly willing to help advance basic science in whatever way they could. Without them this book could not have been written. Naturally, I care about protecting their privacy. In the interest of confidentiality, all names, dates, and places, and in some instances the circumstances surrounding the admission of the patient, have been disguised. The conversations with patients (such as those with language problems) are literal transcripts of videotapes, except in a few cases where I had to re-create our exchanges based on memory. In one case (“John,” in Chapter 2, who developed embolic stroke originating from veins around an inflamed appendix) I have described appendicitis as it usually presents itself since notes on this particular case were unavailable. And the conversation with this patient is an edited summary of the conversation as recounted by the physician who originally saw him. In all cases the key symptoms and signs and history that are relevant to the neurological aspect of patients’ problems are presented as accurately as possible. But other aspects have been changed—for example, a patient who is fifty rather than fifty-five may have had an embolism originating in the heart rather than leg—so that even a close friend or relative would be unable to recognize the patient from the description.
I turn now to thank friends and colleagues with whom I have had productive conversations over the years. I list them in alphabetical order: Krishnaswami Alladi, John Allman, Eric Altschuler, Stuart Anstis, Carrie Armel, Shai Azoulai, Horace Barlow, Mary Beebe, Roger Bingham, Colin Blakemore, Sandy Blakeslee, Geoff Boynton, Oliver Braddick, David Brang, Mike Calford, Fergus Campbell, Pat Cavanagh, Pat and Paul Churchland, Steve Cobb, Francis Crick, Tony and Hanna Damasio, Nikki de Saint Phalle, Anthony Deutsch, Diana Deutsch, Paul Drake, Gerry Edelman, Jeff Elman, Richard Friedberg, Sir Alan Gilchrist, Beatrice Golomb, Al Gore (the “real” president), Richard Gregory, Mushirul Hasan, Afrei Hesam, Bill Hirstein, Mikhenan (“Mikhey”) Horvath, Ed Hubbard, David Hubel, Nick Humphrey, Mike Hyson, Sudarshan Iyengar, Mumtaz Jahan, Jon Kaas, Eric Kandel, Dorothy Kleffner, E. S. Krishnamoorthy, Ranjit Kumar, Leah Levi, Steve Link, Rama Mani, Paul McGeoch, Don McLeod, Sarada Menon, Mike Merzenich, Ranjit Nair, Ken Nakayama, Lindsay Oberman, Ingrid Olson, Malini Parthasarathy, Hal Pashler, David Peterzell, Jack Pettigrew, Jaime Pineda, Dan Plummer, Alladi Prabhakar, David Presti, N. Ram and N. Ravi (editors of The Hindu), Alladi Ramakrishnan, V. Madhusudhan Rao, Sushila Ravindranath, Beatrice Ring, Bill Rosar, Oliver Sacks, Terry Sejnowski, Chetan Shah, Naidu (“Spencer”) Sitaram, John Smythies, Allan Snyder, Larry Squire, Krishnamoorthy Srinivas, A. V. Srinivasan, Krishnan Sriram, Subramaniam Sriram, Lance Stone, Somtow (“Cookie”) Sucharitkul, K. V. Thiruvengadam, Chris Tyler, Claude Valenti, Ajit Varki, Ananda Veerasurya, Nairobi Venkataraman, Alladi Venkatesh, T. R. Vidyasagar, David Whitteridge, Ben Williams, Lisa Williams, Chris Wills, Piotr Winkielman, and John Wixted.
Thanks to Elizabeth Seckel and Petra Ostermuencher for their help.
I also thank Diane, Mani, and Jaya, who are an endless source of delight and inspiration. The Nature paper they published with me on flounder camouflage made a huge splash in the ichthyology world.
Julia Kindy Langley kindled my passion for the science of art.
Last but not least, I am grateful to the National Institutes of Health for funding much of the research reported in the book, and to private donors and patrons: Abe Pollin, Herb Lurie, Dick Geckler, and Charlie Robins.
THE TELL-TALE BRAIN
INTRODUCTION
No Mere Ape
Now I am quite sure that if we had these three creatures fossilized or preserved in spirits for comparison and were quite unprejudiced judges, we should at once admit that there is very little greater interval as animals between the gorilla and the man than exists between the gorilla and the baboon.
—THOMAS HENRY HUXLEY,
lecturing at the Royal
Institution, London
“I know, my dear Watson, that you share my love of all that is bizarre and outside the conventions and humdrum routine of everyday life.”
—SHERLOCK HOLMES
IS MAN AN APE OR AN ANGEL (AS BENJAMIN DISRAELI ASKED IN A famous debate about Darwin’s theory of evolution)? Are we merely chimps with a software upgrade? Or are we in some true sense special, a species that transcends the mindless fluxions of chemistry and instinct? Many scientists, beginning with Darwin himself, have argued the former: that human mental abilities are merely elaborations of faculties that are ultimately of the same kind we see in other apes. This was a radical and controversial proposal in the nineteenth century—some people are still not over it—but ever since Darwin published his world-shattering treatise on the theory of evolution, the case for man’s primate origins has been bolstered a thousandfold. Today it is impossible to seriously refute this point: We are anatomically, neurologically, genetically, physiologically apes. Anyone who has ever been struck by the uncanny near-humanness of the great apes at the zoo has felt the truth of this.
I find it odd how some people are so ardently drawn to either-or dichotomies. “Are apes self-aware or are they automata?” “Is life meaningful or is it meaningless?” “Are humans ‘just’ animals or are we exalted?” As a scientist I am perfectly comfortable with settling on categorical conclusions—when it makes sense. But with many of these supposedly urgent metaphysical dilemmas, I must admit I don’t see the conflict. For instance, why can’t we be a branch of the animal kingdom and a wholly unique and gloriously novel phenomenon in the universe?
I also find it odd how people so often slip words like “merely” and “nothing but” into statements about our origins. Humans are apes. So too we are mammals. We are vertebrates. We are pulpy, throbbing colonies of tens of trillions of cells. We are all of these things, but we are not “merely” these things. And we are, in addition to all these things, something unique, something unprecedented, something transcendent. We are something truly new under the sun, with uncharted and perhaps limitless potential. We are the first and only species whose fate has rested in its own hands, and not just in the hands of chemistry and instinct. On the great Darwinian stage we call Earth, I would argue there has not been an upheaval as big as us since the origin of life itself. When I think about what we are and what we may yet achieve, I can’t see any place for snide little “merelies.”
Any ape can reach for a banana, but only humans can reach for the stars. Apes live, contend, breed, and die in forests—end of story. Humans write, investigate, create, and quest. We splice genes, split atoms, launch rockets. We peer upward into the heart of the Big Bang and delve deeply into the digits of pi. Perhaps most remarkably of all, we gaze inward, piecing together the puzzle of our own unique and marvelous brain. It makes the mind reel. How can a three-pound mass of jelly that you can hold in your palm imagine angels, contemplate the meaning of infinity, and even question its own place in the cosmos? Especially awe inspiring is the fact that any single brain, including yours, is made up of atoms that were forged in the hearts of countless, far-flung stars billions of years ago. These particles drifted for eons and light-years until gravity and chance brought them together here, now. These atoms now form a conglomerate—your brain—that can not only ponder the very stars that gave it birth but can also think about its own ability to think and wonder about its own ability to wonder. With the arrival of humans, it has been said, the universe has suddenly become conscious of itself. This, truly, is the greatest mystery of all.
It is difficult to talk about the brain without waxing lyrical. But how does one go about actually studying it? There are many methods, ranging from single-neuron studies to high-tech brain scanning to cross-species comparison. The methods I favor are unapologetically old-school. I generally see patients who have suffered brain lesions due to stroke, tumor, or head injury and as a result are experiencing disturbances in their perception and consciousness. I also sometimes meet people who do not appear brain damaged or impaired, yet report having wildly unusual perceptual or mental experiences. In either case, the procedure is the same: I interview them, observe their behavior, administer some simple tests, take a peek at their brains (when possible), and then come up with a hypothesis that bridges psychology and neurology—in other words, a hypothesis that connects strange behavior to what has gone wrong in the intricate wiring of the brain.1 A decent percentage of the time I am successful. And so, patient by patient, case by case, I gain a stream of fresh insights into how the human mind and brain work—and how they are inextricably linked. On the coattails of such discoveries I often get evolutionary insights as well, which bring us that much closer to understanding what makes our species unique.
Consider the following examples:
Whenever Susan looks at numbers, she sees each digit tinged with its own inherent hue. For example, 5 is red, 3 is blue. This condition, called synesthesia, is eight times more common in artists, poets, and novelists than in the general population, suggesting that it may be linked to creativity in some mysterious way. Could synesthesia be a neuropsychological fossil of sorts—a clue to understanding the evolutionary origins and nature of human creativity in general?
Humphrey has a phantom arm following an amputation. Phantom limbs are a common experience for amputees, but we noticed something unusual in Humphrey. Imagine his amazement when he merely watches me stroke and tap a student volunteer’s arm—and actually feels these tactile sensations in his phantom. When he watches the student fondle an ice cube, he feels the cold in his phantom fingers. When he watches her massage her own hand, he feels a “phantom massage” that relieves the painful cramp in his phantom hand! Where do his body, his phantom body, and a stranger’s body meld in his mind? What or where is his real sense of self?
A patient named Smith is undergoing neurosurgery at the University of Toronto. He is fully awake and conscious. His scalp has been perfused with a local anesthetic and his skull has been opened. The surgeon places an electrode in Smith’s anterior cingulate, a region near the front of the brain where many of the neurons respond to pain. And sure enough, the doctor is able to find a neuron that becomes active whenever Smith’s hand is poked with a needle. But the surgeon is astonished by what he sees next. The same neuron fires just as vigorously when Smith merely watches another patient being poked. It is as if the neuron (or the functional circuit of which it is a part) is empathizing with another person. A stranger’s pain becomes Smith’s pain, almost literally. Indian and Buddhist mystics assert that there is no essential difference between self and other, and that true enlightenment comes from the compassion that dissolves this barrier. I used to think this was just well-intentioned mumbo-jumbo, but here is a neuron that doesn’t know the difference between self and other. Are our brains uniquely hardwired for empathy and compassion?
When Jonathan is asked to imagine numbers, he always sees each number in a particular spatial location in front of him. All numbers from 1 to 60 are laid out sequentially on a virtual number line that is elaborately twisted in three-dimensional space, even doubling back on itself. Jonathan even claims that this twisted line helps him perform arithmetic. (Interestingly, Einstein often claimed to see numbers spatially.) What do cases like Jonathan’s tell us about our unique facility with numbers? Most of us have a vague tendency to imagine numbers from left to right, but why is Jonathan’s number line warped and twisted? As we shall see, this is a striking example of a neurological anomaly that makes no sense whatsoever except in evolutionary terms.
A patient in San Francisco becomes progressively demented, yet starts creating paintings that are hauntingly beautiful. Has his brain damage somehow unleashed a hidden talent? A world away, in Australia, a typical undergraduate volunteer named John is participating in an unusual experiment. He sits down in a chair and is fitted with a helmet that delivers magnetic pulses to his brain. Some of his head muscles twitch involuntarily from the induced current. More amazingly, John starts producing lovely drawings—something he claims he couldn’t do before. Where are these inner artists emerging from? Is it true that most of us “use only 10 percent of our brain”? Is there a Picasso, a Mozart, and a Srinivasa Ramanujan (a math prodigy) in all of us, waiting to be liberated? Has evolution suppressed our inner geniuses for a reason?
Until his stroke, Dr. Jackson was a prominent physician in Chula Vista, California. Afterward he is left partially paralyzed on his right side, but fortunately only a small part of his cortex, the brain’s seat of higher intelligence, has been damaged. His higher mental functions are largely intact: He can understand most of what is said to him and he can hold up his end of a conversation reasonably well. In the course of probing his mind with various simple tasks and questions, the big surprise comes when we ask him to explain a proverb, “All that glitters is not gold.”
“It means just because something is shiny and yellow doesn’t mean it’s gold, Doctor. It could be copper or some alloy.”
“Yes,” I say, “but is there a deeper meaning beyond that?”
“Yes,” he replies, “it means you have to be very careful when you go to buy jewelry; they often rip you off. One could measure the metal’s specific gravity, I suppose.”
Dr. Jackson has a disorder that I call “metaphor blindness.” Does it follow from this that the human brain has evolved a dedicated “metaphor center”?
Jason is a patient at a rehabilitation center in San Diego. He has been in a semicomatose state called akinetic mutism for several months when he is seen by my colleague Dr. Subramaniam Sriram. Jason is bedridden, unable to walk, recognize people, or interact with them—not even his parents—even though he is fully alert and often follows people around with his eyes. Yet if his father goes next door and phones him, Jason instantly becomes fully conscious, recognizes his dad, and converses with him. When his father returns to the room, Jason reverts at once to a zombie-like state. It is as if there are two Jasons trapped inside one body: the one connected to vision, who is alert but not conscious, and the one connected to hearing, who is alert and conscious. What might these eerie comings and goings of conscious personhood reveal about how the brain generates self-awareness?
These may sound like phantasmagorical short stories by the likes of Edgar Allan Poe or Philip K. Dick. Yet they are all true, and these are only a few of the cases you will encounter in this book. An intensive study of these people can not only help us figure out why their bizarre symptoms occur, but also help us understand the functions of the normal brain—yours and mine. Maybe someday we will even answer the most difficult question of all: How does the human brain give rise to consciousness? What or who is this “I” within me that illuminates one tiny corner of the universe, while the rest of the cosmos rolls on indifferent to every human concern? A question that comes perilously close to theology.
WHEN PONDERING OUR uniqueness, it is natural to wonder how close other species before us might have come to achieving our cognitive state of grace. Anthropologists have found that the hominin family tree branched many times in the past several million years. At various times numerous protohuman and human-like ape species thrived and roamed the earth, but for some reason our line is the only one that “made it.” What were the brains of those other hominins like? Did they perish because they didn’t stumble on the right combination of neural adaptations? All we have to go on now is the mute testimony of their fossils and their scattered stone tools. Sadly, we may never learn much about how they behaved or what their minds were like.
We stand a much better chance of solving the mystery of the relatively recently extinct Neanderthals, a cousin-species of ours, who were almost certainly within a proverbial stone’s throw of achieving full-blown humanhood. Though traditionally depicted as the archetypical brutish, slow-witted cave dweller, Homo neanderthalensis has been receiving a serious image makeover in recent years. Just like us they made art and jewelry, ate a rich and varied diet, and buried their dead. And evidence is mounting that their language was more complex than the stereotypical “cave man talk” gives them credit for. Nevertheless, around thirty thousand years ago they vanished from the earth. The reigning assumption has always been that the Neanderthals died out and humans thrived because humans were somehow superior: better language, better tools, better social organization, or something like that. But the matter is far from settled. Did we outcompete them? Did we murder them all? Did we—to borrow a phrase from the movie Braveheart—breed them out? Were we just plain lucky, and they unlucky? Could it as easily have been them instead of us who planted a flag on the moon? The Neanderthals’ extinction is recent enough that we have been able to recover actual bones (not just fossils), and along with them some samples of Neanderthal DNA. As genetic studies continue, we will assuredly learn more about the fine line that divided us.
And then of course there were the hobbits.
Far away on a remote island near Java there lived, not so long ago, a race of diminutive creatures—or should I say, people—who were just three feet tall. They were very close to human and yet, to the astonishment of the world, turned out to have been a different species who coexisted alongside us almost up until historical times. On the Connecticut-sized island of Flores they eked out a living hunting twenty-foot dragon-lizards, giant rats, and pygmy elephants. They manufactured miniature tools to wield with their tiny hands and apparently had enough planning skills and foresight to navigate the open seas. And yet incredibly, their brains were about one-third the size of a human’s brain, smaller than that of a chimp.2
If I were to give you this story as a script for a science fiction movie, you would probably reject it as too farfetched. It sounds like something straight out of H. G. Wells or Jules Verne. Yet remarkably, it happens to be true. Their discoverers entered them into the scientific record as Homo floresiensis, but many people refer to them by their nickname, hobbits. The bones are only about fifteen thousand years old, which implies that these strange human cousins lived side by side with our ancestors, perhaps as friends, perhaps as foes—we do not know. Nor again do we know why they vanished, although given our species’ dismal record as responsible stewards of nature, it’s a decent bet that we drove them to extinction. But many islands in Indonesia are still unexplored, and it is not inconceivable that an isolated pocket of them has survived somewhere. (One theory holds that the CIA has spotted them already but the information is being withheld until it is ruled out that they are hoarding weapons of mass destruction like blowpipes.)
The hobbits challenge all our preconceived notions about our supposed privileged status as Homo sapiens. If the hobbits had had the resources of the Eurasian continent at their disposal, might they have invented agriculture, civilization, the wheel, writing? Were they self-conscious? Did they have a moral sense? Were they aware of their mortality? Did they sing and dance? Or are these mental functions (and, ipso facto, their corresponding neural circuits) found only in humans? We still know precious little about the hobbits, but their similarities to and differences from humans might help us further understand what makes us different from the great apes and monkeys, and whether there was a quantum leap in our evolution or a gradual change. Indeed, getting ahold of some samples of hobbit DNA would be a discovery of far greater scientific import than any DNA recovery scenario à la Jurassic Park.
This question of our special status, which will reappear many times in this book, has a long and contentious history. It was a major preoccupation of intellectuals in Victorian times. The protagonists were some of the giants of nineteenth-century science, including Thomas Huxley, Richard Owen, and Alfred Russel Wallace. Even though Darwin started it all, he himself shunned controversy. But Huxley, a large man with piercing dark eyes and bushy eyebrows, was renowned for his pugnacity and wit and had no such compunctions. Unlike Darwin, he was outspoken about the implications of evolutionary theory for humans, earning him the epithet “Darwin’s bulldog.”
Huxley’s adversary, Owen, was convinced that humans were unique. The founding father of the science of comparative anatomy, Owen inspired the often-satirized stereotype of a paleontologist who tries to reconstruct an entire animal from a single bone. His brilliance was matched only by his arrogance. “He knows that he is superior to most men,” wrote Huxley, “and does not conceal that he knows.” Unlike Darwin, Owen was more impressed by the differences than by the similarities between animal groups. He was struck by the absence of living intermediate forms between species, of the kind you might expect to find if one species gradually evolved into another. No one saw elephants with one-foot trunks or giraffes with necks half as long as those of their modern counterparts. (The okapi, which have such necks, were discovered much later.) Observations like these, together with his strong religious views, led him to regard Darwin’s ideas as both implausible and heretical. He emphasized the huge gap between the mental abilities of apes and humans and pointed out (mistakenly) that the human brain had a unique anatomical structure called the “hippocampus minor,” which he said was entirely absent in apes.
Huxley challenged this view; his own dissections of ape brains turned up a perfectly good hippocampus minor. The two titans clashed over this for decades. The controversy occupied center stage in the Victorian press, creating the kind of media sensation that is reserved these days for the likes of Washington sex scandals. A parody of the hippocampus minor debate, published in Charles Kingsley’s children’s book The Water-Babies, captures the spirit of the times:
[Huxley] held very strange theories about a good many things. He declared that apes had hippopotamus majors [sic] in their brains just as men have. Which was a shocking thing to say; for, if it were so, what would become of the faith, hope, and charity of immortal millions? You may think that there are other more important differences between you and an ape, such as being able to speak, and make machines, and know right from wrong, and say your prayers, and other little matters of that kind; but that is a child’s fancy, my dear. Nothing is to be depended on but the great hippopotamus test. If you have a hippopotamus major in your brain, you are no ape, though you had four hands, no feet, and were more apish than the apes of all aperies.
Joining the fray was Bishop Samuel Wilberforce, a staunch creationist who often relied on Owen’s anatomical observations to challenge Darwin’s theory. The battle raged on for twenty years until, tragically, Wilberforce was thrown off a horse and died instantly when his head hit the pavement. It is said that Huxley was sipping his cognac at the Athenaeum in London when the news reached him. He wryly quipped to the reporter, “At long last the Bishop’s brain has come into contact with hard reality, and the result has been fatal.”
Modern biology has amply demonstrated that Owen was wrong: There is no hippocampus minor, no sudden discontinuity between apes and us. The view that we are special is generally thought to be held only by creationist zealots and religious fundamentalists. Yet I am prepared to defend the somewhat radical view that on this particular issue Owen was right after all—although for reasons entirely different from those he had in mind. Owen was correct in asserting that the human brain—unlike, say, the human liver or heart—is indeed unique and distinct from that of the ape by a huge gap. But this view is entirely compatible with Huxley and Darwin’s claim that our brain evolved piecemeal, sans divine intervention, over millions of years.
But if this is so, you may wonder, where does our uniqueness come from? As Shakespeare and Parmenides had already stated long before Darwin, nothing can come of nothing.
It is a common fallacy to assume that gradual, small changes can only engender gradual, incremental results. But this is linear thinking, which seems to be our default mode for thinking about the world. This may be due to the simple fact that most of the phenomena that are perceptible to humans, at everyday human scales of time and magnitude and within the limited scope of our naked senses, tend to follow linear trends. Two stones feel twice as heavy as one stone. It takes three times as much food to feed three times as many people. And so on. But outside of the sphere of practical human concerns, nature is full of nonlinear phenomena. Highly complex processes can emerge from deceptively simple rules or parts, and small changes in one underlying factor of a complex system can engender radical, qualitative shifts in other factors that depend on it.
Think of this very simple example: Imagine you have a block of ice in front of you and you are gradually warming it up: 20 degrees Fahrenheit…21 degrees…22 degrees…Most of the time, heating the ice up by one more degree doesn’t have any interesting effect: all you have that you didn’t have a minute ago is a slightly warmer block of ice. But then you come to 32 degrees Fahrenheit. As soon as you reach this critical temperature, you see an abrupt, dramatic change. The crystalline structure of the ice decoheres, and suddenly the water molecules start slipping and flowing around each other freely. Your frozen water has turned into liquid water, thanks to that one critical degree of heat energy. At that key point, incremental changes stopped having incremental effects, and precipitated a sudden qualitative change called a phase transition.
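If you like to see that kind of threshold logic laid bare, here is a minimal sketch in Python. It is my own toy illustration, not anything from the physics of water: every loop iteration adds the same single degree, yet only the iteration that crosses the critical temperature changes the system’s qualitative state.

```python
# A toy threshold model (purely illustrative): every step adds the same
# single degree, yet only the step that crosses the melting point
# changes the qualitative answer.
MELTING_POINT_F = 32

def state_of_water(temp_f):
    """Return the qualitative phase of water at a given Fahrenheit temperature."""
    return "solid ice" if temp_f < MELTING_POINT_F else "liquid water"

for temp in range(28, 37):            # warm the block one degree at a time
    print(f"{temp} F -> {state_of_water(temp)}")
# The output flips exactly once, at 32 F: a miniature phase transition.
```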
Nature is full of phase transitions. Frozen water to liquid water is one. Liquid water to gaseous water (steam) is another. But they are not confined to chemistry examples. They can occur in social systems, for example, where millions of individual decisions or attitudes can interact to rapidly shift the entire system into a new balance. Phase transitions are afoot during speculative bubbles, stock market crashes, and spontaneous traffic jams. On a more positive note, they were on display in the breakup of the Soviet Bloc and the exponential rise of the Internet.
I would even suggest that phase transitions may apply to human origins. Over the millions of years that led up to Homo sapiens, natural selection continued to tinker with the brains of our ancestors in the normal evolutionary fashion—which is to say, gradual and piecemeal: a dime-sized expansion of the cortex here, a 5 percent thickening of the fiber tract connecting two structures there, and so on for countless generations. With each new generation, the results of these slight neural improvements were apes who were slightly better at various things: slightly defter at wielding sticks and stones; slightly cleverer at social scheming, wheeling and dealing; slightly more foresightful about the behaviors of game or the portents of weather and season; slightly better at remembering the distant past and seeing connections to the present.
Then sometime about a hundred and fifty thousand years ago there was an explosive development of certain key brain structures and functions whose fortuitous combinations resulted in the mental abilities that make us special in the sense that I am arguing for. We went through a mental phase transition. All the same old parts were there, but they started working together in new ways that were far more than the sum of their parts. This transition brought us things like full-fledged human language, artistic and religious sensibilities, and consciousness and self-awareness. Within the space of perhaps thirty thousand years we began to build our own shelters, stitch hides and furs into garments, create shell jewelry and rock paintings, and carve flutes out of bones. We were more or less finished with genetic evolution, but had embarked on a much (much!) faster-paced form of evolution that acted not on genes but on culture.
And just what structural brain improvements were the keys to all of this? I will be happy to explain. But before I do that, I should give you a survey of brain anatomy so you can best appreciate the answer.
A Brief Tour of Your Brain
The human brain is made up of about 100 billion nerve cells, or neurons (Figure Int.1). Neurons “talk” to each other through threadlike fibers that alternately resemble dense, twiggy thickets (dendrites) and long, sinuous transmission cables (axons). Each neuron makes from one thousand to ten thousand contacts with other neurons. These points of contact, called synapses, are where information gets shared between neurons. Each synapse can be excitatory or inhibitory, and at any given moment can be on or off. With all these permutations the number of possible brain states is staggeringly vast; in fact, it easily exceeds the number of elementary particles in the known universe.
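Here is the back-of-the-envelope arithmetic behind that claim, sketched in Python. The numbers are rough and the on-or-off synapse is a deliberate cartoon of my own, but even under these conservative assumptions the conclusion is inescapable.

```python
# Rough order-of-magnitude check (illustrative assumptions only):
# treat every synapse as a simple on/off switch and count the combinations.
import math

neurons = 1e11              # roughly 100 billion neurons
contacts_per_neuron = 1e3   # the low end of 1,000-10,000 contacts each
synapses = neurons * contacts_per_neuron        # about 1e14 synapses

# The number of on/off patterns is 2**synapses; compare orders of magnitude.
digits_in_brain_states = synapses * math.log10(2)   # roughly 3e13 decimal digits
digits_in_particle_count = 80                        # ~1e80 particles, a common estimate

print(f"possible on/off patterns: a number about {digits_in_brain_states:.1e} digits long")
print(f"elementary particles:     a number about {digits_in_particle_count} digits long")
```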
Given this bewildering complexity, it’s hardly surprising that medical students find neuroanatomy tough going. There are almost a hundred structures to reckon with, most of them with arcane-sounding names. The fimbria. The fornix. The indusium griseum. The locus coeruleus. The nucleus motoris dissipatus formationis of Riley. The medulla oblongata. I must say, I love the way these Latin names roll off the tongue. Meh-dull-a oblong-gah-ta! My favorite is the substantia innominata, which literally means “substance without a name.” And the smallest muscle in the body, which is used to abduct the little toe, is the abductor ossis metatarsi digiti quinti minimi. I think it sounds like a poem. (With the first wave of the Harry Potter generation now coming up through medical school, perhaps soon we’ll finally start hearing these terms pronounced with more of the relish they deserve.)
Fortunately, underlying all this lyrical complexity there is a basic plan of organization that’s easy to understand. Neurons are connected into networks that can process information. The brain’s many dozens of structures are ultimately all purpose-built networks of neurons, and often have elegant internal organization. Each of these structures performs some set of discrete (though not always easy to decipher) cognitive or physiological functions. Each structure makes patterned connections with other brain structures, thus forming circuits. Circuits pass information back and forth and in repeating loops, and allow brain structures to work together to create sophisticated perceptions, thoughts, and behaviors.
FIGURE INT.1 Drawing of a neuron showing the cell body, dendrites, and axon. The axon transmits information (in the form of nerve impulses) to the next neuron (or set of neurons) in the chain. The axon is quite long, and only part of it is shown here. The dendrites receive information from the axons of other neurons. The flow of information is thus always unidirectional.
The information processing that occurs both within and between brain structures can get quite complicated—this is, after all, the information-processing engine that generates the human mind—but there is plenty that can be understood and appreciated by nonspecialists. We will revisit many of these areas in greater depth in the chapters ahead, but a basic acquaintance now with each region will help you to appreciate how these specialized areas work together to determine mind, personality, and behavior.
The human brain looks like a walnut made of two mirror-image halves (Figure Int.2). These shell-like halves are the cerebral cortex. The cortex is split down the middle into two hemispheres: one on the left, one on the right. In humans the cortex has grown so large that it has been forced to become convoluted (folded), giving it its famous cauliflower-like appearance. (In contrast, the cortex of most other mammals is smooth and flat for the most part, with few if any folds in the surface.) The cortex is essentially the seat of higher thought, the tabula (far from) rasa where all of our highest mental functions are carried out. Not surprisingly, it is especially well developed in two groups of mammals: dolphins and primates. We’ll return to the cortex later in the chapter. For now let’s look at the other parts of the brain.
FIGURE INT.2 The human brain viewed from the top and from the left side. The top view shows the two mirror-symmetric cerebral hemispheres, each of which controls the movements of—and receives signals from—the opposite side of the body (though there are some exceptions to this rule). Abbreviations: DLF, dorsolateral prefrontal cortex; OFC, orbitofrontal cortex; IPL, inferior parietal lobule; I, insula, which is tucked away deep beneath the Sylvian fissure below the frontal lobe. The ventromedial prefrontal cortex (VMF, not labeled) is tucked away in the inner lower part of the frontal lobe, and the OFC is part of it.
FIGURE INT.3 A schematic drawing of the human brain showing internal structures such as the amygdala, hippocampus, basal ganglia, and hypothalamus.
Running up and down the core of the spinal column is a thick bundle of nerve fibers—the spinal cord—that conducts a steady stream of messages between brain and body. These messages include things like touch and pain flowing up from the skin, and motor commands rat-a-tat-tatting down to the muscles. At its uppermost extent the spinal cord pokes up out of its bony sheath of vertebrae, enters the skull, and grows thick and bulbous (Figure Int.3). This thickening is called the brainstem, and it is divided into three parts: medulla, pons, and midbrain. The medulla and nuclei (neural clusters) on the floor of the pons control important vital functions like breathing, blood pressure, and body temperature. A hemorrhage from even a tiny artery supplying this region can spell instant death. (Paradoxically, the higher areas of the brain can sustain comparatively massive damage and leave the patient alive and even fit. For example, a large tumor in the frontal lobe might produce barely detectable neurological symptoms.)
Sitting on the roof of the pons is the cerebellum (Latin for “little brain”), which controls the fine coordination of movements and is also involved in balance, gait, and posture. When your motor cortex (a higher brain region that issues voluntary movement commands) sends a signal to the muscles via the spinal cord, a copy of that signal—sort of like a CC email—gets sent to the cerebellum. The cerebellum also receives sensory feedback from muscle and joint receptors throughout the body. Thus the cerebellum is able to detect any mismatches that may occur between the intended action and the actual action, and in response can insert appropriate corrections into the outgoing motor signal. This sort of real-time, feedback-driven mechanism is called a servo-control loop. Damage to the cerebellum causes the loop to go into oscillation. For example, a patient may attempt to touch her nose, feel her hand overshooting, and attempt to compensate with an opposing motion, which causes her hand to overshoot even more wildly in the opposite direction. This is called an intention tremor.
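For readers who think in code, here is a toy servo loop in Python. It is only a caricature of my own devising, with a single number standing in for the hand and a fixed correction gain standing in for the cerebellum, but it captures the logic: corrections based on stale feedback arrive too late, so each swing overshoots more wildly, loosely mimicking an intention tremor.

```python
# A toy servo-control loop (a caricature, not a model of real cerebellar
# circuitry). The controller nudges the hand toward a target using feedback
# about where the hand currently is.
def reach(target=10.0, gain=0.8, delay=0, steps=12):
    position, history = 0.0, [0.0]
    for step in range(steps):
        # Use feedback from `delay` steps ago instead of the current position.
        sensed = history[max(0, len(history) - 1 - delay)]
        position += gain * (target - sensed)      # corrective movement
        history.append(position)
    return [round(p, 1) for p in history]

print("prompt feedback:", reach(delay=0))   # settles smoothly on the target
print("stale feedback: ", reach(delay=2))   # overshoots back and forth, ever more wildly
```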
Surrounding the top portion of the brainstem are the thalamus and the basal ganglia. The thalamus receives its major inputs from the sense organs and relays them to the sensory cortex for more sophisticated processing. Why we need a relay station is far from clear. The basal ganglia are a strangely shaped cluster of structures that are concerned with the control of automatic movements associated with complex volitional actions—for example, adjusting your shoulder when throwing a dart, or coordinating the force and tension in dozens of muscles throughout your body while you walk. Damage to cells in the basal ganglia results in disorders like Parkinson’s disease, in which the patient’s torso is stiff, his face is an expressionless mask, and he walks with a characteristic shuffling gait. (Our neurology professor in medical school used to diagnose Parkinson’s by just listening to the patient’s footsteps next door; if we couldn’t do the same, he would fail us. Those were the days before high-tech medicine and magnetic resonance imaging, or MRI.) In contrast, excessive amounts of the brain chemical dopamine in the basal ganglia can lead to disorders known as choreas, which are characterized by uncontrollable movements that bear a superficial resemblance to dancing.
Finally we come to the cerebral cortex. Each cerebral hemisphere is subdivided into four lobes (see Figure Int.2): occipital, temporal, parietal, and frontal. These lobes have distinct domains of functioning, although in practice there is a great deal of interaction between them.
Broadly speaking, the occipital lobes are mainly concerned with visual processing. In fact, they are subdivided into as many as thirty distinct processing regions, each partially specialized for a different aspect of vision such as color, motion, and form.
The temporal lobes are specialized for higher perceptual functions, such as recognizing faces and other objects and linking them to appropriate emotions. They do this latter job in close cooperation with a structure called the amygdala (“almond”), which lies in the front tips (anterior poles) of the temporal lobes. Also tucked away beneath each temporal lobe is the hippocampus (“seahorse”), which lays down new memory traces. In addition to all this, the upper part of the left temporal lobe contains a patch of cortex known as Wernicke’s area. In humans this area has ballooned to seven times the size of the same area in chimpanzees; it is one of the few brain areas that can be safely declared unique to our species. Its job is nothing less than the comprehension of meaning and the semantic aspects of language—functions that are prime differentiators between human beings and mere apes.
The parietal lobes are primarily involved in processing touch, muscle, and joint information from the body and combining it with vision, hearing, and balance to give you a rich “multimedia” understanding of your corporeal self and the world around it. Damage to the right parietal lobe commonly results in a phenomenon called hemispatial neglect: The patient loses awareness of the left half of visual space. Even more remarkable is somatoparaphrenia, the patient’s vehement denial of ownership of her own left arm and insistence that it belongs to someone else. The parietal lobes have expanded greatly in human evolution, but no part of them has grown more than the inferior parietal lobules (IPL; see Figure Int.2). So great was this expansion that at some point in our past a large portion of it split into two new processing regions called the angular gyrus and the supramarginal gyrus. These uniquely human areas house some truly quintessential human abilities.
The right parietal lobe is involved in creating a mental model of the spatial layout of the outside world: your immediate environs, plus all the locations (but not identity) of objects, hazards, and people within it, along with your physical relationship to each of these things. Thus you can grab things, dodge missiles, and avoid obstacles. The right parietal, especially the right superior lobule (just above the IPL), is also responsible for constructing your body image—the vivid mental awareness you have of your body’s configuration and movement in space. Note that even though it is called an “image,” the body image is not a purely visual construct; it is also partly touch and muscle based. After all, a blind person has a body image too, and an extremely good one at that. In fact, if you zap the right angular gyrus with an electrode, you will have an out-of-body experience.
Now let’s consider the left parietal lobe. The left angular gyrus is involved in important functions unique to humans such as arithmetic, abstraction, and aspects of language such as word finding and metaphor. The left supramarginal gyrus, on the other hand, conjures up a vivid image of intended skilled actions—for example, sewing with a needle, hammering a nail, or waving goodbye—and executes them. Consequently, lesions in the left angular gyrus eliminate abstract skills like reading, writing, and arithmetic, while injury to the left supramarginal gyrus hinders you from orchestrating skilled movements. When I ask you to salute, you conjure up a visual image of the salute and, in a sense, use the image to guide your arm movements. But if your left supramarginal gyrus is damaged, you will simply stare at your hand perplexed or flail it around. Even though it isn’t paralyzed or weak and you clearly understand the command, you won’t be able to make your hand respond to your intention.
The frontal lobes also perform several distinct and vital functions. Part of this region, the motor cortex—the vertical strip of cortex running just in front of the big furrow in the middle of the brain (Figure Int.2)—is involved in issuing simple motor commands. Other parts are involved in planning actions and keeping goals in mind long enough to follow through on them. There is another small part of the frontal lobe that is required for holding things in memory long enough to know what to attend to. This faculty is called working memory or short-term memory.
So far so good. But when you move to the more anterior part of the frontal lobes you enter the most inscrutable terra incognita of the brain: the prefrontal cortex (parts of which are identified in Figure Int.2). Oddly enough, a person can sustain massive damage to this area and come out of it showing no obvious signs of any neurological or cognitive deficits. The patient may seem perfectly normal if you casually interact with her for a few minutes. Yet if you talk to her relatives, they will tell you that her personality has changed beyond recognition. “She isn’t in there anymore. I don’t even recognize this new person” is the sort of heart-wrenching statement you frequently hear from bewildered spouses and lifelong friends. And if you continue to interact with the patient for a few hours or days, you too will see that there is something profoundly deranged.
If the left prefrontal lobe is damaged, the patient may withdraw from the social world and show a marked reluctance to do anything at all. This is euphemistically called pseudodepression—“pseudo” because none of the standard criteria for identifying depression, such as feelings of bleakness and chronic negative thought patterns, are revealed by psychological or neurological probing. Conversely, if the right prefrontal lobe is damaged, a patient will seem euphoric even though, once again, he really won’t be. Cases of prefrontal damage are especially distressing to relatives. Such a patient seems to lose all interest in his own future and he shows no moral compunctions of any kind. He may laugh at a funeral or urinate in public. The great paradox is that he seems normal in most respects: his language, his memory, and even his IQ are unaffected. Yet he has lost many of the most quintessential attributes that define human nature: ambition, empathy, foresight, a complex personality, a sense of morality, and a sense of dignity as a human being. (Interestingly, a lack of empathy, moral standards, and self-restraint is also frequently seen in sociopaths, and the neurologist Antonio Damasio has pointed out that they may have some clinically undetected frontal dysfunction.) For these reasons the prefrontal cortex has long been regarded as the “seat of humanity.” As for the question of how such a relatively small patch of the brain manages to orchestrate such a sophisticated and elusive suite of functions, we are still very much at a loss.
Is it possible to isolate a given part of the brain, as Owen attempted, that makes our species unique? Not quite. There is no region or structure that appears to have been grafted into the brain de novo by an intelligent designer; at the anatomical level, every part of our brain has a direct analog in the brains of the great apes. However, recent research has identified a handful of brain regions that have been so radically elaborated that at the functional (or cognitive) level they actually can be considered novel and unique. I mentioned three of these areas above: Wernicke’s area in the left temporal lobe, the prefrontal cortex, and the IPL in each parietal lobe. Indeed, the offshoots of the IPL, namely the supramarginal and angular gyri, are anatomically nonexistent in apes. (Owen would have loved to have known about these.) The extraordinarily rapid development of these areas in humans suggests that something crucial must have been going on there, and clinical observations confirm this.
Within some of these regions, there is a special class of nerve cells called mirror neurons. These neurons fire not only when you perform an action, but also when you watch someone else perform the same action. This sounds so simple that its huge implications are easy to miss. What these cells do is effectively allow you to empathize with the other person and “read” her intentions—figure out what she is really up to. You do this by running a simulation of her actions using your own body image.
When you watch someone else reach for a glass of water, for example, your mirror neurons automatically simulate the same action in your (usually subconscious) imagination. Your mirror neurons will often go a step further and simulate the action they anticipate the other person is about to take next—say, lifting the water to her lips and taking a drink. Thus you automatically form an assumption about her intentions and motivations—in this case, that she is thirsty and is taking steps to quench that thirst. Now, you could be wrong in this assumption—she might intend to use the water to douse a fire or to fling it in the face of a boorish suitor—but usually your mirror neurons are reasonably accurate guessers of others’ intentions. As such, they are the closest thing to telepathy that nature was able to endow us with.
These abilities (and the underlying mirror-neuron circuitry) are also seen in apes, but only in humans do they seem to have developed to the point of being able to model aspects of others’ minds rather than merely their actions. Inevitably this would have required the development of additional connections to allow a more sophisticated deployment of such circuits in complex social situations. Deciphering the nature of these connections—rather than just saying, “It’s done by mirror neurons”—is one of the major goals of current brain research.
It is difficult to overstate the importance of understanding mirror neurons and their function. They may well be central to social learning, imitation, and the cultural transmission of skills and attitudes—perhaps even of the pressed-together sound clusters we call “words.” By hyper-developing the mirror-neuron system, evolution in effect turned culture into the new genome. Armed with culture, humans could adapt to hostile new environments and figure out how to exploit formerly inaccessible or poisonous food sources in just one or two generations—instead of the hundreds or thousands of generations such adaptations would have taken to accomplish through genetic evolution.
Thus culture became a significant new source of evolutionary pressure, which helped select for brains that had even better mirror-neuron systems and the imitative learning associated with them. The result was one of the many self-amplifying snowball effects that culminated in Homo sapiens, the ape that looked into its own mind and saw the whole cosmos reflected inside.
CHAPTER 1
Phantom Limbs and Plastic Brains
I love fools’ experiments. I am always making them.
—CHARLES DARWIN
AS A MEDICAL STUDENT I EXAMINED A PATIENT NAMED MIKHEY during my neurology rotation. Routine clinical testing required me to poke her neck with a sharp needle. It should have been mildly painful, but with each poke she laughed out loud, saying it was ticklish. This, I realized, was the ultimate paradox: laughter in the face of pain, a microcosm of the human condition itself. I was never able to investigate Mikhey’s case as I would have liked.
Soon after this episode, I decided to study human vision and perception, a decision largely influenced by Richard Gregory’s excellent book Eye and Brain. I spent several years doing research on neurophysiology and visual perception, first at the University of Cambridge’s Trinity College, and then in collaboration with Jack Pettigrew at Caltech.
But I never forgot the patients like Mikhey whom I had encountered during my neurology rotation as a medical student. In neurology, it seemed, there were so many questions left unresolved. Why did Mikhey laugh when poked? Why does the big toe go up when you stroke the outer border of the foot of a stroke patient? Why do patients with temporal lobe seizures believe they experience God and exhibit hypergraphia (incessant, uncontrollable writing)? Why do otherwise intelligent, perfectly lucid patients with damage to the right parietal lobe deny that their left arm belongs to them? Why does an emaciated anorexic with perfectly normal eyesight look in a mirror and claim she looks obese? And so, after years of specializing in vision, I returned to my first love: neurology. I surveyed the many unanswered questions of the field and decided to focus on a specific problem: phantom limbs. Little did I know that my research would yield unprecedented evidence of the amazing plasticity and adaptability of the human brain.
It had been known for over a century that when a patient loses an arm to amputation, she may continue to feel vividly the presence of that arm—as though the arm’s ghost were still lingering, haunting its former stump. There had been various attempts to explain this baffling phenomenon, ranging from flaky Freudian scenarios involving wish fulfillment to invocations of an immaterial soul. Not being satisfied with any of these explanations, I decided to tackle it from a neuroscience perspective.
I remember a patient named Victor on whom I conducted nearly a month of frenzied experiments. He came to see me because his left arm had been amputated below the elbow about three weeks prior to his visit. I first verified that there was nothing wrong with him neurologically: His brain was intact, his mind was normal. Based on a hunch I blindfolded him and started touching various parts of his body with a Q-tip, asking him to report what he felt, and where. His answers were all normal and correct until I started touching the left side of his face. Then something very odd happened.
He said, “Doctor, I feel that on my phantom hand. You’re touching my thumb.”
I used my knee hammer to stroke the lower part of his jaw. “How about now?” I asked.
“I feel a sharp object moving across the pinky to the palm,” he said.
By repeating this procedure I discovered that there was an entire map of the missing hand on his face. The map was surprisingly precise and consistent, with fingers clearly delineated (Figure 1.1). On one occasion I pressed a damp Q-tip against his cheek and sent a bead of water trickling down his face like a tear. He felt the water move down his cheek in the normal fashion, but claimed he could also feel the droplet trickling down the length of his phantom arm. Using his right index finger, he even traced the meandering path of the trickle through the empty air in front of his stump. Out of curiosity I asked him to elevate his stump and point the phantom upward toward the ceiling. To his astonishment he felt the next drop of water flowing up along the phantom, defying the law of gravity.
FIGURE 1.1 A patient with a phantom left arm. Touching different parts of his face evoked sensations in different parts of the phantom: P, pinky; T, thumb; B, ball of thumb; I, index finger.
Victor said he had never discovered this virtual hand on his face before, but as soon as he knew about it he found a way to put it to good use: Whenever his phantom palm itches—a frequent occurrence that used to drive him crazy—he says he can now relieve it by scratching the corresponding location on his face.
Why does all this happen? The answer, I realized, lies in the brain’s anatomy. The entire skin surface of the left side of the body is mapped onto a strip of cortex called the postcentral gyrus (see Figure Int.2 in the Introduction) running down the right side of the brain. This map is often illustrated with a cartoon of a man draped on the brain surface (Figure 1.2). Even though the map is accurate for the most part, some portions of it are scrambled with respect to the body’s actual layout. Notice how the map of the face is located next to the map of the hand instead of being near the neck where it “should” be. This provided the clue I was looking for.
Think of what happens when an arm is amputated. There is no longer an arm, but there is still a map of the arm in the brain. The job of this map, its raison d’être, is to represent its arm. The arm may be gone but the brain map, having nothing better to do, soldiers on. It keeps representing the arm, second by second, day after day. This map persistence explains the basic phantom limb phenomenon—why the felt presence of the limb persists long after the flesh-and-blood limb has been severed.
FIGURE 1.2 The Penfield map of the skin surface on the postcentral gyrus (see Figure Int.2). The drawing shows a coronal section (roughly, a cross section) going through the middle of the brain at the level of the postcentral gyrus. The artist’s whimsical depiction of a person draped on the brain surface shows the exaggerated representations of certain body parts (face and hand) and the fact that the hand map is above the face map.
Now, how to explain the bizarre tendency to attribute touch sensations arising from the face to the phantom hand? The orphaned brain map continues to represent the missing arm and hand in absentia, but it is not receiving any actual touch inputs. It is listening to a dead channel, so to speak, and is hungry for sensory signals. There are two possible explanations for what happens next. The first is that the sensory input flowing from the facial skin to the face map in the brain begins to actively invade the vacated territory corresponding to the missing hand. The nerve fibers from the facial skin that normally project to the face cortex sprout thousands of neural tendrils that creep over into the arm map and establish strong, new synapses. As a result of this cross-wiring, touch signals applied to the face not only activate the face map, as they normally do, but also activate the hand map in the cortex, which shouts “hand!” to higher brain areas. The net result is that the patient feels that his phantom hand is being touched every time his face is touched.
A second possibility is that even prior to amputation, the sensory input from the face not only gets sent to the face area but partially encroaches into the hand region, almost as if they are reserve troops ready to be called into action. But these abnormal connections are ordinarily silent; perhaps they are continuously inhibited or damped down by the normal baseline activity from the hand itself. Amputation would then unmask these ordinarily silent synapses so that touching the face activates cells in the hand area of the brain. That in turn causes the patient to experience the sensations as arising from the missing hand.
Independent of which of these two theories—sprouting or unmasking—is correct, there is an important take-home message. Generations of medical students were told that the brain’s trillions of neural connections are laid down in the fetus and during early infancy and that adult brains lose their ability to form new connections. This lack of plasticity—this lack of ability to be reshaped or molded—was often used as an excuse to tell patients why they could expect to recover very little function after a stroke or traumatic brain injury. Our observations flatly contradicted this dogma by showing, for the first time, that even the basic sensory maps in the adult human brain can change over distances of several centimeters. We were then able to use brain-imaging techniques to show directly that our theory was correct: Victor’s brain maps had indeed changed as predicted (Figure 1.3).
FIGURE 1.3 A MEG (magnetoencephalograph) map of the body surface in a right-arm amputee. Hatched area, hand; black areas, face; white areas, upper arm. Notice that the region corresponding to the right hand (hatched area) is missing from the left hemisphere, but this region gets activated by touching the face or upper arm.
Soon after we published, evidence confirming and extending these findings started to come in from many groups. Two Italian researchers, Giovanni Berlucchi and Salvatore Aglioti, found that after amputation of a finger there was a “map” of a single finger draped neatly across the face as expected. In another patient the trigeminal nerve (the sensory nerve supplying the face) was severed and soon a map of the face appeared on the palm: the exact converse of what we had seen. Finally, after amputation of the foot of another patient, sensations from the penis were felt in the phantom foot. (Indeed, the patient claimed that his orgasm spread into his foot and was therefore “much bigger than it used to be.”) This occurs because of another of these odd discontinuities in the brain’s map of the body: The map of the genitals is right next to the map of the foot.
MY SECOND EXPERIMENT on phantom limbs was even simpler. In a nutshell, I created a simple setup using ordinary mirrors to mobilize paralyzed phantom limbs and reduce phantom pain. To understand how this works, I first need to explain why some patients are able to “move” their phantoms but others are not.
Many patients with phantoms have a vivid sense of being able to move their missing limbs. They say things like “It’s waving goodbye” or “It’s reaching out to answer the phone.” Of course, they know perfectly well that their hands aren’t really doing these things—they aren’t delusional, just armless—but subjectively they have a realistic sensation that they are moving the phantom. Where do these feelings come from?
I conjectured that they were coming from the motor command centers in the front of the brain. You might recall from the Introduction how the cerebellum fine-tunes our actions through a servo-loop process. What I didn’t mention is that the parietal lobes also participate in this servo-loop process through essentially the same mechanism. Again briefly: Motor output signals to the muscles are (in effect) CC’ed to the parietal lobes, where they are compared to sensory feedback signals from the muscles, skin, joints, and eyes. If the parietal lobes detect any mismatches between the intended movements and the hand’s actual movements, they make corrective adjustments to the next round of motor signals. You use this servo-guided system all the time. This is what allows you, for instance, to maneuver a heavy juice pitcher into a vacant spot on the breakfast table without spilling or knocking over the surrounding tableware. Now imagine what happens if the arm is amputated. The motor command centers in the front of the brain don’t “know” the arm is gone—they are on autopilot—so they continue to send motor command signals to the missing arm. By the same token, they continue to CC these signals to the parietal lobes. These signals flow into the orphaned, input-hungry hand region of your body-image center in the parietal lobe. These CC’ed signals from motor commands are misinterpreted by the brain as actual movements of the phantom.
Now you may wonder why, if this is true, you don’t experience the same sort of vivid phantom movement when you imagine moving your hand while deliberately holding it still. Here is the explanation I proposed several years ago, which has since been confirmed by brain-imaging studies. When your arm is intact, the sensory feedback from the skin, muscles, and joint sensors in your arm, as well as the visual feedback from your eyes, are all testifying in unison that your arm is not in fact moving. Even though your motor cortex is sending “move” signals to your parietal lobe, the countervailing testimony of the sensory feedback acts as a powerful veto. As a result, you don’t experience the imagined movement as though it were real. If the arm is gone, however, your muscles, skin, joints, and eyes cannot provide this potent reality check. Without the feedback veto, the strongest signal entering your parietal lobe is the motor command to the hand. As a result, you experience actual movement sensations.
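The logic of this veto is simple enough to write down as a cartoon decision rule. The Python sketch below is my own simplification rather than anyone’s published model: the body-image comparator accepts the copied motor command only when no sensory channel contradicts it.

```python
# A cartoon of the "feedback veto" (an illustrative simplification, not a published model).
def felt_movement(motor_copy, proprioception, vision):
    """What the body-image comparator 'concludes' about the limb."""
    evidence = [signal for signal in (proprioception, vision) if signal is not None]
    if evidence and all(signal == "still" for signal in evidence):
        return "command vetoed: no felt movement"
    if motor_copy == "move":
        return "no veto: vivid felt movement"
    return "no felt movement"

print(felt_movement("move", "still", "still"))   # intact arm deliberately held still
print(felt_movement("move", None, None))         # amputated arm: no feedback, no veto
```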
Moving phantom limbs is bizarre enough, but it gets even stranger. Many patients with phantom limbs report the exact opposite: Their phantoms are paralyzed. “It’s frozen, Doctor.” “It’s in a block of cement.” For some of these patients the phantom is twisted into an awkward, extremely painful position. “If only I could move it,” a patient once told me, “it might help alleviate the pain.”
When I first saw this, I was baffled. It made no sense. They had lost their limbs, but the sensory-motor connections in their brains were presumably the same as they had been before their amputations. Puzzled, I started examining some of these patients’ charts and quickly found the clue I was looking for. Prior to amputation, many of these patients had had real paralysis of their arm caused by a peripheral nerve injury: the nerve that used to innervate the arm had been ripped out of the spinal cord, like a phone cord being yanked out of its wall jack, by some violent accident. So the arm had lain intact but paralyzed for many months prior to amputation. I started to wonder if perhaps this period of real paralysis could lead to a state of learned paralysis, which I conjectured could come about in the following way.
During the preamputation period, every time the motor cortex sent a movement command to the arm, the sensory cortex in the parietal lobe would receive feedback from the muscles, skin, joints, and eyes reporting that nothing had happened. The entire feedback loop had gone dead. Now, it is well established that experience modifies the brain by strengthening or weakening the synapses that link neurons together. This modification process is known as learning. When patterns are constantly reinforced—when the brain sees that event B invariably follows event A, for instance—the synapses between the neurons that represent A and the neurons that represent B are strengthened. On the other hand, if A and B stop having any apparent relationship to each other, the neurons that represent A and B will shut down their mutual connections to reflect this new reality.
So here we have a situation where the motor cortex was continually sending out movement commands to the arm, which the parietal lobe continually saw as having absolutely zero muscular or sensory effect. The synapses that used to support the strong correlations between motor commands and the sensory feedback they should generate were shown to be liars. Every new, impotent motor signal reinforced this trend, so the synapses grew weaker and weaker and eventually became moribund. In other words, the paralysis was learned by the brain, stamped into the circuitry where the patient’s body image was constructed. Later, when the arm was amputated, the learned paralysis got carried over into the phantom so the phantom felt paralyzed.
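The arithmetic of such unlearning is easy to caricature. In the Python sketch below, which is purely illustrative and uses made-up numbers, a single weight stands for the strength of the association between a motor command and its expected sensory feedback; months of unanswered commands drain it toward zero.

```python
# A one-synapse caricature of learned paralysis (illustrative only).
def update(weight, command_sent, feedback_received, rate=0.1):
    """Hebbian-style update: correlation strengthens the link, its absence weakens it."""
    if command_sent and feedback_received:
        return weight + rate * (1.0 - weight)    # reinforce the association
    if command_sent and not feedback_received:
        return weight * (1.0 - rate)             # let it wither
    return weight                                # nothing happened, nothing learned

weight = 0.9                                     # a healthy association before the injury
for _ in range(50):                              # months of commands with zero feedback
    weight = update(weight, command_sent=True, feedback_received=False)
print(f"association after the injury: {weight:.3f}")   # about 0.005, effectively moribund
```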
How could one test such an outlandish theory? I hit on the idea of constructing a mirror box (Figure 1.4). I placed an upright mirror in the center of a cardboard box whose top and front had been removed. If you stood in front of the box, held your hands on either side of the mirror and looked down at them from an angle, you would see the reflection of one hand precisely superimposed on the felt location of your other hand. In other words, you would get the vivid but false impression that you were looking at both of your hands; in fact, you would only be looking at one actual hand and one reflection of a hand.
If you have two normal, intact hands, it can be entertaining to play around with this illusion in the mirror box. For example, you can move your hands synchronously and symmetrically for a few moments—pretending to conduct an orchestra works well—and then suddenly move them in different ways. Even though you know it’s an illusion, a jolt of mild surprise invariably shoots through your mind when you do this. The surprise comes from the sudden mismatch between two streams of feedback: The skin-and-muscle feedback you get from the hand behind the mirror says one thing, but the visual feedback you get from the reflected hand—which your parietal lobe had become convinced is the hidden hand itself—reports some other movement.
FIGURE 1.4 The mirror arrangement for animating the phantom limb. The patient “puts” his paralyzed and painful phantom left arm behind the mirror and his intact right hand in front of the mirror. If he then views the mirror reflection of the right hand by looking into the right side of the mirror, he gets the illusion that the phantom has been resurrected. Moving the real hand causes the phantom to appear to move, and it then feels like it is moving—sometimes for the first time in years. In many patients this exercise relieves the phantom cramp and associated pain. In clinical trials, mirror visual feedback has also been shown to be more effective than conventional treatments for complex regional pain syndrome and paralysis resulting from stroke.
Now let’s look at what this mirror-box setup does for a person with a paralyzed phantom limb. The first patient we tried this on, Jimmie, had an intact right arm and a phantom left arm. His phantom jutted like a mannequin’s resin-cast forearm out of his stump. Far worse, it was also subject to painful cramping that his doctors could do nothing about. I showed him the mirror box and explained to him that this might seem like a slightly off-the-wall thing to attempt, with no guarantee that it would have any effect, but he was cheerfully willing to give it a try. He held out his paralyzed phantom on the left side of the mirror, looked into the right side of the box and carefully positioned his right hand so that its image was congruent with (superimposed on) the felt position of the phantom. This immediately gave him the startling visual impression that the phantom had been resurrected. I then asked him to perform mirror-symmetric movements of both arms and hands while he continued looking into the mirror. He cried out, “It’s like it’s plugged back in!” Now he not only had a vivid impression that the phantom was obeying his commands, but to his amazement, it began to relieve his painful phantom spasms for the first time in years. It was as though the mirror visual feedback (MVF) had allowed his brain to “unlearn” the learned paralysis.
Even more remarkably, when one of our patients, Ron, took the mirror box home and played around with it for three weeks in his spare time, his phantom limb vanished completely, along with the pain. All of us were shocked. A simple mirror box had exorcised a phantom. How? No one has proven the mechanism yet, but here is how I suspect it works. When faced with such a welter of conflicting sensory inputs—no joint or muscle feedback, impotent copies of motor-command signals, and now discrepant visual feedback thrown in via the mirror box—the brain just gives up and says, in effect, “To hell with it; there is no arm.” The brain resorts to denial. I often tell my medical colleagues that this is the first case in the history of medicine of a successful amputation of a phantom limb. When I first observed this disappearance of the phantom using MVF, I myself didn’t quite believe it. The notion that you could amputate a phantom with a mirror seemed outlandish, but it has now been replicated by other groups of researchers, especially Herta Flor, a neuroscientist at the University of Heidelberg. The reduction of phantom pain has also been confirmed by Jack Tsao’s group at the Walter Reed Army Medical Center in Maryland. They conducted a placebo-controlled clinical study on 24 patients (including 16 placebo controls). The phantom pain vanished after just three weeks in the 8 patients using the mirror, whereas none of the control patients (who used Plexiglas and visual imagery instead of mirrors) showed any improvement. Moreover, when the control patients were switched over to the mirror, they showed the same substantial pain reduction as the original experimental group.
More important, MVF is now being used for accelerating recovery from paralysis following stroke. My postdoctoral colleague Eric Altschuler and I first reported this in The Lancet in 1998, but our sample size was small—just 9 patients. A German group led by Christian Dohle has recently tried the technique on 50 stroke patients in a triple-blind controlled study, and shown that a majority of them regained both sensory and motor functions. Given that one in six people will suffer from a stroke, this is an important discovery.
More clinical applications for MVF continue to emerge. One pertains to a curious pain disorder with an equally curious name—complex regional pain syndrome–Type II (CRPS-II)—which is simply a verbal smoke screen for “Sounds awful! I have no idea what it is.” Whatever you call it, this affliction is actually quite common: It manifests in about 10 percent of stroke victims. The better-known variant of the disorder occurs after a minor injury such as an ordinarily innocuous hairline fracture in one of the metacarpals (hand bones). There is initially pain, of course, as one would expect to accompany a broken hand. Ordinarily the pain gradually goes away as the bone heals. But in an unfortunate subset of patients this doesn’t happen. They end up with chronic, excruciating pain that is unrelenting and persists indefinitely long after the original wound has healed. There is no cure—or at least, that’s what I had been taught in medical school.
It occurred to me that an evolutionary approach to this problem might be useful. We usually think of pain as a single thing, but from a functional point of view there are at least two kinds of pain. There is acute pain—as when you accidentally put your hand on a hot stove, yelp, and yank your hand away—and then there is chronic pain: pain that persists or recurs over long or indefinite periods, such as might accompany a bone fracture in the hand. Although the two feel the same (painful), they have different biological functions and different evolutionary origins. Acute pain causes you to instantly remove your hand from the stove to prevent further tissue damage. Chronic pain motivates you to keep your fractured hand immobilized to prevent reinjury while it heals.
I began to wonder: If learned paralysis could explain immobilized phantoms, perhaps CRPS-II is a form of “learned pain.” Imagine a patient with a fractured hand. Imagine how, during his long convalescence, pain shoots through his hand every time he moves it. His brain is seeing a constant “if A then B” pattern of events, where A is movement and B is pain. Thus the synapses between the various neurons that represent these two events are strengthened daily—for months on end. Eventually the very attempt to move the hand elicits excruciating pain. This pain may even spread to the arm, causing it to freeze up. In some such cases, the arm not only develops paralysis but actually becomes swollen and inflamed, and in the case of Sudeck’s atrophy the bone may even start atrophying. All of this can be seen as a strange manifestation of mind-body interactions gone horribly awry.
At the “Decade of the Brain” symposium that I organized at the University of California, San Diego, in October 1996, I suggested that the mirror box might help alleviate learned pain in the same way that it affects phantom pain. The patient could try moving her limbs in synchrony while looking in the mirror, creating the illusion that the afflicted arm is moving freely, with no pain being evoked. Watching this repeatedly may lead to an “unlearning” of learned pain. A few years later the mirror box was tested by two research groups and found to be effective in treating CRPS-II in a majority of patients. Both studies were conducted double-blind using placebo controls. To be honest I was quite surprised. Since that time, two other double-blind randomized studies have confirmed the striking effectiveness of the procedure. (There is a variant of CRPS-II seen in 15 percent of stroke victims, and the mirror is effective in them as well.)
I’ll mention one last observation on phantom limbs that is even more remarkable than the cases mentioned so far. I used the conventional mirror box but added a novel twist. I had the patient, Chuck, looking at the reflection of his intact limb so as to optically resurrect the phantom as before. But this time, instead of asking him to move his arm, I asked him to hold it steady while I put a minifying (image-shrinking) concave lens between his line of sight and the mirror reflection. From Chuck’s point of view, his phantom now appeared to be about one-half or one-third its “real” size.
Chuck looked surprised and said, “It’s amazing, Doctor. My phantom not only looks small but feels small as well. And guess what—the pain has shrunk too! Down to about one-fourth the intensity it was before.”
This raises the intriguing question of whether even real pain in a real arm evoked with a pinprick would also be diminished by optically shrinking the pin and the arm. In several of the experiments I just described, we saw just how potent a factor vision (or its lack) can be in influencing phantom pain and motor paralysis. If this sort of optically mediated anesthesia could be shown to work on an intact hand, it would be another astonishing example of mind-body interaction.
IT IS FAIR to say that these discoveries—together with the pioneering animal studies of Mike Merzenich and John Kaas and some ingenious clinical work by Leonardo Cohen and Paul Bach-y-Rita—ushered in a whole new era in neurology, and in neurorehabilitation especially. They led to a radical shift in the way we think about the brain. The old view, which prevailed through the 1980s, was that the brain consists of many specialized modules that are hardwired from birth to perform specific jobs. (The box-and-arrow diagrams of brain connectivity in anatomy textbooks have fostered this highly misleading picture in the minds of generations of medical students. Even today, some textbooks continue to represent this “pre-Copernican” view.)
But starting in the 1990s, this static view of the brain was steadily supplanted by a much more dynamic picture. The brain’s so-called modules don’t do their jobs in isolation; there is a great deal of back-and-forth interaction between them, far more than previously suspected. Changes in the operation of one module—say, from damage, or from maturation, or from learning and life experience—can lead to significant changes in the operations of many other modules to which it is connected. To a surprising extent, one module can even take over the functions of another. Far from being wired up according to rigid, prenatal genetic blueprints, the brain’s wiring is highly malleable—and not just in infants and young children, but throughout every adult lifetime. As we have seen, even the basic “touch” map in the brain can be modified over relatively large distances, and a phantom can be “amputated” with a mirror. We can now say with confidence that the brain is an extraordinarily plastic biological system that is in a state of dynamic equilibrium with the external world. Even its basic connections are being constantly updated in response to changing sensory demands. And if you take mirror neurons into account, then we can infer that your brain is also in synch with other brains—analogous to a global Internet of Facebook pals constantly modifying and enriching each other.
As remarkable as this paradigm shift was, and leaving aside its vast clinical importance, you may be wondering at this point what these tales of phantom limbs and plastic brains have to do with human uniqueness. Is lifelong plasticity a distinctly human trait? In fact, it is not. Don’t lower primates get phantom limbs? Yes, they do. Don’t their cortical limb and face representations remap following amputation? Definitely. So what does plasticity tell us about our uniqueness?
The answer is that lifelong plasticity (not just genes) is one of the central players in the evolution of human uniqueness. Through natural selection our brains evolved the ability to exploit learning and culture to drive our mental phase transitions. We might as well call ourselves Homo plasticus. While other animal brains exhibit plasticity, we are the only species to use it as a central player in brain refinement and evolution. One of the major ways we managed to leverage neuroplasticity to such stratospheric heights is known as neoteny—our almost absurdly prolonged infancy and youth, which leaves us both hyperplastic and hyperdependent on older generations for well over a decade. Human childhood helps lay the groundwork of the adult mind, but plasticity remains a major force throughout life. Without neoteny and plasticity, we would still be naked savanna apes—without fire, without tools, without writing, lore, beliefs, or dreams. We really would be “nothing but” apes, instead of aspiring angels.
INCIDENTALLY, EVEN THOUGH I was never able to directly study Mikhey—the patient I met as a medical student who laughed when she should have yelped in pain—I never stopped pondering her case. Mikhey’s laughter raises an interesting question: Why does anybody laugh at anything? Laughter—and its cognitive companion, humor—is a universal trait present in all cultures. Some apes are known to “laugh” when tickled, but I doubt if they would laugh upon seeing a portly ape slip on a banana peel and fall on his arse. Jane Goodall certainly has never reported anything about chimpanzees performing pantomime skits for each other à la the Three Stooges or the Keystone Kops. Why and how humor evolved in us is a mystery, but Mikhey’s predicament gave me a clue.
Any joke or humorous incident has the following form. You narrate a story step-by-step, leading your listener along a garden path of expectation, and then you introduce an unexpected twist, a punch line, the comprehension of which requires a complete reinterpretation of the preceding events. But that’s not enough: No scientist whose theoretical edifice is demolished by a single ugly fact entailing a complete overhaul is likely to find it amusing. (Believe me, I’ve tried!) Deflation of expectation is necessary but not sufficient. The extra key ingredient is that the new interpretation must be inconsequential. Let me illustrate. The dean of the medical school starts walking along a path, but before reaching his destination he slips on a banana peel and falls. If his skull is fractured and blood starts gushing out, you rush to his aid and call the ambulance. You don’t laugh. But if he gets up unhurt, wiping the banana off his expensive trousers, you break out into a fit of laughter. It’s called slapstick. The key difference is that in the first case, there is a true alarm requiring urgent attention. In the second case it’s a false alarm, and by laughing you inform your kin in the vicinity not to waste their resources rushing to his aid. It is nature’s “all’s okay” signal. What is left unexplained is the slight schadenfreude aspect to the whole thing.
How does this explain Mikhey’s laughter? I didn’t know this at that time, but many years later I saw another patient named Dorothy with a similar “laughter from pain” syndrome. A CT (computed tomography) scan revealed that one of the pain pathways in her brain was damaged. Even though we think of pain as a single sensation, there are in fact several layers to it. The sensation of pain is initially processed in a small structure called the insula (“island”), which is folded deep beneath the temporal lobe on each side of the brain (see Figure Int.2, in the Introduction). From the insula the pain information is then relayed to the anterior cingulate in the frontal lobes. It is here you feel the actual unpleasantness—the agony and the awfulness of the pain—along with an expectation of danger. If this pathway is cut, as it was in Dorothy and presumably in Mikhey, the insula continues to provide the basic sensation of pain but it doesn’t lead to the expected awfulness and agony: The anterior cingulate doesn’t get the message. It says, in effect, “all’s okay.” So here we have the two key ingredients for laughter: A palpable and imminent indication that alarm is warranted (from the insula) followed by a “no big whoop” follow-up (from the silence of the anterior cingulate). So the patient laughs uncontrollably.
And the same holds for tickling. The huge adult approaches the child menacingly. She is clearly outmatched, prey, completely at the mercy of a hulking Grendel. Some instinctive part of her—her inner primate, primed to flee from the terrors of eagles and jaguars and pythons (oh my!)—cannot help but interpret the situation this way. But then the monster turns out to be gentle. It deflates her expectation of danger. What might have been fangs and claws digging fatally into her ribs turn out to be nothing but firmly undulating fingers. And the child laughs. It may well be that tickling evolved as an early playful rehearsal for adult humor.
The false-alarm theory explains slapstick, and it is easy to see how it might have been evolutionarily coopted (exapted, to use the technical term) for cognitive slapstick—jokes, in other words. Cognitive slapstick may similarly serve to deflate falsely evoked expectations of danger which might otherwise result in resources being wasted on imaginary dangers. Indeed, one could go so far as to say that humor serves as an effective antidote to an otherwise futile struggle against the ultimate danger: the ever-present fear of death in self-conscious beings like us.
Lastly, consider that universal greeting gesture in humans: the smile. When an ape is approached by another ape, the default assumption is that it is being approached by a potentially dangerous stranger, so it signals its readiness to fight by protruding its canines in a grimace. This evolved further and became ritualized into a mock threat expression, an aggressive gesture warning the intruder of potential retaliation. But if the approaching ape is recognized as a friend, the threat expression (baring canines) is aborted halfway, and this halfway grimace (partly hiding the canines) becomes an expression of appeasement and friendliness. Once again a potential threat (attack) is abruptly aborted—the key ingredients for laughter. No wonder a smile has the same subjective feeling as laughter. It incorporates the same logic and may piggyback on the same neural circuits. How very odd that when your lover smiles at you, she is in fact half-baring her canines, reminding you of her bestial origins.
And so it is that we can begin with a bizarre mystery that could have come straight from Edgar Allan Poe, apply Sherlock Holmes’s methods, diagnose and explain Mikhey’s symptoms, and, as a bonus, illuminate the possible evolution and biological function of a much treasured but deeply enigmatic aspect of the human mind.
CHAPTER 2
Seeing and Knowing
“You see but you do not observe.”
—SHERLOCK HOLMES
THIS CHAPTER IS ABOUT VISION. OF COURSE, EYES AND VISION ARE not unique to humans—not by a long shot. In fact, the ability to see is so useful that eyes have evolved many separate times in the history of life. The eyes of the octopus are eerily similar to our own, despite the fact that our last common ancestor was a blind aquatic slug- or snail-like creature that lived well over half a billion years ago.1 Eyes are not unique to us, but vision does not occur in the eye. It occurs in the brain. And there is no other creature on earth that sees objects quite the way we do. Some animals have much higher visual acuity than we do. You sometimes hear factoids such as the claim that an eagle could read tiny newsprint from fifty feet away. But of course, eagles can’t read.
This book is about what makes humans special, and a recurring theme is that our unique mental traits must have evolved from preexisting brain structures. We begin our journey with visual perception, partly because more is known about its intricacies than about any other brain function and partly because the development of visual areas accelerated greatly in primate evolution, culminating in humans. Carnivores and herbivores probably have fewer than a dozen visual areas and no color vision. The same holds for our own ancestors, tiny nocturnal insectivores scurrying up tree branches, little realizing that their descendants would one day inherit—and possibly annihilate!—the earth. But humans have as many as thirty visual areas instead of a mere dozen. What are they doing, given that a sheep can get away with far fewer?
When our shrewlike ancestors became diurnal, evolving into prosimians and monkeys, they began to develop extrasophisticated visuomotor capacities for precisely grasping and manipulating branches, twigs, and leaves. Furthermore, the shift in diet from tiny nocturnal insects to red, yellow, and blue fruits, as well as to leaves whose nutritional value was color coded in various shades of green, brown, and yellow, propelled the emergence of a sophisticated system for color vision. This rewarding aspect of color perception may have subsequently been exploited by female primates to advertise their monthly sexual receptivity and ovulation with estrus swellings—conspicuous, colorful swellings of the rump that resemble ripe fruit. (This feature has been lost in human females, who have evolved to be continuously receptive sexually throughout the month—something I have yet to observe personally.) In a further twist, as our ape ancestors evolved toward adopting a full-time upright bipedal posture, the allure of swollen pink rumps may have been transferred to plump lips. One is tempted to suggest—tongue in cheek—that our predilection for oral sex may also be an evolutionary throwback to our ancestors’ days as frugivores (fruit eaters). It is an ironic thought that our enjoyment of a Monet or a Van Gogh or of Romeo’s savoring Juliet’s kiss may ultimately trace back to an ancient attraction to ripe fruits and rumps. (This is what makes evolutionary psychology so much fun: You can come up with an outlandishly satirical theory and get away with it.)
In addition to the extreme agility of our fingers, the human thumb developed a unique saddle joint allowing it to oppose the forefinger. This feature, which enables the so-called precision grip, may seem trivial, but it is useful for picking small fruits, nuts, and insects. It also turns out to be quite useful for threading needles, hafting hand axes, counting, or conveying Buddha’s peace gesture. The requirement for fine independent finger movements, opposable thumbs, and exquisitely precise eye-hand coordination—the evolution of which was set in motion early in the primate line—may have been the final source of selection pressure that led us to develop our plethora of sophisticated visual and visuomotor areas in the brain. Without all these areas, it is arguable whether you could blow a kiss, write, count, throw a dart, smoke a joint, or—if you are a monarch—wield a scepter.
This link between action and perception has become especially clear in the last decade with the discovery of a new class of neurons in the frontal lobes called canonical neurons. These neurons are similar in some respects to the mirror neurons I introduced in the last chapter. Like mirror neurons, each canonical neuron fires during the performance of a specific action such as reaching for a vertical twig or an apple. But the same neuron will also fire at the mere sight of a twig or an apple. In other words, it is as though the abstract property of graspability were being encoded as an intrinsic aspect of the object’s visual shape. The distinction between perception and action exists in our ordinary language, but it is one that the brain evidently doesn’t always respect.
While the line between visual perception and prehensile action became increasingly blurred in primate evolution, so too did the line between visual perception and visual imagination in human evolution. A monkey, a dolphin, or a dog probably enjoys some rudimentary form of visual imagery, but only humans can create symbolic visual tokens and juggle them around in the mind’s eye to try out novel juxtapositions. An ape can probably conjure up a mental picture of a banana or the alpha male of his troop, but only a human can mentally juggle visual symbols to create novel combinations, such as babies sprouting wings (angels) or beings that are half-horse, half-human (centaurs). Such imagery and “off-line” symbol juggling may, in turn, be a requirement for another unique human trait, language, which we take up in Chapter 6.
IN 1988 A sixty-year-old man was taken to the emergency room of a hospital in Middlesex, England. John had been a fighter pilot in World War II. Until that fateful day, when he suddenly developed severe abdominal pain and vomiting, he had been in perfect health. The house officer, Dr. David McFee, elicited a history of the illness. The pain had begun near the navel and then migrated to the lower right side of his abdomen. This sounded to Dr. McFee like a textbook case of appendicitis: an inflammation of a tiny vestigial appendage protruding from the colon on the right side of the body. In the fetus the appendix first starts growing directly under the navel, but as the intestines lengthen and become convoluted the appendix gets pushed into the lower right quadrant of the abdomen. But the brain remembers its original location, so that is where it experiences the initial pain—under the belly button. Soon the inflammation spreads to the abdominal wall overlying it. That’s when the pain migrates to the right.
Next Dr. McFee elicited a classic sign called rebound tenderness. With three fingers he very slowly compressed the lower right abdominal wall and noted that this caused no pain. But when he suddenly withdrew his hand to release the pressure, there was a short delay followed by sudden pain. This delay results from the inertial lag of the inflamed appendix as it rebounds to hit the abdominal wall.
Finally, Dr. McFee applied pressure in John’s lower left quadrant, causing him to feel a sharp twinge of pain in the lower right, the true location of the appendix. The pain is caused by the pressure displacing the gas from the left to the right side of the colon, which causes the appendix to inflate slightly. This tell-tale sign, together with John’s high fever and vomiting, clinched the diagnosis. Dr. McFee scheduled the appendectomy right away: The swollen, inflamed appendix could rupture anytime and spill its contents into the abdominal cavity, producing life-threatening peritonitis. The surgery went smoothly, and John was moved to the recovery room to rest and recuperate.
Alas, John’s real troubles had only just begun.2 What should have been a routine recovery became a waking nightmare when a small clot from a vein in his leg was released into his blood and clogged up one of his cerebral arteries, causing a stroke. The first sign of this was when his wife walked into the room. Imagine John’s astonishment—and hers—when he could no longer recognize her face. The only way he knew who he was talking to was because he could still recognize her voice. Nor could he recognize anyone else’s face—not even his own face in a mirror.
“I know it’s me,” he said. “It winks when I wink and it moves when I do. It’s obviously a mirror. But it doesn’t look like me.”
John emphasized repeatedly that there was nothing wrong with his eyesight.
“My vision is fine, Doctor. Things are out of focus in my mind, not in my eye.”
Even more remarkably, he couldn’t recognize familiar objects.
When shown a carrot, he said, “It’s a long thing with a tuft at the end—a paint brush?”
He was using fragments of the object to intellectually deduce what it was instead of recognizing it instantly as a whole like most of us do. When shown a picture of a goat, he described it as “an animal of some kind. Maybe a dog.” Often John could perceive the generic class the object belonged to—he could tell animals from plants, for example—but could not say what specific exemplar of that class it was. These symptoms were not caused by any limitation of intellect or verbal sophistication. Here is John’s description of a carrot, which I’m sure you will agree is much more detailed than what most of us could produce:
A carrot is a root vegetable cultivated and eaten as human consumption worldwide. Grown from seed as an annual crop, the carrot produces long thin leaves growing from a root head. This is deep growing and large in comparison with the leaf growth, sometimes gaining a length of twelve inches under a leaf top of similar height when grown in good soil. Carrots may be eaten raw or cooked and can be harvested during any size or state of growth. The general shape of a carrot is an elongated cone, and its color ranges between red and yellow.
John could no longer identify objects, but he could still deal with them in terms of their spatial extent, their dimensions, and their movement. He was able to walk around the hospital without bumping into obstacles. He could even drive short distances with some help—a truly amazing feat, given all the traffic he had to negotiate. He could locate and gauge the approximate speed of a moving vehicle, although he couldn’t tell if it was a Jaguar, a Volvo, or even a truck. These distinctions prove to be irrelevant to actually driving.
When he reached home, he saw an engraving of St. Paul’s Cathedral that had been hanging on the wall for decades. He said he knew someone had given it to him but had forgotten what it depicted. He could produce an astonishingly accurate drawing, copying its every detail—including printing flaws! But even after he had done so, he still couldn’t say what it was. John could see perfectly clearly; he just didn’t know what he was seeing—which is why the flaws weren’t “flaws” for him.
John had been an avid gardener prior to his stroke. He walked out of his house and much to his wife’s surprise picked up a pair of shears and proceeded to trim the hedge effortlessly. However, when he tried to tidy up the garden, he often plucked the flowers from the ground because he couldn’t tell them from the weeds. Trimming the hedge, on the other hand, required only that John see where the unevenness was. No identification of objects was required. The distinction between seeing and knowing is illustrated well by John’s predicament.
Although an inability to know what he was looking at was John’s main problem, he had other subtler difficulties as well. For instance he had tunnel vision, often losing the proverbial forest for the trees. He could reach out and grab a cup of coffee when it was on an uncluttered table by itself, but got hopelessly muddled when confronted with a buffet service. Imagine his surprise when he discovered he had poured mayonnaise rather than cream into his coffee.
Our perception of the world ordinarily seems so effortless that we tend to take it for granted. You look, you see, you understand—it seems as natural and inevitable as water flowing downhill. It’s only when something goes wrong, as in patients like John, that we realize how extraordinarily sophisticated it really is. Even though our picture of the world seems coherent and unified, it actually emerges from the activity of those thirty (or more) different visual areas in the cortex, each of which mediates multiple subtle functions. Many of these areas are ones we share with other mammals but some of them “split” off at some point to become newly specialized modules in higher primates. Exactly how many of our visual areas are unique to humans isn’t clear. But a great deal more is known about them than about other higher brain regions such as the frontal lobes, which are involved in such things as morality, compassion, and ambition. A thorough understanding of how the visual system really works may therefore provide insights into the more general strategies the brain uses to handle information, including the ones that are unique to us.
A FEW YEARS ago I was at an after-dinner speech given by David Attenborough at the university aquarium in La Jolla, California, near where I work. Sitting next to me was a distinguished-looking man with a walrus moustache. After his fourth glass of wine he told me that he worked for the creation science institute in San Diego. I was very tempted to tell him that creation science is an oxymoron, but before I could do so he interrupted me to ask where I worked and what I was currently interested in.
“Autism and synesthesia these days. But I also study vision.”
“Vision? What’s there to study?”
“Well, what do you think goes on in your head when you look at something—that chair for example?”
“There is an optical image of the chair in my eye—on my retina. The image is transmitted along a nerve to the visual area of the brain and you see it. Of course, the image in the eye is upside down, so it has to be made upright again in the brain before you see it.”
His answer embodies a logical fallacy called the homunculus fallacy. If the image on the retina is transmitted to the brain and “projected” on some internal mental screen, then you would need some sort of “little man”—a homunculus—inside your head looking at the image and interpreting or understanding it for you. But how would the homunculus be able to understand the images flashing by on his screen? There would have to be another, even smaller chap looking at the image in his head—and so on. It is a situation of infinite regress of eyes, images, and little people, without really solving the problem of perception.
In order to understand perception, you need to first get rid of the notion that the image at the back of your eye simply gets “relayed” back to your brain to be displayed on a screen. Instead, you must understand that as soon as the rays of light are converted into neural impulses at the back of your eye, it no longer makes any sense to think of the visual information as being an image. We must think, instead, of symbolic descriptions that represent the scenes and objects that had been in the image. Say I wanted someone to know what the chair across the room from me looks like. I could take him there and point it out to him so he could see it for himself, but that isn’t a symbolic description. I could show him a photograph or a drawing of the chair, but that is still not symbolic because it bears a physical resemblance. But if I hand the person a written note describing the chair, we have crossed over into the realm of symbolic description: The squiggles of ink on the paper bear no physical resemblance to the chair; they merely symbolize it.
Analogously, the brain creates symbolic descriptions. It does not re-create the original image, but represents the various features and aspects of the image in totally new terms—not with squiggles of ink, of course, but in its own alphabet of nerve impulses. These symbolic encodings are created partly in your retina itself but mostly in your brain. Once there, they are parceled and transformed and combined in the extensive network of visual brain areas that eventually let you recognize objects. Of course, the vast majority of this processing goes on behind the scenes without entering your conscious awareness, which is why it feels effortless and obvious, as it did to my dinner companion.
I’ve been glibly dismissing the homunculus fallacy by pointing out the logical problem of infinite regress. But is there any direct evidence that it is in fact a fallacy?
First, what you see can’t just be the image on the retina because the retinal image can remain constant but your perception can change radically. If perception simply involves transmitting and displaying an image on an inner mental screen, how can this be true? Second, the converse is also true: The retinal image can change, yet your perception of the object remains stable. Third, despite appearances, perception takes time and happens in stages.
The first reason is the easiest to appreciate. It’s the basis of many visual illusions. A famous example is the Necker cube, discovered accidentally by the Swiss crystallographer Louis Albert Necker (Figure 2.1). He was gazing at a cuboid crystal through a microscope one day, and imagine his amazement when the crystal suddenly seemed to flip! Without visibly moving, it switched its orientation right in front of his very eyes. Was the crystal itself changing? To find out he drew a wire-frame cube on a scrap of paper and noticed that the drawing did the same thing. Conclusion: His perception was changing, not the crystal. You can try this on yourself. It is fun even if you have tried it dozens of times in the past. You will see that the drawing suddenly flips on you, and it’s partly—but only partly—under voluntary control. The fact that your perception of an unchanging image can change and flip radically is proof that perception must involve more than simply displaying an image in the brain. Even the simplest act of perception involves judgment and interpretation. Perception is an actively formed opinion of the world rather than a passive reaction to sensory input from it.
FIGURE 2.1 Skeleton outline drawing of a cube: You can see it in either of two different ways, as if it were above you or below you.
FIGURE 2.2 This picture has not been Photoshopped! It was taken with an ordinary camera from the special viewing point that makes the Ames room work. The fun part of this illusion comes when you have two people walk to opposite ends of the room: It looks for all the world as if they are standing just a few feet apart from each other and one of them has grown giant, with his head brushing the ceiling, while the other has shrunk to the size of a fairy.
Another striking example is the famous Ames room illusion (Figure 2.2). Imagine taking a regular room like the one you are in now and stretching out one corner so the ceiling is much taller in that corner than elsewhere. Now make a small hole in any of the walls and look inside the room. From nearly any viewing perspective you see a bizarrely deformed trapezoidal room. But there is one special vantage point from which, astonishingly, the room looks completely normal! The walls, floor, and ceiling all seem to be arranged at proper right angles to each other, and the windows and floor tiles seem to be of uniform size. The usual explanation for this illusion is that from this particular vantage point the image cast on your retina by the distorted room is identical to that which would be produced by a normal room—it’s just geometric optics. But surely this begs the question. How does your visual system know what a normal room should look like from exactly this particular vantage point?
To turn the problem on its head, let’s assume you are looking through a peephole into a normal room. There is in fact an infinity of distorted trapezoidal Ames rooms that could produce exactly the same image, yet you stably perceive a normal room. Your perception doesn’t oscillate wildly between a million possibilities; it homes in instantly on the correct interpretation. The only way it can do this is by bringing in certain built-in knowledge or hidden assumptions about the world—such as walls being parallel, floor tiles being squares, and so on—to eliminate the infinity of false rooms.
The study of perception, then, is the study of these assumptions and the manner in which they are enshrined in the neural hardware of your brain. A life-size Ames room is hard to construct, but over the years psychologists have created hundreds of visual illusions that have been cunningly devised to help us explore the assumptions that drive perception. Illusions are fun to look at since they seem to violate common sense. But they have the same effect on a perceptual psychologist as the smell of burning rubber does on an engineer—an irresistible urge to discover the cause (to quote what biologist Peter Medawar said in a different context).
Take the simplest of illusions, foreshadowed by Isaac Newton and established clearly by Thomas Young (who, coincidentally, also deciphered the Egyptian hieroglyphics). If you project a red and a green circle of light to overlap on a white screen, the circle you see actually looks yellow. If you have three projectors—one shining red, another green, and another blue—with proper adjustment of each projector’s brightness you can produce any color of the rainbow—indeed, hundreds of different hues just by mixing them in the right ratio. You can even produce white. This illusion is so astonishing that people have difficulty believing it when they first see it. It’s also telling you something fundamental about vision. It illustrates the fact that even though you can distinguish thousands of colors, you have only three classes of color-sensitive cells in the eye: one for red light, one for green, and one for blue. Each of these responds optimally to just one wavelength but will continue to respond, though less well, to other wavelengths. Thus any observed color will excite the red, green, and blue receptors in different ratios, and higher brain mechanisms interpret each ratio as a different color. Yellow light, for example, falls halfway in the spectrum between red and green, so it activates red and green receptors equally and the brain has learned, or evolved to interpret, this as the color we call yellow. Using just colored lights to figure out the laws of color vision was one of the great triumphs of visual science. And it paved the way for color printing (economically using just three dyes) and color TV.
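For readers who like to see the arithmetic, the ratio idea can be captured in a few lines of Python. This is a toy sketch, not a model of real photoreceptors: the Gaussian sensitivity curves and peak wavelengths below are invented purely for illustration. The point is the one just made: a single wavelength excites all three receptor classes to different degrees, and it is the ratio of the three responses, not any one of them, that downstream circuits interpret as a color.

```python
# Toy sketch of ratio coding in trichromatic vision. The sensitivity curves and
# peak wavelengths below are invented for illustration; real cone spectra are
# broader and messier.
import math

PEAKS = {"red": 565.0, "green": 535.0, "blue": 445.0}  # nominal peak wavelengths (nm)

def receptor_response(wavelength_nm, peak_nm, width_nm=60.0):
    """Made-up Gaussian sensitivity curve for one receptor class."""
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def encode(wavelength_nm):
    """Return the normalized trio of receptor activations for a single wavelength."""
    raw = {name: receptor_response(wavelength_nm, peak) for name, peak in PEAKS.items()}
    total = sum(raw.values())
    return {name: round(value / total, 2) for name, value in raw.items()}

# A wavelength halfway between the red and green peaks excites those two
# receptor classes equally and the blue class hardly at all. Circuits that read
# off this ratio would report the same color that a mixture of red and green
# lights produces, which is the sense in which the mixture "looks yellow."
print(encode(550))   # roughly {'red': 0.49, 'green': 0.49, 'blue': 0.02}
```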
My favorite example of how we can use illusions to discover the hidden assumptions underlying perception is shape-from-shading (Figure 2.3). Although artists have long used shading to enhance the impression of depth in their pictures, it’s only recently that scientists have begun to investigate it carefully. For example, in 1987 I created several computerized displays like the one shown in Figure 2.3—arrays of randomly scattered disks in a field of gray. Each disk contains a smooth gradient from white at one end to black on the other, and the background is the exact “middle gray” between black and white. These experiments were inspired, in part, by the observations of the Victorian physicist David Brewster. If you inspect the disks in Figure 2.3, they will initially look like a set of eggs lit from the right side. With some effort you can also see them as cavities lit from the left side. But you cannot simultaneously see some as eggs and some as cavities even if you try hard. Why? One possibility is that the brain picks the simplest interpretation by default, seeing all of the disks the same way. It occurred to me that another possibility is that your visual system assumes that there is only a single light source illuminating the entire scene or large chunks of it. This isn’t strictly true of an artificially lit environment with many lightbulbs, but it is largely true of the natural world, given that our planetary system has only one sun. If you ever catch hold of an alien, be sure to show her this display to find out if her solar system had a single sun like ours. A creature from a binary star system might be immune to the illusion.
FIGURE 2.3 Eggs or cavities? You can flip between the two depending on which direction you decide the light is shining from, right or left. They always all flip together.
So which explanation is correct—a preference for the simpler interpretation, or an assumption of a single light source? To find out I did the obvious experiment of creating the mixed display shown in Figure 2.4 in which the top and bottom rows have different directions of shading. You will notice that in this display, if you get yourself to see the top row as eggs, then the bottom row is always seen as cavities, and vice versa, and it is impossible to see them all simultaneously as eggs or simultaneously as cavities. This proves it’s not simplicity but the assumption of a single light source.
FIGURE 2.4 Two rows of shaded disks. When the top row is seen as eggs, the bottom row looks like cavities, and vice versa. It is impossible to see them all the same way. Illustrates the “single light source” assumption built into perceptual processing.
FIGURE 2.5 Sunny side up. Half the disks (light on top) are seen as eggs and half as cavities. This illusion shows that the visual system automatically assumes that light shines from above. View the page upside down, and the eggs and cavities will switch.
It gets better. In Figure 2.5 the disks have been shaded vertically rather than horizontally. You will notice that the ones that are light on top are nearly always seen as eggs bulging toward you, whereas the ones that are dark on top are seen as cavities. We may conclude that, in addition to the single-light-source assumption revealed in Figure 2.4, there is another even stronger assumption at work, which is that the light is shining from above. Again, this makes sense given the position of the sun in the natural world. Of course, this isn’t always true; the sun is sometimes on the horizon. But it’s true statistically—and it’s certainly never below you. If you rotate the picture so it’s upside down, you will find that all the bumps and cavities switch. On the other hand, if you rotate it exactly 90 degrees, you will find that the shaded disks are now ambiguous as in Figure 2.4, since you don’t have a built-in bias for assuming light comes from the left or the right.
Now I’d like you to try another experiment. Go back to Figure 2.4, but this time, instead of rotating the page, hold it upright and tilt your body and head to the right, so your right ear almost touches your right shoulder and your head is parallel to the ground. What happens? The ambiguity disappears. The top row always looks like bumps and the bottom row like cavities. This is because the top row is now light on the top with reference to your head and retina, even though it’s still light on the right in reference to the world. Another way of saying this is that the overhead lighting assumption is head centered, not world centered or body-axis centered. It’s as if your brain assumes that the sun is stuck to the top of your head and remains stuck to it when you tilt your head 90 degrees! Why such a silly assumption? Because statistically speaking, your head is upright most of the time. Your ape ancestors rarely walked around looking at the world with their heads tilted. Your visual system therefore takes a shortcut; it makes the simplifying assumption that the sun is stuck to your head. The goal of vision is not to get things perfectly right all the time, but to get it right often enough and quickly enough to survive long enough to leave behind as many babies as you can. As far as evolution is concerned, that’s all that matters. Of course, this shortcut makes you vulnerable to certain incorrect judgments, as when you tilt your head, but this happens so rarely in real life that your brain can get away with being lazy like this. The explanation of this visual illusion illustrates how you can begin with a relatively simple set of displays, ask questions of the kind that your grandmother might ask, and gain real insights, in a matter of minutes, into how we perceive the world.
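For those who like the logic spelled out, here is a minimal sketch in Python of the two assumptions just described. The angles, thresholds, and sign conventions are invented for illustration, not anyone’s published model: each disk is summarized by the direction of its bright side, that direction is converted into head-centered coordinates, and the disk is read as a bump if its bright side is roughly “up” relative to your head, as a cavity if it is roughly “down,” and as ambiguous if it is lit from the side.

```python
# Toy sketch of the "single light source, shining from above, in head-centered
# coordinates" assumptions. All numbers and conventions here are invented for
# illustration.
import math

def perceived_shape(bright_side_deg, head_tilt_deg=0.0):
    """
    bright_side_deg: direction of the disk's bright side in world coordinates
                     (90 = bright on top, 0 = bright on the right).
    head_tilt_deg:   rotation of the head in the image plane, counterclockwise
                     positive; tilting your right ear toward your right
                     shoulder is roughly -90.
    """
    # Express the bright side in head-centered coordinates: the brain's "up"
    # tilts along with the head.
    head_centered = (bright_side_deg - head_tilt_deg) % 360
    alignment = math.cos(math.radians(head_centered - 90))  # +1 means bright side on (head-centered) top
    if alignment > 0.5:
        return "bump"       # lit from above, so it must bulge toward you
    if alignment < -0.5:
        return "cavity"     # lit from below, so it must be hollow
    return "ambiguous"      # lit from the side: no built-in left/right bias

print(perceived_shape(90))                     # light on top, head upright       -> bump (Figure 2.5)
print(perceived_shape(0))                      # light on the right, head upright -> ambiguous (Figure 2.4)
print(perceived_shape(0, head_tilt_deg=-90))   # same disk, head tilted right     -> bump
```

On this sketch, the single-light-source assumption of Figure 2.4 simply amounts to requiring that every disk in a display be judged against the same assumed light direction, which is why the two rows always flip together.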
Illusions are an example of the black-box approach to the brain. The metaphor of the black box comes to us from engineering. An engineering student might be given a sealed box with electrical terminals and lightbulbs studding the surface. Running electricity through certain terminals causes certain bulbs to light up, but not in a straightforward or one-to-one relationship. The assignment is for the student to try different combinations of electrical inputs, noting which lightbulbs are activated in each case, and from this trial-and-error process deduce the wiring diagram of the circuit inside the box without opening it.
In perceptual psychology we are often faced with the same basic problem. To narrow down the range of hypotheses about how the brain processes certain kinds of visual information, we simply try varying the sensory inputs and noting what people see or believe they see. Such experiments enable us to discover the laws of visual function, in much the same way Gregor Mendel was able to discover the laws of heredity by cross-breeding plants with various traits, even though he had no way to know anything about the molecular and genetic mechanisms that made them true. In the case of vision, I think the best example is one we’ve already considered, in which Thomas Young predicted the existence of three kinds of color receptors in the eye based on playing around with colored lights.
When studying perception and discovering the underlying laws, sooner or later one wants to know how these laws actually arise from the activity of neurons. The only way to find out is by opening the black box—that is, by directly experimenting on the brain. Traditionally there are three ways to approach this: neurology (studying patients with brain lesions), neurophysiology (monitoring the activity of neural circuits or even of single cells), and brain imaging. Specialists in each of these areas are mutually contemptuous and have tended to see their own methodology as the most important window on brain functioning, but in recent decades there has been a growing realization that a combined attack on the problem is needed. Even philosophers have now joined the fray. Some of them, like Pat Churchland and Daniel Dennett, have a broad vision, which can be a valuable antidote to the narrow cul-de-sacs of specialization that the majority of neuroscientists find themselves trapped in.
IN PRIMATES, INCLUDING humans, a large chunk of the brain—comprising the occipital lobes and parts of the temporal and parietal lobes—is devoted to vision. Each of the thirty or so visual areas within this chunk contains either a complete or partial map of the visual world. Anyone who thinks vision is simple should look at one of David Van Essen’s anatomical diagrams depicting the structure of the visual pathways in monkeys (Figure 2.6), bearing in mind that they are likely to be even more complex in humans.
Notice especially that there are at least as many fibers (actually many more!) coming back from each stage of processing to an earlier stage as there are fibers going forward from each area into the next area higher up in the hierarchy. The classical notion of vision as a stage-by-stage sequential analysis of the image, with increasing sophistication as you go along, is demolished by the existence of so much feedback. What these back projections are doing is anybody’s guess, but my hunch is that at each stage in processing, whenever the brain achieves a partial solution to a perceptual “problem”—such as determining an object’s identity, location, or movement—this partial solution is immediately fed back to earlier stages. Repeated cycles of such an iterative process help eliminate dead ends and false solutions when you look at “noisy” visual images such as camouflaged objects (like the scene “hidden” in Figure 2.7).3 In other words, these back projections allow you to play a sort of “twenty questions” game with the image, enabling you to rapidly home in on the correct answer. It’s as if each of us is hallucinating all the time and what we call perception involves merely selecting the one hallucination that best matches the current input. This is an overstatement, of course, but it has a large grain of truth. (And, as we shall see later, may help explain aspects of our appreciation of art.)
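To make the flavor of this “twenty questions” game concrete, here is a deliberately crude toy sketch in Python. It is my own analogy, not anything the anatomy dictates: a few candidate interpretations (hypothetical binary feature templates) compete to explain a noisy image, poorly fitting candidates are eliminated on each cycle, and wherever the survivors agree, that partial solution is fed back to clean up the noisy early representation, which in turn sharpens the next round of matching.

```python
# Toy "twenty questions" loop: candidate interpretations are matched against a
# noisy image, dead ends are pruned, and the survivors' consensus is fed back
# to disambiguate the input. All templates and numbers are invented.
import random

TEMPLATES = {                       # hypothetical binary feature vectors
    "dog":  [1, 0, 1, 1, 0, 1, 1, 0],
    "cat":  [1, 1, 0, 0, 1, 1, 0, 1],
    "tree": [0, 0, 1, 0, 0, 1, 0, 0],
}

def noisy_view(label, flip_prob=0.25):
    """Simulate a camouflaged scene: each feature is misreported with some probability."""
    return [f if random.random() > flip_prob else 1 - f for f in TEMPLATES[label]]

def match(template, image):
    return sum(a == b for a, b in zip(template, image)) / len(image)

def recognize(image, rounds=4):
    candidates = list(TEMPLATES)
    for _ in range(rounds):
        # Feedforward sweep: how well does each surviving candidate explain the image?
        scores = {c: match(TEMPLATES[c], image) for c in candidates}
        best = max(scores.values())
        # Eliminate dead ends and false solutions.
        candidates = [c for c in candidates if scores[c] >= best - 0.2]
        if len(candidates) == 1:
            break
        # Feedback sweep: where every surviving candidate agrees on a feature,
        # overwrite the noisy value with that consensus.
        for i in range(len(image)):
            votes = {TEMPLATES[c][i] for c in candidates}
            if len(votes) == 1:
                image[i] = votes.pop()
    return max(candidates, key=lambda c: match(TEMPLATES[c], image))

print(recognize(noisy_view("dog")))   # usually settles on "dog" despite the noise
```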
FIGURE 2.6 David Van Essen’s diagram depicting the extraordinary complexity of the connections between the visual areas in primates, with multiple feedback loops at every stage in the hierarchy. The “black box” has been opened, and it turns out to contain…a whole labyrinth of smaller black boxes! Oh well, no deity ever promised us it would be easy to figure ourselves out.
FIGURE 2.7 What do you see? It looks like random splatterings of black ink at first, but when you look long enough you can see the hidden scene.
The exact manner in which object recognition is achieved is still quite mysterious. How do the neurons firing away when you look at an object recognize it as a face rather than, say, a chair? What are the defining attributes of a chair? In modern designer furniture shops a big blob of plastic with a dimple in the middle is recognized as a chair. It would appear that what is critical is its function—something that permits sitting—rather than whether it has four legs or a back rest. Somehow the nervous system treats the act of sitting as synonymous with the perception of a chair. And if it is a face, how do you recognize the person instantly even though you have encountered millions of faces over a lifetime and stored away the corresponding representations in your memory banks?
Certain features or signatures of an object can serve as a shortcut to recognizing it. In Figure 2.8a, for example, there is a circle with a squiggle in the middle but you see a pig’s rump. Similarly, in Figure 2.8b you have four blobs on either side of a pair of straight vertical lines, but as soon as I add some features such as claws, you might see it as a bear climbing a tree. These images suggest that certain very simple features can serve as diagnostic labels for more complex objects, but they don’t answer the even more basic question of how the features themselves are extracted and recognized. How is a squiggle recognized as a squiggle? And surely the squiggle in Figure 2.8a can only be a tail given the overall context of being inside a circle. No rump is seen if the squiggle falls outside the circle. This raises the central problem in object recognition; namely, how does the visual system determine relationships between features to identify the object? We still have precious little understanding.
FIGURE 2.8 (a) A pig rump.
(b) A bear.
The problem is even more acute for faces. Figure 2.9a is a cartoon face. The mere presence of horizontal and vertical dashes can substitute for nose, eyes, and mouth, but only if the relationship between them is correct. The face in Figure 2.9b has the same exact features as the one in Figure 2.9a, but they’re scrambled. No face is seen—unless you happen to be Picasso. Their correct arrangement is crucial.
But surely there is more to it. As Steven Kosslyn of Harvard University has pointed out, the relationship between features (such as nose, eyes, mouth in the right relative positions) tells you only that it’s a face and not, say, a pig or a donkey; it doesn’t tell you whose face it is. For recognizing individual faces you have to switch to measuring the relative sizes and distances between features. It’s as if your brain has created a generic template of the human face by averaging together the thousands of faces it has encountered. Then, when you encounter a novel face, you compare the new face with the template—that is, your neurons mathematically subtract the average face from the new one. The pattern of deviation from the average face becomes your specific template for the new face. For example, compared to the average face Richard Nixon’s face would have a bulbous nose and shaggy eyebrows. In fact, you can deliberately exaggerate these deviations and produce a caricature—a face that can be said to look more like Nixon than the original. Again, we will see later how this has relevance to some types of art.
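The arithmetic behind a caricature is simple enough to write down. Here is a minimal sketch with made-up feature names and numbers (they are not measurements of anyone’s face): represent a face as a handful of proportions, subtract the average face, and add the deviations back in amplified by a factor k greater than 1.

```python
# Toy caricature generator: exaggerate a face's deviations from the average
# face by a factor k. Feature names and values are invented for illustration.
AVERAGE_FACE = {"nose_width": 1.00, "brow_thickness": 1.00, "jowl_droop": 1.00}

nixon = {"nose_width": 1.30, "brow_thickness": 1.40, "jowl_droop": 1.20}  # made-up proportions

def caricature(face, average=AVERAGE_FACE, k=2.0):
    """Return a new face whose deviations from the average are amplified k-fold."""
    return {
        feature: round(average[feature] + k * (face[feature] - average[feature]), 2)
        for feature in face
    }

# k = 1 reproduces the original face; k = 2 yields a drawing that is "more
# Nixon than Nixon"; a k between 0 and 1 would instead nudge the face back
# toward the bland average.
print(caricature(nixon))   # {'nose_width': 1.6, 'brow_thickness': 1.8, 'jowl_droop': 1.4}
```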
FIGURE 2.9 (a) A cartoon face.
(b) A scrambled face.
We have to bear in mind, though, that words such as “exaggeration,” “template,” and “relationships” can lull us into a false sense of having explained much more than we really have. They conceal depths of ignorance. We don’t know how neurons in the brain perform any of these operations. Nonetheless, the scheme I have outlined might provide a useful place to start future research on these questions. For example, over twenty years ago neuroscientists discovered neurons in the temporal lobes of monkeys that respond to faces, each set of neurons firing when the monkey looks at a specific familiar face, such as Joe the alpha male or Lana the pride of his harem. In an essay on art that I published in 1998, I predicted that such neurons might, paradoxically, fire even more vigorously in response to an exaggerated caricature of the face in question than to the original. Intriguingly, this prediction has now been confirmed in an elegant series of experiments performed at Harvard. Such experiments are important because they will help us translate purely theoretical speculations on vision and art into more precise, testable models of visual function.
Object recognition is a difficult problem, and I have offered some speculations on what the steps involved are. The word “recognition,” however, doesn’t tell us anything much unless we can explain how the object or face in question evokes meaning—based on the memory associations of the face. The question of how neurons encode meaning and evoke all the semantic associations of an object is the holy grail of neuroscience, whether you are studying memory, perception, art, or consciousness.
AGAIN, WE DON’T really know why we higher primates have such a large number of distinct visual areas, but it seems that they are all specialized for different aspects of vision, such as color vision, seeing movement, seeing shapes, recognizing faces, and so on. The computational strategies for each of these might be sufficiently different that evolution developed the neural hardware separately.
A good example of this is the middle temporal (MT) area, a small patch of cortical tissue found in each hemisphere that appears to be mainly concerned with seeing movement. In the late 1970s a woman in Zurich, whom I’ll call Ingrid, suffered a stroke that damaged the MT areas on both sides of her brain but left the rest of her brain intact. Ingrid’s vision was normal in most respects: She could read newspapers and recognize objects and people. But she had great difficulty seeing movement. When she looked at a moving car, it appeared like a long succession of static snapshots, as if seen under a strobe. She could read the number plate and tell you what color it was, but there was no impression of motion. She was terrified of crossing the street because she didn’t know how fast the cars were approaching. When she poured water into a glass, the stream of water looked like a static icicle. She didn’t know when to stop pouring because she couldn’t see the rate at which the water level was rising, so it always overflowed. Even talking to people was like “talking on a phone,” she said, because she couldn’t see the lips moving. Life became a strange ordeal for her. So it would seem that the MT areas are concerned mainly with seeing motion but not with other aspects of vision. There are four other bits of evidence supporting this view.
First, you can record from single nerve cells in a monkey’s MT areas. The cells signal the direction of moving objects but don’t seem that interested in color or shape. Second, you can use microelectrodes to stimulate tiny clusters of cells in a monkey’s MT area. This causes the cells to fire, and the monkey starts hallucinating motion when the current is applied. We know this because the monkey starts moving its eyes around, tracking imaginary moving objects in its visual field. Third, in human volunteers, you can watch MT activity with functional brain imaging such as fMRI (functional MRI). In fMRI, magnetic fields in the brain produced by changes in blood flow are measured while the subject is doing or looking at something. In this case, the MT areas light up while you are looking at moving objects, but not when you are shown static pictures, colors, or printed words. And fourth, you can use a device called a transcranial magnetic stimulator to briefly stun the neurons of volunteers’ MT areas—in effect creating a temporary brain lesion. Lo and behold, the subjects become briefly motion blind like Ingrid while the rest of their visual abilities remain, to all appearances, intact. All this might seem like overkill to prove the single point that MT is the motion area of the brain, but in science it never hurts to have converging lines of evidence that prove the same thing.
Likewise, there is an area called V4 in the temporal lobe that appears to be specialized for processing color. When this area is damaged on both sides of the brain, the entire world becomes drained of color and looks like a black-and-white motion picture. But the patient’s other visual functions seem to remain perfectly intact: She can still perceive motion, recognize faces, read, and so on. And just as with the MT areas, you can get converging lines of evidence through single-neuron studies, functional imaging, and direct electrical stimulation to show that V4 is the brain’s “color center.”
Unfortunately, unlike MT and V4, most of the rest of the thirty or so visual areas of the primate brain do not reveal their functions so cleanly when they are lesioned, imaged, or zapped. This may be because they are not as narrowly specialized, or their functions are more easily compensated for by other regions (like water flowing around an obstacle), or perhaps our definition of what constitutes a single function is murky (“ill posed,” as computer scientists say). But in any case, beneath all the bewildering anatomical complexity there is a simple organizational pattern that is very helpful in the study of vision. This pattern is a division of the flow of visual information along (semi)separate, parallel pathways (Figure 2.10).
Let’s first consider the two pathways by which visual information enters the cortex. The so-called old pathway starts in the retinas, relays through an ancient midbrain structure called the superior colliculus, and then projects—via the pulvinar—to the parietal lobes (see Figure 2.10). This pathway is concerned with spatial aspects of vision: where, but not what, an object is. The old pathway enables us to orient toward objects and track them with our eyes and heads. If you damage this pathway in a hamster, the animal develops a curious tunnel vision, seeing and recognizing only what is directly in front of its nose.
FIGURE 2.10 The visual information from the retina gets to the brain via two pathways. One (called the old pathway) relays through the superior colliculus, arriving eventually in the parietal lobe. The other (called the new pathway) goes via the lateral geniculate nucleus (LGN) to the visual cortex and then splits once again into the “how” and “what” streams.
The new pathway, which is highly developed in humans and in primates generally, allows sophisticated analysis and recognition of complex visual scenes and objects. This pathway projects from the retina to V1, the first and largest of our cortical visual maps, and from there splits into two subpathways, or streams: pathway 1, or what is often called the “how” stream, and pathway 2 the “what” stream. You can think of the “how” stream (sometimes called the “where” stream) as being concerned with the relationships among visual objects in space, while the “what” stream is concerned with the relationships of features within visual objects themselves. Thus the “how” stream’s function overlaps to some extent with that of the old pathway, but it mediates much more sophisticated aspects of spatial vision—determining the overall spatial layout of the visual scene rather than just the location of an object. The “how” stream projects to the parietal lobe and has strong links to the motor system. When you dodge an object hurled at you, when you navigate around a room avoiding bumping into things, when you step gingerly over a tree branch or a pit, or when you reach out to grab an object or fend off a blow, you are relying on the “how” stream. Most of these computations are unconscious and highly automated, like a robot or a zombie copilot that follows your instructions without need of much guidance or monitoring.
Before we consider the “what” stream, let me first mention the fascinating visual phenomenon of blindsight. It was discovered in Oxford in the late 1970s by Larry Weiskrantz. A patient named Gy had suffered substantial damage to his left visual cortex—the origin point for both the “how” and the “what” streams. As a result he became completely blind in his right visual field—or so it seemed at first. In the course of testing Gy’s intact vision, Weiskrantz told him to reach out and try to touch a tiny spot of light that he told Gy was to his right. Gy protested that he couldn’t see it and there would be no point, but Weiskrantz asked him to try anyway. To his amazement, Gy correctly touched the spot. Gy insisted that he had been guessing, and was surprised when he was told that he had pointed correctly. But repeated trials proved that it had not been a lucky stab in the dark; Gy’s finger homed in on target after target, even though he had no conscious visual experience of where they were or what they looked like. Weiskrantz dubbed the syndrome blindsight to emphasize its paradoxical nature. Short of ESP, how can we explain this? How can a person locate something he cannot see? The answer lies in the anatomical division between the old and new pathways in the brain. Gy’s new pathway, running through V1, was damaged, but his old pathway was perfectly intact. Information about the spot’s location traveled up smoothly to his parietal lobes, which in turn directed the hand to move to the correct location.
This explanation of blindsight is elegant and widely accepted, but it raises an even more intriguing question: Doesn’t this imply that only the new pathway has visual consciousness? When the new pathway is blocked, as in Gy’s case, visual awareness winks out. The old pathway, on the other hand, is apparently performing equally complex computations to guide the hand, but without a wisp of consciousness creeping in. This is one reason why I likened this pathway to a robot or a zombie. Why should this be so? After all, they are just two parallel pathways made up of identical-looking neurons, so why is only one of them linked to conscious awareness?
Why indeed. While I have raised it here as a teaser, the question of conscious awareness is a big one that we will leave for the final chapter.
Now let’s have a look at pathway 2, the “what” stream. This stream is concerned mainly with recognizing what an object is and what it means to you. This pathway projects from V1 to the fusiform gyrus (see Figure 3.6), and from there to other parts of the temporal lobes. Note that the fusiform area itself mainly performs a dry classification of objects: It discriminates Ps from Qs, hawks from handsaws, and Joe from Jane, but it does not assign significance to any of them. Its role is analogous to that of a shell collector (conchologist) or a butterfly collector (lepidopterist), who classifies and labels hundreds of specimens into discrete nonoverlapping conceptual bins without necessarily knowing (or caring) anything else about them. (This is approximately true but not completely; some aspects of meaning are probably fed back from higher centers to the fusiform.)
But as pathway 2 proceeds past the fusiform to other parts of the temporal lobes, it evokes not only the name of a thing but a penumbra of associated memories and facts about it—broadly speaking the semantics, or meaning, of an object. You not only recognize Joe’s face as being “Joe,” but you remember all sorts of things about him: He is married to Jane, has a warped sense of humor, is allergic to cats, and is on your bowling team. This semantic retrieval process involves widespread activation of the temporal lobes, but it seems to center on a handful of “bottlenecks” that include Wernicke’s language area and the inferior parietal lobule (IPL), which is involved in quintessentially human abilities such as naming, reading, writing, and arithmetic. Once meaning is extracted in these bottleneck regions, the messages are relayed to the amygdala, which lies embedded in the front tip of the temporal lobes, to evoke feelings about what (or whom) you are seeing.
In addition to pathways 1 and 2,4 there seems to be an alternate, somewhat more reflexive pathway for emotional response to objects that I call pathway 3. If the first two were the “how” and “what” streams, this one could be thought of as the “so what” stream. In this pathway, biologically salient stimuli such as eyes, food, facial expressions, and animate motion (such as someone’s gait and gesturing) pass from the fusiform gyrus through an area in the temporal lobe called the superior temporal sulcus (STS) and then straight to the amygdala.5 In other words, pathway 3 bypasses high-level object perception—and the whole rich penumbra of associations evoked through pathway 2—and shunts quickly to the amygdala, the gateway to the emotional core of the brain, the limbic system. This shortcut probably evolved to promote fast reaction to high-value situations, whether innate or learned.
The amygdala works in conjunction with past stored memories and other structures in the limbic system to gauge the emotional significance of whatever you are looking at: Is it friend, foe, mate? Food, water, danger? Or is it just something mundane? If it’s insignificant—just a log, a piece of lint, the trees rustling in the wind—you feel nothing toward it and most likely will ignore it. But if it’s important, you instantly feel something. If it is an intense feeling, the signals from the amygdala also cascade into your hypothalamus (see Figure Int.3), which not only orchestrates the release of hormones but also activates the autonomic nervous system to prepare you to take appropriate action, whether it’s feeding, fighting, fleeing, or wooing. (Medical students use the mnemonic of the “four Fs” to remember these.) These autonomic responses include all the physiological signs of strong emotion such as increased heart rate, rapid shallow breathing, and sweating. The human amygdala is also connected with the frontal lobes, which add subtle flavors to this “four F” cocktail of primal emotions, so that you have not just anger, lust, and fear, but also arrogance, pride, caution, admiration, magnanimity, and the like.
LET US NOW return to John, our stroke patient from earlier in the chapter. Can we explain at least some of his symptoms based on the broad-brushstrokes layout of the visual system I have just painted? John was definitely not blind. Remember, he could almost perfectly copy an engraving of St. Paul’s Cathedral even though he did not recognize what he was drawing. The earlier stages of visual processing were intact, so John’s brain could extract lines and shapes and even discern relationships between them. But the crucial next link in the “what” stream—the fusiform gyrus—from which visual information could trigger recognition, memory, and feelings—had been cut off. This disorder is called agnosia, a term coined by Sigmund Freud meaning that the patient sees but doesn’t know. (It would have been interesting to see if John had the right emotional response to a lion even while being unable to distinguish it consciously from a goat, but the researchers didn’t try that. It would have implied a selective sparing of pathway 3.)
John could still “see” objects, could reach out and grab them, and walk around the room dodging obstacles because his “how” stream was largely intact. Indeed, anyone watching him walk around wouldn’t even suspect that his perception had been profoundly deranged. Remember, when he returned home from the hospital, he could trim hedges with shears or pull out a plant from the soil. And yet he could not tell weeds from flowers, or for that matter recognize faces or cars or tell salad dressing from cream. Thus symptoms that would otherwise seem bizarre and incomprehensible begin to make sense in terms of the anatomical scheme, with its multiple visual pathways, that I’ve just outlined.
This is not to say that his spatial sense was completely intact. Recall that he could grab an isolated coffee cup easily enough but was befuddled by a cluttered buffet table. This suggests that he was also experiencing some disruption of a process vision researchers call segmentation: knowing which fragments of a visual scene belong together to constitute a single object. Segmentation is a critical prelude to object recognition in the “what” stream. For instance, if you see the head and hindquarters of a cow protruding from opposite sides of a tree trunk, you automatically perceive the entire animal—your mind’s eye fills it in without question. We really have no idea how neurons in the early stages of visual processing accomplish this linking so effortlessly. Aspects of this process of segmentation were probably also damaged in John.
Additionally, John’s lack of color vision suggests that there was damage to his color area, V4, which not surprisingly lies in the same brain region—the fusiform gyrus—as the face recognition area. John’s main symptoms can be partially explained in terms of damage to specific aspects of visual function, but some of them cannot be. One of his most intriguing symptoms became manifest when he was asked to draw flowers from memory. Figure 2.11 shows the drawings he produced, which he confidently labeled rose, tulip, and iris. Notice that the flowers are drawn well but they don’t look like any real flowers that we know! It’s as though he had a generic concept of a flower and, lacking access to memories of real flowers, produced what might be called Martian flowers that don’t really exist.
FIGURE 2.11 “Martian flowers.” When asked to draw specific flowers, John instead produced generic flowers, conjured up, without realizing it, in his imagination.
A few years after John returned home, his wife died and he moved into a sheltered home, where he lived for the rest of his life. (He died about three years before this book was printed.) While he was there, he managed to take care of himself by staying in a small room where everything was organized to facilitate his recognition. Unfortunately, as his physician Glyn Humphreys pointed out to me, he would still get terribly lost whenever he went outside, once even in the garden. Yet despite these handicaps he displayed considerable fortitude and courage, keeping up his spirits until the very end.
JOHN’S SYMPTOMS ARE strange enough, but not long ago I encountered a patient named David who had an even more bizarre symptom. His problem was not with recognizing objects or faces but with responding to them emotionally—the very last step in the chain of events that we call perception. I described him in my previous book, Phantoms in the Brain. David was a student in one of my classes before he was involved in a car crash that left him comatose for two weeks. After he woke up from the coma, he made a remarkable recovery within a few months. He could think clearly, was alert and attentive, and could understand what was said to him. He could also speak, write, and read fluently, even though his speech was slightly slurred. Unlike John, he had no problem recognizing objects and people. Yet he had one profound delusion. Whenever he saw his mother, he would say, “Doctor, this woman looks exactly like my mother but she isn’t—she’s an imposter pretending to be my mother.”
He had a similar delusion about his father but not about anyone else. David had what we now call the Capgras syndrome (or delusion), named after the physician who first described it. David was the first patient I had ever seen with this disorder, and I was transformed from skeptic to believer. Over the years I had learned to be wary of odd syndromes. A majority of them are real but sometimes you read about a syndrome that represents little more than a neurologist’s or psychiatrist’s vanity—an attempted shortcut to fame by having a disease named after him or being credited with its discovery.
But seeing David convinced me that the Capgras syndrome is bona fide. What could be causing such a bizarre delusion? One interpretation that can still be found in older psychiatry textbooks is a Freudian one. The explanation would run like this: Maybe David, like all men, had a strong sexual attraction to his mother when he was a baby—the so-called Oedipus complex. Fortunately, when he grew up his cortex became more dominant over his primitive emotional structures and began repressing or inhibiting these forbidden sexual impulses toward mom. But maybe the blow to David’s head damaged his cortex, thereby removing the inhibition and allowing his dormant sexual urges to emerge into consciousness. Suddenly and inexplicably, David found himself being sexually turned on by his mother. Perhaps the only way he could “rationalize” this away was to assume she wasn’t really his mother. Hence the delusion.
This explanation is ingenious, but it never made much sense to me. For example, soon after I had seen David, I encountered another patient, Steve, who had the same delusion about his pet poodle! “This dog looks just like Fifi,” he would say, “but it really isn’t. It just looks like Fifi.” Now how can the Freudian theory account for this? You would have to posit latent bestial tendencies lurking in the subconscious minds of all men, or something equally absurd.
The correct explanation, it turns out, is anatomical. (Ironically Freud himself famously said, “Anatomy is destiny.”) As noted previously, visual information is initially sent to the fusiform gyrus, where objects, including faces, are first discriminated. The output from the fusiform is relayed via pathway 3 to the amygdala, which performs an emotional surveillance of the object or face and generates the appropriate emotional response. What about David, though? It occurred to me that the car accident might have selectively damaged the fibers in pathway 3 that connect his fusiform gyrus, partly via the STS, to his amygdala while leaving both those structures, as well as pathway 2, completely intact. Because pathway 2 (meaning and language) is unaffected, he still knows his mother’s face by sight and remembers everything about her. And because his amygdala and the rest of his limbic system are unaffected, he can still feel laughter and loss like any normal person. But the link between perception and emotion has been severed, so his mother’s face doesn’t evoke the expected feelings of warmth. In other words, there is recognition but without the expected emotional jolt. Perhaps the only way David’s brain can cope with this dilemma is to rationalize it away by concluding that she is an imposter.6 This seems an extreme rationalization, but as we shall see in the final chapter the brain abhors discrepancies of any kind and an absurdly far-fetched delusion is sometimes the only way out.
The advantage of our neurological theory over the Freudian view is that it can be tested experimentally. As we saw earlier, when you look at something that’s emotionally evocative—a tiger, your lover, or indeed, your mother—your amygdala signals your hypothalamus to prepare your body for action. This fight-or-flight reaction is not all or nothing; it operates on a continuum. A mildly, moderately, or profoundly emotional experience elicits a mild, moderate, or profound autonomic reaction, respectively. And part of this continuous autonomic reaction is microsweating: Your whole body, including your palms, becomes damper or drier in proportion to any upticks or downticks in your level of emotional arousal at any given moment.
This is good news for us scientists because it means we can measure your emotional reaction to the things you see by simply monitoring the degree of your microsweating. This is done by taping two passive electrodes to your skin and routing them through a device called an ohmmeter to monitor your galvanic skin response (GSR), the moment-to-moment fluctuations in the electrical resistance of your skin. (GSR is also called the skin conductance response, or SCR.) Thus when you see a foxy pinup or a gruesome medical picture, your body sweats, your skin resistance drops, and you get a big GSR. On the other hand, if you see something completely neutral, like a doorknob or an unfamiliar face, you get no GSR (although the doorknob may very well produce a GSR in a Freudian psychoanalyst).
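To make the measurement concrete, here is a minimal sketch, in Python, of the arithmetic involved: raw resistance readings from an ohmmeter are converted to skin conductance, and a response is flagged if conductance rises enough after the stimulus. The sample values, response criterion, and function names are assumptions chosen for illustration, not a description of our actual laboratory apparatus.

```python
# Minimal sketch (illustrative assumptions, not our lab's actual setup):
# turn skin-resistance readings (ohms) into conductance (microsiemens) and
# flag a skin conductance response (SCR) following a stimulus.

def detect_scr(resistance_ohms, stimulus_index, min_rise_us=0.05):
    """Return the SCR amplitude in microsiemens, or None if below criterion.

    Emotional arousal -> sweating -> lower resistance -> higher conductance.
    """
    conductance = [1e6 / r for r in resistance_ohms]  # ohms -> microsiemens
    baseline = conductance[stimulus_index]
    peak = max(conductance[stimulus_index:])
    amplitude = peak - baseline
    return amplitude if amplitude >= min_rise_us else None

# Resistance drops (conductance rises) shortly after the picture shown at sample 3.
trace = [500_000, 501_000, 500_500, 499_000, 470_000, 455_000, 460_000]
print(detect_scr(trace, stimulus_index=3))  # roughly 0.19 microsiemens: a clear response
```

The point is simply that the signal is continuous and involuntary: the bigger the emotional jolt, the bigger the rise in conductance.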
Now you may well wonder why we should go through the elaborate process of measuring GSR to monitor emotional arousal. Why not simply ask people how something made them feel? The answer is that between the stage of emotional reaction and the verbal report, there are many complex layers of processing, so what you often get is an intellectualized or censored story. For instance, if a subject is a closet homosexual, he may in fact deny his arousal when he sees a Chippendales dancer. But his GSR can’t lie because he has no control over it. (GSR is one of the physiological signals that is used in polygraph, or so-called lie-detector tests.) It’s a foolproof test to see if emotions are genuine as opposed to verbally faked. And believe it or not, all normal people get huge GSR jolts when they are shown a picture of their mothers—they don’t even have to be Jewish!
Based on this reasoning we measured David’s GSR. When we flashed neutral pictures of things like a table and chairs, there was no GSR. Nor did his GSR change when he was shown unfamiliar faces, since there was no jolt of familiarity. So far, nothing unusual. But when we showed him his mother’s picture, there was no GSR either. This never occurs in normal people. This observation provides striking confirmation of our theory.
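The logic of the test can be captured in a few lines. The numbers below are invented for illustration, not our published data; they simply show the predicted pattern: a normal subject gives a large response only to the mother’s picture, whereas a disconnection like David’s flattens that response while leaving the others unchanged.

```python
# Toy summary of the prediction (all numbers invented for illustration).
from statistics import mean

def mean_response(amplitudes):
    """Mean SCR amplitude in microsiemens, counting absent responses (None) as zero."""
    return mean(a if a is not None else 0.0 for a in amplitudes)

# Hypothetical per-trial amplitudes for a control subject versus David.
control = {"neutral objects": [0.00, 0.02], "unfamiliar faces": [0.02, 0.04], "mother": [0.90, 1.10]}
david = {"neutral objects": [0.00, 0.02], "unfamiliar faces": [0.02, 0.04], "mother": [0.00, 0.02]}

for label, data in (("control", control), ("David", david)):
    print(label, {category: round(mean_response(values), 2) for category, values in data.items()})
# The control shows a large mean response only to the mother; David's stays near zero throughout.
```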
But if this is true, why doesn’t David call, say, his mailman an imposter, assuming he used to know his mailman prior to the accident? After all, the disconnection between vision and emotion should apply equally to the mailman—not just his mother. Shouldn’t this lead to the same symptom? The answer is that his brain doesn’t expect an emotional jolt when he sees the mailman. Your mother is your life; your mailman is just some person.
Another paradox was that David did not have the imposter delusion when his mother spoke to him on the phone from the adjacent room.
“Oh Mom, it’s so good to hear from you. How are you?” he would say.
How does my theory account for this? How can someone be delusional about his mother when she shows up in person but not when she phones him? There is in fact an elegantly simple explanation. It turns out that there is a separate anatomical pathway from the hearing centers of the brain (the auditory cortex) to your amygdala. This pathway was not destroyed in David, so his mother’s voice evoked the strong positive emotions he expected to feel. This time there was no need for delusion.
Soon after our findings on David were published in the journal Proceedings of the Royal Society of London, I received a letter from a patient named Mr. Turner, who lived in Georgia. He claimed to have developed Capgras syndrome after a head injury. He liked my theory, he said, because he now understood he wasn’t crazy or losing his mind; there was a perfectly logical explanation for his strange symptoms, which he would now try to overcome if he could. But he then went on to add that what troubled him most was not the imposter illusion, but the fact that he no longer enjoyed visual scenes—such as beautiful landscapes and flower gardens—which had been immensely pleasing prior to the accident. Nor did he enjoy great works of art like he used to. His knowledge that this was caused by the disconnection in his brain did not restore the appeal of flowers or art. This made me wonder whether these connections might play a role in all of us when we enjoy art. Can we study these connections to explore the neural basis of our aesthetic response to beauty? I’ll return to this question when we discuss the neurology of art in Chapters 7 and 8.
One last twist to this strange tale. It was late at night and I was in bed, when the phone rang. I woke up and looked at the clock: it was 4 A.M. It was an attorney. He was calling me from London and had apparently overlooked the time difference.
“Is this Dr. Ramachandran?”
“Yes it is,” I mumbled, still half-asleep.
“I am Mr. Watson. We have a case we would like your opinion on. Perhaps you could fly over and examine the patient?”
“What’s this all about?” I said, trying not to sound irritated.
“My client, Mr. Dobbs, was in a car accident,” he said. “He was unconscious for several days. When he came out of it he was quite normal except for a slight difficulty finding the right word when he talks.”
“Well, I’m happy to hear that,” I said. “Some slight word-finding difficulty is extremely common after brain injury—no matter where the injury is.” There was a pause. So I asked, “What can I do for you?”
“Mr. Dobbs—Jonathan—wants to file a lawsuit against the people whose car collided with his. The fault was clearly the other party’s, so their insurance company is going to compensate Jonathan financially for the damage to his car. But the legal system is very conservative here in England. The physicians here have found him to be physically normal—his MRI is normal and there are no neurological symptoms or other injuries anywhere in his body. So the insurance company will only pay for the car damage, not for any health-related issues.”
“Well.”
“The problem, Dr. Ramachandran, is that he claims to have developed the Capgras syndrome. Even though he knows that he is looking at his wife, she often seems like a stranger, a new person. This is extremely troubling to him, and he wants to sue the other party for a million dollars for having caused a permanent neuropsychiatric disturbance.”
“Pray continue.”
“Soon after the accident someone found your book Phantoms in the Brain lying on my client’s coffee table. He admitted to reading it, which is when he realized he might have the Capgras syndrome. But this bit of self-diagnosis didn’t help him in any way. The symptoms remained just the same. So he and I want to sue the other party for a million dollars for having produced this permanent neurological symptom. He fears he may even end up divorcing his wife.
“The trouble is, Dr. Ramachandran, the other attorney is claiming that my client has simply fabricated the whole thing after reading your book. Because if you think about it, it’s very easy to fake the Capgras syndrome. Mr. Dobbs and I would like to fly you out to London so you can administer the GSR test and prove to the court that he does indeed have the Capgras syndrome, that he isn’t malingering. I understand you cannot fake this test.”
The attorney had done his homework. But I had no intention of flying to London just to administer this test.
“Mr. Watson, what’s the problem? If Mr. Dobbs finds that his wife looks like a new woman every time he sees her, he should find her perpetually attractive. This is a good thing—not bad at all. We should all be so lucky!” My only excuse for this tasteless joke is that I was still only barely awake.
There was a long pause at the other end and a click as he hung up on me. I never heard from him again. My sense of humor is not always well received.
Even though my remark may have sounded frivolous, it wasn’t entirely off the mark. There’s a well-known psychological phenomenon called the Coolidge effect, named after President Calvin Coolidge. It’s based on a little-known experiment performed by rat psychologists decades ago. Start with a sex-deprived male rat in a cage. Put a female rat in the cage. The male mounts the female, consummating the relationship several times until he collapses from sheer sexual exhaustion. Or so it would seem. The fun begins if you now introduce a new female into the cage. He gets going again and performs several times until he is once again thoroughly exhausted. Now introduce a third new female rat, and our apparently exhausted male rat starts all over again. This voyeuristic experiment is a striking demonstration of the potent effect of novelty on sexual attraction and performance. I have often wondered whether the effect is also true for female rats courting males, but to my knowledge that hasn’t been tried—probably because for many years most psychologists were men.
The story is told that President Coolidge and his wife were on a state visit to Oklahoma, where they were invited to visit a chicken coop, apparently one of the local attractions. The president had to give a speech first, but since Mrs. Coolidge had already heard the speech many times, she decided to go to the coop an hour early. As the farmer showed her around, she was surprised to see that the coop held dozens of hens but only one majestic rooster. When she asked him about this, he replied, “Well, he is a fine rooster. He goes on and on all night and day servicing the hens.”
“All night?” said Mrs. Coolidge. “Will you do me a big favor? When the president gets here, tell him, in exactly the same words, what you just told me.”
An hour later when the president showed up, the farmer repeated the story.
The president asked, “Tell me something: Does the rooster go on all night with the same hen or different hens?”
“Why, different hens of course,” replied the farmer.
“Well, do me a favor,” said the president. “Tell the First Lady what you just told me.”
This story may be apocryphal, but it does raise a fascinating question. Would a patient with Capgras syndrome never get bored with his wife? Would she remain perpetually novel and attractive? If the syndrome could somehow be evoked temporarily with transcranial magnetic stimulation…one could make a fortune.
CHAPTER 3
Loud Colors and Hot Babes: Synesthesia
“My life is spent in one long effort to escape from the commonplaces of existence. These little problems help me to do so.”
—SHERLOCK HOLMES
WHENEVER FRANCESCA CLOSES HER EYES AND TOUCHES A PARTICULAR texture, she experiences a vivid emotion: Denim, extreme sadness. Silk, peace and calm. Orange peel, shock. Wax, embarrassment. She sometimes feels subtle nuances of emotions. Grade 60 sandpaper produces guilt, and grade 120 evokes “the feeling of telling a white lie.”
Mirabelle, on the other hand, experiences colors every time she sees numbers, even though they are typed in black ink. When recalling a phone number, she conjures up in her mind’s eye the spectrum of colors corresponding to the digits and reads off the numbers one by one, deducing each from its color. This makes phone numbers easy for her to memorize.
When Esmeralda hears a C-sharp played on the piano, she sees blue. Other notes evoke other distinct colors—so much so that different piano keys are actually color coded for her, making it easier to remember and play musical scales.
These women are not crazy, nor are they suffering from a neurological disorder. They and millions of otherwise normal people have synesthesia, a surreal blending of sensation, perception, and emotion. Synesthetes (as such people are called) experience the ordinary world in extraordinary ways, seeming to inhabit a strange no-man’s-land between reality and fantasy. They taste colors, see sounds, hear shapes, or touch emotions in myriad combinations.
When my lab colleagues and I first came across synesthesia in 1997, we didn’t know what to make of it. But in the years since, it has proven to be an unexpected key for unlocking the mysteries of what makes us distinctly human. It turns out that this quirky little phenomenon not only sheds light on normal sensory processing but also takes us on a meandering path to confront some of the most intriguing aspects of our minds, such as abstract thinking and metaphor. It may even illuminate attributes of human brain architecture and genetics that underlie important aspects of creativity and imagination.
When I embarked on this journey nearly twelve years ago, I had four goals in mind. First, to show that synesthesia is real: These people aren’t just making it up. Second, to propose a theory of exactly what is going on in their brains that sets them apart from nonsynesthetes. Third, to explore the genetics of the condition. Fourth, and most important, to explore the possibility that, far from being a mere curiosity, synesthesia may give us valuable clues to understanding some of the most mysterious aspects of the human mind—abilities such as language, creativity, and abstract thought that come to us so effortlessly that we take them for granted. Finally, as an additional bonus, synesthesia may also shed light on age-old philosophical questions of qualia—the ineffable raw qualities of experience—and consciousness.