3. Humans can use words “off-line,” that is, to refer to things or events that are not currently visible or exist only in the past, the future, or a hypothetical reality: “I saw an apple on the tree yesterday, and decided I will pluck it tomorrow but only if it is ripe.” This type of complexity isn’t found in most spontaneous forms of animal communication. (Apes who are taught sign language can, of course, use signs in the absence of the object being referred to. For example, they can sign “banana” when hungry.)
4. Only humans, as far as we know, can use metaphor and analogy, although here we are in a gray area: the elusive boundary between thought and language. When an alpha male ape makes a genital display to intimidate a rival into submission, is this analogous to the metaphor “F—k you” that humans use to insult one another? I wonder. But even so, this limited kind of metaphor falls far short of puns and poems, or of Tagore’s description of the Taj Mahal as a “teardrop on the cheek of time.” Here again is that mysterious boundary between language and thought.
5. Flexible, recursive syntax is found only in human language. Most linguists single out this feature to argue for a qualitative jump between animal and human communication, possibly because it has more regularities and can be tackled more rigorously than other, more nebulous aspects of language.
These five aspects of language are by and large unique to humans. Of these, the first four are often lumped together as protolanguage, a term invented by the linguist Derek Bickerton. As we’ll see, protolanguage set the stage for the subsequent emergence and culmination of a highly sophisticated system of interacting parts that we call, as a whole system, true language.
TWO TOPICS IN brain research always seem to attract geniuses and crackpots. One is consciousness and the other is the question of how language evolved. So many zany ideas on language origins were being proposed in the nineteenth century that the Linguistic Society of Paris introduced a formal ban on all papers dealing with this topic. The society argued that, given the paucity of evolutionary intermediates or fossil languages, the whole enterprise was doomed to fail. More likely, linguists of the day were so fascinated by the intricacies of rules intrinsic to language itself that they were not curious about how it may all have started. But censorship and pessimistic predictions are never a good idea in science.
A number of cognitive neuroscientists, myself included, believe that mainstream linguists have been overemphasizing the structural aspects of language. Pointing to the fact that the mind’s grammatical systems are to a large extent autonomous and modular, most linguists have shunned the question of how these interact with other cognitive processes. They profess interest solely in the rules that are fundamental to the brain’s grammatical circuits, not how the circuits actually work. This narrow focus removes the incentive to investigate how this mechanism interacts with other mental capacities such as semantics (which orthodox linguists don’t even regard as an aspect of language!), or to ask evolutionary questions about how it might have evolved from preexisting brain structures.
The linguists can be forgiven, if not applauded, for their wariness of evolutionary questions. With so many interlocking parts working in such a coordinated manner, it’s hard to figure out, or even imagine, how language could have evolved by the essentially blind process of natural selection. (By “natural selection,” I mean the progressive accumulation of chance variations that enhance the organism’s ability to pass on its genes to the next generation.) It’s not difficult to imagine a single trait, such as a giraffe’s long neck, being a product of this relatively simple adaptive process. Giraffe ancestors that had mutant genes conferring slightly longer necks had better access to tree leaves, causing them to survive longer or breed more, which caused the beneficial genes to increase in number down through the generations. The result was a progressive increase in neck length.
But how can multiple traits, each of which would be useless without the other, evolve in tandem? Many complex, interwoven systems in biology have been held up by would-be debunkers of evolutionary theory to argue for so-called intelligent design—the idea that the complexities of life could only occur through divine intervention or the hand of God. For example, how could the vertebrate eye evolve via natural selection? A lens and a retina are mutually necessary, so each would be useless without the other. Yet by definition the mechanism of natural selection has no foresight, so it couldn’t have created the one in preparation for the other.
Fortunately, as Richard Dawkins has pointed out, there are numerous creatures in nature with eyes at all stages of complexity. It turns out there is a logical evolutionary sequence that leads from the simplest possible light-sensing mechanism—a patch of light-sensitive cells on the outer skin—to the exquisite optical organ we enjoy today.
Language is similarly complex, but in this case we have no idea what the intermediate steps might have been. As the French linguists pointed out, there are no fossil languages or half-human creatures around for us to study. But this hasn’t stopped people from speculating on how the transition might have come about. Broadly speaking, there have been four main ideas. Some of the confusion between these ideas results from failing to define “language” clearly in the narrow sense of syntax versus the broader sense that includes semantics. I will use the term in the broader sense.
THE FIRST IDEA was advanced by Darwin’s contemporary Alfred Russel Wallace, who independently discovered the principle of natural selection (though he rarely gets the credit he deserves, probably because he was Welsh rather than English). Wallace argued that while natural selection was fine for turning fins into feet or scales into hair, language was too sophisticated to have emerged in this way. His solution to the problem was simple: Language was put into our brains by God. This idea may or may not be right but as scientists we can’t test it, so let’s move on.
Second, there’s the idea put forward by the founding father of modern linguistic science, Noam Chomsky. Like Wallace, he too was struck by the sophistication and complexity of language. Again, he couldn’t conceive of natural selection being the correct explanation for how language evolved.
Chomsky’s theory of language origins is based on the principle of emergence. The word simply means the whole is greater—sometimes vastly so—than the mere sum of the parts. A good example would be the production of salt—an edible white crystal—by combining the pungent, greenish, poisonous gas chlorine with the shiny, light metal sodium. Neither of these elements has anything saltlike about it, yet they combine into salt. Now if such a complex, wholly unpredictable new property can emerge from a simple interaction between two elementary substances, then who can predict what novel unforeseen properties might emerge when you pack 100 billion nerve cells into the tiny space of the human cranial cavity? Maybe language is one such property.
Chomsky’s idea isn’t quite as silly as some of my colleagues think. But even if it’s right, there’s not much one can say or do about it given the current state of brain science. There’s simply no way of testing it. And although Chomsky doesn’t speak of God, his idea comes perilously close to Wallace’s. I don’t know for sure that he is wrong, but I don’t like the idea for the simple reason that one can’t get very far in science by saying (in effect) something miraculous happened. I’m interested in finding a more convincing explanation that’s based on the known principles of organic evolution and brain function.
The third theory, proposed by one of the most distinguished exponents of evolutionary theory in this country, the late Stephen Jay Gould, argues that contrary to what most linguists claim, language is not a specialized mechanism based on brain modules and that it did not evolve specifically for its most obvious present purpose, communication. On the contrary, it represents the specific implementation of a more general mechanism that evolved earlier for other reasons, namely thinking. In Gould’s theory, language is rooted in a system that gave our ancestors a more sophisticated way to mentally represent the world and, as we shall see in Chapter 9, a way to represent themselves within that representation. Only later did this system get repurposed or extended into a means of communication. In this view, then, thinking was an exaptation—a mechanism that originally evolved for one function and then provided the opportunity for something very different (in this case language) to evolve.
We need to bear in mind that the exaptation itself must have evolved by conventional natural selection. Failure to appreciate this has resulted in much confusion and bitter feuds. The principle of exaptation is not an alternative to natural selection, as Gould’s critics believe, but actually complements and expands its scope and range of applicability. For instance, feathers originally evolved from reptilian scales as an adaptation to provide insulation (just like hair in mammals), but then were exapted for flight. Reptiles evolved a three-bone multihinged lower jaw to permit swallowing large prey, but two of these three bones became an exaptation for improved hearing. The convenient location of these bones made possible the evolution of two little sound-amplifying bones inside your middle ear. No engineer would have dreamed of such an inelegant solution, which goes to illustrate the opportunistic nature of evolution. (As Francis Crick once said, “God is a hacker, not an engineer.”) I will expand on these ideas about jawbones transforming into ear bones at the end of this chapter.
Another example of a more general-purpose adaptation is the evolution of flexible fingers. Our arboreal ancestors originally evolved them for climbing trees, but hominins adapted them for fine manipulation and tool use. Today, thanks to the power of culture, fingers are a general-purpose mechanism that can be used for rocking a cradle, wielding a scepter, pointing, or even counting for math. But no one—not even a naïve adaptationist or evolutionary psychologist—would argue that fingers evolved because they were selected for pointing and counting.
Similarly, Gould argues, thinking may have evolved first, given its obvious usefulness in dealing with the world, which then set the stage for language. I agree with Gould’s general idea that language didn’t originally evolve specifically for communication. But I don’t like the idea that thinking evolved first and language (by which I mean all of language, not just syntax) was simply a byproduct. One reason I don’t like it is that it merely postpones the problem rather than solving it. Since we know even less about thinking and how it might have evolved than we do about language, saying language evolved from thought doesn’t tell us very much. As I have said many times before, you can’t get very far in science by trying to explain one mystery with another mystery.
The fourth idea—diametrically opposed to Gould’s—was proposed by the distinguished Harvard University psychologist Steven Pinker, who declares language to be an instinct, as ingrained in human nature as coughing, sneezing, or yawning. By this he doesn’t mean it’s as simple as these other instincts, but that it is a highly specialized brain mechanism, an adaptation that is unique to humans and that evolved through conventional mechanisms of natural selection expressly for communication. So Pinker agrees with his former teacher Chomsky in asserting (correctly, I believe) that language is a highly specialized organ, but disagrees with Gould’s views on the important role played by exaptation. I think there is merit to Pinker’s view, but I also think his idea is far too general to be useful. It is not actually wrong, but it is incomplete. It seems a bit like saying that the digestion of food must be based on the first law of thermodynamics—which is true for sure, but it’s also true for every other system on earth. The idea doesn’t tell you much about the detailed mechanisms of digestion. In considering the evolution of any complex biological system (whether the ear or the language “organ”), we would like to know not merely that it was done by natural selection, but exactly how it got started and then evolved to its present level of sophistication. This isn’t as important for a more straightforward problem like the giraffe’s neck (although even there, one wants to know how genes selectively lengthen neck vertebrae). But it is an important part of the story when you are dealing with more complex adaptations.
So there you have it, four different theories of language. Of these we can discard the first two—not because we know for sure that they are wrong, but because they can’t be tested. But of the remaining two, who’s right—Gould or Pinker? I’d like to suggest that neither of them is, although there’s a grain of truth in each (so if you are a Gould/Pinker fan, you could say they were both right but didn’t take their arguments far enough).
I would like to propose a different framework for thinking about language evolution that incorporates some features of both but then goes well beyond them. I call it the “synesthetic bootstrapping theory.” As we shall see, it provides a valuable clue to understanding the origins of not only language, but also a host of other uniquely human traits such as metaphorical thinking and abstraction. In particular, I’ll argue that language and many aspects of abstract thought evolved through exaptations whose fortuitous combination yielded novel solutions. Notice that this is different from saying that language evolved from some general mechanism such as thinking, and it also differs from Pinker’s idea that language evolved as a specialized mechanism exclusively for communication.
NO DISCUSSION OF the evolution of language would be complete without considering the question of nature versus nurture. To what extent are the rules of language innate, and to what extent are they absorbed from the world early in life? Arguments about the evolution of language have been fierce, and the nature-versus-nurture debate has been the most acrimonious of all. I mention it here only briefly because it has already been the subject of a number of recent books. Everyone agrees that words are not hardwired in the brain. The same object can have different names in different languages—“dog” in English, “chien” in French, “kutta” in Hindi, “maaa” in Thai, and “nai” in Tamil—which don’t even sound alike. But with regard to the rules of language, there is no such agreement. Rather, three viewpoints vie for supremacy.
In the first view, the rules themselves are entirely hardwired. Exposure to adult speech is needed only to act as a switch to turn the mechanism on. The second view asserts that the rules of language are extracted statistically through listening. Bolstering this idea, artificial neural networks have been trained to categorize words and infer rules of syntax simply through passive exposure to language.
While these two models certainly capture some aspects of language acquisition, they cannot be the whole story. After all, apes, housecats, and iguanas have neural networks in their skulls, but they do not learn language even when raised in human households. A bonobo educated at Eton or Cambridge would still be an ape without language.
According to the third view, the competence to acquire the rules is innate, but exposure is needed to pick up the actual rules. This competence is bestowed by a still-unidentified “language acquisition device,” or LAD. Humans have this LAD. Apes lack it.
I favor this third view because it is the one most compatible with my evolutionary framework, and is supported by two complementary facts. First, apes cannot acquire true language even when they are treated like human children and trained daily in hand signs. They end up being able to sign for something they need right away, but their signing lacks generativity (the ability to generate arbitrarily complex new combinations of words), function words, and recursion. Conversely, it is nearly impossible to prevent human children from acquiring language. In some areas of the world, where people from different language backgrounds must trade or work together, children and adults develop a simplified pseudo-language—one with a limited vocabulary, rudimentary syntax, and little flexibility—called a pidgin. But the first generation of children who grow up surrounded by a pidgin spontaneously turn it into a creole—a full-fledged language, with true syntax and all the flexibility and nuance needed to compose novels, songs, and poetry. The fact that creoles arise time and time again from pidgins is compelling evidence for an LAD.
These are important and obviously difficult issues, and it’s unfortunate that the popular press often oversimplifies them by just asking questions like, Is language mainly innate or mainly acquired? Or similarly, Is IQ determined mainly by one’s genes or mainly by one’s environment? When two processes interact linearly, in ways that can be tracked with arithmetic, such questions can be meaningful. You can ask, for instance, “How much of our profits came from investments and how much from sales?” But if the relationships are complex and nonlinear—as they are for any mental attribute, be it language, IQ, or creativity—the question should be not, Which contributes more? but rather, How do they interact to create the final product? Asking whether language is mainly nurture is as silly as asking whether the saltiness of table salt comes mainly from chlorine or mainly from sodium.
The late biologist Peter Medawar provides a compelling analogy to illustrate the fallacy. An inherited disorder called phenylketonuria (PKU) is caused by a rarely occurring abnormal gene that results in a failure to metabolize the amino acid phenylalanine in the body. As the amino acid starts accumulating in the child’s brain, he becomes profoundly retarded. The cure is simple. If you diagnose it early enough, all you do is withhold phenylalanine-containing foods from the diet and the child grows up with an entirely normal IQ.
Now imagine two boundary conditions. Assume there is a planet where the gene is uncommon and phenylalanine is everywhere, like oxygen or water, and is indispensable for life. On this planet, retardation caused by PKU, and therefore variance in IQ in the population, would be entirely attributable to the PKU gene. Here you would be justified in saying that retardation was a genetic disorder or that IQ was inherited. Now consider another planet in which the converse is true: Everyone has the PKU gene but phenylalanine is rare. On this planet you would say that PKU is an environmental disorder caused by a poison called phenylalanine, and most of the variance in IQ is caused by the environment. This example shows that when the interaction between two variables is labyrinthine it is meaningless to ascribe percentage values to the contribution made by either. And if this is true for just one gene interacting with one environmental variable, the argument must hold with even greater force for something as complex and multifactorial as human intelligence, since genes interact not only with the environment but with each other.
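Medawar’s two-planet thought experiment can be made concrete with a short simulation. The sketch below is purely illustrative (the IQ values, gene frequencies, and function names are invented for the example, not taken from any real data): impairment occurs only when gene and dietary exposure coincide, yet the very same interaction produces variance that “looks genetic” on one planet and “looks environmental” on the other.

```python
import random

def iq(has_pku_gene: bool, eats_phenylalanine: bool) -> int:
    """Toy model: impairment occurs only when BOTH the gene and the
    dietary exposure are present (a nonlinear AND interaction)."""
    return 70 if (has_pku_gene and eats_phenylalanine) else 100

def variance_in_iq(gene_rate: float, exposure_rate: float, n: int = 100_000) -> float:
    """Population variance of IQ when the gene and the exposure occur
    independently at the given rates."""
    random.seed(0)  # deterministic for reproducibility
    scores = [iq(random.random() < gene_rate, random.random() < exposure_rate)
              for _ in range(n)]
    mean = sum(scores) / n
    return sum((s - mean) ** 2 for s in scores) / n

# Planet A: gene rare, phenylalanine everywhere -> variance tracks the gene,
# so IQ differences "look genetic"
planet_a = variance_in_iq(gene_rate=0.05, exposure_rate=1.0)

# Planet B: gene universal, phenylalanine rare -> the same variance now
# tracks the diet, so IQ differences "look environmental"
planet_b = variance_in_iq(gene_rate=1.0, exposure_rate=0.05)

print(planet_a, planet_b)
```

The two planets show roughly the same spread in IQ; only the attribution flips, which is Medawar’s point: percentage contributions of “genes” versus “environment” are artifacts of the population you happen to measure, not properties of the trait.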
Ironically, the IQ evangelists (such as Arthur Jensen, William Shockley, Richard Herrnstein, and Charles Murray) use the heritability of IQ itself (sometimes called “general intelligence” or “little g”) to argue that intelligence is a single measurable trait. This would be roughly analogous to saying that general health is one thing just because life span has a strong heritable component that can be expressed as a single number—age! No medical student who believed in “general health” as a monolithic entity would get very far in medical school or be allowed to become a physician—and rightly so—and yet whole careers in psychology and political movements have been built on the equally absurd belief in single measurable general intelligence. Their contributions have little more than shock value.
Returning to language, it should now be obvious which side of the fence I am on: neither. I straddle it proudly. Hence this chapter is not really about how language evolved—though I have been using that phrasing as shorthand—but how language competence, or the ability to acquire language so quickly, evolved. This competence is controlled by genes that were selected for by the evolutionary process. Our questions in the rest of this chapter are, Why were these genes selected, and how did this highly sophisticated competence evolve? Is it modular? How did it all get started? And how did we make the evolutionary transition from the grunts and howls of our apelike ancestors to the transcendent lyricism of Shakespeare?
RECALL THE SIMPLE bouba-kiki experiment. Could it hold the key to understanding how the first words evolved among a band of ancestral hominins in the African savanna between one and two hundred thousand years ago? Since words for the same object are often utterly different in different languages, one is tempted to think that the words chosen for particular objects are entirely arbitrary. This in fact is the standard view among linguists. Now, maybe one night the first band of ancestral hominins just sat around the tribal fire and said,
“Okay, let’s all call this thing a bird. Now let’s all say it together, biiirrrrddddd. Okay let’s repeat again, birrrrrrrdddddd.”
This story is downright silly, of course. But if that’s not how an initial lexicon was constructed, how did it happen? The answer comes from our bouba-kiki experiment, which clearly shows that there is a built-in, nonarbitrary correspondence between the visual shape of an object and the sound (or at least, the kind of sound) that might be its “partner.” This preexisting bias may be hardwired. However small the bias, it may have been sufficient to get the process started. This idea sounds very much like the now discredited “onomatopoeic theory” of language origins, but it isn’t. “Onomatopoeia” refers to words that are based on an imitation of a sound—for example, “thump” and “cluck” to refer to certain sounds, or how a child might call a cat a “meow-meow.” The onomatopoeic theory posited that sounds associated with an object became shorthand for the object itself; the link between word and object arose merely through repeated association. But the theory I favor, the synesthetic theory, is different. The rounded visual shape of the bouba doesn’t make a rounded sound, or indeed any sound at all. Instead, its visual profile resembles the profile of the undulating sound at an abstract level. The synesthetic theory says the link is nonarbitrary, grounded in a true resemblance between the two in a more abstract mental space.
What’s the evidence for this? The anthropologist Brent Berlin has pointed out that the Huambisa tribe of northern Peru has over thirty different names for thirty bird species in their jungle and an equal number of fish names for different Amazonian fishes. If you were to jumble up these sixty names and give them to someone from a completely different sociolinguistic background—say, a Chinese peasant—and ask him to classify the names into two groups, one for birds, one for fish, you would find that, astonishingly, he succeeds in this task well above chance level even though his language doesn’t bear the slightest shred of resemblance to the South American one. I would argue that this is a manifestation of the bouba-kiki effect, in other words, of sound-shape translation.1
But this is only a small part of the story. In Chapter 4, I introduced some ideas about the contribution mirror neurons may have made to the evolution of language. Now, in the remainder of this chapter, we can look at the matter more deeply. To understand the next part, let’s return to Broca’s area in the frontal cortex. This area contains maps, or motor programs, that send signals down to the various muscles of the tongue, lips, palate, and larynx to orchestrate speech. Not coincidentally, this region is also rich in mirror neurons, providing an interface between producing the oral movements for sounds, hearing those sounds, and (least important) watching lip movements.
Just as there is a nonarbitrary correspondence and cross-activation between brain maps for sights and sounds (the bouba-kiki effect), perhaps there is a similar correspondence—a built-in translation—between visual and auditory maps, on the one hand, and the motor maps in Broca’s area on the other. If this sounds a bit cryptic, think again of words like “teeny-weeny,” “un peu,” and “diminutive,” for which the mouth and lips and pharynx actually become small as if to echo or mime the visual smallness, whereas words like “enormous” and “large” entail an actual physical enlargement of the mouth. A less obvious example is “fudge,” “trudge,” “sludge,” “smudge,” and so on, in which there is a prolonged tongue pressing on the palate before the sudden release, as if to mimic the prolonged sticking of the shoe in mud before the relatively sudden release. Here, yet again, is a built-in abstraction device that translates visual and auditory contours into vocal contours specified by muscle twitches.
Another less obvious piece of the puzzle is the link between manual gestures and lip and tongue movements. As mentioned in Chapter 4, Darwin noticed that when you cut with a pair of scissors, you may unconsciously echo these movements by clenching and unclenching your jaws. Since the cortical areas concerned with the mouth and hand are right next to each other, perhaps there is an actual spillover of signals from hands to mouth. As in synesthesia, there appears to be a built-in cross-activation between brain maps, except here it is between two motor maps rather than between sensory maps. We need a new name for this, so let’s call it “synkinesia” (syn meaning “together,” kinesia meaning “movement”).
Synkinesia may have played a pivotal role in transforming an earlier gestural language (or protolanguage, if you prefer) of the hands into spoken language. We know that emotional growls and shrieks in primates arise mainly in the right hemisphere, especially from a part of the limbic system (the emotional core of the brain) called the anterior cingulate. If a manual gesture were being echoed by orofacial movements while the creature was simultaneously making emotional utterances, the net result would be what we call words. In short, ancient hominins had a built-in, preexisting mechanism for spontaneously translating gestures into words. This makes it easier to see how a primitive gestural language could have evolved into speech—an idea that many classical psycholinguists find unappealing.
As a concrete example, consider the phrase “come hither.” Notice that you gesture this idea by holding your palm up and flexing your fingers toward yourself as if to touch the lower part of the palm. Amazingly, your tongue makes a very similar movement as it curls back to touch the palate to utter “hither” or “here”—examples of synkinesia. “Go” involves pouting the lips outward, whereas “come” involves drawing the lips together inward. (In the Indian Dravidian language Tamil—unrelated to English—the word for go is “po”).
Obviously, whatever the original language was back in the Stone Age, it has since been embellished and transformed countless times, so that today we have languages as diverse as English, Japanese, !Kung, and Cherokee. Language, after all, evolves with incredible rapidity; sometimes just two hundred years is enough to alter a language to the point where a young speaker would be barely able to communicate with her great-great-grandmother. By this token, once the juggernaut of full linguistic competence arose in the human mind and culture, the original synkinetic correspondences were probably lost or blended beyond recognition. But in my account, synkinesia sowed the initial seeds of lexicon, helping to form the original vocabulary base on which subsequent linguistic elaboration was built.
Synkinesia and other allied attributes, such as mimicry of other people’s movements and extraction of commonalities between vision and hearing (bouba-kiki), may all rely on computations analogous to what mirror neurons are supposed to do: link concepts across brain maps. These sorts of linkages remind us again of their potential role in the evolution of protolanguage. This hypothesis may seem speculative to orthodox cognitive psychologists, but it provides a window of opportunity—indeed, the only one we have to date—for exploring the actual neural mechanisms of language. And that’s a big step forward. We will pick up the threads of this argument later in this chapter.
We also need to ask how gesturing evolved in the first place.2 At least for verbs like “come” or “go,” it may have emerged through the ritualization of movements that were once used for performing those actions. For instance, you may actually pull someone toward you by flexing your fingers and elbow toward you while grabbing the person. So the movement itself (even when divorced from actual physical contact) became a means of communicating intent. The result is a gesture. You can see how the same argument applies to “push,” “eat,” “throw,” and other basic verbs. And once you have a vocabulary of gestures in place, it becomes easier for corresponding vocalizations to evolve, given the preexisting hardwired translation produced by synkinesia. (The ritualization and reading of gestures may, in turn, have involved mirror neurons, as alluded to in previous chapters.)
So we now have three types of map-to-map resonance going on in the early hominin brain: mapping between visual and auditory maps (bouba-kiki); mapping between those sensory maps and the motor vocalization maps in Broca’s area; and mapping between Broca’s area and the motor areas controlling manual gestures. Bear in mind that each of these biases was probably very small, but acting in conjunction they could have progressively bootstrapped each other, creating the snowball effect that culminated in modern language.
IS THERE ANY neurological evidence for the ideas discussed so far? Recall that many neurons in a monkey’s frontal lobe (in the same region that appears to have become Broca’s area in us) fire when the animal performs a highly specific action like reaching for a peanut, and that a subset of these neurons also fires when the monkey watches another monkey grab a peanut. To do this, the neuron (by which I really mean “the network of which the neuron is a part”) has to compute the abstract similarity between the command signals specifying muscle contraction sequences and the visual appearance of peanut reaching seen from the other monkey’s vantage point. So the neuron is effectively reading the other individual’s intention and could, in theory, also understand a ritualized gesture that resembles the real action. It struck me that the bouba-kiki effect provides an effective bridge between these mirror neurons and the ideas about synesthetic bootstrapping I have presented so far. I considered this argument briefly in an earlier chapter; let me elaborate on it now to make the case for its relevance to the evolution of protolanguage.
The bouba-kiki effect requires a built-in translation between visual appearance, sound representation in the auditory cortex, and sequences of muscle twitches in Broca’s area. Performing this translation almost certainly involves the activation of circuits with mirror-neuron-like properties, mapping one dimension onto another. The inferior parietal lobule (IPL), rich in mirror neurons, is ideally suited for this role. Perhaps the IPL serves as a facilitator for all such types of abstraction. I emphasize, again, that these three features (visual shape, sound inflections, and lip and tongue contour) have absolutely nothing in common except the abstract property of, say, jaggedness or roundness. So what we are seeing here is the rudiments—and perhaps relics of the origins—of the process called abstraction that we humans excel at, namely, the ability to extract the common denominator between entities that are otherwise utterly dissimilar. From being able to extract the jaggedness of the broken glass shape and the sound kiki to seeing the “fiveness” of five pigs, five donkeys, or five chirps may have been a short step in evolution but a giant step for humankind.
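The kind of cross-modal abstraction described here—extracting “jaggedness” from signals that share nothing physically—can be sketched as a toy computation. This is only an illustration of the concept, not a model of what the IPL actually computes; the measure and the outlines below are invented for the example. The same function, fed either a sound’s amplitude envelope or the radius of a shape’s contour, recovers the one abstract property the two signals share.

```python
import math

def jaggedness(samples):
    """Fraction of successive steps at which the direction of change reverses.
    Works on ANY one-dimensional signal: a sound's amplitude envelope, or the
    radius of a shape's outline sampled around its perimeter."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    flips = sum(1 for d1, d2 in zip(diffs, diffs[1:]) if d1 * d2 < 0)
    return flips / max(len(diffs) - 1, 1)

# A smooth "bouba"-like outline: the radius undulates gently around the contour
bouba_outline = [1 + 0.2 * math.sin(3 * t / 10) for t in range(100)]

# A spiky "kiki"-like outline: the radius alternates abruptly at every step
kiki_outline = [1 + 0.3 * (1 if t % 2 else -1) for t in range(100)]

# The jagged contour scores far higher than the smooth one, regardless of
# whether the samples came from a retina or a cochlea
assert jaggedness(kiki_outline) > jaggedness(bouba_outline)
```

The point of the toy is that one measure, indifferent to the sensory channel its input came from, suffices to pair kiki with the jagged shape—the rudiment of cross-modal abstraction.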
I HAVE ARGUED, so far, that the bouba-kiki effect may have fueled the emergence of protowords and a rudimentary lexicon. This was an important step, but language isn’t just words. There are two other important aspects to consider: syntax and semantics. How are these represented in the brain, and how did they evolve? The fact that these two functions are at least partially autonomous is well illustrated by Broca’s and Wernicke’s aphasias. As we have seen, a patient with the latter syndrome produces elaborate, smoothly articulated, grammatically flawless sentences that convey no meaning whatsoever. The Chomskian “syntax box” in the intact Broca’s area goes “open loop” and produces well-formed sentences, but without Wernicke’s area to supply it with meaningful content, the sentences are gibberish. It’s as though Broca’s area on its own can juggle the words with the correct rules of grammar—just as a computer program might—without any awareness of meaning. (Whether it is capable of more complex rules such as recursion remains to be seen; it’s something we are currently studying.)
We’ll come back to syntax, but first let’s look at semantics (again, roughly speaking, the meaning of a sentence). What exactly is meaning? It’s a word that conceals vast depths of ignorance. Although we know that Wernicke’s area and parts of the temporo-parieto-occipital (TPO) junction, including the angular gyrus (Figure 6.2), are critically involved, we have no idea how neurons in these areas actually do their job. Indeed, the manner in which neural circuitry embodies meaning is one of the great unsolved mysteries of neuroscience. But if you allow that abstraction is an important step in the genesis of meaning, then our bouba-kiki example might once again provide the clue. As already noted, the sound kiki and the jagged drawing would seem to have nothing in common. One is a one-dimensional, time-varying pattern on the sound receptors in your ear, whereas the other is a two-dimensional pattern of light arriving on your retina all in one instant. Yet your brain has no difficulty in abstracting the property of jaggedness from both signals. As we have seen, there are strong hints that the angular gyrus is involved in this remarkable ability we call cross-modal abstraction.
FIGURE 6.2 A schematic depiction of resonance between brain areas that may have accelerated the evolution of protolanguage. Abbreviations: B, Broca’s area (for speech and syntactic structure). A, auditory cortex (hearing). W, Wernicke’s area for language comprehension (semantics). AG, angular gyrus for cross-modal abstraction. H, hand area of the motor cortex, which sends motor commands to the hand (compare with Penfield’s sensory cortical map in Figure 1.2). F, face area of the motor cortex (which sends command messages to the facial muscles, including lips and tongue). IT, the inferotemporal cortex/fusiform area, which represents visual shapes. Arrows depict two-way interactions that may have emerged in human evolution: 1, connections between the fusiform area (visual processing) and auditory cortex mediate the bouba-kiki effect. The cross-modal abstraction required for this probably requires an initial passage through the angular gyrus. 2, interactions between the posterior language areas (including Wernicke’s area) and motor areas in or near Broca’s area. These connections (the arcuate fasciculus) are involved in cross-domain mapping between sound contours and motor maps (mediated partly by neurons with mirror-neuron-like properties) in Broca’s area. 3, cortical motor-to-motor mappings (synkinesia) caused by links between hand gestures and tongue, lip, and mouth movements in Penfield’s motor map. For example, the oral gestures for “diminutive,” “little,” “teeny-weeny,” and the French phrase “un peu” synkinetically mimic the small pincer gesture made by opposing thumb and index finger (as opposed to “large” or “enormous”). Similarly, pouting your lips outward to say “you” or (in French) “vous” mimics pointing outward.
There was an accelerated development of the left IPL in primate evolution culminating in humans. In addition, the front part of the lobule in humans (and humans alone) split into two gyri called the supramarginal gyrus and the angular gyrus. It doesn’t require deep insight to suggest, therefore, that the IPL and its subsequent splitting must have played a pivotal role in the emergence of functions unique to humans. Those functions, I suggest, include high-level types of abstraction.
The IPL (including the angular gyrus)—strategically located between the touch, vision, and hearing parts of the brain—evolved originally for cross-modal abstraction. But once this happened, cross-modal abstraction served as an exaptation for more high-level abstraction of the kind we humans take great pride in. And since we have two angular gyri (one in each hemisphere), they may have evolved different styles of abstraction: the right for visuospatial and body-based metaphors and abstraction, and the left for more language-based metaphors, including puns. This evolutionary framework may give neuroscience a distinct advantage over classical cognitive psychology and linguistics because it allows us to embark on a whole new program of research on the representation of language and thought in the brain.
The upper part of the IPL, the supramarginal gyrus, is also unique to humans, and is directly involved in the production, comprehension, and imitation of complex skills. Once again, these abilities are especially well developed in us compared with the great apes. When the left supramarginal gyrus is damaged, the result is apraxia, a fascinating disorder. A patient with apraxia is mentally normal in most respects, including in his ability to understand and produce language. Yet when you ask him to mime a simple action—“pretend you are hammering a nail”—he will make a fist and bang it on the table instead of holding a “pretend” handle as you or I might. If asked to pretend he is combing his hair, he might stroke his hair with his palm or wiggle his fingers in his hair instead of “holding” and moving an imaginary comb through his hair. If asked to pretend he is waving goodbye, he may stare at his hand intently trying to figure out what to do, or flail it around near his face. But if questioned, “What does ‘waving goodbye’ mean?” he might say, “Well, it’s what you do when you are parting company,” so he clearly understands at a conceptual level what’s expected. Furthermore, his hands are not paralyzed or clumsy: He can move individual fingers as gracefully and independently as any of us. What’s missing is the ability to conjure up a vibrant, dynamic internal picture of the required action, which can then be used to guide the orchestration of muscle twitches that mime the action. Not surprisingly, putting an actual hammer in his hand may (as it does in some patients) lead to accurate performance, since it doesn’t require him to rely on an internal image of the hammer.
Three additional points about these patients are worth noting. First, they cannot judge whether someone else is performing the requested action correctly or not, reminding us that their problem lies neither in motor ability nor in perception but in linking the two. Second, some patients with apraxia have difficulty imitating novel gestures produced by the examining physician. Third and most surprisingly, they are completely unaware that they themselves are miming incorrectly; there is no sign of frustration. All of these missing abilities are compellingly reminiscent of the abilities traditionally attributed to mirror neurons. Surely it can’t be a coincidence that the IPL in monkeys is rich in mirror neurons. Based on this reasoning, my postdoctoral colleague Paul McGeoch and I suggested in 2007 that apraxia is fundamentally a disorder of mirror-neuron function. Intriguingly, many autistic children also have apraxia, an unexpected link that lends support to our idea that a mirror-neuron deficit might underlie both disorders. Paul and I opened a bottle to celebrate having clinched the diagnosis.
But what caused the accelerated evolution of the IPL—and the angular gyrus part of it—in the first place? Did the selection pressure come from the need for higher forms of abstraction? Probably not. The most likely cause of its explosive development in primates was the need to achieve an exquisitely refined, fine-grained interaction between vision and muscle and joint position sense while negotiating branches on treetops. This resulted in the capacity for cross-modal abstraction, for example, when a branch is signaled as being horizontal both by the image falling on the retina and by the dynamic stimulation of touch, joint, and muscle receptors in the hands.
The next step was critical: The lower part of the IPL split accidentally, possibly as a result of gene duplication, a frequent occurrence in evolution. The upper part, the supramarginal gyrus, retained the old function of its ancestral lobule—hand-eye coordination—elaborating it to the new levels of sophistication required for skilled tool use and imitation in humans. In the angular gyrus the very same computational ability set the stage (became an exaptation) for other types of abstraction as well: the ability to extract the common denominator among superficially dissimilar entities. A weeping willow looks sad because you project sadness onto it. Juliet is the sun because you can abstract certain things they have in common. Five donkeys and five apples have “fiveness” in common.
A tangential piece of evidence for this idea comes from my examination of patients who have damage to the IPL of the left hemisphere. These patients usually have anomia (difficulty finding words), but I found that some of them also failed the bouba-kiki test and were abysmal at interpreting proverbs, often interpreting them literally instead of metaphorically. One patient I saw in India recently got 14 out of 15 proverbs wrong even though he was perfectly intelligent in other respects. Obviously this study needs to be repeated with additional patients, but it promises to be a fruitful line of enquiry.
The angular gyrus is also involved in naming objects, even common objects such as a comb or a pig. This reminds us that a word, too, is a form of abstraction from multiple instances (for example, multiple views of a comb seen in different contexts but always serving the function of hairdressing). Sometimes anomic patients will substitute a related word (“cow” for “pig”) or try to define the word in absurdly comical ways. (One patient said “eye medicine” when I pointed to my glasses.) Even more intriguing was an observation I made in India on a fifty-year-old physician with anomia. Every Indian child learns about the many gods of Indian mythology, but two great favorites are Ganesha (the elephant-headed god) and Hanuman (the monkey god), and each has an elaborate family history. When I showed him a sculpture of Hanuman, he picked it up, scrutinized it, and misidentified it as Ganesha, which belongs to the same category, namely gods. But when I asked him to tell me more about the sculpture, which he continued to inspect, he said it was the son of Shiva and Parvati—a statement that is true of Ganesha, not Hanuman. It’s as if the mere act of mislabeling the sculpture vetoed its visual appearance, causing him to ascribe incorrect attributes to Hanuman! Thus the name of an object, far from being just another attribute of the object, seems to be a magic key that opens a whole treasury of meanings associated with the object. I can’t think of a simpler explanation for this phenomenon, but the existence of such unsolved mysteries fuels my interest in neurology just as much as the explanations for which we can generate and test specific hypotheses.
LET US TURN now to the aspect of language that is most unequivocally human: syntax. The so-called syntactic structure, which I mentioned earlier, gives human language its enormous range and flexibility. It seems to have evolved rules that are intrinsic to this system, rules that no ape has been able to master but every human language has. How did this particular aspect of language evolve? The answer comes, once again, from the exaptation principle—the notion that adaptation to one specific function becomes assimilated into another, entirely different function. One intriguing possibility is that the hierarchical tree structure of syntax may have evolved from a more primitive neural circuit that was already in place for tool use in the brains of our early hominin ancestors.
Let’s take this a step further. Even the simplest type of opportunistic tool use, such as using a stone to crack open a coconut, involves an action—in this case, cracking (the verb)—performed by the right hand of the tool user (the subject) on the object held passively by the left hand (the object). If this basic sequence were already embedded in the neural circuitry for manual actions, it’s easy to see how it might have set the stage for the subject-verb-object sequence that is an important aspect of natural language.
In the next stage of hominin evolution, two amazing new abilities emerged that were destined to transform the course of human evolution. First was the ability to find, shape, and store a tool for future use, leading to our sense of planning and anticipation. Second—and especially important for subsequent language origin—was use of the subassembly technique in tool manufacture. Taking an axe head and hafting (tying) it to a long wooden handle to create a composite tool is one example. Another is hafting a small knife at an angle to a small pole and then tying this assembly to another pole to lengthen it so that fruits can be reached and yanked off trees. The wielding of a composite structure bears a tantalizing resemblance to the embedding of, say, a noun phrase within a longer sentence. I suggest that this isn’t just a superficial analogy. It’s entirely possible that the brain mechanism that implemented the hierarchical subassembly strategy in tool use became coopted for a totally novel function, the syntactic tree structure.
But if the tool-use subassembly mechanism were borrowed for aspects of syntax, then wouldn’t the tool-use skills deteriorate correspondingly as syntax evolved, given limited neural space in the brain? Not necessarily. A frequent occurrence in evolution is the duplication of preexisting body parts brought about by actual gene duplication. Just think of multisegmented worms, whose bodies are composed of repeating, semi-independent body sections, a bit like a chain of railroad cars. When such duplicated structures are harmless and not metabolically costly, they can endure many generations. And they can, under the right circumstances, provide the perfect opportunity for that duplicate structure to become specialized for a different function. This sort of thing has happened repeatedly in the evolution of the rest of the body, but its role in the evolution of brain mechanisms is not widely appreciated by psychologists. I suggest that an area very close to what we now call Broca’s area originally evolved in tandem with the IPL (especially the supramarginal portion) for the multimodal and hierarchical subassembly routines of tool use. There was a subsequent duplication of this ancestral area, and one of the two new subareas became further specialized for syntactic structure that is divorced from actual manipulation of physical objects in the world—in other words, it became Broca’s area. Add to this cocktail the influence of semantics, imported from Wernicke’s area, and aspects of abstraction from the angular gyrus, and you have a potent mix ready for the explosive development of full-fledged language. Not coincidentally, perhaps, these are the very areas in which mirror neurons abound.
Bear in mind that my argument thus far focuses on evolution and exaptation. Another question remains. Are the concepts of subassembly in tool use, the hierarchical tree structure of syntax (including recursion), and conceptual recursion mediated by separate modules in the brains of modern humans? How autonomous, really, are these modules in our brains? Would a patient with apraxia (the inability to mime the use of tools) caused by damage to the supramarginal gyrus also have problems with subassembly in tool use? We know that patients with Wernicke’s aphasia produce syntactically normal gibberish—the basis for suggesting that, at least in modern brains, syntax doesn’t depend on the recursiveness of semantics or indeed on the high-level embedding of concepts within concepts.3
But how syntactically normal is their gibberish? Does their speech—mediated entirely by Broca’s area on autopilot—really have the kinds of syntactic tree structure and recursion that characterize normal speech? If not, are we really justified in calling Broca’s area a “syntax box”? Can a Broca’s aphasic do algebra, given that algebra also requires recursion to some extent? In other words, does algebra piggyback on preexisting neural circuits that evolved for natural syntax? Earlier in this chapter I gave the example of a single patient with Broca’s aphasia who could do algebra, but there are precious few studies on these topics, each of which could generate a PhD thesis.
SO FAR I have taken you on an evolutionary journey that culminated in the emergence of two key human abilities: language and abstraction. But there is another feature of human uniqueness that has puzzled philosophers for centuries, namely, the link between language and sequential thinking, or reasoning in logical steps. Can we think without silent internal speech? We have already discussed language, but we need to be clear about what is meant by thinking before we try grappling with this question. Thinking involves, among other things, the ability to engage in open-ended symbol manipulation in your brain following certain rules. How closely are these rules related to those of syntax? The key phrase here is “open-ended.”
To understand this, think of a spider spinning a web and ask yourself, Does the spider have knowledge about Hooke’s law regarding the tension of stretched strings? The spider must “know” about it in some sense; otherwise the web would fall apart. Would it be more accurate to say that the spider’s brain has tacit, rather than explicit, knowledge of Hooke’s law? Although the spider behaves as though it knows the law—the very existence of the web attests to this—the spider’s brain (yes, it has one) has no explicit representation of it. It cannot use the law for any purpose other than weaving webs and, in fact, it can only weave webs according to a fixed motor sequence. This isn’t true of a human engineer who consciously deploys Hooke’s law, which she learned and understood from physics textbooks. Her deployment of the law is open-ended and flexible, available for an infinite number of applications. Unlike the spider, she has an explicit representation of it in her mind—what we call understanding. Most of the knowledge of the world that we have falls between these two extremes: the mindless knowledge of a spider and the abstract knowledge of the physicist.
What do we mean by “knowledge” or “understanding”? And how do billions of neurons achieve them? These are complete mysteries. Admittedly, cognitive neuroscientists are still very vague about the exact meaning of words like “understand,” “think,” and indeed the word “meaning” itself. But it is the business of science to find answers step by step through speculation and experiment. Can we approach some of these mysteries experimentally? For instance, what about the link between language and thinking? How might you experimentally explore the elusive interface between language and thought?
Common sense suggests that some of the activities we regard as thinking don’t require language. For example, I can ask you to fix a lightbulb on the ceiling and show you three wooden boxes lying on the floor. You would have the internal sense of juggling the visual images of the boxes—stacking them up in your mind’s eye to reach the bulb socket—before actually doing so. It certainly doesn’t feel like you are engaging in silent internal speech—“Let me stack box A on box B,” and so on. It feels as if you do this kind of thinking visually and not by using language. But we have to be careful with this deduction, because introspection about what’s going on in one’s head (stacking the three boxes) is not a reliable guide to what’s actually going on. It’s not inconceivable that what feels like the internal juggling of visual symbols actually taps into the same circuitry in the brain that mediates language, even though the task feels purely geometric or spatial. However much this seems to violate common sense, the activation of visual image–like representations may be incidental rather than causal.
Let’s leave visual imagery aside for the moment and ask the same question about the formal operations underlying logical thinking. We say, “If Joe is bigger than Sue, and if Sue is bigger than Rick, then Joe must be bigger than Rick.” You don’t have to conjure up mental images to realize that the deduction (“then Joe must be…”) follows from the two premises (“If Joe is…and if Sue is…”). It’s even easier to appreciate this if you substitute their names with abstract tokens like A, B, and C: If A > B and B > C, then it must be true that A > C. We also can intuit that if A > C and B > C, it doesn’t necessarily follow that A > B.
But where do these obvious deductions, based on the rule of transitivity, come from? Are they hardwired into your brain and present at birth? Were they learned by induction, because every time in the past when any entity A was bigger than B and B was bigger than C, it was always the case that A was bigger than C as well? Or were they learned initially through language? Whether this ability is innate or learned, does it depend on some kind of silent internal language that mirrors and partially taps into the same neural machinery used for spoken language? Does language precede propositional logic, or vice versa? Or perhaps neither is necessary for the other, even though they mutually enrich each other.
These are intriguing theoretical questions, but can we translate them into experiments and find some answers? Doing so has proved to be notoriously difficult in the past, but I’ll propose what philosophers would call a thought experiment (although, unlike philosophers’ thought experiments, this one can actually be done). Imagine I show you three boxes of three different sizes on the floor and a desirable object dangling from a high ceiling. You will instantly stack the three boxes, with the largest one at the bottom and the smallest at the top, and then climb up to retrieve the reward. A chimp can also solve this problem but presumably requires physical trial-and-error exploration of the boxes (unless you pick an Einstein among chimps).
But now I modify the experiment: I put a colored luminous spot on each of the boxes—red (on the big box), blue (intermediate box), and green (small box)—and have the boxes lying separately on the floor. I bring you into the room for the first time and expose you to the boxes long enough for you to realize which box has which spot. Then I switch the room lights off so that only the luminous colored dots are visible. Finally, I bring a luminous reward into the dark room and dangle it from the ceiling.
If you have a normal brain you will, without hesitation, put the red-dotted box at the bottom, the blue-dotted box in the middle, and the green-dotted box on top, and then climb to the top of the pile to retrieve the dangling reward. (Let’s assume the boxes have handles sticking out that you can use to pick them up, and that the boxes have been made equal in weight so that you can’t use tactile cues to distinguish them.) In other words, as a human being you can create arbitrary symbols (loosely analogous to words) and then juggle them entirely in your brain, doing a virtual-reality simulation to discover a solution. You could even do this if during the first phase you were shown only the red- and blue-dotted boxes, and then separately shown the blue- and green-dotted boxes, followed finally in the test phase by seeing the red- and green-dotted boxes alone. (Assume that stacking even two boxes gives you better access to the reward.) Even though the relative sizes of all three boxes were never visible at once during these viewing stages, I bet you could now juggle the symbols entirely in your head to establish transitivity using conditional (if-then) statements—“If red is bigger than blue and blue is bigger than green, then red must be bigger than green”—and then proceed to stack the green box on the red box in the dark to reach the reward. An ape would almost certainly fail at this task, which requires off-line (out of sight) manipulation of arbitrary signs, the basis of language.
But to what extent is language an actual requirement for conditional statements mentally processed off-line, especially in novel situations? Perhaps one could find out by carrying out the same experiment on a patient who has Wernicke’s aphasia. Given the claim that the patient can produce sentences like “If Blaka is bigger than Guli, then Lika tuk,” the question is whether she understands the transitivity implied in the sentence. If so, would she pass the three-boxes test we designed for chimps? Conversely, what about a patient with Broca’s aphasia, who purportedly has a broken syntax box? He no longer uses “ifs,” “buts,” and “thens” in his sentences and doesn’t comprehend these words when he hears or reads them. Would such a patient nevertheless be able to pass the three-boxes test, implying he doesn’t need the syntax module to understand and deploy the rules of deductive if-then inferences in a versatile manner? One could ask the same question of a number of other rules of logic as well. Without such experiments the interface between language and thought will forever remain a nebulous topic reserved for philosophers.
I have used the three-boxes idea to illustrate that one can, in principle, experimentally disentangle language and thought. But if the experiment proves impractical to carry out, one could conceivably confront the patient with cleverly designed video games that embody the same logic but do not require explicit verbal instructions. How good would the patient be at such games? And indeed, can the games themselves be used to slowly coax language comprehension back into action?
Another point to consider is that the ability to deploy transitivity in abstract logic may have evolved initially in a social context. Ape A sees ape B bullying and subduing ape C, who has on previous occasions successfully subdued A. Would A then spontaneously retreat from B, implying the ability to employ transitivity? (As a control, one would have to show that A doesn’t retreat from B if B is only seen subduing some other random ape C.)
The three-boxes test given to Wernicke’s aphasics might help us to disentangle the internal logic of our thought processes and the extent to which they interact with language. But there is also a curious emotional aspect to this syndrome that has received scant attention, namely, aphasics’ complete indifference to—indeed, ignorance of—the fact that they are producing gibberish, and their failure to register the expression of incomprehension on the faces of the people they are talking to. Conversely, I once wandered into a clinic and started saying “Sawadee Khrap. Chua alai? Kin Krao la yang?” to an American patient, and he smiled and nodded in acknowledgment. Without his language comprehension module he couldn’t tell nonsense speech and normal speech apart, whether the speech emerged from his own mouth or from mine. My postdoctoral colleague Eric Altschuler and I have often toyed with the idea of introducing two Wernicke’s aphasics to each other. Would they talk incessantly to each other all day without getting bored? We joked about the possibility that Wernicke’s aphasics are not talking gibberish at all; maybe they have a private language comprehensible only to each other.
WE HAVE BEEN speculating on the evolution of language and thought, but still haven’t resolved it. (The three-boxes experiment or its video-game analog hasn’t been tried yet.) Nor have we considered the modularity of language itself: the distinction between semantics and syntax (including what we defined earlier in the chapter as recursive embedding, for example, “The girl who killed the cat that ate the rat started to sing”). Presently, the strongest evidence for the modularity of syntax comes from neurology, from the observation that patients with a damaged Wernicke’s area produce elaborate, grammatically correct sentences that are devoid of meaning. Conversely, in patients who have a damaged Broca’s area but an intact Wernicke’s area, like Dr. Hamdi, meaning is preserved, but there is no syntactic deep structure. If semantics (“thought”) and syntax were mediated by the same brain region or by diffuse neural networks, such an “uncoupling” or dissociation of the two functions couldn’t occur. This is the standard view presented by psycholinguists, but is it really true? The fact that the deep structure of language is deranged in Broca’s aphasia is beyond question, but does it follow that this brain region is specialized exclusively for key aspects of language such as recursion and hierarchical embedding? If I lop off your hand you can’t write, but your writing center is in the angular gyrus, not in your hand. To counter this argument psycholinguists usually point out that the converse of this syndrome occurs when Wernicke’s area is damaged: Deep structure underlying grammar is preserved but meaning is abolished.
My postdoctoral colleagues Paul McGeoch and David Brang and I decided to take a closer look. In an influential and brilliant paper published in the journal Science in 2002, the linguist Noam Chomsky and the cognitive scientists Marc Hauser and W. Tecumseh Fitch surveyed the whole field of psycholinguistics and the conventional wisdom that language is unique to humans (and probably modular). They found that almost every aspect of language can be seen in other species, sometimes after adequate training, as in chimps, but that the one aspect that makes the deep grammatical structure of human language unique is recursive embedding. When people say that deep structure and syntactic organization are normal in Wernicke’s aphasia, they are usually referring to the more obvious aspects, such as the ability to generate a fully formed sentence employing nouns, prepositions, and conjunctions but carrying no meaningful content (“John and Mary went to the joyful bank and paid hat”). But clinicians have long known that, contrary to popular wisdom, the speech output of Wernicke’s aphasics isn’t entirely normal even in its syntactic structure; it’s usually somewhat impoverished. These clinical observations were largely ignored, however, because they were made long before recursion was recognized as the sine qua non of human language, so their true importance was missed.
When we carefully examined the speech output of many Wernicke’s aphasics, we found that, in addition to the absence of meaning, the most striking and obvious loss was in recursive embedding. Patients spoke in loosely strung together phrases using conjunctions: “Susan came and hit John and took the bus and Charles fell down,” and so forth. But they could almost never construct recursive sentences such as “John who loved Julie used a spoon.” (Even without setting “who loved Julie” off with commas, we know instantly that John used the spoon, not Julie.) This observation demolishes the long-standing claim that Broca’s area is a syntax box that is autonomous from Wernicke’s area. Recursion may turn out to be a property of Wernicke’s area, and indeed may be a general property common to many brain functions. Furthermore, we mustn’t confuse the issue of functional autonomy and modularity in the modern human brain with the question of evolution: Did one module provide a substrate for the other or even evolve into another, or did they evolve completely independently in response to different selection pressures?
Linguists are mainly interested in the former question—the autonomy of rules intrinsic to the module—whereas the evolutionary question usually elicits a yawn (just as any talk of evolution or brain modules would seem pointless to a number theorist interested in rules intrinsic to the number system). Biologists and developmental psychologists, on the other hand, are interested not only in the rules that govern language but also in the evolution, development, and neural substrates of language, including (but not confined to) syntax. A failure to make this distinction has bedeviled the whole language evolution debate for nearly a century. The key difference, of course, is that language capacity evolved through natural selection over two hundred thousand years, whereas number theory is barely two thousand years old. So for what it is worth, my own (entirely unbiased) view is that on this particular issue the biologists are right. As an analogy, I’ll invoke again my favorite example, the relationship between chewing and hearing. All mammals have three tiny bones—the malleus, incus, and stapes—inside the middle ear. These bones transmit and amplify sounds from the eardrum to the inner ear. Their sudden emergence in vertebrate evolution (mammals have them but their reptilian ancestors don’t) was a complete mystery, and it was often used as ammunition by creationists until comparative anatomists, embryologists, and paleontologists discovered that the bones actually evolved from the back of the reptilian jawbone. (Recall that the back of your jaw articulates very close to your ear.) The sequence of steps makes a fascinating story.
The mammalian jaw has a single bone, the mandible, whereas our reptilian ancestors had three. The reason is that reptiles, unlike mammals, tend to consume enormous prey at long intervals rather than frequent small meals. The reptilian jaw is used exclusively for swallowing, not chewing, and because of reptiles’ slow metabolic rate, the unchewed food in the stomach can take weeks to break down and digest. This kind of eating requires a large, flexible, multihinged jaw. But as reptiles evolved into metabolically active mammals, the survival strategy switched to the consumption of frequent small meals to maintain a high metabolic rate.
Remember also that reptiles lie low on the ground with their limbs sprawled outward, swinging the neck and head close to the ground as they sniff for prey. With the jaw so close to the ground, its three bones could also transmit the vibrations of other animals’ nearby footsteps to the vicinity of the ear. This is called bone conduction, as opposed to the air conduction used by mammals.
As they evolved into mammals, reptiles raised themselves up from the sprawling position to stand higher up off the ground on vertical legs. This allowed two of the three jaw bones to become progressively assimilated into the middle ear, being taken over entirely for hearing airborne sounds and giving up their chewing function altogether. But this change in function was only possible because they were already strategically located—in the right place at the right time—and were already beginning to be used for hearing terrestrially transmitted sound vibrations. This radical shift in function also served the additional purpose of transforming the jaw into a single, rigid nonhinged bone—the mandible—which was much stronger and more useful for chewing.
The analogy with language evolution should be obvious. If I were to ask you whether chewing and hearing are modular and independent of each other, both structurally and functionally, the answer would obviously be yes. And yet we know that the latter evolved from the former, and we can even specify the steps involved. Likewise, there is clear evidence that language functions such as syntax and semantics are modular and autonomous and furthermore are also distinct from thinking, perhaps as distinct as hearing is from chewing. Yet it is entirely possible that one of these functions, such as syntax, evolved from other, earlier functions such as tool use and/or thinking. Unfortunately, since language doesn’t fossilize like jaws or ear bones, we can only construct plausible scenarios. We may have to live with not knowing what the exact sequence of events was. But hopefully I have given you a glimpse of the kind of theory that we need to come up with, and the kinds of experiments we need to do, to account for the emergence of full-fledged language, the most glorious of all our mental attributes.
CHAPTER 7
Beauty and the Brain: The Emergence of Aesthetics
Art is a lie that makes us realize the truth.
—PABLO PICASSO
AN OLD INDIAN MYTH SAYS THAT BRAHMA CREATED THE UNIVERSE and all the beautiful snow-clad mountains, rivers, flowers, birds, and trees—even humans. Yet soon afterward, he was sitting on a chair, his head in his hands. His consort, Saraswati, asked him, “My lord—you created the whole beautiful Universe, populated with men of great valor and intellect who worship you—why are you so despondent?” Brahma replied, “Yes, all this is true, but the men whom I have created have no appreciation of the beauty of my creations and, without this, all their intellect means nothing.” Whereupon Saraswati reassured Brahma, “I will give mankind a gift called art.” From that moment on people developed an aesthetic sense, started responding to beauty, and saw the divine spark in all things. Saraswati is therefore worshipped throughout India as the goddess of art and music—as humankind’s muse.
This chapter and the next are concerned with a deeply fascinating question: How does the human brain respond to beauty? How are we special in terms of how we respond to and create art? How does Saraswati work her magic? There are probably as many answers to these questions as there are artists. At one end of the spectrum is the lofty idea that art is the ultimate antidote to the absurdity of the human predicament—the only “escape from this vale of tears,” as the British surrealist and poet Roland Penrose once said. At the other extreme is the school of Dada, the notion that “anything goes,” which says that what we call art is largely contextual or even entirely in the mind of the beholder. (The most famous example is Marcel Duchamp putting a urinal in a gallery and saying, in effect, “I call it art; therefore it’s art.”) But is Dada really art? Or is it merely art mocking itself? How often have you walked into a gallery of contemporary art and felt like the little boy who knew instantly that the emperor had no clothes?
Art endures in a staggering diversity of styles: classical Greek art, Tibetan art, African art, Khmer art, Chola bronzes, Renaissance art, impressionism, expressionism, cubism, fauvism, abstract art—the list is endless. But beneath all this variety, might there be some general principles or artistic universals that cut across cultural boundaries? Can we come up with a science of art? Science and art seem fundamentally antithetical. One is a quest for general principles and tidy explanations, while the other is a celebration of the individual imagination and spirit; the very notion of a science of art seems like an oxymoron. Yet that is my goal for this chapter and the next: to convince you that our knowledge of human vision and of the brain is now sophisticated enough that we can speculate intelligently on the neural basis of art and maybe begin to construct a scientific theory of artistic experience. Saying this does not in any way detract from the originality of the individual artist, for the manner in which she deploys these universal principles is entirely hers.
First, I want to make a distinction between art as defined by historians and the broad topic of aesthetics. Because both art and aesthetics require the brain to respond to beauty, there is bound to be a great deal of overlap. But art includes such things as Dada (whose aesthetic value is dubious), whereas aesthetics includes such things as fashion design, which is not typically regarded as high art. Maybe there can never be a science of high art, but I suggest there can be of the principles of aesthetics that underlie it.
Many principles of aesthetics are common to both humans and other creatures and therefore cannot be the result of culture. Can it be a coincidence that we find flowers beautiful even though they evolved to be beautiful to bees rather than to us? This is not because our brains evolved from bee brains (they didn’t), but because both groups independently converged on some of the same universal principles of aesthetics. The same logic explains why we find male birds of paradise such a feast for the eyes—to the point of using their plumes as headdresses—even though they evolved to please females of their own species, not Homo sapiens.
FIGURE 7.1 The elaborately constructed “nest,” or bower, of the male bowerbird, designed to attract females. Such “artistic” principles as grouping by color, contrast, and symmetry are in evidence.
Some creatures, such as bowerbirds from Australia and New Guinea, possess what we humans perceive as artistic talent. The males of the genus are drab little fellows but, perhaps as a Freudian compensation, they build enormous gorgeously decorated bowers—bachelor pads—to attract mates (Figure 7.1). One species builds a bower that is eight feet tall with elaborately constructed entrances, archways, and even lawns in front of the entryway. On different parts of the bower, he arranges clusters of flowers into bouquets, sorts berries of various types by color, and forms gleaming white hillocks out of bits of bone and eggshell. Smooth shiny pebbles arranged into elaborate designs are often part of the display. If the bowers are near human habitation, the bird will borrow bits of cigarette foil or shiny shards of glass (the avian equivalent of jewelry) to provide accent.
The male bowerbird takes great pride in the overall appearance and even the fine details of his structure. Displace one berry, and he will hop over to put it back, showing the kind of fastidiousness seen in many a human artist. Different species of bowerbirds build discernibly different nests, and most remarkable of all, individuals within a species have different styles. In short, the bird shows artistic originality, which serves to impress and attract individual females. If one of these bowers were displayed in a Manhattan art gallery without revealing that it was created by a bird brain, I’d wager it would elicit favorable comments.
Returning to humans, one problem concerning aesthetics has always puzzled me. What, if anything, is the key difference between kitsch art and real art? Some would argue that one person’s kitsch might be another person’s high art. In other words, the judgment is entirely subjective. But if a theory of art cannot objectively distinguish kitsch from the real, how complete is that theory, and in what sense can we claim to have really understood the meaning of art? One reason for thinking that there’s a genuine difference is that you can learn to like real art after enjoying kitsch, but it’s virtually impossible to slide back into kitsch after knowing the delights of high art. Yet the difference between the two remains tantalizingly elusive. In fact, I will lay out a challenge that no theory of aesthetics can be said to be complete unless it confronts this problem and can objectively spell out the distinction.
In this chapter, I’ll speculate on the possibility that real art—or indeed aesthetics—involves the proper and effective deployment of certain artistic universals, whereas kitsch merely goes through the motions, as if to make a mockery of the principles without a genuine understanding of them. This isn’t a full theory, but it’s a start.
FOR A LONG time I had no real interest in art. Well, that isn’t entirely true, because any time I’d attend a scientific meeting in a big city I would visit the local galleries, if only to prove to myself that I was cultured. But it’s fair to say I had no deep passion for art. But all that changed in 1994 when I went on a sabbatical to India and began what was to become a lasting love affair with aesthetics. During a three-month visit to Chennai (also known as Madras), the city in southern India where I was born, I found myself with extra time on my hands. I was there as a visiting professor at the Institute of Neurology to work on patients with stroke, phantom limbs following amputation, or a sensory loss caused by leprosy. The clinic was undergoing a dry spell, so there weren’t many patients to see. This gave me ample opportunity for leisurely walks through the Shiva temple in my neighborhood in Mylapore, which dates back to the first millennium B.C.E.
A strange thought occurred to me as I looked at the stone and bronze sculptures (or “idols,” as the English used to call them) in the temple. In the West, these are now found mostly in museums and galleries and are referred to as Indian art. Yet I grew up praying to these as a child and never thought of them as art. They are so well integrated into the fabric of life in India—the daily worship, music, and dance—that it’s hard to know where art ends and where ordinary life begins. Such sculptures are not separate strands of existence the way they are here in the West.
Until that particular visit to Chennai, I had a rather colonial view of Indian sculptures thanks to my Western education. I thought of them largely as religious iconography or mythology rather than fine art. Yet on this visit, these images had a profound impact on me as beautiful works of art, not as religious artifacts.
When the English arrived in India during Victorian times, they regarded the study of Indian art mainly as ethnography and anthropology. (This would be equivalent to putting Picasso in the anthropology section of the national museum in Delhi.) They were appalled by the nudity and often described the sculptures as primitive or not realistic. For example, the bronze sculpture of Parvati (Figure 7.2a), which dates back to the zenith of southern Indian art during the Chola period (A.D. twelfth century), is regarded in India as the very epitome of feminine sensuality, grace, poise, dignity, and charm—indeed, of all that is feminine. Yet when the Englishmen looked at this and other similar sculptures (Figure 7.2b), they complained that it wasn’t art because the sculptures didn’t resemble real women. The breasts and hips were too big, the waist too narrow. Similarly, they pointed out that the miniature paintings of the Mogul or Rajasthani school often lacked the perspective found in natural scenes.
In making these criticisms they were, of course, unconsciously comparing ancient Indian art with the ideals of Western art, especially classical Greek and Renaissance art in which realism is emphasized. But if art is about realism, why even create the images? Why not just walk around looking at things around you? Most people recognize that the purpose of art is not to create a realistic replica of something but the exact opposite: It is to deliberately distort, exaggerate—even transcend—realism in order to achieve certain pleasing (and sometimes disturbing) effects in the viewer. And the more effectively you do this, the bigger the aesthetic jolt.
FIGURE 7.2 (a) A bronze sculpture of the goddess Parvati created during the Chola period (tenth to thirteenth century) in southern India.
(b) Replica of a sandstone sculpture of a stone nymph standing below an arched bough, from Khajuraho, India, in the twelfth century, demonstrating “peak shift” of feminine form. The ripe mangos on the branch are a visual echo of her ripe, young breasts and (like the breasts) a metaphor of the fertility and fecundity of nature.
Picasso’s Cubist pictures were anything but realistic. His women—with two eyes on one side of the face, hunchbacks, misplaced limbs, and so on—were considerably more distorted than any Chola bronze or Mogul miniature. Yet the Western response to Picasso was that he was a genius who liberated us from the tyranny of realism by showing us that art doesn’t have to even try to be realistic. I do not mean to detract from Picasso’s brilliance, but he was doing what Indian artists had done a millennium earlier. Even his trick of depicting multiple views of an object in a single plane was used by Mogul artists. (I might add that I am not a great fan of Picasso’s art.)
Thus the metaphorical nuances of Indian art were lost on Western art historians. One eminent critic, the nineteenth-century naturalist and writer Sir George Christopher Molesworth Birdwood, considered Indian art to be mere “crafts” and was repulsed by the fact that many of the gods had multiple arms (which often allegorically signify their many divine attributes). He referred to Indian art’s greatest icon, The Dancing Shiva, or Nataraja, which appears in the next chapter, as a multiarmed monstrosity. Oddly enough, he didn’t have the same opinion of the angels depicted in Renaissance art—human children with wings sprouting from their scapulae—which were probably just as monstrous to some Indian eyes. As a medical man, I might add that multiple arms in humans do occasionally crop up—a staple of freak shows in the old days—but a human being sprouting wings is impossible. (However, a recent survey revealed that about one-third of all Americans claim to have seen angels, a frequency higher than even Elvis sightings!)
So works of art are not photocopies; they involve deliberate hyperbole and distortion of reality. But you can’t just randomly distort an image and call it art (although, here in La Jolla, many do). The question is, what types of distortion are effective? Are there any rules that the artist deploys, either consciously or unconsciously, to change the image in a systematic way? And if so, how universal are these rules?
While I was struggling with this question and poring over ancient Indian manuals on art and aesthetics, I often noticed the word rasa. This Sanskrit word is difficult to translate, but roughly it means “capturing the very essence, the very spirit of something, in order to evoke a specific mood or emotion in the viewer’s brain.” I realized that, if you want to understand art, you have to understand rasa and how it is represented in the neural circuitry in the brain. One afternoon, in a whimsical mood, I sat at the entrance of the temple and jotted down what I thought might be the “eight universal laws of aesthetics,” analogous to the Buddha’s eightfold path to wisdom and enlightenment. (I later came up with an additional ninth law—so there, Buddha!) These are rules of thumb that the artist or even fashion designer deploys to create visually pleasing images that more optimally titillate the visual areas in the brain compared with what he could accomplish using realistic images or real objects.
In the pages that follow I will elaborate on these laws. Some I believe are genuinely new, or at least haven’t been stated explicitly in the context of visual art. Others are well known to artists, art historians, and philosophers. My goal is not to provide a complete account of the neurology of aesthetics (even assuming such a thing were possible) but to tie together strands from many different disciplines and to provide a coherent framework. Semir Zeki, a neuroscientist at University College London, has embarked on a similar venture, which he calls “neuroesthetics.” Please be assured that this type of analysis doesn’t in any way detract from the more lofty spiritual dimensions of art, any more than describing the physiology of sexuality in the brain detracts from the magic of romantic love. We are dealing with different levels of description that complement rather than contradict each other. (No one would deny that sexuality is a strong component of romantic love.)
In addition to identifying and cataloging these laws, we also need to understand what their function might be, if any, and why they evolved. This is an important difference between the laws of biology and the laws of physics. The latter exist simply because they exist, even though the physicist may wonder why they always seem so simple and elegant to the human mind. Biological laws, on the other hand, must have evolved because they helped the organism deal with the world reliably, enabling it to survive and transmit its genes more efficiently. (This isn’t always true, but it’s true often enough to make it worthwhile for a biologist to constantly keep it in mind.) So the quest for biological laws shouldn’t be driven by a quest for simplicity or elegance. No woman who has been through labor would say that it’s an elegant solution to giving birth to a baby.
Moreover, to assert that there might be universal laws of aesthetics and art does not in any way diminish the important role of culture in the creation and appreciation of art. Without cultures, there wouldn’t be distinct styles of art such as Indian and Western. My interest is not in the differences between various artistic styles but in principles that cut across cultural barriers, even if those principles account for only, say, 20 percent of the variance seen in art. Of course, cultural variations in art are fascinating, but I would argue that certain systematic principles lie behind these variations.
Here are the names of my nine laws of aesthetics:
Grouping
Peak shift
Contrast
Isolation
Peekaboo, or perceptual problem solving
Abhorrence of coincidences
Orderliness
Symmetry
Metaphor
It isn’t enough to just list these laws and describe them; we need a coherent biological perspective. In particular, when exploring any universal human trait such as humor, music, art, or language, we need to keep in mind three basic questions: roughly speaking, What? Why? and How? First, what is the internal logical structure of the particular trait you are looking at (corresponding roughly to what I call laws)? For example, the law of grouping simply means that the visual system tends to group similar elements or features in the image into clusters. Second, why does the particular trait have the logical structure that it does? In other words, what is the biological function it evolved for? And third, how is the trait or law mediated by the neural machinery in the brain?1 All three of these questions need to be answered before we can genuinely claim to have understood any aspect of human nature.
In my view, most older approaches to aesthetics have either failed or remained frustratingly incomplete with regard to these questions. For example, the Gestalt psychologists were good at pointing out laws of perception but didn’t correctly answer why such laws may have evolved or how they came to be enshrined in the neural architecture of the brain. (Gestalt psychologists regarded the laws as byproducts of some undiscovered physical principles, such as electrical fields in the brain.) Evolutionary psychologists are often good at pointing out what function a law might serve but are typically not concerned with specifying in clear logical terms what the law actually is, with exploring its underlying neural mechanisms, or even with establishing whether the law exists at all! (For instance, is there a law of cooking in the brain because most cultures cook?) And last, the worst offenders are neurophysiologists (except the very best ones), who seem interested in neither the functional logic nor the evolutionary rationale of the neural circuits they explore so diligently. This is amazing, given that, as Theodosius Dobzhansky famously said, “Nothing in biology makes any sense except in the light of evolution.”
A useful analogy comes from Horace Barlow, a British visual neuroscientist whose work is central to understanding the statistics of natural scenes. Imagine that a Martian biologist arrives on Earth. The Martian is asexual and reproduces by duplication, like an amoeba, so it doesn’t know anything about sex. The Martian dissects a man’s testicles and studies their microstructure in excruciating detail, finding innumerable sperm swimming around. Since the Martian knows nothing about sex, it wouldn’t have the foggiest understanding of the structure and function of the testes despite all its meticulous dissections. The Martian would be mystified by these spherical balls dangling in half the human population and might even conclude that the wriggling sperm were parasites. The plight of many of my colleagues in physiology is not unlike that of the Martian. Knowing the minute details doesn’t necessarily mean you comprehend the function of the whole from its parts.
So with the three overarching principles of internal logic, evolutionary function, and neural mechanics in mind, let’s see the role each of my individual laws plays in constructing a neurobiological view of aesthetics. Let’s begin with a concrete example: grouping.
The Law of Grouping
The law of grouping was discovered by Gestalt psychologists around the turn of the twentieth century. Take a moment to look again at Figure 2.7, the Dalmatian dog in Chapter 2. All you see at first is a set of random splotches, but after several seconds you start grouping some of the splotches together. You see a Dalmatian sniffing the ground. Your brain glues the “dog” splotches together to form a single object that is clearly delineated from the shadows of leaves around it. This is well known, but vision scientists frequently overlook the fact that successful grouping feels good. You get an internal “Aha!” sensation, as if you have just solved a problem.
FIGURE 7.3 In this Renaissance painting, very similar colors (blues, dark brown, and beige) are scattered spatially throughout the painting. The grouping of similar colors is pleasing to the eye even if they are on different objects.
Grouping is used by both artists and fashion designers. In some well-known classic Renaissance paintings (Figure 7.3), the same azure blue repeats all over the canvas as part of various unrelated objects. Likewise, the same beige and brown are used in halos, clothes, and hair throughout the scene. The artist uses a limited set of colors rather than an enormous range of colors. Again, your brain enjoys grouping similar-colored splotches. It feels good, just as it felt good to group the “dog” splotches, and the artist exploits this. He doesn’t do this because he is stingy with paint or has only a limited palette. Think of the last time you selected a mat to frame a painting. If there are bits of blue in the painting, you pick a mat that’s tinted blue. If there are mainly green earth tones in the painting, then a brown mat looks most pleasing to the eye.
The same holds for fashion. When you go to Nordstrom’s department store to buy a red skirt, the salesperson will advise you to buy a red scarf and a red belt to go with it. Or if you are a guy buying a blue suit, the salesperson may recommend a tie with some identical blue flecks to go with the suit.
But what’s all this really about? Is there a logical reason for grouping colors? Is it just marketing and hype, or is this telling you something fundamental about the brain? This is the “why” question. The answer is that grouping evolved, to a surprisingly large extent, to defeat camouflage and to detect objects in cluttered scenes. This seems counterintuitive because when you look around, objects are clearly visible—certainly not camouflaged. In a modern urban environment, objects are so commonplace that we don’t realize vision is mainly about detecting objects so that you can avoid them, dodge them, chase them, eat them, or mate with them. We take the familiar for granted, but just think of one of your arboreal ancestors trying to spot a lion hidden behind a screen of green splotches (a tree branch, say). All that is visible is several yellow splotches of lion fragments (Figure 7.4). But your brain says, in effect, “What’s the likelihood that all these fragments are exactly the same color by coincidence? Zero. So they probably belong to one object. So let me glue them together to see what it is. Aha! Oops! It’s a lion—run!” This seemingly esoteric ability to group splotches may have made all the difference between life and death.
FIGURE 7.4 A lion seen through foliage. The fragments are grouped by the prey’s visual system before the overall outline of the lion becomes evident.
Little does the salesperson at Nordstrom’s realize that when she picks the matching red scarf for your red skirt, she is tapping into a deep principle underlying brain organization, and that she’s taking advantage of the fact that your brain evolved to detect predators seen behind foliage. Again, grouping feels good. Of course the red scarf and red skirt are not one object, so logically they shouldn’t be grouped, but that doesn’t stop her from exploiting the grouping law anyway, to create an attractive combination. The point is, the rule worked in the treetops in which our brains evolved. It was valid often enough that incorporating it as a law into visual brain centers helped our ancestors leave behind more babies, and that’s all that matters in evolution. The fact that an artist can misapply the rule in an individual painting, making you group splotches from different objects, is irrelevant because your brain is fooled and enjoys the grouping anyway.
Another principle of perceptual grouping, known as good continuation, states that graphic elements suggesting a continued visual contour will tend to be grouped together. I recently tried constructing a version of it that might be especially relevant to aesthetics (Figure 7.5). Figure 7.5b is unattractive, even though it is made of components whose shapes and arrangement are similar to those in Figure 7.5a, which is pleasing to the eye. This is because of the “Aha!” jolt you get from the completion (grouping) of object boundaries behind occluders in Figure 7.5a, whereas in Figure 7.5b there is irresolvable tension.
FIGURE 7.5 (a) Viewing the diagram on the left gives you a pleasing sensation of completion: The brain enjoys grouping.
(b) In the right-hand diagram, the smaller blobs flanking the central vertical blob are not grouped by the visual system, creating a sort of perceptual tension.
And now we need to answer the “how” question, the neural mediation of the law. When you see a large lion through foliage, the different yellow lion fragments occupy separate regions of the visual field, yet your brain glues them together. How? Each fragment excites a separate cell (or small cluster of cells) in widely separated portions of the visual cortex and color areas of the brain. Each cell signals the presence of the feature by means of a volley of nerve impulses, a train of what are called spikes. The exact sequence of spikes is random; if you show the same feature to the same cell it will fire again just as vigorously, but there’s a new random sequence of impulses that isn’t identical to the first. What seems to matter for recognition is not the exact pattern of nerve impulses but which neurons fire and how much they fire—a principle known as Müller’s law of specific nerve energies. Proposed in 1826, the law states that the different perceptual qualities evoked in the brain by sound, light, and pinprick—namely, hearing, seeing, and pain—are not caused by differences in patterns of activation but by different locations of nervous structures excited by those stimuli.
That’s the standard story, but an astonishing new discovery by two neuroscientists, Wolf Singer of the Max Planck Institute for Brain Research in Frankfurt, Germany, and Charles Gray from Montana State University, adds a novel twist to it. They found that if a monkey looks at a big object of which only fragments are visible, then many cells fire in parallel to signal the different fragments. That’s what you would expect. But surprisingly, as soon as the features are grouped into a whole object (in this case, a lion), all the spike trains become perfectly synchronized. And so the exact spike trains do matter. We don’t yet know how this occurs, but Singer and Gray suggest that this synchrony tells higher brain centers that the fragments belong to a single object. I would take this argument a step further and suggest that this synchrony allows the spike trains to be encoded in such a way that a coherent output emerges which is relayed to the emotional core of the brain, creating an “Aha! Look here, it’s an object!” jolt in you. This jolt arouses you and makes you swivel your eyeballs and head toward the object, so you can pay attention to it, identify it, and take action. It’s this “Aha!” signal that the artist or designer exploits when she uses grouping. This isn’t as far-fetched as it sounds; there are known back projections from the amygdala and other limbic structures (such as the nucleus accumbens) to almost every visual area in the hierarchy of visual processing discussed in Chapter 2. Surely these projections play a role in mediating the visual “Aha!”
The remaining universal laws of aesthetics are less well understood, but that hasn’t stopped me from speculating on their evolution. (This isn’t easy; some laws may not themselves have a function but may be byproducts of other laws that do.) In fact, some of the laws seem to contradict each other, which may turn out to be a blessing: science often progresses by resolving apparent contradictions.
The Law of Peak Shift
My second universal law, the peak-shift effect, relates to how your brain responds to exaggerated stimuli. (I should point out that the phrase “peak shift” has a purportedly precise meaning in the animal learning literature, whereas I am using it more loosely.) It explains why caricatures are so appealing. And as I mentioned earlier, ancient Sanskrit manuals on aesthetics often use the word rasa, which translates roughly to “capturing the very essence of something.” But how exactly does the artist extract the very essence of something and portray it in a painting or a sculpture? And how does your brain respond to rasa?
A clue, oddly enough, comes from studies in animal behavior, especially the behavior of rats and pigeons that are taught to respond to certain visual images. Imagine a hypothetical experiment in which a rat is being taught to discriminate a rectangle from a square (Figure 7.6). Every time the animal approaches the rectangle, you give it a piece of cheese, but if it goes to the square you don’t. After a few dozen trials, the rat learns that “rectangle = food”; it begins to ignore the square and go toward the rectangle alone. In other words, it now likes the rectangle. But amazingly, if you now show the rat a longer, skinnier rectangle than the one you showed it originally, it actually prefers the new rectangle to the original! You may be tempted to say, “Well, that’s a bit silly. Why would the rat choose the new rectangle rather than the one you trained it with?” The answer is that the rat isn’t being silly at all. It has learned a rule—“rectangularity”—rather than a particular prototype rectangle, so from its point of view, the more rectangular, the better. (By “more rectangular” I mean the higher the ratio of the longer side to the shorter side.) The more you emphasize the contrast between the rectangle and the square, the more attractive the shape becomes, so when shown the long skinny one the rat thinks, “Wow! What a rectangle.”
This effect is called peak shift because ordinarily when you teach an animal something, its peak response is to the stimulus you trained it with. But if you train the animal to discriminate something (in this case, a rectangle) from something else (the square), the peak response is to a totally new rectangle that is shifted away even further from the square in its rectangularity.
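The logic of peak shift can be made concrete with a toy model (my own illustration, not an experiment from the literature): treat the animal’s learned response as an excitatory generalization gradient centered on the rewarded rectangle minus an inhibitory gradient centered on the unrewarded square. The net response then peaks at a skinnier aspect ratio than the rectangle the rat was actually trained on, which is exactly the shifted peak described above. The specific numbers and Gaussian widths are arbitrary choices for the sketch.

```python
# Toy model of peak shift (illustrative; parameters are invented).
# Response = excitatory gradient around S+ (trained rectangle)
#          - inhibitory gradient around S- (unrewarded square).
import math

def gaussian(x, center, width):
    """A simple generalization gradient around a trained stimulus."""
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

def net_response(aspect_ratio, s_plus=1.5, s_minus=1.0, width=0.5):
    """Aspect ratio = longer side / shorter side; 1.0 is a square.
    S+ is the trained rectangle (ratio 1.5), S- the square."""
    return gaussian(aspect_ratio, s_plus, width) - gaussian(aspect_ratio, s_minus, width)

# Scan aspect ratios from 1.0 (square) to 3.0 (very skinny rectangle).
ratios = [1.0 + 0.01 * i for i in range(200)]
best = max(ratios, key=net_response)

# The peak of the net response lies BEYOND the trained rectangle:
# a skinnier rectangle is preferred to the original.
assert best > 1.5
```

The inhibitory gradient around the square “pushes” the peak of the net curve away from the square, past the trained rectangle, which is the shift the name refers to.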
What has peak shift got to do with art? Think of caricatures. As I mentioned in Chapter 2, if you want to draw a caricature of Nixon’s face, you take all those features of Nixon that make his face special and different from the average face, such as his big nose and shaggy eyebrows, and you amplify them. Or to put it differently, you take the mathematical average of all male faces, subtract this average from Nixon’s face, and then amplify the difference. By doing this you have created a picture that’s even more Nixon-like than Nixon himself! In short, you have captured the very essence—the rasa—of Nixon. If you overdo it, you get a humorous effect—a caricature—because the face no longer even looks human; but if you do it right, you get great portraiture.
FIGURE 7.6 Demonstration of the peak shift principle: The rat is taught to prefer the rectangle (2) over the square (1) but then spontaneously prefers the longer, skinnier rectangle (3).
Caricatures and portraits aside, how does this principle apply to other art forms? Take a second look at the goddess Parvati (Figure 7.2a), which conveys the essence of feminine sensuality, poise, charm, and dignity. How does the artist achieve this? A first-pass answer is that he has subtracted the average male form from the average female form and amplified the difference. The net result is a woman with exaggerated breasts and hips and an attenuated hourglass waist: slender yet voluptuous. The fact that she doesn’t look like your average real woman is irrelevant; you like the sculpture just as the rat liked the skinnier rectangle more than the original prototype, saying, in effect, “Wow! What a woman!” But there’s surely more to it than that, otherwise any Playboy pinup would be a work of art (although, to be sure, I’ve never seen a pinup whose waist is as narrow as the goddess’s).
Parvati is not merely a sexy babe; she is the very embodiment of feminine perfection—of grace and poise. How does the artist achieve this? He does so by accentuating not merely her breasts and hips but also her feminine posture (formally known as tribhanga, or “triple flexion,” in Sanskrit). There are certain postures that a woman can adopt effortlessly but that are impossible (or highly improbable) for a man because of anatomical differences such as the width of the pelvis, the angle between the neck and shaft of the femur, and the curvature of the lumbar spine. Instead of subtracting male form from female form, the artist goes into a more abstract posture space, subtracts the average male posture from the average female posture, and then amplifies the difference. The result is an exquisitely feminine posture, conveying poise and grace.
Now take a look at the dancing nymph in Figure 7.7, whose twisting torso is almost anatomically absurd but who nevertheless conveys an incredibly beautiful sense of movement and dance. This is probably achieved, once again, by a deliberate exaggeration of posture that may activate—indeed hyperactivate—mirror neurons in the superior temporal sulcus. These cells respond powerfully when a person views changing postures and movements of the body, as well as changing facial expressions. (Remember pathway 3, the “so what” stream in vision processing discussed in Chapter 2?) Perhaps sculptures such as the dancing nymph produce an especially powerful stimulation of certain classes of mirror neurons, resulting in a correspondingly heightened reading of the body language of dynamic postures. It’s hardly surprising, then, that most types of dance—Indian or Western—involve clever, ritualized exaggerations of the movements and postures that convey specific emotions. (Remember Michael Jackson?)
FIGURE 7.7 Dancing stone nymph from Rajasthan, India, eleventh century. Does it stimulate mirror neurons?
The relevance of the peak-shift law to caricatures and to the human body is obvious, but how about other kinds of art? Can we even begin to approach Van Gogh, Rodin, Gustav Klimt, Henry Moore, or Picasso? What can neuroscience tell us about abstract and semiabstract art? This is where most theories of art either fail or start invoking culture, but I’d like to suggest that we don’t really need to. The important clue to understanding these so-called higher art forms comes from a very unexpected source: ethology, the science of animal behavior, in particular, from the work of the Nobel Prize–winning biologist Nikolaas Tinbergen, who did his pioneering work on seagulls in the 1950s.
Tinbergen studied herring gulls, common on both the English and American coasts. The mother gull has a prominent red spot on her long yellow beak. The gull chick, soon after it hatches from the egg, begs for food by pecking vigorously on the red spot on the mother’s beak. The mother then regurgitates half-digested food into her chick’s gaping mouth. Tinbergen asked himself a very simple question: How does the chick recognize its mom? Why doesn’t it beg for food from any animal that’s passing by?
Tinbergen found that to elicit this begging behavior in the chick you don’t really need a mother seagull. When he waved a disembodied beak in front of the chick, it pecked at the red spot just as vigorously, begging the beak-wielding human for food. The chick’s behavior—mistaking a human adult for a mother seagull—might seem silly, but it isn’t. Remember, vision evolved to discover and respond to objects (recognize them, dodge them, eat them, catch them, or mate with them) quickly and reliably by doing as little work as needed for the job at hand—taking shortcuts where necessary to minimize computational load. Through millions of years of accumulated evolutionary wisdom, the gull chick’s brain has learned that the only time it will see a long yellow thing with a red spot on the end is when there’s a mom attached to it at the other end. After all, in nature the chick is never likely to encounter a mutant pig with a beak or a malicious ethologist waving around a fake beak. So the chick’s brain can take advantage of this statistical redundancy in nature, and the equation “long thing with red spot = mom” gets hardwired into its brain.
In fact Tinbergen found that you don’t even need a beak; you can just have a rectangular strip of cardboard with a red dot on the end, and the chick will beg for food equally vigorously. This happens because the chick brain’s visual machinery isn’t perfect; it’s wired up in such a way that it has a high enough hit rate in detecting mom to survive and leave offspring. So you can readily fool these neurons by providing a visual stimulus that approximates the original (just as a key doesn’t have to be absolutely perfect to fit a cheap lock; it can be rusty or slightly corroded.)
But the best was yet to come. To his amazement, Tinbergen found that if he waved a very long, thick stick with three red stripes on the end, the chick went berserk, pecking at it much more intensely than at a real beak. It actually preferred this strange pattern, which bears almost no resemblance to the original! Tinbergen doesn’t tell us why this happens, but it’s almost as though the chick had stumbled on a superbeak (Figure 7.8).
FIGURE 7.8 The gull chick pecks at a disembodied beak or a stick with a spot, which is a reasonable approximation of the beak given the limits of sophistication of its visual processing. Paradoxically, a stick with three red stripes is even more effective than a real beak; it is an ultranormal stimulus.
Why could such a thing happen? We really don’t know the “alphabet” of visual perception, whether in gulls or humans. Obviously, neurons in the visual centers of the gull’s brain (which have fancy Latin names like nucleus rotundum, hyperstriatum, and ectostriatum) are not optimally functioning machines; they are merely wired up in such a way that they can detect beaks, and therefore mothers, reliably enough. Survival is the only thing evolution cares about. The neuron may have a rule like “the more red outline the better,” so if you show it a long skinny stick with three stripes, the cell actually likes it even more! This is related to the peak-shift effect on rats mentioned earlier, except for one key difference: in the case of the rat responding to the skinnier rectangle, it’s perfectly obvious what rule the animal has learned and what you are amplifying. But in the case of the seagull, the stick with three stripes is hardly an exaggerated version of a real beak; it isn’t clear at all what rule you are tapping into or amplifying. The heightened response to the striped beak may be an inadvertent consequence of the way the cells are wired up rather than the deployment of a rule with an obvious function.
We need a new name for this type of stimulus, so I’ll call it an “ultranormal” stimulus (to distinguish it from “supernormal,” a phrase that already exists). The response to an ultranormal stimulus pattern (such as the three-striped beak) cannot be predicted from looking at the original (the single-spot beak). You could predict the response—at least in theory—if you knew in detail the functional logic of the circuitry in the chick’s brain that allows the rapid, efficient detection of beaks. You could then devise patterns that actually excite these neurons even more effectively than the original stimulus, so the chick’s brain goes “Wow! What a sexy beak!” Or you might be able to discover the ultranormal stimulus by trial and error, stumbling on it as Tinbergen did.
This brings me to my punch line about semiabstract or even abstract art for which no adequate theory has been proposed so far. Imagine that seagulls had an art gallery. They would hang this long thin stick with three stripes on the wall. They would call it a Picasso, worship it, fetishize it, and pay millions of dollars for it, while all the time wondering why they are turned on by it so much, even though (and this is the key point) it doesn’t resemble anything in their world. I suggest this is exactly what human art connoisseurs are doing when they look at or purchase abstract works of art; they are behaving exactly like the gull chicks.
By trial and error, intuition, or genius, human artists like Picasso or Henry Moore have discovered the human brain’s equivalent of the seagull brain’s stick with three stripes. They are tapping into the figural primitives of our perceptual grammar and creating ultranormal stimuli that excite certain visual neurons in our brains more powerfully than realistic-looking images do. This is the essence of abstract art. It may sound like a highly reductionist, oversimplified view of art, but bear in mind that I’m not saying that’s all there is to art, only that it’s an important component.
The same principle may apply to impressionist art—a Van Gogh or a Monet canvas. In Chapter 2, I noted that visual space is organized in the brain so that spatially adjacent points are mapped one-to-one onto adjacent points on the cortex. Moreover, of the thirty or so visual areas in the human brain, a few—especially V4—are devoted primarily to color. In the color area, wavelengths adjacent in an abstract “color space” are mapped onto adjacent points in the brain even when they are not near each other in external space. Perhaps Monet and Van Gogh were introducing peak shifts in this abstract color space rather than in “form space,” even deliberately smudging form when required. A black-and-white Monet is an oxymoron.
This principle of ultranormal stimuli may be relevant not just to art but to other quirks of aesthetic preference as well, like whom you are attracted to. Each of us carries templates for members of the opposite sex (such as your mother or father, or your first really sizzling amorous encounter), and maybe those whom you find inexplicably and disproportionately attractive later in life are ultranormal versions of these early prototypes. So the next time you are unaccountably—even perversely—attracted to someone who is not beautiful in any obvious sense, don’t jump to the conclusion that it’s just pheromones or “the right chemistry.” Consider the possibility that she (or he) is an ultranormal version of the gender you’re attracted to buried deep in your unconscious. It’s a strange thought that human life is built on such quicksand, governed largely by vagaries and accidental encounters from the past, even though we take such great pride in our aesthetic sensibilities and freedom of choice. On this one point I am in complete agreement with Freud.
There is a potential objection to the notion that our brains are at least partially hardwired to appreciate art. If this were really true, then why doesn’t everyone like Henry Moore or a Chola bronze? This is an important question. The surprising answer might be that everyone does “like” a Henry Moore or a Parvati, but not everyone knows it. The key to resolving this quandary is to recognize that the human brain has many quasi-independent modules that can at times signal inconsistent information. It may be that all of us have basic neural circuits in our visual areas which show a heightened response to a Henry Moore sculpture, given that it is constructed out of certain form primitives that hyperactivate the cells tuned to respond to those primitives. But perhaps in many of us, other higher cognitive systems (such as the mechanisms of language and thought in the left hemisphere) kick in and censor or veto the output of those form-sensitive cells by saying, in effect, “There is something wrong with this sculpture; it looks like a funny twisted blob. So ignore that strong signal from cells at an earlier stage of your visual processing.” In short, I am saying that all of us do like Henry Moore, but many of us are in denial about it! The idea that people who claim not to like Henry Moore are closet Henry Moore enthusiasts could in principle be tested with brain imaging. (And the same holds for the Victorian Englishman’s response to the Chola bronze of Parvati.)
An even more striking example of quirky aesthetic preference is the way certain guppies prefer decoys of the opposite sex that are painted blue, even though nothing on the guppy itself is blue. (If a chance mutation were to make one guppy blue, I predict the emergence of a race of guppies that, over the next few millennia, evolve to become uselessly, intensely blue.) Could the appeal of silver foil to bowerbirds, and the universal appeal of shiny metallic jewelry and precious stones to people, also be based on some idiosyncratic quirk of brain wiring (one that perhaps originally evolved for detecting water)? It’s a sobering thought when you consider how many wars have been fought, loves lost, and lives ruined for the sake of precious stones.
SO FAR I have discussed only two of my nine laws. The remaining seven are the subject of the next chapter. But before we continue, I want to take up one final challenge. The ideas I have considered so far on abstract and semiabstract art and portraiture sound plausible, but how do we know they are actually true? The only way to find out is to do experiments. This may seem obvious, but the whole concept of an experiment—the need to test your idea by manipulating one variable alone while keeping everything else constant—is new and surprisingly alien to the human mind. It’s a relatively recent cultural invention that began with Galileo’s experiments. Before him, people “knew” that if a heavy stone and a peanut were dropped simultaneously from the top of a tower, the heavier one would obviously fall faster. All it took was a five-minute experiment by Galileo to topple two thousand years of wisdom. It is an experiment, moreover, that can be repeated by any ten-year-old schoolgirl.
A common fallacy is that science begins with naïve, unprejudiced observations about the world, when in fact the opposite is true. When exploring new terrain, you always begin with a tacit hypothesis of what might be true—a preconceived notion or prejudice. As the British zoologist and philosopher of science Peter Medawar once said, we are not “cows grazing on the pasture of knowledge.” Every act of discovery involves two critical steps: first, unambiguously stating your conjecture of what might be true, and second, devising a crucial experiment to test it. Most theoretical approaches to aesthetics in the past have been concerned mainly with step 1, not step 2. Indeed, the theories are usually not even stated in a manner that permits confirmation or refutation. (One notable exception is Brent Berlin’s pioneering work on the use of the galvanic skin response.)
Can we experimentally test our ideas about peak shift, supernormal stimuli, and other laws of aesthetics? There are at least three ways of doing so. The first one is based on the galvanic skin response (GSR); the second is based on recording nerve impulses from single nerve cells in the visual area in the brain; and the third is based on the idea that if there is anything to these laws, we should be able to use them to devise new pictures that are more attractive than what you might have predicted from common sense (what I refer to as the “grandmother test”: If an elaborate theory cannot predict what your grandmother knows using common sense, then it isn’t worth much).
You already know about GSR from previous chapters. This test provides an excellent, highly reliable index of your emotional arousal when you look at anything. If you look at something scary, violent, or sexy (or, as it turns out, at a familiar face like your mother’s or Angelina Jolie’s), there is a big jolt in your GSR, but nothing happens if you look at a shoe or a piece of furniture. This is a better test of someone’s raw, gut-level emotional reaction to the world than asking what she feels, since a person’s verbal response is likely to be inauthentic, contaminated by the “opinions” of other areas of the brain.
So GSR gives us a handy experimental probe for understanding art. If my conjectures about the appeal of Henry Moore sculptures are correct, then the Renaissance scholar who denies an interest in such abstract works (or, for that matter, the English art historian who feigns indifference to Chola bronzes) should nevertheless register a whopping GSR to the very images whose aesthetic appeal he denies. His skin can’t lie. Similarly, we know that you will show a higher GSR to a photo of your mother than to a photo of a stranger, and I predict that the difference will be even greater if you look at a caricature or evocative sketch of your mother rather than at a realistic photo. This would be interesting because it’s counterintuitive. As a control for comparison, you could use a countercaricature, by which I mean a sketch that deviates from the prototype toward the average face rather than away from it (or indeed, a face outline that deviates in a random direction). This would ensure that any enhanced GSR you observed with the caricature wasn’t simply because of the surprise caused by the distortion. It would be genuinely due to its appeal as a caricature.
But GSR can take us only so far; it is a relatively coarse measure because it pools several types of arousal and cannot discriminate positive from negative responses. Even so, it’s not a bad place to start, because it can tell the experimenter when you are indifferent to a work of art and when you are merely feigning indifference. The criticism that the test can’t discriminate negative arousal from positive arousal (at least not yet!) isn’t as damaging as it sounds, because who is to say that negative arousal isn’t also part of art? Indeed, attention grabbing—whether initially positive or negative—is often a prelude to attraction. (After all, slaughtered cows pickled in formaldehyde were displayed in the venerable MoMA [Museum of Modern Art] in New York, sending shock waves throughout the art world.) There are many layers of reaction to art, which contribute to its richness and appeal.
A second approach is to use eye movements, in particular, a technique pioneered by the Russian psychologist Alfred Yarbus. You can use an electronic optical device to see where a person is fixating and how she is moving her eyes from one region to another in a painting. The fixations tend to be clustered around eyes and lips. One could therefore show a normally proportioned cartoon of a person on one side of the image and a hyperbolic version on the other side. I would predict that even though the normal cartoon looks more natural, the eye fixations will cluster more around the caricature. (A randomly distorted cartoon could be included to control for novelty.) These findings could be used to complement the GSR results.
The third experimental approach to aesthetics would be to record from cells along the visual pathways in primates and compare their responses to art versus any old picture. The advantage of recording from single cells is that it may eventually allow a more fine-grained analysis of the neurology of aesthetics than what could be achieved with GSR alone. We know that there are cells in a region called the fusiform gyrus that respond mainly to specific familiar faces. You have brain cells that fire in response to a picture of your mother, your boss, Bill Clinton, or Madonna. I predict that a “boss cell” in this face recognition region should show an even bigger response to a caricature of your boss than to an authentic, undistorted face of your boss (and perhaps an even smaller response to a plain-looking countercaricature). I first suggested this in a paper I wrote with Bill Hirstein in the mid-1990s. The experiment has now been done on monkeys by researchers at Harvard and MIT, and sure enough the caricatures hyperactivate the face cells as expected. Their results provide grounds for optimism that some of the other laws of aesthetics I have proposed may also turn out to be true.
THERE IS A widespread fear among scholars in the humanities and arts that science may someday take over their discipline and deprive them of employment, a syndrome I have dubbed “neuron envy.” Nothing could be further from the truth. Our appreciation of Shakespeare is not diminished by the existence of a universal grammar or Chomskian deep structure underlying all languages. Nor should the diamond you are about to give your lover lose its radiance or romance if you tell her that it is made of carbon and was forged in the bowels of Earth when the solar system was born. In fact, the diamond’s appeal should be enhanced! Similarly, our conviction that great art can be divinely inspired and may have spiritual significance, or that it transcends not only realism but reality itself, should not stop us from looking for those elemental forces in the brain that govern our aesthetic impulses.
CHAPTER 8
The Artful Brain: Universal Laws
Art is the accomplishment of our desire to find ourselves among the phenomena of the external world.
—RICHARD WAGNER
BEFORE MOVING ON TO THE NEXT SEVEN LAWS, I WANT TO CLARIFY what I mean by “universal.” To say that the wiring in your visual centers embodies universal laws does not negate the critical role of culture and experience in shaping your brain and mind. Many cognitive faculties that are fundamental to your human way of life are only partly specified by your genes. Nature and nurture interact. Genes wire up your brain’s emotional and cortical circuits to a certain extent and then leave it to the environment to shape the rest, producing you, the individual. In this respect the human brain is absolutely unique—as symbiotic with culture as a hermit crab is with its shell. While the laws are hardwired, the content is learned.
Consider face recognition. While your ability to learn faces is innate, you are not born knowing your mother’s face or the mail carrier’s face. Your specialized face cells learn to recognize faces through exposure to the people you encounter.
Once face knowledge is acquired, the circuitry may spontaneously respond more effectively to caricatures or Cubist portraits. Similarly, once your brain learns about other classes of objects or shapes—bodies, animals, automobiles, and such—its innate circuitry may spontaneously display the peak-shift principle or respond to bizarre ultranormal stimuli analogous to the stick with stripes. Because this ability emerges in all human brains that develop normally, we are safe in calling it universal.
Contrast
It is hard to imagine a painting or sketch without contrast. Even the simplest doodle requires contrasting brightness between the black line and white background. White paint on a white canvas could hardly be called art (although in the 1990s the purchase of an all-white painting figured in Yasmina Reza’s hilarious award-winning play “Art,” poking fun at how easily people are influenced by art critics).
In scientific parlance, contrast is a relatively sudden change in luminance, color, or some other property between two spatially contiguous homogeneous regions. We can speak of luminance contrast, color contrast, texture contrast, or even depth contrast. The bigger the difference between the two regions, the higher the contrast.
Contrast is important in art or design; in a sense it’s a minimum requirement. It creates edges and boundaries as well as figures against background. With zero contrast you see nothing at all. Too little contrast and a design can be bland. And too much contrast can be confusing.
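The statement that “the bigger the difference between the two regions, the higher the contrast” can be made quantitative. Vision scientists often use the Michelson formula for luminance contrast between two adjacent regions; this is my illustration (the chapter itself gives no formula), with luminance expressed in arbitrary units.

```python
# Michelson contrast between two adjacent regions (illustrative sketch;
# luminance values are in arbitrary units).

def michelson_contrast(lum_a, lum_b):
    """Returns 0.0 for identical regions (an invisible boundary)
    and 1.0 for maximal contrast (e.g., black against white)."""
    if lum_a == lum_b == 0:
        return 0.0  # two black regions: no boundary to see
    return abs(lum_a - lum_b) / (lum_a + lum_b)

# Black ink (luminance 5) on white paper (luminance 95): high contrast.
assert michelson_contrast(5, 95) == 0.9
# White paint on a white canvas: zero contrast, nothing to see at all.
assert michelson_contrast(95, 95) == 0.0
```

The two extreme cases mirror the text: zero contrast means you see nothing at all, while the black-on-white doodle sits near the top of the scale.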
Some contrast combinations are more pleasing to the eye than others. For example, high-contrast pairings such as a blue splotch on a yellow background are more attention grabbing than low-contrast pairings like a yellow splotch on an orange background. This is puzzling at first glance: after all, you can easily see a yellow object against an orange background, but that combination does not draw your attention the way blue on yellow does.
The reason a boundary of high color contrast is more attention getting can be traced to our primate origins, to when we swung arm over arm, Spiderman-like, through the unruly treetops, searching for fruit in dim twilight or from a great distance. Many fruits are red against green foliage precisely so that primate eyes will spot them. The plants advertise themselves so that animals and birds can tell from afar that the fruit is ripe and ready to be eaten, its seeds dispersed through defecation. If foliage on Mars were mainly yellow, we would expect Martian fruits to be blue.
The law of contrast—juxtaposing dissimilar colors and/or luminances—might seem to contradict the law of grouping, which involves connecting similar or identical colors. And yet the evolutionary function of both principles is, broadly speaking, the same: to delineate and direct attention to object boundaries. In nature, both laws help species survive. Their main difference lies in the area over which the comparison or integration of colors occurs. Contrast detection involves comparing regions of color that lie right next to each other in visual space. This makes evolutionary sense because object boundaries usually coincide with contrasting luminance or color. Grouping, on the other hand, performs comparisons over wider distances. Its goal is to detect an object that is partially obscured, like a lion hiding behind a bush. Glue those yellow patches together perceptually, and it turns out to be one big lump shaped like a lion.
In modern times we harness contrast and grouping to serve novel purposes unrelated to their original survival function. For example, a good fashion designer will emphasize the salience of an edge by using dissimilar, highly contrasting colors (contrast) but will use similar colors for far-flung regions (grouping). As I mentioned in Chapter 7, red shoes go with a red shirt (conducive to grouping). It’s true, of course, that the red shoes aren’t an innate part of the red shirt, but the designer is tapping into the principle that, in your evolutionary past, similar colors would have belonged to a single object. But a vermilion scarf on a ruby-red shirt is hideous: too little contrast. A high-contrast blue scarf on a red shirt, on the other hand, works fine, and it’s even better if the blue is flecked with red polka dots or floral prints.
Similarly, an abstract artist will use a more abstract form of the law of contrast to capture your attention. The San Diego Museum of Contemporary Art has in its collection a large cube, about three feet across, densely covered with tiny metal needles pointing in random directions (a piece by Tara Donovan). The sculpture resembles fur made of shining metal. Several violations of expectation are at work here. Large metal cubes usually have smooth surfaces, but this one is furry. Cubes are inorganic, while fur is organic. Fur is usually a natural brown or white and soft to the touch, not metallic and prickly. These shocking conceptual contrasts endlessly titillate your attention.
Indian artists use a similar trick in their sculptures of voluptuous nymphs. The nymph is naked except for a few strings of very ornate coarsely textured jewelry draped on her (or flying off her chest if she is dancing). The baroque jewelry contrasts sharply with her body, making her bare skin look even more smooth and sensuous.
Isolation
Earlier I suggested that art involves creating images that produce heightened activation of visual areas in your brain and emotions associated with visual images. Yet any artist will tell you that a simple outline or doodle—say, Picasso’s doves or Rodin’s sketches of nudes—can be much more effective than a full color photo of the same object. The artist emphasizes a single source of information—such as color, form, or motion—and deliberately plays down or deletes other sources. I call this the “law of isolation.”
Again we have an apparent contradiction. Earlier I emphasized peak shift—hyperbole and exaggeration in art—but now I am emphasizing understatement. Aren’t the two ideas polar opposites? How can less be more? The answer: They aim to achieve different goals.
If you look in standard physiology and psychology textbooks, you will learn that a sketch is effective because cells in your primary visual cortex, where the earliest stage of visual processing occurs, only care about lines. These cells respond to the boundaries and edges of things but are insensitive to the feature-poor fill regions of an image. This fact about the circuitry of the primary visual area is true, but does it explain why a mere outline sketch can convey an extra vivid impression of what’s being depicted? Surely not. It only predicts that an outline sketch should be adequate, that it should be as effective as a halftone (the reproduction of a black-and-white photo). It doesn’t tell you why it’s more effective.
A sketch can be more effective because there is an attentional bottleneck in your brain. You can pay attention to only one aspect of an image or one entity at a time (although what we mean by “aspect” or “entity” is far from clear). Even though your brain has 100 billion nerve cells, only a small subset of them can be active at any given instant. In the dynamics of perception, one stable percept (perceived image) automatically excludes others. Overlapping patterns of neural activity in your brain constantly compete for limited attentional resources. Thus when you look at a full-color picture, your attention is distracted by the clutter of texture and other details in the image. But a sketch of the same object allows you to allocate all your attentional resources to the outline, where the action is.
FIGURE 8.1 Comparison between (a) Nadia’s drawing of a horse, (b) da Vinci’s drawing, and (c) the drawing of a normal eight-year-old.
Conversely, if an artist wants to evoke the rasa of color by introducing peak shifts and ultranormal stimuli in color space, then she would be better off playing down the outlines. She might deemphasize boundaries, deliberately smudging the outlines or leaving them out entirely. This reduces the competitive bid from outlines on your attentional resources, freeing up your brain to focus on color space. As mentioned in Chapter 7, that is what Van Gogh and Monet do. It’s called impressionism.
Great artists intuitively tap into the law of isolation, but evidence for it also comes from neurology: cases in which many areas of the brain are dysfunctional, leaving a single spared brain module “isolated.” That isolation allows the module to gain effortless access to the brain’s limited attentional resources, without the patient even trying.
One striking example comes from an unexpected source: autistic children. Compare the three illustrations of horses in Figure 8.1. The one on the right (Figure 8.1c) is by a normal eight-year-old child. Pardon me for saying so, but it’s quite hideous—completely lifeless, like a cardboard cutout. The one on the left (Figure 8.1a), amazingly, is by a seven-year-old mentally retarded autistic child named Nadia. Nadia can’t converse with people and can barely tie a shoelace, yet her drawing brilliantly conveys the rasa of a horse; the beast seems almost to leap out of the canvas. Finally, in the middle (Figure 8.1b) is a horse drawn by Leonardo da Vinci. When giving lectures, I often conduct informal polls, asking the audience to rank-order the three horses by how well they are drawn, without telling them in advance who drew them. Surprisingly, more people prefer Nadia’s horse to da Vinci’s. Here again we have a paradox. How is it possible that a retarded autistic child who can barely talk can draw better than one of the greatest geniuses of the Renaissance?
The answer comes from the law of isolation as well as the brain’s modular organization. (Modularity is a fancy term for the notion that different brain structures are specialized for different functions.) Nadia’s social awkwardness, emotional immaturity, language deficits, and retardation all stem from the fact that many areas in her brain are damaged and function abnormally. But maybe—as I suggested in my book Phantoms in the Brain—there is a spared island of cortical tissue in her right parietal lobe, a region known to be involved in many spatial skills, including our sense of artistic proportion. If the right parietal lobe is damaged by a stroke or tumor, a patient often loses the ability to draw even a simple sketch. The pictures they manage to draw are usually detailed but lack fluidity of line and vividness. Conversely, I have noticed that when a patient’s left parietal lobe is damaged, his drawings sometimes actually improve. He starts leaving out irrelevant details. You might wonder if the right parietal lobe is the brain’s rasa module for artistic expression.
I suggest that poor functioning in many of Nadia’s brain areas results in freeing her spared right parietal—her rasa module—to get the lion’s share of her attentional resources. You and I could achieve such a thing only through years of training and effort. This hypothesis would explain why her art is so much more evocative than Leonardo’s. It may turn out that a similar explanation holds for autistic calculating prodigies: profoundly retarded children who can nonetheless perform astonishing feats of arithmetic like multiplying two 13-digit numbers in a matter of seconds. (Notice I said, “calculating,” not math. True mathematical talent may require not just calculation but a combination of several skills, including spatial visualization.) We know that the left parietal lobe is involved in numerical computation, since a stroke there will typically knock out a patient’s ability to subtract or divide. In calculating savants, the left parietal may be spared relative to the right. If all of the autistic child’s attention is allocated to this number module in the left parietal, the result would be a calculating prodigy rather than a drawing prodigy.
In an ironic twist, once Nadia reached adolescence, she became less autistic. She also completely lost her ability to draw. This observation lends credibility to the isolation idea. Once Nadia matured and gained some higher abilities, she could no longer allocate the bulk of her attention to the rasa module in her right parietal (implying, perhaps, that formal education can actually stifle some aspects of creativity).
In addition to reallocating attention, there may be actual anatomical changes in the brains of autistics that explain their creativity. Perhaps spared areas grow larger, attaining enhanced efficacy. So Nadia may have had an enlarged right parietal, especially the right angular gyrus, which would explain her profound artistic skills. Autistic children with savant skills are often referred to me by their parents, and one of these days I will get around to having their brains scanned to see if there are indeed spared islands of supergrown tissue. Unfortunately, this isn’t as easy as it sounds, as autistic children often find it very difficult to sit still in the scanner. Incidentally, Albert Einstein had huge angular gyri, and I once made the whimsical suggestion that this allowed him to combine numerical (left parietal) and spatial (right parietal) skills in extraordinary ways that we lesser mortals cannot even begin to imagine.
Evidence for the isolation principle in art can also be found in clinical neurology. For example, not long ago a physician wrote to me about epileptic seizures originating in his temporal lobes. (Seizures are uncontrolled volleys of nerve impulses that course through the brain the way feedback amplifies through a speaker and microphone.) Until his seizures began quite unexpectedly at the age of sixty, the physician had no interest whatsoever in poetry. Yet all of a sudden, voluminous rhyme poured out. It was a revelation, a sudden enrichment of his mental life, just when he was starting to get jaded.
A second example, from the elegant work of Bruce Miller, a neurologist at the University of California, San Francisco, concerns patients who late in life develop a form of rapidly progressive dementia and blunting of intellect. Called frontotemporal dementia, the disorder selectively affects the frontal lobes—the seat of judgment and of crucial aspects of attention and reasoning—and the temporal lobes, but it spares islands of parietal cortex. As their mental faculties deteriorate, some of these patients suddenly, much to their surprise and to the surprise of those around them, develop an extraordinary ability to paint and draw. This is consistent with my speculations about Nadia—that her artistic skills were the result of her spared, hyperfunctioning right parietal lobe.
These speculations on autistic savants and patients with epilepsy and frontotemporal dementia raise a fascinating question. Is it possible that we less-gifted, normal people also have latent artistic or mathematical talents waiting to be liberated by brain disease? If so, would it be possible to unleash these talents without actually damaging our brains or paying the price of destroying other skills? This seems like science fiction, but as the Australian physicist Allan Snyder has pointed out, it could be true. Maybe the idea could be tested.
I was mulling over this possibility during a recent visit to India when I received what must surely be the strangest phone call of my life (and that’s saying a lot). It was long distance, from a reporter at an Australian newspaper.
“Dr. Ramachandran, I’m sorry to bother you at home,” he said. “An amazing new discovery has been made. Can I ask you some questions about it?”
“Sure, go ahead.”
“You know Dr. Snyder’s idea about autistic savants?” he asked.
“Yes,” I said. “He suggests that in a normal child’s brain, lower visual areas create sophisticated three-dimensional representations of a horse or any other object. After all, that’s what vision evolved for. But as the child gradually learns more about the world, higher cortical areas generate more abstract, conceptual descriptions of a horse; for example, ‘it’s an animal with a long snout and four legs and a whisklike tail, etc.’ With time, the child’s view of the horse becomes dominated by these higher abstractions. He becomes more concept driven and has less access to the earlier, more visual representations that capture art. In an autistic child these higher areas fail to develop, so he is able to access these earlier representations in a manner that you and I can’t. Hence the child’s amazing talent in art. Snyder presents a similar argument for math savants that I find hard to follow.”
“What do you think of his idea?” the reporter asked.
“I agree with it and have made many of the same arguments,” I said. “But the scientific community has been highly skeptical, arguing that Snyder’s idea is too vague to be useful or testable. I disagree. Every neurologist has at least one story up her sleeve about a patient who suddenly developed a quirky new talent following a stroke or brain trauma. But the best part of his theory,” I continued, “is a prediction he made that now seems obvious in hindsight. He suggested that if you were to somehow temporarily inactivate ‘higher’ centers in a normal person’s brain, that person might suddenly be able to access the so-called lower representations and create beautiful drawings or start generating prime numbers.
“Now, what I like about this prediction is that it’s not just a thought experiment. We can use a device called a transcranial magnetic stimulator, or TMS, to harmlessly and temporarily inactivate portions of a normal adult’s brain. Would you then see a sudden efflorescence of artistic or mathematical talent while the inactivation lasted? And would this teach that person to transcend his usual conceptual blocks? If so, would he pay the penalty of losing his conceptual skills? And once the stimulation has caused him to overcome a block (if it does), can he then do it on his own without the magnet?”
“Well, Dr. Ramachandran,” said the reporter, “I have news for you. Two researchers, here in Australia, who were inspired in part by Dr. Snyder’s suggestion, actually tried the experiment. They recruited normal student volunteers and tried it out.”
“Really?” I said, fascinated. “What happened?”
“Well, they zapped the students’ brains with a magnet, and suddenly these students could effortlessly produce beautiful sketches. And in one case the student could generate prime numbers the same way some idiot savants do.”
The reporter must have sensed my bewilderment, because I remained silent.
“Dr. Ramachandran, are you still there? Can you still hear me?”
It took a whole minute for the impact to sink in. I have heard many strange things in my career as a behavioral neurologist, but this was without doubt the strangest.
I must confess I had (and still have) two very different reactions to this discovery. The first is sheer incredulity and skepticism. The observation doesn’t contradict anything we know in neurology (partly because we know so little), but it sounds outlandish. The very notion of some skill being enhanced by knocking out parts of the brain is bizarre—the sort of thing you would expect to see on The X-Files. It also smacks of the kind of pep talk you hear from motivational gurus who are forever telling you about all your hidden talents waiting to be awakened by purchasing their tapes. Or drug peddlers claiming their magic potions will elevate your mind to whole new dimensions of creativity and imagination. Or that absurd but tenaciously popular factoid about how people only use 10 percent of their brains—whatever that’s supposed to mean. (When reporters ask me about the validity of this claim, I usually tell them, “Well, that’s certainly true here in California.”)
My second reaction was, Why not? After all, we know that astonishing new talent can emerge relatively suddenly in frontotemporal dementia patients. That is, we know such unmasking by brain reorganization can happen. Given this existence proof, why should I be so shocked by the Australian discovery? Why should their observation with TMS be any less likely than Bruce Miller’s observations of patients with profound dementia?
The surprising aspect is the timescale. Brain disease takes years to develop and the magnet works in seconds. Does that matter? According to Allan Snyder, the answer is no. But I’m not so sure.
Perhaps we can test the idea of isolated brain regions more directly. One approach would be to use functional brain imaging such as fMRI, which you may recall measures changes in blood flow in the brain while the subject is doing something or looking at something. My ideas about isolation, along with Allan Snyder’s ideas, predict that, when you look at cartoon sketches or doodles of faces, you should get a higher activation of the face area than of areas dealing with color, topography, or depth. Alternatively, when you look at a color photo of a face, you should see the opposite: a decrement in the relative response to the face. This experiment has not been done.
Peekaboo, or Perceptual Problem Solving
The next aesthetic law superficially resembles isolation but is really quite different. It’s the fact that you can sometimes make something more attractive by making it less visible. I call it the “peekaboo principle.” For example, a picture of a nude woman seen behind a shower curtain or wearing diaphanous, skimpy clothes—an image that men would say approvingly “leaves something to the imagination”—can be much more alluring than a pinup of the same nude woman. Similarly, disheveled tresses that conceal half a face can be enchanting. But why is this so?
After all, if I am correct in saying that art involves hyperactivation of visual and emotional areas, a fully visible naked woman should be more attractive. If you are a heterosexual man, you would expect an unimpeded view of her breasts and genitalia to excite your visual centers more effectively than her partially concealed private parts. Yet often the opposite is true. Similarly, many women will find images of sexy, partially clad men to be more attractive than images of fully naked men.
We prefer this sort of concealment because we are hardwired to love solving puzzles, and perception is more like puzzle solving than most people realize. Remember the Dalmatian dog? Whenever we successfully solve a puzzle, we get rewarded with a zap of pleasure that is not all that different from the “Aha!” of solving a crossword puzzle or scientific problem. The act of searching for a solution to a problem—whether purely intellectual, like a crossword or logic puzzle, or purely visual, like “Where’s Waldo?”—is pleasing even before the solution is found. It’s fortunate that your brain’s visual centers are wired up to your limbic reward mechanisms. Otherwise, when you try to figure out how to convince the girl you like to sneak off into the bushes with you (working out a social puzzle) or chase that elusive prey or mate through the underbrush in dense fog (solving a fast-changing series of sensorimotor puzzles), you might give up too easily!
So, you like partial concealment and you like solving puzzles. To understand the peekaboo law you need to know more about vision. When you look at a simple visual scene, your brain is constantly resolving ambiguities, testing hypotheses, searching for patterns, and comparing current information with memories and expectations.
One naïve view of vision, perpetuated mainly by computer scientists, is that it involves a serial hierarchical processing of the image. Raw data comes in as picture elements, or pixels, in the retina and gets handed up through a succession of visual areas, like a bucket brigade, undergoing more and more sophisticated analysis at each stage, culminating in the eventual recognition of the object. This model of vision ignores the massive feedback projections that each higher visual area sends back to lower areas. These back projections are so massive that it’s misleading to speak of a hierarchy. My hunch is that at each stage in processing, a partial hypothesis, or best-fit guess, is generated about the incoming data and then sent back to lower areas to impose a small bias on subsequent processing. Several such best fits may compete for dominance, but eventually, through such bootstrapping, or successive iterations, the final perceptual solution emerges. It’s as though vision works top down rather than bottom up.
Indeed, the line between perceiving and hallucinating is not as crisp as we like to think. In a sense, when we look at the world, we are hallucinating all the time. One could almost regard perception as the act of choosing the one hallucination that best fits the incoming data, which is often fragmentary and fleeting. Both hallucinations and real perceptions emerge from the same set of processes. The crucial difference is that when we are perceiving, the stability of external objects and events helps anchor them. When we hallucinate, as when we dream or float in a sensory deprivation tank, objects and events wander off in any direction.
To this model I’d add the notion that each time a partial fit is discovered, a small “Aha!” is generated in your brain. This signal is sent to limbic reward structures, which in turn prompt the search for additional, bigger “Ahas!,” until the final object or scene crystallizes. In this view, the goal of art is to create images that generate as many mutually consistent mini-“Aha!” signals as possible (or at least a judicious saturation of them) to titillate the visual areas in your brain. Art in this view is a form of visual foreplay for the grand climax of object recognition.
The law of perceptual problem solving, or peekaboo, should now make more sense. It may have evolved to ensure that the search for visual solutions is inherently pleasurable rather than frustrating, so that you don’t give up too easily. Hence the appeal of a nude behind semitransparent clothes or the smudged water lilies of Monet.1
The analogy between aesthetic joy and the “Aha!” of problem solving is compelling, but analogies can only get us so far in science. Ultimately, we need to ask, What is the actual neural mechanism in the brain that generates the aesthetic “Aha!”?
One possibility is that when certain aesthetic laws are deployed, a signal is sent from your visual areas directly to your limbic structures. As I noted, such signals may be sent from other brain areas at every stage in the perceptual process (by grouping, boundary recognition, and so on) in what I call visual foreplay, and not just from the final stage of object recognition (“Wow! It’s Mary!”). How exactly this happens is unclear, but there are known anatomical connections that go back and forth between limbic structures, such as the amygdala, and other brain areas at almost every stage in the visual hierarchy. It’s not hard to imagine these being involved in producing mini-“Ahas!” The phrase “back and forth” is critical here; it allows artists to simultaneously tap into multiple laws to evoke multiple layers of aesthetic experience.
Back to grouping: There may be a powerful synchronization of nerve impulses from widely separated neurons signaling the features that are grouped. Perhaps this synchrony itself is what subsequently activates limbic neurons. Some such process may be involved in creating the pleasing and harmonious resonance between different aspects of what appears on the surface to be a single great work of art.
We know there are neural pathways directly linking many visual areas with the limbic structures. Remember David, the patient with Capgras syndrome from Chapter 2? His mother looks like an imposter to him because the connections between his visual centers and his limbic structures were severed by an accident, so he doesn’t get the expected emotional jolt when seeing his mom. If such a disconnection between vision and emotion is the basis of the syndrome, then Capgras patients should not be able to enjoy visual art. (Although they should still enjoy music, since the hearing centers in their cortices are not disconnected from their limbic systems.) Given the rarity of the syndrome this isn’t easy to test, but there are, in fact, cases of Capgras patients in the older literature who claimed that landscapes and flowers were suddenly no longer beautiful.
Furthermore, if my reasoning about multiple “Ahas!” is correct—in that the reward signal is generated at every stage in the visual process, not just in the final stage of recognition—then people with Capgras syndrome should not only have problems enjoying a Monet but also take much longer to find the Dalmatian dog. They should also have problems solving simple jigsaw puzzles. These are predictions that, to my knowledge, have not been directly tested.
Until we have a clearer understanding of the connections between the brain’s reward systems and visual neurons, it’s also best to postpone discussing certain questions like these: What’s the difference between mere visual pleasure (as when seeing a pinup) and a visual aesthetic response to beauty? Does the latter merely produce a heightened pleasure response in your limbic system (as the stick with three stripes does for the gull chick, described in Chapter 7), or is it, as I suspect, an altogether richer and more multidimensional experience? And how about the difference between the “Aha!” of mere arousal versus the “Aha!” of aesthetic arousal? Isn’t the “Aha!” signal just as big with any old arousal—such as being surprised, scared, or sexually stimulated—and if so, how does the brain distinguish these other types of arousal from a true aesthetic response? It may turn out that these distinctions aren’t as watertight as they seem; who would deny that eros is a vital part of art? Or that an artist’s creative spirit often derives its sustenance from a muse?
I’m not saying these questions are unimportant; in fact, it’s best to be aware of them right up front. But we have to be careful not to give up the whole enterprise just because we cannot yet provide complete answers to every quandary. On the contrary, we should be pleased that the process of trying to discover aesthetic universals has thrown up these questions we are forced to confront.
Abhorrence of Coincidences
When I was a ten-year-old schoolboy in Bangkok, Thailand, I had a wonderful art teacher named Mrs. Vanit. During a class assignment, we were asked to produce landscapes, and I produced a painting that looked a bit like Figure 8.2a—a palm tree growing between two hills.
Mrs. Vanit frowned as she looked at the picture and said, “Rama, you should put the palm tree a bit off to one side, not exactly between the hills.”
I protested, “But Mrs. Vanit, surely there’s nothing logically impossible about this scene. Maybe the tree is growing in such a way that its trunk coincides exactly with the V between the hills. So why do you say the picture is wrong?”
FIGURE 8.2 Two hills with a tree in the middle. (a) The brain dislikes unique vantage points and (b) prefers generic ones.
“Rama, you can’t have coincidences in pictures,” said Mrs. Vanit.
The truth was that neither Mrs. Vanit nor I knew the answer to my question at the time. I now realize that my drawing illustrates one of the most important laws in aesthetic perception: the abhorrence of coincidences.
Imagine that Figure 8.2a depicts a real visual scene. Look carefully and you’ll realize that in real life, you could only see the scene in Figure 8.2a from one vantage point, whereas you could see the one in Figure 8.2b from any number of vantage points. One viewpoint is unique and one is generic. As a class, images like the one in Figure 8.2b are much more common. So Figure 8.2a is—to use a phrase introduced by Horace Barlow—“a suspicious coincidence.” And your brain always tries to find a plausible alternate, generic interpretation to avoid the coincidence. In this case it doesn’t find one and so the image isn’t pleasing.
Now let’s look at a case where a coincidence does have an interpretation. Figure 8.3 shows the famous illusory triangle described by Italian psychologist Gaetano Kanizsa. There really isn’t a triangle. It’s just three black Pac-Man-like figures facing one another. But you perceive an opaque white triangle whose three corners partially occlude three black circular discs. Your brain says (in effect), “What’s the likelihood that these three Pac-Men are lined up exactly like this simply by chance? It’s too much of a suspicious coincidence. A more plausible explanation is that it depicts an opaque white triangle occluding three black discs.” Indeed, you can almost hallucinate the edges of the triangle. So in this case your visual system has found a way of explaining the coincidence (eliminating it, you might say) by coming up with an interpretation that feels good. But in the case of the tree centered in the valley, your brain struggles to find an interpretation of the coincidence and is frustrated because there isn’t one.
FIGURE 8.3 Three black discs with pie-shaped wedges removed from them: The brain prefers to see this arrangement as an opaque white triangle whose corners partially occlude circular discs.
Orderliness
The law of what I loosely call “orderliness,” or regularity, is clearly important in art and design, especially the latter. Again, this principle is so obvious that it’s hard to talk about it without sounding banal, but a discussion of visual aesthetics is not complete without it. I will lump a number of principles under this category which have in common an abhorrence for deviation from expectations (for instance, the preference for rectilinearity and parallel edges and for the use of repetitive motifs in carpets). I will touch on these only briefly because many art historians, like Ernst Gombrich and Rudolf Arnheim, have already discussed them extensively.
Consider a picture frame hanging on the wall, slightly tilted. It elicits an immediate negative reaction that is wildly out of proportion to the deviation. The same holds for a drawer that doesn’t close completely because there’s a piece of crumpled paper wedged in it and sticking out. Or an envelope with a single tiny hair accidentally caught under the sealed portion. Or a tiny piece of lint on an otherwise flawless suit. Why we react this way is far from clear. Some of it seems to be simple hygiene, which has both learned and instinctive components. Disgust with dirty feet is surely a cultural development, while picking a piece of lint out of your child’s hair might derive from the primate grooming instinct.
The other examples, such as the tilted frame or the drawer that won’t quite close, seem to imply that our brains have a built-in need to impose regularity or predictability, although this doesn’t explain much.
It’s unlikely that all examples of regularity or predictability embody the same law. A closely related law, for example, is our love of visual repetition or rhythm, such as floral motifs used in Indian art and Persian carpets. But it’s hard to imagine that this exemplifies the same law as our fondness for a straightly hung picture frame. The only thing the two have in common, at a very abstract level, is that both involve predictability. In each case the need for regularity or order may reflect a deeper need your visual system has for economy of processing.
Sometimes deviations from predictability and order are used by designers and artists to create pleasing effects. So why should some deviations, like a tilted frame, be ugly while others—say, a beauty spot placed asymmetrically near the angle of the mouth of Cindy Crawford, rather than being in the middle of her chin or nose—be attractive? The artist seems to strike a balance between extreme regularity, which is boring, and complete chaos. For example, if she uses a motif of repeating small flowers framing a sculpture of a goddess, she may try to break the monotony of the repetition by adding some more widely spaced large flowers to create two overlapping rhythms of different periodicity. Whether there has to be a certain mathematical relationship between the two scales of repetition and what kind of phase shifts between the two are permissible are good questions—yet to be answered.
Symmetry
Any child who has played with a kaleidoscope and any lover who has seen the Taj Mahal have been under the spell of symmetry. Yet even though designers recognize its allure and poets use it to flatter, the question of why symmetrical objects should be pretty is rarely raised.
Two evolutionary forces might explain the allure of symmetry. The first explanation is based on the fact that vision evolved mainly for discovering objects, whether for grabbing, dodging, mating, eating, or catching. But your visual field is always crammed full of objects: trees, fallen logs, splotches of color on the ground, rushing brooks, clouds, outcroppings of rocks, and on and on. Given that your brain has limited attentional capacity, what rules of thumb might it employ to ensure attention gets allocated to where it’s most needed? How does your brain come up with a hierarchy of precedence rules? In nature, “important” translates into “biological objects” such as prey, predator, member of the same species, or mate, and all such objects have one thing in common: symmetry. This would explain why symmetry grabs your attention and arouses you, and by extension, why the artist or architect can exploit this trait to good effect. It would explain why a newborn baby prefers looking at symmetrical inkblots over asymmetrical ones. The preference likely taps a rule of thumb in the baby’s brain that says, in effect, “Hey, something symmetrical. That feels important. I should keep looking.”
The second evolutionary force is more subtle. By presenting a random sequence of faces with varying degrees of symmetry to college undergraduates (the usual guinea pigs in such experiments), psychologists have found that the most symmetrical faces are generally judged to be the most attractive. This in itself is hardly surprising; no one expects the twisted visage of Quasimodo to be attractive. But intriguingly, even minor deviations are not tolerated. Why?
The surprising answer comes from parasites. Parasitic infestation can profoundly reduce the fertility and fecundity of a potential mate, so evolution places a very high premium on being able to detect whether your mate is infected. If the infestation occurred in early fetal life or infancy, one of the most obvious externally visible signs is a subtle loss of symmetry. Therefore, symmetry is a marker, or flag, for good health, which in turn is an indicator of desirability. This argument explains why your visual system finds symmetry appealing and asymmetry disturbing. It’s an odd thought that so many aspects of evolution—even our aesthetic preferences—are driven by the need to avoid parasites. (I once wrote a satirical essay suggesting that “gentlemen prefer blondes” for the same reason. It’s much easier to detect anemia and jaundice caused by parasites in a light-skinned blonde than in a swarthy brunette.)
Of course, this preference for symmetrical mates is largely unconscious. You are completely unaware that you are doing it. What a fitting bit of symmetry that the same evolutionary quirk in the great Mogul emperor Shah Jahan’s brain that caused him to select the perfectly symmetrical, parasite-free face of his beloved Mumtaz, also caused him to construct the exquisitely symmetrical Taj Mahal itself, a universal symbol of eternal love!
But we must now deal with the apparent exceptions. Why is a lack of symmetry appealing at times? Imagine you are arranging furniture, pictures, and other accessories in a room. You don’t need a professional designer to tell you that total symmetry won’t work (although within the room you can have islands of symmetry, such as a rectangular table with symmetrically placed chairs). On the contrary, you need carefully chosen asymmetry to create the most dramatic effects. The clue to resolving this paradox comes from the observation that the symmetry rule applies only to objects, not to large-scale scenes. This makes perfect evolutionary sense because a predator, a prey, a friend, or a mate is always an isolated, independent object.
Your preference for symmetrical objects and asymmetrical scenes is also reflected in the “what” and “how” (sometimes called “where”) streams of your brain’s visual processing. The “what” stream (one of two subpathways in the new pathway) flows from your primary visual areas toward your temporal lobes, and concerns itself with discrete objects and the spatial relationships of features within objects, such as the internal proportions of a face. The “how” stream flows from your primary visual areas toward your parietal lobes and concerns itself more with your general surroundings and the relationships between objects (such as the distance between you, the gazelle you’re chasing, and the tree it’s about to dodge behind). It’s no surprise that a preference for symmetry is rooted in the “what” stream, where it is needed. So the detection and enjoyment of symmetry is based on object-centered algorithms in your brain, not scene-centered ones. Indeed, objects placed symmetrically in a room would look downright silly because, as we have seen, the brain dislikes coincidences it can’t explain.
Metaphor
The use of metaphor in language is well known, but it’s not widely appreciated that it’s also used extensively in visual art. In Figure 8.4 you see a sandstone sculpture from Khajuraho in Northern India, circa A.D. 1100. The sculpture depicts a voluptuous celestial nymph who arches her back to gaze upward as if aspiring to God or heaven. She probably occupied a niche at the base of a temple. Like most Indian nymphs she has a narrow waist weighed down heavily by big hips and breasts. The arch of the bough over her head closely follows the curvature of her arm (a postural example of a grouping principle called closure). Notice the plump, ripe mangoes dangling from the branch which, like the nymph herself, are a metaphor of the fertility and fecundity of nature. In addition, the plumpness of the mangoes provides a sort of visual echo of the plumpness and ripeness of her breasts. So there are multiple layers of metaphor and meaning in the sculpture, and the result is incredibly beautiful. It’s almost as though the multiple metaphors amplify each other, although why this internal resonance and harmony should be especially pleasing is anybody’s guess.
I find it intriguing that the visual metaphor is probably understood by the right hemisphere long before the more literal-minded left hemisphere can spell out the reasons. (Unlike a lot of flaky pop psychology lore about hemispheric specialization, this particular distinction probably does have a grain of truth.) I am tempted to suggest that there is ordinarily a translation barrier between the left hemisphere’s language-based, propositional logic and the more oneiric (dreamlike), intuitive “thinking” (if that’s the right word) of the right, and that great art sometimes succeeds by dissolving this barrier. How often have you listened to a strain of music that evokes a richness of meaning far more subtle than anything the philistine left hemisphere can articulate?
FIGURE 8.4 A stone nymph below an arching bough, looking heavenward for divine inspiration. Khajuraho, India, eleventh century.
A more mundane example is certain attention-grabbing tricks used by designers. The word “tilt” printed in visually tilted letters produces a comical yet pleasing effect. This tempts me to posit a separate law of aesthetics, which we might call “visual resonance,” or “echo” (although I am wary of falling into the trap that some Gestaltists fell into of calling every observation a law). Here the resonance is between the concept conveyed by the word “tilt” and its actual literal tilt, blurring the boundary between conception and perception.
In comics, words like “scared,” “fear,” or “shiver” are often printed in wiggly lines as if the letters themselves were trembling. Why is this so effective? I’d say it is because the wiggly line is a spatial echo of your own shiver, which in turn resonates with the concept of fear. It may be that watching someone tremble (or a tremble depicted metaphorically by wiggly letters) makes you echo the tremble ever so slightly because it prepares you to run away, anticipating the predator that may have caused the other person to tremble. If so, your reaction time for detecting the word “fear” depicted in wiggly letters might be much shorter than if the word were depicted in straight, smooth letters, an idea that can be tested in the laboratory.2
I will conclude my comments on the aesthetic law of metaphor with Indian art’s greatest icon: The Dancing Shiva, or Nataraja. In Chennai (Madras), there is a bronze gallery in the state museum that houses a magnificent collection of southern Indian bronzes. One of its prize works is a twelfth-century Nataraja (Figure 8.5). One day around the turn of the twentieth century, an elderly firangi (“foreigner” or “white” in Hindi) gentleman was observed gazing at the Nataraja in awe. To the amazement of the museum guards and patrons, he went into a sort of trance and proceeded to mimic the dance postures. A crowd gathered around, but the gentleman seemed oblivious until the curator finally showed up to see what was going on. He almost had the poor man arrested until he realized the European was none other than the world-famous sculptor Auguste Rodin. Rodin was moved to tears by The Dancing Shiva. In his writings he referred to it as one of the greatest works of art ever created by the human mind.
You don’t have to be religious or Indian or Rodin to appreciate the grandeur of this bronze. At a very literal level, it depicts the cosmic dance of Shiva, who creates, sustains, and destroys the Universe. But the sculpture is much more than that; it is a metaphor of the dance of the Universe itself, of the movement and energy of the cosmos. The artist depicts this sensation through the skillful use of many devices. For example, the centrifugal motion of Shiva’s arms and legs flailing in different directions and the wavy tresses flying off his head symbolize the agitation and frenzy of the cosmos. Yet right in the midst of all this turbulence—this fitful fever of life—is the calm spirit of Shiva himself. He gazes at his own creation with supreme tranquility and poise. How skillfully the artist has combined these seemingly antithetical elements of movement and energy, on the one hand, and eternal peace and stability on the other. This sense of something eternal and stable (God, if you like) is conveyed partly by Shiva’s slightly bent left leg, which gives him balance and poise even in the midst of his frenzy, and partly by his serene, tranquil expression, which conveys a sense of timelessness. In some Nataraja sculptures this peaceful expression is replaced by an enigmatic half-smile, as though the great god were laughing at life and death alike.
FIGURE 8.5 Nataraja depicting the cosmic dance of Shiva. Southern India, Chola period, twelfth century.
This sculpture has many layers of meaning, and indologists like Heinrich Zimmer and Ananda Coomaraswamy wax lyrical about them. While most Western sculptors try to capture a moment or snapshot in time, the Indian artist tries to convey the very nature of time itself. The ring of fire symbolizes the eternal cyclical nature of creation and destruction of the Universe, a common theme in Eastern philosophy, which is also occasionally hit upon by thinkers in the West. (I am reminded in particular of Fred Hoyle’s theory of the oscillating universe.) One of Shiva’s right hands holds a tambour, which beats the Universe into creation and also represents perhaps the pulse beat of animate matter. But one of his left hands holds the fire that not only heats up and energizes the universe but also consumes it, allowing destruction to perfectly balance out creation in the eternal cycle. And so it is that the Nataraja conveys the abstract, paradoxical nature of time, all devouring yet ever creative.
Below Shiva’s right foot is a hideous demonic creature called Apasmara, or “the illusion of ignorance,” which Shiva is crushing. What is this illusion? It’s the illusion that all of us scientific types suffer from, that there is nothing more to the Universe than the mindless gyrations of atoms and molecules, that there is no deeper reality behind appearances. It is also the delusion of some religions that each of us has a private soul who is watching the phenomena of life from his or her own special vantage point. It is the logical delusion that after death there is nothing but a timeless void. Shiva is telling us that if you destroy this illusion and seek solace under his raised left foot (which he points to with one of his left hands), you will realize that behind external appearances (Maya), there is a deeper truth. And once you realize this, you see that, far from being an aloof spectator, here to briefly watch the show until you die, you are in fact part of the ebb and flow of the cosmos—part of the cosmic dance of Shiva himself. And with this realization comes immortality, or moksha: liberation from the spell of illusion and union with the supreme truth of Shiva himself. There is, in my mind, no greater instantiation of the abstract idea of god—as opposed to a personal God—than the Shiva/Nataraja. As the art critic Coomaraswamy says, “This is poetry, but it is science nonetheless.”
I am afraid I have strayed too far afield. This is a book about neurology, not Indian art. I showed you the Shiva/Nataraja only to underscore that the reductionist approach to aesthetics presented in this chapter is in no way meant to diminish great works of art. On the contrary, it may actually enhance our appreciation of their intrinsic value.
I OFFER THESE nine laws as a way to explain why artists create art and why people enjoy viewing it.3 Just as we consume gourmet food to generate complex, multidimensional taste and texture experiences that titillate our palate, we appreciate art as gourmet food for the visual centers in the brain (as opposed to junk food, which is analogous to kitsch). Even though the rules that artists exploit originally evolved because of their survival value, the production of art itself doesn’t have survival value. We do it because it’s fun and that’s all the justification it needs.
But is that the whole story? Apart from its role in pure enjoyment, I wonder if there might be other, less obvious reasons why humans engage in art so passionately. I can think of four candidate theories. They are about the value of art itself, not merely of aesthetic enjoyment.
First, there is the very clever, if somewhat cheeky and cynical, suggestion favored by Steven Pinker that acquiring or owning unique, one-of-a-kind works may have been a status symbol to advertise superior access to resources (a psychological rule of thumb evolved for assessing superior genes). This is especially true today as the increasing availability of mass copying methods places an ever higher premium (from the art buyer’s perspective) on owning an original—or at least (from the art seller’s perspective) on fooling the buyer into the mock status conferred by purchasing limited-edition prints. No one who has been to an art show cocktail reception in Boston or La Jolla can fail to see that there is some truth to this view.
Second, Geoffrey Miller, the evolutionary psychologist at the University of New Mexico, and others have proposed the ingenious idea that art evolved to advertise to potential mates the artist’s manual dexterity and hand-eye coordination. This was promptly dubbed the “come up and see my etchings” theory of art. Like the male bowerbird, the male artist is in effect telling his muse, “Look at my pictures. They show I have excellent hand-eye coordination and a complex, well-integrated brain—genes I’ll pass on to your babies.” There is an irritating grain of truth to Miller’s idea, but personally I don’t find it very convincing. The main problem is that it doesn’t explain why the advertisement should take the form of art. It seems like overkill. Why not directly advertise this ability to potential mates by showing off your skills in archery or athletic prowess in soccer? If Miller is right, women should find the ability to knit and embroider very attractive in potential husbands, given that it requires superb manual dexterity, even though most women, even feminists, don’t value such skills in a man. Miller might argue that women value not the dexterity and skill per se but the creativity that underlies the finished product. But despite its supreme cultural importance to humans, the biological survival value of art as an index of creativity is dubious given that it doesn’t necessarily spill over into other domains. (Just look at the number of starving artists!)
Notice that Pinker’s theory predicts that women should hover around the buyers, whereas Miller’s theory predicts they should hover around the starving artists themselves.
To these ideas I’ll add two more. To understand them you need to consider the seventeen-thousand-year-old cave art from Lascaux, France. These cave-wall images are hauntingly beautiful even to the modern eye. To achieve them, the artists must have used some of the same aesthetic laws used by modern artists. For example, the bison are mostly depicted as outline drawings (isolation), and bison-like characteristics such as the small head and large hump are grossly exaggerated. Basically, it’s a caricature (peak shift) of a bison, created by unconsciously subtracting the average generic hoofed quadruped from a bison and amplifying the differences. But apart from just saying, “They made these images just to enjoy them,” can we say anything more?
Humans excel at visual imagery. Our brains evolved this ability to create an internal mental picture or model of the world in which we can rehearse forthcoming actions, without the risks or the penalties of doing them in the real world. There are even hints from brain-imaging studies by Harvard University psychologist Steve Kosslyn showing that your brain uses the same regions to imagine a scene as it does to actually view one.
But evolution has seen to it that such internally generated representations are never as authentic as the real thing. This is a wise bit of self-restraint on your genes’ part. If your internal model of the world were a perfect substitute, then anytime you felt hungry you could simply imagine yourself at a banquet, consuming a feast. You would have no incentive to find real food and would soon starve to death. As the Bard said, “You cannot cloy the hungry edge of appetite by bare imagination of a feast.”
Likewise, a creature that developed a mutation that allowed it to imagine orgasms would fail to pass on its genes and would quickly become extinct. (Our brains evolved long before porn videos, Playboy magazine, and sperm banks.) No “imagine orgasm” gene is likely to make a big splash in the gene pool.
Now what if our hominin ancestors were worse than us at mental imagery? Imagine they wanted to rehearse a forthcoming bison or lion hunt. Perhaps it was easier to engage in realistic rehearsal if they had actual props, and perhaps these props are what we today call cave art. They may have used these painted scenes in much the way that a child enacts imaginary fights between his toy soldiers, as a form of play to educate his internal imagery. Cave art could also have been used for teaching hunting skills to novices. Over several millennia these skills would become assimilated into culture and acquired religious significance. Art, in short, may be nature’s own virtual reality.
Finally, a fourth, less prosaic reason for art’s timeless appeal may be that it speaks an oneiric, right-hemisphere-based language that is unintelligible—alien, even—to the more literal-minded left hemisphere. Art conveys nuances of meaning and subtleties of mood that can only be dimly apprehended or conveyed through spoken language. The neural codes used by the two hemispheres for representing higher cognitive functions may be utterly different. Perhaps art facilitates communion between these two modes of thinking that would otherwise remain mutually unintelligible and walled off. Perhaps emotions also need a virtual reality rehearsal to increase their range and subtlety for future use, just as we engage in athletics for motor rehearsal and frown over crossword puzzles or ponder over Gödel’s theorem for intellectual invigoration. Art, in this view, is the right hemisphere’s aerobics. It’s a pity that it isn’t emphasized more in our schools.
SO FAR, WE have said very little about the creation—as opposed to the perception—of art. Steve Kosslyn of Harvard and Martha Farah of the University of Pennsylvania have used brain-imaging techniques to show that creatively conjuring up a visual image probably involves the inner (ventromedial cortex) portion of the frontal lobes. This portion of the brain has back-and-forth connections with parts of the temporal lobes concerned with visual memories. A crude template of the desired image is initially evoked through these connections. Back-and-forth interactions between this template and what’s being painted or sculpted lead to progressive embellishments and refinements of the painting, resulting in the multiple, stage-by-stage mini-“Ahas!” we spoke of earlier. When the self-amplifying echoes between these layers of visual processing reach a critical volume, they get delivered as a final, kick-ass “Aha!” to reward centers such as the septal nuclei and the nucleus accumbens. The artist can then relax with her cigarette, cognac, and muse.
Thus the creative production of art and the appreciation of art may be tapping into the same pathways (except for the frontal involvement in the former). We have seen that faces and objects enhanced through peak shifts (caricatures, in other words) hyperactivate cells in the fusiform gyrus. Overall scene layout—as in landscape paintings—probably requires the right inferior parietal lobule, whereas the “metaphorical,” or conceptual, aspects of art might require both the left and right angular gyri. A more thorough study of artists with damage to different portions of either the right or left hemisphere might be worthwhile—especially bearing in mind our laws of aesthetics.
Clearly we have a long way to go. Meanwhile, it’s fun to speculate. As Charles Darwin said in his Descent of Man,
false facts are highly injurious to the progress of science, for they often endure long; but false views, if supported by some evidence, do little harm, for everyone takes a salutary pleasure in proving their falseness; and when this is done, one path toward errors is closed and the road to truth is often at the same time opened.
CHAPTER 9
An Ape with a Soul: How Introspection Evolved
Hang up philosophy! Unless philosophy can make a Juliet…
—WILLIAM SHAKESPEARE
JASON MURDOCH WAS AN INPATIENT AT A REHABILITATION CENTER in San Diego. After a serious head injury in a car accident near the Mexican border, he had been in a semiconscious state of vigilant coma (also called akinetic mutism) for nearly three months before my colleague, Dr. Subramaniam Sriram, examined him. Because of damage to the anterior cingulate cortex in the front of his brain, Jason couldn’t walk, talk, or initiate actions. His sleep-wake cycle was normal but he was bedridden. When awake he seemed alert and conscious (if that’s the right word—words lose their resolving power when dealing with such states). He sometimes had slight “ouch” withdrawal in response to pain, but not consistently. He could move his eyes, often swiveling them around to follow people. Yet he couldn’t recognize anyone—not even his parents or siblings. He could not talk or comprehend speech, nor could he interact with people meaningfully.
But if his father, Mr. Murdoch, phoned him from next door, Jason suddenly became alert and talkative, recognizing his dad and engaging him in conversation. That is, until Mr. Murdoch went back into the room. Then Jason lapsed back into his semiconscious “zombie” state. Jason’s cluster of symptoms has a name: telephone syndrome. He could be made to flip back and forth between the two states, depending on whether his father was directly in his presence or not.
Think of what this means. It is almost as if there are two Jasons trapped inside one body: the Jason on the phone, who is fully alert and conscious, and the Jason in person, who is a barely conscious zombie. How can this be? The answer has to do with how the accident affected the visual and auditory pathways in Jason’s brain. To a surprising extent, the activity of each pathway—vision and hearing—must be segregated all the way up to the critically important anterior cingulate. This collar of tissue, as we shall see, is where your sense of free will partly originates.
If the anterior cingulate is extensively damaged, the result is the full picture of akinetic mutism; unlike Jason, the patient is in a permanent twilight state, not interacting with anyone under any circumstances. But what if the damage to the anterior cingulate is more subtle—say, the visual pathway to the anterior cingulate is damaged selectively at some stage, but the auditory pathway is fine? The result is telephone syndrome: Jason springs to action (speaking metaphorically!) when chatting on the phone but lapses into akinetic mutism when his father walks into the room. Except when he is on the telephone, Jason is no longer a person.
I am not making this distinction arbitrarily. Although Jason’s visuomotor system can still track and automatically attend to objects in space, he cannot recognize or attribute meaning to what he sees. Except when he is on the phone with his father, Jason lacks the ability to form rich, meaningful metarepresentations, which are essential to not only our uniqueness as a species but also our uniqueness as individuals and our sense of self.
Why is Jason a person when he is on the phone but not otherwise? Very early in evolution the brain developed the ability to create first-order sensory representations of external objects that could elicit only a very limited number of reactions. For example, a rat’s brain has only a first-order representation of a cat—specifically, as a furry, moving thing to avoid reflexively. But as the human brain evolved further, there emerged a second brain—a set of nerve connections, to be exact—that was in a sense parasitic on the old one. This second brain creates metarepresentations (representations of representations—a higher order of abstraction) by processing the information from the first brain into manageable chunks that can be used for a wider repertoire of more sophisticated responses, including language and symbolic thought. This is why, instead of just “the furry enemy” that it is for the rat, the cat appears to you as a mammal, a predator, a pet, an enemy of dogs and rats, a thing that has ears, whiskers, a long tail, and a meow; it even reminds you of Halle Berry in a latex suit. It also has a name, “cat,” symbolizing the whole cloud of associations. In short, the second brain imbues an object with meaning, creating a metarepresentation that allows you to be consciously aware of a cat in a way that the rat isn’t.
Metarepresentations are also a prerequisite for our values, beliefs, and priorities. For example, a first-order representation of disgust is a visceral “avoid it” reaction, while a metarepresentation would include, among other things, the social disgust you feel toward something you consider morally wrong or ethically inappropriate. Such higher-order representations can be juggled around in your mind in a manner that is unique to humans. They are linked to our sense of self and enable us to find meaning in the outside world—both material and social—and allow us to define ourselves in relation to it. For example, I can say, “I find her attitude toward emptying the cat litter box disgusting.”
The visual Jason is essentially dead and gone as a person, because his ability to have metarepresentations of what he sees is compromised.1 But the auditory Jason lives on; his metarepresentations of his father, his self, and their life together are largely intact as activated via the auditory channels of his brain. Intriguingly, the hearing Jason is temporarily switched off when Mr. Murdoch appears in person to talk to his son. Perhaps because the human brain emphasizes visual processing, the visual Jason stifles his auditory twin.
Jason presents a striking case of a fragmented self. Some of the “pieces” of Jason have been destroyed, yet others have been preserved and retain a surprising degree of functionality. Is Jason still Jason if he can be broken into fragments? As we shall see, a variety of neurological conditions show us that the self is not the monolithic entity it believes itself to be. This conclusion flies directly in the face of some of our most deep-seated intuitions about ourselves—but data are data. What the neurology tells us is that the self consists of many components, and the notion of one unitary self may well be an illusion.
SOMETIME IN THE twenty-first century, science will confront one of its last great mysteries: the nature of the self. That lump of flesh in your cranial vault not only generates an “objective” account of the outside world but also directly experiences an internal world—a rich mental life of sensations, meanings, and feelings. Most mysteriously, your brain also turns its view back on itself to generate your sense of self-awareness.
The search for the self—and the solutions to its many mysteries—is hardly a new pursuit. This area of study has traditionally been the preserve of philosophers, and it is fair to say that on the whole they haven’t made a lot of progress (though not for want of effort; they have been at it for two thousand years). Nonetheless, philosophy has been extremely useful in maintaining semantic hygiene and emphasizing the need for clarity in terminology.2 For example, people often use the word “consciousness” loosely to refer to two different things. One is qualia—the immediate experiential qualities of sensation, such as the redness of red or the pungency of curry—and the second is the self who experiences these sensations. Qualia are vexing to philosophers and scientists alike because even though they are palpably real and seem to lie at the very core of mental experience, physical and computational theories about brain function are utterly silent on the question of how they might arise or why they might exist.
Let me illustrate the problem with a thought experiment. Imagine an intellectually highly advanced but color-blind Martian scientist who sets out to understand what humans mean when they talk about color. With his Star Trek–level technology he studies your brain and completely figures out down to every last detail what happens when you have mental experiences involving the color red. At the end of his study he can account for every physicochemical and neurocomputational event that occurs when you see red, think of red, or say “red.” Now ask yourself: Does this account encompass everything there is to the ability to see and think about redness? Can the color-blind Martian now rest assured that he understands your alien mode of visual experience even though his brain is not wired to respond to that particular wavelength of electromagnetic radiation? Most people would say no. Most would say that no matter how detailed and accurate this objective, outside description of color cognition might be, it has a gaping hole at its center because it leaves out the quale of redness. (“Quale,” pronounced “kwah-lee,” is the singular form of “qualia.”) Indeed, there is no way you can convey the ineffable quality of redness to someone else short of hooking up your brain directly to that person’s brain.