CHAPTER 2
This Teetering Bulb of Dread and Dream
What Is a “Brain Structure”?
I HAVE often been asked, when people hear that my research amounts to a quest after the hidden machinery of human thought, “Oh, so that means that you study the brain?”
One part of me wants to reply, “No, no — I think about thinking. I think about how concepts and words are related, what ‘thinking in French’ is, what underlies slips of the tongue and other types of errors, how one event effortlessly reminds us of another, how we recognize written letters and words, how we understand sloppily spoken, slurred, slangy speech, how we toss off untold numbers of utterly bland-seeming yet never-before-made analogies and occasionally come up with sparklingly original ones, how each of our concepts grows in subtlety and fluidity over our lifetime, and so forth. I don’t think in the least about the brain — I leave the wet, messy, tangled web of the brain to the neurophysiologists.”
Another part of me, however, wants to reply, “Of course I think about the human brain. By definition, I think about the brain, since the human brain is precisely the machinery that carries out human thinking.”
This amusing contradiction has forced me to ask myself, “What do I mean, and what do other people mean, by ‘brain research’?”, and this leads naturally to the question, “What are the structures in the brain that someone could in principle study?” Most neuroscientists, if they were asked such a question, would make a list that would include (at least some of) the following items (listed roughly in order of physical size):
amino acids
neurotransmitters
DNA and RNA
synapses
dendrites
neurons
Hebbian neural assemblies
columns in the visual cortex
area 19 of the visual cortex
the entire visual cortex
the left hemisphere
Although these are all legitimate and important objects of neurological study, to me this list betrays a limited point of view. Saying that studying the brain is limited to the study of physical entities such as these would be like saying that literary criticism must focus on paper and bookbinding, ink and its chemistry, page sizes and margin widths, typefaces and paragraph lengths, and so forth. But what about the high abstractions that are the heart of literature — plot and character, style and point of view, irony and humor, allusion and metaphor, empathy and distance, and so on? Where did these crucial essences disappear to in the list of topics for literary critics?
My point is simple: abstractions are central, whether in the study of literature or in the study of the brain. Accordingly, I herewith propose a list of abstractions that “researchers of the brain” should be just as concerned with:
the concept “dog”
the associative link between the concepts “dog” and “bark”
object files (as proposed by Anne Treisman)
frames (as proposed by Marvin Minsky)
memory organization packets (as proposed by Roger Schank)
long-term memory and short-term memory
episodic memory and melodic memory
analogical bridges (as proposed by my own research group)
mental spaces (as proposed by Gilles Fauconnier)
memes (as proposed by Richard Dawkins)
the ego, id, and superego (as proposed by Sigmund Freud)
the grammar of one’s native language
sense of humor
“I”
I could extend this list arbitrarily. It is merely suggestive, intended to convey my thesis that the term “brain structure” should include items of this general sort. It goes without saying that some of the above-listed theoretical notions are unlikely to have lasting validity, while others may be increasingly confirmed by various types of research. Just as the notion of “gene” as an invisible entity that enabled the passing-on of traits from parents to progeny was proposed and studied scientifically long before any physical object could be identified as an actual carrier of such traits, and just as the notion of “atoms” as the building blocks of all physical objects was proposed and studied scientifically long before individual atoms were isolated and internally probed, so any of the notions listed above might legitimately be considered as invisible structures for brain researchers to try to pinpoint physically in the human brain.
Although I’m convinced that finding the exact physical incarnation of any such structure in “the human brain” (is there only one?) would be an amazing stride forward, I nonetheless don’t see why physical mapping should constitute the be-all and end-all of neurological inquiry. Why couldn’t the establishment of various sorts of precise relationships among the above-listed kinds of entities, prior to (or after) physical identification, be just as validly considered brain research? This is how scientific research on genes and atoms went on for many decades before genes and atoms were confirmed as physical objects and their inner structure was probed.
A Simple Analogy between Heart and Brain
I wish to offer a simple but crucial analogy between the study of the brain and the study of the heart. In our day, we all take for granted that bodies and their organs are made of cells. Thus a heart is made of many billions of cells. But concentrating on a heart at that microscopic scale, though obviously important, risks missing the big picture, which is that a heart is a pump. Analogously, a brain is a thinking machine, and if we’re interested in understanding what thinking is, we don’t want to focus on the trees (or their leaves!) at the expense of the forest. The big picture will become clear only when we focus on the brain’s large-scale architecture, rather than doing ever more fine-grained analyses of its building blocks.
At some point a billion years or so ago, natural selection, in its usual random-walk fashion, bumped into cells that contracted rhythmically, and little beings possessing such cells did well for themselves because the cells’ contractions helped send useful stuff here and there inside the being itself. Thus, by accident, were pumps born, and in the abstract design space of all such proto-pumps, nature favored designs that were more efficient. The inner workings of the pulsating cells making up those pumps had been found, in essence, and the cells’ innards thus ceased being the crucial variables that were selected for. It was a brand-new game, in which rival architectures of hearts became the chief contenders for selection by nature, and on that new level, ever more complex patterns quickly evolved.
For this reason, heart surgeons don’t think about the details of heart cells but concentrate instead on large architectural structures in the heart, just as car buyers don’t think about the physics of protons and neutrons or the chemistry of alloys, but concentrate instead on high abstractions such as comfort, safety, fuel efficiency, maneuverability, sexiness, and so forth. And thus, to close out my heart–brain analogy, the bottom line is simply that the microscopic level may well be — or rather, almost certainly is — the wrong level in the brain on which to look, if we are seeking to explain such enormously abstract phenomena as concepts, ideas, prototypes, stereotypes, analogies, abstraction, remembering, forgetting, confusing, comparing, creativity, consciousness, sympathy, empathy, and the like.
Can Toilet Paper Think?
Simple though this analogy is, its bottom line seems sadly to sail right by many philosophers, brain researchers, psychologists, and others interested in the relationship between brain and mind. For instance, consider the case of John Searle, a philosopher who has spent much of his career heaping scorn on artificial-intelligence research and computational models of thinking, taking special delight in mocking Turing machines.
A momentary digression… Turing machines are extremely simple idealized computers whose memory consists of an infinitely long (i.e., arbitrarily extensible) “tape” of so-called “cells”, each of which is just a square that either is blank or has a dot inside it. A Turing machine comes with a movable “head”, which looks at any one square at a time, and can “read” the cell (i.e., tell if it has a dot or not) and “write” on it (i.e., put a dot there, or erase a dot). Lastly, a Turing machine has, stored in its “head”, a fixed list of instructions telling the head under which conditions to move left one cell or right one cell, or to make a new dot or to erase an old dot. Though the basic operations of all Turing machines are supremely trivial, any computation of any sort can be carried out by an appropriate Turing machine (numbers being represented by adjacent dot-filled cells, so that “•••” flanked by blanks would represent the integer 3).
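For readers who like to see such abstractions made concrete, here is a minimal sketch, in Python, of the kind of machine just described: a tape of cells that are blank or dotted, a movable head, and a fixed table of instructions. Everything in it (the variable names, the particular instruction table, the dot-counting convention) is merely illustrative, not any canonical formulation.

    # A minimal sketch of a Turing machine of the sort just described: a tape of
    # cells that are blank (' ') or dotted ('*'), a movable head, and a fixed
    # instruction table.  The sample table below is purely illustrative: it adds
    # one more dot to a block of dots, i.e., it computes n + 1 in dot notation.

    def run_turing_machine(table, tape, state='start', head=0, max_steps=1000):
        cells = dict(enumerate(tape))          # sparse tape; absent cells are blank
        for _ in range(max_steps):
            if state == 'halt':
                break
            symbol = cells.get(head, ' ')      # "read" the cell under the head
            new_symbol, move, state = table[(state, symbol)]
            cells[head] = new_symbol           # "write": put or erase a dot
            head += {'L': -1, 'R': 1}[move]    # move one cell left or right
        lo, hi = min(cells), max(cells)
        return ''.join(cells.get(i, ' ') for i in range(lo, hi + 1)).strip()

    # Instruction table: (state, symbol read) -> (symbol to write, move, next state).
    successor = {
        ('start', '*'): ('*', 'R', 'start'),   # scan rightward across the dots
        ('start', ' '): ('*', 'R', 'done'),    # at the first blank, add one more dot
        ('done',  '*'): ('*', 'R', 'halt'),
        ('done',  ' '): (' ', 'R', 'halt'),
    }

    print(run_turing_machine(successor, '***'))    # three dots in, four dots out

The whole contraption is defined by a handful of utterly mechanical rules, and yet tables of this sort suffice, in principle, for any computation whatsoever.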
Back now to philosopher John Searle. He has gotten a lot of mileage out of the fact that a Turing machine is an abstract machine, and therefore could, in principle, be built out of any materials whatsoever. In a ploy that, in my opinion, should fool only third-graders but that unfortunately takes in great multitudes of his professional colleagues, he pokes merciless fun at the idea that thinking could ever be implemented in a system made of such far-fetched physical substrates as toilet paper and pebbles (the tape would be an infinite roll of toilet paper, and a pebble on a square of paper would act as the dot in a cell), or Tinkertoys, or a vast assemblage of beer cans and ping-pong balls bashing together.
In his vivid writings, Searle gives the appearance of tossing off these humorous images light-heartedly and spontaneously, but in fact he is carefully and premeditatedly instilling in his readers a profound prejudice, or perhaps merely profiting from a preexistent prejudice. After all, it does sound preposterous to propose “thinking toilet paper” (no matter how long the roll might be, and regardless of whether pebbles are thrown in for good measure), or “thinking beer cans”, “thinking Tinkertoys”, and so forth. The light-hearted, apparently spontaneous images that Searle puts up for mockery are in reality skillfully calculated to make his readers scoff at such notions without giving them further thought — and sadly, they often work.
The Terribly Thirsty Beer Can
Indeed, Searle goes very far in his attempt to ridicule the systems that he portrays in this humorous fashion. For example, to ridicule the notion that a gigantic system of interacting beer cans might “have experiences” (yet another term for consciousness), he takes thirst as the experience in question, and then, in what seems like a casual allusion to something obvious to everyone, he drops the idea that in such a system there would have to be one particular can that would “pop up” (whatever that might mean, since he conveniently leaves out all description of how these beer cans might interact) on which the English words “I am thirsty” are written. The popping-up of this single beer can (a micro-element of a vast system, and thus comparable to, say, one neuron or one synapse in a brain) is meant to constitute the system’s experience of thirst. In fact, Searle has chosen this silly image very deliberately, because he knows that no one would attribute to it the slightest amount of plausibility. How could a metallic beer can possibly experience thirst? And how would its “popping up” constitute thirst? And why should the words “I am thirsty” written on a beer can be taken any more seriously than the words “I want to be washed” scribbled on a truck caked in mud?
The sad truth is that this image is the most ludicrous possible distortion of computer-based research aimed at understanding how cognition and sensation take place in minds. It could be criticized in any number of ways, but the key sleight of hand that I would like to focus on here is how Searle casually states that the experience claimed for this beer-can brain model is localized to one single beer can, and how he carefully avoids any suggestion that one might instead seek the system’s experience of thirst in a more complex, more global, high-level property of the beer cans’ configuration.
When one seriously tries to think of how a beer-can model of thinking or sensation might be implemented, the “thinking” and the “feeling”, no matter how superficial they might be, would not be localized phenomena associated with a single beer can. They would be vast processes involving millions or billions or trillions of beer cans, and the state of “experiencing thirst” would not reside in three English words pre-painted on the side of a single beer can that popped up, but in a very intricate pattern involving huge numbers of beer cans. In short, Searle is merely mocking a trivial target of his own invention. No serious modeler of mental processes would ever propose the idea of one lonely beer can (or neuron) for each sensation or concept, and so Searle’s cheap shot misses the mark by a wide margin.
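To make the contrast vivid, here is a tiny sketch, in Python, of the kind of distributed representation a serious modeler might actually have in mind. Every name and number in it is my own invention, chosen purely for illustration: the point is only that a state such as “thirst” would be a pattern spread over an enormous number of units, recognizable as a property of the configuration as a whole, even when many individual units misbehave.

    # A toy contrast with the "one labelled can" picture: here a state such as
    # thirst is a pattern of activity spread over many units, and it can still be
    # recognized when a sizable fraction of those units misfire.  All names and
    # numbers are made up for illustration.

    import random

    N_UNITS = 10_000                      # think "beer cans" (or neurons)
    random.seed(0)

    def random_pattern():
        return [random.choice((0, 1)) for _ in range(N_UNITS)]

    # Each state is a whole pattern over all the units, not one unit's label.
    states = {"thirst": random_pattern(),
              "hunger": random_pattern(),
              "warmth": random_pattern()}

    def overlap(p, q):
        return sum(a == b for a, b in zip(p, q)) / N_UNITS

    def recognize(pattern):
        # The system's "state" is whichever stored pattern the current global
        # activity most resembles, a property of the configuration as a whole.
        return max(states, key=lambda name: overlap(pattern, states[name]))

    # Corrupt 20% of the units in the "thirst" pattern; recognition still succeeds.
    noisy = [1 - bit if random.random() < 0.20 else bit for bit in states["thirst"]]
    print(recognize(noisy))               # prints "thirst"

No single unit in such a system carries the state on its back; flip any one of them and nothing of consequence changes, which is exactly the opposite of the popping-up beer can.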
It’s also worth noting that Searle’s image of the “single beer can as thirst-experiencer” is but a distorted replay of a long-discredited idea in neurology — that of the “grandmother cell”. This is the idea that your visual recognition of your grandmother would take place if and only if one special cell in your brain were activated, that cell constituting your brain’s physical representation of your grandmother. What significant difference is there between a grandmother cell and a thirst can? None at all. And yet, because John Searle has a gift for catchy imagery, his specious ideas have, over the years, had a great deal of impact on many professional colleagues, graduate students, and lay people.
It’s not my aim here to attack Searle in detail (that would take a whole dreary chapter), but to point out how widespread is the tacit assumption that the level of the most primordial physical components of a brain must also be the level at which the brain’s most complex and elusive mental properties reside. Just as many aspects of a mineral (its density, its color, its magnetism or lack thereof, its optical reflectivity, its thermal and electrical conductivity, its elasticity, its heat capacity, how fast sound spreads through it, and on and on) are properties that come from how its billions of atomic constituents interact and form high-level patterns, so mental properties of the brain reside not on the level of a single tiny constituent but on the level of vast abstract patterns involving those constituents.
Dealing with brains as multi-level systems is essential if we are to make even the slightest progress in analyzing elusive mental phenomena such as perception, concepts, thinking, consciousness, “I”, free will, and so forth. Trying to localize a concept or a sensation or a memory (etc.) down to a single neuron makes no sense at all. Even localization to a higher level of structure, such as a column in the cerebral cortex (these are small structures containing on the order of forty neurons, and they exhibit a more complex collective behavior than single neurons do), makes no sense when it comes to aspects of thinking like analogy-making or the spontaneous bubbling-up of episodes from long ago.
Levels and Forces in the Brain
I once saw a book whose title was “Molecular Gods: How Molecules Determine Our Behavior”. Although I didn’t buy it, its title stimulated many thoughts in my brain. (What is a thought in a brain? Is a thought really inside a brain? Is a thought made of molecules?) Indeed, the very fact that I soon placed the book back up on the shelf is a perfect example of the kinds of thoughts that its title triggered in my brain. What exactly determined my behavior that day (e.g., my interest in the book, my pondering about its title, my decision not to buy it)? Was it some molecules inside my brain that made me reshelve it? Or was it some ideas in my brain? What is the proper way to talk about what was going on in my head as I first flipped through that book and then put it back?
At the time, I was reading books by many different writers on the brain, and in one of them I came across a chapter by the neurologist Roger Sperry, which not only was written with a special zest but also expressed a point of view that resonated strongly with my own intuitions. I would like to quote here a short passage from Sperry’s essay “Mind, Brain, and Humanist Values”, which I find particularly provocative.
In my own hypothetical brain model, conscious awareness does get representation as a very real causal agent and rates an important place in the causal sequence and chain of control in brain events, in which it appears as an active, operational force….
To put it very simply, it comes down to the issue of who pushes whom around in the population of causal forces that occupy the cranium. It is a matter, in other words, of straightening out the peck-order hierarchy among intracranial control agents. There exists within the cranium a whole world of diverse causal forces; what is more, there are forces within forces within forces, as in no other cubic half-foot of universe that we know….
To make a long story short, if one keeps climbing upward in the chain of command within the brain, one finds at the very top those over-all organizational forces and dynamic properties of the large patterns of cerebral excitation that are correlated with mental states or psychic activity…. Near the apex of this command system in the brain… we find ideas.
Man over the chimpanzee has ideas and ideals. In the brain model proposed here, the causal potency of an idea, or an ideal, becomes just as real as that of a molecule, a cell, or a nerve impulse. Ideas cause ideas and help evolve new ideas. They interact with each other and with other mental forces in the same brain, in neighboring brains, and, thanks to global communication, in far distant, foreign brains. And they also interact with the external surroundings to produce in toto a burstwise advance in evolution that is far beyond anything to hit the evolutionary scene yet, including the emergence of the living cell.
Who Shoves Whom Around Inside the Cranium?
Yes, reader, I ask you: Who shoves whom around in the tangled megaganglion that is your brain, and who shoves whom around in “this teetering bulb of dread and dream” that is mine? (The marvelously evocative phrase in quotes, serving also as this chapter’s title, is taken from “The Floor” by American poet Russell Edson.)
Sperry’s pecking-order query puts its finger on what we need to know about ourselves — or, more pointedly, about our selves. What was really going on in that fine brain on that fine day when, allegedly, something calling itself “I” did something called “deciding”, after which a jointed appendage moved in a fluid fashion and a book found itself back where it had been just a few seconds before? Was there truly something referable-to as “I” that was “shoving around” various physical brain structures, resulting in the sending of certain carefully coordinated messages through nerve fibers and the consequent moving of shoulder, elbow, wrist, and fingers in a certain complex pattern that left the book upright in its original spot — or, contrariwise, were there merely myriads of microscopic physical processes (quantum-mechanical collisions involving electrons, photons, gluons, quarks, and so forth) taking place in that localized region of the spatiotemporal continuum that poet Edson dubbed a “teetering bulb”?
Do dreads and dreams, hopes and griefs, ideas and beliefs, interests and doubts, infatuations and envies, memories and ambitions, bouts of nostalgia and floods of empathy, flashes of guilt and sparks of genius, play any role in the world of physical objects? Do such pure abstractions have causal powers? Can they shove massive things around, or are they just impotent fictions? Can a blurry, intangible “I” dictate to concrete physical objects such as electrons or muscles (or for that matter, books) what to do?
Have religious beliefs caused any wars, or have all wars just been caused by the interactions of quintillions (to underestimate the truth absurdly) of infinitesimal particles according to the laws of physics? Does fire cause smoke? Do cars cause smog? Do drones cause boredom? Do jokes cause laughter? Do smiles cause swoons? Does love cause marriage? Or, in the end, are there just myriads of particles pushing each other around according to the laws of physics — leaving, in the end, no room for selves or souls, dreads or dreams, love or marriage, smiles or swoons, jokes or laughter, drones or boredom, cars or smog, or even smoke or fire?
Thermodynamics and Statistical Mechanics
I grew up with a physicist for a father, and to me it was natural to see physics as underlying every last thing that happened in the universe. Even as a very young boy, I knew from popular science books that chemical reactions were a consequence of the physics of interacting atoms, and when I became more sophisticated, I saw molecular biology as the result of the laws of physics acting on complex molecules. In short, I grew up seeing no room for “extra” forces in the world, over and above the four basic forces that physicists had identified (gravity, electromagnetism, and two types of nuclear force — strong and weak).
But how, as I grew older, did I reconcile that rock-solid belief with my additional convictions that evolution caused hearts to evolve, that religious dogmas have caused wars, that nostalgia inspired Chopin to write a certain étude, that intense professional jealousy has caused the writing of many a nasty book review, and so forth and so on? These easily graspable macroscopic causal forces seem radically different from the four ineffable forces of physics that I was sure caused every event in the universe.
The answer is simple: I conceived of these “macroscopic forces” as being merely ways of describing complex patterns engendered by basic physical forces, much as physicists came to realize that such macroscopic phenomena as friction, viscosity, translucency, pressure, and temperature could be understood as highly predictable regularities determined by the statistics of astronomical numbers of invisible microscopic constituents careening about in spacetime and colliding with each other, with everything dictated by only the four basic forces of physics.
I also realized that this kind of shift in levels of description yielded something very precious to living beings: comprehensibility. To describe a gas’s behavior by writing a gigantic piece of text having Avogadro’s number of equations in it (assuming such a herculean feat were possible) would not lead to anyone’s understanding of anything. But throwing away huge amounts of information and making a statistical summary could do a lot for comprehensibility. Just as I feel comfortable referring to “a pile of autumn leaves” without specifying the exact shape and orientation and color of each leaf, so I feel comfortable referring to a gas by specifying just its temperature, pressure, and volume, and nothing else.
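For concreteness, here is a minimal sketch, in Python, of that act of throwing away information, using made-up numbers (the particle count, the particle mass, the velocity spread, and the box size are all mine, chosen purely for illustration) and the standard kinetic-theory relations for an ideal gas.

    # A minimal sketch of "throwing away information for comprehensibility":
    # start from a microscopic description (one velocity vector per particle,
    # here a mere hundred thousand made-up particles rather than Avogadro's
    # number) and boil it down to the few macroscopic numbers a thermodynamicist
    # cares about, via the standard kinetic-theory relations for an ideal gas.

    import random

    k_B = 1.380649e-23        # Boltzmann constant, J/K
    m   = 6.63e-26            # mass of one particle (roughly an argon atom), kg
    N   = 100_000             # number of simulated particles (a toy number)
    V   = 1.0e-3              # volume of the box, cubic metres (one litre)

    # Microscopic level: a velocity vector for every particle (made-up spread).
    velocities = [(random.gauss(0, 400), random.gauss(0, 400), random.gauss(0, 400))
                  for _ in range(N)]

    # Statistical summary: the mean squared speed is all that survives the averaging.
    mean_sq_speed = sum(vx*vx + vy*vy + vz*vz for vx, vy, vz in velocities) / N

    # Macroscopic level: temperature and pressure of the gas as a whole.
    T = m * mean_sq_speed / (3 * k_B)      # from (3/2) k T = (1/2) m <v^2>
    P = N * m * mean_sq_speed / (3 * V)    # from P V = N k T

    print(f"Temperature = {T:.1f} K, pressure = {P:.2e} Pa")

A heap of velocity vectors goes in; a temperature and a pressure come out, and nothing else about the swarm survives, which is precisely what makes the description humanly usable.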
All of this, to be sure, is very old hat to all physicists and to most philosophers as well, and can be summarized by the unoriginal maxim Thermodynamics is explained by statistical mechanics, but perhaps the idea becomes somewhat clearer when it is turned around, as follows: Statistical mechanics can be bypassed by talking at the level of thermodynamics.
Our existence as animals whose perception is limited to the world of everyday macroscopic objects forces us, quite obviously, to function without any reference to entities and processes at microscopic levels. No one really knew the slightest thing about atoms until only about a hundred years ago, and yet people got along perfectly well. Ferdinand Magellan’s expedition circumnavigated the globe, William Shakespeare wrote some plays, J. S. Bach composed some cantatas, and Joan of Arc got herself burned at the stake, all for their own good (or bad) reasons, none of which, from their point of view, had the least thing to do with DNA, RNA, and proteins, or with carbon, oxygen, hydrogen, and nitrogen, or with photons, electrons, protons, and neutrons, let alone with quarks, gluons, W and Z bosons, gravitons, and Higgs particles.
Thinkodynamics and Statistical Mentalics
It thus comes as no news to anyone that different levels of description have different kinds of utility, depending on the purpose and the context, and I have accordingly summarized my view of this simple truth as it applies to the world of thinking and the brain: Thinkodynamics is explained by statistical mentalics, as well as its flipped-around version: Statistical mentalics can be bypassed by talking at the level of thinkodynamics.
What do I mean by these two terms, “thinkodynamics” and “statistical mentalics”? It is pretty straightforward. Thinkodynamics is analogous to thermodynamics; it involves large-scale structures and patterns in the brain, and makes no reference to microscopic events such as neural firings. Thinkodynamics is what psychologists study: how people make choices, commit errors, perceive patterns, experience novel remindings, and so on.
By contrast, by “mentalics” I mean the small-scale phenomena that neurologists traditionally study: how neurotransmitters cross synapses, how cells are wired together, how cell assemblies reverberate in synchrony, and so forth. And by “statistical mentalics”, I mean the averaged-out, collective behavior of these very small entities — in other words, the behavior of a huge swarm as a whole, as opposed to a tiny buzz inside it.
However, as neurologist Sperry made very clear in the passage cited above, there is not, in the brain, just one single natural upward jump, as there is in a gas, all the way from the basic constituents to the whole thing; rather, there are many way-stations in the upward passage from mentalics to thinkodynamics, and this means that it is particularly hard for us to see, or even to imagine, the ground-level, neural-level explanation for why a certain professor of cognitive science once chose to reshelve a certain book on the brain, or once refrained from swatting a certain fly, or once broke out in giggles during a solemn ceremony, or once exclaimed, lamenting the departure of a cherished co-worker, “She’ll be hard shoes to fill!”
The pressures of daily life require us, force us, to talk about events at the level on which we directly perceive them. Access at that level is what our sensory organs, our language, and our culture provide us with. From earliest childhood on, we are handed concepts such as “milk”, “finger”, “wall”, “mosquito”, “sting”, “itch”, “swat”, and so on, on a silver platter. We perceive the world in terms of such notions, not in terms of microscopic notions like “proboscis” and “hair follicle”, let alone “cytoplasm”, “ribosome”, “peptide bond”, or “carbon atom”. We can of course acquire such notions later, and some of us master them profoundly, but they can never replace the silver-platter ones we grew up with. In sum, then, we are victims of our macroscopicness, and cannot escape from the trap of using everyday words to describe the events that we witness, and perceive as real.
This is why it is much more natural for us to say that a war was triggered for religious or economic reasons than to try to imagine a war as a vast pattern of interacting elementary particles and to think of what triggered it in similar terms — even though physicists may insist that that is the only “true” level of explanation for it, in the sense that no information would be thrown away if we were to speak at that level. But having such phenomenal accuracy is, alas (or rather, “Thank God!”), not our fate.
We mortals are condemned not to speak at that level of no information loss. We necessarily simplify, and indeed, vastly so. But that sacrifice is also our glory. Drastic simplification is what allows us to reduce situations to their bare bones, to discover abstract essences, to put our fingers on what matters, to understand phenomena at amazingly high levels, to survive reliably in this world, and to formulate literature, art, music, and science.