Chapter 5 THE LIFE OF THE EMBODIED MIND

SOMETHING STRANGE JUST HAPPENED IN THE LAST chapter. Did you notice? When we applied selection pressure to our Tadro3s, they responded by evolving next generations of smarter Tadro3s with better feeding behavior than their parents had—in a real sense, they got smarter. But when our population of Tadro3s became smarter, they did so by evolving their bodies, not their brains.

How in the artificial-water-world can Tadros, or any robot for that matter, become smarter? And even if they can gain intelligence, how can intelligence be “in the body” rather than “in the brain”? Isn’t the brain the seat of intelligence? And by the way, as long as we’re inquiring, where is the brain of a Tadro3 anyway, and what is it doing?

We need to tackle these questions because they help put Tadro3, which you met in the last chapter, and Tadro4, which you’ll meet later in this chapter, into the broader context of intelligent machines. Although I’m not claiming that either Evolvabot is going to win a merit scholarship to attend Vassar, I will claim that Tadros—by virtue of being goal directed, autonomous, and physically embodied—have intelligence. Hang on: the ride is going to be bumpy!

THE SEARCH FOR INTELLIGENT (ARTIFICIAL) LIFE

What we’ve got here, thanks to Tadro3 (there, I’ll blame the robot), is not a failure to communicate but rather an opportunity to lay bare some of our conceptual problems. Most humans would argue, for example, that Tadro3s are not intelligent. Yet clearly, the autonomous, self-propelled Tadro3 has something. Let’s call it skill: the ability to detect light, move toward it, then hang around it. Yes, say the humans, but moths possess the same skill, and we know that moths aren’t intelligent. We do? What do you mean by “intelligent”? Intelligence is not just some simple skill, they say, like detecting and finding light; that’s more like a reflex. Instead, the argument goes, intelligence involves the skill of thinking, using our minds in the special, linguistic way that only humans do. Thus, many humans speak of “human-like intelligence” as the sine qua non of intelligence.

Where does that leave the rest of the world? Are nonhuman primates intelligent? Is your dog intelligent? How ‘bout your African Gray parrot? If you answer either “yes” or “no,” then answer this: how do you know?[49] One answer, given by Alan Turing, is that you know by interacting.[50] He asks, “Can machines think?” In terms we’ve been using, Turing comes up with a way to answer this question—with the assumption that thinking is proof of intelligence—by creating an environment that includes at least two agents, each being a part of the other’s environment.

For Turing, the interaction between agents and their environments is conducted using language: agents, hidden from each other, converse via keyboards. The brilliance of this conversational environment is that it is dynamic and far-reaching: linguistic communication in real time is a back and forth that can be about anything. If the machine, in practice an artificial intelligence (AI) in the form of a computer program and its hardware, can fool its human interactant into thinking that the human is conversing with another human, then the AI is said to have “passed” what we have come to call the “Turing Test.” The Loebner Prize Competition is a Turing Test held every year as an international competition.[51] A bronze medal, along with a cash prize, is awarded to the AI that fools the most human judges. A gold medal awaits the AI that is indistinguishable from a human. We are still waiting for an AI to claim that prize.

An even tougher test of human-level intelligence is what Stevan Harnad calls “the total Turing Test.”[52] In the total Turing Test (T3) the AI has to be embodied and physically present in an environment shared with the human interrogators. In other words, the AI has to be an embodied robot, and human-level intelligence is only achievable with a body and a brain. The embodied robot must be able to physically perform, in all ways, as an indistinguishable member of the group of organic agents to pass the T3. You can see that the T3 is a tall order, especially if you think of humans and the human interactional environment: language, movement, and physical appearance—all have to be on the mark, like a teenager struggling to fit in. In the human arena robots are not even close to competing at the bronze-level equivalent of the T3. At the moment the T3 for humans is the stuff of science fiction, like the replicants in the movie Blade Runner.[53]

In opposition to the interaction-based Turing Test, John Searle takes a different approach to the search for intelligence.[54] He looks for systems that understand what they are thinking about. For example, you know that you are thinking right now because you can use the symbols of written or spoken language to talk to yourself or to others about your thinking. You understand that you are “expressing” yourself. Your subjective first-person experience as the agent doing the expressing allows you to know that the word symbols you manipulate in your speech or writing contain meaning. You have the ability to analyze your own mental states, and by so doing, you are aware of your own intelligence and the processes that underwrite that intelligence. You can verify that your linguistic symbols have meaning to you.

This ability to be aware of ourselves analyzing ourselves is why human scientists get excited when a dolphin recognizes itself in a mirror.[55] Diana Reiss and Lori Marino, the researchers who’ve done this work, show behavioral evidence of the dolphin’s self-awareness. The dolphin looks in the mirror, sees a spot of investigator-applied zinc oxide on the dolphin in the mirror, and then proceeds to spend time turning its body to examine the body of the dolphin-in-the-mirror for other blemishes. Reiss and Marino interpret this behavior as showing that the dolphin understands that the image in the mirror is representing “self” and not “other.” Pretty cool.

Distinguishing between yourself-as-an-agent and others-as-agents is the basis of inferring that other agents may be intelligent. We have the ability to make this distinction, and we use that ability to infer that other conscious, human agents possess the same ability. We can report those subjective experiences to others using language: I know that I’m intelligent; I know that you are a human agent like me; therefore, I infer that you are intelligent like me.

When we search for intelligent life, we combine the approaches of Searle and Turing. First, I understand that I’m intelligent because I’m the “I” experiencing my intelligence (Searle’s criterion). Second, I’m guessing that you are intelligent because when we interact you behave in ways that make me think that the only way we can be having an interaction like this is if you have an intelligence very much like my own (Turing’s test).

Most people, when asked by Daniel Wegner and his colleagues at the Mental Control Laboratory at Harvard, say that other human beings have features that we associate with an intelligent mind: consciousness, personality, feelings, emotions, rights, responsibilities, self-control, planning, thought, and recognition of emotion in others.[56] A surprise is that these same humans perceive that some of these mind-like features are possessed, to varying degrees, by entities that include the nonliving, such as God and robots. If it’s fair for me to perform the sleight of hand that equates mind-likeness and intelligence, then we twenty-first-century humans readily perceive intelligence all over the place. Perception, however, is not necessarily reality.

TADRO’S KNOW-HOW

When Adam Lammert and I showed my colleague Ken Livingston the first working Tadro, Tadro1, we were excited and a bit nervous. Livingston, professor of psychology and one of the founders of the Cognitive Science Program at Vassar,[57] had served for both of us as a mentor in the ways of embodied robotics and artificial intelligence. When he saw Tadro swimming around in a big sink in the lab, following the beam of a flashlight we were moving around, he grinned and said, “Tadros are a piece of embodied intelligence.” Would you agree?[58] Adam and I did. Here’s why.

Putting Turing’s hat back on, let’s think about what we were doing with Tadro. We put a Tadro in the sink, turned off the lights, and then turned on a flashlight. The Tadro, which had been aimlessly swimming around the tank, changed course with what looked like purpose and curved in a right-handed loop toward us, bumping the wall of the tank, turning around to the left, and then heading back in our direction. We then played a trick: lights off. Because we’ve put green and red navigation lights on Tadro, we could see Tadro in the dark as it changed the curvature of its heading, moving now in a left-handed arc along the wall of the tank. We snuck around to where Tadro was headed, and surprise! We turned on the flashlight directly over Tadro’s head. Tadro’s response was immediate: a quick turn to the right, moving off the wall, and heading back into the darkness.

Okay, so is this the most fun we’ve ever had? Nope, but it beats washing the dishes. When you play around with Tadro, you experience a sense that you have to learn, through your interactions, about what’s not predictable and what is. You can’t predict exactly what Tadro will do, how much it will turn, where it will hit the wall of the sink. At the same time, you learn very quickly that in response to light on its single eyespot, Tadro turns to the right. When that eyespot is in darkness, Tadro turns to the left. You even figure out that you can interact with Tadro in such a way as to get it to swim straight for just a bit when you find just the right light intensity that is midway between full dark and full light.

Now let’s take off Turing’s hat and put on Searle’s. We immediately turn on the lights and pick up the dang Tadro. What is this thing? What’s inside? We look inside the plastic bowl that serves as Tadro’s hull and see a small, black, rectangular box with what looks like a big, engorged tick sticking on it (sorry, I live in upstate New York, one of the world’s hotspots for blood-sucking ticks and the diseases that they spread; I see them everywhere …). That “tick” is actually a capacitor, a common element of electronic circuits, and it is interspersed with other bug-sized bits of like-minded paraphernalia: rectangular silver-legged spiders (integrated circuits), columns of red and green “ants” (indicator lights), and the long tracks of tiny potholes left by centipedes (input and output connections for wire). This palm-size block of electronics is a microcontroller,[59] a fully functioning computer (a central processing unit, or CPU) with its own power supply, memory, and systems to operate motors and sensors.

Is this microcontroller a brain? Doesn’t look like it. It’s a computer with the ability to interact with motors and sensors. You can program the microcontroller to tell the tail motor which way to turn depending on the light intensity hitting the photoresistor that serves as the eyespot. The program is not the brain, either. It is written in a programming language called Interactive C, which was created especially for controlling mobile robots.[60] We can call up the original Interactive C program that Adam Lammert wrote for Tadro2 and see for ourselves that there appears to be nothing brain-like about it (Figure 5.1): a bunch of words and typewritten symbols, a regularity of symbol patterning that indicates a syntax, and words like “if” and “else,” which, if used in the same way as those words are in English, may indicate something about the program making decisions. Even if that naïve description sounds brain-like because of its references to language, keep in mind that Searle would argue that the Tadro program is not, in and of itself, intelligent; the program doesn’t know what it is doing. It’s just a deaf, dumb, blind kid telling the hardware how to play pinball with electrons.

FIGURE 5.1. Making Tadro go. This is the complete Tadro2 program, written in a computer language called “Interactive C” by Adam Lammert for his senior thesis in cognitive science at Vassar College. The program ran on a HandyBoard microcontroller, taking input from the single sensor, a photoresistor acting as an eyespot, and turning it into a value for the variable “beta” that told the always-flapping tail which way to turn. This sensor-motor interaction, shown in the gray box, is always changing because the new turning command alters the heading of the Tadro that, in turn, alters the light hitting the sensor. Thus, the whole system—program, microcontroller, sensor, motor, body, and environment—can be thought of as continually calculating an answer to this question: what’s the angle of my tail?
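Adam’s actual Interactive C source appears only in Figure 5.1, but the sense-act loop the caption describes can be sketched in a few lines of Python. Everything here is an illustrative assumption rather than code from the thesis: the function names, the normalization of the sensor reading, the gain, and the midpoint are all invented for the sketch.

```python
# Hypothetical sketch of the Tadro2 sense-act loop described in
# Figure 5.1. Names and constants are illustrative assumptions,
# not Adam Lammert's actual Interactive C program.

def read_eyespot(light_intensity, full_dark=0.0, full_light=1.0):
    """Normalize the photoresistor reading to the range [0, 1]."""
    span = full_light - full_dark
    return max(0.0, min(1.0, (light_intensity - full_dark) / span))

def compute_beta(eyespot, midpoint=0.5, gain=30.0):
    """Map the eyespot reading to a tail-turn angle, beta (degrees).

    Brighter than the midpoint -> positive beta (turn right);
    darker -> negative beta (turn left); exactly at the midpoint ->
    zero, so the robot swims straight.
    """
    return gain * (eyespot - midpoint)

# One pass through the loop: light hitting the sensor becomes a
# turning command for the always-flapping tail, the turn changes the
# heading, and the new heading changes the next sensor reading.
beta = compute_beta(read_eyespot(0.8))  # bright light: beta > 0
```

The sign convention follows the behavior reported in the text: light on the eyespot turns Tadro right, darkness turns it left, and an intensity midway between the two lets it swim straight.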

See the problem here? If we define intelligence by what Tadro does, then it clearly has skill, the know-how to detect and follow a light source. Because Tadro’s light-following ability depends on its propulsion, maneuverability, and the sensitivity of its photoresistor, its body is clearly important. You can get a sense for the importance of having a body to help you think next time you put together a difficult jigsaw puzzle: you simply can’t solve the puzzle unless you allow yourself to pick pieces up, rotate them in different directions, and try to align and engage the pieces.

If you don’t believe me, try this: have a friend spread out pieces of a jigsaw puzzle on the table. Rule 1: You are not allowed to touch the pieces. Rule 2: You are not allowed to move from where you sit or stand. Rule 3: Using only your voice (not gestures or written instructions), tell your friend how to assemble the puzzle, piece by piece. Note that you can’t just say, “Put the puzzle together.” No how. You’ve got to give low-level instructions like, “Take the piece right in front of you and move it next to the piece right over there.” Rule 4: All your friend can do is follow your instructions. These low-level instructions turn out to be what we call motor commands when we talk about neural circuits. You’ll soon be impressed—unless you have a very simple puzzle—with just how much intelligence depends on your movements and your physical manipulation of the world. That movement-based intelligence begins with what Alva Noë, associate professor at the Institute for Cognitive and Brain Sciences at the University of California at Berkeley, calls “enactive perception.”[61] In robots, enactive perception that combines active vision and feature selection helps simplify vision-based behavior, as shown in experiments by Dario Floreano, director of the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland and one of the founders of the field of evolutionary robotics.[62]

Adam and I give Tadro credit for having the know-how of enactive perception: wandering around, exploring its space, detecting a light gradient (if it’s there), moving toward the source of the light, and orbiting around that source. If you had to tell your puzzle-helping friend to do the same, step by literal step, I bet you’d learn to respect our piece of embodied intelligence!

THE EMBODIED-BRAIN OF TADRO3

With the hats of both Turing and Searle off, let’s baldly go where no one has gone before: into Tadro’s embodied-brain. I say “embodied-brain” as a single-word construct here because I want to reference a shift in perspective for neuroscientists promoted by Professor Barry Trimmer, neuroscientist and director of Tufts University’s Biomimetic Devices Laboratory. I visited his lab recently, and as we were discussing how animals create behavior, he said, “Every brain has a body.” Sounds straightforward. But wait—that seemingly self-evident phrase, familiar to many within the fields of philosophy of mind,[63] ecological psychology,[64] grounded cognition,[65] and embodied artificial intelligence,[66] rings wrong-headed to neuroscientists who’ve specialized in the brain’s molecular channels, neurotransmitter systems, control circuits, or functional regionalization. Why?

Most of us have been trained to think of the brain as the control center, the place on the anatomical map where all of the sensor inputs are read and discussed. We know that the brain is “in control of behavior” because damage to the brain alters our thinking: damage to Phineas Gage’s frontal lobes compromised his ability to process emotions and make rational decisions.[67] We’ve seen cool functional-MRI videos of Oliver Sacks’s brain responding differently to music by Bach and Beethoven, in concert with his reported subjective experience.[68] After much thinking, our subjective, first-person experience of being an autonomous agent tells us that the control center creates a plan that is sent out to the soldiers in the field, the muscles that put the plan into action.

As neuroscientist Joaquin Fuster of the UCLA Neuropsychiatric Institute more formally states, “All forms of adaptive behavior require the processing of streams of sensory information and their transduction into series of goal-directed actions.”[69] Fuster reviews experimental work that shows how goal-directed plans activate the prefrontal and premotor regions of the brain. In this view, planning is a central, if not the central, function of our brains as thinking machines (Figure 5.2).

In a move that Oz might be tempted to call the “reverse Scarecrow,” Fuster takes care to give the brain a body. The prefrontal and premotor regions of the action-planning brain are part of what he calls the “perception-action cycle,” which is “the circular flow of information from the environment to sensory structures, to motor structures, back again to the environment, to sensory structures, and so on, during the processing of goal-directed behavior.” Pushing this point further, Trimmer, whose team specializes in designing soft-bodied robots to test ideas about how caterpillars move and modulate their behavior, said at our meeting, “The body is doing the computational work of interacting with the environment.” But what is the nature of the “computational work” being done?

The body of Tadro3 “computes” everything that the microcontroller running the Interactive C program (see Figure 5.1) doesn’t: all the really difficult physics. By virtue of being in the real world, interacting with real water, Tadro3 automatically solves the intensely complex dynamics of a flexible propeller transducing an oscillating uniaxial bending couple into a propagating bending moment that flexes the tail, which, in turn, is also loaded hydromechanically in a time-varying manner as its relative motion in the water changes. In response to the tail’s coupled internal and external force computations, the body, to which the tail is attached, undergoes the yaw wobbles—recoil and turning maneuvers—that we talked about in Chapter 4. Coupled computations that allow elastic and fluid forces to interact have been elegantly simulated by Eric Tytell, of Tufts University, and his colleagues at the University of Maryland using the “immersed boundary method” for a steadily swimming lamprey.[70]

But wait. Order before midnight and your Tadro3 comes with free motor and sensory computations. Tadro’s rotational and translational motion has angular and linear components of both velocity and acceleration that interact to produce the overall motion of the Tadro according to Newton’s laws of motion. As the Tadro3 wobbles and winds its way through the water world, it presents its attached photoresistor, acting as an eyespot, to a gradient of light. As the light intensity at any place on the water’s surface changes as the Tadro moves, the photoresistor continuously recomputes light intensity as a change in voltage by virtue of being part of a little electric circuit that works via Ohm’s law.
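That last “computation” is concrete enough to write down. A photoresistor wired into a voltage divider converts light into voltage by Ohm’s law alone, with no program anywhere in the loop. A minimal sketch, where the 10 kΩ fixed resistor and 5 V supply are assumed values for illustration, not Tadro’s actual circuit:

```python
def divider_voltage(r_photo, r_fixed=10_000.0, v_supply=5.0):
    """Voltage across the fixed resistor in a photoresistor divider.

    A photoresistor's resistance drops as light intensity rises, so
    brighter light -> smaller r_photo -> larger output voltage.
    Straight Ohm's law: V_out = V_supply * R_fixed / (R_photo + R_fixed).
    Component values here are illustrative assumptions.
    """
    return v_supply * r_fixed / (r_photo + r_fixed)

bright = divider_voltage(r_photo=1_000.0)    # low resistance in bright light
dark = divider_voltage(r_photo=100_000.0)    # high resistance in darkness
```

The circuit “recomputes” continuously and for free: every change in light is instantly a change in resistance, and every change in resistance is instantly a change in the voltage the microcontroller samples.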

FIGURE 5.2. The neurocentric view: thinking is planning. Planning is something we do “in our heads,” in our brain, with input from our senses, to create actions. This view is consistent with our subjective experience, coupled with information from neuroscientific studies of brain activity correlated with what we are thinking. The neurocentric view dominates in nearly every field concerned with human thought, language, and behavior.

However impressive Tadro3 might be as a student of physics, you may be objecting to the notion that Tadro3 is “computing” or “solving” anything with its body. “Computing” has a formal definition that goes back to Turing. We talk about “Turing-computable algorithms” as those procedures that can be solved, ultimately, with the simple deterministic rules a digital machine puts into play. Meanwhile, “solving” has a more mathematical flavor because we talk about solving a set of equations, like the Newtonian equations that govern the motion of an object. In both formal systems symbols are being manipulated according to a set of rules. Not so with Tadro. With the exception of what’s going on in its microcontroller (see Figure 5.1), Tadro manipulates only its body as it interacts with the rule-based physical world.

Just because we can represent physical rules—by computing physical interactions within and among physical entities—doesn’t mean that the computer is hosting physical interactions in the same way the world does. Borrowing a page from Searle now, we’d say that the computer is not having the actual physical interactions but is, instead, simulating them via symbol manipulation.

By saying that the body does the “computational work,” what Trimmer means is that the ongoing body-environment interaction, by virtue of its being an actual physical phenomenon, doesn’t necessarily need to be mediated through a nervous system. From the neurocentric perspective (see Figure 5.2), the brain doesn’t need to control how the tail interacts with the water because brainless physics governs that interaction. The brain doesn’t need to solve Newtonian equations of motion. The physics takes care of itself according to its own rules. Without a neural imperative to “control behavior,” what, therefore, does a nervous system need to do?

ARE BRAINS COMPUTERS?

It’s not that brains are unimportant. Brains do something—when they are present. The paradox is that some behaving animals and robots don’t have any structure or program that we would say is a “brain.” But before we talk about brainless behavior, we need to delve deeper into what we think brains are and what we think they do.

A huge body of physical evidence shows that the embodied-brain in a variety of animals is involved in some of the functional events that create the behavior that we recognize as an agent interacting with its environment. Are we happy now? Isn’t this what we’ve been intuiting all along about the importance of brains? No, no. Academics are never happy because the world is never that simple. And what brains do is not simple.

Let’s go back to the contrasting paradigms of Turing and Searle. Turing gets the blame (or credit) for this whole “brain is a computer” problem, having argued that if every kind of thought that a human might have was an algorithm—where an algorithm is a mathematically expressible series of instructions for completing a specified task—then a computer was working IN THE SAME WAY as a brain, manipulating symbols in a deterministic manner.[71]

In case you missed my subtle use of capitalization, the key phrase here is “in the same way.” This gets us to the heart of the matter: if two different types of physical contraption are operating “in the same way,” does that mean that they are the same thing? For example, if a coal-fired locomotive and a diesel automobile both operate by expanding gases in a chamber and using that pressure-volume work to push a cylinder, are they the same thing? On the level of pressure-volume work transduced to linear displacement, yes. On another, no. The locomotive heats a boiler filled with water that is turned into steam; the automobile compresses air in its cylinders until the injected diesel fuel ignites. My ten-year-old daughter would also point out that locomotives run on tracks, whereas cars run on the road; locomotives pull huge numbers of cars behind them; automobiles are smaller and have rubber tires.

Several of you reading this are snickering, I can tell, because you love trains and have thought of something wicked to disturb our little thought experiment: turn the coal-fired locomotive into a diesel-powered one, just like the automobile. Now locomotive and automobile use power plants that operate in an identical fashion. Size still a problem? You may have ridden on small-scale trains that are about the size of automobiles (or buses). Tires and tracks? You see the game: in the face of objections to sameness, change the objecting feature to be the same, ad infinitum.

What we did initially was to focus on what we saw as an essential property of our system—the power plant—and let that drive our discussion of sameness. This is akin to the process of making analogies: finding the similarities between two situations or mental representations and then using the similarities as a reason for inferring something new about one of the two entities.[72] Because both locomotives and automobiles are the same in terms of using diesel-fired internal combustion, we infer, by analogy, that they must have other similarities. And they do, depending on how we construe the categories for those other similarities.

Because the mental categories that we create are hierarchical and, if we are careful, mutually exclusive at a given level in the hierarchy, we can always find similarities among any and all physical things.[73] This kind of general sameness—for example, all physical things are similar because they are composed of matter—is what a philosopher would call “trivial.” Trivial inferences are either tautological (as here, when we define the thing of interest by merely restating the thing) or self-evident.

Lesson learned: whenever we talk about two things being the “same” or “different,” we need to first (1) define our terms and agree on those definitions, (2) define and identify our categories of comparison, and then (3) keep the discussion focused on those terms and categories.

Let’s try this out. By analogy and category expansion we can always argue that a brain is the same thing as a computer. However, can we find some nontrivial category of in-the-same-way-as, some functional equivalence between brains and computers? Yes, we can. Both brains and computers can use, at least some of the time, explicit algorithmic steps to calculate. By “calculate” here I mean, as we were talking about with a Turing-computable algorithm, steps of instructions that can be expressed with symbols such as numbers or words. In addition, those instructions, if followed exactly, will always produce the same result if you start with the same inputs. Thus, in the category of “calculators,” brains and computers can perform mathematical calculations like “1 + 1 = 2.” With input from external sensors brains and computers can perform more complex calculations like “When will that football hit my head?”
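The football example really is Turing-computable. If we idealize the ball as a projectile with no air resistance (an assumption made purely for the sketch, not a claim about how brains track footballs), the “calculation” is a fixed sequence of symbol manipulations: the quadratic formula applied to Newtonian kinematics.

```python
import math

def time_to_fall(height, v_vertical=0.0, g=9.81):
    """Time until a projectile returns to head level (height 0).

    Solves h + v*t - (g/2)*t**2 = 0 for the positive root, where h is
    the starting height in meters, v the initial upward velocity in
    m/s, and g the gravitational acceleration. Air resistance is
    ignored: this is an idealized sketch, not a ballistics model.
    """
    # Quadratic formula applied to (g/2)*t**2 - v*t - h = 0
    discriminant = v_vertical**2 + 2.0 * g * height
    return (v_vertical + math.sqrt(discriminant)) / g

t = time_to_fall(height=4.9)  # drop from 4.9 m: about 1 second
```

Same inputs, same instructions, same answer every time—exactly the deterministic property the definition of “calculate” demands, whether the steps run in silicon or anywhere else.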

Now you might argue that calculation is a trivial, self-evident similarity because humans created digital calculators, what we call “computers,” with their brains. Thus, you’d argue, we merely figured out how our brains worked and then made machines do the same thing. Precisely. But far from being trivial, I’d argue, this is an example of what philosophers of mind and artificial intelligence call “functionalism,” the similarity of how two different entities operate.[74] Under functionalism, “the mind” of humans contains a whole library of innate and learned functions that can be carried out by any number of physical mechanisms, including a brain and a computer.[75] From this perspective, the brain is a computer and the computer is a brain in the sense that both can work the same way—manipulating symbols—at least some of the time and for at least some kinds of operations.

Emotional response: any damn fool knows that a brain and a computer are different! Just look at them. One is grown out of cells, is wet, and is possessed by animals. The other is built by human animals out of silicon, metal, and plastic and it sits on my desk. They may share some functions—like calculation and memory—but I don’t see my computer functioning as a food-seeking, reproducing, and evolving creature. Hmmm … or do I? Maybe my computer just needs a body the same way that my brain does.

BACK TO BRAIN BASICS

Confused? That’s a good sign, even if you feel bad about the situation. Confusion means that your assumptions are being challenged and that you are open, probably against your will, to learning something new. But confusion is a vulnerable state. My students in “Introduction to Cognitive Science” hate confusion and the vulnerability it reveals. They came to college for information, not questions, damn it! How can they show me how smart they are if they don’t have facts to learn, apply, and brandish? My job, they tell me, is to illuminate, not obfuscate.

Here’s part of the problem: functionalism complicates the landscape, removing minds and intelligence from the sole province of humans. With the mind-blowing idea that intelligence might be found and built in nonhuman entities, most students seek comfort food. From their menu they order neuroscience: “Look,” they say, “if we start from a neuroscientific perspective, we can build up from neurotransmitters, synapses, and neural circuits to an understanding of how brains, and human brains in particular, work. And once we know how human brains work,” goes their logic, “we can understand what intelligence really is!”

Okay, folks. Let’s see how far we get. Let’s embrace the powerful neurocentric and reductionistic view (I’m not being sarcastic—it is powerful) and build a brain from the ground up. This is science, after all, and as we saw in Chapter 3 with the engineers’ secret code, if we understand it, then we should be able to build it.

“But first,” asks the evil professor, “what is the ‘it’ that you want to build?” Is “it” the brain? Which brain? If you mean an “animal brain,” then, which group of animals? If you mean “human,” then I have to ask, “What anatomical structures are you including?” What if our understanding of the human brain, at any structural or functional level, is incomplete? (Which it is.) Do you want to start instead with the more inclusive “central nervous system,” which includes the spinal cord and its anterior extensions? Would you want to include the so-called peripheral nervous system? What about the sensory systems, including the proprioceptive systems in our joints and muscles? And hormonal systems, including the glands that make chemicals that alter brain function? And what about the circulatory system, which delivers those chemicals, including glucose and oxygen, to the nervous system? Do we include the lungs, which provide the oxygen, and the digestive system, including the liver, that provides the glucose?

Have I left anything out? No, and that’s a blessing and a curse. By admitting that the brain, however we define it and whatever structural context we put it in, is dependent on nonbrain stuff, we’ve just described an embodied-brain, a brain integrated into the whole critter. Can we dissect out just the brain? Yes. But as Phaedrus pointed out in Zen and the Art of Motorcycle Maintenance,[76] when you cut any system apart with your intellectual scalpel—when you analyze—you do so arbitrarily. In our case, when we look for nice, clean, predefined borders between “brain” and “body,” there simply are none. The best we can do is acknowledge what analysis does and make it clear to ourselves and to others that there are an infinite number of ways to skin Wiener’s cat.[77]

One place to apply the analytical knife is between anatomy and physiology. From the anatomical (structural) perspective, the brain is what it’s made of. If we start in the most general way possible, we need an anatomical definition of “brain” that works for all animals. Richard and Gary Brusca, expert invertebrate zoologists, state, “[The] central nervous system is made up of an anteriorly located neuronal mass (ganglion) from which arise one or more longitudinal nerve cords.”[78] This anterior (= toward the front of the animal) mass of neurons is called either a “brain” or an “anterior ganglion,” depending on whom you are speaking to.

The brain of vertebrates, suggests Georg Striedter, associate professor of neurobiology and behavior at the University of California at Irvine, can be defined in three distinct ways anatomically: by region, by cells, and by molecules.[79] By region, Striedter says, “All adult vertebrate brains are divisible into telencephalon, diencephalon, mesencephalon, and rhombencephalon.” By cells, the brains of jawed vertebrates possess the cells of the broad types known as neurons and glia. By molecules, Striedter explains that the brains of both vertebrates and invertebrates are characterized by the presence of the same neurotransmitters (you’ll hear more about what those do later), including glutamate, GABA, acetylcholine, dopamine, noradrenaline, and serotonin. For the anthropocentric, Striedter points out that the anatomy of the human brain is different from other vertebrate brains by degree: it is more than four or five times larger than expected for a mammal of its body mass (a metric called “relative brain size”) and has the most layers of neurons in the cortex (a division of the telencephalon).

If you think the anatomical perspective is messy, then put on your wet-weather gear as we approach physiology. From the physiological (= functional) perspective, the brain is what it does. The problem is that the brain, this anterior ganglion, “does” or participates in nearly every function of a vertebrate. Muscle contraction? Absolutely! Heart rate? Yes. Growth and development? Yes, even that, given the brain’s involvement in hormonal regulation.

With all this brain-mediated physiology going on, it’s extremely useful to try to focus on a single function. Let’s go back to intelligence. Problem: we can’t even agree on a definition. We’ve skimmed through the Turing-versus-Searle debate—that is just one axis of the pool of arguments. Even if we stick with skills-based definitions, we fight over what abilities indicate intelligence. Howard Gardner, professor of cognition and education at Harvard’s Graduate School of Education, famously framed “multiple intelligences” for humans and, even though he has no fixed definition of intelligence, identified eight domain-specific types: spatial, linguistic, logical-mathematical, bodily-kinesthetic, musical, interpersonal, intrapersonal, and naturalistic.[80] Of course, Gardner’s approach gets hammered too. So the paradox is this: how can we study a “thing” if we are all studying different things?

THE NEURAL CIRCUIT AS THE FUNDAMENTAL SENSORY-MOTOR SYSTEM

This is where neuroscience really shows its powerful Kung Fu. See, grumpy students of mine, I actually agree with you that by starting with the basics of nervous systems, we create an empirical and materialistic foundation, a molecule-to-cell-to-region-to-body chain of causal understanding.[81] However, this understanding only takes us so far, at the moment, and then we are left holding the bag of subjective first-person and other-minds experience to intuit an understanding of our human intelligence. The empirical promise, articulated cogently by Patricia and Paul Churchland, professors of philosophy at the University of California at San Diego, is that our burgeoning neuroscientific understanding is creating a new and neuroscientifically based psychology.[82] Another important build-it-up-from-neurofundamentals approach is that of Jeff Hawkins, founder of Palm Computing and Handspring, who has demonstrated that the anatomy of the human cortex reflects the physiology of two fundamental pieces of intelligence: memory and the ability to predict.[83]

By carefully reviewing cortical anatomy and physiology, Hawkins has shown that the two are conjoined and inseparable. As Stephen Wainwright and Steven Vogel, cofounders of the field that we now recognize as comparative biomechanics, wrote in one of their early lab manuals, “Structure without function is a corpse; function sans structure is a ghost.”[84]

Speaking of ghosts, one kind of confusion about the difference between a brain and a body stems from the implicit substance dualism that permeates our human cultures. Substance dualism, formulated carefully by René Descartes, claims that the mind is made of stuff that is different from the physical stuff that makes up bodies. Consider that for many people, a “mind” is equivalent to or created by a “brain.” If, by intuition or religion, you believe that minds are made of a mysterious and nonmaterial substance, existing in some other dimension or plane after we die, then our brains must be of some special nonbody substance or quality too or possess the ability, as Descartes suggested, to interact with the nonphysical realm. Meanwhile, bodies, fashioned from clay, are earthly containers that are secular, ephemeral. The reasoning and predictions of substance dualism, however, have been refuted repeatedly.[85] We proceed knowing that brains and bodies are both physical entities, but we appreciate that the ghost of dualism lingers.

In practice, to study the brain scientifically we have to make some choices. If, for the moment, we choose to limit ourselves to just looking at sensory-motor neural circuits, what matters is the anatomical pattern of connections between neurons and the physiology of the type and timing of the chemical and electrical signals operating within the circuit.[86] To understand how any circuit functions, you also need to be able to measure when the circuit is active relative to when the animal possessing that circuit is doing something. Once you establish this correlation between a circuit’s activity and the animal’s behavior, then you need to test whether or not that circuit is necessary and sufficient for that behavior. You can test for necessity by removing the circuit genetically or surgically and then seeing how behavior changes.

Sufficiency is much harder to show: the activity of the circuit, independent of other circuits, must be able to cause the behavior previously correlated with the circuit’s activity. To achieve the isolation that sufficiency demands, often the only way to go is to simulate the circuit on a computer or in an autonomous robot. The problem with simulation, as we’ve seen throughout this book, is that critics see simulation as “only” simulation, a model and not the thing itself.

Another way to show that a neural circuit is sufficient for a specific behavior is to find a “simple” animal—usually an underappreciated and overworked invertebrate sea slug, nematode worm, or fruit fly—that has the behavior and the circuit of interest but doesn’t have all the other neural machinery vertebrates possess to complicate the analytical situation. The great power of the basic brains of invertebrates is that we can identify each neuron and its connections, something that remains nearly impossible, in practice, in vertebrates. With just a few overlapping circuits operating to move, find food, and mate, invertebrates have become powerful tools for neuroscientists. Using invertebrates as model organisms, neuroscientists have identified multiple circuits that are necessary and sufficient for escaping, digesting, flying, and learning.

Neural circuits get linked to behavior in two related fields called “behavioral neuroscience” and “behavioral neurobiology.” Thomas Carew, professor of neurobiology and behavior at the University of California at Irvine, has made the strong argument that invertebrates help us understand the basic principles of anatomy and physiology that create the neural circuits that are necessary and sufficient to explain how animals behave.[87] And using general principles gleaned from invertebrates, along with the experimental approaches outlined above, neuroscientists are able to understand some behaviors in vertebrates. The most thoroughly understood behaviors, with mechanisms examined at the molecular through the behavioral levels, are echolocation in bats, hunting in owls, and navigation in rats.[88]

What behavioral neuroscience shows beautifully is just what Trimmer had said: every brain has a body. Once more, with feeling: understanding behavior involves not just the neural circuit but also the neural circuit placed within the nervous system, the nervous system connected to sensors and muscles, the sensors and muscles part of a particular body, and the particular body interacting with the physical world, including other agents.

We haven’t, I realize, built a brain from the ground up. But by starting with circuits, we are making progress in terms of understanding how brains operate. By combining the brain basics of behavioral neuroscience with the functionalism of artificial intelligence, we come to three inescapable conclusions:

* Every brain has a body, both in terms of cooperational physiology and connective anatomy. The brain alone is not sufficient to explain behavior.

* The embodied-brain has some functions that it shares with computers and microcontrollers and some that it does not.

* Some kinds of functions that we associate with the structure called the vertebrate brain we can see in so-called simple[89] organic and artificial agents that have no brain; thus, the brain doesn’t control or determine all behaviors. The brain is not necessary for behavior.

This last assertion is probably the most controversial. Begging to differ might be George Lakoff, who helped develop the concept of the embodied mind within the fields of philosophy and cognitive linguistics.[90] Lakoff, writing about his development of the Neural Theory of Language, states, “Every action our body performs is controlled by our brains, and every input from the external world is made sense of by our brains. We think with our brains. There is no other choice.”[91] As you can see, Lakoff’s embodied perspective is still a neurocentric one (see Figure 5.2). So let’s get rid of the brain altogether and see what happens!

EMBODIED INTELLIGENCE: WHO NEEDS A BRAIN WHEN YOU HAVE A SMART BODY?

If, as Trimmer says, the body interacting with the world is doing part of the computational work of the nervous system, then we ought to be able to see bodies with very little brain or even no brain doing interesting things as autonomous agents.[92] Sound familiar? Doing just that is Tadro3, as I claimed at the beginning of this chapter. So let’s continue to use Tadro3 to see how far we can push the idea of being intelligent without having a brain. I’ll try to convince you that Tadro3 approaches the limit of being the simplest autonomous agent possible. By pushing the limits, I hope to show you that all it takes is a little KISS to create intelligent behavior.

You’ve seen Tadro3’s neural programming (see Figure 5.1). Let me translate the central computation from its computer code into mathematical terms so you can see how simple its programming is. The computer code takes a voltage input from Tadro3’s single eyespot and converts it to an intensity value, i, that is, in turn, converted to a value, β (Greek letter beta), which represents a turning signal for the tail, the tail angle:


β(t) = i(t) × c


where the t indicates that both i and β are changing through time, t, and c is a “constant of proportionality,” a numeric fudge factor that scales the light intensity to the size that we need to calculate a realistic tail angle. In words, this equation can be read as follows: “The angle of the tail at any time is linearly proportional to the intensity of light hitting the photoresistor at any time.” That’s it. It’s hard to imagine a much simpler equation with variables. (I know, if you got rid of the c then it’d be even simpler, but at that point you’d have a simple identity equation.)
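To make the linearity concrete, here is a minimal sketch of the control law in Python. The value of the constant and the sample intensities are invented for illustration; they are not the numbers used on the actual Tadro3.

```python
C = 0.05  # constant of proportionality, an assumed illustrative value

def tail_angle(intensity, c=C):
    """beta(t) = i(t) * c: the tail angle is linearly proportional
    to the light intensity hitting the photoresistor."""
    return intensity * c

# Doubling the light intensity doubles the turning signal.
dim, bright = tail_angle(200), tail_angle(400)
```

Because the mapping is a single multiplication, the whole “nervous system” of Tadro3 reduces to one line of arithmetic, executed over and over as the light reading changes.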

As brains go, this doesn’t qualify. If we built a circuit to perform this computation, in a vertebrate you’d likely see something like this (Figure 5.3): a sensory cell whose membrane potential varies continuously with light intensity; a primary sensory neuron that converts the sensory cell’s graded input into a train of action potentials and connects to two other neurons; an inhibitory interneuron that reduces the activity of the motor neuron connected to the left-turning muscle; and a motor neuron, with no intervening interneuron, connected to the right-turning muscle. This simple circuit has only four neurons, plus three other cells that complete the sensory-motor system.

Let’s make the circuit even simpler! We can think about how a bioengineer might try to accomplish the task using wetware, cells and proteins of biological origin that she can arrange as needed. In her build-a-brain workshop she could make the circuit simpler by creating a receptor that connects directly to the muscle cells without any neurons at all (Figure 5.3). Let’s presume that this simple circuit is, in principle, possible. Then this bioengineering design raises an awkward question: why have vertebrates made such a muddle of their circuit design? Why don’t they go all the way with the KISS principle?

To put it another way: why go through the trouble of building a chain of multiple cells? There are good reasons. What you get with more neurons is more synapses. Each synapse, because it converts electrical signals to chemical ones, is a place where you can regulate and adjust how a neuron or muscle is responding to the “upstream” cell signaling the “downstream” cell. These cell-level adjustments are important in creating functional changes of the circuit during development and learning. Another consequence of having multiple neurons is that you can increase the number of connections that the circuit makes with other circuits (branching connections not shown), increasing opportunities for coordination and computation.[93]

FIGURE 5.3. Designing the nervous system of Tadro3 in wetware. The top circuit, built in the way that vertebrates build neural circuits, contains seven cells: one receptor, one sensory neuron, one inhibitory interneuron, two motor neurons, and two muscle cells. The bottom circuit, built in a way that a bioengineer might be tempted to try, contains only three cells: a receptor that directly innervates the two muscle cells. Both hypothetical neural circuits have the same function: in the presence of light on the receptor, decrease the activity of the left-turning muscle and increase the activity of the right-turning muscle. The gaps between the cells represent synapses, across which cells communicate by diffusing chemical neurotransmitters. A synapse is excitatory if unlabeled or labeled with a positive sign. A synapse is inhibitory if labeled with a negative sign. The large circles with smaller embedded dark circles represent the cell bodies of the neurons.

FIGURE 5.4. Tadro3 morphed into a wheeled vehicle. The single light sensor (cup) sends a reverse (–sign) and a forward (+ sign) signal to the two motors (small black rectangles) that independently control the two wheels (large black rectangles) that spin at different rates.

If you look at the neural circuits in Figure 5.3, you’ll notice that we’ve done it again: we forgot the body! To be fair, we did this on purpose so that we could see what an isolated Tadro3 nervous system might look like. Note, also, that this circuit is not a brain in the anatomical sense of Brusca and Brusca’s invertebrates: it is not a mass of neurons. This is a diffuse nervous system, and it needs a body. We can create a body that is as abstract as the bioengineer’s neural circuit. To keep it simple, let’s put the Tadro3 nervous system into a wheeled vehicle (Figure 5.4). With a body operating on land, by the way, we don’t have to worry about all the crazy physics of swimming that we mentioned previously.

Let’s give Tadro3 simple wheels. The simplest wheels spin but don’t turn. Tadro3 turns by having different levels of power go to the two motors that drive the wheels. Having two motors to turn in the wheeled Tadro3 is the functional equivalent of having two muscles, working in pairs, to control the direction of the tail for turning in the swimming Tadro3 (Figure 5.3). Also notice that the simplest neural circuit is used here: a single light sensor provides both an excitatory and inhibitory signal. The cup-shaped light sensor is directional in the sense that it registers light only when that light hits its concave surface directly (not by coming through the back of the cup).

FIGURE 5.5. Vehicular Tadro3 (vT3) turns in response to light. In this thought experiment, when no light is hitting the cup-shaped light receptor, vT3 arcs to the right. When the receptor faces the light and is close enough to register the light, its path straightens as more power is delivered to the right wheel’s motor and less is delivered to the left wheel’s motor. The vT3 is inspired by the vehicles of Valentino Braitenberg.

How does vehicular Tadro3, or “vT3” for short, behave? It’s time for a thought experiment, a cognitive simulation (Figure 5.5). First, suppose that vT3 has an intrinsic rate of wheel spinning and is always moving around. When vT3 is in the dark, the left motor gets a bit more power than the right, so vT3 arcs to the right. As soon as light falls on vT3’s light sensor, though, the right motor starts to get more power, and the left gets some of its power reduced because of inhibition. When this happens, vT3 straightens out its heading.
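This thought experiment can itself be run as a toy differential-drive simulation. All of the numbers below (base speed, gains, wheel spacing) are invented for illustration, and the sign convention is assumed: a positive heading change means a turn to the left.

```python
WHEEL_BASE = 0.1  # assumed distance between the two wheels, in meters

def wheel_speeds(light):
    """One sensor, two connections: excitation to the right motor,
    inhibition to the left, as in the vT3 circuit."""
    base = 1.0                        # intrinsic rate of wheel spinning
    left = base + 0.1 - 0.5 * light   # in the dark, left runs faster
    right = base + 0.5 * light        # light boosts the right motor
    return left, right

def heading_change(light, dt=0.1):
    """Differential drive: turn rate is the wheel-speed difference
    divided by the wheel base (positive = turn toward the left)."""
    left, right = wheel_speeds(light)
    return (right - left) / WHEEL_BASE * dt

# In the dark (light = 0) the left wheel outruns the right, so the
# heading change is negative: vT3 arcs to the right. With enough
# light the difference vanishes and the path straightens.
```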

Doing thought experiments like this, with a wheeled vehicle and a simple sensory-motor circuit, was the brainchild of Valentino Braitenberg, a neuroanatomist. His 1984 book, Vehicles: Experiments in Synthetic Psychology, inspired a generation of workers in artificial intelligence and behavior-based robotics. By taking the reader through thought experiments with an evolving fleet of Vehicles, Braitenberg creates the “law of uphill analysis and downhill invention.” This law is drawn from what we would be tempted to call a functionalist observation: “It is actually impossible in theory to determine exactly what the hidden mechanism is without opening the box, since there are always many different mechanisms with identical behavior.”[94]

Braitenberg calls analysis “uphill” because “when we analyze a mechanism, we tend to overestimate its complexity.”[95] From Braitenberg we can see that anyone interested in understanding the mechanistic basis of behavior, including the behavior that we call intelligence, either has to open the box, as we did with our Searle hat on, or, as Braitenberg did, take the “downhill invention” route and create behavior from the ground up, like an engineer applying the secret code.

When we morphed Tadro3 into vT3, we showed that at least two mechanisms, two neural circuits in our case, could drive the sensory-motor responses to light. We also used Braitenberg’s approach to show how little—in terms of circuitry—Tadro3 needs in order to behave. Nowhere in the circuits of Tadro3 or vT3 do we see a collection and connection of interneurons that we’d be tempted to call a brain.

Braitenberg Vehicles are brainless and yet still manage to exhibit what, to the observer of the Vehicle who is blind to the Vehicle’s internal mechanism, we would call intelligence, at least at the level of goal-directed, purposeful autonomy. To be fair, though, we haven’t shown that vT3 actually works; we only ran the simulation in our minds. For part of his senior thesis in cognitive science at Vassar, Adam Lammert implemented the vT3 circuit on a wheeled robot to see if it worked as we had imagined. It did (Figure 5.6).

Because embodied Braitenberg Vehicles work both in physical reality and in simulation, we can use them to explore what kinds of bodies make behavior and what kinds don’t (Figure 5.7).

In this embodied view you can see right away what’s needed to be an autonomous agent. With a single sensor and a single wheel, what Braitenberg called a Vehicle of brand 1, this simplest autonomous Vehicle will speed up if it is facing a light source and slow down if not.

FIGURE 5.6. The vT3 operating as an embodied and autonomous wheeled robot. Top panel shows the arc-like path of vT3 over the course of a three-minute experiment run by Adam Lammert. When the path turns from gray to black, vT3 has detected the light. The bottom panel shows what happens when vT3 detects the light. At about ten seconds into the trial vT3 detects the light and changes its heading by almost 55 degrees, straightening out its arc, heading toward the light, and then orbiting it. Keep in mind that vT3 can be thought of as a Braitenberg Vehicle of brand 1.5.

FIGURE 5.7. The embodied view: intelligence is what we do autonomously. In a thought experiment created by Valentino Braitenberg, simple vehicles can have sensors attached directly to actuators, without an intervening brain. Without a sensor, an actuator, and a connection between them, the vehicle cannot behave because it has no way to sense or move. To have autonomous behavior, a sensor must be connected to an actuator. Here is a simple thought experiment: take the autonomous vehicle with one light sensor, one motorized wheel, and an excitatory connection between them. Put a light in front of this vehicle. What happens?

Although this isn’t terribly exciting behavior, it is behavior as we’ve defined it: the interaction of the agent and the environment. Vehicle 1 shows that what’s necessary and sufficient for behavior is (1) a sensor connected to a motor, (2) the sensory-motor linkage embodied in a chassis that has an actuator, (3) the Vehicle situated in an environment with a variable energy field that the sensor can detect, and (4) the Vehicle situated in an environment with a substrate to which the actuator can transfer its momentum. Behavior is impossible if any of these features are missing.

If you study these Braitenberg Vehicles (Figure 5.7), you can see where vT3 might belong: between the first and second autonomous Vehicles. In the lexicon of Braitenberg vT3 is thus neither a Vehicle of brand 1 (single sensor, single motor) nor a Vehicle of brands 2 or 3 (double sensor, double motor). In recognition of its intermediate character, Lammert called vT3 Vehicle 1.5. We can characterize Vehicle 1.5 not only graphically (Figure 5.4) but also by using the parameter space for Vehicles (Figure 5.7): (1) one sensor, (2) two actuators, (3) two connections from the one sensor, (4) one connection to each motor, and (5) both excitatory and inhibitory connections.

EMBODIED AND SITUATED AGENTS

No brain? No problem. As Tadro3, vT3, and Braitenberg Vehicle of brand 1.5 all show, we can build autonomous agents without what Professor Rodney Brooks of the Massachusetts Institute of Technology calls the “cognition box.” Brooks, a mainstream member of the world of artificial intelligence, revolutionized AI in the 1980s. While others had slow-moving robots burdened with computationally intensive problems like vision, path planning, and world mapping, Brooks built simple robots that could literally run circles around their more complex brethren.[96]

Inspired by what invertebrates could do without much in the way of a brain, Brooks and his colleagues programmed the computers inside mobile robots with parallel arrays of what most of us would call reflexes. In a reflex, a simple stimulus, like intense heat on the palm of your hand, causes an immediate response: flex the joints of your arm. When the joints flex, your hand moves toward your body and usually away from the heat source. In this sense Tadro3 also works by a kind of reflex, one that is ongoing and gradual rather than working from an on-off switch.

Brooks reasoned—and then demonstrated in the mid-1980s—that robots could use a storehouse of reflexes to do what the brain-based, cognition-box robots of the day could not: navigate in a changing environment. Brooks’s autonomous six-legged robot, Genghis, could walk over rough terrain and follow a human.[97] At the time Genghis was a breakthrough in the true sense of the word, the existence proof for what has become the field of behavior-based robotics.[98]

Behavior-based robotics uses the synthetic method to build up from the basics. We’ve encountered this when we spoke of Braitenberg’s “downhill invention” approach to understanding behavior. The synthetic approach also works hand in hand with the KISS principle because the whole idea is that the building blocks, like the reflex modules, are simple constructs, as with a stimulus linked directly to a response. The synthetic approach also works with our secret engineers’ code because we can understand the simple elements and then, piece by piece, put them together to build other now-more-complex systems that we can still understand.

When you start to synthesize the nervous system of an autonomous agent out of reflex modules, you run into an immediate problem: how do you coordinate those modules? If each reflex module automatically creates a behavior when its stimulus switch is flipped on, then what happens if two behavior modules get flipped on at the same time? Or what happens if behaviors are stimulated in sequence, one after the other, and their automatic actions overlap in time? This kind of conflict between automatic controls needs to be settled by an arbiter, a system that decides which module gets the green light. Brooks, again inspired by animals, created an arbitration scheme that he called “subsumption.” In a subsumption-style neural architecture, the robot’s programmer ranks the behavior modules. In case of conflict, the behavior module with the higher rank “subsumes,” or suppresses, the behavior module with the lower rank.[99] Once programmed, subsumption is a built-in decision arbiter. You, the autonomous agent, don’t need to consider what to do next; you simply do the lowest-level behavior as the default until you are stimulated to do something else.

I’ve tried to program myself to operate with subsumption when I drive. At the bottom of the hierarchy, my default layer is a behavior I call “drive efficiently.” This module is actually a collection of submodules that include behaviors like: avoid sudden acceleration, adjust speed to avoid red lights, and choose un-congested routes. At the top of my two-level subsumption hierarchy is “drive safely.” This module is a coordinated set of modules straight out of driver training class: stay on the road, keep a safe following distance, don’t hit the car in front of me, and scan ahead for possible problems. In practice, the “drive safely” behavior overrides “drive efficiently” most of the time because the presence of other cars or challenging driving conditions like rain, darkness, or unfamiliar roads stimulates the module.
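The arbitration scheme can be sketched in a few lines, using the driving example. The module names and trigger conditions are invented, and the ranked-list scan below is only a cartoon of subsumption: Brooks’s real architecture wires modules together as networks of augmented finite-state machines, not as a Python list.

```python
def drive_efficiently(percepts):
    """Default, lowest-ranked layer: always has something to do."""
    return "drive efficiently"

def drive_safely(percepts):
    """Higher-ranked layer: fires only when its stimulus is present."""
    if percepts.get("traffic") or percepts.get("rain"):
        return "drive safely"
    return None  # not triggered, so defer to lower layers

# Modules listed from highest rank to lowest; the first one that
# returns an action subsumes (suppresses) everything below it.
LAYERS = [drive_safely, drive_efficiently]

def arbitrate(percepts):
    for module in LAYERS:
        action = module(percepts)
        if action is not None:
            return action
```

The key design choice is that the arbiter itself makes no decisions: the ranking was fixed by the programmer, so at run time the agent never deliberates about what to do next.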

With subsumption in mind, I enjoy trying to analyze the driving behavior of other humans. At the lowest level many drivers appear to have the behavior, “drive like hell.” This default appears to involve a collection of submodules that includes behaviors like: pass or tailgate any car in front of you; switch lanes rapidly and, if necessary, without signaling; prevent other cars from passing you; accelerate quickly from a stop. At high-stimulus thresholds, in most drivers the “drive safely” module appears to override “drive like hell.”

Any agent who can do the behavior arbitration dance with subsumption must have some familiar accoutrements: a body, a body with sensors, a body with actuators, and a body operating in the real world. Brooks sums up these requirements as follows: an autonomous agent must be embodied and situated. An embodied agent reacts to events in the world by virtue of having a physical body; this is the “body computation” that we’ve talked about before. A situated agent reacts to events in the real world by virtue of having senses; this is the basis of the “neural computation” that we invoke the minute we put together a circuit diagram of a nervous system.

TADRO4, FINALLY: TO EAT AND NOT BE EATEN

As you’ve probably figured out, Tadro3 lacks a subsumption-style nervous system. Tadro3’s decisions are all ongoing, continuous adjustments of the turning angle of the tail. The resulting light-seeking behavior gives Tadro3 the know-how to detect light, move toward it, and then orbit around the spot of highest intensity.

Sadly, even though Tadro3 can evolve better feeding behavior by evolving its body, it lacks the genetic wherewithal to evolve different skills. For example, if danger is lurking, Tadro3 has no way of knowing: it just senses the intensity of light through its single eyespot. See no evil, hear no evil. Such no-know-how is a good way for an organic agent, like the tunicate tadpole larva after which Tadro3 is modeled, to become lunch in the game of life.

Predation is thought to be one of the strongest selection pressures in living fishes, as we talked about at the end of Chapter 4. Applying a strong and ecologically relevant selection pressure like predation thus seems like a great way to get back to where we started: trying to understand what drove the evolution of vertebrae in the first fish-like vertebrates.

If predation is the hypothesized selection pressure, then, for the reasons just mentioned, Tadro3 can’t do the job. It’s not built to be prey. Instead, we need to upgrade to a Tadro that has both the nervous system and the body to eat and to avoid being eaten. Tadro4 is up to the task, and I’ll explain its design using the ideas we’ve developed in this chapter on embodied intelligence.

We designed Tadro4 to do what living fish (but not tunicate tadpole larvae) do. Tadro4 swims around with two eyes (photoresistors) foraging for food. When and if a predator approaches, Tadro4 detects the predator using an infrared proximity detector, which is the functional equivalent of a lateral line—an array of tiny hairs and cells running along the length of the fish’s body that move when water is displaced by the fish itself or by something nearby moving.[100] When a detector on either side of the body is triggered, Tadro4 tries to escape. This switch in behavior, from feeding to fleeing, is accomplished by a nervous system that is a two-layer subsumption hierarchy (Figure 5.8).

What’s really cool about this two-layer subsumption design is that it very closely resembles, at a functional level (think: functionalism), how the nervous system of fish actually operates. Most fish find food by foraging—swimming around and searching for chow. This is layer 1, the default behavior, the kind of behavior that Tadro3 performed when we pretended that the light was a food source. In addition, fish are able to detect predators, and if a predator strikes, the would-be prey hits the neural panic button and performs what we creative biologists call a “fast start.” This is layer 2, the behavior ranked higher in terms of importance than layer 1.
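Tadro4’s two-layer hierarchy can be sketched as a tiny state machine, including the wrinkle that once the escape response is triggered, foraging stays switched off until the escape is complete (see Figure 5.8). The escape duration is an invented placeholder, not a measured value from the robot.

```python
ESCAPE_STEPS = 3  # assumed number of control steps in a full escape

class Tadro4Controller:
    """Layer 1: forage (default). Layer 2: escape, which subsumes
    foraging and runs to completion once triggered."""

    def __init__(self):
        self.escape_left = 0  # steps remaining in the current escape

    def step(self, predator_detected):
        if predator_detected:
            self.escape_left = ESCAPE_STEPS  # (re)trigger layer 2
        if self.escape_left > 0:
            self.escape_left -= 1
            return "escape"   # layer 2 suppresses layer 1
        return "forage"       # layer 1 default behavior
```

A single detection thus commits the agent to several steps of escaping before control falls back to the default foraging layer.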

The fast start is an escape response that involves the highest accelerations ever measured in fish, over ten times the acceleration due to gravity.[101] For comparison, astronauts on the US space shuttle experience maximum accelerations of about three Gs when the main engines ignite for the last minute of orbit-reaching propulsion.[102] The simultaneous firing of nearly all of the muscles on one side of the fish’s body makes these incredible accelerations possible. This muscle activity is coordinated by a purpose-built neural circuit called the reticulospinal system.[103] The reticulospinal circuit activates the motor neurons of the muscle after it receives a stimulus from the eighth cranial nerve, the nerve that is connected to the inner ear and the lateral line of the fish. Sounds like predator detection if you ask me. Given that the lateral line runs all the way to the tail in many fishes, this is like having the proverbial eyes in the back of your head—or, um, body.

Here’s the really cool part: if the fish is swimming around when it detects a predator, this escape-response neural circuit overrides the swimming-around circuit! That’s subsumption, baby. This override was demonstrated in a series of elegant experiments on goldfish by Joe Fetcho, professor of neurobiology at Cornell University.[104] He and Karel Svoboda directly measured the nerves’ activity in the fast-start and steady-swimming circuits. The neural signals for steady swimming come from a system of so-called central pattern generators, clusters of neurons that drum along at a steady rhythm without much input from other circuits. Inputs from the fast-start circuit, though, immediately switch off steady swimming when escape is activated.
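A central pattern generator is, at its simplest, a rhythm that keeps drumming until something tells it to stop. Here’s a toy sketch of that idea—again, invented names and numbers, with a sine wave standing in for the tail-beat rhythm—showing how a fast-start input can silence the steady-swimming signal:

```python
import math

def cpg_output(t, freq_hz=2.0):
    """Toy central pattern generator: a steady rhythmic tail-beat signal,
    drumming along at freq_hz regardless of other inputs."""
    return math.sin(2 * math.pi * freq_hz * t)

def swim_signal(t, escape_active):
    """Input from the fast-start circuit switches the rhythm off entirely."""
    return 0.0 if escape_active else cpg_output(t)
```

However fancy the real goldfish circuitry, the functional relationship Fetcho and Svoboda measured looks like that second function: the rhythm runs until escape is activated, and then it is simply gated off.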

FIGURE 5.8. Like a fish, Tadro4 is built to decide when to forage for food and when to escape from predators. The decision to switch behaviors is made using a two-layer subsumption architecture: Tadro4 forages for food until the escape response has been triggered, at which time the foraging is switched off until the escape is complete. Every sensor on Tadro4 can be thought of as continuously answering a question: Where’s the food (eyes)? Where’s the predator (lateral line)? The specific answers provide the continually updated perceptions that alter the state of the embodied brain and drive the immediate actions of Tadro4.

Our design of Tadro4 is propelled, if you will, by what we know about how living fish respond to predators in terms of the neural circuitry, swimming behavior, and evolution. Because we know so much on so many different levels, the predator-prey system is an excellent one for testing hypotheses about the evolutionary origins of vertebrae.

Vertebrae? Remember them? We’ve lost sight of these axial structures as we probed embodied brains and intelligent behavior. For Tadro4, we created an axial skeleton that had actual vertebrae. So we resolved to see how a population of Tadro4 prey responded, in terms of the number of vertebrae, to selection imposed by a predator. The game of life continues.
