CHAPTER 9 THOUGHT EXPERIMENTS ON THE MIND

Minds are simply what brains do.

Marvin Minsky, The Society of Mind

When intelligent machines are constructed, we should not be surprised to find them as confused and as stubborn as men in their convictions about mind-matter, consciousness, free will, and the like.

Marvin Minsky, The Society of Mind

Who Is Conscious?

The real history of consciousness starts with one’s first lie.

Joseph Brodsky

Suffering is the sole origin of consciousness.

Fyodor Dostoevsky, Notes from Underground

There is a kind of plant that eats organic food with its flowers: when a fly settles upon the blossom, the petals close upon it and hold it fast till the plant has absorbed the insect into its system; but they will close on nothing but what is good to eat; of a drop of rain or a piece of stick they will take no notice. Curious! that so unconscious a thing should have such a keen eye to its own interest. If this is unconsciousness, where is the use of consciousness?

Samuel Butler, 1871

We have been examining the brain as an entity that is capable of certain levels of accomplishment. But that perspective essentially leaves our selves out of the picture. We appear to live in our brains. We have subjective lives. How does the objective view of the brain that we have discussed up until now relate to our own feelings, to our sense of being the person having the experiences?

British philosopher Colin McGinn (born in 1950) writes that discussing “consciousness can reduce even the most fastidious thinker to blabbering incoherence.” The reason for this is that people often have unexamined and inconsistent views on exactly what the term means.

Many observers consider consciousness to be a form of performance—for example, the capacity for self-reflection, that is, the ability to understand one’s own thoughts and to explain them. I would describe that as the ability to think about one’s own thinking. Presumably, we could come up with a way of evaluating this ability and then use this test to separate conscious things from unconscious things.

However, we quickly get into trouble in trying to implement this approach. Is a baby conscious? A dog? They’re not very good at describing their own thinking process. There are people who believe that babies and dogs are not conscious beings precisely because they cannot explain themselves. How about the computer known as Watson? It can be put into a mode where it actually does explain how it came up with a given answer. Because it contains a model of its own thinking, is Watson therefore conscious whereas the baby and the dog are not?

Before we proceed to parse this question further, it is important to reflect on the most significant distinction relating to it: What is it that we can ascertain from science, versus what remains truly a matter of philosophy? One view is that philosophy is a kind of halfway house for questions that have not yet yielded to the scientific method. According to this perspective, once science advances sufficiently to resolve a particular set of questions, philosophers can then move on to other concerns, until such time that science resolves them also. This view is endemic where the issue of consciousness is concerned, and specifically the question “What and who is conscious?”

Consider these statements by philosopher John Searle: “We know that brains cause consciousness with specific biological mechanisms…. The essential thing is to recognize that consciousness is a biological process like digestion, lactation, photosynthesis, or mitosis…. The brain is a machine, a biological machine to be sure, but a machine all the same. So the first step is to figure out how the brain does it and then build an artificial machine that has an equally effective mechanism for causing consciousness.”1 People are often surprised to see these quotations because they assume that Searle is devoted to protecting the mystery of consciousness against reductionists like Ray Kurzweil.

The Australian philosopher David Chalmers (born in 1966) has coined the term “the hard problem of consciousness” to describe the difficulty of pinning down this essentially indescribable concept. Sometimes a brief phrase encapsulates an entire school of thought so well that it becomes emblematic (for example, Hannah Arendt’s “the banality of evil”). Chalmers’s famous formulation accomplishes this very well.

When discussing consciousness, it becomes very easy to slip into considering the observable and measurable attributes that we associate with being conscious, but this approach misses the very essence of the idea. I just mentioned the concept of metacognition—the idea of thinking about one’s own thinking—as one such correlate of consciousness. Other observers conflate emotional intelligence or moral intelligence with consciousness. But, again, our abilities to express a loving sentiment, to get the joke, or to be sexy are simply types of performances—impressive and intelligent perhaps, but skills that can nonetheless be observed and measured (even if we argue about how to assess them). Figuring out how the brain accomplishes these sorts of tasks and what is going on in the brain when we do them constitutes Chalmers’s “easy” question of consciousness. Of course, the “easy” problem is anything but and represents perhaps the most difficult and important scientific quest of our era. Chalmers’s “hard” question, meanwhile, is so hard that it is essentially ineffable.

In support of this distinction, Chalmers introduces a thought experiment involving what he calls zombies. A zombie is an entity that acts just like a person but simply does not have subjective experience—that is, a zombie is not conscious. Chalmers argues that since we can conceive of zombies, they are at least logically possible. If you were at a cocktail party and there were both “normal” humans and zombies, how would you tell the difference? Perhaps this sounds like a cocktail party you have attended.

Many people answer this question by saying they would interrogate individuals they wished to assess about their emotional reactions to events and ideas. A zombie, they believe, would betray its lack of subjective experience through a deficiency in certain types of emotional responses. But an answer along these lines simply fails to appreciate the assumptions of the thought experiment. If we encountered an unemotional person (such as an individual with certain emotional deficits, as is common in certain types of autism) or an avatar or a robot that was not convincing as an emotional human being, then that entity is not a zombie. Remember: According to Chalmers’s assumption, a zombie is completely normal in his ability to respond, including the ability to react emotionally; he is just lacking subjective experience. The bottom line is that there is no way to identify a zombie, because by definition there is no apparent indication of his zombie nature in his behavior. So is this a distinction without a difference?

Chalmers does not attempt to answer the hard question but does provide some possibilities. One is a form of dualism in which consciousness per se does not exist in the physical world but rather as a separate ontological reality. According to this formulation, what a person does is based on the processes in her brain. Because the brain is causally closed, we can fully explain a person’s actions, including her thoughts, through its processes. Consciousness then exists essentially in another realm, or at least is a property separate from the physical world. This explanation does not permit the mind (that is to say, the conscious property associated with the brain) to causally affect the brain.

Another possibility that Chalmers entertains, which is not logically distinct from his notion of dualism, and is often called panprotopsychism, holds that all physical systems are conscious, though a human is more conscious than, say, a light switch. I would certainly agree that a human brain has more to be conscious about than a light switch.

My own view, which is perhaps a subschool of panprotopsychism, is that consciousness is an emergent property of a complex physical system. In this view a dog is also conscious but somewhat less than a human. An ant has some level of consciousness, too, but much less than that of a dog. The ant colony, on the other hand, could be considered to have a higher level of consciousness than the individual ant; it is certainly more intelligent than a lone ant. By this reckoning, a computer that is successfully emulating the complexity of a human brain would also have the same emergent consciousness as a human.

Another way to conceptualize the concept of consciousness is as a system that has “qualia.” So what are qualia? One definition of the term is “conscious experiences.” That, however, does not take us very far. Consider this thought experiment: A neuroscientist is completely color-blind—not the sort of color blindness in which one mixes up certain shades of, say, green and red (as I do), but rather a condition in which the afflicted individual lives entirely in a black-and-white world. (In a more extreme version of this scenario, she has grown up in a black-and-white world and has never seen any colors. Bottom line, there is no color in her world.) However, she has extensively studied the physics of color—she is aware that the wavelength of red light is 700 nanometers—as well as the neurological processes of a person who can experience colors normally, and thus knows a great deal about how the brain processes color. She knows more about color than most people. If you wanted to help her out and explain what this actual experience of “red” is like, how would you do it?

Perhaps you would read her a section from the poem “Red” by the Nigerian poet Oluseyi Oluseun:


Red the colour of blood

the symbol of life

Red the colour of danger

the symbol of death

Red the colour of roses

the symbol of beauty

Red the colour of lovers

the symbol of unity

Red the colour of tomato

the symbol of good health

Red the colour of hot fire

the symbol of burning desire


That actually would give her a pretty good idea of some of the associations people have made with red, and may even enable her to hold her own in a conversation about the color. (“Yes, I love the color red, it’s so hot and fiery, so dangerously beautiful…”) If she wanted to, she could probably convince people that she had experienced red, but all the poetry in the world would not actually enable her to have that experience.

Similarly, how would you explain what it feels like to dive into water to someone who has never touched water? We would again be forced to resort to poetry, but there is really no way to impart the experience itself. These experiences are what we refer to as qualia.

Many of the readers of this book have experienced the color red. But how do I know whether your experience of red is not the same experience that I have when I look at blue? We both look at a red object and state assuredly that it is red, but that does not answer the question. I may be experiencing what you experience when you look at blue, but we have both learned to call red things red. We could start swapping poems again, but they would simply reflect the associations that people have made with colors; they do not speak to the actual nature of the qualia. Indeed, congenitally blind people have read a great deal about colors, as literature is replete with such references, and thus they do have some version of an experience of color. How does their experience of red compare with the experience of sighted people? This is really the same question as the one concerning the woman in the black-and-white world. It is remarkable that such common phenomena in our lives are so completely ineffable as to make a simple confirmation, like one that we are experiencing the same qualia, impossible.

Another definition of qualia is the feeling of an experience. However, this definition is no less circular than our attempts at defining consciousness above, as the phrases “feeling,” “having an experience,” and “consciousness” are all synonyms. Consciousness and the closely related question of qualia are a fundamental, perhaps the ultimate, philosophical question (although the issue of identity may be even more important, as I will discuss in the closing section of this chapter).

So with regard to consciousness, what exactly is the question again? It is this: Who or what is conscious? I refer to “mind” in the title of this book rather than “brain” because a mind is a brain that is conscious. We could also say that a mind has free will and identity. The assertion that these issues are philosophical is itself not self-evident. I maintain that these questions can never be fully resolved through science. In other words, there are no falsifiable experiments that we can contemplate that would resolve them, not without making philosophical assumptions. If we were building a consciousness detector, Searle would want it to ascertain that the subject was squirting biological neurotransmitters. American philosopher Daniel Dennett (born in 1942) would be more flexible on substrate, but might want to determine whether or not the system contained a model of itself and of its own performance. That view comes closer to my own, but at its core is still a philosophical assumption.

Proposals have been regularly presented that purport to be scientific theories linking consciousness to some measurable physical attribute—what Searle refers to as the “mechanism for causing consciousness.” American scientist, philosopher, and anesthesiologist Stuart Hameroff (born in 1947) has written that “cytoskeletal filaments are the roots of consciousness.”2 He is referring to thin threads in every cell (including neurons but not limited to them) called microtubules, which give each cell structural integrity and play a role in cell division. His books and papers on this issue contain detailed descriptions and equations that explain the plausibility that the microtubules play a role in information processing within the cell. But the connection of microtubules to consciousness requires a leap of faith not fundamentally different from the leap of faith implicit in a religious doctrine that describes a supreme being bestowing consciousness (sometimes referred to as a “soul”) on certain (usually human) entities. Some weak evidence is proffered for Hameroff’s view, specifically the observation that the neurological processes that could support this purported cellular computing are stopped during anesthesia. But this is far from compelling substantiation, given that lots of processes are halted during anesthesia. We cannot even say for certain that subjects are not conscious when anesthetized. All we do know is that people do not remember their experiences afterward. Even that is not universal, as some people do remember—accurately—their experience while under anesthesia, including, for example, conversations by their surgeons. Called anesthesia awareness, this phenomenon is estimated to occur about 40,000 times a year in the United States.3 But even setting that aside, consciousness and memory are completely different concepts. As I have discussed extensively, if I think back on my moment-to-moment experiences over the past day, I have had a vast number of sensory impressions, yet I remember very few of them. Was I therefore not conscious of what I was seeing and hearing all day? It is actually a good question, and the answer is not so clear.

English physicist and mathematician Roger Penrose (born in 1931) took a different leap of faith in proposing the source of consciousness, though his also concerned the microtubules—specifically, their purported quantum computing abilities. His reasoning, although not explicitly stated, seemed to be that consciousness is mysterious, and a quantum event is also mysterious, so they must be linked in some way.

Penrose started his analysis with Turing’s theorems on unsolvable problems and Gödel’s related incompleteness theorem. Turing’s premise (which was discussed in greater detail in chapter 8) is that there are algorithmic problems that can be stated but that cannot be solved by a Turing machine. Given the computational universality of the Turing machine, we can conclude that these “unsolvable problems” cannot be solved by any machine. Gödel’s incompleteness theorem has a similar result with regard to the ability to prove conjectures involving numbers. Penrose’s argument is that the human brain is able to solve these unsolvable problems, and is therefore capable of doing things that a deterministic machine such as a computer is unable to do. His motivation, at least in part, is to elevate human beings above machines. But his central premise—that humans can solve Turing’s and Gödel’s insoluble problems—is unfortunately simply not true.

A famous unsolvable problem called the busy beaver problem is stated as follows: Find the maximum number of 1s that a Turing machine with a certain number of states can write on its tape. So to determine the busy beaver function for a number n, we build all of the Turing machines that have n states (a finite number, if n is finite) and then determine the largest number of 1s that these machines write on their tapes, excluding those Turing machines that get into an infinite loop. This is unsolvable because, as we seek to simulate all of these n-state Turing machines, our simulator will get into an infinite loop when it attempts to simulate one of the Turing machines that does. However, it turns out that computers have nonetheless been able to determine the busy beaver function for certain values of n. So have humans, but computers have solved the problem for far more values of n than unassisted humans have. Computers are generally better than humans at solving Turing’s and Gödel’s unsolvable problems.
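The enumeration described above can be sketched directly in code. The snippet below is an illustrative brute-force search (the names busy_beaver and run, the machine encoding, and the step cap are all my own choices, not a standard library) for the 2-symbol busy beaver value at tiny n. It uses a fixed step cap in place of true halting detection; that cap happens to be sound for n ≤ 2, because every halting 2-state machine stops within 6 steps, but no fixed cap works for all n, which is precisely where the unsolvability bites.

```python
from itertools import product

def run(machine, n_states, max_steps=100):
    """Simulate a 2-symbol Turing machine starting on a blank tape.

    Returns the number of 1s on the tape if the machine halts within
    max_steps, or None if it is still running when the cap is reached."""
    tape = {}                       # sparse tape; unwritten cells read as 0
    pos, state = 0, 0
    for _ in range(max_steps):
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write           # the halting transition still writes
        pos += move
        if nxt == n_states:         # state index n_states encodes "halt"
            return sum(tape.values())
        state = nxt
    return None                     # treated as non-halting

def busy_beaver(n, max_steps=100):
    """Brute-force the n-state, 2-symbol busy beaver value.

    The step cap is a halting heuristic, not a halting decider: it is
    sound only when we know a bound on the running time of halting
    n-state machines, and no such computable bound exists for all n."""
    keys = list(product(range(n), [0, 1]))                  # (state, symbol)
    actions = list(product([0, 1], [-1, 1], range(n + 1)))  # write, move, next
    best = 0
    for choice in product(actions, repeat=len(keys)):
        score = run(dict(zip(keys, choice)), n, max_steps)
        if score is not None:
            best = max(best, score)
    return best

print(busy_beaver(1))  # 1
print(busy_beaver(2))  # 4
```

For n = 1 this examines 64 machines, and for n = 2 it examines 20,736; beyond n = 4 both the search space and the step bounds needed to separate halting machines from loops explode, which is why the known values of the function stop so early.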

Penrose linked these claimed transcendent capabilities of the human brain to the quantum computing that he hypothesized took place in it. According to Penrose, these neural quantum effects were somehow inherently not achievable by computers, so therefore human thinking has an inherent edge. In fact, common electronics uses quantum effects (transistors rely on quantum tunneling of electrons across barriers); quantum computing in the brain has never been demonstrated; human mental performance can be satisfactorily explained by classical computing methods; and in any event nothing bars us from applying quantum computing in computers. None of these objections has ever been addressed by Penrose. It was when critics pointed out that the brain is a warm and messy place for quantum computing that Hameroff and Penrose joined forces. Penrose found a perfect vehicle within neurons that could conceivably support quantum computing—namely, the microtubules that Hameroff had speculated were part of the information processing within a neuron. So the Hameroff-Penrose thesis is that the microtubules in the neurons are doing quantum computing and that this is responsible for consciousness.

This thesis has also been criticized, for example, by Swedish American physicist and cosmologist Max Tegmark (born in 1967), who determined that quantum events in microtubules could survive for only 10⁻¹³ seconds, which is much too brief a period of time either to compute results of any significance or to affect neural processes. There are certain types of problems for which quantum computing would show superior capabilities to classical computing—for example, the cracking of encryption codes through the factoring of large numbers. However, unassisted human thinking has proven to be terrible at solving them, and cannot match even classical computers in this area, which suggests that the brain is not demonstrating any quantum computing capabilities. Moreover, even if such a phenomenon as quantum computing in the brain did exist, it would not necessarily be linked to consciousness.

You Gotta Have Faith

What a piece of work is a man! How noble in reason! How infinite in faculties! In form and moving, how express and admirable! In action how like an angel! In apprehension, how like a god! The beauty of the world! The paragon of animals! And yet, to me, what is this quintessence of dust?

Hamlet, in Shakespeare’s Hamlet

The reality is that these theories are all leaps of faith, and I would add that where consciousness is concerned, the guiding principle is “you gotta have faith”—that is, we each need a leap of faith as to what and who is conscious, and who and what we are as conscious beings. Otherwise we could not get up in the morning. But we should be honest about the fundamental need for a leap of faith in this matter and self-reflective as to what our own particular leap involves.

People have very different leaps, despite impressions to the contrary. Individual philosophical assumptions about the nature and source of consciousness underlie disagreements on issues ranging from animal rights to abortion, and will result in even more contentious future conflicts over machine rights. My objective prediction is that machines in the future will appear to be conscious and that they will be convincing to biological people when they speak of their qualia. They will exhibit the full range of subtle, familiar emotional cues; they will make us laugh and cry; and they will get mad at us if we say that we don’t believe that they are conscious. (They will be very smart, so we won’t want that to happen.) We will come to accept that they are conscious persons. My own leap of faith is this: Once machines do succeed in being convincing when they speak of their qualia and conscious experiences, they will indeed constitute conscious persons. I have come to my position via this thought experiment: Imagine that you meet an entity in the future (a robot or an avatar) that is completely convincing in her emotional reactions. She laughs convincingly at your jokes, and in turn makes you laugh and cry (but not just by pinching you). She convinces you of her sincerity when she speaks of her fears and longings. In every way, she seems conscious. She seems, in fact, like a person. Would you accept her as a conscious person?

If your initial reaction is that you would likely detect some way in which she betrays her nonbiological nature, then you are not keeping to the assumptions in this hypothetical situation, which established that she is fully convincing. Given that assumption, if she were threatened with destruction and responded, as a human would, with terror, would you react in the same empathetic way that you would if you witnessed such a scene involving a human? For myself, the answer is yes, and I believe the answer would be the same for most if not virtually all other people regardless of what they might assert now in a philosophical debate. Again, the emphasis here is on the word “convincing.”

There is certainly disagreement on when or even whether we will encounter such a nonbiological entity. My own consistent prediction is that this will first take place in 2029 and become routine in the 2030s. But putting the time frame aside, I believe that we will eventually come to regard such entities as conscious. Consider how we already treat them when we are exposed to them as characters in stories and movies: R2D2 from the Star Wars movies, David and Teddy from the movie A.I., Data from the TV series Star Trek: The Next Generation, Johnny 5 from the movie Short Circuit, WALL-E from Disney’s movie Wall-E, T-800—the (good) Terminator—in the second and later Terminator movies, Rachael the Replicant from the movie Blade Runner (who, by the way, is not aware that she is not human), Bumblebee from the movie, TV, and comic series Transformers, and Sonny from the movie I, Robot. We do empathize with these characters even though we know that they are nonbiological. We regard them as conscious persons, just as we do biological human characters. We share their feelings and fear for them when they get into trouble. If that is how we treat fictional nonbiological characters today, then that is how we will treat real-life intelligences in the future that don’t happen to have a biological substrate.

If you do accept the leap of faith that a nonbiological entity that is convincing in its reactions to qualia is actually conscious, then consider what that implies: namely that consciousness is an emergent property of the overall pattern of an entity, not the substrate it runs on.

There is a conceptual gap between science, which stands for objective measurement and the conclusions we can draw thereby, and consciousness, which is a synonym for subjective experience. We obviously cannot simply ask an entity in question, “Are you conscious?” If we look inside its “head,” biological or otherwise, to ascertain that, then we would have to make philosophical assumptions in determining what it is that we are looking for. The question as to whether or not an entity is conscious is therefore not a scientific one. Based on this, some observers go on to question whether consciousness itself has any basis in reality. English writer and philosopher Susan Blackmore (born in 1951) speaks of the “grand illusion of consciousness.” She acknowledges the reality of the meme (idea) of consciousness—in other words, consciousness certainly exists as an idea, and there are a great many neocortical structures that deal with the idea, not to mention words that have been spoken and written about it. But it is not clear that it refers to something real. Blackmore goes on to explain that she is not necessarily denying the reality of consciousness, but rather attempting to articulate the sorts of dilemmas we encounter when we try to pin down the concept. As British psychologist and writer Stuart Sutherland (1927–1998) wrote in the International Dictionary of Psychology, “Consciousness is a fascinating but elusive phenomenon; it is impossible to specify what it is, what it does, or why it evolved.”4

However, we would be well advised not to dismiss the concept too easily as just a polite debate between philosophers—which, incidentally, dates back two thousand years to the Platonic dialogues. The idea of consciousness underlies our moral system, and our legal system in turn is loosely built on those moral beliefs. If a person extinguishes someone’s consciousness, as in the act of murder, we consider that to be immoral, and with some exceptions, a high crime. Those exceptions are also relevant to consciousness, in that we might authorize police or military forces to kill certain conscious people to protect a greater number of other conscious people. We can debate the merits of particular exceptions, but the underlying principle holds true.

Assaulting someone and causing her to experience suffering is also generally considered immoral and illegal. If I destroy my own property, that is probably acceptable. If I destroy your property without your permission, it is probably not, but the reason is not that I am causing suffering to your property; rather, I am causing suffering to you as its owner. On the other hand, if my property includes a conscious being such as an animal, then I as the owner of that animal do not necessarily have free moral or legal rein to do with it as I wish—there are, for example, laws against animal cruelty.

Because a great deal of our moral and legal system is based on protecting the existence of and preventing the unnecessary suffering of conscious entities, in order to make responsible judgments we need to answer the question as to who is conscious. That question is therefore not simply a matter for intellectual debate, as is evident in the controversy surrounding an issue like abortion. I should point out that the abortion issue can go somewhat beyond the issue of consciousness, as pro-life proponents argue that the potential for an embryo to ultimately become a conscious person is sufficient reason for it to be awarded protection, just as someone in a coma deserves that right. But fundamentally the issue is a debate about when a fetus becomes conscious.

Perceptions of consciousness also often affect our judgments in controversial areas. Looking at the abortion issue again, many people make a distinction between a measure like the morning-after pill, which prevents the implantation of an embryo in the uterus in the first days of pregnancy, and a late-stage abortion. The difference has to do with the likelihood that the late-stage fetus is conscious. It is difficult to maintain that a few-days-old embryo is conscious unless one takes a panprotopsychist position, but even in these terms it would rank below the simplest animal in terms of consciousness. Similarly, we have very different reactions to the maltreatment of great apes versus, say, insects. No one worries too much today about causing pain and suffering to our computer software (although we do comment extensively on the ability of software to cause us suffering), but when future software has the intellectual, emotional, and moral intelligence of biological humans, this will become a genuine concern.

Thus my position is that I will accept nonbiological entities that are fully convincing in their emotional reactions to be conscious persons, and my prediction is that the consensus in society will accept them as well. Note that this definition extends beyond entities that can pass the Turing test, which requires mastery of human language. The latter are sufficiently humanlike that I would include them, and I believe that most of society will as well, but I also include entities that evidence humanlike emotional reactions but may not be able to pass the Turing test—for example, young children.

Does this resolve the philosophical question of who is conscious, at least for myself and others who accept this particular leap of faith? The answer is: not quite. We’ve only covered one case, which is that of entities that act in a humanlike way. Even though we are discussing future entities that are not biological, we are talking about entities that demonstrate convincing humanlike reactions, so this position is still human-centric. But what about more alien forms of intelligence that are not humanlike? We can imagine intelligences that are as complex as or perhaps vastly more complex and intricate than human brains, but that have completely different emotions and motivations. How do we decide whether or not they are conscious?

We can start by considering creatures in the biological world that have brains comparable to those of humans yet evince very different sorts of behaviors. British philosopher David Cockburn (born in 1949) writes about viewing a video of a giant squid that was under attack (or at least it thought it was—Cockburn hypothesized that it might have been afraid of the human with the video camera). The squid shuddered and cowered, and Cockburn writes, “It responded in a way which struck me immediately and powerfully as one of fear. Part of what was striking in this sequence was the way in which it was possible to see in the behavior of a creature physically so very different from human beings an emotion which was so unambiguously and specifically one of fear.”5 He concludes that the animal was feeling that emotion and he articulates the belief that most other people viewing that film would come to the same conclusion. If we accept Cockburn’s description and conclusion, then we would have to add giant squids to our list of conscious entities. However, this has not gotten us very far either, because it is still based on our empathetic reaction to an emotion that we recognize in ourselves. It is still a self-centric or human-centric perspective.

If we step outside biology, nonbiological intelligence will be even more varied than intelligence in the biological world. For example, some entities may not have a fear of their own destruction, and may not have a need for the emotions we see in humans or in any biological creature. Perhaps they could still pass the Turing test, or perhaps they wouldn’t even be willing to try.

We do in fact build robots today without a sense of self-preservation, so that they can carry out missions in dangerous environments. They're not yet sufficiently intelligent or complex for us to seriously consider their sentience, but we can imagine future robots of this sort that are as complex as humans. What about them?

Personally I would say that if I saw in such a device’s behavior a commitment to a complex and worthy goal and the ability to execute notable decisions and actions to carry out its mission, I would be impressed and probably become upset if it got destroyed. This is now perhaps stretching the concept a bit, in that I am responding to behavior that does not include many emotions we consider universal in people and even in biological creatures of all kinds. But again, I am seeking to connect with attributes that I can relate to in myself and other people. The idea of an entity totally dedicated to a noble goal and carrying it out or at least attempting to do so without regard for its own well-being is, after all, not completely foreign to human experience. In this instance we are also considering an entity that is seeking to protect biological humans or in some way advance our agenda.

What if this entity has its own goals distinct from a human one and is not carrying out a mission we would recognize as noble in our own terms? I might then attempt to see if I could connect or appreciate some of its abilities in some other way. If it is indeed very intelligent, it is likely to be good at math, so perhaps I could have a conversation with it on that topic. Maybe it would appreciate math jokes.

But if the entity has no interest in communicating with me, and I don't have sufficient access to its actions and decision making to be moved by the beauty of its internal processes, does that mean that it is not conscious? I need to conclude that entities that do not succeed in convincing me of their emotional reactions, or that don't care to try, are not necessarily unconscious. It would be difficult to recognize another conscious entity without establishing some level of empathetic communication, but that judgment reflects my own limitations more than it does the entity under consideration. We thus need to proceed with humility. It is challenging enough to put ourselves in the subjective shoes of another human, so the task will be that much harder with intelligences that are extremely different from our own.

What Are We Conscious Of?

If we could look through the skull into the brain of a consciously thinking person, and if the place of optimal excitability were luminous, then we should see playing over the cerebral surface, a bright spot with fantastic, waving borders constantly fluctuating in size and form, surrounded by a darkness more or less deep, covering the rest of the hemisphere.

Ivan Petrovich Pavlov, 1913

Returning to the giant squid, we can recognize some of its apparent emotions, but much of its behavior is a mystery. What is it like being a giant squid? How does it feel as it squeezes its spineless body through a tiny opening? We don’t even have the vocabulary to answer this question, given that we cannot even describe experiences that we do share with other people, such as seeing the color red or feeling water splash on our bodies.

But we don’t have to go as far as the bottom of the ocean to find mysteries in the nature of conscious experiences—we need only consider our own. I know, for example, that I am conscious. I assume that you, the reader, are conscious also. (As for people who have not bought my book, I am not so sure.) But what am I conscious of? You might ask yourself the same question.

Try this thought experiment (which will work for those of you who drive a car): Imagine that you are driving in the left lane of a highway. Now close your eyes, grab an imagined steering wheel, and make the movements needed to change into the lane to your right.

Okay, before continuing to read, try it.

Here is what you probably did: You held the steering wheel. You checked that the right lane was clear. Assuming it was, you turned the steering wheel to the right for a brief period. Then you straightened it out again. Job done.

It’s a good thing you weren’t in a real car, because you just zoomed across all the lanes of the highway and crashed into a tree. While I probably should have mentioned that you shouldn’t try this in a real moving car (but then I assume you have already mastered the rule that you shouldn’t drive with your eyes closed), that’s not really the key problem here. If you used the procedure I just described—and almost everyone does when doing this thought experiment—you got it wrong. Turning the wheel to the right and then straightening it out causes the car to head in a direction that is diagonal to its original direction. It will cross the lane to the right, as you intended, but it will keep going to the right indefinitely until it zooms off the road. What you needed to do as your car crossed the lane to the right was to then turn the wheel to the left, just as far as you had turned it to the right, and then straighten it out again. This will cause the car to again head straight in the new lane.
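The steering geometry described above can be checked with a toy simulation. This is a minimal sketch, not a real vehicle model: it assumes constant speed and small angles, with the steering angle directly setting the rate of change of the car's heading, and all the numbers (speed, gain, steering angle) are illustrative values chosen for the example.

```python
import math

def simulate(steering_schedule, dt=0.1, steps=100, speed=30.0):
    """Integrate heading and lateral drift under a steering schedule.

    steering_schedule(t) returns the steering angle (radians) at time t.
    Returns (final_heading, lateral_offset).
    """
    heading = 0.0   # radians; 0 means straight down the lane
    lateral = 0.0   # meters of sideways drift from the original lane
    gain = 0.5      # how quickly steering angle changes heading (1/s)
    for i in range(steps):
        t = i * dt
        heading += gain * steering_schedule(t) * dt
        lateral += speed * math.sin(heading) * dt
    return heading, lateral

# The "naive" maneuver: steer right for one second, then straighten the wheel.
naive = lambda t: 0.1 if t < 1.0 else 0.0

# The correct maneuver: steer right for one second, counter-steer left for
# one second, then straighten.
correct = lambda t: 0.1 if t < 1.0 else (-0.1 if t < 2.0 else 0.0)

h1, d1 = simulate(naive)
h2, d2 = simulate(correct)
# After the naive maneuver the heading remains nonzero, so the car keeps
# drifting right indefinitely; the counter-steer returns the heading to
# zero, and the car travels straight in its new lane.
```

Running the two schedules shows exactly what the thought experiment describes: straightening the wheel freezes the diagonal heading rather than canceling it, so only the counter-steer leaves the car pointed down the road.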

Consider the fact that if you’re a regular driver, you’ve done this maneuver thousands of times. Are you not conscious when you do this? Have you never paid attention to what you are actually doing when you change lanes? Assuming that you are not reading this book in a hospital while recovering from a lane-changing accident, you have clearly mastered this skill. Yet you are not conscious of what you did, however many times you’ve accomplished this task.

When people tell stories of their experiences, they describe them as sequences of situations and decisions. But this is not how we experience a story in the first place. Our original experience is as a sequence of high-level patterns, some of which may have triggered feelings. We remember only a small subset of those patterns, if that. Even if we are reasonably accurate in our recounting of a story, we use our powers of confabulation to fill in missing details and convert the sequence into a coherent tale. We cannot be certain what our original conscious experience was from our recollection of it, yet memory is the only access we have to that experience. The present moment is, well, fleeting, and is quickly turned into a memory, or, more often, not. Even if an experience is turned into a memory, it is stored, as the PRTM indicates, as a high-level pattern composed of other patterns in a huge hierarchy. As I have pointed out several times, almost all of the experiences we have (like any of the times we changed lanes) are immediately forgotten. So ascertaining what constitutes our own conscious experience is actually not attainable.

East Is East and West Is West

Before brains there was no color or sound in the universe, nor was there any flavor or aroma and probably little sense and no feeling or emotion.

Roger W. Sperry7

René Descartes walks into a restaurant and sits down for dinner. The waiter comes over and asks if he’d like an appetizer.

“No thank you,” says Descartes, “I’d just like to order dinner.”

“Would you like to hear our daily specials?” asks the waiter.

“No,” says Descartes, getting impatient.

“Would you like a drink before dinner?” the waiter asks.

Descartes is insulted, since he’s a teetotaler. “I think not!” he says indignantly, and POOF! he disappears.

A joke as recalled by David Chalmers

There are two ways to view the questions we have been considering—converse Western and Eastern perspectives on the nature of consciousness and of reality. In the Western perspective, we start with a physical world that evolves patterns of information. After a few billion years of evolution, the entities in that world have evolved sufficiently to become conscious beings. In the Eastern view, consciousness is the fundamental reality; the physical world only comes into existence through the thoughts of conscious beings. The physical world, in other words, is the thoughts of conscious beings made manifest. These are of course simplifications of complex and diverse philosophies, but they represent the principal polarities in the philosophies of consciousness and its relationship to the physical world.

The East-West divide on the issue of consciousness has also found expression in opposing schools of thought in the field of subatomic physics. In quantum mechanics, particles exist as what are called probability fields. Any measurement carried out on them by a measuring device causes what is called a collapse of the wave function, meaning that the particle suddenly assumes a particular location. A popular view is that such a measurement constitutes observation by a conscious observer, because otherwise measurement would be a meaningless concept. Thus the particle assumes a particular location (as well as other properties, such as velocity) only when it is observed. Basically particles figure that if no one is bothering to look at them, they don’t need to decide where they are. I call this the Buddhist school of quantum mechanics, because in it particles essentially don’t exist until they are observed by a conscious person.

There is another interpretation of quantum mechanics that avoids such anthropomorphic terminology. In this analysis, the field representing a particle is not a probability field, but rather just a function that has different values in different locations. The field, therefore, is fundamentally what the particle is. There are constraints on what the values of the field can be in different locations, because the entire field representing a particle represents only a limited amount of information. That is where the word “quantum” comes from. The so-called collapse of the wave function, this view holds, is not a collapse at all. The wave function actually never goes away. It is just that a measurement device is also made up of particles with fields, and the interaction of the particle field being measured and the particle fields of the measuring device results in a reading of the particle being in a particular location. The field, however, is still present. This is the Western interpretation of quantum mechanics, although it is interesting to note that the more popular view among physicists worldwide is what I have called the Eastern interpretation.

There was one philosopher whose work spanned this East-West divide. The Austrian British thinker Ludwig Wittgenstein (1889–1951) studied the philosophy of language and knowledge and contemplated the question of what it is that we can really know. He pondered this subject while a soldier in World War I and took notes for what would be his only book published while he was alive, Tractatus Logico-Philosophicus. The work had an unusual structure, and it was only through the efforts of his former instructor, British mathematician and philosopher Bertrand Russell, that it found a publisher in 1921. It became the bible for a major school of philosophy known as logical positivism, which sought to define the limits of science. The book and the movement surrounding it were influential on Turing and the emergence of the theory of computation and linguistics.

Tractatus Logico-Philosophicus anticipates the insight that all knowledge is inherently hierarchical. The book itself is arranged in nested and numbered statements. For example, the first four statements in the book are:

1 The world is all that is the case.

1.1 The world is the totality of facts, not of things.

1.11 The world is determined by the facts, and by their being all the facts.

1.12 For the totality of facts determines what is the case, and also whatever is not the case.

Another significant statement in the Tractatus—and one that Turing would echo—is this:

4.0031 All philosophy is a critique of language.

Essentially both Tractatus Logico-Philosophicus and the logical positivism movement assert that physical reality exists separate from our perception of it, but that all we can know of that reality is what we perceive with our senses—which can be heightened through our tools—and the logical inferences we can make from these sensory impressions. Essentially Wittgenstein is attempting to describe the methods and goals of science. The final statement in the book is number 7, “What we cannot speak about we must pass over in silence.” The early Wittgenstein, accordingly, considers the discussion of consciousness as circular and tautological and therefore a waste of time.

The later Wittgenstein, however, completely rejected this approach and spent all of his philosophical attention talking about matters that he had earlier argued should be passed over in silence. His writings on this revised thinking were collected and published in 1953, two years after his death, in a book called Philosophical Investigations. He criticized his earlier ideas in the Tractatus, judging them to be circular and void of meaning, and came to the view that what he had advised that we not speak about was in fact all that was worth reflecting on. These writings heavily influenced the existentialists, making Wittgenstein the only figure in modern philosophy to be a major architect of two leading and contradictory schools of thought.

What is it that the later Wittgenstein thought was worth thinking and talking about? It was issues such as beauty and love, which he recognized exist imperfectly as ideas in the minds of men. However, he writes that such concepts do exist in a perfect and idealized realm, similar to the perfect “forms” that Plato wrote about in the Platonic dialogues, another work that illuminated apparently contradictory approaches to the nature of reality.

One thinker whose position I believe is mischaracterized is the French philosopher and mathematician René Descartes. His famous “I think, therefore I am” is generally interpreted to extol rational thought, in the sense that “I think, that is I can perform logical thought, therefore I am worthwhile.” Descartes is therefore considered the architect of the Western rational perspective.

Reading this statement in the context of his other writings, however, I get a different impression. Descartes was troubled by what is referred to as the “mind-body problem”: Namely, how does a conscious mind arise from the physical matter of the brain? From this perspective, it seems he was attempting to push rational skepticism to the breaking point, so in my view what his statement really means is, “I think, that is to say, a subjective experience is occurring, so therefore all we know for sure is that something—call it I—exists.” He could not be certain that the physical world exists, because all we have are our own individual sense impressions of it, which might be wrong or completely illusory. We do know, however, that the experiencer exists.

My religious upbringing was in a Unitarian church, where we studied all of the world’s religions. We would spend six months on, say, Buddhism and would go to Buddhist services, read their books, and have discussion groups with their leaders. Then we would switch to another religion, such as Judaism. The overriding theme was “many paths to the truth,” along with tolerance and transcendence. This last idea meant that resolving apparent contradictions between traditions does not require deciding that one is right and the other is wrong. The truth can be discovered only by finding an explanation that overrides—transcends—seeming differences, especially for fundamental questions of meaning and purpose.

This is how I resolve the Western-Eastern divide on consciousness and the physical world. In my view, both perspectives have to be true.

On the one hand, it is foolish to deny the physical world. Even if we do live in a simulation, as speculated by Swedish philosopher Nick Bostrom, reality is nonetheless a conceptual level that is real for us. If we accept the existence of the physical world and the evolution that has taken place in it, then we can see that conscious entities have evolved from it.

On the other hand, the Eastern perspective—that consciousness is fundamental and represents the only reality that is truly important—is also difficult to deny. Just consider the precious regard we give to conscious persons versus unconscious things. We consider the latter to have no intrinsic value except to the extent that they can influence the subjective experience of conscious persons. Even if we regard consciousness as an emergent property of a complex system, we cannot take the position that it is just another attribute (along with “digestion” and “lactation,” to quote John Searle). It represents what is truly important.

The word “spiritual” is often used to denote the things that are of ultimate significance. Many people don’t like to use such terminology from spiritual or religious traditions, because it implies sets of beliefs that they may not subscribe to. But if we strip away the mystical complexities of religious traditions and simply respect “spiritual” as implying something of profound meaning to humans, then the concept of consciousness fits the bill. It reflects the ultimate spiritual value. Indeed, “spirit” itself is often used to denote consciousness.

Evolution can then be viewed as a spiritual process in that it creates spiritual beings, that is, entities that are conscious. Evolution also moves toward greater complexity, greater knowledge, greater intelligence, greater beauty, greater creativity, and the ability to express more transcendent emotions, such as love. These are all descriptions that people have used for the concept of God, although God is described as having no limitations in these regards.

People often feel threatened by discussions that imply the possibility that a machine could be conscious, as they view considerations along these lines as a denigration of the spiritual value of conscious persons. But this reaction reflects a misunderstanding of the concept of a machine. Such critics are addressing the issue based on the machines they know today, and as impressive as they are becoming, I agree that contemporary examples of technology are not yet worthy of our respect as conscious beings. My prediction is that they will become indistinguishable from biological humans, whom we do regard as conscious beings, and will therefore share in the spiritual value we ascribe to consciousness. This is not a disparagement of people; rather, it is an elevation of our understanding of (some) future machines. We should probably adopt a different terminology for these entities, as they will be a different sort of machine.

Indeed, as we now look inside the brain and decode its mechanisms we discover methods and algorithms that we can not only understand but re-create—“the parts of a mill pushing on each other,” to paraphrase German mathematician and philosopher Gottfried Wilhelm Leibniz (1646–1716) when he wrote about the brain. Humans already constitute spiritual machines. Moreover, we will merge with the tools we are creating so closely that the distinction between human and machine will blur until the difference disappears. That process is already well under way, even if most of the machines that extend us are not yet inside our bodies and brains.

Free Will

A central aspect of consciousness is the ability to look ahead, the capability we call “foresight.” It is the ability to plan, and in social terms to outline a scenario of what is likely going to happen, or what might happen, in social interactions that have not yet taken place…. It is a system whereby we improve our chances of doing those things that will represent our own best interests…. I suggest that “free will” is our apparent ability to choose and act upon whichever of those seem most useful or appropriate, and our insistence upon the idea that such choices are our own.

Richard D. Alexander

Shall we say that the plant does not know what it is doing merely because it has no eyes, or ears, or brains? If we say that it acts mechanically, and mechanically only, shall we not be forced to admit that sundry other and apparently very deliberate actions are also mechanical? If it seems to us that the plant kills and eats a fly mechanically, may it not seem to the plant that a man must kill and eat a sheep mechanically?

Samuel Butler, 1871

Is the brain, which is notably double in structure, a double organ, “seeming parted, but yet a union in partition”?

Henry Maudsley8

Redundancy, as we have learned, is a key strategy deployed by the neocortex. But there is another level of redundancy in the brain, in that its left and right hemispheres, while not identical, are largely the same. Just as certain regions of the neocortex normally end up processing certain types of information, the hemispheres also specialize to some extent—for example, the left hemisphere typically is responsible for verbal language. But these assignments can also be rerouted, to the point that we can survive and function somewhat normally with only one half. American neuropsychology researchers Stella de Bode and Susan Curtiss reported on forty-nine children who had undergone a hemispherectomy (removal of half of their brain), an extreme operation that is performed on patients with a life-threatening seizure disorder that exists in only one hemisphere. Some who undergo the procedure are left with deficits, but those deficits are specific and the patients have reasonably normal personalities. Many of them thrive, and it is not apparent to observers that they only have half a brain. De Bode and Curtiss write about left-hemispherectomized children who “develop remarkably good language despite removal of the ‘language’ hemisphere.”9 They describe one such student who completed college, attended graduate school, and scored above average on IQ tests. Studies have shown minimal long-term effects on overall cognition, memory, personality, and sense of humor.10 In a 2007 study American researchers Shearwood McClelland and Robert Maxwell showed similar long-term positive results in adults.11

A ten-year-old German girl who was born with only half of her brain has also been reported to be quite normal. She even has almost perfect vision in one eye, whereas hemispherectomy patients lose part of their field of vision right after the operation.12 Scottish researcher Lars Muckli commented, “The brain has amazing plasticity but we were quite astonished to see just how well the single hemisphere of the brain in this girl has adapted to compensate for the missing half.”

While these observations certainly support the idea of plasticity in the neocortex, their more interesting implication is that we each appear to have two brains, not one, and we can do pretty well with either. If we lose one, we do lose the cortical patterns that are uniquely stored there, but each brain is in itself fairly complete. So does each hemisphere have its own consciousness? There is an argument to be made that such is the case.

Consider split-brain patients, who still have both of their brain hemispheres, but the channel between them has been cut. The corpus callosum is a bundle of about 250 million axons that connects the left and right cerebral hemispheres and enables them to communicate and coordinate with each other. Just as two people can communicate closely with each other and act as a single decision maker while remaining separate and whole individuals, the two brain hemispheres can function as a unit while remaining independent.

As the term implies, in split-brain patients the corpus callosum has been cut or damaged, leaving them effectively with two functional brains without a direct communication link between them. American psychology researcher Michael Gazzaniga (born in 1939) has conducted extensive experiments on what each hemisphere in split-brain patients is thinking.

The left hemisphere in a split-brain patient usually sees the right visual field, and vice versa. Gazzaniga and his colleagues showed a split-brain patient a picture of a chicken claw to the right visual field (which was seen by his left hemisphere) and a snowy scene to the left visual field (which was seen by his right hemisphere). He then showed a collection of pictures so that both hemispheres could see them. He asked the patient to choose one of the pictures that went well with the first picture. The patient’s left hand (controlled by his right hemisphere) pointed to a picture of a shovel, whereas his right hand pointed to a picture of a chicken. So far so good—the two hemispheres were acting independently and sensibly. “Why did you choose that?” Gazzaniga asked the patient, who answered verbally (controlled by his left-hemisphere speech center), “The chicken claw obviously goes with the chicken.” But then the patient looked down and, noticing his left hand pointing to the shovel, immediately explained this (again with his left-hemisphere-controlled speech center) as “and you need a shovel to clean out the chicken shed.”

This is a confabulation. The right hemisphere (which controls the left arm and hand) correctly points to the shovel, but because the left hemisphere (which controls the verbal answer) is unaware of the snow, it confabulates an explanation, yet is not aware that it is confabulating. It is taking responsibility for an action it had never decided on and never took, but thinks that it did.

This implies that each of the two hemispheres in a split-brain patient has its own consciousness. The hemispheres appear not to be aware that their body is effectively controlled by two brains, because they learn to coordinate with each other, and their decisions are sufficiently aligned and consistent that each thinks that the decisions of the other are its own.

Gazzaniga's experiment doesn't prove that a normal individual with a functioning corpus callosum has two conscious half-brains, but it is suggestive of that possibility. While the corpus callosum allows for effective collaboration between the two half-brains, it doesn't necessarily mean that they are not separate minds. Each one could be fooled into thinking it had made all the decisions, because the outcomes would be close enough to what it would have decided on its own, and after all, each does have a lot of influence on every decision (by collaborating with the other hemisphere through the corpus callosum). So to each of the two minds it would seem as if it were in control.

How would you test the conjecture that they are both conscious? One could assess them for neurological correlates of consciousness, which is precisely what Gazzaniga has done. His experiments show that each hemisphere is acting as an independent brain. Confabulation is not restricted to brain hemispheres; we each do it on a regular basis. Each hemisphere is about as intelligent as a human, so if we believe that a human brain is conscious, then we have to conclude that each hemisphere is independently conscious. We can assess the neurological correlates and we can conduct our own thought experiments (for example, considering that if two brain hemispheres without a functioning corpus callosum constitute two separate conscious minds, then the same would have to hold true for two hemispheres with a functioning connection between them), but any attempt at a more direct detection of consciousness in each hemisphere confronts us again with the lack of a scientific test for consciousness. But if we do allow that each hemisphere of the brain is conscious, then do we grant that the so-called unconscious activity in the neocortex (which constitutes the vast bulk of its activity) has an independent consciousness too? Or maybe it has more than one? Indeed, Marvin Minsky refers to the brain as a “society of mind.”13

In another split-brain experiment the researchers showed the word “bell” to the right brain and “music” to the left brain. The patient was asked what word he saw. The left-hemisphere-controlled speech center says “music.” The subject was then shown a group of pictures and asked to point to a picture most closely related to the word he was just shown. His right-hemisphere-controlled arm pointed to the bell. When he was asked why he pointed to the bell, his left-hemisphere-controlled speech center replied, “Well, music, the last time I heard any music was the bells banging outside here.” He provided this explanation even though there were other pictures to choose from that were much more closely related to music.

Again, this is a confabulation. The left hemisphere is explaining as if it were its own a decision that it never made and never carried out. It is not doing so to cover up for a friend (that is, its other hemisphere)—it genuinely thinks that the decision was its own.

These reactions and decisions can extend to emotional responses. They asked a teenage split-brain patient—so that both hemispheres heard—“Who is your favorite…” and then fed the word “girlfriend” just to the right hemisphere through the left ear. Gazzaniga reports that the subject blushed and acted embarrassed, an appropriate reaction for a teenager when asked about his girlfriend. But the left-hemisphere-controlled speech center reported that it had not heard any word and asked for clarification: “My favorite what?” When asked again to answer the question, this time in writing, the right-hemisphere-controlled left hand wrote out his girlfriend’s name.

Gazzaniga’s tests are not thought experiments but actual mind experiments. While they offer an interesting perspective on the issue of consciousness, they speak even more directly to the issue of free will. In each of these cases, one of the hemispheres believes that it has made a decision that it in fact never made. To what extent is that true for the decisions we make every day?

Consider the case of a ten-year-old female epileptic patient. Neurosurgeon Itzhak Fried was performing brain surgery while she was awake (which is feasible because there are no pain receptors in the brain).14 Whenever he stimulated a particular spot on her neocortex, she would laugh. At first the surgical team thought that they might be triggering some sort of laugh reflex, but they quickly realized that they were triggering the actual perception of humor. They had apparently found a point in her neocortex—there is obviously more than one—that recognizes the perception of humor. She was not just laughing—she actually found the situation funny, even though nothing had actually changed in the situation other than their having stimulated this point in her neocortex. When they asked her why she was laughing, she did not reply along the lines of, “Oh, no particular reason,” or “You just stimulated my brain,” but would immediately confabulate a reason. She would point to something in the room and try to explain why it was funny. “You guys are just so funny standing there” was a typical comment.

We are apparently very eager to explain and rationalize our actions, even when we didn't actually make the decisions that led to them. So just how responsible are we for our decisions? Consider these experiments by physiology professor Benjamin Libet (1916–2007) at the University of California at San Francisco. Libet had participants sit in front of a timer, with EEG electrodes attached to their scalps. He instructed them to do simple tasks such as pushing a button or moving a finger. The participants were asked to note the time on the timer when they "first become aware of the wish or urge to act." Tests indicated a margin of error of only 50 milliseconds on these assessments by the subjects. They also measured an average of about 200 milliseconds between the time when the subjects reported awareness of the urge to act and the actual act.15

The researchers also looked at the EEG signals coming from the subjects’ brains. Brain activity involved in initiating the action by the motor cortex (which is responsible for carrying out the action) actually occurred on average about 500 milliseconds prior to the performance of the task. That means that the motor cortex was preparing to carry out the task about a third of a second before the subject was even aware that she had made a decision to do so.
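The "about a third of a second" can be made explicit by laying out the average timings quoted above on a single timeline, with the act itself at time zero:

```python
# Average Libet timings from the text, in milliseconds relative to
# the moment of the act (t = 0). Values are the reported averages.
readiness_potential_ms = -500  # motor cortex begins preparing the action
reported_awareness_ms = -200   # subject reports awareness of the urge
act_ms = 0                     # the button is pushed / the finger moves

# Unconscious preparation precedes reported awareness by:
preparation_gap_ms = reported_awareness_ms - readiness_potential_ms
# 500 - 200 = 300 milliseconds, roughly a third of a second
```

The ordering on this timeline—preparation, then awareness, then action—is what makes the experiment so provocative: the reported 50-millisecond margin of error is far too small to explain away a 300-millisecond gap.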

The implications of the Libet experiments have been hotly debated. Libet himself concluded that our awareness of decision making appears to be an illusion, that “consciousness is out of the loop.” Philosopher Daniel Dennett commented, “The action is originally precipitated in some part of the brain, and off fly the signals to muscles, pausing en route to tell you, the conscious agent, what is going on (but like all good officials letting you, the bumbling president, maintain the illusion that you started it all).”16 At the same time Dennett has questioned the timings recorded by the experiment, basically arguing that subjects may not really be aware of when they become aware of the decision to act. One might wonder: If the subject is unaware of when she is aware of making a decision, then who is? But the point is actually well taken—as I discussed earlier, what we are conscious of is far from clear.

Indian American neuroscientist Vilayanur Subramanian “Rama” Ramachandran (born in 1951) explains the situation a little differently. Given that we have on the order of 30 billion neurons in the neocortex, there is always a lot going on there, and we are consciously aware of very little of it. Decisions, big and little, are constantly being processed by the neocortex, and proposed solutions bubble up to our conscious awareness. Rather than free will, Ramachandran suggests we should talk about “free won’t”—that is, the power to reject solutions proposed by the nonconscious parts of our neocortex.

Consider the analogy to a military campaign. Army officials prepare a recommendation to the president. Prior to receiving the president’s approval, they perform preparatory work that will enable the decision to be carried out. At a particular moment, the proposed decision is presented to the president, who approves it, and the rest of the mission is then undertaken. Since the “brain” represented by this analogy involves the unconscious processes of the neocortex (that is, the officials under the president) as well as its conscious processes (the president), we would see neural activity as well as actual actions taking place prior to the official decision’s being made. We can always get into debates in a particular situation as to how much leeway the officials under the president actually gave him or her to accept or reject a recommendation, and certainly American presidents have done both. But it should not surprise us that mental activity, even in the motor cortex, would start before we were aware that there was a decision to be made.

What the Libet experiments do underscore is that there is a lot of activity in our brains underlying our decisions that is not conscious. We already knew that most of what goes on in the neocortex is not conscious; it should not be surprising, therefore, that our actions and decisions stem from both unconscious and conscious activity. Is this distinction important? If our decisions arise from both, should it matter if we sort out the conscious parts from the unconscious? Is it not the case that both aspects represent our brain? Are we not ultimately responsible for everything that goes on in our brains? “Yes, I shot the victim, but I’m not responsible because I wasn’t paying attention” is probably a weak defense. Even though there are some narrow legal grounds on which a person may not be held responsible for his decisions, we are generally held accountable for all of the choices we make.

The observations and experiments I have cited above constitute thought experiments on the issue of free will, a subject that, like the topic of consciousness, has been debated since Plato. The term “free will” itself dates back to the thirteenth century, but what exactly does it mean?

The Merriam-Webster dictionary defines it as the “freedom of humans to make choices that are not determined by prior causes or by divine intervention.” You will notice that this definition is hopelessly circular: “Free will is freedom….” Setting aside the idea of divine intervention’s standing in opposition to free will, there is one useful element in this definition, which is the idea of a decision’s “not [being] determined by prior causes.” I’ll come back to that momentarily.

The Stanford Encyclopedia of Philosophy states that free will is the “capacity of rational agents to choose a course of action from among various alternatives.” By this definition, a simple computer is capable of free will, so it is less helpful than the dictionary definition.

Wikipedia is actually a bit better. It defines free will as “the ability of agents to make choices free from certain kinds of constraints…. The constraint of dominant concern has been…determinism.” Again, it uses the circular word “free” in defining free will, but it does articulate what has been regarded as the principal enemy of free will: determinism. In that respect the Merriam-Webster definition above is actually similar in its reference to decisions that “are not determined by prior causes.”

So what do we mean by determinism? If I put “2 + 2” into a calculator and it displays “4,” can I say that the calculator displayed its free will by deciding to display that “4”? No one would accept that as a demonstration of free will, because the “decision” was predetermined by the internal mechanisms of the calculator and the input. If I put in a more complex calculation, we still come to the same conclusion with regard to its lack of free will.

How about Watson when it answers a Jeopardy! query? Although its deliberations are far more complex than those of the calculator, very few if any observers would ascribe free will to its decisions. No one human knows exactly how all of its programs work, but we can identify a group of people who collectively can describe all of its methods. More important, its output is determined by (1) all of its programs at the moment that the query is posed, (2) the query itself, (3) the state of its internal parameters that influence its decisions, and (4) its trillions of bytes of knowledge bases, including encyclopedias. Based on these four categories of information, its output is determined. We might speculate that presenting the same query would always get the same response, but Watson is programmed to learn from its experience, so there is the possibility that subsequent answers would be different. However, that does not contradict this analysis; rather, it just constitutes a change in item 3, the parameters that control its decisions.

So how exactly does a human differ from Watson, such that we ascribe free will to the human but not to the computer program? We can identify several factors. Even though Watson is a better Jeopardy! player than most if not all humans, it is nonetheless not nearly as complex as a human neocortex. Watson does possess a lot of knowledge, and it does use hierarchical methods, but the complexity of its hierarchical thinking is still considerably less than that of a human. So is the difference simply one of the scale of complexity of its hierarchical thinking? There is an argument to be made that the issue does come down to this. In my discussion of the issue of consciousness I noted that my own leap of faith is that I would consider a computer that passed a valid Turing test to be conscious. The best chatbots are not able to do that today (although they are steadily improving), so my conclusion with regard to consciousness is a matter of the level of performance of the entity. Perhaps the same is true of my ascribing free will to it.

Consciousness is indeed one philosophical difference between human brains and contemporary software programs. We consider human brains to be conscious, whereas we do not—yet—attribute that to software programs. Is this the factor we are looking for that underlies free will?

A simple thought experiment suggests that consciousness is indeed a vital part of free will. Consider a situation in which someone performs an action with no awareness that she is doing it—it is carried out entirely by nonconscious activity in that person’s brain. Would we regard this as a display of free will? Most people would answer no. If the action was harmful, we would probably still hold that person responsible but look for some recent conscious acts that may have caused that person to perform actions without conscious awareness, such as taking one drink too many, or just failing to train herself adequately to consciously consider her decisions before she acted on them.

According to some commentators, the Libet experiments argued against free will by highlighting how much of our decision making is not conscious. Since there is a reasonable consensus among philosophers that free will does imply conscious decision making, consciousness appears to be one prerequisite for free will. However, to many observers, consciousness is a necessary but not sufficient condition. If our decisions—conscious or otherwise—are predetermined before we make them, how can we say that our decisions are free? This position, which holds that free will and determinism are not compatible, is known as incompatibilism. For example, American philosopher Carl Ginet (born in 1932) argues that if events in the past, present, and future are determined, then we can be considered to have no control over them or their consequences. Our apparent decisions and actions are simply part of this predetermined sequence. To Ginet, this rules out free will.

Not everyone regards determinism as being incompatible with the concept of free will, however. The compatibilists argue, essentially, that you’re free to decide what you want even though what you decide is or may be determined. Daniel Dennett, for example, argues that while the future may be determined from the state of the present, the reality is that the world is so intricately complex that we cannot possibly know what the future will bring. We can identify what he refers to as “expectations,” and we are indeed free to perform acts that differ from these expectations. We should consider how our decisions and actions compare to these expectations, not to a theoretically determined future that we cannot in fact know. That, Dennett argues, is sufficient for free will.

Gazzaniga also articulates a compatibilist position: “We are personally responsible agents and are to be held accountable for our actions, even though we live in a determined world.”17 A cynic might interpret this view as: You have no control over your actions, but we’ll blame you anyway.

Some thinkers dismiss the idea of free will as an illusion. Scottish philosopher David Hume (1711–1776) described it as simply a “verbal” matter characterized by “a false sensation or seeming experience.”18 German philosopher Arthur Schopenhauer (1788–1860) wrote that “everyone believes himself a priori to be perfectly free, even in his individual actions, and thinks that at every moment he can commence another manner of life…. But a posteriori, through experience, he finds to his astonishment that he is not free, but subjected to necessity, that in spite of all his resolutions and reflections he does not change his conduct, and that from the beginning of his life to the end of it, he must carry out the very character which he himself condemns.”19

I would add several points here. The concept of free will—and responsibility, which is a closely aligned idea—is useful, and indeed vital, to maintaining social order, whether or not free will actually exists. Just as consciousness clearly exists as a meme, so too does free will. Attempts to prove its existence, or even to define it, may become hopelessly circular, but the reality is that almost everyone believes in the idea. Very substantial portions of our higher-level neocortex are devoted to the concept that we make free choices and are responsible for our actions. Whether in a strict philosophical sense that is true or even possible, society would be far worse off if we did not have such beliefs.

Furthermore, the world is not necessarily determined. I discussed above two perspectives on quantum mechanics, which differ with respect to the relationship of quantum fields to an observer. A popular interpretation of the observer-based perspective provides a role for consciousness: Particles do not resolve their quantum ambiguity until observed by a conscious observer. There is another split in the philosophy of quantum events that has a bearing on our discussion of free will, one that revolves around the question: Are quantum events determined or random?

The most common interpretation of a quantum event is that when the wave function constituting a particle “collapses,” the particle’s location becomes specific. Over a great many such events, there will be a predictable distribution (which is why the wave function is considered to be a probability distribution), but the resolution for each such particle undergoing a collapse of its wave function is random. The opposing interpretation is deterministic: specifically, that there is a hidden variable that we are unable to detect separately, but whose value determines the particle’s position. The value or phase of the hidden variable at the moment of the wave function collapse determines the position of the particle. Most quantum physicists seem to favor the idea of a random resolution according to the probability field, but the equations for quantum mechanics do allow for the existence of such a hidden variable.

Thus the world may not be determined after all. According to the probability wave interpretation of quantum mechanics, there is a continual source of uncertainty at the most basic level of reality. However, this observation does not necessarily resolve the concerns of the incompatibilists. It is true that under this interpretation of quantum mechanics, the world is not determined, but our concept of free will extends beyond decisions and actions that are merely random. Most incompatibilists would find the concept of free will to also be incompatible with our decisions’ being essentially accidental. Free will seems to imply purposeful decision making.

Dr. Stephen Wolfram proposes a way to resolve the dilemma. His book A New Kind of Science (2002) presents a comprehensive view of the idea of cellular automata and their role in every facet of our lives. A cellular automaton is a mechanism in which the value of information cells is continually recomputed as a function of the cells near it. John von Neumann created a theoretical self-replicating machine called a universal constructor that was perhaps the first cellular automaton.

Dr. Wolfram illustrates his thesis with the simplest possible cellular automaton: a group of cells in a one-dimensional line. At each point in time, each cell can have one of two values: black or white. The value of each cell is recomputed for each cycle. The value of a cell for the next cycle is a function of its current value as well as the value of its two adjacent neighbors. Each cellular automaton is characterized by a rule that determines how we compute whether a cell is black or white in the next cycle.

Consider the example of what Dr. Wolfram calls rule 222.



The eight possible combinations of value for the cell being recomputed and its left and right neighbors are shown in the top row. Its new value is shown in the bottom row. So, for example, if the cell is black and its two neighbors are also black, then the cell will remain black in the next generation (see the leftmost subrule of rule 222). If the cell is white, its left neighbor is white, and its right neighbor is black, then it will be changed to black in the next generation (see the subrule of rule 222 that is second from the right).

The universe for this simple cellular automaton is just one row of cells. If we start with just one black cell in the middle and show the evolution of the cells over multiple generations (where each row as we move down represents a new generation of values), the results of rule 222 look like this:



An automaton is based on a rule, and a rule defines whether the cell will be black or white based on which of the eight possible patterns exist in the current generation. Thus there are 2⁸ = 256 possible rules. Dr. Wolfram listed all 256 possible such automata and assigned each a Wolfram code from 0 to 255. Interestingly, these 256 theoretical machines have very different properties. The automata in what Dr. Wolfram calls class I, such as rule 222, create very predictable patterns. If I were to ask what the value of the middle cell was after a trillion trillion iterations of rule 222, you could answer easily: black.
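This numbering scheme is easy to express in code. Here is a minimal sketch in Python (the function name `step` is my own; only the 0-to-255 Wolfram coding comes from the text): bit k of the rule number gives the new cell value for the neighborhood whose three bits, read as left–self–right, form the integer k.

```python
def step(cells, rule):
    """Advance an elementary cellular automaton by one generation.

    cells: list of 0 (white) / 1 (black); cells beyond the edges are
    treated as white.  rule: a Wolfram code from 0 to 255; bit k of
    the code gives the new value for the neighborhood whose three
    bits (left, self, right) encode the integer k.
    """
    out = []
    for i in range(len(cells)):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < len(cells) - 1 else 0
        k = (left << 2) | (cells[i] << 1) | right
        out.append((rule >> k) & 1)
    return out

# Rule 222 from a single black cell: the black region grows by one
# cell on each side every generation, a predictable class I pattern.
row = [0, 0, 0, 1, 0, 0, 0]
row = step(row, 222)   # -> [0, 0, 1, 1, 1, 0, 0]
```

Running `step` repeatedly reproduces the expanding black triangle of rule 222; swapping in a different Wolfram code changes only the lookup, not the machinery.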

Much more interesting, however, are the class IV automata, illustrated by rule 110.



Multiple generations of this automaton look like this:


The interesting thing about the rule 110 automaton and class IV automata in general is that the results are completely unpredictable. The results pass the strictest mathematical tests for randomness, yet they do not simply generate noise: There are repeating patterns, but they repeat in odd and unpredictable ways. If I were to ask you what the value of a particular cell was after a trillion trillion iterations, there would be no way to answer that question without actually running this machine through that many generations. The solution is clearly determined, because this is a very simple deterministic machine, but it is completely unpredictable without actually running the machine.
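To get a feel for the irregular, never-quite-repeating triangles that rule 110 produces, a few generations can be rendered as text. This is a quick self-contained sketch (names such as `next_gen` are my own), with “#” for black and “.” for white:

```python
# Print a few generations of rule 110 (class IV), one line per generation.
RULE = 110  # binary 01101110: bit k is the new value for neighborhood k

def next_gen(cells):
    # Pad the edges with white cells, then look up each 3-bit
    # neighborhood (left, self, right) in the rule's bits.
    padded = [0] + cells + [0]
    return [(RULE >> ((padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

cells = [0] * 15 + [1]  # start with one black cell at the right edge
for _ in range(10):
    print("".join("#" if c else "." for c in cells))
    cells = next_gen(cells)
```

Even in this small window the structure sprouting leftward from the single black cell is visibly irregular: the only way to know what row one trillion looks like is to compute all the rows before it.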

Dr. Wolfram’s primary thesis is that the world is one big class IV cellular automaton. The reason that his book is titled A New Kind of Science is that this theory contrasts with most other scientific laws. If there is a satellite orbiting Earth, we can predict where it will be five years from now without simulating each moment of the intervening process: we simply apply the relevant laws of gravity to solve for its position at points in time far in the future. But the future state of a class IV cellular automaton cannot be predicted without simulating every step along the way. If the universe is a giant cellular automaton, as Dr. Wolfram postulates, there would be no computer big enough—since every computer would be a subset of the universe—to run such a simulation. Therefore the future state of the universe is completely unknowable even though it is deterministic.

Thus even though our decisions are determined (because our bodies and brains are part of a deterministic universe), they are nonetheless inherently unpredictable because we live in (and are part of) a class IV automaton. We cannot predict the future of a class IV automaton except to let the future unfold. For Dr. Wolfram, this is sufficient to allow for free will.

We don’t have to look to the universe to see future events that are determined yet unpredictable. None of the scientists who have worked on Watson can predict what it will do, because the program is just too complex and varied, and its performance is based on knowledge that is far too extensive for any human to master. If we believe that humans exhibit free will, then it follows that we have to allow that future versions of Watson or Watson-like machines can exhibit it also.

My own leap of faith is that I believe that humans have free will, and while I act as if that is the case, I am hard pressed to find examples among my own decisions that illustrate that. Consider the decision to write this book—I never made that decision. Rather, the idea of the book decided that for me. In general, I find myself captive to ideas that seem to implant themselves in my neocortex and take over. How about the decision to get married, which I made (in collaboration with one other person) thirty-six years ago? At the time, I had been following the usual program of being attracted to—and pursuing—a pretty girl. I then fell in love. Where is the free will in that?

But what about the little decisions I make every day—for example, the specific words I choose to write in my book? I start with a blank virtual sheet of paper. No one is telling me what to do. There is no editor looking over my shoulder. My choices are entirely up to me. I am free—totally free—to write whatever I…

Uh, grok? Okay, I did it—I finally applied my free will. I was going to write the word “want,” but I made a free decision to write something totally unexpected instead. This is perhaps the first time I’ve succeeded in exercising pure free will.

Or not.

It should be apparent that that was a display not of free will, but rather of my trying to illustrate a point (and perhaps a weak sense of humor).

Although I share Descartes’ confidence that I am conscious, I’m not so sure about free will. It is difficult to escape Schopenhauer’s conclusion that “you can do what you will, but in any given moment of your life you can will only one definite thing and absolutely nothing other than that one thing.”20 Nonetheless I will continue to act as if I have free will and to believe in it, so long as I don’t have to explain why.

Identity

A philosopher once had the following dream.

First Aristotle appeared, and the philosopher said to him, “Could you give me a fifteen-minute capsule sketch of your entire philosophy?” To the philosopher’s surprise, Aristotle gave him an excellent exposition in which he compressed an enormous amount of material into a mere fifteen minutes. But then the philosopher raised a certain objection which Aristotle couldn’t answer. Confounded, Aristotle disappeared.

Then Plato appeared. The same thing happened again, and the philosopher’s objection to Plato was the same as his objection to Aristotle. Plato also couldn’t answer it and disappeared.

Then all the famous philosophers of history appeared one by one and our philosopher refuted every one with the same objection.

After the last philosopher vanished, our philosopher said to himself, “I know I’m asleep and dreaming all this. Yet I’ve found a universal refutation for all philosophical systems! Tomorrow when I wake up, I will probably have forgotten it, and the world will really miss something!” With an iron effort, the philosopher forced himself to wake up, rush over to his desk, and write down his universal refutation. Then he jumped back into bed with a sigh of relief.

The next morning when he awoke, he went over to the desk to see what he had written. It was, “That’s what you say.”

Raymond Smullyan, as quoted by David Chalmers21

What I wonder about ever more than whether or not I am conscious or exercise free will is why I happen to be conscious of the experiences and decisions of this one particular person who writes books, enjoys hiking and biking, takes nutritional supplements, and so on. An obvious answer would be, “Because that’s who you are.”

That exchange is probably no more tautological than my answers above to questions about consciousness and free will. But actually I do have a better answer for why my consciousness is associated with this particular person: It is because that is who I created myself to be.

A common aphorism is, “You are what you eat.” It is even more true to say, “You are what you think.” As we have discussed, all of the hierarchical structures in my neocortex that define my personality, skills, and knowledge are the result of my own thoughts and experiences. The people I choose to interact with and the ideas and projects I choose to engage in are all primary determinants of who I become. For that matter, what I eat also reflects the decisions made by my neocortex. Accepting the positive side of the free will duality for the moment, it is my own decisions that result in who I am.

Regardless of how we came to be who we are, each of us has the desire for our identity to persist. If you didn’t have the will to survive, you wouldn’t be here reading this book. Every creature has that goal—it is the principal determinant of evolution. The issue of identity is perhaps even harder to define than consciousness or free will, but is arguably more important. After all, we need to know what we are if we seek to preserve our existence.

Consider this thought experiment: You are in the future with technologies more advanced than today’s. While you are sleeping, some group scans your brain and picks up every salient detail. Perhaps they do this with blood cell–sized scanning machines traveling in the capillaries of your brain or with some other suitable noninvasive technology, but they have all of the information about your brain at a particular point in time. They also pick up and record any bodily details that might reflect on your state of mind, such as the endocrine system. They instantiate this “mind file” in a nonbiological body that looks and moves like you and has the requisite subtlety and suppleness to pass for you. In the morning you are informed about this transfer and you watch (perhaps without being noticed) your mind clone, whom we’ll call You 2. You 2 is talking about his or her life as if s/he were you, and relating how s/he discovered that very morning that s/he had been given a much more durable new version 2.0 body. “Hey, I kind of like this new body!” s/he exclaims.

The first question to consider is: Is You 2 conscious? Well, s/he certainly seems to be. S/he passes the test I articulated earlier, in that s/he has the subtle cues of being a feeling, conscious person. If you are conscious, then so too is You 2.

So if you were to, uh, disappear, no one would notice. You 2 would go around claiming to be you. All of your friends and loved ones would be content with the situation and perhaps pleased that you now have a more durable body and mental substrate than you used to have. Perhaps your more philosophically minded friends would express concerns, but for the most part, everybody would be happy, including you, or at least the person who is convincingly claiming to be you.

So we don’t need your old body and brain anymore, right? Okay if we dispose of it?

You’re probably not going to go along with this. I indicated that the scan was noninvasive, so you are still around and still conscious. Moreover your sense of identity is still with you, not with You 2, even though You 2 thinks s/he is a continuation of you. You 2 might not even be aware that you exist or ever existed. In fact you would not be aware of the existence of You 2 either, if we hadn’t told you about it.

Our conclusion? You 2 is conscious but is a different person than you—You 2 has a different identity. S/he is extremely similar, much more so than a mere genetic clone, because s/he also shares all of your neocortical patterns and connections. Or I should say s/he shared those patterns at the moment s/he was created. At that point, the two of you started to go your own ways, neocortically speaking. You are still around. You are not having the same experiences as You 2. Bottom line: You 2 is not you.

Okay, so far so good. Now consider another thought experiment—one that is, I believe, more realistic in terms of what the future will bring. You undergo a procedure to replace a very small part of your brain with a nonbiological unit. You’re convinced that it’s safe, and there are reports of various benefits.

This is not so far-fetched, as it is done routinely for people with neurological and sensory impairments, such as the neural implant for Parkinson’s disease and cochlear implants for the deaf. In these cases the computerized device is placed inside the body but outside the brain yet connected into the brain (or in the case of the cochlear implants, to the auditory nerve). In my view the fact that the actual computer is physically placed outside the actual brain is not philosophically significant: We are effectively augmenting the brain and replacing with a computerized device those of its functions that no longer work properly. In the 2030s, when intelligent computerized devices will be the size of blood cells (and keep in mind that white blood cells are sufficiently intelligent to recognize and combat pathogens), we will introduce them noninvasively, no surgery required.

Returning to our future scenario, you have the procedure, and as promised, it works just fine—certain of your capabilities have improved. (You have better memory, perhaps.) So are you still you? Your friends certainly think so. You think so. There is no good argument that you’re suddenly a different person. Obviously, you underwent the procedure in order to effect a change in something, but you are still the same you. Your identity hasn’t changed. Someone else’s consciousness didn’t suddenly take over your body.

Okay, so, encouraged by these results, you now decide to have another procedure, this time involving a different region of the brain. The result is the same: You experience some improvement in capability, but you’re still you.

It should be apparent where I am going with this. You keep opting for additional procedures, your confidence in the process only increasing, until eventually you’ve changed every part of your brain. Each time the procedure was carefully done to preserve all of your neocortical patterns and connections so that you have not lost any of your personality, skills, or memories. There was never a you and a You 2; there was only you. No one, including you, ever notices you ceasing to exist. Indeed—there you are.

Our conclusion: You still exist. There’s no dilemma here. Everything is fine.

Except for this: You, after the gradual replacement process, are entirely equivalent to You 2 in the prior thought experiment (which I will call the scan-and-instantiate scenario). You, after the gradual replacement scenario, have all of the neocortical patterns and connections that you had originally, only in a nonbiological substrate, which is also true of You 2 in the scan-and-instantiate scenario. You, after the gradual replacement scenario, have some additional capabilities and greater durability than you did before the process, but this is likewise true of You 2 in the scan-and-instantiate process.

But we concluded that You 2 is not you. And if you, after the gradual replacement process, are entirely equivalent to You 2 after the scan-and-instantiate process, then you after the gradual replacement process must also not be you.

That, however, contradicts our earlier conclusion. The gradual replacement process consists of multiple steps. Each of those steps appeared to preserve identity, just as we conclude today that a Parkinson’s patient has the same identity after having had a neural implant installed.22

It is just this sort of philosophical dilemma that leads some people to conclude that these replacement scenarios will never happen (even though they are already taking place). But consider this: We naturally undergo a gradual replacement process throughout our lives. Most of the cells in our body are continually being replaced. (You just replaced 100 million of them in the course of reading the last sentence.) Cells in the inner lining of the small intestine turn over in about a week, as does the stomach’s protective lining. The life span of white blood cells ranges from a few days to a few months, depending on the type. Platelets last about nine days.

Neurons persist, but their organelles and their constituent molecules turn over within a month.23 The half-life of a neuron microtubule is about ten minutes; the actin filaments in the dendrites last about forty seconds; the proteins that provide energy to the synapses are replaced every hour; the NMDA receptors in synapses are relatively long-lived at five days.

So you are completely replaced in a matter of months, which is comparable to the gradual replacement scenario I describe above. Are you the same person you were a few months ago? Certainly there are some differences. Perhaps you learned a few things. But you assume that your identity persists, that you are not continually destroyed and re-created.

Consider a river, like the one that flows past my office. As I look out now at what people call the Charles River, is it the same river that I saw yesterday? Let’s first reflect on what a river is. The dictionary defines it as “a large natural stream of flowing water.” By that definition, the river I’m looking at is a completely different one than it was yesterday. Every one of its water molecules has changed, a process that happens very quickly. The Greek philosopher Heraclitus made this point some twenty-five centuries ago with his observation, recorded by Diogenes Laertius in the third century AD, that “you cannot step into the same river twice.”

But that is not how we generally regard rivers. People like to look at them because they are symbols of continuity and stability. By the common view, the Charles River that I looked at yesterday is the same river I see today. Our lives are much the same. Fundamentally we are not the stuff that makes up our bodies and brains. These particles essentially flow through us in the same way that water molecules flow through a river. We are a pattern that changes slowly but has stability and continuity, even though the stuff constituting the pattern changes quickly.

The gradual introduction of nonbiological systems into our bodies and brains will be just another example of the continual turnover of parts that compose us. It will not alter the continuity of our identity any more than the natural replacement of our biological cells does. We have already largely outsourced our historical, intellectual, social, and personal memories to our devices and the cloud. The devices we interact with to access these memories may not yet be inside our bodies and brains, but as they become smaller and smaller (and we are shrinking technology by a factor of about a hundred in 3-D volume per decade), they will make their way there. In any event, it will be a useful place to put them—we won’t lose them that way. If people do opt out of placing microscopic devices inside their bodies, that will be fine, as there will be other ways to access the pervasive cloud intelligence.

But we come back to the dilemma I introduced earlier. You, after a period of gradual replacement, are equivalent to You 2 in the scan-and-instantiate scenario, but we decided that You 2 in that scenario does not have the same identity as you. So where does that leave us?

It leaves us with an appreciation of a capability that nonbiological systems have that biological systems do not: the ability to be copied, backed up, and re-created. We do that routinely with our devices. When we get a new smartphone, we copy over all of our files, so it has much the same personality, skills, and memories that the old smartphone did. Perhaps it also has some new capabilities, but the contents of the old phone are still with us. Similarly, a program such as Watson is certainly backed up. If the Watson hardware were destroyed tomorrow, Watson would easily be re-created from its backup files stored in the cloud.

This represents a capability in the nonbiological world that does not exist in the biological world. It is an advantage, not a limitation, which is one reason why we are so eager today to continue uploading our memories to the cloud. We will certainly continue in this direction, as nonbiological systems attain more and more of the capabilities of our biological brains.

My resolution of the dilemma is this: It is not true that You 2 is not you—it is you. It is just that there are now two of you. That’s not so bad—if you think you are a good thing, then two of you is even better.

What I believe will actually happen is that we will continue on the path of the gradual replacement and augmentation scenario until ultimately most of our thinking will be in the cloud. My leap of faith on identity is that identity is preserved through continuity of the pattern of information that makes us us. Continuity does allow for continual change, so whereas I am somewhat different than I was yesterday, I nonetheless have the same identity. However, the continuity of the pattern that constitutes my identity is not substrate-dependent. Biological substrates are wonderful—they have gotten us very far—but we are creating a more capable and durable substrate for very good reasons.
