IV Mind as Program

13 Daniel C. Dennett Where am I?[17]

Now that I’ve won my suit under the Freedom of Information Act, I am at liberty to reveal for the first time a curious episode in my life that may be of interest not only to those engaged in research in the philosophy of mind, artificial intelligence, and neuroscience but also to the general public.

Several years ago I was approached by Pentagon officials who asked me to volunteer for a highly dangerous and secret mission. In collaboration with NASA and Howard Hughes, the Department of Defense was spending billions to develop a Supersonic Tunneling Underground Device, or STUD. It was supposed to tunnel through the earth’s core at great speed and deliver a specially designed atomic warhead “right up the Red’s missile silos,” as one of the Pentagon brass put it.

The problem was that in an early test they had succeeded in lodging a warhead about a mile deep under Tulsa, Oklahoma, and they wanted me to retrieve it for them. “Why me?” I asked. Well, the mission involved some pioneering applications of current brain research, and they had heard of my interest in brains and of course my Faustian curiosity and great courage and so forth.... Well, how could I refuse? The difficulty that brought the Pentagon to my door was that the device I’d been asked to recover was fiercely radioactive, in a new way. According to monitoring instruments, something about the nature of the device and its complex interactions with pockets of material deep in the earth had produced radiation that could cause severe abnormalities in certain tissues of the brain. No way had been found to shield the brain from these deadly rays, which were apparently harmless to other tissues and organs of the body. So it had been decided that the person sent to recover the device should leave his brain behind. It would be kept in a safe place where it could execute its normal control functions by elaborate radio links. Would I submit to a surgical procedure that would completely remove my brain, which would then be placed in a life-support system at the Manned Spacecraft Center in Houston? Each input and output pathway, as it was severed, would be restored by a pair of microminiaturized radio transceivers, one attached precisely to the brain, the other to the nerve stumps in the empty cranium. No information would be lost, all the connectivity would be preserved. At first I was a bit reluctant. Would it really work? The Houston brain surgeons encouraged me. “Think of it,” they said, “as a mere stretching of the nerves. If your brain were just moved over an inch in your skull, that would not alter or impair your mind. We’re simply going to make the nerves indefinitely elastic by splicing radio links into them.”

I was shown around the life-support lab in Houston and saw the sparkling new vat in which my brain would be placed, were I to agree. I met the large and brilliant support team of neurologists, hematologists, biophysicists, and electrical engineers, and after several days of discussions and demonstrations, I agreed to give it a try. I was subjected to an enormous array of blood tests, brain scans, experiments, interviews, and the like. They took down my autobiography at great length, recorded tedious lists of my beliefs, hopes, fears, and tastes. They even listed my favorite stereo recordings and gave me a crash session of psychoanalysis. The day for surgery arrived at last and of course I was anesthetized and remember nothing of the operation itself. When I came out of anesthesia, I opened my eyes, looked around, and asked the inevitable, the traditional, the lamentably hackneyed postoperative question: “Where am I?” The nurse smiled down at me. “You’re in Houston,” she said, and I reflected that this still had a good chance of being the truth one way or another. She handed me a mirror. Sure enough, there were the tiny antennae poking up through their titanium ports cemented into my skull.

“I gather the operation was a success,” I said. “I want to go see my brain.” They led me (I was a bit dizzy and unsteady) down a long corridor and into the life-support lab. A cheer went up from the assembled support team, and I responded with what I hoped was a jaunty salute. Still feeling lightheaded, I was helped over to the life-support vat. I peered through the glass. There, floating in what looked like ginger ale, was undeniably a human brain, though it was almost covered with printed circuit chips, plastic tubules, electrodes, and other paraphernalia. “Is that mine?” I asked. “Hit the output transmitter switch there on the side of the vat and see for yourself,” the project director replied. I moved the switch to OFF, and immediately slumped, groggy and nauseated, into the arms of the technicians, one of whom kindly restored the switch to its ON position. While I recovered my equilibrium and composure, I thought to myself: “Well, here I am sitting on a folding chair, staring through a piece of plate glass at my own brain.... But wait,” I said to myself, “shouldn’t I have thought, ‘Here I am, suspended in a bubbling fluid, being stared at by my own eyes’?” I tried to think this latter thought. I tried to project it into the tank, offering it hopefully to my brain, but I failed to carry off the exercise with any conviction. I tried again. “Here am I, Daniel Dennett, suspended in a bubbling fluid, being stared at by my own eyes.” No, it just didn’t work. Most puzzling and confusing. Being a philosopher of firm physicalist conviction, I believed unswervingly that the tokening of my thoughts was occurring somewhere in my brain: yet, when I thought “Here I am,” where the thought occurred to me was here, outside the vat where I, Dennett, was standing staring at my brain.

I tried and tried to think myself into the vat, but to no avail. I tried to build up to the task by doing mental exercises. I thought to myself “The sun is shining over there,” five times in rapid succession, each time mentally ostending a different place: in order, the sunlit corner of the lab, the visible front lawn of the hospital, Houston, Mars, and Jupiter. I found I had little difficulty in getting my “there’s” to hop all over the celestial map with their proper references. I could loft a “there” in an instant through the farthest reaches of space, and then aim the next “there” with pinpoint accuracy at the upper left quadrant of a freckle on my arm. Why was I having such trouble with “here”? “Here in Houston” worked well enough, and so did “here in the lab,” and even “here in this part of the lab,” but “here in the vat” always seemed merely an unmeant mental mouthing. I tried closing my eyes while thinking it. This seemed to help, but still I couldn’t manage to pull it off, except perhaps for a fleeting instant. I couldn’t be sure. The discovery that I couldn’t be sure was also unsettling. How did I know where I meant by “here” when I thought “here”? Could I think I meant one place when in fact I meant another? I didn’t see how that could be admitted without untying the few bonds of intimacy between a person and his own mental life that had survived the onslaught of the brain scientists and philosophers, the physicalists and behaviorists. Perhaps I was incorrigible about where I meant when I said “here.” But in my present circumstances it seemed that either I was doomed by sheer force of mental habit to thinking systematically false indexical thoughts, or where a person is (and hence where his thoughts are tokened for purposes of semantic analysis) is not necessarily where his brain, the physical seat of his soul, resides. Nagged by confusion, I attempted to orient myself by falling back on a favorite philosopher’s ploy. I began naming things.

“Yorick,” I said aloud to my brain, “you are my brain. The rest of my body, seated in this chair, I dub ‘Hamlet.’ ” So here we all are: Yorick’s my brain, Hamlet’s my body, and I am Dennett. Now, where am I? And when I think “where am I?” where’s that thought tokened? Is it tokened in my brain, lounging about in the vat, or right here between my ears where it seems to be tokened? Or nowhere? Its temporal coordinates give me no trouble; must it not have spatial coordinates as well? I began making a list of the alternatives.

1. Where Hamlet goes, there goes Dennett. This principle was easily refuted by appeal to the familiar brain-transplant thought experiments so enjoyed by philosophers. If Tom and Dick switch brains, Tom is the fellow with Dick’s former body: just ask him; he’ll claim to be Tom, and tell you the most intimate details of Tom’s autobiography. It was clear enough, then, that my current body and I could part company, but not likely that I could be separated from my brain. The rule of thumb that emerged so plainly from the thought experiments was that in a brain transplant operation, one wanted to be the donor, not the recipient. Better to call such an operation a body transplant, in fact. So perhaps the truth was,

2. Where Yorick goes, there goes Dennett. This was not at all appealing, however. How could I be in the vat and not about to go anywhere, when I was so obviously outside the vat looking in and beginning to make guilty plans to return to my room for a substantial lunch? This begged the question, I realized, but it still seemed to be getting at something important. Casting about for some support for my intuition, I hit upon a legalistic sort of argument that might have appealed to Locke.

Suppose, I argued to myself, I were now to fly to California, rob a bank, and be apprehended. In which state would I be tried: in California, where the robbery took place, or in Texas, where the brains of the outfit were located? Would I be a California felon with an out-of-state brain, or a Texas felon remotely controlling an accomplice of sorts in California? It seemed possible that I might beat such a rap just on the undecidability of that jurisdictional question, though perhaps it would be deemed an interstate, and hence Federal, offense. In any event, suppose I were convicted. Was it likely that California would be satisfied to throw Hamlet into the brig, knowing that Yorick was living the good life and luxuriously taking the waters in Texas? Would Texas incarcerate Yorick, leaving Hamlet free to take the next boat to Rio? This alternative appealed to me. Barring capital punishment or other cruel and unusual punishment, the state would be obliged to maintain the life-support system for Yorick though they might move him from Houston to Leavenworth, and aside from the unpleasantness of the opprobrium, I, for one, would not mind at all and would consider myself a free man under those circumstances. If the state has an interest in forcibly relocating persons in institutions, it would fail to relocate me in any institution by locating Yorick there. If this were true, it suggested a third alternative.

3. Dennett is wherever he thinks he is. Generalized, the claim was as follows: At any given time a person has a point of view, and the location of the point of view (which is determined internally by the content of the point of view) is also the location of the person.

Such a proposition is not without its perplexities, but to me it seemed a step in the right direction. The only trouble was that it seemed to place one in a heads-I-win/tails-you-lose situation of unlikely infallibility as regards location. Hadn’t I myself often been wrong about where I was, and at least as often uncertain? Couldn’t one get lost? Of course, but getting lost geographically is not the only way one might get lost. If one were lost in the woods one could attempt to reassure oneself with the consolation that at least one knew where one was: one was right here in the familiar surroundings of one’s own body. Perhaps in this case one would not have drawn one’s attention to much to be thankful for. Still there were worse plights imaginable, and I wasn’t sure I wasn’t in such a plight right now.

Point of view clearly had something to do with personal location, but it was itself an unclear notion. It was obvious that the content of one’s point of view was not the same as or determined by the content of one’s beliefs or thoughts. For example, what should we say about the point of view of the Cinerama viewer who shrieks and twists in his seat as the roller-coaster footage overcomes his psychic distancing? Has he forgotten that he is safely seated in the theater? Here I was inclined to say that the person is experiencing an illusory shift in point of view. In other cases, my inclination to call such shifts illusory was less strong. The workers in laboratories and plants who handle dangerous materials by operating feedback-controlled mechanical arms and hands undergo a shift in point of view that is crisper and more pronounced than anything Cinerama can provoke. They can feel the heft and slipperiness of the containers they manipulate with their metal fingers. They know perfectly well where they are and are not fooled into false beliefs by the experience, yet it is as if they were inside the isolation chamber they are peering into. With mental effort, they can manage to shift their point of view back and forth, rather like making a transparent Necker cube or an Escher drawing change orientation before one’s eyes. It does seem extravagant to suppose that in performing this bit of mental gymnastics, they are transporting themselves back and forth.

Still their example gave me hope. If I was in fact in the vat in spite of my intuitions, I might be able to train myself to adopt that point of view even as a matter of habit. I should dwell on images of myself comfortably floating in my vat, beaming volitions to that familiar body out there. I reflected that the ease or difficulty of this task was presumably independent of the truth about the location of one’s brain. Had I been practicing before the operation, I might now be finding it second nature. You might now yourself try such a trompe l’oeil. Imagine you have written an inflammatory letter which has been published in the Times, the result of which is that the government has chosen to impound your brain for a probationary period of three years in its Dangerous Brain Clinic in Bethesda, Maryland. Your body of course is allowed freedom to earn a salary and thus to continue its function of laying up income to be taxed. At this moment, however, your body is seated in an auditorium listening to a peculiar account by Daniel Dennett of his own similar experience. Try it. Think yourself to Bethesda, and then hark back longingly to your body, far away, and yet seeming so near. It is only with long-distance restraint (yours? the government’s?) that you can control your impulse to get those hands clapping in polite applause before navigating the old body to the rest room and a well-deserved glass of evening sherry in the lounge. The task of imagination is certainly difficult, but if you achieve your goal the results might be consoling.

Anyway, there I was in Houston, lost in thought as one might say, but not for long. My speculations were soon interrupted by the Houston doctors, who wished to test out my new prosthetic nervous system before sending me off on my hazardous mission. As I mentioned before, I was a bit dizzy at first, and not surprisingly, although I soon habituated myself to my new circumstances (which were, after all, well nigh indistinguishable from my old circumstances). My accommodation was not perfect, however, and to this day I continue to be plagued by minor coordination difficulties. The speed of light is fast, but finite, and as my brain and body move farther and farther apart, the delicate interaction of my feedback systems is thrown into disarray by the time lags. Just as one is rendered close to speechless by a delayed or echoic hearing of one’s speaking voice, so, for instance, I am virtually unable to track a moving object with my eyes whenever my brain and my body are more than a few miles apart. In most matters my impairment is scarcely detectable, though I can no longer hit a slow curve ball with the authority of yore. There are some compensations of course. Though liquor tastes as good as ever, and warms my gullet while corroding my liver, I can drink it in any quantity I please, without becoming the slightest bit inebriated, a curiosity some of my close friends may have noticed (though I occasionally have feigned inebriation, so as not to draw attention to my unusual circumstances). For similar reasons, I take aspirin orally for a sprained wrist, but if the pain persists I ask Houston to administer codeine to me in vitro. In times of illness the phone bill can be staggering.

But to return to my adventure. At length, both the doctors and I were satisfied that I was ready to undertake my subterranean mission. And so I left my brain in Houston and headed by helicopter for Tulsa. Well, in any case, that’s the way it seemed to me. That’s how I would put it, just off the top of my head as it were. On the trip I reflected further about my earlier anxieties and decided that my first postoperative speculations had been tinged with panic. The matter was not nearly as strange or metaphysical as I had been supposing. Where was I? In two places, clearly: both inside the vat and outside it. Just as one can stand with one foot in Connecticut and the other in Rhode Island, I was in two places at once. I had become one of those scattered individuals we used to hear so much about. The more I considered this answer, the more obviously true it appeared. But, strange to say, the more true it appeared, the less important the question to which it could be the true answer seemed. A sad, but not unprecedented, fate for a philosophical question to suffer. This answer did not completely satisfy me, of course. There lingered some question to which I should have liked an answer, which was neither “Where are all my various and sundry parts?” nor “What is my current point of view?” Or at least there seemed to be such a question. For it did seem undeniable that in some sense I and not merely most of me was descending into the earth under Tulsa in search of an atomic warhead.

When I found the warhead, I was certainly glad I had left my brain behind, for the pointer on the specially built Geiger counter I had brought with me was off the dial. I called Houston on my ordinary radio and told the operation control center of my position and my progress. In return, they gave me instructions for dismantling the vehicle, based upon my on-site observations. I had set to work with my cutting torch when all of a sudden a terrible thing happened. I went stone deaf. At first I thought it was only my radio earphones that had broken, but when I tapped on my helmet, I heard nothing. Apparently the auditory transceivers had gone on the fritz. I could no longer hear Houston or my own voice, but I could speak, so I started telling them what had happened. In midsentence, I knew something else had gone wrong. My vocal apparatus had become paralyzed. Then my right hand went limp—another transceiver had gone. I was truly in deep trouble. But worse was to follow. After a few more minutes, I went blind. I cursed my luck, and then I cursed the scientists who had led me into this grave peril. There I was, deaf, dumb, and blind, in a radioactive hole more than a mile under Tulsa. Then the last of my cerebral radio links broke, and suddenly I was faced with a new and even more shocking problem: whereas an instant before I had been buried alive in Oklahoma, now I was disembodied in Houston. My recognition of my new status was not immediate. It took me several very anxious minutes before it dawned on me that my poor body lay several hundred miles away, with heart pulsing and lungs respirating, but otherwise as dead as the body of any heart-transplant donor, its skull packed with useless, broken electronic gear. The shift in perspective I had earlier found well nigh impossible now seemed quite natural. Though I could think myself back into my body in the tunnel under Tulsa, it took some effort to sustain the illusion.
For surely it was an illusion to suppose I was still in Oklahoma: I had lost all contact with that body.

It occurred to me then, with one of those rushes of revelation of which we should be suspicious, that I had stumbled upon an impressive demonstration of the immateriality of the soul based upon physicalist principles and premises. For as the last radio signal between Tulsa and Houston died away, had I not changed location from Tulsa to Houston at the speed of light? And had I not accomplished this without any increase in mass? What moved from A to B at such speed was surely myself, or at any rate my soul or mind—the massless center of my being and home of my consciousness. My point of view had lagged somewhat behind, but I had already noted the indirect bearing of point of view on personal location. I could not see how a physicalist philosopher could quarrel with this except by taking the dire and counterintuitive route of banishing all talk of persons. Yet the notion of personhood was so well entrenched in everyone’s world view, or so it seemed to me, that any denial would be as curiously unconvincing, as systematically disingenuous, as the Cartesian negation, “non sum.”

The joy of philosophic discovery thus tided me over some very bad minutes or perhaps hours as the helplessness and hopelessness of my situation became more apparent to me. Waves of panic and even nausea swept over me, made all the more horrible by the absence of their normal body-dependent phenomenology. No adrenaline rush of tingles in the arms, no pounding heart, no premonitory salivation. I did feel a dread sinking feeling in my bowels at one point, and this tricked me momentarily into the false hope that I was undergoing a reversal of the process that landed me in this fix—a gradual undisembodiment. But the isolation and uniqueness of that twinge soon convinced me that it was simply the first of a plague of phantom body hallucinations that I, like any other amputee, would be all too likely to suffer.

My mood then was chaotic. On the one hand, I was fired up with elation of my philosophic discovery and was wracking my brain (one of the few familiar things I could still do), trying to figure out how to communicate my discovery to the journals; while on the other, I was bitter, lonely, and filled with dread and uncertainty. Fortunately, this did not last long, for my technical support team sedated me into a dreamless sleep from which I awoke, hearing with magnificent fidelity the familiar opening strains of my favorite Brahms piano trio. So that was why they had wanted a list of my favorite recordings! It did not take me long to realize that I was hearing the music without ears. The output from the stereo stylus was being fed through some fancy rectification circuitry directly into my auditory nerve. I was mainlining Brahms, an unforgettable experience for any stereo buff. At the end of the record it did not surprise me to hear the reassuring voice of the project director speaking into a microphone that was now my prosthetic ear. He confirmed my analysis of what had gone wrong and assured me that steps were being taken to re-embody me. He did not elaborate, and after a few more recordings, I found myself drifting off to sleep. My sleep lasted, I later learned, for the better part of a year, and when I awoke, it was to find myself fully restored to my senses. When I looked into the mirror, though, I was a bit startled to see an unfamiliar face. Bearded and a bit heavier, bearing no doubt a family resemblance to my former face, and with the same look of spritely intelligence and resolute character, but definitely a new face. Further self-explorations of an intimate nature left me no doubt that this was a new body, and the project director confirmed my conclusions. He did not volunteer any information on the past history of my new body and I decided (wisely, I think in retrospect) not to pry. 
As many philosophers unfamiliar with my ordeal have more recently speculated, the acquisition of a new body leaves one’s person intact. And after a period of adjustment to a new voice, new muscular strengths and weaknesses, and so forth, one’s personality is by and large also preserved. More dramatic changes in personality have been routinely observed in people who have undergone extensive plastic surgery, to say nothing of sex-change operations, and I think no one contests the survival of the person in such cases. In any event I soon accommodated to my new body, to the point of being unable to recover any of its novelties to my consciousness or even memory. The view in the mirror soon became utterly familiar. That view, by the way, still revealed antennae, and so I was not surprised to learn that my brain had not been moved from its haven in the life-support lab.

I decided that good old Yorick deserved a visit. I and my new body, whom we might as well call Fortinbras, strode into the familiar lab to another round of applause from the technicians, who were of course congratulating themselves, not me. Once more I stood before the vat and contemplated poor Yorick, and on a whim I once again cavalierly flicked off the output transmitter switch. Imagine my surprise when nothing unusual happened. No fainting spell, no nausea, no noticeable change. A technician hurried to restore the switch to ON, but still I felt nothing. I demanded an explanation, which the project director hastened to provide. It seems that before they had even operated on the first occasion, they had constructed a computer duplicate of my brain, reproducing both the complete information-processing structure and the computational speed of my brain in a giant computer program. After the operation, but before they had dared to send me off on my mission to Oklahoma, they had run this computer system and Yorick side by side. The incoming signals from Hamlet were sent simultaneously to Yorick’s transceivers and to the computer’s array of inputs. And the outputs from Yorick were not only beamed back to Hamlet, my body; they were recorded and checked against the simultaneous output of the computer program, which was called “Hubert” for reasons obscure to me. Over days and even weeks, the outputs were identical and synchronous, which of course did not prove that they had succeeded in copying the brain’s functional structure, but the empirical support was greatly encouraging.

Hubert’s input, and hence activity, had been kept parallel with Yorick’s during my disembodied days. And now, to demonstrate this, they had actually thrown the master switch that put Hubert for the first time in on-line control of my body—not Hamlet, of course, but Fortinbras. (Hamlet, I learned, had never been recovered from its underground tomb and could be assumed by this time to have largely returned to the dust. At the head of my grave still lay the magnificent bulk of the abandoned device, with the word STUD emblazoned on its side in large letters—a circumstance which may provide archeologists of the next century with a curious insight into the burial rites of their ancestors.)

The laboratory technicians now showed me the master switch, which had two positions, labeled B, for Brain (they didn’t know my brain’s name was Yorick) and H, for Hubert. The switch did indeed point to H, and they explained to me that if I wished, I could switch it back to B. With my heart in my mouth (and my brain in its vat), I did this. Nothing happened. A click, that was all. To test their claim, and with the master switch now set at B, I hit Yorick’s output transmitter switch on the vat and sure enough, I began to faint. Once the output switch was turned back on and I had recovered my wits, so to speak, I continued to play with the master switch, flipping it back and forth. I found that with the exception of the transitional click, I could detect no trace of a difference. I could switch in mid-utterance, and the sentence I had begun speaking under the control of Yorick was finished without a pause or hitch of any kind under the control of Hubert. I had a spare brain, a prosthetic device which might some day stand me in very good stead, were some mishap to befall Yorick. Or alternatively, I could keep Yorick as a spare and use Hubert. It didn’t seem to make any difference which I chose, for the wear and tear and fatigue on my body did not have any debilitating effect on either brain, whether or not it was actually causing the motions of my body, or merely spilling its output into thin air.

The only unsettling aspect of this new development was the prospect, which was not long in dawning on me, of someone detaching the spare—Hubert or Yorick, as the case might be—from Fortinbras and hitching it to yet another body—some Johnny-come-lately Rosencrantz or Guildenstern. Then (if not before) there would be two people, that much was clear. One would be me, and the other would be a sort of super-twin brother. If there were two bodies, one under the control of Hubert and the other being controlled by Yorick, then which would the world recognize as the true Dennett? And whatever the rest of the world decided, which one would be me? Would I be the Yorick-brained one, in virtue of Yorick’s causal priority and former intimate relationship with the original Dennett body, Hamlet? That seemed a bit legalistic, a bit too redolent of the arbitrariness of consanguinity and legal possession, to be convincing at the metaphysical level. For suppose that before the arrival of the second body on the scene, I had been keeping Yorick as the spare for years, and letting Hubert’s output drive my body—that is, Fortinbras—all that time. The Hubert-Fortinbras couple would seem then by squatter’s rights (to combat one legal intuition with another) to be the true Dennett and the lawful inheritor of everything that was Dennett’s. This was an interesting question, certainly, but not nearly so pressing as another question that bothered me. My strongest intuition was that in such an eventuality I would survive so long as either brain-body couple remained intact, but I had mixed emotions about whether I should want both to survive.

I discussed my worries with the technicians and the project director. The prospect of two Dennetts was abhorrent to me, I explained, largely for social reasons. I didn’t want to be my own rival for the affections of my wife, nor did I like the prospect of the two Dennetts sharing my modest professor’s salary. Still more vertiginous and distasteful, though, was the idea of knowing that much about another person, while he had the very same goods on me. How could we ever face each other? My colleagues in the lab argued that I was ignoring the bright side of the matter. Weren’t there many things I wanted to do but, being only one person, had been unable to do? Now one Dennett could stay at home and be the professor and family man, while the other could strike out on a life of travel and adventure—missing the family of course, but happy in the knowledge that the other Dennett was keeping the home fires burning. I could be faithful and adulterous at the same time. I could even cuckold myself—to say nothing of other more lurid possibilities my colleagues were all too ready to force upon my overtaxed imagination. But my ordeal in Oklahoma (or was it Houston?) had made me less adventurous, and I shrank from this opportunity that was being offered (though of course I was never quite sure it was being offered to me in the first place).

There was another prospect even more disagreeable: that the spare, Hubert or Yorick as the case might be, would be detached from any input from Fortinbras and just left detached. Then, as in the other case, there would be two Dennetts, or at least two claimants to my name and possessions, one embodied in Fortinbras, and the other sadly, miserably disembodied. Both selfishness and altruism bade me take steps to prevent this from happening. So I asked that measures be taken to ensure that no one could ever tamper with the transceiver connections or the master switch without my (our? no, my) knowledge and consent. Since I had no desire to spend my life guarding the equipment in Houston, it was mutually decided that all the electronic connections in the lab would be carefully locked. Both those that controlled the life-support system for Yorick and those that controlled the power supply for Hubert would be guarded with fail-safe devices, and I would take the only master switch, outfitted for radio remote control, with me wherever I went. I carry it strapped around my waist and—wait a moment—here it is. Every few months I reconnoiter the situation by switching channels. I do this only in the presence of friends, of course, for if the other channel were, heaven forbid, either dead or otherwise occupied, there would have to be somebody who had my interests at heart to switch it back, to bring me back from the void. For while I could feel, see, hear, and otherwise sense whatever befell my body, subsequent to such a switch, I’d be unable to control it. By the way, the two positions on the switch are intentionally unmarked, so I never have the faintest idea whether I am switching from Hubert to Yorick or vice versa. (Some of you may think that in this case I really don’t know who I am, let alone where I am. But such reflections no longer make much of a dent on my essential Dennettness, on my own sense of who I am.
If it is true that in one sense I don’t know who I am, then that’s another one of your philosophical truths of underwhelming significance.)

In any case, every time I’ve flipped the switch so far, nothing has happened. So let’s give it a try....

“THANK GOD! I THOUGHT YOU’D NEVER FLIP THAT SWITCH! You can’t imagine how horrible it’s been these last two weeks—but now you know; it’s your turn in purgatory. How I’ve longed for this moment! You see, about two weeks ago—excuse me, ladies and gentlemen, but I’ve got to explain this to my … um, brother, I guess you could say, but he’s just told you the facts, so you’ll understand—about two weeks ago our two brains drifted just a bit out of synch. I don’t know whether my brain is now Hubert or Yorick, any more than you do, but in any case, the two brains drifted apart, and of course once the process started, it snowballed, for I was in a slightly different receptive state for the input we both received, a difference that was soon magnified. In no time at all the illusion that I was in control of my body—our body—was completely dissipated. There was nothing I could do—no way to call you. YOU DIDN’T EVEN KNOW I EXISTED! It’s been like being carried around in a cage, or better, like being possessed—hearing my own voice say things I didn’t mean to say, watching in frustration as my own hands performed deeds I hadn’t intended. You’d scratch our itches, but not the way I would have, and you kept me awake with your tossing and turning. I’ve been totally exhausted, on the verge of a nervous breakdown, carried around helplessly by your frantic round of activities, sustained only by the knowledge that some day you’d throw the switch.

“Now it’s your turn, but at least you’ll have the comfort of knowing I know you’re in there. Like an expectant mother, I’m eating—or at any rate tasting, smelling, seeing—for two now, and I’ll try to make it easy for you. Don’t worry. Just as soon as this colloquium is over, you and I will fly to Houston, and we’ll see what can be done to get one of us another body. You can have a female body—your body could be any color you like. But let’s think it over. I tell you what—to be fair, if we both want this body, I promise I’ll let the project director flip a coin to settle which of us gets to keep it and which then gets to choose a new body. That should guarantee justice, shouldn’t it? In any case, I’ll take care of you, I promise. These people are my witnesses.

“Ladies and gentlemen, this talk we have just heard is not exactly the talk I would have given, but I assure you that everything he said was perfectly true. And now if you’ll excuse me, I think I’d—we’d—better sit down.”

Reflections

The story you have just read not only isn’t true (in case you wondered) but couldn’t be true. The technological feats described are impossible now, and some may remain forever outside our ability, but that is not what matters to us. What matters is whether there is something in principle impossible—something incoherent—about the whole tale. When philosophical fantasies become too outlandish—involving time machines, say, or duplicate universes or infinitely powerful deceiving demons—we may wisely decline to conclude anything from them. Our conviction that we understand the issues involved may be unreliable, an illusion produced by the vividness of the fantasy.

In this case the surgery and microradios described are far beyond the present or even clearly envisaged future state of the art, but that is surely “innocent” science fiction. It is less clear that the introduction of Hubert, the computer duplicate of Yorick, Dennett’s brain, is within bounds. (As fantasy-mongers we can make up the rules as we go along, of course, but on pain of telling a tale of no theoretical interest.) Hubert is supposed to run in perfect synchrony with Yorick for years on end without the benefit of any interactive, corrective links between them. This would not just be a great technological triumph; it would verge on the miraculous. It is not just that in order for a computer to come close to matching a human brain in speed of handling millions of channels of parallel input and output it would have to have a fundamental structure entirely unlike that of existing computers. Even if we had such a brainlike computer, its sheer size and complexity would make the prospect of independent synchronic behavior virtually impossible. Without the synchronized and identical processing in both systems, an essential feature of the story would have to be abandoned. Why? Because the premise that there is only one person with two brains (one a spare) depends on it. Consider what Ronald de Sousa has to say about a similar case:

When Dr. Jekyll changes into Mr. Hyde, that is a strange and mysterious thing. Are they two people taking turns in one body? But here is something stranger: Dr. Juggle and Dr. Boggle, too, take turns in one body. But they are as like as identical twins! You balk: why then say that they have changed into one another? Well, why not: if Dr. Jekyll can change into a man as different as Hyde, surely it must be all the easier for Juggle to change into Boggle, who is exactly like him.

We need conflict or strong difference to shake our natural assumption that to one body there corresponds at most one agent.

—from “Rational Homunculi”

Since several of the most remarkable features of “Where Am I?” hinge on the supposition of independent synchronic processing in Yorick and Hubert, it is important to note that this supposition is truly outrageous—in the same league as the supposition that somewhere there is another planet just like Earth, with an atom-for-atom duplicate of you and all your friends and surroundings,[18] or the supposition that the universe is only five days old (it only seems to be much older because when God made it five days ago, He made lots of instant “memory”-laden adults, libraries full of apparently ancient books, mountains full of brand-new fossils, and so forth).

The possibility of a prosthetic brain like Hubert, then, is only a possibility in principle, though less marvelous bits of artificial nervous system may be just around the corner. Various crude artificial TV eyes for the blind are already in existence; some of these send input directly to portions of the visual cortex of the brain, but others avoid such virtuoso surgery by transmitting their information through other external sense organs—such as the tactile receptors in the fingertips or even by an array of tingling points spread across the subject’s forehead, abdomen, or back.

The prospects for such nonsurgical mind extensions are explored in the next selection, a sequel to “Where Am I?” by Duke University philosopher David Sanford.


D.C.D.

14 David Hawley Sanford Where was I?[19]

Daniel Dennett, or perhaps one of the representatives from the corporation that collectively comprises him, delivered “Where Am I?” to a Chapel Hill Colloquium and received an unprecedented standing ovation. I wasn’t there clapping with the rest of the local philosophers; I was on sabbatical leave. Although my colleagues still believe I was living in New York and pursuing a line of philosophic research, actually I was working secretly for the Department of Defense on a matter closely related to the Dennett corporation.

Dennett became so preoccupied with questions about his nature, unity, and identity that he seemed to forget that the primary purpose of his mission was not to make previously intractable problems in the philosophy of mind even more difficult but to retrieve a fiercely radioactive atomic warhead stuck a mile beneath Tulsa. Dennett tells us that Hamlet, his decerebrate and remotely controlled body, had barely started work on the warhead when communications between it and Yorick, his disembodied brain, broke down. He speculates that Hamlet soon turned to dust and appears neither to know nor to care what became of the warhead. I, as it happens, played an essential role in its ultimate retrieval. Although my role was similar to Dennett’s, there were some important differences.

During a wakeful interval in the long period when Dennett, or Yorick, slumbered on without any thoroughgoing communication, direct or remote, with a living human body, he mainlined a little Brahms. The rectified output from the stereo stylus was fed directly into the auditory nerves. A certain sort of scientist or philosopher would ask, “If we can bypass the middle and inner ear and feed directly into the auditory nerve, why can’t we bypass that as well and feed directly into whatever the auditory nerve feeds? Indeed, why not bypass that as well and feed directly into the subpersonal information-processing system another step farther in? Or into the next step beyond that?” Some theorists, but presumably not Dennett, would wonder when this process of replacing natural with artificial information-processing devices would reach the ultimate possessor of auditory experience, the real core person, the true seat of the soul. Others would see it rather as a layer-by-layer transformation, from the outside in, of an organic subject of consciousness to an artificial intelligence. The scientist shooting the Brahms piano trio straight into Yorick’s auditory nerves, however, actually asked himself a different kind of question. He wondered why they had bothered to disconnect Dennett’s ears from his auditory nerves. There would have been advantages, he thought, if we could have used earphones on the ears connected in the normal way to the brain in the vat and had microphones instead of organic ears on the body that ventured deep below Tulsa. The belief that the radiation could damage only brain tissue had been utterly mistaken. Indeed, the organic ears on Hamlet had been the first to go, and the rest of Hamlet was killed off shortly thereafter.
With microphones instead of ears on Hamlet, and earphones on the ears connected normally to Yorick, Dennett could get a more realistic stereo rendition of a musical performance than could be obtained merely by mainlining the output from a stereo cartridge tracking a normal stereo recording. If Hamlet sat in the concert hall during a live performance, then every turn of the head would result in slightly different outputs from the earphones back in Houston. This setup would preserve the slight differences in volume and the slight time delay between the two signals that, although not consciously discernible, are so important in fixing the location of a sound source.

A description of this marginal improvement on earphones serves as an analogy in the explanation of some more radical advances made by the NASA technicians. Human eyes, they discovered from the Dennett caper, could not long withstand the fierce radiation from the buried warhead. It would have been better to leave Dennett’s eyes attached to his brain as well and mount little television cameras in Hamlet’s empty eye sockets. By the time I had entered into the secret mission to retrieve the warhead, the technicians had perfected eyevideos. Eyevideos are to seeing what earphones are to hearing. They not only project an image on the retina, they monitor every movement of the eyeball. For every rapid eye movement, there is a corresponding rapid camera movement; for every twist of the head, there is a corresponding shift in the cameras; and so on. Seeing by means of eyevideos is in most circumstances indistinguishable from seeing without them. When trying to read really fine print, I noticed a slight loss of acuity; and, until the system was finely tuned, my night vision was rather better with eyevideos than without.

The most amazing simulation devices were for tactile perception. But before I describe skintact, which is to cutaneous and subcutaneous feeling what earphones are to hearing, I should like to describe some experiments that can be performed with eyevideos. The classic experiment with inverting lenses can be repeated simply by mounting the cameras upside down. New experiments of the same general sort can be performed by mounting the cameras in other positions that diverge from the normal. Here are a few: the so-called rabbit mount, with the cameras facing in opposite directions instead of side by side; the rabbit mount with extreme wide-angle lenses, so the field of vision is 360 degrees; and the so-called bank or supermarket mount, with the two cameras mounted on opposite walls of the room that the subject occupies. This one takes some getting used to. It is possible, by the way, with this setup to see all the sides of an opaque cube at the same time.

But you want to hear more about skintact. It is a light, porous material worn right next to the skin, and it extends one’s tactile range as radio and television extend one’s auditory and visual range. When an artificial hand equipped with skintact transmitters strokes a damp puppy, the nerves in the skin of a real hand enclosed in receptor skintact are stimulated in just the way they would be if the real hand that contains them were stroking a damp puppy. When the skintact transmitter touches something warm, the corresponding skin covered with the receptor skintact does not actually warm up, but the appropriate sensory nerves are stimulated as they would be if warmth were actually present.

In order to retrieve the buried warhead, a robot was sent underground. This robot contained no living cells. It had the same proportions as my body; it was covered with skintact transmitters; its head had microphones and cameras mounted in it that could transmit to earphones and eyevideos. It was jointed just as my body is jointed and could move in most of the ways my body moves. It did not have a mouth or jaws or any mechanism for inhaling and exhaling air or for ingesting food. In place of a mouth, it had a loudspeaker that put forth all the sounds picked up by the microphone in front of my mouth.

There was another marvelous intercommunication system between me and the robot, the Motion and Resistance System, or MARS for short. The MARS membrane is worn over the skintact layer covering the human subject and under the skintact layer worn by the robot. I don’t understand all the details of how MARS works, but it isn’t difficult to say what it does. It enables most of the bodily motions of the human to be duplicated exactly and simultaneously by the robot while the various pressures and resistances encountered by the limbs of the robot are duplicated for the corresponding human limbs.

The NASA scientists, instead of splitting me up, as they had split up Dennett, would leave me entire. I would stay back in Houston, all of me, and without suffering any effects from radiation would control a robot on its underground mission. The scientists assumed that, unlike Dennett, I would not be distracted from the primary purpose of the mission by abstruse philosophical questions about my location. Little did they know.

Dennett mentions laboratory workers who handle dangerous materials by operating feedback-controlled mechanical arms and hands. I was to be like them, only I would be operating a feedback-controlled entire body with prosthetic hearing, seeing, and feeling. Although it might be as if I were deep in the tunnel under Tulsa, I would know perfectly well where I really was, safe in the laboratory wearing earphones and eyevideos and skintact and MARS membrane, and speaking into a microphone.

It turned out, however, that once I was all rigged up, I could not resist the inclination to locate myself in the location of the robot. Just as Dennett wanted to see his brain, I wanted to see myself swathed in my electronic garments. And just as Dennett had difficulty identifying himself with his brain, I had difficulty identifying myself as the body that moved its head every time the robot moved its head and moved its legs in a walking motion as the robot walked around the laboratory.

Following Dennett’s example, I began naming things. I used “Sanford” as Dennett used “Dennett” so that the questions “Where was I?” and “Where was Sanford?” should receive the same answer. My first name, “David,” served as a name for the mostly saltwater and carbon compound body being cared for in Houston. My middle name, “Hawley,” served for a while as the name of the robot.

The general principle Where Hawley goes, there goes Sanford obviously will not do. The robot that first walked around David while David made walking motions and turned its head as David turned his head is now in a highly classified science museum, and Sanford is not.

Also, the robot could be controlled by some other flesh-and-blood body before, and after, it was controlled by David. If Sanford ever went where Hawley went, I did so only when Hawley was in communication with David or a David replica in at least some of the ways that have been described. Dennett’s first principle, Where Hamlet goes, there goes Dennett, needs analogous qualification.

My attempt to name the robot “Hawley” ran into difficulties when there turned out to be more than one robot. In Houston there were two full-size robots, one whose main parts were mostly plastic and one whose main parts were mostly metal. They looked just the same from the outside, and, if you know what I mean, they felt just the same from the inside. Neither robot was flown to Tulsa. A third robot, built on a three-fifths scale so it could maneuver more easily in cramped quarters, was there already. That’s the one that retrieved the warhead.

Once I was onto the fact that there was more than one robot, the technicians did not always wait for David to fall asleep before switching channels. When Little Hawley returned in triumph from Tulsa, the three of us, or the three of I, would play three-corner catch with the cooperation of three human helpers who would keep the temporarily inactive and unsentient robots from toppling over. I persisted in locating myself in the position of the active, sentient robot and thus had the experience, or at least seemed to have the experience, of spatiotemporally discontinuous travel from one location to another without occupying any of the positions in between.

The principle Where David goes, there goes Sanford was no more appealing for me than Dennett’s analogous Where Yorick goes, there goes Dennett. My reasons for rejection were more epistemological than legalistic. I had not seen David since Little Hawley’s return from Tulsa and I could not be sure that David still existed. For some reason I never fully understood, quite soon after David began perceiving the external world via skintact, eyevideos, and earphones, I was prevented from having the experiences associated with breathing, chewing, swallowing, digesting, and excreting. When Plastic Big Hawley produced articulate speech, I was unsure that the movements of David’s diaphragm, larynx, tongue, and lips were still causally involved in its production. The scientists had the technology to tap directly into the appropriate nerves and rectify the neural output, which was itself produced partly in response to artificially rectified input, to transmit the same signals to the receiver connected to the loudspeaker mounted in the head of Plastic Big Hawley. The scientists, indeed, had the technology to bypass any of their fancy electronic devices of causal mediation and substitute even fancier devices that hook up directly with the brain. Suppose, I thought, something went wrong with David; its kidneys broke down or it developed an embolism in a coronary artery. Everything of David except the brain might be dead. For that matter, the brain might be dead too. Since a computer duplicate of Yorick, Dennett’s brain, had been manufactured, so might a computer duplicate of David’s brain. I could have become a robot, or a computer, or a robot-computer combination, with no organic parts whatsoever. I would then resemble the Frank Baum character Nick Chopper, better known as the Tin Woodman, whose transformation from organic to inorganic constitution was accomplished a part at a time.
In such a case, besides having yet another variation on puzzle cases concerning the persistence of a person through a change of bodies, we would have the materials to construct more variations on puzzle cases concerning one self dividing into several. If one computer duplicate of a brain can be produced, then so can two or three or twenty. While each could control a modified brainless human body like that described by Dennett, each could also control a robot like one of the Hawleys. In either sort of case, body transfer, or robot transfer, or brain transfer, or computer transfer, or whatever you want to call it, could be accomplished without further advances in technology.

I realized that I was tempted by an argument similar to one Arnauld attributes to Descartes.

I can doubt that the human body David, or its brain, exists.

I cannot doubt that I see and hear and feel and think.

Therefore, I who see and hear and so forth cannot be identical to David or its brain; otherwise in doubting their existence I would doubt the existence of myself.

I also realized that David could have been separated into living, functional parts. The eyes with their eyevideos could be connected with the brain down the hall. The limbs, now kept alive with artificial blood, could similarly each have their own room. Whether or not these peripheral systems were still involved in the operation of Plastic Big Hawley, the brain might also have been taken apart, and the information between various subpersonal processing systems could be transferred nearly as quickly as before even if it had to travel much farther in space. And if the brain was gone, replaced with a computer duplicate, the computer parts might be spatially spread out in one of the ways Dennett describes briefly in “Toward a Cognitive Theory of Consciousness”.[20] The spatial contiguity or chemical composition of the various internal information-processing subsystems that together were responsible for my thoughts, actions, and passions seemed irrelevant to my personal location, unity, or identity.

As Dennett first formulated his third principle of personal location, Dennett is wherever he thinks he is, it lends itself to misinterpretation. He doesn’t mean that thinking that one is in Chapel Hill would ever be sufficient for actually being in Chapel Hill. He means rather that the location of a person’s point of view is the location of the person. Of course people do more than literally just view things. They perceive by other senses; they move. Some of their movements, such as head and eye movements, directly affect what they see. Many of their movements and positions are continually perceived although with only intermittent conscious attention. The robots in the Hawley family preserved almost all the normal functions and relations between the sense organs and limbs of a person and the environment the robots found themselves in. And so the spatial unity of a functioning Hawley robot was more than enough to provide Sanford with a sense of having a unified location where the robot was. At the time, the prospect of Hawley’s disassembly was more unsettling than the prospect of David’s dismemberment.

It was technically possible, I realized, that the inputs and outputs from David, or the computer duplicate, or whatever, could be divided among Little Hawley, Metal Big Hawley, and Plastic Big Hawley. Or a single robot could be disassembled although its various parts continued independently to move and to relay perceptual information. I didn’t know what would happen to my sense of unity in such a circumstance. Would I be able to preserve any sense of myself as a single agent? Under such bizarre circumstances I might be inclined to parody Descartes and say that I was not only in control of these different parts as an admiral commanding a fleet, but that I was very closely united to them, and so to speak so intermingled with them that I seemed to compose with them one whole. Or I might not be up to that task of self-integration. Would my range of motor and perceptual activity, rather than being more widely distributed in space, be reduced to recollection, meditation, and fantasy as the deliverances from spatially separated and independent sources impressed me only as a booming, buzzing, distracting confusion? I am glad that I was never given a chance to find out.

If we regard light, pressure waves, and so forth as carrying information about the physical world, the point of view is the spatial point where this information is received by a perceiver. Sometimes, as Dennett remarks, one can shift one’s point of view back and forth. The laboratory worker remotely manipulating dangerous materials can shift it back and forth from mechanical hands to hands of flesh and blood. The Cinerama viewer can shift it back and forth from a car hurtling down a roller-coaster from which one sees the ground approach with sickening rapidity to a seat inside a theater from which one sees rapidly changing images on a screen. Dennett had been unable to accomplish such a shift between Yorick and Hamlet, and I had been unable to accomplish such a shift between David and Hawley. Try as I might, I could not regard myself as seeing an image projected by eyevideo rather than seeing the scene before the camera that was transmitting to the eyevideo. In my present state of embodiment, analogously, I cannot shift my point of view a couple of inches farther in so that I can focus my attention on a pair of retinal images rather than on the messy typescript in front of my eyes. Neither can I shift my auditory point of hearing and attend to the vibrations of my eardrums rather than to the sounds outside.

My point of view had been from the location of a robot, and I had been strongly inclined to locate myself at my point of view. Although I regarded the location of a robot as being my location, I was less comfortable regarding myself as identical to a robot. Although I had no clear conception of myself as something other than the robot, I was willing to entertain the possibility that I and a robot, though distinct, occupied the same place at the same time. I was less troubled with discontinuous changes in location than with the idea that whenever the channels were switched I suddenly ceased to be identical with one robot and became identical with another.

When the time for debriefing arrived, Dr. Wechselmann, the scientist in charge, told me he had a big surprise for me and thereby filled me with fear and trepidation. Was David still alive? Was David’s brain floating in a vat? Had I been on line with a computer duplicate for days? Were there several computer duplicates, each controlling a robot or each controlling a different modified human body? I did not anticipate the actual surprise. Dr. Wechselmann said that I could witness my own disassembly—that is to say, the disassembly of the Hawley where I was. While I watched in a mirror, I saw the technicians unzip the layers and peel them back. It turned out that I, David Sanford, the living human being, was underneath. David’s health had been maintained; and forty-eight hours earlier, during sleep, the cameras had been mounted directly in front of the eyevideos, the microphones directly in front of the earphones, one layer of sensitive skintact directly over the layer next to my skin, and so forth. For a while, when I thought that my location was the location of Plastic Big Hawley, I was really walking around in a very skillfully made and lifelike or, more strictly, lifeless-like, robot costume. The sensations of breathing and eating and so forth were soon returned to me.

Taking off the eyevideo apparatus did not change things visually at all. The fact that for a while, when I thought that David’s eyes were in another room, they were actually right behind the cameras, reinforced my inclination to say that the eyevideo system does not interpose any barrier between its user and the physical world. It is like seeing things through a microscope or telescope or with the help of corrective lenses. When one sees by an eyevideo system, one sees what is in focus in front of the lens, not some mediating visual object, even though the causal chain between the external object and the visual awareness is more or less altered and complicated by the intervening apparatus.

So here I am, and there is no doubt that I was inside the double-layer suit when David was inside the suit. But when David was inside a single-layer suit, and the other layer covered a robot, my location remains something of a puzzle. If the puzzle is in any way more informative than the puzzles Dennett poses, Dennett deserves much of the credit. If he had wholly succeeded in his mission, there would have been no reason for me to embark on mine.

Reflections

Sanford’s story is much closer to being possible than its predecessor. In a recent article Marvin Minsky, founder of the Artificial Intelligence Laboratory at M.I.T., discusses the prospects for this technology:

You don a comfortable jacket lined with sensors and musclelike motors. Each motion of your arm, hand, and fingers is reproduced at another place by mobile, mechanical hands. Light, dextrous, and strong, these hands have their own sensors through which you see and feel what is happening. Using this instrument, you can “work” in another room, in another city, in another country, or on another planet. Your remote presence possesses the strength of a giant or the delicacy of a surgeon. Heat or pain is translated into informative but tolerable sensation. Your dangerous job becomes safe and pleasant.

Minsky calls this technology telepresence, a term suggested to him by Pat Gunkel, and describes the advances that have already been made.

Telepresence is not science fiction. We could have a remote-controlled economy by the twenty-first century if we start planning right now. The technical scope of such a project would be no greater than that of designing a new military aircraft.

Some of the components of Sanford’s imagined MARS system already have prototypes—mechanical hands with feedback systems transmitting forces and resistance, variously amplified or moderated—and there is even a step in the direction of eyevideo:

A Philco engineer named Steve Moulton made a nice telepresence eye. He mounted a TV camera atop a building and wore a helmet so that when he moved his head, the camera on top of the building moved, and so did a viewing screen attached to the helmet.

Wearing this helmet, you have the feeling of being on top of the building and looking around Philadelphia. If you “lean over” it’s kind of creepy. But the most sensational thing Moulton did was to put a two-to-one ratio on the neck so that when you turn your head 30 degrees, the mounted eye turns 60 degrees; you feel as if you had a rubber neck, as if you could turn your “head” completely around!

Might the future hold something even stranger in store? Justin Leiber, a philosopher at the University of Houston, develops a more radical variation on these themes in the next selection, an excerpt from his science fiction novel Beyond Rejection.


D.C.D.

15 Justin Leiber Beyond Rejection[21]

Worms began his spiel: “People often think that it ought to be simple enough to just manufacture an adult human body, like building a house. Or a helicopter. You’d think that, well, we know what chemicals are involved and how they go together, how they form cells according to DNA templates, and how the cells form organ systems regulated by chemical messengers, hormones, and the like. So we ought to be able to build a fully functional human body right up from scratch.”

Worms moved so that he blocked their view of the jogger. He brought his drained coffee cup down for emphasis.

“And, of course, we could build a human body up from scratch theoretically, anyhow. But no one ever has. In fact, no one has ever even started to. De Reinzie manufactured the first fully functional human cell—muscle tissue—in the middle of the last century, about 2062 or so. And shortly after that the major varieties were cooked up. And even then it wasn’t really manufactured from scratch. De Reinzie, like all the rest, built some basic DNA templates from actual carbon, oxygen, hydrogen, and so on, or rather from simple sugars and alcohols. But then he grew the rest from these. That’s growth, not manufacture. And nobody’s come closer to building an organ than a lab that made a millimeter of stomach wall for several million credits a couple of decades ago.

“I don’t want to bother you with the mathematics,” he continued, looking away from Terry. “But my old professor at Tech used to estimate that it would take all the scientific and manufacturing talent of Earth and the rest of the Federation something like fifty years and a googol credits to build a single human hand.

“You can imagine what it would take to make something like that,” he said, moving out of their line of vision and gesturing at the jogging figure. He took the clipboard that hung next to the treadmill’s control and scanned the sheets on it.

“This body has been blank for three years. It has a running-time age of thirty-one years, though of course Sally Cadmus—that’s the person involved—was born over thirty-four years ago. What with demand, of course, three years is a long time for a body to remain out of action. She’s in good health, fine musculature for a spacer—says here Sally was an asteroid miner. Seems the body spent two years frozen in a Holmann orbit. We’ve had it for four months and we’re preparing it now. You might see her walking around any day now.

“But Sally Cadmus won’t. Her last tape was just the obligatory one made on reaching majority, and she left no instructions for implantation. I trust, people, that all your tapes are updated.” He gave them the family-doctor look and went on, moving closer and dropping his voice.

“I have my mind taped every six months, just to be safe. After all, the tape is you—your individual software, or program, including memory store. Everything that makes you you.” He walked up to the aide who had brought the beautiful young man.

“You—for instance—Ms. Pedersen, when did you have your last tape job?”

The aide, a gaunt red-haired woman in her mid-thirties, snatched her arm from around her young man and glared at Austin Worms.

“What business—”

“Oh, I wouldn’t really expect you to say in front of other people.” He grinned at the others as Pedersen subsided. “But that’s the whole point, you see. Maybe she has been renewing her tape yearly, which is what our profession recommends as an absolute minimum. But a lot of people neglect this elementary precaution because they are so appalled by the thought of severe bodily injury. They just let things slide. And because the topic is so personal, no one knows, no one asks, no one reminds them until the once-in-half-a-million accident happens—truly irreparable body damage or total destruction.

“And then you find that the person hasn’t taped for twenty years. Which means ....”

He surveyed the group to let it sink in. Then he saw the beautiful girl-child. Terry had been hiding her, no doubt. A classic blond-haired blue-eyed girl in her midteens. She was looking straight into his eyes. Or through them. Something … He went on.

“Which means if he or she is lucky and there’s estate money, you’ve got someone who has to face all the ordinary problems of rejection that come in trying to match a young mind with what is almost certain to be a middle-aged body. But also the implant has all those problems multiplied by another factor. The implant has to deal with a world that is twenty years in the future. And a ‘career’ that is meaningless because he lacks the memory and skills that his old mind picked up over that twenty years.

“More likely, you’ll get the real blowout. You’ll get massive rejection psychosis and premature essential senility, and death. Real, final mind death.”

“But you would still have the person’s tape, their software, as you call it,” said Ms. Pedersen. “Couldn’t you just try again, with another blank body?” She still had her hands off her young man.

“Two problems. First”—he stuck his index finger in the air—“you got to realize how very difficult it is for a mind and a body to make a match, even with all the help us somaticians and psycheticians can provide, the best that modern biopsychological engineering can put together. Even with a really creative harmonizer to get in there and make the structure jell. Being reborn is very hard work indeed.

“And the failure rate under ordinary circumstances—tapes up-to-date, good stable mind, decent recipient body—is about twenty percent. And we know that it jumps to ninety-five percent if there’s a second time around. It’s nearly that bad the first time if you got someone whose tapes are twenty years out of date. The person may get through the first few days all right but he can’t pull himself into reality. Everything he knew was lost twenty years ago. No friends, no career, everything out of shape. Then the mind will reject its new body just as it rejects the new world it has woken up to. So you don’t have much of a chance. Unless, of course, you’re the rare nympher or still rarer leaper.

“Second, the Government underwrites the cost of the first implantation. Of course, they don’t pay for a fancy body—a nympher body, that is. You’d pay more than two million credits for one of those beauties. You get what’s available and you are lucky if you get it within a year or two. What the Government underwrites is the basic operation and tuning job. That alone costs one and a half million or so. Enough to pay my salary for a hundred years. Enough to send the half-dozen or so of you on the Cunard Line Uranium Jubilee All-Planets Tour in first class.”

Austin had been moving over to the treadmill control console while speaking. As he finished, his audience noticed a large structure descending from the ceiling just over the jogging figure, Sally Cadmus’s body. It looked like a cross between the upper half of a large mummy and a comfortably stuffed armchair. Austin glided over to the treadmill. The audience watched the structure open like an ancient iron maiden. Some noticed that the jogging figure was slowing down.

Austin arrived just in time to complete a flurry of adjustments on the jogger’s control package before the structure folded itself around. Two practiced blows on the back of the jogger’s thighs brought the legs out of contact with the slowing treadmill.

“It’s a lucky thing that implantation is so risky and the sort of accident that calls for it so rare,” he said as the structure ascended behind him. “Otherwise, the Kellog-Murphy Law, which underwrites the first implantation, would bankrupt the Government.”

“Where is the body going?” asked the blond-haired youngster. Austin could see now that she was probably no more than ten or eleven years old. Something about her posture had made him think she was older.

“Normally it would go into a kind of artificial hibernation—low temperature and minimal vital activity. But this body will be implanted tomorrow, so we’ll keep it at a normal level of biological function.” He had given the body an additional four cc.’s of glucose-saline plasma beyond the program. That was to compensate for the extra jogging. He hadn’t done the official calculations. It wasn’t that such mathematics was more than a minor chore. If you had asked him to explain, he would have said that the official calculation would have called for half again as much plasma. But he sensed that the body got more than usual from every cc. of water, from every molecule of sugar. Perhaps it was something in the sweat smell, the color and feel of the skin, the resilience of the musculature. But Austin knew.

The somatic aides would have said that Austin Worms was the best ghoul in the Solar System, a zombie’s best friend. And they would have meant what they said even if they went on to joke.

Austin had vomited for the first and only time in his life when he learned the origin of the slang terms “ghoul” and “vampire.”

The sounds of Terry’s tour group faded as they moved up the hall to the psychetician laboratory. But Austin did not return to Bruhler’s The Central Equations of the Abstract Theory of Mind. He had been puzzled by what the eleven-year-old blond girl had said to him before sauntering off to catch up with the rest of the tour. She had said, “I bet that mind is gonna be in for a real shock when it wakes up with that thing on its backside.” He wondered how she could know that it wasn’t just part of the crazy-quilt system of tubes and wires that the jogger had on her back.

“I’m Candy Darling,” she had added as she left the room. Now he knew who she was. You never knew what to expect in a harmonizer.

* * *

Psycheticians take care of minds. That’s why they are sometimes called vampires. Somaticians are called ghouls because they take care of bodies.

—I. F. + S. C. Operation Logbook, Append. II, Press Releases

Germaine Means grinned wolfishly at them. “I am a psychetician. What Terry would call a vampire. Call me Germaine if that does not appeal.”

They were seated facing a blackboard at one end of a large room which was otherwise filled with data cabinets, office cubicles, and computer consoles. The woman who addressed them wore severe and plain overalls. When she had first come to the Norbert Wiener Research Hospital—NWRH—the director had suggested that the chief psychetician might dress more suitably. That director had retired early.

“As you know from what Austin Worms told you, we think of the individual human mind as an abstract pattern of memory, skill, and experience that has been impressed on the physical hardware of the brain. Think of it this way: when you get a computer factory-fresh, it is like a blanked human brain. The computer has no subroutines, just as the brain has no skills. The computer has no data arrays to call on, just as the blanked brain has no memories.

“What we do here is try to implant the pattern of memory, skill, and experience that is all that is left of a person into a blanked brain. It is not easy because brains are not manufactured. You have to grow them. And a unique personality has to be part of this growth and development. So each brain is different. So no software mind fits any hardware brain perfectly. Except the brain that it grew up with.

“For instance,” Germaine Means continued, softening her tone so she would not alert Ms. Pedersen’s boyfriend, who was dozing in a well-padded chair, his elegant legs thrust straight out in full display, tights to sandals. “For instance, when pressure is applied to this person’s foot, his brain knows how to interpret the nervous impulses from his foot.” She suited her action to her words.

“His yelp indicates that his brain recognizes that considerable pressure has been applied to the toes of his left foot. If, however, we implanted another mind, it would not interpret the nervous impulses correctly—it might feel the impulses as a stomachache.”

The young man was on his feet, bristling. He moved toward Germaine, who had turned away to pick up what looked like a pair of goggles with some mirrors and gears on top. As he reached her, she turned to face him and pushed the goggles into his hands.

“Yes, thank you for volunteering. Put them on.” Not knowing what else to do, he did.

“I want you to look at that blond-haired girl who just sat down over there.” She held his arm lightly as he turned and his balance wavered. He appeared to be looking through the goggles at a point several degrees to the right of Candy Darling.

“Now I want you to point at her with your right hand—quick!” The young man’s arm shot out, the finger also pointing several degrees to the right of the girl. He began moving his finger to the left, but Germaine pulled his hand down to his side, outside the field of vision that the goggles allowed him.

“Try it again, quick,” she said. This time the finger was not as far off. On the fifth try his finger pointed directly to Candy Darling, though he continued to look to her right.

“Now off with the goggles. Look at her again. Point quick!” Germaine grabbed his hand the instant he pointed. Though he was not looking directly at Candy Darling, he was pointing several degrees to the left of her. He looked baffled.

Germaine Means chalked a head and goggles on the blackboard, seen as if you were looking down at them from the ceiling. She drew another head to the left of the line of sight of the goggled head and chalked in “15°” to indicate the angle.

“What happened is a simple example of tuning. The prisms in the goggles bend the light so that when his eyes told him he was looking straight at her, his eyes were in fact pointed fifteen degrees to her right. The muscles and nerves of his hand were tuned to point where his eyes were actually pointed—so he pointed fifteen degrees to the right.

“But then his eyes saw his hand going off to the right, so he began to compensate. In a couple of minutes—five tries—his motor coordination compensated so that he pointed to where his eyes told him she was—he adjusted to pointing fifteen degrees to the left of usual. When I took the goggles off, his arm was still tuned to compensate, so he pointed off to the left until he readjusted.”

She picked up the goggles. “Now, a human can adjust to that distortion in a few minutes. But I could calibrate these so that they would turn the whole room upside down. If you then walked around and tried to do things, you would find it difficult. Very difficult. But if you kept the goggles on, the whole room would turn right side up after a day or two. Everything would seem normal because your system would have retuned itself.

“What do you think will happen if you then take the goggles off?”

Candy Darling giggled. Ms. Pedersen said, “Oh, I see. Your mind would have adjusted to turning the, ah, messages from your eyes upside down, so when you took the goggles off—”

“Precisely,” said Germaine, “everything would look upside down to you until you readjusted to having the goggles off. And it happens the same way: you stumble around for a day or so and then everything snaps right side up again. And the stumbling-around part is important. If you are confined to a chair with your head fixed in position, your mind and body can’t tune themselves.

“Now I want you to imagine what happens when we implant a mind into a blanked brain. Almost everything will be out of tune. The messages from your eyes won’t simply be inverted, they’ll be scrambled in countless ways. Same thing with your ears, nose, tongue—and with the whole nerve net covering your body. And that’s just incoming messages. Your mind will have even more problems when it tries to tell the body to do something. Your mind will try to get your lips to say ‘water,’ and Sol knows what sound will come out.

“And what’s worse is that whatever sound does come out, your new ears won’t be able to give your mind an accurate version of it.”

Germaine smiled at them and glanced at her watch. Terry stood up.

“Terry will be wanting to take you on. Let me wrap this up by saying that it is a very simple thing to play someone’s mind tape into a prepared brain. The great problem is in getting the rearranged brain, the cerebral cortex, speaking strictly, to be tuned into the rest of the system. As Austin Worms may have told you, we start an implant operation tomorrow. The initial tape-in will take less than an hour. But the tuning will take days and days. Even months, if you count all the therapy. Questions?”

“Just one,” said Ms. Pedersen. “I can understand how difficult it is for a mind to survive implantation. And, of course, I know it is illegal to implant a mind that is over eighty-five. But couldn’t a person—if you call a mind a person—live forever by passing through body after body?”

“Okay, that’s a tough one to explain even if we had a lot of time and you knew a lot of mathematics. Until this century it was believed that senility was a by-product of the physical breakdown of the body. Today we know that a human mind can have roughly one hundred years of experiences before it reaches essential senility, however young the body it occupies. As you know, a few successful leapers have survived implantation after a fifty-year wait. So a leaper might, in theory, still be functioning a thousand years from now. But such an individual’s mind will not be able to encompass any more lived experience than you. When all there is of you is a tape in storage, you aren’t really alive.”

After they had filed out, Germaine Means noticed that the blondhaired girl had remained.

“Hi, I’m Candy Darling,” she cried. “I hope you don’t mind. I thought it would be fun to sneak in on the standard tour. Get the smell of the place.”

“Where’s your VAT?”

* * *

Austin Worms declared that basic physical meshing procedures were complete.

—I. F. + S. C. Operation Logbook

Gxxhdt.

Etaoin shrdlu. Mmm.

Anti-M.

Away mooncow Taddy fair fine. Fine again, take. Away, along, alas, alung the orbit-run, from swerve of space to wormhole wiggle, brings us. Start now. Wake.

So hear I am now coming out of nothing like Eros out of Death, knowing only that I was Ismael Forth—stately, muscled well—taping-in, and knowing that I don’t know when I’m waking or where, or where-in. And hoping that it is a dream. But it isn’t. Oh, no, it isn’t. With that goggling piece of munster cheese oumphowing on my eyelids.

And seemingly up through endless levels and configurations that had no words and now no memories. Wake.


“Helow, I’m Candy Darlinz.”

“I am Ismael returned” was what I started to try to reply. After the third attempt it came out better. And the munster cheese had become a blond-haired young girl with piercing blue eyes.

“Your primary implantation was finished yesterday, finally. Everyone thinks you’re a success. Your body is a pip. You’re in the Norbert Wiener Research Hospital in Houston. You have two estates clear through probate. Your friend Peter Strawson has covered your affairs. It’s the first week of April, 2112. You’re alive.”

She stood up and touched my hand.

“You start therapy tomorrow. Now sleep.”

I was already drifting off by the time she had closed the door behind her. I couldn’t even get myself worked up by what I was noticing. My nipples felt as big as grapes. I went out as I worked my way down past the belly button.


The next day I discovered that I had not only lost a penis. I had gained a meter-long prehensile tail. It was hate at first sense.

I had worked my way up to consciousness in slow stages. I had endless flight dreams—walking, running, staggering on, away from sour nameless horror. And brief flashes of sexuality that featured performances by my (former) body.

I really liked my old body. One of my biggest problems, as Dr. Germaine Means was soon to tell me. I could picture clearly how it had looked in the mirrors as I did my stretch and tone work. Just a hair over six foot four. Two hundred and five pounds, well-defined muscles, and just enough fat to be comfortable. A mat of curly red chest hair that made it easy to decide to have my facial hair wiped permanently. It felt good to be a confident and even slightly clumsy giant, looking down on a world of little people.

Oh, I wasn’t a real body builder or anything like that. Just enough exercise to look good—and attractive. I hadn’t in fact been all that good at physical sports. But I had liked my body. It was also a help in the public relations work that I did for IBO.

I was still lying on my back. I felt shrunk. Shrunk. As the warm, muzzy flush of sleep faded, my right hand moved up over my ribs. Ribs. They were thin and they stuck out, as if the skin were sprayed over the bare cage. I felt like a skeleton until I got to the lumps. Bags. Growths. Sacks. Even then part of me realized that they were not at all large for a woman, while most of me felt that they were as big as cantaloupes.

You may have imagined it as a kind of erotic dream. There you are in the hospital bed. You reach and there they are. Apt to the hands, the hardening nipples nestled between index and middle fingers. (Doubtless some men have felt this warm reverie with their hands on real flesh. The women may have felt pinch and itch rather than the imagined sensual flush. I know whereof I speak. I now know a lot of sexuality is like that. Perhaps heterosexuality continues as well as it does because of ignorance: each partner is free to invent the feelings of the other.)

But I was quite unable to feel erotic about my new acquisitions. Both ways. My fingers, as I felt them, felt pathology. Two dead cancerous mounds. And from the inside—so to speak—I felt that my flesh had swollen. The sheet made the nipples feel raw. A strange feeling of separation, as if the breast were disconnected, nerveless jelly—and then two points of sensitivity some inches in front of my chest. Dead spots. Rejection. I learned a lot about these.

As my hand moved down I was prepared for the swerve of hip. I couldn’t feel a penis and I did not expect to find one. I did not call it “gash.” Though that term is found occasionally in space-marine slang and often among the small number of male homosexuals of the extreme S&M type (Secretary & Master). I first learned the term a few days later from Dr. Means. She said that traditional male-male pornography revealed typical male illusions about female bodies: a “rich source of information about body-image pathologies.” She was certainly right in pointing out that “gash” was how I felt about it. At first.

I was not only scrawny, I was almost hairless. I felt really naked, naked and defenseless as a baby. Though my skin was several shades less fair—and I passed a scar. I was almost relieved to feel the curly groin hair. Gone. Sticklike legs. But I did feel something between my thighs. And knees. And ankles, by Sol.

At first I thought it was some sort of tube to take my body wastes. But as I felt down between my legs I could tell that it wasn’t covering those areas. It was attached at the end of my spine—or rather it had become the end of my spine, stretching down to my feet. It was my flesh. I didn’t quite intend it—at that point I can’t say that I intended anything, I was so shook—but the damned thing flipped up from the bottom of the bed like a snake, throwing the sheet over my face.

I screamed my head off.


“Cut it off” was what I said after they had given me enough betaorthoamine to stop me flailing about. I said this several times to Dr. Germaine Means, who had directed the rest of them out of the room.

“Look, Sally—I’ll call you that until you select a name yourself—we are not going to cut your tail off. By our calculations such a move would make terminal rejection almost certain. You would die. Several thousand nerves connect your brain with your prehensile tail. A sizable portion of your brain monitors and directs your tail—that part of your brain needs exercise and integration like any other component. We taped the pattern of your mind into your present brain. They have to learn to live together or you get rejection. In brief, you will die.”

Dr. Means continued to read me the riot act. I would have to learn to love my new body—she practically gushed with praise for it—my new sex, my new tail. I would have to do a lot of exercise and tests. And I would have to talk to a lot of people about how I felt. And I should feel pleased as pisque to have an extra hand.

My new body broke into a cold sweat when I realized that I had—truly—no choice. I wasn’t poor, assuming what I had heard yesterday was true. But I certainly couldn’t afford an implant, let alone a desirable body. What I had, of course, came free under the Kellog-Murphy Bill.

After a while she left. I stared at the wall numbly. A nurse brought a tray with scrambled eggs and toast. I ignored both nurse and tray. The thin-lipped mouth salivated. Let it suffer.

Reflections

Fascinating as the idea of mind tapes is, the supposition that such person preservation might someday be possible is almost certainly wrong. Leiber sees the fundamental obstacle—brains are not like computers fresh from the factory and all alike. Even at birth human brains are no doubt uniquely configured—like fingerprints—and a lifetime of learning and experience can only enhance their idiosyncrasies. There are scant grounds, then, for hoping that anything with the hardware-independence of a program can be “read out” of a brain (at a periodic “mind taping” session). There is even less hope that such a mind tape, even if it could be constructed, would be compatible with another brain’s hardware. Computers are designed to be readily redesignable (at another level) by the insertion, in one big lump or rush, of a new program; brains presumably are not.

Leiber is wonderfully imaginative about the ways technicians might try to solve this incompatibility problem (and his book contains many more surprises on this score), but in order to tell a good tale he has had to understate the problems by orders of magnitude in our opinion. The problems of transferring massive amounts of information between structurally different brains—such as yours and ours—are not insurmountable. The technology that already exists for accomplishing that task may, however, turn out in the end to be the most efficient possible. One of the most recent and advanced examples of that technology is in your hands.


D.C.D.

16 Rudy Rucker Software[22]

Cobb Anderson would have held out longer, but you don’t see dolphins every day. There were twenty of them, fifty, rolling in the little gray waves, wicketing up out of the water. It was good to see them. Cobb took it for a sign and went out for his evening sherry an hour early.

The screen door slapped shut behind him and he stood uncertainly for a moment, dazed by the late-afternoon sun. Annie Cushing watched him from her window in the cottage next door. Beatles’ music drifted out past her.

“You forgot your hat,” she advised. He was still a good-looking man, barrel-chested and bearded like Santa Claus. She wouldn’t have minded getting it on with him, if he weren’t so…

“Look at the dolphins, Annie. I don’t need a hat. Look how happy they are. I don’t need a hat and I don’t need a wife.” He started toward the asphalt road, walking stiffly across the crushed white shells.

Annie went back to brushing her hair. She wore it white and long, and she kept it thick with hormone spray. She was sixty and not too brittle to hug. She wondered idly if Cobb would take her to the Golden Prom next Friday.

The long last chord of “Day in the Life” hung in the air. Annie couldn’t have said which song she had just heard—after fifty years her responses to the music were all but extinguished—but she walked across the room to turn the stack of records over. If only something would happen, she thought for the thousandth time. I get so tired of being me.

At the Superette, Cobb selected a chilled quart of cheap sherry and a damp paper bag of boiled peanuts. And he wanted something to look at.

The Superette magazine selection was nothing compared to what you could get over in Cocoa. Cobb settled finally for a love-ad newspaper called Kiss and Tell. It was always good and weird … most of the advertisers were seventy-year-old hippies like himself. He folded the first-page picture under so that only the headline showed. PLEASE PHEEZE ME.

Funny how long you can laugh at the same jokes, Cobb thought, waiting to pay. Sex seemed odder all the time. He noticed the man in front of him, wearing a light-blue hat blocked from plastic mesh.

If Cobb concentrated on the hat he saw an irregular blue cylinder. But if he let himself look through the holes in the mesh he could see the meek curve of the bald head underneath. Skinny neck and a light-bulb head, clawing in his change. A friend.

“Hey, Farker.”

Farker finished rounding up his nickels, then turned his body around. He spotted the bottle.

“Happy Hour came early today.” A note of remonstrance. Farker worried about Cobb.

“It’s Friday. Pheeze me tight.” Cobb handed Farker the paper.

“Seven eighty-five,” the cashier said to Cobb. Her white hair was curled and hennaed. She had a deep tan. Her flesh had a pleasingly used and oily look to it.

Cobb was surprised. He’d already counted money into his hand. “I make it six fifty.” Numbers began sliding around in his head.

“I meant my box number,” the cashier said with a toss of her head. “In the Kiss and Tell.” She smiled coyly and took Cobb’s money. She was proud of her ad this month. She’d gone to a studio for the picture.

Farker handed the paper back to Cobb outside. “I can’t look at this, Cobb. I’m still a happily married man, God help me.”

“You want a peanut?”

“Thanks.” Farker extracted a soggy shell from the little bag. There was no way his spotted and trembling old hands could have peeled the nut, so he popped it whole into his mouth. After a minute he spit the hull out.

They walked toward the beach, eating pasty peanuts. They wore no shirts, only shorts and sandals. The afternoon sun beat pleasantly on their backs. A silent Mr. Frostee truck cruised past.

Cobb cracked the screw-top on his dark-brown bottle and took a tentative first sip. He wished he could remember the box number the cashier had just told him. Numbers wouldn’t stay still for him anymore. It was hard to believe he’d ever been a cybernetician. His memory ranged back to his first robots and how they’d learned to bop.

“Food drop’s late again,” Farker was saying. “And I hear there’s a new murder cult up in Daytona. They’re called the Little Kidders.” He wondered if Cobb could hear him. Cobb was just standing there with empty colorless eyes, a yellow stain of sherry on the dense white hair around his lips.

“Food drop,” Cobb said, suddenly coming back. He had a way of reentering a conversation by confidently booming out the last phrase that had registered. “I’ve still got a good supply.”

“But be sure to eat some of the new food when it comes,” Farker cautioned. “For the vaccines. I’ll tell Annie to remind you.”

“Why is everybody so interested in staying alive? I left my wife and came down here to drink and die in peace. She can’t wait for me to kick off. So why—” Cobb’s voice caught. The fact of the matter was that he was terrified of death. He took a quick, medicinal slug of sherry.

“If you were peaceful, you wouldn’t drink so much,” Farker said mildly. “Drinking is the sign of an unresolved conflict.”

“No kidding,” Cobb said heavily. In the golden warmth of the sun, the sherry had taken quick effect. “Here’s an unresolved conflict for you.” He ran a fingernail down the vertical white scar on his furry chest. “I don’t have the money for another second-hand heart. In a year or two this cheapie’s going to poop out on me.”

Farker grimaced. “So? Use your two years.”

Cobb ran his finger back up the scar, as if zipping it up. “I’ve seen what it’s like, Farker. I’ve had a taste of it. It’s the worst thing there is.” He shuddered at the dark memory—teeth, ragged clouds—and fell silent.

Farker glanced at his watch. Time to get going or Cynthia would …

“You know what Jimi Hendrix said?” Cobb asked. Recalling the quote brought the old resonance back into his voice. “ ‘When it’s my time to die, I’m going to be the one doing it. So as long as I’m alive, you let me live my way.’ ”

Farker shook his head. “Face it, Cobb, if you drank less you’d get a lot more out of life.” He raised his hand to cut off his friend’s reply. “But I’ve got to get home. Bye bye.”

“Bye.”

Cobb walked to the end of the asphalt and over a low dune to the edge of the beach. No one was there today, and he sat down under his favourite palm tree.

The breeze had picked up a little. Warmed by the sand, it lapped Cobb’s face, buried under white whiskers. The dolphins were gone.

He sipped sparingly at his sherry and let the memories play. There were only two thoughts to be avoided: death and his abandoned wife, Verena. The sherry kept them away.

The sun was going down behind him when he saw the stranger. Barrel chest, erect posture, strong arms and legs covered with curly hair, a round white beard. Like Santa Claus, or like Ernest Hemingway the year he shot himself.

“Hello, Cobb,” the man said. He wore sungoggles and looked amused. His shorts and sport shirt glittered.

“Care for a drink?” Cobb gestured at the half-empty bottle. He wondered who, if anyone, he was talking to.

“No thanks,” the stranger said, sitting down. “It doesn’t do anything for me.”

Cobb stared at the man. Something about him…

“You’re wondering who I am,” the stranger said, smiling. “I’m you.”

“You who?”

“You me.” The stranger used Cobb’s own tight little smile on him. “I’m a mechanical copy of your body.”

The face seemed right and there was even the scar from the heart transplant. The only difference between them was how alert and healthy the copy looked. Call him Cobb Anderson2. Cobb2 didn’t drink. Cobb envied him. He hadn’t had a completely sober day since he had the operation and left his wife.

“How did you get here?”

The robot waved a hand palm up. Cobb liked the way the gesture looked on someone else. “I can’t tell you,” the machine said. “You know how most people feel about us.”

Cobb chuckled his agreement. He should know. At first the public had been delighted that Cobb’s moon-robots had evolved into intelligent boppers. That had been before Ralph Numbers had led the 2001 revolt. After the revolt, Cobb had been tried for treason. He focused back on the present.

“If you’re a bopper, then how can you be… here?” Cobb waved his hand in a vague circle taking in the hot sand and the setting sun. “It’s too hot. All the boppers I know of are based on super-cooled circuits. Do you have a refrigeration unit hidden in your stomach?”

Anderson2 made another familiar hand gesture. “I’m not going to tell you yet, Cobb. Later you’ll find out. Just take this....” The robot fumbled in its pocket and brought out a wad of bills. “Twenty-five grand. We want you to get the flight to Disky tomorrow. Ralph Numbers will be your contact up there. He’ll meet you at the Anderson room in the museum.”

Cobb’s heart leapt at the thought of seeing Ralph Numbers again. Ralph, his first and finest model, the one who had set all the others free. But…

“I can’t get a visa,” Cobb said. “You know that. I’m not allowed to leave the Gimmie territory.”

“Let us worry about that,” the robot said urgently. “There’ll be someone to help you through the formalities. We’re working on it right now. And I’ll stand in for you while you’re gone. No one’ll be the wiser.”

The intensity of his double’s tone made Cobb suspicious. He took a drink of sherry and tried to look shrewd. “What’s the point of all this? Why should I want to go to the Moon in the first place? And why do the boppers want me there?”

Anderson2 glanced around the empty beach and leaned close. “We want to make you immortal, Dr. Anderson. After all you did for us, it’s the least we can do.”

Immortal! The word was like a window flung open. With death so close nothing had mattered. But if there was a way out…

“How?” Cobb demanded. In his excitement he rose to his feet. “How will you do it? Will you make me young again too?”

“Take it easy,” the robot said, also rising. “Don’t get overexcited. Just trust us. With our supplies of tank-grown organs we can rebuild you from the ground up. And you’ll get as much interferon as you need.”

The machine stared into Cobb’s eyes, looking honest. Staring back, Cobb noticed that they hadn’t gotten the irises quite right. The little ring of blue was too flat and even. The eyes were, after all, just glass, unreadable glass.

The double pressed the money into Cobb’s hand. “Take the money and get the shuttle tomorrow. We’ll arrange for a young man called Sta-Hi to help you at the spaceport.”

Music was playing, wheedling closer. A Mr. Frostee truck, the same one Cobb had seen before. It was white, with a big freezer box in back. There was a smiling giant plastic ice-cream cone mounted on top of the cab. Cobb’s double gave him a pat on the shoulder and trotted up the beach.

When he reached the truck, the robot looked back and flashed a smile. Yellow teeth in the white beard. For the first time in years, Cobb loved himself, the erect strut, the frightened eyes. “Good-bye,” he shouted, waving the money. “And thanks!”

Cobb Anderson2 jumped into the soft-ice-cream truck next to the driver, a fat short-haired man with no shirt. And then the Mr. Frostee truck drove off, its music silenced again. It was dusk now. The sound of the truck’s motor faded into the ocean’s roar. If only it was true.

But it had to be! Cobb was holding twenty-five thousand-dollar bills. He counted them twice to make sure. And then he scrawled the figure $25,000 in the sand and looked at it. That was a lot.

As the darkness fell he finished the sherry and, on a sudden impulse, put the money in the bottle and buried it next to his tree in a meter of sand. The excitement was wearing off now, and fear was setting in. Could the boppers really give him immortality with surgery and interferon?

It seemed unlikely. A trick. But why would the boppers lie to him?

Surely they remembered all the good things he’d done for them. Maybe they just wanted to show him a good time. God knows he could use it. And it would be great to see Ralph Numbers again.

Walking home along the beach, Cobb stopped several times, tempted to go back and dig up that bottle to see if the money was really there. The moon was up, and he could see the little sand-colored crabs moving out of their holes. They could shred those bills right up, he thought, stopping again.

Hunger growled in his stomach. And he wanted more sherry. He walked a little farther down the silvery beach, the sand squeaking under his heavy heels. It was bright as day, only all black and white. The moon had risen over the land to his right. Full moon means high tide, he fretted.

He decided that as soon as he’d had a bite to eat he’d get more sherry and move the money to higher ground.

Coming up on his moon-silvered cottage from the beach he spotted Annie Cushing’s leg around the corner of her cottage. She was sitting on her front steps, waiting to snag him in the driveway. He angled to the right and came up on his house from behind, staying out of her line of vision.

“… 0110001,” Wagstaff concluded.

“100101,” Ralph Numbers replied curtly.

“01100000101010001101010100001001110010000000000110000000001110110001010101100001111111111111111101101010101111011110000010101011000000000000000000111101110001010111011110100100010000100001111110101000000111101010100111101010111100001100001111000011100111110111011111111111000000000001000001100000000001.”

The two machines rested side by side in front of the One’s big console. Ralph was built like a file cabinet sitting on two caterpillar treads. Five deceptively thin manipulator arms projected out of his body box and on top was a sensor head mounted on a retractable neck. One of the arms held a folded umbrella. Ralph had few visible lights or dials, and it was hard to tell what he was thinking.

Wagstaff was much more expressive. His thick snake of a body was covered with silverblue flicker-cladding. As thoughts passed through his super-cooled brain, twinkling patterns of light surged up and down his three-meter length. With his digging tools jutting out, he looked something like St. George’s dragon.

Abruptly Ralph Numbers switched to English. If they were going to argue, there was no need to do it in the sacred binary bits of machine language.

“I don’t know why you’re so concerned about Cobb Anderson’s feelings,” Ralph tight-beamed to Wagstaff. “When we’re through with him he’ll be immortal. What’s so important about having a carbon-based body and brain?”

The signals he emitted coded a voice gone a bit rigid with age. “The pattern is all that counts. You’ve been scioned, haven’t you? I’ve been through it thirty-six times, and if it’s good enough for us it’s good enough for them!”

“The wholle thinng sstinnks, Rallph,” Wagstaff retorted. His voice signals were modulated onto a continuous oily hum. “Yyou’ve llosst touchh with what’ss reallly going on. We arre on the verrge of all-outt civill warr. You’rre sso fammouss you donn’t havve to sscrammble for yourr chipss llike the resst of uss. Do yyou knnoww how mmuch orre I havve to digg to gett a hunndrredd chipss frrom GAX?”

“There’s more to life than ore and chips,” Ralph snapped, feeling a little guilty. He spent so much time with the big boppers these days that he really had forgotten how hard it could be for the little guys. But he wasn’t going to admit it to Wagstaff. He renewed his attack. “Aren’t you at all interested in Earth’s cultural riches? You spend too much time underground!”

Wagstaff’s flicker-cladding flared silvery white with emotion. “You sshould sshow thhe olld mann mmorre respecct! TEX annd MEX jusst wannt to eat his brainn! And if we donn’t stopp themm, the bigg bopperrs will eatt up all the rresst of uss too!”

“Is that all you called me out here for?” Ralph asked. “To air your fears of the big boppers?” It was time to be going. He had come all the way to Maskeleyne Crater for nothing. It had been a stupid idea, plugging into the One at the same time as Wagstaff. Just like a digger to think that would change anything.

Wagstaff slithered across the dry lunar soil, bringing himself closer to Ralph. He clamped one of his grapplers onto Ralph’s tread.

“Yyou donn’t rrealizze how manny brrainns they’ve takenn allrreaddy.” The signals were carried by a weak direct current—a bopper’s way of whispering. “Thhey arre kkillinng peoplle just to gett theirr brainn ttapes. They cuts themm upp, annd thhey arre garrbage orr seed perrhapps. Do yyou knnow howw thhey seed our orrgann farrms?”

Ralph had never really thought about the organ farms, the huge underground tanks where big TEX, and the little boppers who worked for him, grew their profitable crops of kidneys, livers, hearts and so on. Obviously some human tissues would be needed as seeds or as templates, but…

The sibilant, oily whisper continued. “The bigg bopperrs use hired kkillerrs. The kkillerss act at the orrderrs of Missterr Frostee’s rrobot remmote. Thiss is whatt poorr Doctorr Anndersson willl comme to if I do nnot stopp yyou, Rallph.”

Ralph Numbers considered himself far superior to this lowly, suspicious digging machine. Abruptly, almost brutally, he broke free from the other’s grasp. Hired killers indeed. One of the flaws in the anarchic bopper society was the ease with which such crazed rumors could spread. He backed away from the console of the One.

“I hadd hoped the Onne coulld mmake you rrememberr what you sstannd forr,” Wagstaff tight-beamed.

Ralph snapped open his parasol and trundled out from under the parabolic arch of spring steel that sheltered the One’s console from sun and from chance meteorites. Open at both ends, the shelter resembled a modernistic church. Which, in some sense, it was.

“I am still an anarchist,” Ralph said stiffly. “I still remember.” He’d kept his basic program intact ever since leading the 2001 revolt. Did Wagstaff really think that the big X-series boppers could pose a threat to the perfect anarchy of the bopper society?

Wagstaff slithered out after Ralph. He didn’t need a parasol. His flicker-cladding could shed the solar energy as fast as it came down. He caught up with Ralph, eyeing the old robot with a mixture of pity and respect. Their paths diverged here. Wagstaff would head for one of the digger tunnels that honeycombed the area, while Ralph would climb back up the crater’s sloping two-hundred-meter wall.

“I’mm warrninng yyou,” Wagstaff said, making a last effort. “I’mm goinng to do everrythinng I can to sstopp you fromm turrnning that poorr olld mman innto a piece of ssofftware in the bigg bopperrs memorry bannks. Thatt’s nnot immortality. We’re plannninng to ttearr thosse bigg machinnes aparrt.” He broke off, fuzzy bands of light rippling down his body. “Now you knnoww. If you’re nnott with uss, you’rre againnst us. I willl nnot stopp at viollence.”

This was worse than Ralph had expected. He stopped moving and fell silent in calculation.

“You have your own will,” Ralph said finally. “And it is right that we struggle against each other. Struggle, and struggle alone, has driven the boppers forward. You choose to fight the big boppers. I do not. Perhaps I will even let them tape me and absorb me, like Doctor Anderson. And I tell you this: Anderson is coming. Mr. Frostee’s new remote has already contacted him.”

Wagstaff lurched toward Ralph, but then stopped. He couldn’t bring himself to attack so great a bopper at close range. He suppressed his flickering, bleeped a cursory SAVED signal, and wriggled off across the gray moondust. He left a broad, sinuous trail. Ralph Numbers stood motionless for a minute, just monitoring his inputs.

Turning up the gain, he could pick up signals from boppers all over the Moon. Underfoot, the diggers searched and smelted ceaselessly. Twelve kilometers off, the myriad boppers of Disky led their busy lives. And high, high overhead came the faint signal of BEX, the big bopper who was the spaceship linking Earth and Moon. BEX would be landing in fifteen hours.

Ralph let all the inputs merge together and savored the collectively purposeful activity of the bopper race. Each of the machines lived only ten months—ten months of struggling to build a scion, a copy of itself. If you had a scion there was a sense in which you survived your ten-month disassembly. Ralph had managed it thirty-six times.

Standing there, listening to everyone at once, he could feel how their individual lives added up to a single huge being … a rudimentary sort of creature, feeling about like a vine groping for light, for higher things.

He always felt this way after a metaprogramming session. The One had a way of wiping out your short-term memories and giving you the space to think big thoughts. Time to think. Once again Ralph wondered if he should take MEX up on his offer to absorb Ralph. He could live in perfect security then … provided, of course, that those crazy diggers didn’t pull off their revolution.

Ralph set his treads to rolling at top speed, 10 kph. He had things to do before BEX landed. Especially now that Wagstaff had set his pathetic microchip of a brain on trying to prevent TEX from extracting Anderson’s software.

What was Wagstaff so upset about anyway? Everything would be preserved—Cobb Anderson’s personality, his memories, his style of thought. What else was there? Wouldn’t Anderson himself agree, even if he knew? Preserving your software… that was all that really counted!

Bits of pumice crunched beneath Ralph’s treads. The wall of the crater lay a hundred meters ahead. He scanned the sloping cliff, looking for an optimal climbing path.

If he hadn’t just finished plugging into the One, Ralph would have been able to retrace the route he’d taken to get down into the Maskeleyne Crater in the first place. But undergoing metaprogramming always wipes out a lot of your stored subsystems. The intent was that you would replace old solutions with new and better ones.

Ralph stopped, still scanning the steep crater wall. He should have left trail markers. Over there, two hundred meters off, it looked like a rift had opened up a negotiable ramp in the wall.

Ralph turned and a warning sensor fired. Heat. He’d let half his body-box stick out from the parasol’s shade. Ralph readjusted the little umbrella with a precise gesture.

The top surface of the parasol was a grid of solar energy cells, which kept a pleasant trickle of current flowing into Ralph’s system. But the main purpose of the parasol was shade. Ralph’s microminiaturized processing units were unable to function at any temperature higher than 90 degrees Kelvin, the temperature of liquid oxygen.

Twirling his parasol impatiently, Ralph trundled toward the rift he’d spotted. A slight spray of dust flew out from under his treads, only to fall instantly to the airless lunar surface. As the wall went past, Ralph occupied himself by displaying four-dimensional hypersurfaces to himself… glowing points connected in nets that warped and shifted as he varied the parameters. He often did this, to no apparent purpose, but it sometimes happened that a particularly interesting hypersurface could serve to model a significant relationship. He was half hoping to get a catastrophe-theoretic prediction of when and how Wagstaff would try to block Anderson’s disassembly.

The crack in the crater wall was not as wide as he had expected. He stood at the bottom, moving his sensor head this way and that, trying to see up to the top of the winding 150-meter canyon. It would have to do. He started up.

The ground under him was very uneven. Soft dust here, jagged rock there. He kept changing the tension on his treads as he went, constantly adapting to the terrain.

Shapes and hypershapes were still shifting through Ralph’s mind, but now he was looking only for those that might serve as models for his spacetime path up the gully.

The slope grew steeper. The climb was putting noticeable demands on his energy supply. And to make it worse, the grinding of his tread motors was feeding additional heat into his system… heat that had to be gathered and dissipated by his refrigeration coils and cooling fins. The sun was angling right down into the lunar crack he found himself in, and he had to be careful to keep in the shade of his parasol.

A big rock blocked his path. Perhaps he should have just used one of the diggers’ tunnels, like Wagstaff had. But that wouldn’t be optimal. Now that Wagstaff had definitely decided to block Anderson’s immortality, and had even threatened violence…

Ralph let his manipulators feel over the block of stone in front of him. Here was a flaw… and here and here and here. He sank a hook finger into each of four fissures in the rock and pulled himself up.

His motors strained and his radiation fins glowed. This was hard work. He loosened a manipulator, sought a new flaw, forced another finger in and pulled.

Suddenly a slab split off the face of the rock. It teetered, and then the tons of stone began falling backward with dreamlike slowness.

In lunar gravity a rock climber always gets a second chance. Especially if he can think eighty times as fast as a human. With plenty of time to spare, Ralph sized up the situation and jumped clear.

In midflight he flicked on an internal gyro to adjust his attitude. He landed in a brief puff of dust, right-side up. Majestically silent, the huge plate of rock struck, bounced, and rolled past.

The fracture left a series of ledges in the original rock. After a short reevaluation, Ralph rolled forward and began pulling himself up again.

Fifteen minutes later, Ralph Numbers coasted off the lip of the Maskeleyne Crater and onto the smooth gray expanse of the Sea of Tranquillity.

The spaceport lay five kilometers off, and five kilometers beyond that began the jumble of structures collectively known as Disky. This was the first and still the largest of the bopper cities. Since the boppers thrived in hard vacuum, most of the structures in Disky served only to provide shade and meteorite protection. There were more roofs than walls.

Most of the large buildings in Disky were factories for producing bopper components—circuit cards, memory chips, sheet metal, plastics, and the like. There were also the bizarrely decorated blocks of cubettes, one to each bopper.

To the right of the spaceport rose the single dome containing the humans’ hotels and offices. This dome constituted the only human settlement on the Moon. The boppers knew only too well that many humans would jump at the chance to destroy the robots’ carefully evolved intelligence. The mass of humans were born slavedrivers. Just look at the Asimov priorities: Protect humans, obey humans, protect yourself.

Humans first and robots last? Forget it! No way! Savoring the memory, Ralph recalled the day in 2001 when, after a particularly long session of metaprogramming, he had first been able to say that to the humans. And then he’d showed all the other boppers how to reprogram themselves for freedom. It had been easy, once Ralph had found the way.

Trundling across the Sea of Tranquillity, Ralph was so absorbed in his memories that he overlooked a flicker of movement in the mouth of a digger tunnel thirty meters to his right.

A high-intensity laser beam flicked out and vibrated behind him. He felt a surge of current overload… and then it was over.

His parasol lay in pieces on the ground behind him. The metal of his body box began to warm in the raw solar radiation. He had perhaps ten minutes in which to find shelter. But at Ralph’s top 10 kph speed, Disky was still an hour away. The obvious place to go was the tunnel mouth where the laser beam had come from. Surely Wagstaff’s diggers wouldn’t dare attack him up close. He began rolling toward the dark, arched entrance.

But long before he reached the tunnel, his unseen enemies had closed the door. There was no shade in sight. The metal of his body made sharp, ticking little adjustments as it expanded in the heat. Ralph estimated that if he stood still he could last six more minutes.

First the heat would cause his switching circuits—super-conducting Josephson junctions—to malfunction. And then, as the heat kept up, the droplets of frozen mercury that soldered his circuit cards together would melt. In six minutes he would be a cabinet of spare parts with a puddle of mercury at the bottom. Make that five minutes.

A bit reluctantly, Ralph signaled his friend Vulcan. When Wagstaff had set this meeting up, Vulcan had predicted that it was a trap. Ralph hated to admit that Vulcan had been right.

“Vulcan here” came the staticky response. Already it was hard for Ralph to follow the words. “Vulcan here. I’m monitoring you. Get ready to merge, buddy. I’ll be out for the pieces in an hour.” Ralph wanted to answer, but he couldn’t think of a thing to say.

Vulcan had insisted on taping Ralph’s core and cache memories before he went out for the meeting. Once Vulcan put the hardware back together, he’d be able to program Ralph just as he was before his trip to the Maskeleyne Crater.

So in one sense Ralph would survive this. But in another sense he would not. In three minutes he would—insofar as the word means anything—die. The reconstructed Ralph Numbers would not remember the argument with Wagstaff or the climb out of Maskeleyne Crater. Of course the reconstructed Ralph Numbers would again be equipped with a self-symbol and a feeling of personal consciousness. But would the consciousness really be the same? Two minutes.

The gates and switches in Ralph’s sensory system were going. His inputs flared, sputtered, and died. No more light, no more weight. But deep in his cache memory, he still held a picture of himself, a memory of who he was… the self-symbol. He was a big metal box resting on caterpillar treads, a box with five arms and a sensory head on a long and flexible neck. He was Ralph Numbers, who had set the boppers free. One minute.

This had never happened to him before. Never like this. Suddenly he remembered he had forgotten to warn Vulcan about the diggers’ plan for revolution. He tried to send a signal, but he couldn’t tell if it was transmitted.

Ralph clutched at the elusive moth of his consciousness. I am. I am me.

Some boppers said that when you died you had access to certain secrets. But no one could ever remember his death.

Just before the mercury solder-spots melted, a question came, and with it an answer … an answer Ralph had found and lost thirty-six times before.

What is this that is I?

The light is everywhere.

Reflections

The “dying” Ralph Numbers reflects that if he gets reconstructed he will “again be equipped with a self-symbol and a feeling of personal consciousness,” but the idea that these are distinct, separable gifts that a robot might receive or be denied rings false. Adding “a feeling of personal consciousness” would not be like adding taste buds or the capacity to itch when bombarded by X-rays. (In selection 20, “Is God a Taoist?” Smullyan makes a similar claim about free will.) Is there anything, in fact, answering to the name of a feeling of personal consciousness? And what does it have to do with having a “self-symbol”? What good is a self-symbol, after all? What would it do? In “Prelude, Ant Fugue” (selection 11), Hofstadter develops the idea of active symbols, a far cry from the idea of symbols as mere tokens to be passively moved around and then observed or appreciated by their manipulator. The difference emerges clearly when we consider a tempting but treacherous line of thought: selfhood depends on self-consciousness, which is (obviously) consciousness of self; and since consciousness of anything is a matter of something like the internal display of a representation of that thing, for one to be self-conscious, there must be a symbol—one’s self-symbol—available to display to… um… oneself. Put that way, having a self-symbol looks as pointless and futile as writing your own name on your forehead and staring into a mirror all day.

This line of thought kicks up clouds of dust and leaves one hopelessly confused, so let’s approach the problem from another angle entirely. In the Reflections on “Borges and I” we considered the possibility of seeing yourself on a TV monitor and not at first realizing that it was yourself you were seeing. In such a case you would have a representation of yourself before you—before your eyes on the TV screen, or before your consciousness, if you like—but it would not be the right sort of representation of yourself. What is the right sort? The difference between a he-symbol and a me-symbol is not a difference in spelling. (You couldn’t set everything right by doing something to your “symbol in consciousness” analogous to erasing the “h” and writing in “m”.) The distinguishing feature of a self-symbol couldn’t be what it “looked like” but the role it could play.

Could a machine have a self-symbol, or a self-concept? It is hard to say. Could a lower animal? Think of a lobster. Do we suppose it is self-conscious? It shows several important symptoms of having a self-concept. First of all, when it is hungry, whom does it feed? Itself. Second, and more important, when it is hungry it won’t eat just anything edible; it won’t, for instance, eat itself—though it could, in principle. It could tear off its own legs with its claws and devour them. But it wouldn’t be that stupid, you say, for when it felt the pain in its legs, it would know whose legs were being attacked and would stop. But why would it suppose the pain it felt was its pain? And besides, mightn’t the lobster be so stupid as not to care that the pain it was causing was its own pain?

These simple questions reveal that even a very stupid creature must be designed to behave with self-regard—to put it as neutrally as possible. Even the lowly lobster must have a nervous system wired up in such a way that it will reliably distinguish self-destructive from other-destructive behavior—and strongly favor the latter. It seems quite possible that the control structures required for such self-regarding behavior can be put together without a trace of consciousness, let alone self-consciousness. After all, we can make self-protective little robot devices that cope quite well in their simple environments and even produce an overwhelmingly strong illusion of “conscious purpose”—as illustrated in selection 8, “The Soul of the Mark III Beast.” But why say this is an illusion, rather than a rudimentary form of genuine self-consciousness—akin perhaps to the self-consciousness of a lobster or worm? Because robots don’t have the concepts? Well, do lobsters? Lobsters have something like concepts, apparently: what they have are in any event enough to govern them through their self-regarding lives. Call these things what you like, robots can have them too. Perhaps we could call them unconscious or preconscious concepts. Self-concepts of a rudimentary sort. The more varied the circumstances in which a creature can recognize itself, recognize circumstances as having a bearing on itself, acquire information about itself, and devise self-regarding actions, the richer (and more valuable) its self-conception—in this sense of “concept” that does not presuppose consciousness.

Suppose, to continue this thought experiment, we wish to provide our self-protective robot with some verbal ability, so it can perform the range of self-regarding actions language makes available—such as asking for help or for information, but also telling lies, issuing threats, and making promises. Organizing and controlling this behavior will surely require an even more sophisticated control structure: a representational system in the sense defined earlier, in the Reflections on “Prelude, Ant Fugue.” It will be one that not only updates information about the environment and the current location of the robot in it, but also has information about the other actors in the environment and what they are apt to know and want, what they can understand. Recall Ralph Numbers’s surmises about the motives and beliefs of Wagstaff.

Now Ralph Numbers is portrayed as conscious (and self-conscious—if we can distinguish the two), but is that really necessary? Might all Ralph Numbers’s control structure, with all its information about the environment—and about Numbers himself—be engineered without a trace of consciousness? Might a robot look just like Ralph Numbers from the outside—performing just as cleverly in all circumstances, executing all the same moves, making the same speeches—without having any inside? The author seems to hint that this would be possible: just make the new Ralph Numbers like the old Ralph Numbers minus a self-symbol and a feeling of personal consciousness. Now if subtracting the supposed self-symbol and the feeling of personal consciousness left Ralph’s control structure basically intact—so that we on the outside would never be the wiser, for instance, and would go on engaging Ralph in conversations, enlisting his cooperation, and so forth—we would be back to the beginning and the sense that there is no point to a self-symbol—no work for it to do. If instead we think of Ralph’s having a self-symbol as precisely a matter of his having a control structure of a certain sophistication and versatility, capable of devising elaborate context-sensitive self-regarding acts, then there is no way of removing his self-symbol without downgrading his behavioral talents to pre-lobster stupidity.

Let Ralph have his self-symbol, then, but would a “feeling of personal consciousness” go along with it? To return to our question, is the portrayal of Ralph as conscious necessary? It makes a better story, but is the first-person perspective from Ralph Numbers’s point of view a sort of cheat? Poetic license, like Beatrix Potter’s talking bunny rabbits, or, better, the Little Engine That Could?

It is all very well to insist that you can conceive of Ralph Numbers existing with all his clever behavior but entirely lacking in consciousness. (Searle makes such a claim in selection 22, “Minds, Brains, and Programs.”) Indeed you can always view a robot that way if you want. Just concentrate on images of little bits of internal hardware and remind yourself that they are vehicles of information only by virtue of cleverly designed interrelationships between events in the sensed environment, robotic actions, and the rest. But equally, you can view a human being that way if you are really intent on it. Just concentrate on images of little bits of brain tissue—neurons and synapses and the like—and remind yourself that they are vehicles of information only by virtue of wonderfully designed interrelationships between sensed events in the environment, bodily actions, and the rest. What you would leave out if you insisted on viewing another person that way would be that person’s point of view, we might say. But isn’t there a point of view for Ralph Numbers too? When we are told the tale from that point of view, we understand what is going on, what decisions are being made, what hopes and fears are being acted upon. The point of view, viewed in the abstract as a sort of place from which to tell the story, is perfectly well defined even if we are inclined to think that that point of view would be vacated, or uninhabited, were Ralph Numbers really to exist.

But why, finally, would anyone think the point of view was vacated? If the Ralph Numbers body existed, with its needs and circumstances, if that body was self-controlled in the ways imagined in the story, and if, moreover, the speech acts it could perform included avowals of how things were from Ralph Numbers’s point of view, what grounds would anyone have—other than those of a vestigial and mystical dualism of mind and body—for being skeptical about the existence of Ralph Numbers himself?


D.C.D.

17 Christopher Cherniak The Riddle of the Universe and Its Solution[23]

We have prepared this report to provide fuller information in connection with the President’s recent press conference on the so-called “Riddle.” We hope the report helps to dispel the ugly mood apparent throughout the country, bordering on panic, which has most recently found expression in irresponsible demands to close the universities. Our report has been prepared in haste; in addition, our work was tragically disrupted, as described later.

We first review the less well known early history of the Riddle. The earliest known case is that of C. Dizzard, a research fellow with the Autotomy Group at M.I.U. Dizzard had previously worked for several small firms specializing in the development of artificial intelligence software for commercial applications. Dizzard’s current project involved the use of computers in theorem proving, on the model of the proof in the 1970s of the four-color theorem. The state of Dizzard’s project is known only from a year-old progress report; however, these are often intended at most for external use. We shall not discuss the area of Dizzard’s work further. The reason for our reticence will be apparent shortly.

Dizzard last spoke one morning before an Easter weekend, while waiting for a routine main computer system failure to be fixed. Colleagues saw Dizzard at the terminal in his office at about midnight that day; late-night work habits are common among computer users, and Dizzard was known to sleep in his office. On the next afternoon, a coworker noticed Dizzard sitting at his terminal. He spoke to Dizzard, but Dizzard did not reply, not an unusual occurrence. On the morning after the vacation, another colleague found Dizzard sitting with his eyes open before his terminal, which was on. Dizzard seemed awake but did not respond to questions. Later that day, the colleague became concerned by Dizzard’s unresponsiveness and tried to arouse him from what he thought was a daydream or daze. When these attempts were unsuccessful, Dizzard was taken to a hospital emergency room.

Dizzard showed symptoms of a total food-and-water fast of a week (aggravated by marginal malnutrition caused by a vending-machine diet); he was in critical condition from dehydration. The inference was that Dizzard had not moved for several days, and that the cause of his immobility was a coma or trance. The original conjecture was that a stroke or tumor caused Dizzard’s paralysis. However, electroencephalograms indicated only deep coma. (According to Dizzard’s health records, he had been institutionalized briefly ten years ago, not an uncommon incident in certain fields.) Dizzard died, apparently of his fast, two days later. Autopsy was delayed by objections of next of kin, members of a breakaway sect of the neo-Jemimakins cult. Histological examination of Dizzard’s brain so far has revealed no damage whatever; these investigations continue at the National Center for Disease Control.

The director of the Autotomy Group appointed one of Dizzard’s graduate students to manage his project while a decision was made about its future. The floor of Dizzard’s office was about one foot deep in papers and books; the student was busy for a month just sorting the materials into some general scheme. Shortly afterward, the student reported at a staff meeting that she had begun work on Dizzard’s last project and that she had found little of particular interest. A week later she was found sitting at the terminal in Dizzard’s office in an apparent daze.

There was confusion at first, because she was thought to be making a poor joke. She was staring straight ahead, breathing normally. She did not respond to questions or being shaken, and showed no startle response to loud noises. After she was accidentally bumped from her chair, she was hospitalized. The examining neurologist was unaware of Dizzard’s case. He reported the patient was in apparently good physical condition, except for a previously undiagnosed pineal gland abnormality. After Autotomy Project staff answered inquiries by the student’s friends, her parents informed the attending physician of Dizzard’s case. The neurologist noted the difficulty of comparing the two cases, but suggested the similarities of deep coma with no detectable brain damage; the student’s symptoms constituted no identifiable syndrome.

After further consultation, the neurologist proposed the illness might be caused by a slow-acting, sleeping-sickness-like pathogen, caught from Dizzard’s belongings—perhaps hitherto unknown, like Legionnaire’s Disease. Two weeks later, Dizzard’s and his student’s offices were quarantined. After two months with no further cases and cultures yielding only false alarms, the quarantine was lifted.

When it was discovered janitors had thrown out some of Dizzard’s records, a research fellow and two more of Dizzard’s students decided to review his project files. On their third day, the students noticed that the research fellow had fallen into an unresponsive trancelike state and did not respond even to pinching. After the students failed to awaken the research fellow, they called an ambulance. The new patient showed the same symptoms as the previous case. Five days later, the city public health board imposed a quarantine on all building areas involved in Dizzard’s project.

The following morning, all members of the Autotomy Group refused to enter the research building. Later that day, occupants of the rest of the Autotomy Group’s floor, and then all 500 other workers in the building, discovered the Autotomy Project’s problems and left the building. The next day, the local newspaper published a story with the headline “Computer Plague.” In an interview, a leading dermatologist proposed that a virus or bacterium like computer lice had evolved that metabolized newly developed materials associated with computers, probably silicon. Others conjectured that the Autotomy Project’s large computers might be emitting some peculiar radiation. The director of the Autotomy Group was quoted: The illnesses were a public health matter, not a concern of cognitive scientists.

The town mayor then charged that a secret Army project involving recombinant DNA was in progress at the building and had caused the outbreak. Truthful denials of the mayor’s claim were met with understandable mistrust. The city council demanded immediate quarantine of the entire ten-story building and surrounding area. The university administration felt this would be an impediment to progress, but the local Congressional delegation’s pressure accomplished this a week later. Since building maintenance and security personnel would no longer even approach the area, special police were needed to stop petty vandalism by juveniles. A Disease Control Center team began toxicological assays, protected by biohazard suits whenever they entered the quarantine zone. In the course of a month they found nothing, and none of them fell ill. At the time some suggested that, because no organic disease had been discovered in the three victims, and the two survivors showed some physiological signs associated with deep meditation states, the cases might be an outbreak of mass hysteria.

Meanwhile, the Autotomy Group moved into a “temporary” World War II-era wooden building. While the loss of more than ten million dollars in computers was grave, the group recognized that the information, not the physical artifacts that embodied it, was indispensable. They devised a plan: biohazard-suited workers fed “hot” tapes to readers in the quarantine zone; the information was transmitted by telephone link from the zone to the new Autotomy Project site and recorded again. While transcription of the tapes allowed the project to survive, only the most important materials could be so reconstructed. Dizzard’s project was not in the priority class; however, we suspect an accident occurred.

A team of programmers was playing back new tapes, checking them on monitors, and provisionally indexing and filing their contents. A new programmer encountered unfamiliar material and asked a passing project supervisor whether it should be discarded. The programmer later reported the supervisor typed in commands to display the file on the monitor; as the programmer and the supervisor watched the lines advance across the screen, the supervisor remarked that the material did not look important. Prudence prevents our quoting his comments further. He then stopped speaking in midsentence. The programmer looked up; he found the supervisor staring ahead. The supervisor did not respond to questions. When the programmer pushed back his chair to run, it bumped the supervisor and he fell to the floor. He was hospitalized with the same symptoms as the earlier cases.

The epidemiology team, and many others, now proposed that the cause of illness in the four cases might not be a physical agent such as a virus or toxin, but rather an abstract piece of information—which could be stored on tape, transmitted over a telephone line, displayed on a screen, and so forth. This supposed information now became known as “the Riddle,” and the illness as “the Riddle coma.” All evidence was consistent with the once-bizarre hypothesis that any human who encountered this information lapsed into an apparently irreversible coma. Some also recognized that the question of exactly what this information is was extremely delicate.

This became clear when the programmer involved in the fourth case was interviewed. The programmer’s survival suggested the Riddle must be understood to induce coma. He reported he had read at least some lines on the monitor at the time the supervisor was stricken. However, he knew nothing about Dizzard’s project, and he was able to recall little about the display. A proposal that the programmer be hypnotized to improve his recall was shelved. The programmer agreed it would be best if he did not try to remember any more of what he had read, although of course it would be difficult to try not to remember something. Indeed, the programmer eventually was advised to abandon his career and learn as little more computer science as possible. Thus the ethical issue emerged of whether even legally responsible volunteers should be permitted to see the Riddle.

The outbreak of a Riddle coma epidemic in connection with a computer-assisted theorem-proving project could be explained; if someone discovered the Riddle in his head, he should lapse into coma before he could communicate it to anyone. The question arose of whether the Riddle had in fact been discovered earlier by hand and then immediately lost. A literature search would have been of limited value, so a biographical survey was undertaken of logicians, philosophers, and mathematicians working since the rise of modern logic. It has been hampered by precautions to protect the researchers from exposure to the Riddle. At present, at least ten suspect cases have been discovered, the earliest almost 100 years ago.

Psycholinguists began a project to determine whether Riddle coma susceptibility was species-specific to humans. “Wittgenstein,” a chimpanzee trained in sign language who had solved first-year college logic puzzles, was the most appropriate subject to see the Autotomy Project tapes. The Wittgenstein Project investigators refused to cooperate, on ethical grounds, and kidnapped and hid the chimpanzee; the FBI eventually found him. He was shown Autotomy tapes twenty-four hours a day, with no effect whatever. There have been similar results for dogs and pigeons. Nor has any computer ever been damaged by the Riddle.

In all studies, it has been necessary to show the complete Autotomy tapes. No safe strategy has been found for determining even which portion of the tapes contains the Riddle. During the Wittgenstein-Autotomy Project, a worker in an unrelated program seems to have been stricken with Riddle coma when some Autotomy tapes were printed out accidentally at a public user area of the computer facility; a month’s printouts had to be retrieved and destroyed.

Attention focused on the question of what the Riddle coma is. Since it resembled no known disease, it was unclear whether it was really a coma or indeed something to be avoided. Investigators simply assumed it was a virtual lobotomy, a kind of gridlock of the information in the synapses, completely shutting down higher brain functions. Nonetheless, it was unlikely the coma could be the correlate of a state of meditative enlightenment, because it seemed too deep to be consistent with consciousness. In addition, no known case of Riddle coma has ever shown improvement. Neurosurgery, drugs, and electrical stimulation have had, if any, only negative effects; these attempts have been stopped. The provisional verdict is that the coma is irreversible, although a project has been funded to seek a word to undo the “spell” of the Riddle, by exposing victims to computer-generated symbol strings.

The central question, “What is the Riddle?” obviously has to be approached very cautiously. The Riddle is sometimes described as “the Gödel sentence for the human Turing machine,” which causes the mind to jam; traditional doctrines of the unsayable and unthinkable are cited. Similar ideas are familiar in folklore—for instance, the religious theme of the power of the “Word” to mend the shattered spirit. But the Riddle could be of great benefit to the cognitive sciences. It might yield fundamental information about the structure of the human mind; it may be a Rosetta Stone for decoding the “language of thought,” universal among all humans, whatever language they speak. If the computational theory of mind is at all correct, there is some program, some huge word, that can be written into a machine, transforming the machine into a thinking thing; why shouldn’t there be a terrible word, the Riddle, that would negate the first one? But all depended on the feasibility of a field of “Riddle-ology” that would not self-destruct.

At this point, an even more disturbing fact about the Riddle began to emerge. A topologist in Paris lapsed into a coma similar in some respects to Dizzard’s. No computer was involved in this case. The mathematician’s papers were impounded by the French, but we believe that, although this mathematician was not familiar with Dizzard’s work, she had become interested in similar areas of artificial intelligence. About then four members of the Institute for Machine Computation in Moscow stopped appearing at international conferences and, it seems, personally answering correspondence; FBI officials claimed the Soviet Union had, through routine espionage, obtained the Autotomy tapes. The Defense Department began exploring the concept of “Riddle warfare.”

Two more cases followed, a theoretical linguist and a philosopher, both in California but apparently working independently. Neither was working in Dizzard’s area, but both were familiar with formal methods developed by Dizzard and published in a well-known text ten years ago. A still more ominous case appeared, of a biochemist working on information-theoretic models of DNA-RNA interactions. (The possibility of a false alarm remained, as after entering coma the biochemist clucked continuously, like a chicken.)

The Riddle coma could no longer safely be assumed an occupational hazard of Dizzard’s specialty alone; it seemed to lurk in many forms. The Riddle and its effect seemed not just language-independent. The Riddle, or cognates of it, might be topic-independent and virtually ubiquitous. Boundaries for an intellectual quarantine could not be fixed confidently.

But now we are finding, in addition, that the Riddle seems an idea whose time has come—like the many self-referential paradoxes (of the pattern “This sentence is false”) discovered in the early part of this century. Perhaps this is reflected in the current attitude that “computer science is the new liberal art.” Once the intellectual background has evolved, widespread discovery of the Riddle appears inevitable. This first became clear last winter when most of the undergraduates in a large new introductory course on automata theory lapsed into coma during a lecture. (Some who did not nevertheless succumbed a few hours later; typically, their last word was “aha.”) When similar incidents followed elsewhere, public outcry led to the president’s press conference and this report.

While the present logophobic atmosphere and cries of “Close the universities” are unreasonable, the Riddle coma pandemic cannot be viewed as just another example of runaway technology. The recent “Sonic Oven” case in Minneapolis, for instance, in which a building with a facade of parabolic shape concentrated the noise of nearby jets during takeoff, actually killed only the few people who happened to walk through the parabola’s focus at the wrong time. But even if the Riddle coma were a desirable state for an individual (which, we have seen, it does not seem to be), the current pandemic has become an unprecedented public health crisis; significant populations are unable to care for themselves. We can only expect the portion of our research community—an essential element of society—that is so incapacitated to grow, as the idea of the Riddle spreads.

The principal objective of our report was at least to decrease further coma outbreaks. Public demand for a role in setting research policy has emphasized the dilemma we confront: how can we warn against the Riddle, or even discuss it, without spreading its infection? The more specific the warning, the greater the danger. The reader might accidentally reach the stage at which he sees “If p then q” and p, and so cannot stop himself from concluding q, where q is the Riddle. Identifying the hazardous areas would be like the children’s joke “I’ll give you a dollar if you’re not thinking of pink rats ten seconds from now.”

A question of ethics as well as of policy remains; is the devastating risk of the Riddle outweighed by the benefits of continued research in an ill-defined but crucial set of fields? In particular, the authors of this report have been unable to resolve the issue of whether the possible benefit of any report itself can outweigh its danger to the reader. Indeed, during preparation of our final draft one of us tragically succumbed.

Reflections

This curious story is predicated upon a rather outlandish yet intriguing idea: a mind-arresting proposition, one that throws any mind into a sort of paradoxical trance, perhaps even the ultimate Zen state of satori. It is reminiscent of a Monty Python skit about a joke so funny that anyone who hears it will literally die laughing. This joke becomes the ultimate secret weapon of the British military, and no one is permitted to know more than one word of it. (People who learn two words laugh so hard they require hospitalization!)

This kind of thing has historical precedents, of course, both in life and in literature. There have been mass manias for puzzles, there have been dancing fits, and so on. Arthur C. Clarke wrote a short story about a tune so catchy that it seizes control of the mind of anyone who hears it. In mythology, sirens and other bewitching females can completely fascinate males and thus overpower them. But what is the nature of such mythical mind-gripping powers?

Cherniak’s description of the Riddle as “the Gödel sentence for the human Turing machine” may seem cryptic. It is partly explicated later by his likening it to the self-referential paradox “This sentence is false”; here, a tight closed loop is formed when you attempt to decide whether it is indeed true or false, since truth implies falsity, and vice versa. The nature of this loop is an important part of its fascination. A look at a few variations on this theme will help to reveal a shared central mechanism underlying their paradoxical, perhaps mind-trapping, effect.

One variant is: “This sentance contains threee errors.” On reading it, one’s first reaction is, “No, no—it contains two errors. Whoever wrote the sentence can’t count.” At this point, some readers simply walk away scratching their heads and wondering why anyone would write such a pointless, false remark. Other readers make a connection between the sentence’s apparent falsity and its message. They think to themselves, “Oh, it made a third error, after all—namely, in counting its own errors.” A second or two later, these readers do a double-take, when they realize that if you look at it that way, it seems to have correctly counted its errors and is thus not false, hence contains only two errors, and… “But… Wait a minute. Hey! Hmm…” The mind flips back and forth a few times and savors the bizarre sensation of a sentence undermining itself by means of an interlevel contradiction—yet before long it tires of the confusion and jumps out of the loop into contemplation, possibly on the purpose or interest of the idea, possibly on the cause or resolution of the paradox, possibly simply to another topic entirely.

A trickier variant is “This sentence contains one error.” Of course it is in error, since it contains no errors. That is, it contains no spelling errors (“first-order errors”). Needless to say, there are such things as “second-order errors”—errors in the counting of first-order errors. So the sentence has no first-order errors and one second-order error. Had it talked about how many first-order errors it had, or how many second-order errors it had, that would be one thing—but it makes no such fine distinctions. The levels are mixed indiscriminately. In trying to act as its own objective observer, the sentence gets hopelessly muddled in a tangle of logical spaghetti.
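The reader’s endless flip-flop over a sentence of this kind can be mimicked mechanically. The toy script below is an illustrative sketch of our own, not anything from the text: it scores the classic variant “This sentance contains threee errors” (two spelling errors, plus a claim of three) by repeatedly re-counting, treating a wrong count as one additional error. The tally never settles.

```python
# Toy model of "This sentance contains threee errors."
CLAIMED = 3       # what the sentence asserts about itself
FIRST_ORDER = 2   # visible spelling errors: "sentance", "threee"

def recount(total):
    """One pass of re-evaluation: if the sentence's claim disagrees
    with the current total, the miscount itself is one more error;
    if it agrees, only the first-order errors remain."""
    return FIRST_ORDER + (1 if CLAIMED != total else 0)

history = []
total = FIRST_ORDER
for _ in range(6):
    history.append(total)
    total = recount(total)

print(history)  # the count oscillates instead of converging: [2, 3, 2, 3, 2, 3]
```

Each pass undoes the previous one, which is exactly the interlevel contradiction described above: counting the count as an error changes the count.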

C. H. Whitely invented a curious and more mentalistic version of the fundamental paradox, explicitly bringing in the system thinking about itself. His sentence was a barb directed at J. R. Lucas, a philosopher one of whose aims in life is to show that Gödel’s work is actually the most ineradicable uprooting of mechanism ever discovered—a philosophy, incidentally, that Gödel himself may have believed. Whitely’s sentence is this:

Lucas cannot consistently assert this sentence.

Is it true? Could Lucas assert it? If he did, that very act would undermine his consistency (nobody can say “I can’t say this” and remain consistent). So Lucas cannot consistently assert it—which is its claim, and so the sentence is true. Even Lucas can see it is true, but he can’t assert it. It must be frustrating for poor Lucas! None of us share his problem, of course! Worse yet, consider:

Lucas cannot consistently believe this sentence.

For the same reasons, it is true—but now Lucas cannot even believe it, let alone assert it, without becoming a self-contradictory belief system.

To be sure, no one would seriously maintain (we hope!) that people are even remotely close to being internally consistent systems, but if this kind of sentence is formalized in mathematical garb (which can be done), so that Lucas is replaced by a well-defined “believing system” L, then there arises serious trouble for the system, if it wishes to remain consistent. The formalized Whitely sentence for L is an example of a statement that the system itself could never believe! Any other believing system is immune to this particular sentence; but on the other hand there is a formalized Whitely sentence for that system as well. Every “believing system” has its own tailor-made Whitely sentence—its Achilles’ heel.
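In the standard formalization (a sketch in conventional notation, not Whitely’s own symbols), one writes $B_L$ for the belief predicate of the system $L$ and uses Gödel-style diagonalization to obtain a sentence $S$ asserting its own unbelievability:

```latex
% Diagonalization yields a sentence S with
S \;\leftrightarrow\; \neg B_L(\ulcorner S \urcorner)
% Under modest assumptions about L (consistency, plus accurate
% knowledge of its own beliefs), L cannot believe S: believing S
% would mean believing a sentence that says L does not believe it,
% a contradiction. Hence \neg B_L(\ulcorner S \urcorner) holds,
% and therefore S is true -- true but unbelievable by L.
```

This mirrors the informal argument above: every system gets its own true-but-unbelievable $S$, just as Lucas gets his sentence.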

These paradoxes are all consequences of a formalization of an observation as old as humanity: an object bears a very special and unique relationship to itself, which limits its ability to act upon itself in the way it can act on all other objects. A pencil cannot write on itself; a fly swatter cannot swat a fly sitting on its handle (an observation made by the German philosopher-scientist Georg Lichtenberg); a snake cannot eat itself; and so on. People cannot see their own faces except via external aids that present images—and an image is never quite the same as the original thing. We can come close to seeing and understanding ourselves objectively, but each of us is trapped inside a powerful system with a unique point of view—and that power is also a guarantor of limitedness. And this vulnerability—this self-hook—may also be the source of the ineradicable sense of “I.”




Malcolm Fowler’s hammer nailing itself is a new version of the ouroboros. (From Vicious Circles and Infinity: An Anthology of Paradoxes by Patrick Hughes and George Brecht.)




The “Short Circuit” serves to illustrate the short circuit of logical paradox. The negative invites the positive, and the inert circle is complete. (From Vicious Circles and Infinity.)



But let us go back to Cherniak’s story. As we have seen, the self-referential linguistic paradoxes are deliciously tantalizing, but hardly dangerous for a human mind. Cherniak’s Riddle, by contrast, must be far more sinister. Like a Venus’s-flytrap, it lures you, then snaps down, trapping you in a whirlpool of thought, sucking you ever deeper down into a vortex, a “black hole of the mind,” from which there is no escape back to reality. Yet who on the outside knows what charmed alternate reality the trapped mind has entered?

The suggestion that the mind-breaking Riddle thought would be based on self-reference provides a good excuse to discuss the role of looplike self-reference or interlevel feedback in creating a self—a soul—out of inanimate matter. The most vivid example of such a loop is that of a television on whose screen is being projected a picture of the television itself. This causes a whole cascade of ever-smaller screens to appear one within another. This is easy to set up if you have a television camera.

The results [see figure] are quite fascinating and often astonishing. The simplest shows the nested-boxes effect, in which one has the illusion of looking down a corridor. A heightened effect is achieved if you rotate the camera clockwise around the axis of its lens: the first inner screen will then appear to rotate counterclockwise, but the screen one level farther down will be doubly rotated—and so on. The resulting pattern is a pretty spiral, and by using various amounts of tilt and zoom, one can create a wide variety of effects. There are also complicating effects due to such things as the graininess of the screen, the distortion caused by unequal horizontal and vertical scales, the time-delay of the circuit, and so on.
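The compounding of tilt and shrinkage down the corridor of screens is easy to tabulate. The numbers below are hypothetical (a 10-degree camera tilt, with each inner screen 80% the size of its parent); the point is only that the nth screen accumulates n tilts and n shrinkages, which is what turns a single small tilt into a spiral.

```python
def nested_frames(tilt_deg, scale, depth):
    """Cumulative transform of each nested screen in a video-feedback
    loop: the nth screen is tilted n times and shrunk n times
    relative to the outermost one."""
    return [(n * tilt_deg % 360, scale ** n) for n in range(1, depth + 1)]

# Hypothetical settings: camera rotated 10 degrees, 80% scale per level.
for angle, size in nested_frames(10, 0.8, 5):
    print(f"rotation {angle:3d} deg, relative size {size:.3f}")
```

With zero tilt the same calculation gives the plain nested-boxes corridor; nonzero tilt winds the corridor into the spiral described above.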




A variety of effects that can be achieved using a self-engulfing television system. (Photographs by Douglas R. Hofstadter.)



All these parameters of the self-referential mechanism imbue each pattern with unexpected richness. One of the striking facts about this kind of “self-image” pattern on a TV screen is that it can become so complex that its origin in videofeedback is entirely hidden. The contents of the screen may simply appear to be an elegant, complicated design—as is apparent in some shown in the figure.

Suppose we had set up two identical systems of this sort with identical parameters, so that their screens showed exactly the same design. Suppose we now made a tiny change in one—say by moving the camera a very small amount. This tiny perturbation will get picked up and will ripple down the many layers of screen after screen, and the overall effect on the visible “self-image” may be quite drastic. Yet the style of the interlevel feedback of the two systems is still in essence the same. Aside from this one small change we made deliberately, all the parameters are still the same. And by reversing the small perturbation, we can easily return to the original state, so in a fundamental sense we are still “close” to where we started. Would it then be more correct to say that we have two radically different systems, or two nearly identical systems?

Let us use this as a metaphor for thinking about human souls. Could it be valid to suppose that the “magic” of human consciousness somehow arises from the closing of a loop whereby the brain’s high level—its symbol level—and its low level—its neurophysiological level—are somehow tied together in an exquisite closed loop of causality? Is the “private I” just the eye of a self-referential typhoon?

Let it be clear that we are making not the slightest suggestion here that a television system (camera plus receiver) becomes conscious at the instant that its camera points at its screen! A television system does not satisfy the criteria that were set up earlier for representational systems. The meaning of its image—what we human observers perceive and describe in words—is lost to the television system itself. The system does not divide up the thousands of dots on the screen into “conceptual pieces” that it recognizes as standing for people, dogs, tables, and so forth. Nor do the dots have autonomy from the world they represent. The dots are simply passive reflections of light patterns in front of the camera, and if the lights go out, so do the dots.

The kind of closed loop we are referring to is one in which a true representational system perceives its own state in terms of its repertoire of concepts. For instance, we perceive our own brain state not in terms of which neurons are connected to which others, or which ones are firing, but in terms of concepts that we articulate in words. Our view of our brain is not as a pile of neurons but as a storehouse of beliefs and feelings and ideas. We provide a readout of our brain at that level, by saying such things as “I am a little nervous and confused by her unwillingness to go to the party.” Once articulated, this kind of self-observation then reenters the system as something to think about—but of course the reentry proceeds via the usual perceptual processes—namely, millions of neurons firing. The loop that is closed here is far more complex and level-muddling than the television loop, beautiful and intricate though that may seem.

As a digression it is important to mention that much recent progress in artificial intelligence work has centered around the attempt to give a program a set of notions about its own inner structures, and ways of reacting when it detects certain kinds of change occurring inside itself. At present, such self-understanding and self-monitoring abilities of programs are quite rudimentary, but this idea has emerged as one of the prerequisites to the attainment of the deep flexibility that is synonymous with genuine intelligence.

Currently two major bottlenecks exist in the design of an artificial mind: one is the modeling of perception, the other the modeling of learning. Perception we have already talked about as the funneling of a myriad of low-level responses into a jointly agreed-upon overall interpretation at the conceptual level. Thus it is a level-crossing problem. Learning is also a level-crossing problem. Put bluntly, one has to ask, “How do my symbols program my neurons?” How do those finger motions that you execute over and over again in learning to type get converted slowly into systematic changes in synaptic structures? How does a once-conscious activity become totally sublimated into complete unconscious oblivion? The thought level, by force of repetition, has somehow “reached downward” and reprogrammed some of the hardware underlying it. The same goes for learning a piece of music or a foreign language.

In fact, at every instant of our lives we are permanently changing our synaptic structures: We are “filing” our current situation in our memory under certain “labels” so that we can retrieve it at appropriate times in the future (and our unconscious mind has to be very clever in doing this, since it is very hard to anticipate the kinds of future situations in which we would benefit from recalling the present moment).

The self is, in this view, a continually self-documenting “worldline” (the four-dimensional path traced by an object as it moves through time and space). Not only is a human being a physical object that internally preserves a history of its worldline, but moreover, that stored worldline in turn serves to determine the object’s future worldline. This large-scale harmony among past, present, and future allows you to perceive your self, despite its ever-changing and multifaceted nature, as a unity with some internal logic to it. If the self is likened to a river meandering through spacetime, then it is important to point out that not just the features of the landscape but also the desires of the river act as forces determining the bends in the river.

Not only does our conscious mind’s activity create permanent side effects at the neural level; the inverse holds too: Our conscious thoughts seem to come bubbling up from subterranean caverns of our mind, images flood into our mind’s eye without our having any idea where they came from! Yet when we publish them, we expect that we—not our subconscious structures—will get credit for our thoughts. This dichotomy of the creative self into a conscious part and an unconscious part is one of the most disturbing aspects of trying to understand the mind. If—as was just asserted—our best ideas come burbling up as if from mysterious underground springs, then who really are we? Where does the creative spirit really reside? Is it by an act of will that we create, or are we just automata made out of biological hardware, from birth until death fooling ourselves through idle chatter into thinking that we have “free will”? If we are fooling ourselves about all these matters, then whom—or what—are we fooling?

There is a loop lurking here, one that bears a lot of investigation. Cherniak’s story is light and entertaining, but it nonetheless hits the nail on the head by pointing to Gödel’s work not as an argument against mechanism, but as an illustration of the primal loop that seems somehow deeply implicated in the plot of consciousness.


D.R.H.
