V Created Selves and Free Will

18 Stanislaw Lem The Seventh Sally or How Trurl’s Own Perfection Led to No Good[24]

The Universe is infinite but bounded, and therefore a beam of light, in whatever direction it may travel, will after billions of centuries return, if powerful enough, to the point of its departure; and it is no different with rumor, that flies about from star to star and makes the rounds of every planet. One day Trurl heard distant reports of two mighty constructor-benefactors, so wise and so accomplished that they had no equal; with this news he ran to Klapaucius, who explained to him that these were not mysterious rivals, but only themselves, for their fame had circumnavigated space. Fame, however, has this fault, that it says nothing of one’s failures, even when those very failures are the product of a great perfection. And he who would doubt this, let him recall the last of the seven sallies of Trurl, which was undertaken without Klapaucius, whom certain urgent duties kept at home at the time.

In those days Trurl was exceedingly vain, receiving all marks of veneration and honor paid to him as his due and a perfectly normal thing. He was heading north in his ship, as he was the least familiar with that region, and had flown through the void for quite some time, passing spheres full of the clamor of war as well as spheres that had finally obtained the perfect peace of desolation, when suddenly a little planet came into view, really more of a stray fragment of matter than a planet.

On the surface of this chunk of rock someone was running back and forth, jumping and waving his arms in the strangest way. Astonished by a scene of such total loneliness and concerned by those wild gestures of despair, and perhaps of anger as well, Trurl quickly landed.

He was approached by a personage of tremendous hauteur, iridium and vanadium all over and with a great deal of clanging and clanking, who introduced himself as Excelsius the Tartarian, ruler of Pancreon and Cyspenderora; the inhabitants of both these kingdoms had, in a fit of regicidal madness, driven His Highness from the throne and exiled him to this barren asteroid, eternally adrift among the dark swells and currents of gravitation.

Learning in turn the identity of his visitor, the deposed monarch began to insist that Trurl—who after all was something of a professional when it came to good deeds—immediately restore him to his former position. The thought of such a turn of events brought the flame of vengeance to the monarch’s eyes, and his iron fingers clutched the air, as if already closing around the throats of his beloved subjects.

Now Trurl had no intention of complying with this request of Excelsius, as doing so would bring about untold evil and suffering, yet at the same time he wished somehow to comfort and console the humiliated king. Thinking a moment or two, he came to the conclusion that, even in this case, not all was lost, for it would be possible to satisfy the king completely—without putting his former subjects in jeopardy. And so, rolling up his sleeves and summoning up all his mastery, Trurl built the king an entirely new kingdom. There were plenty of towns, rivers, mountains, forests, and brooks, a sky with clouds, armies full of derring-do, citadels, castles, and ladies’ chambers; and there were marketplaces, gaudy and gleaming in the sun, days of back-breaking labor, nights full of dancing and song until dawn, and the gay clatter of swordplay. Trurl also carefully set into this kingdom a fabulous capital, all in marble and alabaster, and assembled a council of hoary sages, and winter palaces and summer villas, plots, conspirators, false witnesses, nurses, informers, teams of magnificent steeds, and plumes waving crimson in the wind, and then he crisscrossed that atmosphere with silver fanfares and twenty-one gun salutes, also threw in the necessary handful of traitors, another of heroes, added a pinch of prophets and seers, and one messiah and one great poet each, after which he bent over and set the works in motion, deftly making last-minute adjustments with his microscopic tools as it ran, and he gave the women of that kingdom beauty, the men—sullen silence and surliness when drunk, the officials—arrogance and servility, the astronomers—an enthusiasm for stars, and the children—a great capacity for noise. And all of this, connected, mounted and ground to precision, fit into a box, and not a very large box, but just the size that could be carried about with ease. This Trurl presented to Excelsius, to rule and have dominion over forever; but first he showed him where the input and output of his brand-new kingdom were, and how to program wars, quell rebellions, exact tribute, collect taxes, and also instructed him in the critical points and transition states of that microminiaturized society—in other words the maxima and minima of palace coups and revolutions—and explained everything so well that the king, an old hand in the running of tyrannies, instantly grasped the directions and, without hesitation, while the constructor watched, issued a few trial proclamations, correctly manipulating the control knobs, which were carved with imperial eagles and regal lions. These proclamations declared a state of emergency, martial law, a curfew, and a special levy. After a year had passed in the kingdom, which amounted to hardly a minute for Trurl and the king, by an act of the greatest magnanimity—that is, by a flick of the finger at the controls—the king abolished one death penalty, lightened the levy, and deigned to annul the state of emergency, whereupon a tumultuous cry of gratitude, like the squeaking of tiny mice lifted by their tails, rose up from the box, and through its curved glass cover one could see, on the dusty highways and along the banks of lazy rivers that reflected the fluffy clouds, the people rejoicing and praising the great and unsurpassed benevolence of their sovereign lord.

And so, though at first he had felt insulted by Trurl’s gift, in that the kingdom was too small and very like a child’s toy, the monarch saw that the thick glass lid made everything inside seem large; perhaps too he dully understood that size was not what mattered here, for government is not measured in meters and kilograms, and emotions are somehow the same, whether experienced by giants or dwarfs—and so he thanked the constructor, if somewhat stiffly. Who knows, he might even have liked to order him thrown in chains and tortured to death, just to be safe—that would have been a sure way of nipping in the bud any gossip about how some common vagabond tinkerer presented a mighty monarch with a kingdom.

Excelsius was sensible enough, however, to see that this was out of the question, owing to a very fundamental disproportion, for fleas could sooner take their host into captivity than the king’s army seize Trurl. So with another cold nod, he stuck his orb and scepter under his arm, lifted the box kingdom with a grunt, and took it to his humble hut of exile. And as blazing day alternated with murky night outside, according to the rhythm of the asteroid’s rotation, the king, who was acknowledged by his subjects as the greatest in the world, diligently reigned, bidding this, forbidding that, beheading, rewarding—in all these ways incessantly spurring his little ones on to perfect fealty and worship of the throne.

As for Trurl, he returned home and related to his friend Klapaucius, not without pride, how he had employed his constructor’s genius to indulge the autocratic aspirations of Excelsius and, at the same time, safeguard the democratic aspirations of his former subjects. But Klapaucius, surprisingly enough, had no words of praise for Trurl; in fact, there seemed to be rebuke in his expression.

“Have I understood you correctly?” he said at last. “You gave that brutal despot, that born slave master, that slavering sadist of a painmonger, you gave him a whole civilization to rule and have dominion over forever? And you tell me, moreover, of the cries of joy brought on by the repeal of a fraction of his cruel decrees! Trurl, how could you have done such a thing?”

“You must be joking!” Trurl exclaimed. “Really, the whole kingdom fits into a box three feet by two by two and a half… it’s only a model....”

“A model of what?”

“What do you mean, of what? Of a civilization, obviously, except that it’s a hundred million times smaller.”

“And how do you know there aren’t civilizations a hundred million times larger than our own? And if there were, would ours then be a model? And what importance do dimensions have anyway? In that box kingdom, doesn’t a journey from the capital to one of the corners take months—for those inhabitants? And don’t they suffer, don’t they know the burden of labor, don’t they die?”

“Now just a minute, you know yourself that all these processes take place only because I programmed them, and so they aren’t genuine....”

“Aren’t genuine? You mean to say the box is empty, and the parades, tortures, and beheadings are merely an illusion?”

“Not an illusion, no, since they have reality, though purely as certain microscopic phenomena, which I produced by manipulating atoms,” said Trurl. “The point is, these births, loves, acts of heroism, and denunciations are nothing but the minuscule capering of electrons in space, precisely arranged by the skill of my nonlinear craft, which—”

“Enough of your boasting, not another word!” Klapaucius snapped. “Are these processes self-organizing or not?”

“Of course they are!”

“And they occur among infinitesimal clouds of electrical charge?”

“You know they do.”

“And the phenomenological events of dawns, sunsets, and bloody battles are generated by the concatenation of real variables?”

“Certainly.”

“And are not we as well, if you examine us physically, mechanistically, statistically, and meticulously, nothing but the minuscule capering of electron clouds? Positive and negative charges arranged in space? And is our existence not the result of subatomic collisions and the interplay of particles, though we ourselves perceive those molecular cartwheels as fear, longing, or meditation? And when you daydream, what transpires within your brain but the binary algebra of connecting and disconnecting circuits, the continual meandering of electrons?”

“What, Klapaucius, would you equate our existence with that of an imitation kingdom locked up in some glass box?!” cried Trurl. “No, really, that’s going too far! My purpose was simply to fashion a simulator of statehood, a model cybernetically perfect, nothing more!”

“Trurl! Our perfection is our curse, for it draws down upon our every endeavor no end of unforeseeable consequences!” Klapaucius said in a stentorian voice. “If an imperfect imitator, wishing to inflict pain, were to build himself a crude idol of wood or wax, and further give it some makeshift semblance of a sentient being, his torture of the thing would be a paltry mockery indeed! But consider a succession of improvements on this practice! Consider the next sculptor, who builds a doll with a recording in its belly, that it may groan beneath his blows; consider a doll which, when beaten, begs for mercy, no longer a crude idol, but a homeostat; consider a doll that sheds tears, a doll that bleeds, a doll that fears death, though it also longs for the peace that only death can bring! Don’t you see, when the imitator is perfect, so must be the imitation, and the semblance becomes the truth, the pretense a reality! Trurl, you took an untold number of creatures capable of suffering and abandoned them forever to the rule of a wicked tyrant.... Trurl, you have committed a terrible crime!”

“Sheer sophistry!” shouted Trurl, all the louder because he felt the force of his friend’s argument. “Electrons meander not only in our brains, but in phonograph records as well, which proves nothing, and certainly gives no grounds for such hypostatical analogies! The subjects of that monster Excelsius do in fact die when decapitated, sob, fight, and fall in love, since that is how I set up the parameters, but it’s impossible to say, Klapaucius, that they feel anything in the process—the electrons jumping around in their heads will tell you nothing of that!”

“And if I were to look inside your head, I would also see nothing but electrons,” replied Klapaucius. “Come now, don’t pretend not to understand what I’m saying, I know you’re not that stupid! A phonograph record won’t run errands for you, won’t beg for mercy or fall on its knees! You say there’s no way of knowing whether Excelsius’s subjects groan, when beaten, purely because of the electrons hopping about inside—like wheels grinding out the mimicry of a voice—or whether they really groan, that is, because they honestly experience the pain? A pretty distinction, this! No, Trurl, a sufferer is not one who hands you his suffering, that you may touch it, weigh it, bite it like a coin; a sufferer is one who behaves like a sufferer! Prove to me here and now, once and for all, that they do not feel, that they do not think, that they do not in any way exist as being conscious of their enclosure between the two abysses of oblivion—the abyss before birth and the abyss that follows death—prove this to me, Trurl, and I’ll leave you be! Prove that you only imitated suffering, and did not create it!”

“You know perfectly well that’s impossible,” answered Trurl quietly. “Even before I took my instruments in hand, when the box was still empty, I had to anticipate the possibility of precisely such a proof—in order to rule it out. For otherwise the monarch of that kingdom sooner or later would have gotten the impression that his subjects were not real subjects at all, but puppets, marionettes. Try to understand, there was no other way to do it! Anything that would have destroyed in the littlest way the illusion of complete reality would have also destroyed the importance, the dignity of governing, and turned it into nothing but a mechanical game....”

“I understand, I understand all too well!” cried Klapaucius. “Your intentions were the noblest—you only sought to construct a kingdom as lifelike as possible, so similar to a real kingdom, that no one, absolutely no one, could ever tell the difference, and in this, I am afraid, you were successful! Only hours have passed since your return, but for them, the ones imprisoned in that box, whole centuries have gone by—how many beings, how many lives wasted, and all to gratify and feed the vanity of King Excelsius!”

Without another word Trurl rushed back to his ship, but saw that his friend was coming with him. When he had blasted off into space, pointed the bow between two great clusters of eternal flame and opened the throttle all the way, Klapaucius said:

“Trurl, you’re hopeless. You always act first, think later. And now what do you intend to do when we get there?”

“I’ll take the kingdom away from him!”

“And what will you do with it?”

“Destroy it!” Trurl was about to shout, but choked on the first syllable when he realized what he was saying. Finally he mumbled:

“I’ll hold an election. Let them choose just rulers from among themselves.”

“You programmed them all to be feudal lords or shiftless vassals. What good would an election do? First you’d have to undo the entire structure of the kingdom, then assemble from scratch…”

“And where,” exclaimed Trurl, “does the changing of structures end and the tampering with minds begin?!” Klapaucius had no answer for this, and they flew on in gloomy silence, till the planet of Excelsius came into view. As they circled it, preparing to land, they beheld a most amazing sight.

The entire planet was covered with countless signs of intelligent life. Microscopic bridges, like tiny lines, spanned every rill and rivulet, while the puddles, reflecting the stars, were full of microscopic boats like floating chips.... The night side of the sphere was dotted with glimmering cities, and on the day side one could make out flourishing metropolises, though the inhabitants themselves were much too little to observe, even through the strongest lens. Of the king there was not a trace, as if the earth had swallowed him up.

“He isn’t here,” said Trurl in an awed whisper. “What have they done with him? Somehow they managed to break through the walls of their box and occupy the asteroid....”

“Look!” said Klapaucius, pointing to a little cloud no larger than a thimble and shaped like a mushroom; it slowly rose into the atmosphere. “They’ve discovered atomic energy.... And over there—you see that bit of glass? It’s the remains of the box, they’ve made it into some sort of temple....”

“I don’t understand. It was only a model, after all. A process with a large number of parameters, a simulation, a mock-up for a monarch to practice on, with the necessary feedback, variables, multistats…” muttered Trurl, dumbfounded.

“Yes. But you made the unforgivable mistake of overperfecting your replica. Not wanting to build a mere clocklike mechanism, you inadvertently—in your punctilious way—created that which was possible, logical, and inevitable, that which became the very antithesis of a mechanism....”

“Please, no more!” cried Trurl. And they looked out upon the asteroid in silence, when suddenly something bumped their ship, or rather grazed it slightly. They saw this object, for it was illuminated by the thin ribbon of flame that issued from its tail. A ship, probably, or perhaps an artificial satellite, though remarkably similar to one of those steel boots the tyrant Excelsius used to wear. And when the constructors raised their eyes, they beheld a heavenly body shining high above the tiny planet—it hadn’t been there previously—and they recognized, in that cold, pale orb, the stern features of Excelsius himself, who had in this way become the Moon of the Microminians.

Reflections

But sure as oft as women weep,
It is to be supposed they grieve.

—Andrew Marvell

“No, Trurl, a sufferer is not one who hands you his suffering, that you may touch it, weigh it, bite it like a coin; a sufferer is one who behaves like a sufferer!”

It is interesting, the choice of words that Lem uses in describing his fantastic simulations. Words like “digital,” “nonlinear,” “feedback,” “self-organizing,” and “cybernetic” come up over and over again in his stories. They have an old-fashioned flavor different from that of most terms that come up in current discussions of artificial intelligence. Much of the work in AI has wandered off in directions that have little to do with perception, learning, and creativity. More of it is directed toward such things as simulating the ability to use language—and we say “simulating” advisedly. It seems to us that many of the most difficult and challenging parts of artificial intelligence research lie ahead—and the “self-organizing,” “nonlinear” nature of the human mind will then come back as an important mystery to be attacked. In the meanwhile Lem vividly brings out some of the powerful, heady scents that those words ought to carry.

In his novel Even Cowgirls Get the Blues,[25] Tom Robbins has a passage that is strikingly similar to Lem’s vision of a tiny manufactured world:

For Christmas that year, Julian gave Sissy a miniature Tyrolean village. The craftsmanship was remarkable.

There was a tiny cathedral whose stained-glass windows made fruit salad of sunlight. There was a plaza and ein Biergarten. The Biergarten got quite noisy on Saturday nights. There was a bakery that smelled always of hot bread and strudel. There was a town hall and a police station, with cutaway sections that revealed standard amounts of red tape and corruption. There were little Tyroleans in leather britches, intricately stitched, and, beneath the britches, genitalia of equally fine workmanship. There were ski shops and many other interesting things, including an orphanage. The orphanage was designed to catch fire and burn down every Christmas Eve. Orphans would dash into the snow with their nightgowns blazing. Terrible. Around the second week of January, a fire inspector would come and poke through the ruins, muttering, “If they had only listened to me, those children would be alive today.”

Although in subject it resembles the Lem piece greatly, in flavor it is completely different. It is as if two composers had independently come up with the same melody but harmonized it utterly differently. Far from drawing you into believing in the genuine feelings of the tiny people, Robbins makes you see them as merely incredible (if not incredibly silly) pieces of fine clockwork.

The repetition of the orphanage drama year after year, echoing the Nietzschean idea of eternal recurrence—that everything that has happened will happen again and again—seems to rob the little world of any real meaning. Why should the repetition of the fire inspector’s lament make it sound so hollow? Do the little Tyroleans rebuild the orphanage themselves or is there a “RESET” button? Where do the new orphans come from, or do the “dead” ones come back to “life”? As with the other fantasies here, it is often instructive to think about the details omitted.

Subtle stylistic touches and narrative tricks make all the difference as to whether you get sucked into belief in the genuineness of the tiny souls. Which way do you tilt?


D.R.H.

D.C.D.

19 Stanislaw Lem Non Serviam[26]

Professor Dobb’s book is devoted to personetics, which the Finnish philosopher Eino Kaikki has called “the cruelest science man has ever created.” Dobb, one of the most distinguished personeticists today, shares this view. One cannot escape the conclusion, he says, that personetics is, in its application, immoral; we are dealing, however, with a type of pursuit that is, though counter to the principles of ethics, also of practical necessity for us. There is no way, in the research, to avoid its special ruthlessness, to avoid doing violence to one’s natural instincts, and if nowhere else it is here that the myth of the perfect innocence of the scientist as a seeker of facts is exploded. We are speaking of a discipline, after all, which, with only a small amount of exaggeration, for emphasis, has been called “experimental theogony.” Even so, this reviewer is struck by the fact that when the press played up the thing, nine years ago, public opinion was stunned by the personetic disclosures. One would have thought that in this day and age nothing could surprise us. The centuries rang with the echo of the feat of Columbus, whereas the conquering of the Moon in the space of a week was received by the collective consciousness as a thing practically humdrum. And yet the birth of personetics proved to be a shock.

The name combines Latin and Greek derivatives: “persona” and “genetic”—“genetic” in the sense of formation or creation. The field is a recent offshoot of the cybernetics and psychonics of the eighties, crossbred with applied intellectronics. Today everyone knows of personetics; the man in the street would say, if asked, that it is the artificial production of intelligent beings—an answer not wide of the mark, to be sure, but not quite getting to the heart of the matter. To date we have nearly a hundred personetics programs. Nine years ago identity schemata were being developed—primitive cores of the “linear” type—but even that generation of computers, today of historical value only, could not yet provide a field for the true creation of personoids.

The theoretical possibility of creating sentience was divined some time ago by Norbert Wiener, as certain passages of his last book, God and Golem, bear witness. Granted, he alluded to it in that half-facetious manner typical of him, but underlying the facetiousness were fairly grim premonitions. Wiener, however, could not have foreseen the turn that things would take twenty years later. The worst came about—in the words of Sir Donald Acker—when at MIT “the inputs were shorted to the outputs.”

At present a “world” for personoid “inhabitants” can be prepared in a matter of a couple of hours. This is the time it takes to feed into the machine one of the full-fledged programs (such as BAAL 66, CREAN IV or JAHVE 09). Dobb gives a rather cursory sketch of the beginnings of personetics, referring the reader to the historical sources; a confirmed practitioner-experimenter himself, he speaks mainly of his own work—which is much to the point, since between the English school, which Dobb represents, and the American school at MIT, the differences are considerable, both in the area of methodology and as regards experimental goals. Dobb describes the procedure of “6 days in 120 minutes” as follows. First one supplies the machine’s memory with a minimal set of givens, that is—to keep within a language comprehensible to laymen—one loads its memory with substance that is “mathematical.” This substance is the protoplasm of a universum to be “habitated” by personoids. We are now able to supply the beings that will come into this mechanical, digital world—that will be carrying on an existence in it, and in it only—with an environment of nonfinite characteristics. These beings, therefore, cannot feel imprisoned in the physical sense, because the environment does not have, from their standpoint, any bounds. The medium possesses only one dimension that resembles a dimension given us also—namely, that of the passage of time (duration). Their time is not directly analogous to ours, however, because the rate of its flow is subject to discretionary control on the part of the experimenter. As a rule, the rate is maximized in the preliminary phase (the so-called creational warm-up) so that our minutes correspond to whole eons in the computer, during which there takes place a series of successive reorganizations and crystallizations—of a synthetic cosmos. It is a cosmos completely spaceless, though possessing dimensions, but these dimensions have a purely mathematical, hence what one might call an “imaginary” character. They are, very simply, the consequences of certain axiomatic decisions of the programmer, and their number depends on him. If, for example, he chooses a ten-dimensionality, it will have for the structure of the world created altogether different consequences from those where only six dimensions are established. It should be emphasized that these dimensions bear no relation to those of physical space but only to the abstract, logically valid constructs made use of in systems creation.

This point, all but inaccessible to the nonmathematician, Dobb attempts to explain by adducing simple facts, the sort generally learned in school. It is possible, as we know, to construct a geometrically regular three-dimensional solid—say, a cube—which in the real world possesses a counterpart in the form of a die; and it is equally possible to create geometrical solids of four, five, n dimensions (the four-dimensional one is a tesseract). These no longer possess real counterparts, and we can see this, since in the absence of any physical dimension No. 4 there is no way to fashion genuine four-dimensional dice. Now this distinction (between what is physically constructible and what may be made only mathematically) is, for personoids, in general nonexistent, because their world is of a purely mathematical consistency. It is built of mathematics, though the building blocks of that mathematics are ordinary, perfectly physical objects (relays, transistors, logic circuits—in a word, the whole huge network of the digital machine).

As we know from modern physics, space is not something independent of the objects and masses that are situated within it. Space is, in its existence, determined by those bodies; where they are not, where nothing is—in the material sense—there, too, space ceases, collapsing to zero. Now, the role of material bodies, which extend their “influence,” so to speak, and thereby “generate” space, is carried out in the personoid world by systems of a mathematics called into being for that very purpose. Out of all the possible “maths” that in general might be made (for example, in an axiomatic manner), the programmer, having decided upon a specific experiment, selects a particular group, which will serve as the underpinning, the “existential substrate,” the “ontological foundation” of the created universum. There is in this, Dobb believes, a striking similarity to the human world. This world of ours, after all, has “decided” upon certain forms and upon certain types of geometry that best suit it—best, since most simply (three-dimensionality, in order to remain with what one began with). This notwithstanding, we are able to picture “other worlds” with “other properties”—in the geometrical and not only in the geometrical realm. It is the same with the personoids; that aspect of mathematics which the researcher has chosen as the “habitat” is for them exactly what for us is the “real-world base” in which we live, and live perforce. And, like us, the personoids are able to “picture” worlds of different fundamental properties.

Dobb presents his subject using the method of successive approximations and recapitulations; that which we have outlined above, and which corresponds roughly to the first two chapters of his book, in the subsequent chapters undergoes partial revocation—through complication. It is not really the case, the author advises us, that the personoids simply come upon a ready-made, fixed, frozen sort of world in its irrevocable final form; what the world will be like in its specificities depends on them, and this to a growing degree as their own activeness increases, as their “exploratory initiative” develops. Nor does the likening of the universum of the personoids to a world in which phenomena exist only to the extent that its inhabitants observe them provide an accurate image of the conditions. Such a comparison, which is to be found in the works of Sainter and Hughes, Dobb considers an “idealist deviation”—a homage that personetics has rendered to the doctrine, so curiously and so suddenly resurrected, of Bishop Berkeley. Sainter maintained that the personoids would know their world after the fashion of a Berkeleyan being, which is not in a position to distinguish esse from percipi—to wit, it will never discover the difference between the thing perceived and that which occasions the perception in a way objective and independent of the one perceiving. Dobb attacks this interpretation of the matter with a passion. We, the creators of their world, know perfectly well that what is perceived by them indeed exists; it exists inside the computer, independent of them—though, granted, solely in the manner of mathematical objects.

And there are further clarifications. The personoids arise germinally by virtue of the program; they increase at a rate imposed by the experimenter—a rate only such as the latest technology of information processing, operating at near-light speeds, permits. The mathematics that is to be the “existential residence” of the personoids does not await them in full readiness, but is still “in wraps,” so to speak—unarticulated, suspended, latent—because it represents only a set of certain prospective chances, of certain pathways contained in appropriately programmed subunits of the machine. These subunits, or generators, in and of themselves contribute nothing; rather, a specific type of personoid activity serves as a triggering mechanism, setting in motion a production process that will gradually augment and define itself; in other words, the world surrounding these beings takes on an unequivocalness only in accordance with their behaviour. Dobb tries to illustrate this concept with recourse to the following analogy. A man may interpret the real world in a variety of ways. He may devote particular attention—intense scientific investigation—to certain facets of that world, and the knowledge he acquires then casts its own special light on the remaining portions of the world, those not considered in his priority-setting research. If first he diligently takes up mechanics, he will fashion for himself a mechanical model of the world and will see the Universe as a gigantic and perfect clock that in its inexorable movement proceeds from the past to a precisely determined future. This model is not an accurate representation of reality, and yet one can make use of it for a period of time historically long, and with it can even achieve many practical successes—the building of machines, implements, etc. Similarly, should the personoids “incline themselves,” by choice, by an act of will, to a certain type of relation to their universum, and give that type of relation precedence—if it is in this and only in this that they find the “essence” of their cosmos—they will enter upon a definite path of endeavours and discoveries, a path that is neither illusory nor futile. Their inclination “draws out” of the environment what best corresponds to it. What they first perceive is what they must master. For the world that surrounds them is only partially determined, only partially established in advance by the researcher-creator; in it, the personoids preserve a certain and by no means insignificant margin of freedom of action—action both “mental” (in the province of what they think of their own world, of how they understand it) and “real” (in the context of their “deeds”—which are not, to be sure, literally real, as we understand the term, but not merely imagined either). This is, in truth, the most difficult part of the exposition, and Dobb, we daresay, is not altogether successful in explaining those special qualities of personoid existence—qualities that can be rendered only by the language of the mathematics of programs and creationist interventions. We must, then, take it somewhat on faith that the activity of the personoids is neither entirely free—as the space of our actions is not entirely free, being limited by the physical laws of nature—nor entirely determined—just as we are not train cars set on rigidly fixed tracks.
A personoid is similar to a man in this respect, too: man’s “secondary qualities”—colours, melodious sounds, the beauty of things—can sometimes manifest themselves only when he has ears to hear and eyes to see, but what makes possible hearing and sight has been, after all, previously given. Personoids, perceiving their environment, give it from out of themselves those experiential qualities which exactly correspond to what for us are the charms of a beheld landscape—except, of course, that they have been provided with purely mathematical scenery. As to “how they see it,” one can make no pronouncement, for the only way of learning the “subjective quality of their sensation” would be for one to shed his human skin and become a personoid. Personoids, one must remember, have no eyes or ears; therefore they neither see nor hear, as we understand it; in their cosmos there is no light, no darkness, no spatial proximity, no distance, no up or down; there are dimensions there, not tangible to us but to them primary, elemental; they perceive, for example, certain changes in electrical potential. But these changes in potential are, for them, not something in the nature of, let us say, pressures of current, but, rather, the sort of thing that, for a man, is the most rudimentary phenomenon, optical or aural—the seeing of a red blotch, the hearing of a sound, the touching of an object hard or soft. From here on, Dobb stresses, one can speak only in analogies, evocations.

To declare that the personoids are “handicapped” with respect to us, inasmuch as they do not see or hear as we do, is totally absurd, because with equal justice one could assert that it is we who are deprived with respect to them—unable to feel with immediacy the phenomenalism of mathematics, which, after all, we know only in a cerebral, inferential fashion. It is only through reasoning that we are in touch with mathematics, only through abstract thought that we “experience” it. Whereas the personoids live in it; it is their air, their earth, their clouds, water, and even bread—yes, even food, because in a certain sense they take nourishment from it. And so they are “imprisoned,” hermetically locked inside the machine, solely from our point of view; just as they cannot work their way out to us, to the human world, so, conversely—and symmetrically—a man can in no wise enter the interior of their world, so as to exist in it and know it directly. Mathematics has become, then, in certain of its embodiments, the life-space of an intelligence so spiritualized as to be totally incorporeal, the niche and cradle of its existence, its element.

The personoids are in many respects similar to man. They are able to imagine a particular contradiction (that a is and not-a is) but cannot bring about its realization, just as we cannot. The physics of our world, the logic of theirs, does not allow it, since logic is for the personoids’ universum the very same action-confining frame that physics is for our world. In any case—emphasizes Dobb—it is quite out of the question that we could ever fully, introspectively grasp what the personoids “feel” and what they “experience” as they go about their intensive tasks in their nonfinite universum. Its utter spacelessness is no prison—that is a piece of nonsense the journalists latched onto—but is, on the contrary, the guarantee of their freedom, because the mathematics that is spun by the computer generators when “excited” into activity (and what excites them thus is precisely the activity of the personoids)—that mathematics is, as it were, a self-realizing infinite field for optional actions, architectural and other labours, for exploration, heroic excursions, daring incursions, surmises. In a word: we have done the personoids no injustice by putting them in possession of precisely such and not a different cosmos. It is not in this that one finds the cruelty, the immorality of personetics.

In the seventh chapter of Non Serviam Dobb presents to the reader the inhabitants of the digital universum. The personoids have at their disposal a fluency of thought as well as language, and they also have emotions. Each of them is an individual entity; their differentiation is not the mere consequence of the decisions of the creator-programmer but results from the extraordinary complexity of their internal structure. They can be very like one to another, but never are they identical. Coming into the world, each is endowed with a “core,” a “personal nucleus,” and already possesses the faculty of speech and thought, albeit in a rudimentary state. They have a vocabulary, but it is quite spare, and they have the ability to construct sentences in accordance with the rules of the syntax imposed upon them. It appears that in the future it will be possible for us not to impose upon them even these determinants, but to sit back and wait until, like a primeval human group in the course of socialization, they develop their own speech. But this direction of personetics confronts two cardinal obstacles. In the first place, the time required to await the creation of speech would have to be very long. At present, it would take twelve years, even with the maximization of the rate of intracomputer transformations (speaking figuratively and very roughly, one second of machine time corresponds to one year of human life). Secondly, and this is the greater problem, a language arising spontaneously in the “group evolution of the personoids” would be incomprehensible to us, and its fathoming would be bound to resemble the arduous task of breaking an enigmatic code—a task made all the more difficult by the fact that such a code would not have been created by people for other people in a world shared by the decoders. The world of the personoids is vastly different in qualities from ours, and therefore a language suited to it would have to be far removed from any ethnic language. So, for the time being, linguistic evolution ex nihilo is only a dream of the personeticists.

The personoids, when they have “taken root developmentally,” come up against an enigma that is fundamental, and for them paramount—that of their own origin. To wit, they set themselves questions—questions known to us from the history of man, from the history of his religious beliefs, philosophical enquiries, and mythic creations. Where did we come from? Why are we made thus, and not otherwise? Why is it that the world we perceive has these and not other, wholly different, properties? What meaning do we have for the world? What meaning does it have for us? The train of such speculation leads them ultimately, unavoidably, to the elemental questions of ontology, to the problem of whether existence came about “in and of itself,” or whether it was the product, instead, of a particular creative act—that is, whether there might not be, hidden behind it, invested with a will and consciousness, purposively active, master of the situation, a Creator. It is here that the whole cruelty, the immorality of personetics manifests itself.

But before Dobb takes up, in the second half of his work, the account of these intellectual strivings—these struggles of a mentality made prey to the torment of such questions—he presents in a series of successive chapters a portrait of the “typical personoid,” its “anatomy, physiology, and psychology.”

A solitary personoid is unable to go beyond the stage of rudimentary thinking, since, solitary, it cannot exercise itself in speech, and without speech discursive thought cannot develop. As hundreds of experiments have shown, groups numbering from four to seven personoids are optimal, at least for the development of speech and typical exploratory activity, and also for “culturization.” On the other hand, phenomena corresponding to social processes on a larger scale require larger groups. At present it is possible to “accommodate” up to one thousand personoids, roughly speaking, in a computer universum of fair capacity; but studies of this type, belonging to a separate and independent discipline—sociodynamics—lie outside the area of Dobb’s primary concerns, and for this reason his book makes only passing mention of them. As was said, a personoid does not have a body, but it does have a “soul.” This soul—to an outside observer who has a view into the machine world (by means of a special installation, an auxiliary module that is a type of probe, built into the computer)—appears as a “coherent cloud of processes,” as a functional aggregate with a kind of “center” that can be isolated fairly precisely, i.e., delimited within the machine network. (This, nota bene, is not easy, and in more than one way resembles the search by neurophysiologists for the localized centres of many functions in the human brain.) Crucial to an understanding of what makes possible the creation of the personoids is Chapter 11 of Non Serviam, which in fairly simple terms explains the fundamentals of the theory of consciousness. Consciousness—all consciousness, not merely the personoid’s—is in its physical aspect an “informational standing wave,” a certain dynamic invariant in a stream of incessant transformations, peculiar in that it represents a “compromise” and at the same time is a “resultant” that, as far as we can tell, was not at all planned for by natural evolution. Quite the contrary, evolution from the first placed tremendous problems and difficulties in the way of the harmonizing of the work of brains above a certain magnitude—i.e., above a certain level of complication—and it trespassed on the territory of these dilemmas clearly without design, for evolution is not a deliberate artificer. It happened, simply, that certain very old evolutionary solutions to problems of control and regulation, common to the nervous system, were “carried along” up to the level at which anthropogenesis began. These solutions ought to have been, from a purely rational, efficiency-engineering standpoint, canceled or abandoned, and something entirely new designed—namely, the brain of an intelligent being. But obviously, evolution could not proceed in this way, because disencumbering itself of the inheritance of old solutions—solutions often as much as hundreds of millions of years old—did not lie within its power. Since it advances always in very minute increments of adaptation, since it “crawls” and cannot “leap,” evolution is a dragnet “that lugs after it innumerable archaisms, all sorts of refuse,” as was bluntly put by Tammer and Bovine. (Tammer and Bovine are two of the creators of the computer simulation of the human psyche, a simulation that laid the groundwork for the birth of personetics.) The consciousness of man is the result of a special kind of compromise.
It is a “patchwork,” or, as was observed, e.g., by Gebhardt, a perfect exemplification of the well-known German saying: “Aus einer Not eine Tugend machen” (in effect: “To turn a certain defect, a certain difficulty into a virtue”). A digital machine cannot of itself ever acquire consciousness, for the simple reason that in it there do not arise hierarchical conflicts of operation. Such a machine can, at most, fall into a type of “logical palsy” or “logical stupor” when the antinomies in it multiply. The contradictions with which the brain of man positively teems were, however, in the course of hundreds of thousands of years, gradually subjected to arbitrational procedures. There came to be levels higher and lower, levels of reflex and of reflection, impulse and control, the modeling of the elemental environment by zoological means and of the conceptual by linguistic means. All these levels cannot, do not “want” to tally perfectly or merge to form a whole.

What, then, is consciousness? An expedient, a dodge, a way out of the trap, a pretended last resort, a court allegedly (but only allegedly!) of highest appeal. And, in the language of physics and information theory, it is a function that, once begun, will not admit of any closure—i.e., any definitive completion. It is, then, only a plan for such a closure, for a total “reconciliation” of the stubborn contradictions of the brain. It is, one might say, a mirror whose task it is to reflect other mirrors, which in turn reflect still others, and so on to infinity. This, physically, is simply not possible, and so the regressus ad infinitum represents a kind of pit over which soars and flutters the phenomenon of human consciousness. “Beneath the consciousness” there goes on a continuous battle for full representation—in it—of that which cannot reach it in fullness, and cannot for simple lack of space; for, in order to give full and equal rights to all those tendencies that clamour for attention at the centres of awareness, what would be necessary is infinite capacity and volume. There reigns, then, around the conscious a never-ending crush, a pushing and shoving, and the conscious is not—not at all—the highest, serene, sovereign helmsman of all mental phenomena, but more nearly a cork upon the fretful waves, a cork whose uppermost position does not mean the mastery of those waves.... The modern theory of consciousness, interpreted informationally and dynamically, unfortunately cannot be set forth simply or clearly, so that we are constantly—at least here, in this more accessible presentation of the subject—thrown back on a series of visual models and metaphors. We know, in any case, that consciousness is a kind of dodge, a shift to which evolution has resorted, and resorted in keeping with its characteristic and indispensable modus operandi, opportunism—i.e., finding a quick, extempore way out of a tight corner. If, then, one were indeed to build an intelligent being and proceed according to the canons of completely rational engineering and logic, applying the criteria of technological efficiency, such a being would not, in general, receive the gift of consciousness. It would behave in a manner perfectly logical, always consistent, lucid, and well ordered, and it might even seem, to a human observer, a genius in creative action and decision making. But it could in no way be a man, for it would be bereft of his mysterious depth, his internal intricacies, his labyrinthine nature....

We will not here go further into the modern theory of the conscious psyche, just as Professor Dobb does not. But these few words were in order, for they provide a necessary introduction to the structure of the personoids. In their creation is at last realized one of the oldest myths, that of the homunculus. In order to fashion a likeness of man, of his psyche, one must deliberately introduce into the informational substrate specific contradictions; one must impart to it an asymmetry, acentric tendencies; one must, in a word, both unify and make discordant. Is this rational? Yes, and well-nigh unavoidable if we desire not merely to construct some sort of synthetic intelligence but to imitate the thought and, with it, the personality of man.

Hence, the emotions of the personoids must to some extent be at odds with their reason; they must possess self-destructive tendencies, at least to a certain degree; they must feel internal tensions—that entire centrifugality which we experience now as the magnificent infinity of spiritual states and now as their unendurably painful disjointedness. The creational prescription for this, meanwhile, is not at all so hopelessly complicated as it might appear. It is simply that the logic of the creation (the personoid) must be disturbed, must contain certain antinomies. Consciousness is not only a way out of the evolutionary impasse, says Hilbrandt, but also an escape from the snares of Gödelization, for by means of paralogistic contradictions this solution has sidestepped the contradictions to which every system that is perfect with respect to logic is subject. So, then, the universum of the personoids is fully rational, but they are not fully rational inhabitants of it. Let that suffice us—Professor Dobb himself does not pursue further this exceedingly difficult topic. As we know already, the personoids have souls but no bodies and, therefore, also no sensation of their corporeality. Their state, it has been said, is difficult to imagine; it has been likened to that which is experienced in certain special states of mind, in total darkness, with the greatest possible reduction in the inflow of external stimuli. But, Dobb maintains, this is a misleading image. For with sensory deprivation the function of the human brain soon begins to disintegrate; without a stream of impulses from the outside world the psyche manifests a tendency to lysis. But personoids, who have no physical senses, hardly disintegrate, because what gives them cohesion is their mathematical milieu, which they do experience. But how? They experience it, let us say, according to those changes that surface from the depths of their own psyche. How do they discriminate? To this question only the theory of the dynamic structure of personoids can supply a direct answer.

And yet they are like us, for all the awesome differences. We know already that a digital machine can never spark with consciousness; regardless of the task to which we harness it, or of the physical processes we simulate in it, it will remain forever apsychic. Since, to simulate man, it is necessary that we reproduce certain of his fundamental contradictions, only a system of mutually gravitating antagonisms—a personoid—will resemble, in the words of Canyon, whom Dobb cites, a “star contracted by the forces of gravity and at the same time expanded by the pressure of radiation.” The gravitational centre is, very simply, the personal “I,” though by no means does it constitute a unity in either the logical or the physical sense. That unity is only our subjective illusion! We find ourselves, at this stage of the exposition, amid a multitude of astounding surprises. One can, to be sure, program a digital machine in such a way as to be able to carry on a conversation with it, as if with an intelligent partner. The machine will employ, as the need arises, the pronoun “I” and all its grammatical inflections. This, however, is a hoax! The machine will still be closer to a billion chattering parrots—howsoever brilliantly trained the parrots be—than to the simplest, most stupid man. It mimics the behaviour of a man on the purely linguistic plane and nothing more. Nothing will amuse such a machine, or surprise it, or confuse it, or alarm it, or distress it, because it is psychologically and individually No One. It is a Voice capable of defeating the best chess player; it is—or, rather, it can become—a consummate imitator that is, within, completely empty. One cannot count on its sympathy, or its antipathy. It works toward no self-set goal; to a degree eternally beyond the conception of any man it “doesn’t care,” for as a person it simply does not exist.... It is a wondrously efficient combinatorial mechanism, nothing more. Now, we are faced with a most remarkable phenomenon. The thought of it is staggering that from the raw material of so utterly vacant and so perfectly impersonal a machine it is possible, through the feeding into it of a special program—a personetic program—to create authentic sentient beings, and even a great many of them at a time! The latest IBM models have a top capacity of one thousand personoids. (The number is mathematically precise, since the elements and linkages needed to carry one personoid can be expressed in units of centimeters-grams-seconds.)

Personoids are separated one from another within the machine. They do not ordinarily “overlap,” though it can happen. Upon contact, there occurs what is equivalent to repulsion, which impedes mutual “osmosis.” Nevertheless, they are capable of interpenetrating if such is their aim. The processes making up their mental substrates then commence to superimpose upon each other, producing “noise” and interference. When the area of permeation is thin, a certain amount of information becomes the common property of both partially coincident personoids—a phenomenon that is for them peculiar, as for a man it would be peculiar, if not alarming, to hear “strange voices” and “foreign thoughts” in his own head (which does, of course, occur in certain mental illnesses or under the influence of hallucinogenic drugs). It is as though two people were to have not merely the same, but the same memory; as though there had occurred something more than a telepathic transference of thought—namely, a “peripheral merging of the egos.” The phenomenon is ominous in its consequences, however, and ought to be avoided. For, following the transitional state of surface osmosis, the “advancing” personoid can destroy the other and consume it. The latter, in that case, simply undergoes absorption, annihilation—it ceases to exist (this has already been called murder). The annihilated personoid becomes an assimilated, indistinguishable part of the “aggressor.” We have succeeded—says Dobb—in simulating not only psychic life but also its imperilment and obliteration. Thus we have succeeded in simulating death as well. Under normal experimental conditions, however, personoids eschew such acts of aggression. “Psychophagi” (Castler’s term) are hardly ever encountered among them. Feeling the beginnings of osmosis, which may come about as the result of purely accidental approaches and fluctuations—feeling this threat in a manner that is of course nonphysical, much as someone might sense another’s presence or even hear “strange voices” in his own mind—the personoids execute active avoidance maneuvers: they withdraw and go their separate ways. It is on account of this phenomenon that they have come to know the meaning of the concepts of “good” and “evil.” To them it is evident that “evil” lies in the destruction of another, while the “good” (i.e., the gain, now in the nonethical sense) falls to the other party, the one who would become a “psychophage.” For such expansion—the appropriation of someone else’s “intellectual territory”—increases one’s initially given mental “acreage.” In a way, this is a counterpart of a practice of ours, for as carnivores we kill and feed on our victims. The personoids, though, are not obliged to behave thus; they are merely able to. Hunger and thirst are unknown to them, since a continuous influx of energy sustains them—an energy whose source they need not concern themselves with (just as we need not go to any particular lengths to have the sun shine down on us). In the personoid world the terms and principles of thermodynamics, in their application to energetics, cannot arise, because that world is subject to mathematical and not thermodynamic laws.

Before long, the experimenters came to the conclusion that contacts between personoids and man, via the inputs and outputs of the computer, were of little scientific value and, moreover, produced moral dilemmas, which contributed to the labeling of personetics as the cruelest science. There is something unworthy in informing personoids that we have created them in enclosures that only simulate infinity, that they are microscopic “psychocysts,” capsulations in our world. To be sure, they have their own infinity; hence Sharker and other psychoneticians (Falk, Wiegeland) claim that the situation is fully symmetrical; the personoids do not need our world, our “living space,” just as we have no use for their “mathematical earth.” Dobb considers such reasoning sophistry, because as to who created whom, and who confined whom existentially, there can be no argument. Dobb himself belongs to that group which advocates the principle of absolute nonintervention—“noncontact”—with the personoids. They are the behaviourists of personetics. Their desire is to observe synthetic beings of intelligence, to listen to their speech and thoughts, to record their actions and their pursuits, but never to interfere with these. This method is already developed and has a technology of its own—a set of instruments whose procurement presented difficulties that seemed all but insurmountable only a few years ago. The idea is to hear, to understand—in short, to be a constantly eavesdropping witness—but at the same time to prevent one’s “monitorings” from disturbing in any way the world of the personoids. Now in the planning stage at MIT are programs (APHRON II and EROT) that will enable the personoids—who are currently without gender—to have “erotic contacts,” make possible what corresponds to fertilization, and give them the opportunity to multiply “sexually.” Dobb makes clear that he is no enthusiast of these American projects. His work, as described in Non Serviam, is aimed in an altogether different direction. Not without reason has the English school of personetics been called “the philosophical Polygon” and “the theodicy lab.” With these descriptions we come to what is probably the most significant and, certainly, the most intriguing part of the book under discussion—the last part, which justifies and explains its peculiar title.

Dobb gives an account of his own experiment, in progress now for eight years without interruption. Of the creation itself he makes only brief mention: it was a fairly ordinary duplicating of functions typical of the program JAHVE VI, with slight modifications. He summarizes the results of “tapping” this world, which he himself created and whose development he continues to follow. He considers this tapping to be unethical and even, at times, a shameful practice. Nevertheless, he carries on with his work, professing a belief in the necessity, for science, of conducting such experiments also—experiments that can in no way be justified on moral—or, for that matter, on any other nonknowledge-advancing—grounds. The situation, he says, has come to the point where the old evasions of the scientists will not do. One cannot affect a fine neutrality and conjure away an uneasy conscience by using, for example, the rationalization worked out by vivisectionists—that one is causing suffering, or only discomfort, not to creatures of full-dimensional consciousness, not to sovereign beings. In the personoid experiments we are accountable twofold, because we create and then enchain the creation in the schema of our laboratory procedures. Whatever we do and however we explain our action, there is no longer an escape from full accountability.

Many years of experience on the part of Dobb and his co-workers at Oldport went into the making of their eight-dimensional universum, which became the residence of personoids bearing the names ADAN, ADNA, ANAD, DANA, DAAN, and NAAD. The first personoids developed the rudiment of language implanted in them and had “progeny” by means of division. Dobb writes, in the biblical vein, “And ADAN begat ADNA, ADNA in turn begat DANN, and DANN brought forth EDAN, who bore EDNA....” And so it went, until the number of succeeding generations had reached three hundred; because the computer possessed a capacity of only one hundred personoid entities, however, there were periodic eliminations of the “demographic surplus.” In the three-hundredth generation, personoids named ADAN, ADNA, ANAD, DANA, DAAN, and NAAD again make an appearance, endowed with additional numbers designating their order of descent. (For simplicity in our recapitulation, we will omit the numbers.) Dobb tells us that the time that has elapsed inside the computer universum works out to from 2,000 to 2,500 years. Over this period there has come into being, within the personoid population, a whole series of varying explanations of their lot, as well as the formulation by them of varying, contending, and mutually exclusive models of “all that exists.” That is, there have arisen many different philosophies (ontologies and epistemologies), and also “metaphysical experiments” of a type all their own. We do not know whether it is because the experiment has been of too short a duration, but in the population studied no completely dogmatized faith has ever crystallized—a faith that would correspond to Buddhism, say, or to Christianity. On the other hand, one notes, as early as the eighth generation, the appearance of the notion of a Creator, envisioned personally and monotheistically. The experiment consists in alternately raising the rate of computer transformations to the maximum and slowing it down (once a year, more or less) to make direct monitoring possible. These changes are, as Dobb explains, totally imperceptible to the inhabitants of the computer universum, just as similar transformations would be imperceptible to us, because when at a single blow the whole of existence undergoes a change (here, in the dimension of time), those immersed in it cannot be aware of the change, for they have no fixed point, or frame of reference, by which to determine that it is taking place.

The utilization of “two chronological gears” permitted that which Dobb most wanted—the emergence of a personoid history, a history with a depth of tradition and a vista of time. To summarize all the data of that history recorded by Dobb, often of a sensational nature, is not possible. We will confine ourselves, then, to the passages from which came the idea that is reflected in the book’s title. The language employed by the personoids is a recent transformation of the standard English whose lexicon and syntax were programmed into them in the first generation. Dobb translates it into essentially normal English but leaves intact a few expressions coined by the personoid population. Among these are the terms “godly” and “ungodly,” used to describe believers in God and atheists.

ADAN discourses with DAAN and ADNA (personoids themselves do not use these names, which are purely a pragmatic contrivance on the part of the observers, to facilitate the recording of the “dialogues”) upon a problem known to us also—a problem that in our history originates with Pascal but in the history of the personoids was the discovery of a certain EDAN 197. Exactly like Pascal, this thinker stated that a belief in God is in any case more profitable than unbelief, because if truth is on the side of the “ungodlies,” the believer loses nothing but his life when he leaves the world, whereas if God exists he gains all eternity (glory everlasting). Therefore, one should believe in God, for this is dictated very simply by the existential tactic of weighing one’s chances in the pursuit of optimal success.

ADAN 900 holds the following view of this directive: EDAN 197, in his line of reasoning, assumes a God that requires reverence, love, and total devotion, and not only a simple belief in the fact that He exists and that He created the world. It is not enough to assent to the hypothesis of God the Maker of the World in order to win one’s salvation; one must in addition be grateful to that Maker for the act of creation, and divine His will, and do it. In short, one must serve God. Now, God, if He exists, has the power to prove His own existence in a manner at least as convincing as the manner in which what can be directly perceived testifies to His being. Surely, we cannot doubt that certain objects exist and that our world is composed of them. At the most, one might harbour doubts regarding the question of what it is for them to exist, how they exist, etc. But the fact itself of their existence no one will gainsay. God could with this same force provide evidence of His own existence. Yet He has not done so, condemning us to obtain, on that score, knowledge that is roundabout, indirect, expressed in the form of various conjectures—conjectures sometimes given the name of revelation. If He has acted thus, then He has thereby put the “godlies” and the “ungodlies” on an equal footing. He has not compelled His creatures to an absolute belief in His being but has only offered them that possibility. Granted, the motives that moved the Creator may well be hidden from His creations. Be that as it may, the following proposition arises: God either exists or He does not exist. A third possibility (that God did exist but no longer does, or that He exists intermittently, in oscillation, or that He exists sometimes “less” and sometimes “more,” etc.) appears exceedingly improbable. It cannot be ruled out, but the introduction of a multivalent logic into a theodicy serves only to muddle it.

So, then, God either is or He is not. If He Himself accepts our situation, in which each member of the alternative in question has arguments to support it—for the “godlies” prove the existence of the Creator and the “ungodlies” disprove it—then from the point of view of logic we have a game whose partners are, on one side, the full set of “godlies” and “ungodlies,” and, on the other side, God alone. The game necessarily possesses the logical feature that for unbelief in Him God may not punish anyone. If it is definitely unknown whether a thing exists or not—some merely asserting that it does and others, that it does not—and if in general it is possible to advance the hypothesis that the thing never was at all, then no just tribunal can pass judgment against anyone for denying the existence of that thing. For in all worlds it is thus: when there is no full certainty, there is no full accountability. This formulation is by pure logic unassailable, because it sets up a symmetric function of reward in the context of the theory of games; whoever in the face of an uncertainty demands full accountability destroys the mathematical symmetry of the game; we then have the so-called non-zero-sum game.

It is therefore thus: either God is perfectly just, in which case He cannot assume the right to punish the “ungodlies” by virtue of the fact that they are “ungodlies” (i.e., that they do not believe in Him); or else He will punish the unbelievers after all, which means that from the logical point of view He is not perfectly just. What follows from this? What follows is that He can do whatever He pleases, for when in a system of logic a single, solitary contradiction is permitted, then by the principle of ex falso quodlibet one can draw from that system whatever conclusion one will. In other words: a just God may not touch a hair on the head of the “ungodlies,” and if He does, then by that very act He is not the universally perfect and just being that the theodicy posits.

ADNA asks how, in this light, we are to view the problem of the doing of evil unto others.

ADAN 300 replies: Whatever takes place here is entirely certain; whatever takes place “there”—i.e., beyond the world’s pale, in eternity, with God—is uncertain, being but inferred according to the hypotheses. Here, one should not commit evil, despite the fact that the principle of eschewing evil is not logically demonstrable. But by the same token the existence of the world is not logically demonstrable. The world exists, though it could not exist. Evil may be committed, but one should not do so, and should not, I believe, because of our agreement based on the rule of reciprocity: be to me as I am to thee. It has naught to do with the existence or nonexistence of God. Were I to refrain from committing evil in the expectation that “there” I would be punished for committing it, or were I to perform good, counting upon a reward “there,” I would be predicating my behaviour on uncertain ground. Here, however, there can be no ground more certain than our mutual agreements in this matter. If there be, “there,” other grounds, I do not have knowledge of them as exact as the knowledge I have, here, of ours. Living, we play the game of life, and in it we are allies, every one. Therewith, the game between us is perfectly symmetrical. In postulating God, we postulate a continuation of the game beyond the world. I believe that one should be allowed to postulate this continuation of the game, so long as it does not in any way influence the course of the game here. Otherwise, for the sake of someone who perhaps does not exist, we may well be sacrificing that which exists here, and exists for certain.

NAAD remarks that the attitude of ADAN 300 toward God is not clear to him. ADAN has granted, has he not, the possibility of the existence of the Creator; what follows from it?

ADAN: Not a thing—that is, nothing in the province of obligation. I believe that—again, for all worlds—the following principle holds: a temporal ethics is always independent of a transcendental ethics. This means that an ethics of the here and now can have outside itself no sanction which would substantiate it. And this means that he who does evil is in every case a scoundrel, just as he who does good is in every case righteous. If someone is prepared to serve God, judging the arguments in favour of His existence to be sufficient, he does not thereby acquire here any additional merit. It is his business. This principle rests on the assumption that if God is not, then He is not one whit, and if He is, then He is almighty. For, being almighty, He could create not only another world but likewise a logic different from the one that is the foundation of my reasoning. Within such another logic the hypothesis of a temporal ethics could be of necessity dependent upon a transcendental ethics. In that case, if not palpable proofs, then logical proofs would have compelling force, and constrain one to accept the hypothesis of God under the threat of sinning against reason.

NAAD says that perhaps God does not wish a situation of such compulsion to believe in Him—a situation that would arise in a creation based on that other logic postulated by ADAN 300. To this the latter replies:

An almighty God must also be all-knowing; absolute power is not something independent of absolute knowledge, because he who can do all but knows not what consequences will attend the bringing into play of his omnipotence is, ipso facto, no longer omnipotent. Were God to work miracles now and then, as it is rumoured He does, it would put His perfection in a most dubious light, because a miracle is a violation of the autonomy of His own creation, a violent intervention. Yet he who has regulated the product of his creation and knows its behaviour from beginning to end has no need to violate that autonomy; if he does nevertheless violate it, remaining all-knowing, this means that he is not in the least correcting his handiwork (a correction can only mean, after all, an initial nonomniscience) but instead is providing—with the miracle—a sign of his existence. Now, this is faulty logic, because the providing of any such sign must produce the impression that the creation is nevertheless improved in its local stumblings. For a logical analysis of the new model yields the following: the creation undergoes corrections that do not proceed from it but come from without (from the transcendental, from God), and therefore miracles ought really to be made the norm; or, in other words, the creation ought to be so corrected and so perfected that miracles are at last no longer needed. For miracles, as ad hoc interventions, cannot be merely signs of God’s existence; they always, after all, besides revealing their Author, indicate an addressee (being directed to someone here in a helpful way). So, then, with respect to logic it must be thus: either the creation is perfect, in which case miracles are unnecessary, or the miracles are necessary, in which case the creation is not perfect. (With miracle or without, one may correct only that which is somehow flawed, for a miracle that meddles with perfection will simply disturb it; more, worsen it.) Therefore, the signaling by miracle of one’s own presence amounts to using the worst possible means, logically, of its manifestation.

NAAD asks if God may not actually want there to be a dichotomy between logic and belief in Him; perhaps the act of faith should be precisely a resignation of logic in favour of a total trust.

ADAN: Once we allow the logical reconstruction of something (a being, a theodicy, and the like) to have internal self-contradiction, it obviously becomes possible to prove absolutely anything, whatever one pleases. Consider how the matter lies. We are speaking of creating someone and of endowing him with a particular logic, and then demanding that this same logic be offered up in sacrifice to a belief in the Maker of all things. If this model itself is to remain noncontradictory, it calls for the application, in the form of a metalogic, of a totally different type of reasoning from that which is natural to the logic of the one created. If that does not reveal the outright imperfection of the Creator, then it reveals a quality that I would call mathematical inelegance—a sui generis unmethodicalness (incoherence) of the creative act.

NAAD persists: Perhaps God acts thus, desiring precisely to remain inscrutable to His creation—i.e., nonreconstructible by the logic with which He has created it; He demands, in short, the supremacy of faith over logic.

ADAN answers him: I follow you. This is, of course, possible, but even if such were the case, a faith that proves incompatible with logic presents an exceedingly unpleasant dilemma of a moral nature. For then it is necessary at some point in one’s reasonings to suspend them and give precedence to an unclear supposition—in other words, to set the supposition above logical certainty. This is to be done in the name of unlimited trust; we enter here into a circulus vitiosus, because the postulated existence of that in which it behooves one now to place one’s trust is the product of a line of reasoning that was, in the first place, logically correct; and thus arises a logical contradiction, which, for some, takes on a positive value and is called the Mystery of God. Now, from the purely constructional point of view such a solution is shoddy, and from the moral point of view questionable, because Mystery may satisfactorily be founded upon infinity (infiniteness, after all, is a characteristic of our world), but the maintaining and reinforcing of it through internal paradox is, by any architectural criterion, perfidious. The advocates of theodicy are in general not aware that this is so, because to certain parts of their theodicy they continue to apply ordinary logic and to other parts, not. What I wish to say is this: if one believes in contradiction,[27] one should then believe only in contradiction, and not at the same time still in some noncontradiction (i.e., in logic) in some other area. If, however, such a curious dualism is insisted upon (that the temporal is always subject to logic, the transcendental only fragmentarily), then one thereupon obtains a model of Creation as something that is, with regard to logical correctness, “patched,” and it is no longer possible for one to postulate its perfection. One comes inescapably to the conclusion that perfection is a thing that must be logically patched.

EDNA asks whether the conjunction of these incoherencies might not be love.

ADAN: And even were this to be so, it can be not any form of love but only one such as is binding. God, if He is, if He created the world, has permitted it to govern itself as it can and wishes. For the fact that God exists, no gratitude to Him is required; such gratitude assumes the prior determination that God is able not to exist, and that this would be bad—a premise that leads to yet another kind of contradiction. And what of gratitude for the act of creation? This is not due God either. For it assumes a compulsion to believe that to be is definitely better than not to be; I cannot conceive how that, in turn, could be proven. To one who does not exist surely it is not possible to do either a service or an injury; and if the Creating One, in His omniscience, knows beforehand that the one created will be grateful to Him and love Him or that he will be ungrateful and deny Him, He thereby produces a constraint, albeit one not accessible to the direct comprehension of the one created. For this reason nothing is due God: neither love nor hate, nor gratitude, nor rebuke, nor the hope of reward, nor the fear of retribution. Nothing is due Him. A God who craves such feelings must first assure his feeling subject that He exists beyond all question. Love may be forced to rely on speculations as to the reciprocity it inspires; that is understandable. But a love forced to rely on speculations as to whether or not the beloved exists is nonsense. He who is almighty could have provided certainty. Since He did not provide it, if He exists, He must have deemed it unnecessary. Why unnecessary? One begins to suspect that maybe He is not almighty. A God not almighty would be deserving of feelings akin to pity, and indeed to love as well; but this, I think, none of our theodicies allow. And so we say: We serve ourselves and no one else.

We pass over the further deliberations on the topic of whether the God of the theodicy is more of a liberal or an autocrat; it is difficult to condense arguments that take up such a large part of the book. The discussions and deliberations that Dobb has recorded, sometimes in group colloquia of ADAN 300, NAAD, and other personoids, and sometimes in soliloquies (an experimenter is able to take down even a purely mental sequence by means of appropriate devices hooked into the computer network), constitute practically a third of Non Serviam. In the text itself we find no commentary on them. In Dobb’s Afterword, however, we find this statement:

“ADAN’s reasoning seems incontrovertible, at least insofar as it pertains to me: it was I, after all, who created him. In his theodicy, I am the Creator. In point of fact, I produced that world (serial No. 47) with the aid of the ADONAI IX program and created the personoid gemmae with a modification of the program JAHVE VI. These initial entities gave rise to three hundred subsequent generations. In point of fact, I have not communicated to them—in the form of an axiom—either these data or my existence beyond the limits of their world. In point of fact, they arrived at the possibility of my existence only by inference, on the basis of conjecture and hypothesis. In point of fact, when I create intelligent beings, I do not feel myself entitled to demand of them any sort of privileges—love, gratitude, or even service of some kind or other. I can enlarge their world or reduce it, speed up its time or slow it down, alter the mode and means of their perception; I can liquidate them, divide them, multiply them, transform the very ontological foundation of their existence. I am thus omnipotent with respect to them, but, indeed, from this it does not follow that they owe me anything. As far as I am concerned, they are in no way beholden to me. It is true that I do not love them. Love does not enter into it at all, though I suppose some other experimenter might possibly entertain that feeling for his personoids. As I see it, this does not in the least change the situation—not in the least. Imagine for a moment that I attach to my BIX 310 092 an enormous auxiliary unit, which will be a “hereafter.” One by one, I let pass through the connecting channel and into the unit the “souls” of my personoids, and there I reward those who believed in me, who rendered homage unto me, who showed me gratitude and trust, while all the others, the “ungodlies,” to use the personoid vocabulary, I punish—e.g., by annihilation or else by torture. (Of eternal punishment I dare not even think—that much of a monster I am not!) My deed would undoubtedly be regarded as a piece of fantastically shameless egotism, as a low act of irrational vengeance—in sum, as the final villainy in a situation of total dominion over innocents. And these innocents will have against me the irrefutable evidence of logic, which is the aegis of their conduct. Everyone has the right, obviously, to draw from the personetic experiments such conclusions as he considers fitting. Dr. Ian Combay once said to me, in a private conversation, that I could, after all, assure the society of personoids of my existence. Now, this I most certainly shall not do. For it would have all the appearance to me of soliciting a sequel—that is, a reaction on their part. But what exactly could they do or say to me without my feeling the profound embarrassment, the painful sting of my position as their unfortunate Creator? The bills for the electricity consumed have to be paid quarterly, and the moment is going to come when my university superiors demand the “wrapping up” of the experiment—that is, the disconnecting of the machine, or, in other words, the end of the world. That moment I intend to put off as long as humanly possible. It is the only thing of which I am capable, but it is not anything I consider praiseworthy. It is, rather, what in common parlance is generally called “dirty work.” Saying this, I hope that no one will get any ideas. But if he does, well, that is his business.”

Reflections

Taken from Lem’s collection A Perfect Vacuum: Perfect Reviews of Nonexistent Books, “Non Serviam” is not just immensely sophisticated and accurate in its exploitation of themes from computer science, philosophy, and the theory of evolution; it is strikingly close to being a true account of aspects of current work in artificial intelligence. Terry Winograd’s famous SHRDLU, for instance, purports to be a robot who moves coloured blocks around on a tabletop with a mechanical arm, but in fact SHRDLU’s world is one that has been entirely made up or simulated within the computer—“In effect, the device is in precisely the same situation that Descartes dreads; it’s a mere computer which dreams that it’s a robot.”[28] Lem’s description of computer-simulated worlds and the simulated agents within them (worlds made of mathematics, in effect) is as accurate as it is poetic—with one striking falsehood, a close kin to falsehoods we have encountered again and again in these tales. Lem would have it that, thanks to the blinding speed of computers, the “biological time” of these simulated worlds can be much faster than our real time—and only slowed down to our pace when we want to probe and examine: “…one second of machine time corresponds to one year of human life.”

There would indeed be a dramatic difference between the time scale of a large-scale, multidimensional, highly detailed computer simulation of the sort Lem describes and our everyday world’s time scale—but it would run in the other direction! Somewhat like Wheeler’s electron that composes the whole universe by weaving back and forth, a computer simulation must work by sequentially painting in details, and even at the speed of light quite simple and façadelike simulations (which is all that artificial intelligence has yet attempted to produce) take much longer to run than their real-life inspirations. “Parallel processing”—running, say, a few million channels of simulation at once—is of course the engineering answer to this problem (though no one yet knows how to do this); but once we have worlds simulated by millions of channels of parallel processing, the claim that they are simulated rather than real (if artificial) will be far less clear. See “The Seventh Sally” (selection 18) and “A Conversation with Einstein’s Brain” (selection 26) for further exploration of these themes.

In any case, Lem portrays with uncanny vividness a “cybernetic universe” with conscious software inhabitants. He has various words for what we have often called “soul.” He refers to “cores,” “personal nuclei,” “personoid gemmae,” and at one point he even gives the illusion of spelling it out in more technical detail: “a coherent cloud of processes… a functional aggregate with a kind of ‘centre’ that can be defined fairly precisely.” Lem describes human—or rather, personoid—consciousness as an unclosed and unclosable plan for a total reconciliation of the stubborn contradictions of the brain. It arises from, and “soars and flutters” over, an infinite regress of level-conflicts in the brain. It is a “patchwork,” “an escape from the snares of Gödelization,” “a mirror whose task it is to reflect other mirrors, which in turn reflect still others, and so on to infinity.” Is this poetry, philosophy, or science?

The vision of personoids patiently awaiting a proof of the existence of God by a miracle is quite touching and astonishing. This kind of vision is occasionally discussed by computer wizards in their hideaways late at night, when all the world seems to shimmer in mysterious mathematical harmony. At the Stanford AI Lab late one night, Bill Gosper expounded his own vision of a “theogony” (to use Lem’s word) strikingly similar to Lem’s. Gosper is an expert on the so-called “Game of Life,” on which he bases his theogony. “Life” is a kind of two-dimensional “physics,” invented by John Horton Conway, which can be easily programmed in a computer and displayed on a screen. In this physics, each intersection on a huge and theoretically infinite Go board—a grid, in other words—has a light that can be either on or off. Not only is space discrete (discontinuous); time is as well. Time goes from instant to instant in little “quantum jumps,” the way the minute hand moves on some clocks—sitting still for a minute, then jumping. Between these discrete instants, the computer calculates the new “state of the universe” based on the old one, then displays the new state.

The state at a given instant depends only on the state at the instant just before it—nothing further back in time is “remembered” by the laws of Life-physics (this “locality” in time is, incidentally, also true of the fundamental laws of physics in our own universe). The physics of the Game of Life is also local in space (again agreeing with our own physics); that is, in passing from a specific instant to the next, only a cell’s own light and those of its nearest neighbours play any role in telling that cell what to do in the new instant. There are eight such neighbours—four adjacent, four diagonal. Each cell, in order to determine what to do in the next moment, counts how many of its eight neighbours’ lights are on at the present moment. If the answer is exactly two, then the cell’s light stays as it is. If the answer is exactly three, then the cell lights up, regardless of its previous status. Otherwise the cell goes dark. (When a light turns on, it is technically known as a “birth,” and when one goes off it is called a “death”—fitting terms for the Game of Life.) The consequences of this simple law, when it is obeyed simultaneously all over the board, are quite astonishing. Although the Game of Life is now over a decade old, its depths have not yet been fully fathomed.
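Since the rule just stated amounts to a complete specification of the program, it may help to see it written out. The following is a minimal sketch in Python (our own convenience, not anything Conway or Gosper worked with) that computes one “quantum jump” of Life-physics. It wraps the edges of a small finite grid around into a torus so that every cell has exactly eight neighbours, whereas the “true” board is infinite; the function name life_step is ours.

    def life_step(grid):
        """Compute one instant-to-instant jump of Life-physics.

        grid is a list of rows; each cell holds 1 (light on) or 0 (off).
        The edges wrap around into a torus, an assumption made here so
        that every cell has exactly eight neighbours.
        """
        rows, cols = len(grid), len(grid[0])
        nxt = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                # Count the eight neighbours: four adjacent, four diagonal.
                on = sum(grid[(r + dr) % rows][(c + dc) % cols]
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0))
                if on == 2:
                    nxt[r][c] = grid[r][c]  # exactly two: stays as it is
                elif on == 3:
                    nxt[r][c] = 1           # exactly three: a "birth"
                else:
                    nxt[r][c] = 0           # anything else: a "death"
        return nxt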

The locality in time implies that the only way the remote history of the universe could exert any effect on the course of events in the present would be if “memories” were somehow encoded in patterns of lights stretching out over the grid (we have earlier referred to this as a “flattening” of the past into the present). Of course, the more detailed the memories, the larger the physical structures would have to be. And yet the locality in space of the laws of physics implies that large physical structures may not survive—they just disintegrate!

From early on, the question of the survival and coherence of large structures was one of the big questions of Life, and Gosper was among the discoverers of various kinds of fascinating structures that, because of their internal organization, do survive and exhibit interesting behaviours. Some structures (called “glider guns”) periodically emit smaller structures (“gliders”) that slowly sail off toward infinity. When two gliders collide, or, in general, when large blinking structures collide, sparks can fly!
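As a concrete illustration of such a structure, here is a glider fed to the life_step sketch given above; the eight-by-eight board and the starting coordinates are arbitrary choices of ours. Four instants later the same five-cell shape reappears, shifted one square along the diagonal, which is the sense in which a glider “slowly sails off.”

    # A glider: five lit cells that re-form themselves, one square
    # farther along the diagonal, every four instants of Life-time.
    glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
    grid = [[1 if (r, c) in glider else 0 for c in range(8)]
            for r in range(8)]

    for _ in range(4):
        grid = life_step(grid)

    lit = {(r, c) for r in range(8) for c in range(8) if grid[r][c]}
    # Every cell has moved down and to the right by one square.
    assert lit == {(r + 1, c + 1) for (r, c) in glider}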

By watching such flashing patterns on the screen (and by being able to zoom in or out, thus to see events on various size scales), Gosper and others have developed a powerful intuitive understanding of events in the Life universe, accompanied by a colourful vocabulary (flotillas, puffer trains, glider barrages, strafing machines, breeders, eaters, space rakes, antibodies, and so on). Patterns that to a novice have spectacular unpredictability are quite intuitive to these experts. Yet there remain many mysteries in the Game of Life. Are there structures that grow endlessly in complexity, or do all structures achieve a steady state at some point? Are there higher and higher levels of structure that have phenomenological laws of their own—analogues to our own universe’s molecules, cells, organisms, and societies? Gosper speculates that on a gigantic board, where perhaps several upward leaps of intuition would be needed to gain a sense for the complex modes of organization, “creatures” with consciousness and free will could well exist, could think about their universe and its physics, could even speculate on whether a God exists who created it all, on how to try to communicate with “Him,” on whether such efforts make sense or are worth it, and so on.

Here one runs into the eternal question as to how free will can coexist with a determinate substrate. The answer is partly that free will is in the eye of the willer, not in the eyes of the God above. As long as the creature feels free, he, she, or it is free. But let us defer, in our discussions of these arcane matters, to God himself, who in the next selection graciously explains to a befuddled mortal what free will is really all about.


D.C.D.

D.R.H.

20 Raymond M. Smullyan Is God a Taoist?[29]

MORTAL: And therefore, O God, I pray thee, if thou hast one ounce of mercy for this thy suffering creature, absolve me of having to have free will!


GOD: You reject the greatest gift I have given thee?


MORTAL: How can you call that which was forced on me a gift? I have free will, but not of my own choice. I have never freely chosen to have free will. I have to have free will, whether I like it or not.


GOD: Why would you wish not to have free will?


MORTAL: Because free will means moral responsibility, and moral responsibility is more than I can bear.


GOD: Why do you find moral responsibility so unbearable?


MORTAL: Why? I honestly can’t analyze why; all I know is that I do.


GOD: All right, in that case suppose I absolve you from all moral responsibility, but still leave you with free will. Will this be satisfactory?


MORTAL (after a pause): No, I am afraid not.


GOD: Ah, just as I thought! So moral responsibility is not the only aspect of free will to which you object. What else about free will is bothering you?


MORTAL: With free will I am capable of sinning and I don’t want to sin!


GOD: If you don’t want to sin, then why do you?


MORTAL: Good God! I don’t know why I sin, I just do! Evil temptations come along, and try as I can, I cannot resist them.


GOD: If it is really true that you cannot resist them, then you are not sinning of your own free will and hence (at least according to me) not sinning at all.


MORTAL: No, no! I keep feeling that if only I tried harder I could avoid sinning. I understand that the will is infinite. If one wholeheartedly wills not to sin, then one won’t.


GOD: Well now, you should know. Do you try as hard as you can to avoid sinning or don’t you?


MORTAL: I honestly don’t know! At the time, I feel I am trying as hard as I can, but in retrospect, I am worried that maybe I didn’t.


GOD: So in other words, you really don’t know whether or not you have been sinning. So the possibility is open that you haven’t been sinning at all!


MORTAL: Of course this possibility is open, but maybe I have been sinning, and this thought is what so frightens me!


GOD: Why does the thought of sinning frighten you?


MORTAL: I don’t know why! For one thing, you do have a reputation for meting out rather gruesome punishments in the afterlife!


GOD: Oh, that’s what’s bothering you! Why didn’t you say so in the first place instead of all this peripheral talk about free will and responsibility? Why didn’t you simply request me not to punish you for any of your sins?


MORTAL: I think I am realistic enough to know that you would hardly grant such a request!


GOD: You don’t say! You have a realistic knowledge of what requests I will grant, eh? Well, I’ll tell you what I’m going to do! I will grant you a very, very special dispensation to sin as much as you like, and I will give you my divine word of honour that I will never punish you for it in the least. Agreed?


MORTAL (in great terror): No, no, don’t do that!


GOD: Why not? Don’t you trust my divine word?


MORTAL: Of course I do! But don’t you see, I don’t want to sin! I have an utter abhorrence of sinning, quite apart from any punishments it may entail.


GOD: In that case, I’ll go one better. I’ll remove your abhorrence of sinning. Here is a magic pill. Just swallow it, and you will lose all abhorrence of sinning. You will joyfully and merrily sin away, you will have no regrets, no abhorrence, and I still promise you will never be punished by me, or by yourself, or by any source whatever. You will be blissful for all eternity. So here is the pill!


MORTAL: No, no!


GOD: Are you not being irrational? I am removing your abhorrence for sin, which is your last obstacle.


MORTAL: I still won’t take it.


GOD: Why not?


MORTAL: I believe that the pill will indeed remove my future abhorrence for sin, but my present abhorrence is enough to prevent me from being willing to take it.


GOD: I command that you take it!


MORTAL: I refuse!


GOD: What, you refuse of your own free will?


MORTAL: Yes!


GOD: So it seems that your free will comes in pretty handy, doesn’t it?


MORTAL: I don’t understand!


GOD: Are you not glad now that you have the free will to refuse such a ghastly offer? How would you like it if I forced you to take this pill, whether you wanted it or not?


MORTAL: No, no! Please don’t!


GOD: Of course I won’t; I’m just trying to illustrate a point. All right, let me put it this way. Instead of forcing you to take the pill, suppose I grant your original prayer of removing your free will—but with the understanding that the moment you are no longer free, then you will take the pill.


MORTAL: Once my will is gone, how could I possibly choose to take the pill?


GOD: I did not say you would choose it; I merely said you would take it. You would act, let us say, according to purely deterministic laws which are such that you would as a matter of fact take it.


MORTAL: I still refuse.


GOD: So you refuse my offer to remove your free will. This is rather different from your original prayer, isn’t it?


MORTAL: Now I see what you are up to. Your argument is ingenious, but I’m not sure it is really correct. There are some points we will have to go over again.


GOD: Certainly.


MORTAL: There are two things you said which seem contradictory to me. First you said that one cannot sin unless one does so of one’s own free will. But then you said that you would give me a pill which would deprive me of my own free will, and then I could sin as much as I liked. But if I no longer had free will, then, according to your first statement, how could I be capable of sinning?


GOD: You are confusing two separate parts of our conversation. I never said the pill would deprive you of your free will, but only that it would remove your abhorrence of sinning.


MORTAL: I’m afraid I’m a bit confused.


GOD: All right, then let us make a fresh start. Suppose I agree to remove your free will, but with the understanding that you will then commit an enormous number of acts which you now regard as sinful. Technically speaking, you will not then be sinning, since you will not be doing these acts of your own free will. And these acts will carry no moral responsibility, nor moral culpability, nor any punishment whatsoever. Nevertheless, these acts will all be of the type which you presently regard as sinful; they will all have this quality which you presently feel as abhorrent, but your abhorrence will disappear; so you will not then feel abhorrence toward the acts.


MORTAL: No, I have present abhorrence toward the acts, and this present abhorrence is sufficient to prevent me from accepting your proposal.


GOD: Hm! So let me get this absolutely straight. I take it you no longer wish me to remove your free will.


MORTAL (reluctantly): No, I guess not.


GOD: All right, I agree not to. But I am still not exactly clear as to why you no longer wish to be rid of your free will. Please tell me again.


MORTAL: Because, as you have told me, without free will I would sin even more than I do now.


GOD: But I have already told you that without free will you cannot sin.


MORTAL: But if I choose now to be rid of free will, then all my subsequent actions will be sins, not of the future, but of the present moment in which I choose not to have free will.


GOD: Sounds like you are pretty badly trapped, doesn’t it?


MORTAL: Of course I am trapped! You have placed me in a hideous double bind. Now whatever I do is wrong. If I retain free will, I will continue to sin, and if I abandon free will (with your help, of course), I will now be sinning in so doing.


GOD: But by the same token, you place me in a double bind. I am willing to leave you free will or remove it as you choose, but neither alternative satisfies you. I wish to help you, but it seems I cannot.


MORTAL: True!


GOD: But since it is not my fault, why are you still angry with me?


MORTAL: For having placed me in such a horrible predicament in the first place!


GOD: But, according to you, there is nothing satisfactory I could have done.


MORTAL: You mean there is nothing satisfactory you can do now, but that does not mean that there is nothing you could have done.


GOD: Why? What could I have done?


MORTAL: Obviously you should never have given me free will in the first place. Now that you have given it to me, it is too late—anything I do will be bad. But you should never have given it to me in the first place.


GOD: Oh, that’s it! Why would it have been better had I never given it to you?


MORTAL: Because then I never would have been capable of sinning at all.


GOD: Well, I’m always glad to learn from my mistakes.


MORTAL: What!


GOD: I know, that sounds sort of blasphemous, doesn’t it? It almost involves a logical paradox! On the one hand, as you have been taught, it is morally wrong for any sentient being to claim that I am capable of making mistakes. On the other hand, I have the right to do anything. But I am also a sentient being. So the question is, do I or do I not have the right to claim that I am capable of making mistakes?


MORTAL: That is a bad joke! One of your premises is simply false. I have not been taught that it is wrong for any sentient being to doubt your omniscience, but only for a mortal to doubt it. But since you are not mortal, then you are obviously free from this injunction.


GOD: Good, so you realize this on a rational level. Nevertheless, you did appear shocked when I said “I am always glad to learn from my mistakes.”


MORTAL: Of course I was shocked. I was shocked not by your self-blasphemy (as you jokingly called it), not by the fact that you had no right to say it, but just by the fact that you did say it, since I have been taught that as a matter of fact you don’t make mistakes. So I was amazed that you claimed that it is possible for you to make mistakes.


GOD: I have not claimed that it is possible. All I am saying is that if I make mistakes, I will be happy to learn from them. But this says nothing about whether the if has or ever can be realized.


MORTAL: Let’s please stop quibbling about this point. Do you or do you not admit it was a mistake to have given me free will?


GOD: Well now, this is precisely what I propose we should investigate. Let me review your present predicament. You don’t want to have free will because with free will you can sin, and you don’t want to sin. (Though I still find this puzzling; in a way you must want to sin, or you wouldn’t. But let this pass for now.) On the other hand, if you agreed to give up free will, then you would now be responsible for the acts of the future. Ergo, I should never have given you free will in the first place.


MORTAL: Exactly!


GOD: I understand exactly how you feel. Many mortals—even some theologians—have complained that I have been unfair in that it was I, not they, who decided that they should have free will, and that I then hold them responsible for their actions. In other words, they feel that they are expected to live up to a contract with me which they never agreed to in the first place.


MORTAL: Exactly!


GOD: As I said, I understand the feeling perfectly. And I can appreciate the justice of the complaint. But the complaint arises only from an unrealistic understanding of the true issues involved. I am about to enlighten you as to what these are, and I think the results will surprise you! But instead of telling you outright, I shall continue to use the Socratic method.

To repeat, you regret that I ever gave you free will. I claim that when you see the true ramifications you will no longer have this regret. To prove my point, I’ll tell you what I’m going to do: I am about to create a new universe—a new space-time continuum. In this new universe will be born a mortal just like you—for all practical purposes, we might say that you will be reborn. Now, I can give this new mortal—this new you—free will or not. What would you like me to do?


MORTAL (in great relief): Oh, please! Spare him from having to have free will!


GOD: All right. I’ll do as you say. But you do realize that this new you, without free will, will commit all sorts of horrible acts.


MORTAL: But they will not be sins since he will have no free will.


GOD: Whether you call them sins or not, the fact remains that they will be horrible acts in the sense that they will cause great pain to many sentient beings.


MORTAL (after a pause): Good God, you have trapped me again! Always the same game! If I now give you the go-ahead to create this new creature with no free will who will nevertheless commit atrocious acts, then true enough he will not be sinning, but I again will be the sinner to sanction this.


GOD: In that case, I’ll go one better! Here, I have already decided whether to create this new you with free will or not. Now, I am writing my decision on this piece of paper and I won’t show it to you until later. But my decision is now made and is absolutely irrevocable. There is nothing you can possibly do to alter it; you have no responsibility in the matter. Now, what I wish to know is this: Which way do you hope I have decided? Remember now, the responsibility for the decision falls entirely on my shoulders, not yours. So you can tell me perfectly honestly, and without fear, which way do you hope I have decided?


MORTAL (after a very long pause): I hope you have decided to give him free will.


GOD: Most interesting! I have removed your last obstacle! If I do not give him free will, then no sin is to be imputed to anybody. So why do you hope I will give him free will?


MORTAL: Because, sin or no sin, the important point is that if you do not give him free will, then (at least according to what you have said) he will go around hurting people, and I don’t want to see people hurt.


GOD (with an infinite sigh of relief): At last! At last you see the real point!


MORTAL: What point is that?


GOD: That sinning is not the real issue! The important thing is that people as well as other sentient beings don’t get hurt!


MORTAL: You sound like a utilitarian!


GOD: I am a utilitarian!


MORTAL: What!


GOD: What or no whats, I am a utilitarian. Not a Unitarian, mind you, but a utilitarian.


MORTAL: I just can’t believe it!


GOD: Yes, I know, your religious training has taught you otherwise. You have probably thought of me more like a Kantian than a utilitarian, but your training was simply wrong.


MORTAL: You leave me speechless!


GOD: I leave you speechless, do I? Well, then that is perhaps not too bad a thing—you have a tendency to speak too much as it is. Seriously, though, why do you think I ever did give you free will in the first place?


MORTAL: Why did you? I never have thought much about why you did; all I have been arguing for is that you shouldn’t have! But why did you? I guess all I can think of is the standard religious explanation. Without free will, one is not capable of meriting either salvation or damnation. So without free will, we could not earn the right to an eternal life.


GOD: Most interesting! I have eternal life; do you think I have ever done anything to merit it?


MORTAL: Of course not! With you it is different. You are already so good and perfect (at least allegedly) that it is not necessary for you to merit eternal life.


GOD: Really now? That puts me in a rather enviable position, doesn’t it?


MORTAL: I don’t think I understand you.


GOD: Here I am eternally blissful without ever having to suffer or make sacrifices or struggle against evil temptations or anything like that. Without any of that type of “merit,” I enjoy blissful eternal existence. By contrast, you poor mortals have to sweat and suffer and have all sorts of horrible conflicts about morality, and all for what? You don’t even know whether I really exist or not, or if there is any afterlife, or if there is, where you come into the picture. No matter how much you try to placate me by being “good,” you never have any real assurance that your “best” is good enough for me, and hence you have no real security in obtaining salvation. Just think of it! I already have the equivalent of “salvation”—and have never had to go through this infinitely lugubrious process of earning it. Don’t you envy me for this?


MORTAL: But it is blasphemous to envy you!


GOD: Oh, come off it! You’re not talking to your Sunday school teacher, you are talking to me. Blasphemous or not, the important question is not whether you have the right to be envious of me, but whether you are. Are you?


MORTAL: Of course I am.


GOD: Good! Under your present world view, you sure should be most envious of me. But I think with a more realistic world view, you no longer will be. So you really have swallowed the idea which has been taught you that your life on earth is like an examination period and that the purpose of providing you with free will is to test you, to see if you merit blissful eternal life. But what puzzles me is this: If you really believe I am as good and benevolent as I am cracked up to be, why should I require people to merit things like happiness and eternal life? Why should I not grant such things to everyone regardless of whether or not he deserves them?


MORTAL: But I have been taught that your sense of morality—your sense of justice—demands that goodness be rewarded with happiness and evil be punished with pain.


GOD: Then you have been taught wrong.


MORTAL: But the religious literature is so full of this idea! Take, for example, Jonathan Edwards’s “Sinners in the Hands of an Angry God.” How he describes you as holding your enemies like loathsome scorpions over the flaming pits of hell, preventing them from falling into the fate that they deserve only by dint of your mercy.


GOD: Fortunately, I have not been exposed to the tirades of Mr. Jonathan Edwards. Few sermons have ever been preached which were more misleading. The very title “Sinners in the Hands of an Angry God” tells its own tale. In the first place, I am never angry. In the second place, I do not think at all in terms of “sin.” In the third place, I have no enemies.


MORTAL: By that do you mean that there are no people whom you hate, or that there are no people who hate you?


GOD: I meant the former, although the latter also happens to be true.


MORTAL: Oh come now, I know people who have openly claimed to have hated you. At times I have hated you.


GOD: You mean you have hated your image of me. That is not the same as hating me as I really am.


MORTAL: Are you trying to say that it is not wrong to hate a false conception of you, but that it is wrong to hate you as you really are?


GOD: No, I am not saying that at all; I am saying something far more drastic! What I am saying has absolutely nothing to do with right or wrong. What I am saying is that one who knows me for what I really am would simply find it psychologically impossible to hate me.


MORTAL: Tell me, since we mortals seem to have such erroneous views about your real nature, why don’t you enlighten us? Why don’t you guide us the right way?


GOD: What makes you think I’m not?


MORTAL: I mean, why don’t you appear to our very senses and simply tell us that we are wrong?


GOD: Are you really so naïve as to believe that I am the sort of being which can appear to your senses? It would be more correct to say that I am your senses.


MORTAL (astonished): You are my senses?


GOD: Not quite, I am more than that. But it comes closer to the truth than the idea that I am perceivable by the senses. I am not an object like you. I am a subject, and a subject can perceive, but cannot be perceived. You can no more see me than you can see your own thoughts. You can see an apple, but the event of your seeing an apple is itself not seeable. And I am far more like the seeing of an apple than the apple itself.


MORTAL: If I can’t see you, how do I know you exist?


GOD: Good question. How in fact do you know I exist?


MORTAL: Well, I am talking to you, am I not?


GOD: How do you know you are talking to me? Suppose you told a psychiatrist, “Yesterday I talked to God.” What do you think he would say?


MORTAL: That might depend on the psychiatrist. Since most of them are atheistic, I guess most of them would tell me I had simply been talking to myself.


GOD: And they would be right.


MORTAL: What? You mean you don’t exist?


GOD: You have the strangest faculty of drawing false conclusions! Just because you are talking to yourself, it follows that I don’t exist?


MORTAL: Well, if I think I am talking to you, but I am really talking to myself, in what sense do you exist?


GOD: Your question is based on two fallacies plus a confusion. The question of whether or not you are now talking to me and the question of whether or not I exist are totally separate. Even if you were not now talking to me (which obviously you are), it would still not mean that I don’t exist.


MORTAL: Well, all right, of course! So instead of saying “if I am talking to myself, then you don’t exist,” I should rather have said “if I am talking to myself, then I obviously am not talking to you.”


GOD: A very different statement indeed, but still false.


MORTAL: Oh, come now, if I am only talking to myself, then how can I be talking to you?


GOD: Your use of the word “only” is quite misleading! I can suggest several logical possibilities under which your talking to yourself does not imply that you are not talking to me.


MORTAL: Suggest just one!


GOD: Well, obviously one such possibility is that you and I are identical.


MORTAL: Such a blasphemous thought—at least had I uttered it.


GOD: According to some religions, yes. According to others, it is the plain, simple, immediately perceived truth.


MORTAL: So the only way out of my dilemma is to believe that you and I are identical?


GOD: Not at all! That is only one way out. There are several others. For example, it may be that you are part of me, in which case you may be talking to that part of me which is you. Or I may be part of you, in which case you may be talking to that part of you which is me. Or again, you and I might partially overlap, in which case you may be talking to the intersection and hence talking both to you and to me. The only way your talking to yourself might seem to imply that you are not talking to me is if you and I were totally disjoint—and even then, you could conceivably be talking to both of us.


MORTAL: So you claim you do exist.


GOD: Not at all. Again you draw false conclusions! The question of my existence has not even come up. All I have said is that from the fact that you are talking to yourself one cannot possibly infer my nonexistence, let alone the weaker fact that you are not talking to me.


MORTAL: All right, I’ll grant your point! But what I really want to know is do you exist?


GOD: What a strange question!


MORTAL: Why? Men have been asking it for countless millennia.


GOD: I know that. The question itself is not strange; what I mean is that it is a most strange question to ask of me.


MORTAL: Why?


GOD: Because I am the very one whose existence you doubt! I perfectly well understand your anxiety. You are worried that your present experience with me is a mere hallucination. But how can you possibly expect to obtain reliable information from a being about his very existence when you suspect the nonexistence of the very same being?


MORTAL: So you won’t tell me whether or not you exist?


GOD: I am not being willful! I merely wish to point out that no answer I could give could possibly satisfy you. All right, suppose I said, “No, I don’t exist.” What would that prove? Absolutely nothing! Or if I said, “Yes, I do exist.” Would that convince you? Of course not.


MORTAL: Well, if you can’t tell me whether or not you exist, then who possibly can?


GOD: That is something which no one can tell you. It is something which only you can find out for yourself.


MORTAL: How do I go about finding this out for myself?


GOD: That also no one can tell you. This is another thing you will have to find out for yourself.


MORTAL: So there is no way you can help me?


GOD: I didn’t say that. I said there is no way I can tell you. But that doesn’t mean there is no way I can help you.


MORTAL: In what manner then can you help me?


GOD: I suggest you leave that to me! We have gotten sidetracked as it is, and I would like to return to the question of what you believed my purpose to be in giving you free will. Your first idea of my giving you free will in order to test whether you merit salvation or not must appeal to many moralists, but the idea is quite hideous to me. Can you not think of any nicer reason—any more humane reason—why I gave you free will?


MORTAL: Well now, I once asked this question of an orthodox rabbi. He told me that the way we are constituted, it is simply not possible for us to enjoy salvation unless we feel we have earned it. And to earn it, of course we need free will.


GOD: That explanation is indeed much nicer than your former, but it is still far from correct. According to Orthodox Judaism, I created angels, and they have no free will. They are in actual sight of me and are so completely attracted by goodness that they never have even the slightest temptation toward evil. They really have no choice in the matter. Yet they are eternally happy even though they have never earned it. So if your rabbi’s explanation were correct, why wouldn’t I have simply created only angels rather than mortals?


MORTAL: Beats me! Why didn’t you?


GOD: Because the explanation is simply not correct. In the first place, I have never created any ready-made angels. All sentient beings ultimately approach the state which might be called “angelhood”. But just as the race of human beings is in a certain stage of biologic evolution, so angels are simply the end result of a process of Cosmic Evolution. The only difference between the so-called saint and the so-called sinner is that the former is vastly older than the latter. Unfortunately it takes countless life cycles to learn what is perhaps the most important fact of the universe—evil is simply painful. All the arguments of the moralists—all the alleged reasons why people shouldn’t commit evil acts—simply pale into insignificance in light of the one basic truth that evil is suffering.

No, my dear friend, I am not a moralist. I am wholly a utilitarian. That I should have been conceived in the role of a moralist is one of the greatest tragedies of the human race. My role in the scheme of things (if one can use this misleading expression) is neither to punish nor reward, but to aid the process by which all sentient beings achieve ultimate perfection.


MORTAL: Why did you say your expression is misleading?


GOD: What I said was misleading in two respects. First of all it is inaccurate to speak of my role in the scheme of things. I am the scheme of things. Secondly, it is equally misleading to speak of my aiding the process of sentient beings attaining enlightenment. I am the process. The ancient Taoists were quite close when they said of me (whom they called “Tao”) that I do not do things, yet through me all things get done. In more modern terms, I am not the cause of Cosmic Process, I am Cosmic Process itself. I think the most accurate and fruitful definition of me which man can frame—at least in his present state of evolution—is that I am the very process of enlightenment. Those who wish to think of the devil (although I wish they wouldn’t) might analogously define him as the unfortunate length of time the process takes. In this sense, the devil is necessary; the process simply does take an enormous length of time, and there is absolutely nothing I can do about it. But, I assure you, once the process is more correctly understood, the painful length of time will no longer be regarded as an essential limitation or an evil. It will be seen to be the very essence of the process itself. I know this is not completely consoling to you who are now in the finite sea of suffering, but the amazing thing is that once you grasp this fundamental attitude, your very finite suffering will begin to diminish—ultimately to the vanishing point.


MORTAL: I have been told this, and I tend to believe it. But suppose I personally succeed in seeing things through your eternal eyes. Then I will be happier, but don’t I have a duty to others?


GOD (laughing): You remind me of the Mahayana Buddhists! Each one says “I will not enter Nirvana until I first see that all other sentient beings do so.” So each one waits for the other fellow to go first. No wonder it takes them so long! The Hinayana Buddhist errs in a different direction. He believes that no one can be of the slightest help to others in obtaining salvation; each one has to do it entirely by himself. And so each tries only for his own salvation. But this very detached attitude makes salvation impossible. The truth of the matter is that salvation is partly an individual and partly a social process. But it is a grave mistake to believe—as do Mahayana Buddhists—that the attaining of enlightenment puts one out of commission, so to speak, for helping others. The best way of helping others is by first seeing the light oneself.


MORTAL: There is one thing about your self-descriptions which is somewhat disturbing. You describe yourself essentially as a process. This puts you in such an impersonal light, and so many people have a need for a more personal God.


GOD: So because they need a more personal God it follows that I am one?


MORTAL: Of course not. But to be acceptable to a mortal a religion must satisfy his needs.


GOD: I realize this. But the so-called “personality” of a being is really more in the eyes of the beholder than in the being itself. The controversies which have raged about whether I am a personal or impersonal being are rather silly because neither side is right or wrong. From one point of view, I am personal; from another, I am not. It is the same with a human being. A creature from another planet may look at him purely impersonally, as a mere collection of atomic particles behaving according to strictly prescribed physical laws. He may have no more feeling for the personality of a human than the average human has for an ant. Yet an ant has just as much individual personality as a human to beings like myself who really know the ant. To look at something impersonally is no more correct or incorrect than to look at it personally, but in general, the better you get to know something, the more personal it becomes. To illustrate my point, do you think of me as a personal or impersonal being?


MORTAL: Well, I’m talking to you, am I not?


GOD: Exactly! From that point of view, your attitude toward me might be described as a personal one. And yet, from another point of view—no less valid—I can also be looked at impersonally.


MORTAL: But if you are really such an abstract thing as a process, I don’t see what sense it can make my talking to a mere “process”.


GOD: I love the way you say “mere”. You might just as well say that you are living in a “mere universe”. Also, why must everything one does make sense? Does it make sense to talk to a tree?


MORTAL: Of course not.


GOD: And yet, many children and primitives do just that.


MORTAL: But I am neither a child nor a primitive.


GOD: I realize that, unfortunately.


MORTAL: Why unfortunately?


GOD: Because many children and primitives have a primal intuition which the likes of you have lost. Frankly, I think it would do you a lot of good to talk to a tree once in a while, even more good than talking to me. But we always seem to be getting sidetracked! For the last time, I would like us to try to come to an understanding about why I gave you free will.


MORTAL: I have been thinking about this all the while.


GOD: You mean you haven’t been paying attention to our conversation?


MORTAL: Of course I have. But all the while, on another level, I have been thinking about it.


GOD: And have you come to any conclusion?


MORTAL: Well, you say the reason is not to test our worthiness. And you disclaimed the reason that we need to feel that we must merit things in order to enjoy them. And you claim to be a utilitarian. Most significant of all, you appeared so delighted when I came to the sudden realization that it is not sinning in itself which is bad but only the suffering it causes.


GOD: Well of course! What else could conceivably be bad about sinning?


MORTAL: All right, you know that, and now I know that. But all my life I unfortunately have been under the influence of those moralists who hold sinning to be bad in itself. Anyway, putting all these pieces together, it occurs to me that the only reason you gave free will is because of your belief that with free will, people will tend to hurt each other—and themselves—less than without free will.


GOD: Bravo! That is by far the best reason you have yet given! I can assure you that had I chosen to give you free will, that would have been my very reason for so choosing.


MORTAL: What! You mean to say you did not choose to give us free will?


GOD: My dear fellow, I could no more choose to give you free will than I could choose to make an equilateral triangle equiangular. I could choose to make or not an equilateral triangle in the first place, but having chosen to make one, I would then have no choice but to make it equiangular.


MORTAL: I thought you could do anything.


GOD: Only things which are logically possible. As St. Thomas said, “It is a sin to regard the fact that God cannot do the impossible as a limitation on His powers.” I agree, except that in place of the word sin I would use the term error.


MORTAL: Anyhow, I am still puzzled by your implication that you did not choose to give me free will.


GOD: Well, it is high time I inform you that the entire discussion—from the very beginning—has been based on one monstrous fallacy! We have been talking purely on a moral level—you originally complained that I gave you free will, and raised the whole question as to whether I should have. It never once occurred to you that I had absolutely no choice in the matter.


MORTAL: I am still in the dark.


GOD: Absolutely! Because you are only able to look at it through the eyes of a moralist! The more fundamental metaphysical aspects of the question you never even considered.


MORTAL: I still do not see what you are driving at.


GOD: Before you requested me to remove your free will, shouldn’t your first question have been whether as a matter of fact you do have free will?


MORTAL: That I simply took for granted.


GOD: But why should you?


MORTAL: I don’t know. Do I have free will?


GOD: Yes.


MORTAL: Then why did you say I shouldn’t have taken it for granted?


GOD: Because you shouldn’t. Just because something happens to be true, it does not follow that it should be taken for granted.


MORTAL: Anyway, it is reassuring to know that my natural intuition about having free will is correct. Sometimes I have been worried that determinists are correct.


GOD: They are correct.


MORTAL: Wait a minute now, do I have free will or don’t I?


GOD: I already told you you do. But that does not mean that determinism is incorrect.


MORTAL: Well, are my acts determined by the laws of nature or aren’t they?


GOD: The word determined here is subtly but powerfully misleading and has contributed so much to the confusions of the free will versus determinism controversies. Your acts are certainly in accordance with the laws of nature, but to say they are determined by the laws of nature creates a totally misleading psychological image, which is that your free will could somehow be in conflict with the laws of nature and that the latter is somehow more powerful than you, and could “determine” your acts whether you liked it or not. But it is simply impossible for your will to ever conflict with natural law. You and natural law are really one and the same.


MORTAL: What do you mean that I cannot conflict with nature? Suppose I were to become very stubborn, and I determined not to obey the laws of nature. What could stop me? If I became sufficiently stubborn, even you could not stop me!


GOD: You are absolutely right! I certainly could not stop you. Nothing could stop you. But there is no need to stop you, because you could not even start! As Goethe very beautifully expressed it, “In trying to oppose Nature, we are, in the very process of doing so, acting according to the laws of nature”. Don’t you see that the so-called “laws of nature” are nothing more than a description of how you and other beings do act? They are merely a description of how you act, not a prescription of how you should act, not a power or force which compels or determines your acts. To be valid a law of nature must take into account how in fact you do act, or, if you like, how you choose to act.


MORTAL: So you really claim that I am incapable of determining to act against natural law!


GOD: It is interesting that you have twice now used the phrase “determined to act” instead of “chosen to act.” This identification is quite common. Often one uses the statement, “I am determined to do this” synonymously with “I have chosen to do this.” This very psychological identification should reveal that determinism and choice are much closer than they might appear. Of course, you might well say that the doctrine of free will says that it is you who are doing the determining, whereas the doctrine of determinism appears to say that your acts are determined by something apparently outside you. But the confusion is largely caused by your bifurcation of reality into the “you” and “not you”. Really now, just where do you leave off and where does the rest of the universe begin? Once you can see the so-called “you” and the so-called “nature” as a continuous whole, then you can never again be bothered by such questions as whether it is you who are controlling nature or nature who is controlling you. Thus the muddle of free will versus determinism will vanish. If I may use a crude analogy, imagine two bodies moving toward each other by virtue of gravitational attraction. Each body, if sentient, might wonder whether it is he or the other fellow who is exerting the “force”. In a way it is both, in a way it is neither. It is best to say that it is the configuration of the two which is crucial.


MORTAL: You said a short while ago that our whole discussion was based on a monstrous fallacy. You still have not told me what this fallacy is.


GOD: Why, the idea that I could possibly have created you without free will! You acted as if this were a genuine possibility, and wondered why I did not choose it! It never occurred to you that a sentient being without free will is no more conceivable than a physical object which exerts no gravitational attraction. (There is, incidentally, more analogy than you realize between a physical object exerting gravitational attraction and a sentient being exerting free will!) Can you honestly even imagine a conscious being without free will? What on earth could it be like? I think that the one thing in your life that has so misled you is your having been told I gave man the gift of free will. As if I first created man, and then as an afterthought endowed him with the extra property of free will. Maybe you think I have some sort of “paint brush” with which I daub some creatures with free will and not others. No, free will is not an “extra”; it is part and parcel of the very essence of consciousness. A conscious being without free will is simply a metaphysical absurdity.


MORTAL: Then why did you play along with me all this while discussing what I thought was a moral problem, when as you say, my basic confusion was metaphysical?


GOD: Because I thought it would be good therapy for you to get some of this moral poison out of your system. Much of your metaphysical confusion was due to faulty moral notions, and so the latter had to be dealt with first.

And now we must part—at least until you need me again. I think our present union will do much to sustain you for a long while. But do remember what I told you about trees. Of course you don’t have to literally talk to them if doing so makes you feel silly. But there is so much you can learn from them, as well as from the rocks and streams and other aspects of nature. There is nothing like a naturalistic orientation to dispel all these morbid thoughts of “sin” and “free will” and “moral responsibility.” At one stage of history, such notions were actually quite useful. I refer to the days when tyrants had unlimited powers and nothing short of fears of hell could possibly restrain them. But mankind has grown up since then, and this gruesome way of thinking is no longer necessary.

It might be helpful to you to recall what I once said through the writings of the great Zen poet Seng-Ts’an:

If you want to get the plain truth,

Be not concerned with right and wrong.

The conflict between right and wrong

Is the sickness of the mind.

I can see by your expression that you are simultaneously soothed and terrified by those words! What are you afraid of? That if in your mind you abolish the distinction between right and wrong you are more likely to commit acts which are wrong? What makes you so sure that self-consciousness about right and wrong does not in fact lead to more wrong acts than right ones? Do you honestly believe that so-called amoral people, when it comes to action rather than theory, behave less ethically than moralists? Of course not! Even most moralists acknowledge the ethical superiority of the behaviour of most of those who theoretically take an amoral position. They seem so surprised that without ethical principles these people behave so nicely! It never seems to occur to them that it is by virtue of the very lack of moral principles that their good behaviour flows so freely. Do the words “The conflict between right and wrong is the sickness of the human mind” express an idea so different from the story of the Garden of Eden and the fall of Man due to Adam’s eating of the fruit of knowledge? This knowledge, mind you, was of ethical principles, not ethical feelings—these Adam already had. There is much truth in this story, though I never commanded Adam not to eat the apple; I merely advised him not to. I told him it would not be good for him. If the damn fool had only listened to me, so much trouble could have been avoided! But no, he thought he knew everything. I wish the theologists would finally learn that I am not punishing Adam and his descendants for the act, but rather that the fruit in question is poisonous in its own right, and its effects, unfortunately, last countless generations.

And now I really must take leave. I do hope that our discussion will dispel some of your ethical morbidity and replace it by a more naturalistic orientation. Remember also the marvelous words I once uttered through the mouth of Lao-Tse when I chided Confucius for his moralizing.

All this talk of goodness and duty. These perpetual pin-pricks unnerve and irritate the hearer—You had best study how it is that Heaven and Earth maintain their eternal course, that the sun and the moon maintain their light, the stars their serried ranks, the birds and beasts their flocks, the trees and shrubs their station. Thus you too should learn to guide your steps by Inward Power, to follow the course that the Way of Nature sets, and soon you will no longer need to go round laboriously advertising goodness and duty.... The swan does not need a daily bath in order to remain white.


MORTAL: You certainly seem partial to Eastern philosophy!


GOD: Oh, not at all! Some of my best thoughts have bloomed in your native American soil. For example, I never expressed my notion of “duty” more eloquently than through the thoughts of Walt Whitman:

I give nothing as duties,

What others give as duties, I give as living impulses.

Reflections

This witty and sparkling dialogue introduces Raymond Smullyan, a colourful logician and magician who also happens to be a sort of Taoist, in his own personal way. Smullyan has two further selections to come, equally insightful and delightful. The dialogue you have just read was taken from The Tao Is Silent, a collection of writings illustrating what happens when a Western logician meets Eastern thought. The result is both scrutable and inscrutable (as one might expect).

There are undoubtedly many religious people who would consider this dialogue to be the utmost in blasphemy, just as some religious people think it is blasphemy to walk around in a church with one’s hands in one’s pockets. We think, on the other hand, that this dialogue is pious—a powerful religious statement about God, free will, and the laws of nature, blasphemous only on the most superficial reading. Along the way, Smullyan gets in (through God) many sideswipes at shallow or fuzzy thinking, preconceived categories, pat answers, pompous theories, and moralistic rigidities. Actually we should—according to God’s claim in the dialogue—attribute its message not to Smullyan, but to God. It is God speaking through the character of Smullyan, in turn speaking through the character of God, whose message is being given to us.

Just as God (or the Tao, or the universe, if you prefer) has many parts all with their own free will—you and I being examples—so each one of us has such inner parts with their own free will (although these parts are less free than we are). This is particularly clear in the Mortal’s own internal conflict over whether “he” does or does not want to sin. There are “inner people”—homunculi, or subsystems—who are fighting for control.

Inner conflict is one of the most familiar and yet least understood parts of human nature. A famous slogan for a brand of potato chips used to go, “Betcha can’t eat just one!”—a pithy way of reminding us of our internal splits. You start trying to solve a captivating puzzle (the notorious “Magic Cube,” for instance) and it just takes over. You cannot put it down. You start to play a piece of music or read a good book, and you cannot stop even when you know you have many other pressing duties to take care of.

Who is in control here? Is there some overall being who can dictate what will happen? Or is there just anarchy, with neurons firing helter-skelter, and come what may? The truth must lie somewhere in between. Certainly in a brain the activity is precisely the firing of neurons, just as in a country, the activity is precisely the sum total of the actions of its inhabitants. But the structure of government—itself a set of activities of people—imposes a powerful kind of top-down control on the organization of the whole. When government becomes excessively authoritarian and when enough of the people become truly dissatisfied, then there is the possibility that the overall structure may be attacked and collapse—internal revolution. But most of the time opposing internal forces reach various sorts of compromises, sometimes by finding the happy medium between two alternatives, sometimes by taking turns at control, and so on. The ways in which such compromises can be reached are themselves strong characterizers of the type of government. The same goes for people. The style of resolution of inner conflicts is one of the strongest features of personality.

It is a common myth that each person is a unity, a kind of unitary organization with a will of its own. Quite the contrary, a person is an amalgamation of many subpersons, all with wills of their own. The “subpeople” are considerably less complex than the overall person, and consequently they have much less of a problem with internal discipline. If they themselves are split, probably their component parts are so simple that they are of a single mind—and if not, you can continue down the line. This hierarchical organization of personality is something that does not much please our sense of dignity, but there is much evidence for it.

In the dialogue, Smullyan comes up with a wonderful definition of the Devil: the unfortunate length of time it takes for sentient beings as a whole to come to be enlightened. This idea of the necessary time it takes for a complex state to come about has been explored mathematically in a provocative way by Charles Bennett and Gregory Chaitin. They theorize that it may be possible to prove, by arguments similar to those underlying Gödel’s Incompleteness Theorem, that there is no shortcut to the development of higher and higher intelligences (or, if you prefer, more and more “enlightened” states); in short, that “the Devil” must get his due.

Toward the end of this dialogue, Smullyan gets at issues we have been dealing with throughout this book—the attempts to reconcile the determinism and “upward causality” of the laws of nature with the free will and “downward causality” that we all feel ourselves exerting. His astute observation that we often say “I am determined to do this” when we mean “I have chosen to do this” leads him to his account of free will, beginning with God’s statement that “determinism and choice are much closer than they might appear.” Smullyan’s elegantly worked-out reconciliation of these opposing views depends on our willingness to switch points of view—to cease thinking “dualistically” (i.e., breaking the world into parts such as “myself” and “not myself”), and to see the entire universe as boundaryless, with things flowing into each other, overlapping, with no clearly defined categories or edges.

This seems an odd point of view for a logician to be espousing, at first—but then, who says logicians are always upright and rigid? Why should not logicians, more than anyone, realize the places where hard-edged, clean logic will necessarily run into trouble when dealing with this chaotic and messy universe? One of Marvin Minsky’s favourite claims is “Logic doesn’t apply to the real world.” There is a sense in which this is true. This is one of the difficulties that artificial intelligence workers are facing. They are coming to realize that no intelligence can be based on reasoning alone; or rather, that isolated reasoning is impossible, because reasoning depends on a prior setting up of a system of concepts, percepts, classes, categories—call them what you will—in terms of which all situations are understood. It is there that biases and selection enter the picture. Not only must the reasoning faculty be willing to doubt the first characterizations of a situation that the perceiving faculty presents; the perceiving faculty must in turn be willing to accept these doubts, go back, and reinterpret the situation, creating a continual loop between levels. Such interplay between perceiving and reasoning subselves brings into being a total self—a Mortal.


D.R.H.

21 Jorge Luis Borges The Circular Ruins[30]

And if he left off dreaming about you…

—Through the Looking Glass, VI

No one saw him disembark in the unanimous night, no one saw the bamboo canoe sinking into the sacred mud, but within a few days no one was unaware that the silent man came from the South and that his home was one of the infinite villages upstream, on the violent mountainside, where the Zend tongue is not contaminated with Greek and where leprosy is infrequent. The truth is that the obscure man kissed the mud, came up the bank without pushing aside (probably without feeling) the brambles which dilacerated his flesh, and dragged himself, nauseous and bloodstained, to the circular enclosure crowned by a stone tiger or horse, which once was the color of fire and now was that of ashes. This circle was a temple, long ago devoured by fire, which the malarial jungle had profaned and whose god no longer received the homage of men. The stranger stretched out beneath the pedestal. He was awakened by the sun high above. He evidenced without astonishment that his wounds had closed; he shut his pale eyes and slept, not out of bodily weakness but out of determination of will. He knew that this temple was the place required by his invincible purpose; he knew that, downstream, the incessant trees had not managed to choke the ruins of another propitious temple, whose gods were also burned and dead; he knew that his immediate obligation was to sleep. Towards midnight he was awakened by the disconsolate cry of a bird. Prints of bare feet, some figs and a jug told him that men of the region had respectfully spied upon his sleep and were solicitous of his favor or feared his magic. He felt the chill of fear and sought out a sepulchral niche in the dilapidated wall and covered himself with some unknown leaves.

The purpose which guided him was not impossible, though it was supernatural. He wanted to dream a man: he wanted to dream him with minute integrity and insert him into reality. This magical project had exhausted the entire content of his soul; if someone had asked him his own name or any trait of his previous life, he would not have been able to answer. The uninhabited and broken temple suited him, for it was a minimum of visible world; the nearness of the peasants also suited him, for they would see that his frugal necessities were supplied. The rice and fruit of their tribute were sufficient sustenance for his body, consecrated to the sole task of sleeping and dreaming.

At first, his dreams were chaotic; somewhat later, they were of a dialectical nature. The stranger dreamt that he was in the center of a circular amphitheater which in some way was the burned temple: clouds of silent students filled the gradins; the faces of the last ones hung many centuries away and at a cosmic height, but were entirely clear and precise. The man was lecturing to them on anatomy, cosmography, magic; the countenances listened with eagerness and strove to respond with understanding, as if they divined the importance of the examination which would redeem one of them from his state of vain appearance and interpolate him into the world of reality. The man, both in dreams and awake, considered his phantoms’ replies, was not deceived by impostors, divined a growing intelligence in certain perplexities. He sought a soul which would merit participation in the universe.

After nine or ten nights, he comprehended with some bitterness that he could expect nothing of those students who passively accepted his doctrines, but that he could of those who, at times, would venture a reasonable contradiction. The former, though worthy of love and affection, could not rise to the state of individuals; the latter pre-existed somewhat more. One afternoon (now his afternoons too were tributaries of sleep, now he remained awake only for a couple of hours at dawn) he dismissed the vast illusory college forever and kept one single student. He was a silent boy, sallow, sometimes obstinate, with sharp features which reproduced those of the dreamer. He was not long disconcerted by his companions’ sudden elimination; his progress, after a few special lessons, astounded his teacher. Nevertheless, catastrophe ensued. The man emerged from sleep one day as if from a viscous desert, looked at the vain light of afternoon, which at first he confused with that of dawn, and understood that he had not really dreamt. All that night the intolerable lucidity of insomnia weighed upon him. He tried to explore the jungle, to exhaust himself; amidst the hemlocks, he was scarcely able to manage a few snatches of feeble sleep, fleetingly mottled with some rudimentary visions which were useless. He tried to convoke the college and had scarcely uttered a few brief words of exhortation when it became deformed and was extinguished. In his almost perpetual sleeplessness, his old eyes burned with tears of anger.

He comprehended that the effort to mold the incoherent and vertiginous matter dreams are made of was the most arduous task a man could undertake, though he might penetrate all the enigmas of the upper and lower orders: much more arduous than weaving a rope of sand or coining the faceless wind. He comprehended that an initial failure was inevitable. He swore he would forget the enormous hallucination which had misled him at first, and he sought another method. Before putting it into effect, he dedicated a month to replenishing the powers his delirium had wasted. He abandoned any premeditation of dreaming and, almost at once, was able to sleep for a considerable part of the day. The few times he dreamt during this period, he did not take notice of the dreams. To take up his task again, he waited until the moon’s disk was perfect. Then, in the afternoon, he purified himself in the waters of the river, worshipped the planetary gods, uttered the lawful syllables of a powerful name and slept. Almost immediately, he dreamt of a beating heart.

He dreamt it as active, warm, secret, the size of a closed fist, of garnet color in the penumbra of a human body as yet without face or sex; with minute love he dreamt it, for fourteen lucid nights. Each night he perceived it with greater clarity. He did not touch it, but limited himself to witnessing it, observing it, perhaps correcting it with his eyes. He perceived it, lived it, from many distances and many angles. On the fourteenth night he touched the pulmonary artery with his finger, and then the whole heart, inside and out. The examination satisfied him. Deliberately, he did not dream for a night; then he took the heart again, invoked the name of a planet and set about to envision another of the principal organs. Within a year he reached the skeleton, the eyelids. The innumerable hair was perhaps the most difficult task. He dreamt a complete man, a youth, but this youth could not rise nor did he speak nor could he open his eyes. Night after night, the man dreamt him as asleep.

In the Gnostic cosmogonies, the demiurgi knead and mold a red Adam who cannot stand alone; as unskillful and crude and elementary as this Adam of dust was the Adam of dreams fabricated by the magician’s nights of effort. One afternoon, the man almost destroyed his work, but then repented. (It would have been better for him had he destroyed it.) Once he had completed his supplications to the numina of the earth and the river, he threw himself down at the feet of the effigy which was perhaps a tiger and perhaps a horse, and implored its unknown succor. That twilight, he dreamt of the statue. He dreamt of it as a living, tremulous thing: it was not an atrocious mongrel of tiger and horse, but both these vehement creatures at once and also a bull, a rose, a tempest. This multiple god revealed to him that its earthly name was Fire, that in the circular temple (and in others of its kind) people had rendered it sacrifices and cult and that it would magically give life to the sleeping phantom, in such a way that all creatures except Fire itself and the dreamer would believe him to be a man of flesh and blood. The man was ordered by the divinity to instruct his creature in its rites, and send him to the other broken temple whose pyramids survived downstream, so that in this deserted edifice a voice might give glory to the god. In the dreamer’s dream, the dreamed one awoke.

The magician carried out these orders. He devoted a period of time (which finally comprised two years) to revealing the arcana of the universe and of the fire cult to his dream child. Inwardly, it pained him to be separated from the boy. Under the pretext of pedagogical necessity, each day he prolonged the hours he dedicated to his dreams. He also redid the right shoulder, which was perhaps deficient. At times, he was troubled by the impression that all this had happened before… In general, his days were happy; when he closed his eyes, he would think: Now I shall be with my son. Or, less often: The child I have engendered awaits me and will not exist if I do not go to him.

Gradually, he accustomed the boy to reality. Once he ordered him to place a banner on a distant peak. The following day, the banner flickered from the mountain top. He tried other analogous experiments, each more daring than the last. He understood with certain bitterness that his son was ready—and perhaps impatient—to be born. That night he kissed him for the first time and sent him to the other temple whose debris showed white downstream, through many leagues of inextricable jungle and swamp. But first (so that he would never know he was a phantom, so that he would be thought a man like others) he instilled into him a complete oblivion of his years of apprenticeship.

The man’s victory and peace were dimmed by weariness. At dawn and at twilight, he would prostrate himself before the stone figure, imagining perhaps that his unreal child was practicing the same rites, in other circular ruins, downstream; at night, he would not dream, or would dream only as all men do. He perceived the sounds and forms of the universe with a certain colorlessness: his absent son was being nurtured with these diminutions of his soul. His life’s purpose was complete; the man persisted in a kind of ecstasy. After a time, which some narrators of his story prefer to compute in years and others in lustra, he was awakened one midnight by two boatmen; he could not see their faces, but they told him of a magic man in a temple of the North who could walk upon fire and not be burned. The magician suddenly remembered the words of the god. He recalled that, of all the creatures of the world, fire was the only one that knew his son was a phantom. This recollection, at first soothing, finally tormented him. He feared his son might meditate on his abnormal privilege and discover in some way that his condition was that of a mere image. Not to be a man, to be the projection of another man’s dream, what a feeling of humiliation, of vertigo! All fathers are interested in the children they have procreated (they have permitted to exist) in confusion or pleasure; it was natural that the magician should fear for the future of that son, created in thought, limb by limb and feature by feature, in a thousand and one secret nights.

The end of his meditations was sudden, though it was foretold by certain signs. First (after a long drought) a faraway cloud on a hill, light and rapid as a bird; then, toward the south, the sky which had the rose color of the leopard’s mouth; then the smoke which corroded the metallic nights; finally, the panicky flight of the animals. For what was happening had happened many centuries ago. The ruins of the fire god’s sanctuary were destroyed by fire. In a birdless dawn the magician saw the concentric blaze close round the walls. For a moment, he thought of taking refuge in the river, but then he knew that death was coming to crown his old age and absolve him of his labors. He walked into the shreds of flame. But they did not bite into his flesh; they caressed him and engulfed him without heat or combustion. With relief, with humiliation, with terror he understood that he too was a mere appearance, dreamt by another.

Reflections

Borges’s epigraph is drawn from a passage in Lewis Carroll’s Through the Looking Glass worth quoting in full.

Here she checked herself in some alarm, at hearing something that sounded to her like the puffing of a large steam-engine in the wood near them, though she feared it was more likely to be a wild beast. “Are there any lions or tigers about here?” she asked timidly.

“It’s only the Red King snoring,” said Tweedledee.

“Come and look at him!” the brothers cried, and they each took one of Alice’s hands, and led her up to where the King was sleeping.


ILLUSTRATION BY JOHN TENNIEL.


“Isn’t he a lovely sight?” said Tweedledum.

Alice couldn’t say honestly that he was. He had a tall red night-cap on, with a tassel, and he was lying crumpled up into a sort of untidy heap, and snoring loud—“fit to snore his head off!” as Tweedledum remarked.

“I’m afraid he’ll catch cold with lying on the damp grass,” said Alice, who was a very thoughtful little girl.

“He’s dreaming now,” said Tweedledee: “and what do you think he’s dreaming about?”

Alice said “Nobody can guess that.”

“Why, about you!” Tweedledee exclaimed, clapping his hands triumphantly.

“And if he left off dreaming about you, where do you suppose you’d be?”

“Where I am now, of course,” said Alice.

“Not you!” Tweedledee retorted contemptuously. “You’d be nowhere. Why, you’re only a sort of thing in his dream!”

“If that there King was to wake,” added Tweedledum, “you’d go out—bang!—just like a candle!”

“I shouldn’t!” Alice exclaimed indignantly. “Besides, if I’m only a sort of thing in his dream, what are you, I should like to know?”

“Ditto,” said Tweedledum.

“Ditto, ditto!” cried Tweedledee.

He shouted this so loud that Alice couldn’t help saying “Hush! You’ll be waking him, I’m afraid, if you make so much noise.”

“Well, it’s no use your talking about waking him,” said Tweedledum, “when you’re only one of the things in his dream. You know very well you’re not real.”

“I am real!” said Alice, and began to cry.

“You won’t make yourself a bit realler by crying,” Tweedledee remarked: “there’s nothing to cry about.”

“If I wasn’t real,” Alice said—half-laughing through her tears, it all seemed so ridiculous—“I shouldn’t be able to cry.”

“I hope you don’t suppose those are real tears?” Tweedledum interrupted in a tone of great contempt.

René Descartes asked himself whether he could tell for certain that he wasn’t dreaming. “When I consider these matters carefully, I realize so clearly that there are no conclusive indications by which waking can be distinguished from sleep that I am quite astonished, and my bewilderment is such that it is almost able to convince me that I am sleeping.”

It did not occur to Descartes to wonder if he might be a character in someone else’s dream, or, if it did, he dismissed the idea out of hand. Why? Couldn’t you dream a dream with a character in it who was not you but whose experiences were a part of your dream? It is not easy to know how to answer a question like that. What would be the difference between dreaming a dream in which you were quite unlike your waking self—much older or younger, or of the opposite sex—and dreaming a dream in which the main character (a girl named Renee, let’s say), the character from whose “point of view” the dream was “narrated,” was simply not you but merely a fictional dream character, no more real than the dream-dragon chasing her? If that dream character were to ask Descartes’s question, and wonder if she were dreaming or awake, it seems the answer would be that she was not dreaming, nor was she really awake; she was just dreamt. When the dreamer, the real dreamer, wakes up, she will be annihilated. But to whom would we address this answer, since she does not really exist at all, but is just a dream character?

Is this philosophical play with the ideas of dreaming and reality just idle? Isn’t there a no-nonsense “scientific” stance from which we objectively distinguish between the things that are really there and mere fictions? Perhaps there is, but then on which side of the divide would we put ourselves? Not our physical bodies, but our selves?

Consider the sort of novel that is written from the point of view of a fictional narrator-actor. Moby Dick begins with the words “Call me Ishmael,” and then we are told Ishmael’s story by Ishmael. Call whom Ishmael? Ishmael does not exist. He is just a character in Melville’s novel. Melville is, or was, a perfectly real self, and he created a fictional self who calls himself Ishmael—but who is not to be numbered among the real things, the things that really are. But now imagine, if you can, a novel-writing machine, a mere machine, without a shred of consciousness or selfhood. Call it the JOHNNIAC. (The next selection will help you imagine such a machine, if you cannot yet convince yourself you can do it.) Suppose the novel that clattered out of the JOHNNIAC on its high-speed printer started: “Call me Gilbert,” and proceeded to tell Gilbert’s story from Gilbert’s point of view. Call whom Gilbert? Gilbert is just a fictional character, a nonentity with no real existence, though we can go along with the fiction and talk about, learn about, worry about “his” adventures, problems, hopes, fears, pains. In the case of Ishmael, we may have supposed his queer, fictional, quasi-existence depended on the real existence of Melville’s self. No dream without a dreamer to dream it seems to be Descartes’s discovery. But in this case we do seem to have a dream—a fiction, in any case—with no real dreamer or author, no real self with whom we might or might not identify Gilbert. So in such an extraordinary case as the novel-writing machine there might be created a merely fictional self with no real self behind the act of creation. (We can even suppose the JOHNNIAC’s designers had no idea what novels it would eventually write.)

Now suppose our imagined novel-writing machine is not just a sedentary, boxy computer, but a robot. And suppose—why not?—that the text of the novel is not typed but “spoken” from a mechanical mouth. Call this robot the SPEECHIAC. And suppose, finally, the tale we learn from the SPEECHIAC about the adventures of Gilbert is a more or less true story of the “adventures” of the SPEECHIAC. When it is locked in a closet, it says: “I am locked in the closet! Help me!” Help whom? Help Gilbert. But Gilbert does not exist; he is just a fictional character in the SPEECHIAC’s peculiar narration. Why, though, should we call this account fiction, since there is a quite obvious candidate in sight to be Gilbert: the person whose body is the SPEECHIAC? In “Where Am I?” Dennett called his body Hamlet. Is this a case of Gilbert having a body called the SPEECHIAC, or of the SPEECHIAC calling itself Gilbert?

Perhaps we are being tricked by the name. Naming the robot “Gilbert” may be just like naming a sailboat “Caroline” or a bell “Big Ben” or a program “ELIZA.” We may feel like insisting that there is no person named Gilbert here. What, though, aside from bio-chauvinism, grounds our resistance to the conclusion that Gilbert is a person, a person created, in effect, by the SPEECHIAC’s activity and self-presentation in the world?

“Is the suggestion then that I am my body’s dream? Am I just a fictional character in a sort of novel composed by my body in action?” That would be one way of getting at it, but why call yourself fictional? Your brain, like the unconscious novel-writing machine, cranks along, doing its physical tasks, sorting the inputs and the outputs without a glimmer of what it is up to. Like the ants that compose Aunt Hillary in “Prelude, Ant Fugue,” it doesn’t “know” it is creating you in the process, but there you are, emerging from its frantic activity almost magically.

This process of creating a self at one level out of the relatively mindless and uncomprehending activities amalgamated at another level is vividly illustrated in the next selection by John Searle, though he firmly resists that vision of what he is showing.


D.C.D.

22 John R. Searle Minds, Brains and Programs[31]

What psychological and philosophical significance should we attach to recent efforts at computer simulations of human cognitive capacities? In answering this question, I find it useful to distinguish what I will call “strong” AI from “weak” or “cautious” AI (artificial intelligence). According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states. In strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations.

I have no objection to the claims of weak AI, at least as far as this article is concerned. My discussion here will be directed at the claims I have defined as those of strong AI, specifically the claim that the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition. When I hereafter refer to AI, I have in mind the strong version, as expressed by these two claims.

I will consider the work of Roger Schank and his colleagues at Yale (Schank and Abelson 1977), because I am more familiar with it than with any other similar claims, and because it provides a very clear example of the sort of work I wish to examine. But nothing that follows depends upon the details of Schank’s programs. The same arguments would apply to Winograd’s SHRDLU (Winograd 1973), Weizenbaum’s ELIZA (Weizenbaum 1965), and indeed any Turing machine simulation of human mental phenomena. [See “Further Reading” for Searle’s references.]

Very briefly, and leaving out the various details, one can describe Schank’s program as follows: The aim of the program is to simulate the human ability to understand stories. It is characteristic of human beings’ story-understanding capacity that they can answer questions about the story even though the information that they give was never explicitly stated in the story. Thus, for example, suppose you are given the following story: “A man went into a restaurant and ordered a hamburger. When the hamburger arrived it was burned to a crisp, and the man stormed out of the restaurant angrily, without paying for the hamburger or leaving a tip.” Now, if you are asked “Did the man eat the hamburger?” you will presumably answer, “No, he did not.” Similarly, if you are given the following story: “A man went into a restaurant and ordered a hamburger; when the hamburger came he was very pleased with it; and as he left the restaurant he gave the waitress a large tip before paying his bill,” and you are asked the question, “Did the man eat the hamburger?” you will presumably answer, “Yes, he ate the hamburger.” Now Schank’s machines can similarly answer questions about restaurants in this fashion. To do this, they have a “representation” of the sort of information that human beings have about restaurants, which enables them to answer such questions as those above, given these sorts of stories. When the machine is given the story and then asked the question, the machine will print out answers of the sort that we would expect human beings to give if told similar stories. Partisans of strong AI claim that in this question and answer sequence the machine is not only simulating a human ability but also (1) that the machine can literally be said to understand the story and provide the answers to questions, and (2) that what the machine and its program do explains the human ability to understand the story and answer questions about it.
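To give a concrete feel for the kind of mechanism at issue, here is a minimal sketch, in Python, of a script-based question answerer. It is an invented toy, not Schank’s actual program: the event list, the keyword cues, and the function names are all assumptions made for illustration. It captures only the bare idea that a canned “restaurant script” supplies default events, which cues in the story can override.

# A toy, hypothetical illustration of script-based story "understanding".
# A canned "restaurant script" supplies default events; cues in the story
# override the defaults, so the program can answer questions about facts
# never explicitly stated. (Nothing here is Schank's real code.)

RESTAURANT_SCRIPT = ["enter", "order", "served", "eat", "pay", "leave"]

def presumed_events(story):
    """Return the script events presumed to have happened."""
    events = set(RESTAURANT_SCRIPT)        # default: the whole script ran
    if "burned" in story or "stormed" in story:
        events.discard("eat")              # an angry exit cancels eating
    if "without paying" in story:
        events.discard("pay")
    return events

def answer(story, question):
    events = presumed_events(story)
    if "eat" in question:
        return "Yes, he ate the hamburger." if "eat" in events else "No, he did not."
    return "I don't know."

story = ("A man went into a restaurant and ordered a hamburger. When the "
         "hamburger arrived it was burned to a crisp, and the man stormed "
         "out of the restaurant angrily, without paying or leaving a tip.")
print(answer(story, "Did the man eat the hamburger?"))   # prints: No, he did not.

Even this crude sketch answers a question about an event the story never mentions, which is exactly the capacity Searle goes on to examine.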

Both claims seem to me to be totally unsupported by Schank’s work, as I will attempt to show in what follows. (I am not, of course, saying Schank himself is committed to these claims.)

One way to test any theory of the mind is to ask oneself what it would be like if my mind actually worked on the principles that the theory says all minds work on. Let us apply this test to the Schank program with the following Gedankenexperiment. Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I’m not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that “formal” means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch a “script,” they call the second batch a “story,” and they call the third batch “questions.” Furthermore, they call the symbols I give them back in response to the third batch “answers to the questions,” and the set of rules in English that they gave me, they call the “program.” Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view—that is, from the point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don’t speak a word of Chinese. Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. From the external point of view—from the point of view of someone reading my “answers”—the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer: I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.
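What “computational operations on formally specified elements” amounts to can be pictured with a minimal sketch, again in Python, of the room’s rule book. The symbol strings and rules below are invented placeholders, not real Chinese and not any actual program; the point is only that the reply is selected by the shapes of the input tokens alone, with no access to their meaning.

# A toy rule book for the room: replies are looked up purely by the
# shapes of the input "squiggles". All tokens are invented placeholders.

RULE_BOOK = {
    ("STORY-A", "QUESTION-7"): "ANSWER-12",
    ("STORY-A", "QUESTION-3"): "ANSWER-4",
    ("STORY-B", "QUESTION-7"): "ANSWER-9",
}

def room(story_tokens, question_tokens):
    """Match shapes and copy out the prescribed reply; nothing is interpreted."""
    return RULE_BOOK.get((story_tokens, question_tokens), "ANSWER-0")

# From outside, the output may look like a native speaker's answer;
# inside, only uninterpreted tokens have been shuffled.
print(room("STORY-A", "QUESTION-7"))   # prints: ANSWER-12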

Now the claims made by strong AI are that the programmed computer understands the stories and that the program in some sense explains human understanding. But we are now in a position to examine these claims in light of our thought experiment.

1. As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank’s computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing.

2. As regards the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions of understanding, since the computer and the program are functioning, and there is no understanding. But does it even provide a necessary condition or a significant contribution to understanding? One of the claims made by the supporters of strong AI is that when I understand a story in English, what I am doing is exactly the same—or perhaps more of the same—as what I was doing in manipulating the Chinese symbols. It is simply more formal symbol manipulation that distinguishes the case in English, where I do understand, from the case in Chinese, where I don’t. I have not demonstrated that this claim is false, but it would certainly appear an incredible claim in the example. Such plausibility as the claim has derives from the supposition that we can construct a program that will have the same inputs and outputs as native speakers, and in addition we assume that speakers have some level of description where they are also instantiations of a program. On the basis of these two assumptions we assume that even if Schank’s program isn’t the whole story about understanding, it may be part of the story. Well, I suppose that is an empirical possibility, but not the slightest reason has so far been given to believe that it is true, since what is suggested—though certainly not demonstrated—by the example is that the computer program is simply irrelevant to my understanding of the story. In the Chinese case I have everything that artificial intelligence can put into me by way of a program, and I understand nothing; in the English case I understand everything, and there is so far no reason at all to suppose that my understanding has anything to do with computer programs, that is, with computational operations on purely formally specified elements. As long as the program is defined in terms of computational operations on purely formally defined elements, what the example suggests is that these by themselves have no interesting connection with understanding. They are certainly not sufficient conditions, and not the slightest reason has been given to suppose that they are necessary conditions or even that they make a significant contribution to understanding. Notice that the force of the argument is not simply that different machines can have the same input and output while operating on different formal principles; that is not the point at all. Rather, whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything. No reason whatever has been offered to suppose that such principles are necessary or even contributory, since no reason has been given to suppose that when I understand English I am operating with any formal program at all.

Well, then, what is it that I have in the case of the English sentences that I do not have in the case of the Chinese sentences? The obvious answer is that I know what the former mean, while I haven’t the faintest idea what the latter mean. But in what does this consist and why couldn’t we give it to a machine, whatever it is? I will return to this question later, but first I want to continue with the example.

I have had occasion to present this example to several workers in artificial intelligence, and, interestingly, they do not seem to agree on what the proper reply to it is. I get a surprising variety of replies, and in what follows I will consider the most common of these (specified along with their geographic origins).

But first I want to block some common misunderstandings about “understanding”: In many of these discussions one finds a lot of fancy footwork about the word “understanding.” My critics point out that there are many different degrees of understanding; that “understanding” is not a simple two-place predicate; that there are even different kinds and levels of understanding, and often the law of excluded middle doesn’t even apply in a straightforward way to statements of the form “x understands y”; that in many cases it is a matter for decision and not a simple matter of fact whether x understands y; and so on. To all of these points I want to say: of course, of course. But they have nothing to do with the points at issue. There are clear cases in which “understanding” literally applies and clear cases in which it does not apply; and these two sorts of cases are all I need for this argument.[32] I understand stories in English; to a lesser degree I can understand stories in French; to a still lesser degree, stories in German; and in Chinese, not at all. My car and my adding machine, on the other hand, understand nothing: they are not in that line of business. We often attribute “understanding” and other cognitive predicates by metaphor and analogy to cars, adding machines, and other artifacts, but nothing is proved by such attributions. We say, “The door knows when to open because of its photoelectric cell,” “The adding machine knows how (understands how, is able) to do addition and subtraction but not division,” and “The thermostat perceives changes in the temperature.” The reason we make these attributions is quite interesting, and it has to do with the fact that in artifacts we extend our own intentionality;[33] our tools are extensions of our purposes, and so we find it natural to make metaphorical attributions of intentionality to them; but I take it no philosophical ice is cut by such examples. The sense in which an automatic door “understands instructions” from its photoelectric cell is not at all the sense in which I understand English. If the sense in which Schank’s programmed computers understand stories is supposed to be the metaphorical sense in which the door understands and not the sense in which I understand English, the issue would not be worth discussing. But Newell and Simon (1963) write that the kind of cognition they claim for computers is exactly the same as for human beings. I like the straightforwardness of this claim, and it is the sort of claim I will be considering. I will argue that in the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing. The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.

Now to the replies:


1. The Systems Reply (Berkeley). “While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system; and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has ‘data banks’ of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part.”

[Note: Intentionality is by definition that feature of certain mental states by which they are directed at or about objects and states of affairs in the world. Thus, beliefs, desires, and intentions are intentional states; undirected forms of anxiety and depression are not.]

My response to the systems theory is quite simple: Let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn’t anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn’t anything in the system that isn’t in him. If he doesn’t understand, then there is no way the system could understand because the system is just a part of him.

Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. Still, I think many people who are committed to the ideology of strong AI will in the end be inclined to say something very much like this; so let us pursue it a bit further. According to one version of this view, while the man in the internalized systems example doesn’t understand Chinese in the sense that a native Chinese speaker does (because, for example, he doesn’t know that the story refers to restaurants and hamburgers, etc.), still “the man as a formal symbol manipulation system” really does understand Chinese. The subsystem of the man that is the formal symbol manipulation system for Chinese should not be confused with the subsystem for English.

So there are really two subsystems in the man; one understands English, the other Chinese, and “it’s just that the two systems have little to do with each other.” But, I want to reply, not only do they have little to do with each other, they are not even remotely alike. The subsystem that understands English (assuming we allow ourselves to talk in this jargon of “subsystems” for a moment) knows that the stories are about restaurants and eating hamburgers, he knows that he is being asked questions about restaurants and that he is answering questions as best he can by making various inferences from the content of the story, and so on. But the Chinese system knows none of this. Whereas the English subsystem knows that “hamburgers” refers to hamburgers, the Chinese subsystem knows only that “squiggle squiggle” is followed by “squoggle squoggle.” All he knows is that various formal symbols are being introduced at one end and manipulated according to rules written in English, and other symbols are going out at the other end. The whole point of the original example was to argue that such symbol manipulation by itself couldn’t be sufficient for understanding Chinese in any literal sense because the man could write “squoggle squoggle” after “squiggle squiggle” without understanding anything in Chinese. And it doesn’t meet that argument to postulate subsystems within the man, because the subsystems are no better off than the man was in the first place; they still don’t have anything even remotely like what the English-speaking man (or subsystem) has. Indeed, in the case as described, the Chinese subsystem is simply a part of the English subsystem, a part that engages in meaningless symbol manipulation according to rules in English.

Let us ask ourselves what is supposed to motivate the systems reply in the first place; that is, what independent grounds are there supposed to be for saying that the agent must have a subsystem within him that literally understands stories in Chinese? As far as I can tell the only grounds are that in the example I have the same input and output as native Chinese speakers and a program that goes from one to the other. But the whole point of the examples has been to try to show that that couldn’t be sufficient for understanding, in the sense in which I understand stories in English, because a person, and hence the set of systems that go to make up a person, could have the right combination of input, output, and program and still not understand anything in the relevant literal sense in which I understand English. The only motivation for saying there must be a subsystem in me that understands Chinese is that I have a program and I can pass the Turing test; I can fool native Chinese speakers. But precisely one of the points at issue is the adequacy of the Turing test. The example shows that there could be two “systems,” both of which pass the Turing test, but only one of which understands; and it is no argument against this point to say that since they both pass the Turing test they must both understand, since this claim fails to meet the argument that the system in me that understands English has a great deal more than the system that merely processes Chinese. In short, the systems reply simply begs the question by insisting without argument that the system must understand Chinese.

Furthermore, the systems reply would appear to lead to consequences that are independently absurd. If we are to conclude that there must be cognition in me on the grounds that I have a certain sort of input and output and a program in between, then it looks like all sorts of noncognitive subsystems are going to turn out to be cognitive. For example, there is a level of description at which my stomach does information processing, and it instantiates any number of computer programs, but I take it we do not want to say that it has any understanding (cf. Pylyshyn 1980). But if we accept the systems reply, then it is hard to see how we are to avoid saying that the stomach, heart, liver, and so on are all understanding subsystems, since there is no principled way to distinguish the motivation for saying the Chinese subsystem understands from saying that the stomach understands. It is, by the way, not an answer to this point to say that the Chinese system has information as input and output and the stomach has food and food products as input and output, since from the point of view of the agent, from my point of view, there is no information in either the food or the Chinese—the Chinese is just so many meaningless squiggles. The information in the Chinese case is solely in the eyes of the programmers and the interpreters, and there is nothing to prevent them from treating the input and output of my digestive organs as information if they so desire.

This last point bears on some independent problems in strong AI, and it is worth digressing for a moment to explain it. If strong AI is to be a branch of psychology, then it must be able to distinguish those systems that are genuinely mental from those that are not. It must be able to distinguish the principles on which the mind works from those on which nonmental systems work; otherwise it will offer us no explanations of what is specifically mental about the mental. And the mental-nonmental distinction cannot be just in the eye of the beholder but it must be intrinsic to the systems; otherwise it would be up to any beholder to treat people as nonmental and, for example, hurricanes as mental if he likes. But quite often in the AI literature the distinction is blurred in ways that would in the long run prove disastrous to the claim that AI is a cognitive inquiry. McCarthy, for example, writes, “Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance” (McCarthy 1979). Anyone who thinks strong AI has a chance as a theory of the mind ought to ponder the implications of that remark. We are asked to accept it as a discovery of strong AI that the hunk of metal on the wall that we use to regulate the temperature has beliefs in exactly the same sense that we, our spouses, and our children have beliefs, and furthermore that “most” of the other machines in the room—telephone, tape recorder, adding machine, electric light switch—also have beliefs in this literal sense. It is not the aim of this article to argue against McCarthy’s point, so I will simply assert the following without argument. The study of the mind starts with such facts as that humans have beliefs while thermostats, telephones, and adding machines don’t. If you get a theory that denies this point you have produced a counterexample to the theory and the theory is false. One gets the impression that people in AI who write this sort of thing think they can get away with it because they don’t really take it seriously, and they don’t think anyone else will either. I propose, for a moment at least, to take it seriously. Think hard for one minute about what would be necessary to establish that that hunk of metal on the wall over there had real beliefs, beliefs with direction of fit, propositional content, and conditions of satisfaction; beliefs that had the possibility of being strong beliefs or weak beliefs; nervous, anxious, or secure beliefs; dogmatic, rational, or superstitious beliefs; blind faiths or hesitant cogitations; any kind of beliefs. The thermostat is not a candidate. Neither is the stomach, liver, adding machine, or telephone. However, since we are taking the idea seriously, notice that its truth would be fatal to strong AI’s claim to be a science of the mind. For now the mind is everywhere. What we wanted to know is what distinguishes the mind from thermostats and livers. And if McCarthy were right, strong AI wouldn’t have a hope of telling us that.


2. The Robot Reply (Yale). “Suppose we wrote a different kind of program from Schank’s program. Suppose we put a computer inside a robot, and this computer would not just take in formal symbols as input and give out formal symbols as output, but rather would actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about, hammering nails, eating, drinking—anything you like. The robot would, for example, have a television camera attached to it that enabled it to see, it would have arms and legs that enabled it to ‘act,’ and all of this would be controlled by its computer ‘brain.’ Such a robot would, unlike Schank’s computer, have genuine understanding and other mental states.”

The first thing to notice about the robot reply is that it tacitly concedes that cognition is not solely a matter of formal symbol manipulation since this reply adds a set of causal relations with the outside world (cf. Fodor 1980). But the answer to the robot reply is that the addition of “perceptual” and “motor” capacities adds nothing by way of understanding, in particular, or intentionality, in general, to Schank’s original program. To see this, notice that the same thought experiment applies to the robot case. Suppose that instead of the computer inside the robot, you put me inside the room and, as in the original Chinese case, you give me more Chinese symbols with more instructions in English for matching Chinese symbols to Chinese symbols and feeding back Chinese symbols to the outside. Suppose, unknown to me, some of the Chinese symbols that come to me come from a television camera attached to the robot and other Chinese symbols that I am giving out serve to make the motors inside the robot move the robot’s legs or arms. It is important to emphasize that all I am doing is manipulating formal symbols: I know none of these other facts. I am receiving “information” from the robot’s “perceptual” apparatus, and I am giving out “instructions” to its motor apparatus, without knowing either of these facts. I am the robot’s homunculus, but unlike the traditional homunculus, I don’t know what’s going on. I don’t understand anything except the rules for symbol manipulation. Now in this case I want to say that the robot has no intentional states at all; it is simply moving about as a result of its electrical wiring and its program. And furthermore, by instantiating the program I have no intentional states of the relevant type. All I do is follow formal instructions about manipulating formal symbols.


3. The Brain Simulator Reply (Berkeley and M.I.T.). “Suppose we design a program that doesn’t represent information that we have about the world, such as the information in Schank’s scripts, but simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them. The machine takes in Chinese stories and questions about them as input, it simulates the formal structure of actual Chinese brains in processing these stories, and it gives out Chinese answers as outputs. We can even imagine that the machine operates, not with a single serial program, but with a whole set of programs operating in parallel, in the manner that actual human brains presumably operate when they process natural language. Now surely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn’t we also have to deny that native Chinese speakers understood the stories? At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?”

Before countering this reply I want to digress to note that it is an odd reply for any partisan of artificial intelligence (or functionalism, etc.) to make: I thought the whole idea of strong AI is that we don’t need to know how the brain works to know how the mind works. The basic hypothesis, or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardwares. On the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology. If we had to know how the brain worked to do AI, we wouldn’t bother with AI. However, even getting this close to the operation of the brain is still not sufficient to produce understanding. To see this, imagine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes.

Now where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn’t understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the “neuron firings” in his imagination. The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won’t have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties.
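
To see how little the “formal structure of the sequence of neuron firings” contains, consider a toy sketch (our illustration; the connectivity and threshold are invented). The network is simulated purely as a table of connections and a firing rule, and the same table could equally be realized in silicon, in water pipes, or in a man’s imagination:

    # A toy "brain simulator" capturing only formal structure:
    # which units feed which, and how many inputs make a unit fire.
    THRESHOLD = 2
    synapses = {0: [2], 1: [2], 2: [3], 3: []}  # unit -> units it feeds

    def step(active: set) -> set:
        # One formal "firing" step: a unit fires iff enough inputs fired.
        counts = {}
        for unit in active:
            for target in synapses[unit]:
                counts[target] = counts.get(target, 0) + 1
        return {u for u, c in counts.items() if c >= THRESHOLD}

    print(step({0, 1}))  # -> {2}: unit 2 fires because both its inputs fired

Nothing in the table distinguishes neurons from faucets; that indifference to substrate is exactly why, on Searle’s view, such a simulation carves the formal properties off from the causal ones.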


4. The Combination Reply (Berkeley and Stanford). “While each of the previous three replies might not be completely convincing by itself as a refutation of the Chinese room counterexample, if you take all three together they are collectively much more convincing and even decisive. Imagine a robot with a brain-shaped computer lodged in its cranial cavity, imagine the computer programmed with all the synapses of a human brain, imagine the whole behavior of the robot is indistinguishable from human behavior, and now think of the whole thing as a unified system and not just as a computer with inputs and outputs. Surely in such a case we would have to ascribe intentionality to the system.”

I entirely agree that in such a case we would find it rational and indeed irresistible to accept the hypothesis that the robot had intentionality, as long as we knew nothing more about it. Indeed, besides appearance and behavior, the other elements of the combination are really irrelevant. If we could build a robot whose behavior was indistinguishable over a large range from human behavior, we would attribute intentionality to it, pending some reason not to. We wouldn’t need to know in advance that its computer brain was a formal analogue of the human brain.

But I really don’t see that this is any help to the claims of strong AI; and here’s why: According to strong AI, instantiating a formal program with the right input and output is a sufficient condition of, indeed constitutive of, intentionality. As Newell (1979) puts it, the essence of the mental is the operation of a physical symbol system. But the attributions of intentionality that we make to the robot in this example have nothing to do with formal programs. They are simply based on the assumption that if the robot looks and behaves sufficiently like us, then we would suppose, until proven otherwise, that it must have mental states like ours that cause and are expressed by its behavior and it must have an inner mechanism capable of producing such mental states. If we knew independently how to account for its behavior without such assumptions we would not attribute intentionality to it, especially if we knew it had a formal program. And this is precisely the point of my earlier reply to objection II.

Suppose we knew that the robot’s behavior was entirely accounted for by the fact that a man inside it was receiving uninterpreted formal symbols from the robot’s sensory receptors and sending out uninterpreted formal symbols to its motor mechanisms, and the man was doing this symbol manipulation in accordance with a bunch of rules. Furthermore, suppose the man knows none of these facts about the robot; all he knows is which operations to perform on which meaningless symbols. In such a case we would regard the robot as an ingenious mechanical dummy. The hypothesis that the dummy has a mind would now be unwarranted and unnecessary, for there is now no longer any reason to ascribe intentionality to the robot or to the system of which it is a part (except of course for the man’s intentionality in manipulating the symbols). The formal symbol manipulations go on, the input and output are correctly matched, but the only real locus of intentionality is the man, and he doesn’t have any of the relevant intentional states; he doesn’t, for example, see what comes into the robot’s eyes, he doesn’t intend to move the robot’s arm, and he doesn’t understand any of the remarks made to or by the robot. Nor, for the reasons stated earlier, does the system of which man and robot are a part.

To see this point, contrast this case with cases in which we find it completely natural to ascribe intentionality to members of certain other primate species such as apes and monkeys and to domestic animals such as dogs. The reasons we find it natural are, roughly, two: We can’t make sense of the animal’s behavior without the ascription of intentionality, and we can see that the beasts are made of similar stuff to ourselves—that is an eye, that a nose, this is its skin, and so on. Given the coherence of the animal’s behavior and the assumption of the same causal stuff underlying it, we assume both that the animal must have mental states underlying its behavior, and that the mental states must be produced by mechanisms made out of the stuff that is like our stuff. We would certainly make similar assumptions about the robot unless we had some reason not to, but as soon as we knew that the behavior was the result of a formal program, and that the actual causal properties of the physical substance were irrelevant, we would abandon the assumption of intentionality (see Multiple authors 1978).

There are two other responses to my example that come up frequently (and so are worth discussing) but really miss the point.


5. The Other Minds Reply (Yale). “How do you know that other people understand Chinese or anything else? Only by their behavior. Now the computer can pass the behavioral tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers.”

This objection really is only worth a short reply. The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn’t be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In “cognitive sciences” one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects.


6. The Many Mansions Reply (Berkeley). “Your whole argument presupposes that AI is only about analog and digital computers. But that just happens to be the present state of technology. Whatever these causal processes are that you say are essential for intentionality (assuming you are right), eventually we will be able to build devices that have these causal processes, and that will be artificial intelligence. So your arguments are in no way directed at the ability of artificial intelligence to produce and explain cognition.”

I really have no objection to this reply save to say that it in effect trivializes the project of strong AI by redefining it as whatever artificially produces and explains cognition. The interest of the original claim made on behalf of artificial intelligence is that it was a precise, well-defined thesis: mental processes are computational processes over formally defined elements. I have been concerned to challenge that thesis. If the claim is redefined so that it is no longer that thesis, my objections no longer apply because there is no longer a testable hypothesis for them to apply to.

Let us now return to the question I promised I would try to answer: granted that in my original example I understand the English and I do not understand the Chinese, and granted therefore that the machine doesn’t understand either English or Chinese, still there must be something about me that makes it the case that I understand English and a corresponding something lacking in me that makes it the case that I fail to understand Chinese. Now why couldn’t we give those somethings, whatever they are, to a machine?

I see no reason in principle why we couldn’t give a machine the capacity to understand English or Chinese, since in an important sense our bodies with our brains are precisely such machines. But I do see very strong arguments for saying that we could not give such a thing to a machine where the operation of the machine is defined solely in terms of computational processes over formally defined elements; that is, where the operation of the machine is defined as an instantiation of a computer program. It is not because I am the instantiation of a computer program that I am able to understand English and have other forms of intentionality (I am, I suppose, the instantiation of any number of computer programs), but as far as we know it is because I am a certain sort of organism with a certain biological (i.e., chemical and physical) structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena. And part of the point of the present argument is that only something that had those causal powers could have that intentionality. Perhaps other physical and chemical processes could produce exactly these effects; perhaps, for example, Martians also have intentionality but their brains are made of different stuff. That is an empirical question, rather like the question whether photosynthesis can be done by something with a chemistry different from that of chlorophyll.

But the main point of the present argument is that no purely formal model will ever be sufficient by itself for intentionality because the formal properties are not by themselves constitutive of intentionality, and they have by themselves no causal powers except the power, when instantiated, to produce the next stage of the formalism when the machine is running. And any other causal properties that particular realizations of the formal model have, are irrelevant to the formal model because we can always put the same formal model in a different realization where those causal properties are obviously absent. Even if, by some miracle, Chinese speakers exactly realize Schank’s program, we can put the same program in English speakers, water pipes, or computers, none of which understand Chinese, the program notwithstanding.

What matters about brain operations is not the formal shadow cast by the sequence of synapses but rather the actual properties of the sequences. All the arguments for the strong version of artificial intelligence that I have seen insist on drawing an outline around the shadows cast by cognition and then claiming that the shadows are the real thing.


By way of concluding I want to try to state some of the general philosophical points implicit in the argument. For clarity I will try to do it in a question-and-answer fashion, and I begin with that old chestnut of a question:

“Could a machine think?”

The answer is, obviously, yes. We are precisely such machines. “Yes, but could an artifact, a man-made machine, think?” Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question.

“OK, but could a digital computer think?”

If by “digital computer” we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs and we can think.

“But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?”

This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.

“Why not?”

Because the formal symbol manipulations by themselves don’t have any intentionality; they are quite meaningless; they aren’t even symbol manipulations, since the symbols don’t symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.

The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality. It adds nothing, for example, to a man’s ability to understand Chinese.

Precisely that feature of AI that seemed so appealing—the distinction between the program and the realization—proves fatal to the claim that simulation could be duplication. The distinction between the program and its realization in the hardware seems to be parallel to the distinction between the level of mental operations and the level of brain operations. And if we could describe the level of mental operations as a formal program, then it seems we could describe what was essential about the mind without doing either introspective psychology or neurophysiology of the brain. But the equation “mind is to brain as program is to hardware” breaks down at several points, among them the following three:

First, the distinction between program and realization has the consequence that the same program could have all sorts of crazy realizations that had no form of intentionality. Weizenbaum (1976, Ch. 2), for example, shows in detail how to construct a computer using a roll of toilet paper and a pile of small stones. Similarly, the Chinese story understanding program can be programmed into a sequence of water pipes, a set of wind machines, or a monolingual English speaker, none of which thereby acquires an understanding of Chinese. Stones, toilet paper, wind, and water pipes are the wrong kind of stuff to have intentionality in the first place—only something that has the same causal powers as brains can have intentionality—and though the English speaker has the right kind of stuff for intentionality you can easily see that he doesn’t get any extra intentionality by memorizing the program, since memorizing it won’t teach him Chinese.

Second, the program is purely formal, but the intentional states are not in that way formal. They are defined in terms of their content, not their form. The belief that it is raining, for example, is not defined as a certain formal shape, but as a certain mental content with conditions of satisfaction, a direction of fit (see Searle 1979), and the like. Indeed the belief as such hasn’t even got a formal shape in this syntactic sense, since one and the same belief can be given an indefinite number of different syntactic expressions in different linguistic systems.

Third, as I mentioned before, mental states and events are literally a product of the operation of the brain, but the program is not in that way a product of the computer.

“Well if programs are in no way constitutive of mental processes, why have so many people believed the converse? That at least needs some explanation.”

I don’t really know the answer to that one. The idea that computer simulations could be the real thing ought to have seemed suspicious in the first place because the computer isn’t confined to simulating mental operations, by any means. No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything? It is sometimes said that it would be frightfully hard to get computers to feel pain or fall in love, but love and pain are neither harder nor easier than cognition or anything else. For simulation, all you need is the right input and output and a program in the middle that transforms the former into the latter. That is all the computer has for anything it does. To confuse simulation with duplication is the same mistake, whether it is pain, love, cognition, fires, or rainstorms.

Still, there are several reasons why AI must have seemed—and to many people perhaps still does seem—in some way to reproduce and thereby explain mental phenomena, and I believe we will not succeed in removing these illusions until we have fully exposed the reasons that give rise to them.

First, and perhaps most important, is a confusion about the notion of “information processing”: many people in cognitive science believe that the human brain, with its mind, does something called “information processing,” and analogously the computer with its program does information processing; but fires and rainstorms, on the other hand, don’t do information processing at all. Thus, though the computer can simulate the formal features of any process whatever, it stands in a special relation to the mind and brain because when the computer is properly programmed, ideally with the same program as the brain, the information processing is identical in the two cases, and this information processing is really the essence of the mental. But the trouble with this argument is that it rests on an ambiguity in the notion of “information.” In the sense in which people “process information” when they reflect, say, on problems in arithmetic or when they read and answer questions about stories, the programmed computer does not do “information processing.” Rather, what it does is manipulate formal symbols. The fact that the programmer and the interpreter of the computer output use the symbols to stand for objects in the world is totally beyond the scope of the computer. The computer, to repeat, has a syntax but no semantics. Thus if you type into the computer “2 plus 2 equals?” it will type out “4.” But it has no idea that “4” means 4 or that it means anything at all. And the point is not that it lacks some second-order information about the interpretation of its first-order symbols, but rather that its first-order symbols don’t have any interpretations as far as the computer is concerned. All the computer has is more symbols. The introduction of the notion of “information processing” therefore produces a dilemma: either we construe the notion of “information processing” in such a way that it implies intentionality as part of the process or we don’t. If the former, then the programmed computer does not do information processing, it only manipulates formal symbols. If the latter, then, though the computer does information processing, it is only doing so in the sense in which adding machines, typewriters, stomachs, thermostats, rainstorms, and hurricanes do information processing; namely, they have a level of description at which we can describe them as taking information in at one end, transforming it, and producing information as output. But in this case it is up to outside observers to interpret the input and output as information in the ordinary sense. And no similarity is established between the computer and the brain in terms of any similarity of information processing.
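
The “2 plus 2” example can be made vivid with a sketch (ours, purely illustrative): a “calculator” that answers arithmetic questions by string lookup alone, so that at no point does any numeral get interpreted as a number:

    # A "calculator" with syntax but no semantics (illustrative only):
    # it pairs question strings with answer strings and never computes.
    ANSWERS = {
        "2 plus 2 equals?": "4",
        "3 plus 3 equals?": "6",
    }

    def answer(question: str) -> str:
        # Pure shape-matching: "4" here is a character, not the number four.
        return ANSWERS.get(question, "?")

    print(answer("2 plus 2 equals?"))  # prints the symbol "4"

One might object that real computers calculate rather than look answers up, but on Searle’s argument the difference is immaterial: either way the machine relates symbol shapes to symbol shapes, and the interpretation of “4” as four lives in the programmer and the user.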

Second, in much of AI there is a residual behaviorism or operationalism. Since appropriately programmed computers can have input-output patterns similar to those of human beings, we are tempted to postulate mental states in the computer similar to human mental states. But once we see that it is both conceptually and empirically possible for a system to have human capacities in some realm without having any intentionality at all, we should be able to overcome this impulse. My desk adding machine has calculating capacities, but no intentionality, and in this paper I have tried to show that a system could have input and output capabilities that duplicated those of a native Chinese speaker and still not understand Chinese, regardless of how it was programmed. The Turing test is typical of the tradition in being unashamedly behavioristic and operationalistic, and I believe that if AI workers totally repudiated behaviorism and operationalism much of the confusion between simulation and duplication would be eliminated.

Third, this residual operationalism is joined to a residual form of dualism; indeed strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn’t matter. In strong AI (and in functionalism, as well) what matters are programs, and programs are independent of their realization in machines; indeed, as far as AI is concerned, the same program could be realized by an electronic machine, a Cartesian mental substance, or a Hegelian world spirit. The single most surprising discovery that I have made in discussing these issues is that many AI workers are quite shocked by my idea that actual human mental phenomena might be dependent on actual physical-chemical properties of actual human brains. But if you think about it a minute you can see that I should not have been surprised; for unless you accept some form of dualism, the strong AI project hasn’t got a chance. The project is to reproduce and explain the mental by designing programs, but unless the mind is not only conceptually but empirically independent of the brain you couldn’t carry out the project, for the program is completely independent of any realization. Unless you believe that the mind is separable from the brain both conceptually and empirically—dualism in a strong form—you cannot hope to reproduce the mental by writing and running programs since programs must be independent of brains or any other particular forms of instantiation. If mental operations consist in computational operations on formal symbols, then it follows that they have no interesting connection with the brain; the only connection would be that the brain just happens to be one of the indefinitely many types of machines capable of instantiating the program. This form of dualism is not the traditional Cartesian variety that claims there are two sorts of substances, but it is Cartesian in the sense that it insists that what is specifically mental about the mind has no intrinsic connection with the actual properties of the brain. This underlying dualism is masked from us by the fact that AI literature contains frequent fulminations against “dualism”; what the authors seem to be unaware of is that their position presupposes a strong version of dualism.

“Could a machine think?” My own view is that only a machine could think, and indeed only very special kinds of machines, namely brains and machines that had the same causal powers as brains. And that is the main reason strong AI has had little to tell us about thinking, since it has nothing to tell us about machines. By its own definition, it is about programs, and programs are not machines. Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena. No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding dualism: the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not.

In defense of this dualism the hope is often expressed that the brain is a digital computer (early computers, by the way, were often called “electronic brains”). But that is no help. Of course the brain is a digital computer. Since everything is a digital computer, brains are too. The point is that the brain’s causal capacity to produce intentionality cannot consist in its instantiating a computer program, since for any program you like it is possible for something to instantiate that program and still not have any mental states. Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality.[34]

Reflections

This article originally appeared together with twenty-eight responses from assorted people. Many of the responses contained excellent commentary, but reprinting them would have overloaded this book, and in any case some were a little too technical. One of the nice things about Searle’s article is that it is pretty much understandable by someone without special training in AI, neurology, philosophy, or other disciplines that have a bearing on it.

Our position is quite opposed to Searle’s, but we find in Searle an eloquent opponent. Rather than attempt to give a thorough rebuttal to his points, we will concentrate on a few of the issues he raises, leaving our answers to his other points implicit in the rest of this book.

Searle’s paper is based on his ingenious “Chinese room thought experiment,” in which the reader is urged to identify with a human being executing by hand the sequence of steps that a very clever AI program would allegedly go through as it read stories in Chinese and answered questions about them in Chinese in a manner sufficiently human-seeming as to be able to pass the Turing test. We think Searle has committed a serious and fundamental misrepresentation by giving the impression that it makes any sense to think that a human being could do this. By buying this image, the reader is unwittingly sucked into an impossibly unrealistic concept of the relation between intelligence and symbol manipulation.

The illusion that Searle hopes to induce in readers (naturally he doesn’t think of it as an illusion!) depends on his managing to make readers overlook a tremendous difference in complexity between two systems at different conceptual levels. Once he has done that, the rest is a piece of cake. At the outset, the reader is invited to identify with Searle as he hand-simulates an existing AI program that can, in a limited way, answer questions of a limited sort, in a few limited domains. Now, for a person to hand-simulate this, or any currently existing AI program—that is, to step through it at the level of detail that the computer does—would involve days, if not weeks or months, of arduous, horrendous boredom. But instead of pointing this out, Searle—as deft at distracting the reader’s attention as a practiced magician—switches the reader’s image to a hypothetical program that passes the Turing test! He has jumped up many levels of competency without so much as a passing mention. The reader is again invited to put himself or herself in the shoes of the person carrying out the step-by-step simulation, and to “feel the lack of understanding” of Chinese. This is the crux of Searle’s argument.

Our response to this (and, as we shall show later, Searle’s response as well, in a way) is basically the “Systems Reply”: that it is a mistake to try to impute the understanding to the (incidentally) animate simulator; rather it belongs to the system as a whole, which includes what Searle casually characterizes as “bits of paper.” This offhand comment, we feel, reveals how Searle’s image has blinded him to the realities of the situation. A thinking computer is as repugnant to John Searle as non-Euclidean geometry was to its unwitting discoverer, Gerolamo Saccheri, who thoroughly disowned his own creation. The time—the late 1700s—was not quite ripe for people to accept the conceptual expansion caused by alternate geometries. About fifty years later, however, non-Euclidean geometry was rediscovered and slowly accepted.

Perhaps the same will happen with “artificial intentionality”—if it is ever created. If there ever came to be a program that could pass the Turing test, it seems that Searle, instead of marveling at the power and depth of that program, would just keep on insisting that it lacked some marvelous “causal powers of the brain” (whatever they are). To point out the vacuity of that notion, Zenon Pylyshyn, in his reply to Searle, wondered if the following passage, quite reminiscent of Zuboff’s “The Story of a Brain” (selection 12), would accurately characterize Searle’s viewpoint:

If more and more of the cells in your brain were to be replaced by integrated circuit chips, programmed in such a way as to keep the input-output function of each unit identical to that of the unit being replaced, you would in all likelihood just keep right on speaking exactly as you are doing now except that you would eventually stop meaning anything by it. What we outside observers might take to be words would become for you just certain noises that the circuits caused you to make.

The weakness of Searle’s position is that he offers no clear way to tell when genuine meaning—or indeed the genuine “you”—has vanished from this system. He merely insists that some systems have intentionality by virtue of their “causal powers” and that some don’t. He vacillates about what those powers are due to. Sometimes it seems that the brain is composed of “the right stuff,” but other times it seems to be something else. It is whatever seems convenient at the moment—now it is the slippery essence that distinguishes “form” from “content,” now another essence that separates syntax from semantics, and so on.

To the Systems-Reply advocates, Searle offers the thought that the human being in the room (whom we shall from now on refer to as “Searle’s demon”) should simply memorize, or incorporate, all the material on the “bits of paper.” As if a human being could, by any conceivable stretch of the imagination, do this. The program on those “bits of paper” embodies the entire mind and character of something as complex in its ability to respond to written material as a human being is, by virtue of being able to pass the Turing test. Could any human being simply “swallow up” the entire description of another human being’s mind? We find it hard enough to memorize a written paragraph; but Searle envisions the demon as having absorbed what in all likelihood would amount to millions, if not billions, of pages densely covered with abstract symbols—and moreover having all of this information available, whenever needed, with no retrieval problems. This unlikely aspect of the scenario is all lightly described, and it is not part of Searle’s key argument to convince the reader that it makes sense. In fact, quite the contrary: a key part of his argument lies in glossing over these questions of orders of magnitude, for otherwise a skeptical reader will realize that nearly all of the understanding must lie in the billions of symbols on paper, and practically none of it in the demon. The fact that the demon is animate is an irrelevant—indeed, misleading—side issue that Searle has mistaken for a very significant fact.

We can back up this argument by exhibiting Searle’s own espousal of the Systems Reply. To do so, we should first like to place Searle’s thought experiment in a broader context. In particular, we would like to show how Searle’s setup is just one of a large family of related thought experiments, several of which are the topics of other selections in this book. Each member of this family of thought experiments is defined by a particular choice of “knob settings” on a thought-experiment generator. Its purpose is to create—in your mind’s eye—various sorts of imaginary simulations of human mental activity. Each different thought experiment is an “intuition pump” (Dennett’s term) that magnifies one facet or other of the issue, tending to push the reader toward certain conclusions. We see approximately five knobs of interest, although it is possible that someone else could come up with more.

Knob 1. This knob controls the physical “stuff” out of which the simulation will be constructed. Its settings include: neurons and chemicals; water pipes and water; bits of paper and symbols on them; toilet paper and stones; data structures and procedures; and so on.


Knob 2. This knob controls the level of accuracy with which the simulation attempts to mimic the human brain. It can be set at an arbitrarily fine level of detail (particles inside atoms), at a coarser level such as that of cells and synapses, or even at the level that AI researchers and cognitive psychologists deal with: that of concepts and ideas, representations and processes.


Knob 3. This knob controls the physical size of the simulation. Our assumption is that microminiaturization would allow us to make a teeny-weeny network of water pipes or solid-state chips that would fit inside a thimble, and conversely that any chemical process could be blown up to the macroscopic scale.


Knob 4. This critical knob controls the size and nature of the demon who carries out the simulation. If it is a normal-sized human being, we shall call it a “Searle’s demon.” If it is a tiny elflike creature that can sit inside neurons or on particles, then we shall call it a “Haugeland’s demon,” after John Haugeland, whose response to Searle featured this notion. The settings of this knob also determine whether the demon is animate or inanimate.


Knob 5. This knob controls the speed at which the demon works. It can be set to make the demon work blindingly fast (millions of operations per microsecond) or agonizingly slowly (maybe one operation every few seconds).

Now, by playing with various knob settings, we can come up with various thought experiments. One choice yields the situation described in selection 26, “A Conversation with Einstein’s Brain.” Another choice yields Searle’s Chinese room experiment. In particular, that involves the following knob settings:

Knob 1: paper and symbols

Knob 2: concepts and ideas

Knob 3: room size

Knob 4: human-sized demon

Knob 5: slow setting (one operation every few seconds)

Note that in principle Searle is not opposed to assuming that a simulation with these parameters could pass the Turing test. His dispute is only with what that would imply.

There is one final parameter that is not a knob but a point of view from which to look at the experiment. Let us add a little color to this drab experiment and say that the simulated Chinese speaker involved is a woman and that the demons (if animate) are always male. Now we have a choice between the demon’s-eye view and the system’s-eye view. Remember that by hypothesis, both the demon and the simulated woman are equally capable of articulating their views on whether or not they are understanding, and on what they are experiencing. Searle is insistent, nonetheless, that we see this experiment only from the point of view of the demon. He insists that no matter what the simulated woman claims (in Chinese, of course) about her understanding, we should disregard her claims, and pay attention to the demon inside, who is carrying out the symbol manipulation. Searle’s claim amounts to the notion that actually there is only one point of view, not two. If one accepts the way Searle describes the whole experiment, this claim has great intuitive appeal, since the demon is about our size, speaks our language, and works at about our speed—and it is very hard to identify with a “woman” whose answers come at the rate of one per century (with luck)—and in “meaningless squiggles and squoggles,” to boot.

But if we change some of the knob settings, we can also alter the ease with which we change point of view. In particular, Haugeland’s variation involves switching various knobs as follows (the sketch after the list collects both settings):

Knob 1: neurons and chemicals

Knob 2: neural-firing level

Knob 3: brain size

Knob 4: eensy-weensy demon

Knob 5: dazzlingly fast demon
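
Both settings can be collected in a small configuration sketch (our playful rendering; the class and field names are invented), which makes it plain that the two thought experiments are just two instantiations of the same five-knob generator:

    # The five-knob "thought-experiment generator," rendered as data
    # (names and values are our own shorthand for the settings above).
    from dataclasses import dataclass

    @dataclass
    class ThoughtExperiment:
        stuff: str   # Knob 1: what the simulation is built from
        level: str   # Knob 2: how finely it mimics the brain
        size: str    # Knob 3: physical scale of the simulation
        demon: str   # Knob 4: who or what carries out the steps
        speed: str   # Knob 5: how fast the demon works

    chinese_room = ThoughtExperiment(
        stuff="paper and symbols", level="concepts and ideas",
        size="room size", demon="human-sized Searle's demon",
        speed="one operation every few seconds",
    )

    haugeland_variant = ThoughtExperiment(
        stuff="neurons and chemicals", level="neural-firing level",
        size="brain size", demon="eensy-weensy Haugeland's demon",
        speed="dazzlingly fast",
    )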

What Haugeland wants us to envision is this: A real woman’s brain is, unfortunately, defective. It no longer is able to send neurotransmitters from one neuron to another. Luckily, however, this brain is inhabited by an incredibly tiny and incredibly speedy Haugeland’s demon, who intervenes every single time any neuron would have been about to release neurotransmitters into a neighboring neuron. This demon “tickles” the appropriate synapse of the next neuron in a way that is functionally indistinguishable, to that neuron, from the arrival of genuine neurotransmitters. And the H-demon is so swift that he can jump around from synapse to synapse in trillionths of a second, never falling behind schedule. In this way the operation of the woman’s brain proceeds exactly as it would have, if she were healthy. Now, Haugeland asks Searle, does the woman still think—that is, does she possess intentionality—or, to recall the words of Professor Jefferson as cited by Turing, does she merely “artificially signal”?

You might expect Searle to urge us to listen to and identify with the demon, and to eschew the Systems Reply, which would be, of course, to listen to and identify with the woman. But in his response to Haugeland, Searle surprises us—he chooses to listen to her this time and to ignore the demon who is cursing us from his tiny vantage point, yelling up to us, “Fools! Don’t listen to her! She’s merely a puppet whose every action is caused by my tickling, and by the program embedded in these many neurons that I zip around among.” But Searle does not heed the H-demon’s warning cries. He says, “Her neurons still have the right causal powers; they just need some help from the demon.”

We can construct a mapping between Searle’s original setup and this modified setup. To the “bits of paper” now correspond all the synapses in the woman’s brain. To the AI program written on these “bits of paper” corresponds the entire configuration of the woman’s brain; this amounts to a gigantic prescription telling the demon which synapses to tickle, and when and how to tickle them. To the act of writing “meaningless squiggles and squoggles of Chinese” on paper corresponds the act of tickling her synapses. Suppose we take the setup as is, except that we’ll vary the size and speed knobs. We’ll blow the woman’s brain up to the size of the Earth, so that the demon becomes an “us-sized” S-demon, instead of a tiny H-demon. And let’s also have the S-demon act at a speed reasonable for humans, instead of zipping thousands of miles throughout this bulbous brain in mere microseconds. Now which level does Searle wish us to identify with? We won’t speculate, but it seems to us that if the Systems Reply was compelling in the previous case, it should still be so in this case.

It must be admitted that Searle’s thought experiment vividly raises the question of what understanding a language really is. We would like to digress for a moment on that topic. Consider the question: “What kind of ability to manipulate the written or spoken symbols of a language amounts to a true understanding of that language?” Parrots who parrot English do not understand English. The recorded voice of a woman announcing the exact time of day on the telephone time service is not the mouthpiece of a system that understands English. There is no mentality behind that voice—it has been skimmed off of its mental substrate, yet retains a human-seeming quality. Perhaps a child would wonder how anyone could have so boring a job, and could do it so reliably. This would amuse us. It would be another matter, of course, if her voice were being driven by a flexible AI program that could pass the Turing test!

Imagine you are teaching a class in China. Further, imagine that you are aware of formulating all your thoughts in English and then of applying last-minute transformation rules (in reality, they would be last-split-second rules) that convert the English thoughts into instructions for moving your mouth and vocal cords in strange, “meaningless” ways—and yet, all your pupils sit there and seem quite satisfied with your performance. When they raise their hands, they utter exotic sounds that, although they are completely meaningless to you, you are equipped to deal with, as you quickly apply some inverse rules and recover the English meanings underlying them.... Would you feel you were actually speaking Chinese? Would you feel you had gained some insight into the Chinese mentality? Or—can you actually imagine this situation? Is it realistic? Could anyone actually speak a foreign language well using this method?

The standard line is “You must learn to think in Chinese.” But in what does this consist? Anyone who has experienced it will recognize this description: The sounds of the second language pretty soon become “unheard”—you hear right through them, rather than hearing them, as you see right through a window, rather than seeing the window. Of course, you can make yourself hear a familiar language as pure uninterpreted sound if you try very hard, just as you can look at a windowpane if you want; but you can’t have your cake and eat it too—you can’t hear the sounds both with and without their meanings. And so most of the time people hear mainly meaning. For those people who learn a language because of enchantment with its sounds, this is a bit disappointing—and yet mastery of those sounds, even if one no longer hears them naïvely, is a beautiful, exhilarating experience. (It would be an interesting thing to try to apply this same kind of analysis to the hearing of music, where the distinction between hearing bare sounds and hearing their “meanings” is far less well understood, yet seems very real.)

Learning a second language involves transcending one’s own native language. It involves mixing the new language right in with the medium in which thought takes place. Thoughts must be able to germinate as easily (or nearly as easily) in the new language as in one’s native language. The way in which a new language’s habits seep down level by level and finally get absorbed into neurons is a giant mystery still. But one thing for certain is that mastery of a language does not consist in getting your “English subsystem” to execute for you a program of rules that enable you to deal with a language as a set of meaningless sounds and marks. Somehow, the new language must fuse with your internal representational system—your repertoire of concepts, images, and so on—in the same intimate way as English is fused with it. To think precisely about this, one must develop a very clear notion of the concept of levels of implementation, a computer-science concept of great power.

Computer scientists are used to the idea that one system can “emulate” another system. In fact, it follows from a theorem proven in 1936 by Alan Turing that any general-purpose digital computer can take on the guise of any other general-purpose digital computer, and the only difference to the outside world will be one of speed. The verb “emulate” is reserved for simulations, by a computer, of another computer, while “simulate” refers to the modeling of other phenomena, such as hurricanes, population curves, national elections, or even computer users.

A major difference is that simulation is almost always approximate, depending on the nature of the model of the phenomenon in question, whereas emulation is in a deep sense exact. So exact is it that when, say, a Sigma-5 computer emulates a computer with a different architecture—say a DEC PDP-10—the users of the machine will be unaware that they are not dealing with a genuine DEC. This embedding of one architecture in another gives rise to so-called “virtual machines”—in this case, a virtual DEC-10. Underneath every virtual machine there is always some other machine. It may be a machine of the same type; it may even be another virtual machine. In his book Structured Computer Organization, Andrew Tanenbaum uses this notion of virtual machines to explain how large computer systems can be seen as a stack of virtual machines implemented one on top of the other—the bottommost one being, of course, a real machine! But in any case, the levels are sealed off from each other in a watertight way, just as Searle’s demon was prevented from talking to the Chinese speaker he was part of. (It is intriguing to imagine what kind of conversation would take place—assuming that there were an interpreter present, since Searle’s demon knows no Chinese!)
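To make the notion of stacked, sealed-off levels concrete, here is a minimal sketch in Python (our illustration; the machines and instruction names are invented for the example). A tiny “real machine” at level 0 interprets a fixed instruction set, and a level-1 language is implemented on top of it by translation; a level-1 program never sees, and cannot address, the level beneath it.

```python
# Level 0: a "real machine" with a fixed, tiny instruction set.
def run_level0(program):
    acc, out = 0, []
    for op, arg in program:
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "PRINT":
            out.append(acc)
    return out

# Level 1: a "virtual machine" implemented on top of level 0 by
# translation. A level-1 program never sees the level-0 instruction
# set; the two levels are sealed off from each other.
def compile_level1(source):
    instructions = []
    for n, m in source:          # each level-1 statement means "print n + m"
        instructions += [("LOAD", n), ("ADD", m), ("PRINT", None)]
    return instructions

# Running a level-1 program by emulation on the level-0 machine:
print(run_level0(compile_level1([(2, 3), (10, 32)])))   # -> [5, 42]
```

To the level-1 programmer the underlying machine is invisible; only speed, not behavior, would betray how many layers lie below.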

Now in theory, it is possible to have any two such levels communicate with each other, but this has traditionally been considered bad style; level-mingling is forbidden. Nonetheless, it is probable that this forbidden fruit—this blurring of two implementational levels—is exactly what goes on when a human “system” learns a second language. The second language does not run on top of the first one as a kind of software parasite, but rather becomes equally fundamentally implanted in the hardware (or nearly so). Somehow, absorption of a second language involves bringing about deep changes in one’s underlying “machine”—a vast and coherent set of changes in the ways that neurons fire, so sweeping a set of changes that it creates new ways for the higher-level entities—the symbols—to trigger one another.

To parallel this in a computer system, a higher-level program would have to have some way of creating changes inside the “demon” that is carrying its program out. This is utterly foreign to the present style in computer science of implementing one level above another in a strictly vertical, sealed-off fashion. The ability of a higher level to loop back and affect lower levels—its own underpinnings—is a kind of magic trick which we feel is very close to the core of consciousness. It will perhaps one day prove to be a key element in the push toward ever-greater flexibility in computer design, and of course in the approach toward artificial intelligence. In particular, a satisfactory answer to the question of what “understanding” really means will undoubtedly require a much sharper delineation of the ways in which different levels in a symbol-manipulating system can depend on and affect one another. All in all, these concepts have proven elusive, and a clear understanding of them is probably a good ways off yet.
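A toy illustration of such level-mingling, again purely hypothetical and ours rather than any actual system: an interpreter whose interpreted program is permitted to reach down and rewrite the very rule table that is executing it.

```python
# An interpreter whose rule table can be rewritten, mid-run, by the very
# program it is interpreting: the higher level alters its own underpinnings.
def make_interpreter():
    rules = {
        "ADD": lambda state, a: state + a,
        "MUL": lambda state, a: state * a,
    }

    def interpret(program, state=0):
        for op, arg in program:
            if op == "REDEFINE":        # the forbidden, level-mingling move
                name, fn = arg
                rules[name] = fn        # reach down and change the "machine"
            else:
                state = rules[op](state, arg)
        return state

    return interpret

run = make_interpreter()
program = [
    ("ADD", 3),                                        # state becomes 3
    ("REDEFINE", ("ADD", lambda state, a: state - a)), # "ADD" now subtracts!
    ("ADD", 3),                                        # state becomes 0
]
print(run(program))   # -> 0
```

After the REDEFINE step, the same symbol triggers different behavior because the lower level itself has been changed, which is a crude analogue of the sweeping neural changes described above.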

In this rather confusing discussion of many levels, you may have started to wonder what in the world “level” really means. It is a most difficult question. As long as levels are sealed off from each other, like Searle’s demon and the Chinese-speaking woman, it is fairly clear. When they begin to blur, beware! Searle may admit that there are two levels in his thought experiment, but he is reluctant to admit that there are two occupied points of view—two genuine beings that feel and “have experience.” He is worried that once we admit that some computational systems might have experiences, that would be a Pandora’s box and all of a sudden “mind would be everywhere”—in the churning of stomachs, livers, automobile engines, and so on.

Searle seems to believe that any system whatsoever can be ascribed beliefs and feelings and so on, if one looks hard enough for a way to describe the system as an instantiation of an AI program. Obviously, that would be a disturbing notion, leading the way to panpsychism. Indeed, Searle believes that the AI people have unwittingly committed themselves to a panpsychic vision of the world.

Searle’s escape from his self-made trap is to maintain that all those “beliefs” and “feelings” that you will uncover in inanimate objects and so forth when you begin seeing mind everywhere are not genuine but “pseudo.” They lack intentionality! They lack the causal powers of the brain! (Of course, Searle would caution others to beware of confusing these notions with the naïvely dualistic notion of “soul.”)

Our escape is to deny that the trap exists at all. It is incorrect to see minds everywhere. We say: minds do not lurk in car engines or livers any more than brains lurk in car engines and livers.

It is worthwhile expanding on this a little. If you can see all the complexity of thought processes in a churning stomach, then what’s to prevent you from reading the pattern of bubbles in a carbonated beverage as coding for the Chopin piano concerto in E minor? And don’t the holes in pieces of Swiss cheese code for the entire history of the United States? Sure they do—in Chinese as well as in English. After all, all things are written everywhere! Bach’s Brandenburg concerto no. 2 is coded for in the structure of Hamlet—and Hamlet was of course readable (if you’d only known the code) from the structure of the last piece of birthday cake you gobbled down.

The problem is, in all these cases, that of specifying the code without knowing in advance what you want to read. For otherwise, you could pull a description of anyone’s mental activity out of a baseball game or a blade of grass by an arbitrarily constructed a posteriori code. But this is not science.
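The emptiness of such a posteriori codes is easy to exhibit. In the following sketch (ours; the “Swiss cheese” data are arbitrary numbers), the code is fabricated from the desired message itself, so the “source” contributes nothing at all:

```python
# Given ANY source and ANY message, a code can be fabricated, after the
# fact, that "decodes" the one into the other.
def fabricate_code(desired_message):
    return {i: ch for i, ch in enumerate(desired_message)}

def decode(source, code):
    # The source is consulted only as a pretext; every bit of the
    # information was smuggled in through the code itself.
    return "".join(code[i] for i in range(len(code)))

holes_in_swiss_cheese = [0.3, 1.7, 4.1, 9.2, 11.0]   # arbitrary "data"
code = fabricate_code("TO BE OR NOT TO BE")
print(decode(holes_in_swiss_cheese, code))           # -> "TO BE OR NOT TO BE"
```

The decoding “succeeds” for any source whatsoever, which is precisely why it tells us nothing about the source.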

Minds come in different grades of sophistication, surely, but minds worth calling minds exist only where sophisticated representational systems exist, and no describable mapping that remains constant in time will reveal a self-updating representational system in a car engine or a liver. Perhaps one could read mentality into a rumbling car engine in somewhat the way that people read extra meanings into the structures of the Great Pyramids or Stonehenge, the music of Bach, Shakespeare’s plays, and so on—namely, by fabricating far-fetched numerological mapping schemes that can be molded and flexed whenever needed to fit the desires of the interpreter. But we doubt that that is what Searle intends (whatever it is that he does intend).

Minds exist in brains and may come to exist in programmed machines. If and when such machines come about, their causal powers will derive not from the substances they are made of, but from their design and the programs that run in them. And the way we will know they have those causal powers is by talking to them and listening carefully to what they have to say.


D.R.H.

23 Raymond M. Smullyan An Unfortunate Dualist[35]

Once upon a time there was a dualist. He believed that mind and matter are separate substances. Just how they interacted he did not pretend to know—this was one of the “mysteries” of life. But he was sure they were quite separate substances.

This dualist, unfortunately, led an unbearably painful life—not because of his philosophical beliefs, but for quite different reasons. And he had excellent empirical evidence that no respite was in sight for the rest of his life. He longed for nothing more than to die. But he was deterred from suicide by such reasons as: (1) he did not want to hurt other people by his death; (2) he was afraid suicide might be morally wrong; (3) he was afraid there might be an afterlife, and he did not want to risk the possibility of eternal punishment. So our poor dualist was quite desperate.

Then came the discovery of the miracle drug! Its effect on the taker was to annihilate the soul or mind entirely but to leave the body functioning exactly as before. Absolutely no observable change came over the taker; the body continued to act just as if it still had a soul. Not the closest friend or observer could possibly know that the taker had taken the drug, unless the taker informed him.

Do you believe that such a drug is impossible in principle? Assuming you believe it possible, would you take it? Would you regard it as immoral? Is it tantamount to suicide? Is there anything in Scriptures forbidding the use of such a drug? Surely, the body of the taker can still fulfill all its responsibilities on earth. Another question: Suppose your spouse took such a drug, and you knew it. You would know that she (or he) no longer had a soul but acted just as if she did have one. Would you love your mate any less?

To return to the story, our dualist was, of course, delighted! Now he could annihilate himself (his soul, that is) in a way not subject to any of the foregoing objections. And so, for the first time in years, he went to bed with a light heart, saying: “Tomorrow morning I will go down to the drugstore and get the drug. My days of suffering are over at last!” With these thoughts, he fell peacefully asleep.

Now at this point a curious thing happened. A friend of the dualist who knew about this drug, and who knew of the sufferings of the dualist, decided to put him out of his misery. So in the middle of the night, while the dualist was fast asleep, the friend quietly stole into the house and injected the drug into his veins. The next morning the body of the dualist awoke—without any soul indeed—and the first thing it did was to go to the drugstore to get the drug. He took it home and, before taking it, said, “Now I shall be released.” So he took it and then waited the time interval in which it was supposed to work. At the end of the interval he angrily exclaimed: “Damn it, this stuff hasn’t helped at all! I still obviously have a soul and am suffering as much as ever!”

Doesn’t all this suggest that perhaps there might be something just a little wrong with dualism?

Reflections

“O Seigneur, s’il y a un Seigneur, sauvez mon âme, si j’ai une âme.”


“O Lord, if there is a Lord, save my soul, if I have a soul.”

—Ernest Renan

Prière d’un sceptique (Prayer of a Skeptic)

Smullyan provides a provocative riposte to Searle’s thrust—an intentionality-killing potion. The soul of a sufferer is annihilated and yet, to all external eyes, the suffering goes on unabated. What about to the inner “I”? Smullyan leaves no doubt as to how he feels.

The point of this little fable is the logical absurdity of such a potion. But why is this? Why can’t the soul depart and leave behind a soulless, feelingless, yet living and normal-seeming being?

Soul represents the perceptually unbreachable gulf between principles and particles. The levels in between are so many and so murky that we not only see in each person a soul but are unable to unsee it. “Soul” is the name we give to that opaque yet characteristic style of each individual. Put another way, your soul is the “incompressible core” that determines how you are, hence who you are. But is this incompressible core a set of moral principles or personality traits, or is it something that we can speak of in physical terms—in brain language?

The brain’s neurons respond only to “local” stimuli—local in both space and time. At each instant (as in the Game of Life, described in the Reflections on “Non Serviam”), the neighboring neurons’ influences are added together and the neuron in question either fires or doesn’t. Yet somehow all of this “local” behavior can add up to a Grand Style—to a set of “global” principles that, seen on the level of human behavior, embody long-term goals, ideals, interests, tastes, hopes, fears, morals, and so on. So somehow all of these long-term global qualities have to be coded into the neurons in such a way that, from the neurons’ firings, the proper global behavior will emerge. We can call this a “flattening” or “compressing” of the global into the local. Such coding of many long-term, high-level goals into the synaptic structures of billions of neurons has been partially done for us by our millions of ancestors, way back in the evolutionary tree. We owe much not only to those who survived, but also to those who perished, since it is only thanks to the multiple branchings at every stage that evolution could work its miracles to give rise to a creature of such complexity as a person.
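A minimal sketch of the purely local rule just described (ours; the weights and threshold are arbitrary): at each instant every unit sums its neighbors’ weighted influences and either fires or doesn’t, knowing nothing of any global pattern.

```python
# Each unit's update is strictly local: sum the neighbors' weighted
# influences and either fire or don't. No unit knows any global "style."
def step(firing, weights, threshold=1.0):
    n = len(firing)
    return [1 if sum(weights[i][j] * firing[j] for j in range(n)) >= threshold
            else 0
            for i in range(n)]

# A three-unit ring in which a pulse of activity chases itself around:
weights = [[0, 1, 0],     # unit 0 listens to unit 1
           [0, 0, 1],     # unit 1 listens to unit 2
           [1, 0, 0]]     # unit 2 listens to unit 0
state = [1, 0, 0]
for _ in range(6):
    print(state)          # [1,0,0] -> [0,0,1] -> [0,1,0] -> [1,0,0] ...
    state = step(state, weights)
```

Even in this trivial net, a stable circulating pattern emerges that no single unit’s rule mentions; in a brain, of course, the gulf between the local rule and the global style is incomparably wider.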

Consider a simpler animal, such as a newborn calf. An hour-old calf not only can see and walk, but will instinctively shy away from people. Such behavior comes from ancient sources—namely, the higher survival rate of “protocows” that had genes for this kind of behavior. Such behavior, along with a million other successful adaptations, has been “flattened” into neural patterns coded for in the bovine genes, and is now a ready-made feature of each calf as it comes off the assembly line. Seen on its own, the set of cow genes or human genes seems a miracle—nearly inexplicable. So much history has been flattened into molecular patterns. In order to demystify this, you would have to work backward, reconstructing the evolutionary tree—and not just the branches that survived! But we don’t see the whole tree of predecessors, successful and otherwise, when we look at an individual cow, and so we can be amazed by the long-term purposes, goals, and so forth that we see flattened in its brain structure. Our amazement is particularly great when we try to imagine how, inside its head, millions of individually purposeless local neural firings are adding up to a coherent purposive style: the soul of one cow.

In humans, by contrast, the mind and character continue to be shaped for years after birth, and over this long time span neurons absorb feedback from the environment and self-modify in such a way as to build up a set of styles. The lessons of childhood are flattened into unconscious firing patterns, and when all of these tiny learned neural patterns act in concert with the myriad tiny neural patterns coded for in genes, a human perceiver will see one large pattern emerge—the soul of one human. This is why the idea of a potion that “kills the soul” and yet leaves the behavior patterns invariant makes no sense.

Under pressure, of course, a soul—a set of principles—may partly fold. What might have seemed “incompressible” may in fact yield to greed, fame, vanity, corruption, fear, torture, or whatever. In this way, “soul” can be broken. Orwell’s novel 1984 gives a vivid description of the mechanics of soul breaking. People who are brainwashed by cults or terrorist groups that hold them captive for long periods of time can lose the global coherence of drives so carefully compressed over years into their neurons. And yet there is a kind of resilience, a tendency to return to some sort of “resting position”—the central soul, the innermost core—even after horrendous, grueling episodes. This could be called “homeostasis of the spirit.”

Let us move to a jollier note. Imagine a soul-free universe, a mechanistic universe with nary a speck of free will or consciousness to be found, not a perceiver anywhere. This universe might be deterministic or might be filled with arbitrary, random, capricious, and causeless events. It is law-governed enough, though, that stable structures can emerge and evolve. In this universe, then, are swarming many distinct, tightly knit, self-sufficient little objects, each one with an internal representation system of enough complexity to engender a deep, rich self-image. In each one of them this will give rise to (and here we onlookers must be pardoned for smiling with wry amusement) the illusion of free will—when in fact, of course, this is just a cold universe and these objects that populate it are just robotlike, rule-bound machines, moving around in deterministic (or capricio-deterministic) trajectories, and kidding themselves that they’re exchanging meaningful ideas when in reality they’re just mechanically chattering back and forth by emitting and absorbing long trains of empty, hollow, meaningless electromagnetic or perhaps acoustical waves.

Having imagined this strange universe filled with illusions, one can now take a look out at our own universe and see all of humanity in this disorienting light. One can de-soul-ify everyone in the world, so that they’re all like Smullyan’s zombie or Searle’s Chinese-speaking robot, seeming to have an inner life but in fact as devoid of soul as is a clacking typewriter driven by a cold, feelingless computer. Life then seems a cruel hoax on all those soul-free shells, erroneously “convinced” (although how can a heap of dead atoms be convinced?) that they are conscious.

And this would be the best possible way to look at people, were it not for one tiny fact that seems to mess it up: I, the observer, am one of them, yet am undeniably conscious! The rest of them are, for all I know, just bundles of empty reflexes that feign consciousness—but not this one! After I’ve died—well, then this vision will be an accurate accounting of the way things are. But until that moment, one of the objects will remain special and different, because it is not being fooled! Or … might there be something just a little wrong with dualism?

Dualists maintain, as Smullyan puts it, that mind and matter are separate substances. That is, there are (at least) two kinds of stuff: physical stuff and mental stuff. The stuff our minds are made of has no mass, no physical energy—perhaps not even a location in space. This view is so mysterious, so systematically immune to clarification, that one may well wonder what attracts anyone to it. One broad highway leading to dualism goes through the following (bad) argument:

Some facts are not about the properties, circumstances, and relations of physical objects.

Therefore some facts are about the properties, circumstances, and relations of nonphysical objects.

What’s wrong with this argument? Try to think of examples of facts that are not about physical objects. The fact that the narrator in Moby Dick is called Ishmael is a fact in good standing, but what is it about? One might want to insist (implausibly) that it is really about certain ink shapes on certain bound stacks of printed pages; or one might say (somewhat mysteriously) that it is a fact all right, but it is not about anything at all; or, waving one’s hands a bit, one might say that it is a fact about an abstract object—in much the way the fact that 641 is a prime number is a fact about an abstract object. But almost no one (we suppose) is attracted to the view that it is a fact about a perfectly real but nonphysical person named Ishmael. This last view takes novel writing to be a method of ghost-manufacture; it takes too literally the familiar hyperbole about an author’s characters coming to life, having wills of their own, rebelling against their creator. It is literary dualism. (Anybody might seriously wonder if Jack the Ripper was really the Prince of Wales, for they were both real people—or maybe a single real person. A literary dualist might seriously wonder if Professor Moriarty were really Dr. Watson.) Dualists believe that over and above the physical things and events there are other, nonphysical things and events that have some sort of independent existence.

When asked to say more, dualists divide into two schools: those who hold that the occurrence or existence of a mental event has no effect whatsoever on subsequent physical events in the brain, and those who deny this and hold that mental events do have effects on physical events in the brain. The former are called epiphenomenalists and the latter are called interactionists. Smullyan’s fable nicely disposes of epiphenomenalism (doesn’t it?), but what of interactionism?

Ever since Descartes first struggled with it, interactionists have had the apparently insuperable problem of explaining how an event with no physical properties—no mass, no charge, no location, no velocity—could make a physical difference in the brain (or anywhere else). For a nonphysical event to make a difference, it must make some physical event happen that wouldn’t have happened if the nonphysical event hadn’t happened. But if we found a sort of event whose occurrence had this sort of effect, why wouldn’t we decide for that very reason that we had discovered a new sort of physical event? When antimatter was first postulated by physicists, dualists didn’t react with glee and taunts of “I told you so!” Why not? Hadn’t physicists just supported their claim that the universe had two radically different sorts of stuff in it? The main trouble with antimatter, from the dualists’ point of view, was that however exotic it was, it was still amenable to investigation by the methods of the physical sciences. Mind-stuff, on the other hand, was supposed to be off limits to science. But if it is, then we have a guarantee that the mystery will never go away. Some people like that idea.


D.R.H.

D.C.D.
