“WHAT DO YOU MEAN … HUMAN?” by John W. Campbell, Jr. from Astounding Science Fiction


The Incredible Mr. Amis singles out John Campbell several times for special notice. This is not unusual; almost anyone writing about modern American science fiction finds himself paying respects to the man under whose sometimes daft but always deft—and vigorous and enthusiastic—guidance, ASF (which you can take as Astounding Science Fiction or the new title, Analog Science Fact and—gasp—Fiction) has been the consistent leader in the field—both as to sales and influence. Mr. A., however, limits his comments about Campbell’s influence to a snidish remark about cranks whose rapid departure would benefit the whole field and a description of the editor as “a deviant figure of marked ferocity.”

I am here to say that I have talked with Campbell, literally and actually—and lived to go back for more. (I don’t want to give the impression that talking with John is easy. But listening is lots of fun too, you know.) But we had lunch together, and both ate spaghetti, and there were no fangs, claws, or horns in evidence.

The following selection is a Campbell editorial from ASF. And now that I think of it, I suppose it is rather ferociously deviant of Mr. Campbell to want to “play robot.”

* * * *

There are some questions that only small children and very great philosophers are supposed to ask—questions like “What is Death?” and “Where is God?”

And then there are some questions that, apparently, no one is supposed to ask at all; largely, I think, because people have gotten so many wrong answers down through the centuries, that it’s been agreed-by-default not to ask the questions at all.

Science fiction, however, by its very existence, has been asking one question that belongs in the “Let’s agree not to discuss it at all” category—of course, simply by implication, but nevertheless very persistently. To wit: “What do you mean by the term ‘human being’?”

It asks the question in a number of ways; the question of “What is a superman?” requires that we first define the limits of “normal man.” The problem of “What’s a robot?” asks the question in another way.

Some years ago now, Dr. Asimov introduced the Three Laws of Robotics into science fiction:

1. A robot cannot harm, nor allow harm to come to, a human being.

2. A robot must obey the orders of human beings.

3. A robot must, within those limits, protect itself against damage.

The crucial one is, of course, the First Law. The point that science fiction has elided very deftly is… how do you tell a robot what a human being is?
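To make the difficulty concrete, here is a minimal sketch, in present-day programming terms, of the Three Laws as a decision procedure; the names used (Order, is_human, evaluate) are hypothetical, invented purely for illustration. Notice that every clause bottoms out in the one predicate nobody has supplied:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    issuer: str                       # whoever gave the order
    would_harm: Optional[str] = None  # entity that would be harmed, if any

def is_human(entity: str) -> bool:
    """Both the First and Second Laws hinge on this one predicate,
    and this is exactly the definition nobody has supplied."""
    raise NotImplementedError("You tell me: what is a 'human being'?")

def evaluate(order: Order) -> str:
    # First Law: never harm, nor allow harm to come to, a human being.
    if order.would_harm is not None and is_human(order.would_harm):
        return "refuse"
    # Second Law: obey the orders of human beings.
    if is_human(order.issuer):
        return "obey"
    # Third Law: within those limits, protect yourself against damage.
    return "protect self"
```

Hand it any order you like; the first thing it does is demand a definition of “human being.”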

Look … I’ll play robot; you tell me what you mean by “human being.” What is this entity-type that I’m required to leave immune, and defend? How am I, Robot, to distinguish between the following entities: 1. A human idiot. 2. Another robot. 3. A baby. 4. A chimpanzee.

We might, quite legitimately, include a humanoid alien—or even Tregonsee, E. E. Smith’s Rigellian Lensman, and Worsel, the Velantian—which we, as science-fictioneers, have agreed fulfill what we really mean by “human”! But let’s not make the problem that tough just yet.

We do, however, have to consider the brilliant question Dr. W. Ross Ashby raised: If a mechanic with an artificial arm is working on an engine, is the mechanical arm part of the organism struggling with the environment, or part of the environment the organism is struggling with? If I, Robot, am to be instructed properly, we must consider human beings with prosthetic attachments. And, if I am a really functional robot, then that implies a level of technology that could turn up some very fancy prosthetic devices. Henry Kuttner some years back had a story about a man who had, through an accident, been reduced to a brain in a box; the box, however, had plug-in connections whereby it could be coupled to allow the brain in the box to “be” a whole spaceship, or a power-excavator, or any other appropriate machine.

Is this to be regarded as “a robot” or “a human being”? Intuitively we feel that, no matter how many prosthetic devices may be installed as replacements, the human being remains.

The theologians used to have a very handy answer to most of those questions; a human being, unlike animals or machines, has a soul. If that is to be included in the discussion, however, we must also include the associated problems of distinguishing between human beings and incubi, succubi, demons and angels. The problem then takes on certain other aspects … but the problem remains. History indicates that it was just as difficult to distinguish between humans and demons as it is, currently, to distinguish between humans and robots.

Let’s try a little “truth-table” of the order that logicians sometimes use, and that advertisers are becoming fond of. We can try various suggested tests, and check off how the various entities we’re trying to distinguish compare.

You can, of course, continue to extend the table below with all the tests you care to think of. I believe you’ll find no test within the entire scope of permissible-in-our-society evaluations that will permit a clear distinction between the five entities in the table.

Note, too, that that robot you want to follow the Three Laws will have to modify the Second Law—obedience—rather extensively with respect to children and idiots, after you’ve told it how to distinguish between humanoids and chimpanzees.

There have been a good many wars fought over the question “What do you mean… human?” To the Greeks, the peoples of other lands didn’t really speak a language—which, of course, meant Greek—but made mumbling noises that sounded like bar-bar-bar, which proved they were barbarians, and not really human.

The law should treat all human beings alike; that’s been held as a concept for a long, long time. The Athenians subscribed to that concept. Of course, barbarians weren’t really human, so the Law didn’t apply to them, and slaves weren’t either; in fact, only Athenian citizens were.

The easy way to make the law apply equally to all men is to so define “men” that the thing actually works. “Equal Justice for All! (All who are equal, of course.)”


TEST                              Idiot   Robot   Baby   Chimp   Man with prosthetic aids

Capable of logical thought        No      Yes     No     No      Yes
“Do I not bleed?”
  (Merchant of Venice test)       Yes     No      Yes    Yes     Depends
Capable of speech                 Yes     Yes     No     No      Yes
“Rational Animal”, divided into:
  A. Rational                     No      Yes     No     No      Yes
  B. Animal                       Yes     No      Yes    Yes     Partly
Humanoid form and size            Yes     Yes     No     Yes     Maybe
Lack of fur or hair               Yes*    Yes     Yes    No      Maybe, partly
A living being                    Yes     No      Yes    Yes     Depends on what test you use for “living”

*A visit to a beach in summer will convince you that some adult male humans have a thicker pelt than some gorillas.
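For those who prefer to check the claim mechanically, here is a rough sketch, with hypothetical names (entities, humans, separates), that encodes the table above and asks, for each test, whether its answers cleanly split the entities we would call human (the idiot, the baby, the man with prosthetic aids) from those we would not (the robot, the chimp); answers like “Depends,” “Maybe,” and “Partly” are simply treated as answers in their own right.

```python
# Encode the truth-table and check whether any single test separates
# the "human" column-entries from the "non-human" ones.

entities = ["idiot", "robot", "baby", "chimp", "prosthetic man"]
humans = {"idiot", "baby", "prosthetic man"}

tests = {                      # idiot  robot  baby   chimp  prosthetic man
    "logical thought":         ["No",  "Yes", "No",  "No",  "Yes"],
    "bleeds (Merchant test)":  ["Yes", "No",  "Yes", "Yes", "Depends"],
    "capable of speech":       ["Yes", "Yes", "No",  "No",  "Yes"],
    "rational":                ["No",  "Yes", "No",  "No",  "Yes"],
    "animal":                  ["Yes", "No",  "Yes", "Yes", "Partly"],
    "humanoid form and size":  ["Yes", "Yes", "No",  "Yes", "Maybe"],
    "lack of fur or hair":     ["Yes", "Yes", "Yes", "No",  "Maybe"],
    "a living being":          ["Yes", "No",  "Yes", "Yes", "Depends"],
}

def separates(answers):
    """True only if every human answers one way and every non-human the other."""
    human_answers = {a for e, a in zip(entities, answers) if e in humans}
    other_answers = {a for e, a in zip(entities, answers) if e not in humans}
    return (len(human_answers) == 1 and len(other_answers) == 1
            and human_answers != other_answers)

for name, answers in tests.items():
    print(f"{name:25s} distinguishes human from non-human: {separates(answers)}")
```

Every line comes back False, which is precisely the point: no single row of the table draws the line where we want it drawn.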


This problem of defining what you mean by “human being” appears to be at least as prolific a source of conflict as religion—and may, in fact, be why religion, that being the relationship between Man and God, has been so violent a ferment.

The law never has, and never will, apply equally to all; there are inferiors and superiors, whether we like it or not, and Justice does not stem from applying the same laws equally to different levels of beings. Before blowing your stack on that one, look again and notice that every human culture has recognized that you could not have the same set of laws for children and adults—not since the saurians lost dominance on the planet has that concept been workable. (Reptilian forms are hatched from the egg with all the wisdom they’re ever going to have; among reptiles of one species, there is only a difference of size and physical strength.)

Not only is there difference on a vertical scale, but there’s displacement horizontally—i.e., different-but-equal also exists: a woman may be equal to a man, but she’s not the same as a man.

This, also, makes for complications when trying to decide “what is a human being”; there have been many cultures in history that definitely held that women weren’t human.

I have a slight suspicion that the basic difficulty is that we can’t get anything even approximating a workable concept of Justice so long as we consider equality a necessary, inherent part of it. The Law of Gravity applies equally to all bodies in the Universe—but that doesn’t mean that the force of gravity is the same for all!

Gravity—the universal law—is the same on Mars and a white dwarf star as it is on Earth. That doesn’t mean that the force of gravity is the same.
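In Newton’s familiar formulation, for instance, the force between two masses is F = Gm₁m₂/r²: the law, the form of the relationship, is identical on Mars, on a white dwarf, and on Earth; the force it yields depends entirely on the particular masses involved and the distance between them.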

But it takes considerable genius to come up with a Universal Law of Gravity for sheer, inanimate mass. What it takes to discover the equivalent for intelligent entities… the human race hasn’t achieved as yet! Not even once has an individual reached that level!

This makes defining “human” a somewhat explosive subject.

Now the essence of humanity most commonly discussed by philosophers has been Man, the Rational Animal. The ability to think logically; to have ideas, and be conscious of having those ideas. The implied intent in “defining human-ness” is to define the unique, highest-level attribute that sets man apart from all other entities.

That “rational animal” gimmick worked pretty well for a long time; the development of electronic computers, and the clear implication of robots, call it into question. That, plus the fact that psychological experiments have shown that logical thought isn’t quite so unique-to-Man as philosophers thought.

The thing that is unique to human beings is something the philosophers have sputtered at, rejected, damned, and loudly forsworn throughout history. Man is the only known entity that laughs, weeps, grieves, and yearns. There’s been considerable effort made to prove that those are the result of simple biochemical changes of endocrine balance. That is, that you feel angry because there is adrenalin in the bloodstream, released from the suprarenal glands. Yes, and the horse moves because the cart keeps pushing him. Why did the gland start secreting that extra charge of adrenalin?

The essence of our actual definition of humanness is “I am human; any entity that feels as I feel is human also. But any entity that merely thinks, and feels differently, is not human.”

The “inhuman scientist” is so called because he doesn’t appear to feel as the speaker does. While we were discussing possible theological ramifications of the humanness question, we might have included the zombie. Why isn’t a zombie “human” any longer? Because he has become the logical philosopher’s ideal; a purely rational, non-emotional entity.

Why aren’t Tregonsee, the Rigellian, and Worsel, the Velantian, to be compared with animals and/or robots?

Because, as defined in E. E. Smith’s stories, they feel as we do.

Now it’s long since been observed that an individual will find his logical thinking subtly biased in the direction of his emotional feelings. His actions will be controlled not by his logic and reason but, in the end, by his emotional pulls. If a man is my loyal friend—i.e., if he feels favorable-to-me—then whatever powers of physical force or mental brilliance he may have are no menace to me, but are a menace to my enemies.

If he feels about things as I do, I need not concern myself with how he thinks about them, or what he does. He is “human”—my kind of human.

But… if he can choose his feelings, if his emotions are subject to his conscious, judicious, volitional choice…? What then? If his emotional biases are not as rigidly unalterable as his bones? If he can exercise judgment and vary his feelings, can I trust him to remain “human”?

Could an entity who felt differently about things—whose emotions were different—be “human”?

That question may be somewhat important to us. Someone, sooner or later, is going to meet an alien, a really alien alien, not just a member of Homo sapiens from a divergent breed and culture.

Now it’s true that all things are relative. Einstein proved the relativity of even the purely physical level of reality. But be it noted that Einstein proved that Law of Relativity; things aren’t “purely relative” in the sense that’s usually used—”I can take any system of relationships I choose!” There are laws of relativity.

The emotional biases a culture induces in its citizens vary widely. Mores are a matter of cultural relativity.

That doesn’t mean that ethics is; there are laws of relativity, and it’s not true that any arbitrary system of relationships is just as good as any other.

Can we humans-who-define-humanness-in-emotional-terms—despite what we theoretically say!—meet an equally wise race with different emotions—and know them for fellow humans?

A man who thinks differently we can tolerate and understand, but our history shows we don’t know how to understand a man who feels differently.

The most frightening thing about a man who feels differently is this; his feelings might be contagious. We might learn to feel his way—and then, of course, we wouldn’t be human any more.

The wiser and sounder his different feelings are, the greater the awful danger of learning to feel that way. And that would make us inhuman, of course.

How do you suppose an Athenian Greek of Pericles’s time would have felt if threatened with a change of feelings such that he would not feel disturbed if someone denied the reality of the Gods, or suggested that the Latins had a sounder culture? Why—only a nonhuman barbarian could feel that way!

The interesting thing is that the implication of “inhuman” is invariably subhuman.

I suspect one of the most repugnant aspects of Darwin’s concept of evolution was—not that we descended from monkeys—but its implication that something was apt to descend from us! Something that wasn’t human … and wasn’t subhuman.

The only perfect correlation is auto-correlation; “I am exactly what I am.” Any difference whatever makes the correlation less perfect.

Then if what I feel is human—anything different is less perfectly correlated with humanness. Hence any entity not identical is more or less subhuman; there can’t possibly be something more like me than I am.

Anybody want to try for a workable definition of “human”? One warning before you get started too openly: logical discussion doesn’t lead to violence—until it enters the area of emotion.

As of now, we’d have to tell that robot “A human being is an entity having an emotional structure, as well as a physical and mental structure. Never mind what kind of emotional structure—good, indifferent, or insane. It’s the fact of its existence that distinguishes the human.”

Of course, that does lead to the problem of giving the robot emotion-perceptors so he can detect the existence of an emotion-structure.

And that, of course, gets almost as tough as the problem of distinguishing a masquerading demon from a man. You know … maybe they are the same problem?

It’s always puzzled me that in the old days they detected so many demons, and so few angels. It always looked as though the Legions of Hell greatly outnumbered the Host of Heaven, or else were far more diligent on Earth.

But then … the subhuman is so much more acceptable than the superhuman.

