The other day I was looking through a new textbook on biology (Biological Science: An Inquiry into Life, written by a number of contributing authors and published by Harcourt, Brace & World, Inc. in 1963). I found it fascinating.
Unfortunately, though, I read the Foreword first (yes, I'm one of that kind) and was instantly plunged into the deepest gloom. Let me quote from the first two paragraphs:
"With each new generation our fund of scientific knowledge increases fivefold… At the current rate of scientific advance, there is about four times as much significant biological knowledge today as in 1930, and about sixteen times as much as in 1900. By the year 2000, at this rate of increase, there will be a hundred times as much biology to 'cover' in the introductory course as at the beginning of the century."
Imagine how this affects me. I am a professional "keeper upper" with science and in my more manic, ebullient, and carefree moments, I even think I succeed fairly well.
Then I read something like the above-quoted passage and the world falls about my ears. I don't keep up with science. Worse, I can't keep up with it. Still worse, I'm falling farther behind every day.
And finally, when I'm all through sorrowing for myself, I devote a few moments to worrying about the world generally. What is going to become of Homo sapiens?
We're going to smarten ourselves to death. After a while, we will all die of pernicious education, with our brain cells crammed to indigestion with facts and concepts, and with blasts of information exploding out of our ears.
But then, as luck would have it, the very day after I read the Foreword to Biological Science I came across an old, old book entitled Pike's Arithmetic. At least that is the name on the spine. On the title page it spreads itself a bit better, for in those days titles were titles. It goes "A New and Complete System of Arithmetic Composed for the Use of the Citizens of the United States," by Nicolas Pike, A.M.
It was first published in 1785, but the copy I have is only the "Second Edition, Enlarged," published in 1797.
It is a large book of over 500 pages, crammed full of small print and with no relief whatever in the way of illustrations or diagrams. It is a solid slab of arithmetic except for small sections at the very end that introduce algebra and geometry.
I was amazed. I have two children in grade school (and once I was in grade school myself), and I know what arithmetic books are like these days. They are nowhere near as large. They can't possibly have even one-fifth the wordage of Pike.
Can it be that we are leaving anything out?
So I went through Pike and, you know, we are leaving something out. And there's nothing wrong with that. The trouble is we're not leaving enough out.
On page 19, for instance, Pike devotes half a page to a listing of numbers as expressed in Roman numerals, extending the list to numbers as high as five hundred thousand.
Now Arabic numerals reached Europe in the High Middle Ages, and once they came on the scene the Roman numerals were completely outmoded. They lost all possible use, so infinitely superior was the new Arabic notation. Until then who knows how many reams of paper were required to explain methods for calculating with Roman numerals. Afterward the same calculations could be performed with a hundredth of the explanation. No knowledge was lost, only inefficient rules.
And yet five hundred years after the deserved death of the Roman numerals, Pike still included them and expected his readers to be able to translate them into Arabic numerals and vice versa, even though he gave no instructions for how to manipulate them. In fact, nearly two hundred years after Pike, the Roman numerals are still being taught. My little daughter is learning them now.
But why? Where's the need? To be sure, you will find Roman numerals on cornerstones and gravestones, on clockfaces and on some public buildings and documents, but they aren't used out of any need at all. They are used for show, for status, for antique flavor, for a craving for some kind of phony classicism.
I dare say there are some sentimental fellows who feel that knowledge of the Roman numerals is a kind of gateway to history and culture; that scrapping them would be like knocking over what is left of the Parthenon, but I have no patience with such mawkishness. We might as well suggest that everyone who learns to drive a car be required to spend some time at the wheel of a Model-T Ford so he could get the flavor of early cardom.
Roman numerals? Forget it! And make room instead for new and valuable material.
But do we dare forget things? Why not? We've forgotten much; more than you imagine. Our troubles stem not from the fact that we've forgotten, but that we remember too well; we don't forget enough.
A great deal of Pike's book consists of material we have imperfectly forgotten. That is why the modern arithmetic book is shorter than Pike. And if we could but perfectly forget, the modern arithmetic book could grow still shorter.
For instance, Pike devotes many pages to tables - presumably important tables that he thought the reader ought to be familiar with. His fifth table is labeled "cloth measure." Did you know that 2¼ inches make a "nail"? Well, they do. And 16 nails make a yard; while 12 nails make an ell.
No, wait a while. Those 12 nails (27 inches) make a Flemish ell. It takes 20 nails (45 inches) to make an English ell, and 24 nails (54 inches) to make a French ell. Then, 16 nails plus 1⅕ inches (37⅕ inches) make a Scotch ell.
Now if you're going to be in the business world and import and export cloth, you're going to have to know all those ells-unless you can figure some way of getting the ell out of business.
Furthermore, almost every piece of goods is measured in its own units. You speak of a firkin of butter, a punch of prunes, a fother of lead, a stone of butcher's meat, and so on. Each of these quantities weighs a certain number of pounds (avoirdupois pounds, but there are also troy pounds and apothecary pounds and so on), and Pike carefully gives all the equivalents.
Do you want to measure distances? Well, how about this: 7 92/100 inches make 1 link; 25 links make 1 pole; 4 poles make 1 chain; 10 chains make 1 furlong; and 8 furlongs make 1 mile.
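Oddly enough, that chain of units hangs together perfectly: multiply the ratios out and you arrive at the very mile we still use. Here is the check, done as a little Python sketch of my own (the names and ratios are exactly those just quoted):

```python
# Pike's ladder of distance units: 1 link = 7.92 inches, and each later
# unit is so many of the one before it.
ratios = [("link", 7.92), ("pole", 25), ("chain", 4),
          ("furlong", 10), ("mile", 8)]

inches = 1.0
for unit, factor in ratios:
    inches *= factor
    print(f"1 {unit} = {inches:g} inches")
```

The last line comes out to 63,360 inches to the mile, which is exactly 5,280 feet.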
Or do you want to measure ale or beer - a very common line of work in Colonial times? You have to know the language, of course. Here it is: 2 pints make a quart and 4 quarts make a gallon. Well, we still know that much anyway.
In Colonial times, however, a mere gallon of beer or ale was but a starter. That was for infants. You had to know how to speak of man-sized quantities. Well, 8 gallons make a firkin - that is, it makes "a firkin of ale in London." It takes, however, 9 gallons to make "a firkin of beer in London." The intermediate quantity, 8½ gallons, is marked down as "a firkin of ale or beer" - presumably outside the environs of London, where the provincial citizens were less finicky in distinguishing between the two.
But we go on: 2 firkins (I suppose the intermediate kind, but I'm not sure) make a kilderkin and 2 kilderkins make a barrel. Then 1½ barrels make 1 hogshead; 2 barrels make a puncheon; and 3 barrels make a butt.
Have you got all that straight?
But let's try dry measure in case your appetite has been sharpened for something still better.
Here, 2 pints make a quart and 2 quarts make a pottle.
(No, not bottle, pottle. Don't tell me you've never heard of a pottle!) But let's proceed.
Next, 2 pottles make a gallon, 2 gallons make a peck, and 4 pecks make a bushel. (Long breath now.) Then 2 bushels make a strike, 2 strikes make a coom, 2 cooms make a quarter, 4 quarters make a chaldron (though in the demanding city of London, it takes 4½ quarters to make a chaldron). Finally, 5 quarters make a wey and 2 weys make a last.
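Out of sheer perversity, I worked out how many pints there are in one last. A few lines of Python (my own amusement, of course, not Pike's) will do the multiplying up the ladder just given:

```python
# Pike's dry-measure ladder: each entry says how many of the previous
# unit make one of the unit named (taking the ordinary 4-quarter route,
# not the London chaldron).
steps = [("quart", 2), ("pottle", 2), ("gallon", 2), ("peck", 2),
         ("bushel", 4), ("strike", 2), ("coom", 2), ("quarter", 2),
         ("wey", 5), ("last", 2)]

pints = 1
for unit, factor in steps:
    pints *= factor
    print(f"1 {unit} = {pints} pints")
```

It comes to 5,120 pints to the last, which is exactly the sort of figure nobody should have to carry in his head.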
I'm not making this up. I'm copying it right out of Pike, page 48.
Were people who were studying arithmetic in 1797 expected to memorize all this? Apparently, yes, because Pike spends a lot of time on compound addition. That's right, compound addition. You see, the addition you consider addition is just "simple addition." Compound addition is something stronger and I will now explain it to you.
Suppose you have 15 apples, your friend has 17 apples, and a passing stranger has 19 apples and you decide to make a pile of them. Having done so, you wonder how many you have altogether. Preferring not to count, you draw upon your college education and prepare to add 15 + 17 + 19. You begin with the units column and find that 5 + 7 + 9 = 21. You therefore divide 21 by 10 and find the quotient is 2 plus a remainder of 1, so you put down the remainder, 1, and carry the quotient, 2, into the tens column.

I seem to hear loud yells from the audience. "What is all this?" comes the fevered demand. "Where does this 'divide by 10' jazz come from?"
Ah, Gentle Readers, but this is exactly what you do whenever you add. It is only that the kindly souls who devised our Arabic system of numeration based it on the number 10 in such a way that when any two-digit number is divided by 10, the first digit represents the quotient and the second the remainder.
For that reason, having the quotient and remainder in our hands without dividing, we can add automatically. If the units column adds up to 21, we put down 1 and carry 2; if it had added up to 57, we would have put down 7 and carried 5, and so on.
The only reason this works, mind you, is that in adding a set of figures, each column of digits (starting from the right and working leftward) represents a value ten times as great as the column before. The rightmost column is units, the one to its left is tens, the one to its left is hundreds, and so on.
It is this combination of a number system based on ten and a value ratio from column to column of ten that makes addition very simple. It is for this reason that it is, as Pike calls it, "simple addition."
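In fact, the divide-by-ten business can be laid out quite mechanically. Here is the apple addition done just as described, in a few lines of Python (a modern indulgence, naturally, and not Pike's):

```python
# Add 15 + 17 + 19 column by column: the digit put down is the remainder
# on dividing the column's sum by 10, and the carry is the quotient.
numbers = [15, 17, 19]

units_sum = sum(n % 10 for n in numbers)            # 5 + 7 + 9 = 21
carry, units_digit = divmod(units_sum, 10)          # quotient 2, remainder 1
tens_total = sum(n // 10 for n in numbers) + carry  # 1 + 1 + 1 + 2 = 5
total = tens_total * 10 + units_digit
print(total)  # 51
```

Python's divmod hands back the quotient and the remainder at once, which is precisely the pair of numbers that "carrying" smuggles past us.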
Now suppose you have 1 dozen and 8 apples, your friend has 1 dozen and 10 apples, and a passing stranger has 1 dozen and 9 apples. Make a pile of those and add them as follows:
1 dozen 8 units
1 dozen 10 units
1 dozen 9 units
Since 8 + 10 + 9 = 27, do we put down 7 and carry 2? Not at all! The ratio of the "dozens" column to the "units" column is not 10 but 12, since there are 12 units to a dozen. And since the number system we are using is based on 10 and not on 12, we can no longer let the digits do our thinking for us. We have to go the long way round.
If 8 + 10 + 9 = 27, we must divide that sum by the ratio of the value of the columns; in this case, 12. We find that 27 divided by 12 gives a quotient of 2 plus a remainder of 3, so we put down 3 and carry 2. In the dozens column we get 1 + 1 + 1 + 2 = 5. Our total therefore is 5 dozen and 3 apples.
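The same mechanical routine works here, provided we remember to divide by 12 instead of by 10; in Python (again a sketch of my own):

```python
# Compound addition with a ratio of 12: divide the units column by 12,
# put down the remainder, carry the quotient into the dozens column.
piles = [(1, 8), (1, 10), (1, 9)]          # (dozens, units) for each pile

units_total = sum(u for _, u in piles)     # 8 + 10 + 9 = 27
carry, units = divmod(units_total, 12)     # quotient 2, remainder 3
dozens = sum(d for d, _ in piles) + carry  # 1 + 1 + 1 + 2 = 5
print(f"{dozens} dozen and {units}")       # 5 dozen and 3
```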
Whenever a ratio other than 10 is used so that you have to make actual divisions in adding, you have "compound addition." You must indulge in compound addition if you try to add 5 pounds 12 ounces and 6 pounds 8 ounces, for there are 16 ounces to a pound. You are stuck again if you add 3 yards 2 feet 6 inches to 1 yard 2 feet 8 inches, for there are 12 inches to a foot, and 3 feet to a yard.
You do the former if you care to; I'll do the latter.
First, 6 inches and 8 inches are 14 inches. Divide 14 by 12, getting 1 and a remainder of 2, so you put down 2 and carry 1. As for the feet, 2 + 2 + 1 = 5. Divide 5 by 3 and get 1 and a remainder of 2; put down 2 and carry 1. In the yards, you have 3 + 1 + 1 = 5. Your answer, then, is 5 yards 2 feet 2 inches.
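Indeed, the whole dreary business can be captured in one little routine that divides each column's sum by that column's own ratio and carries the quotient along. Here is such a sketch in Python (the function name and layout are my own invention, not anybody's standard):

```python
def compound_add(quantities, ratios):
    """Add mixed-unit quantities, written most significant unit first.
    ratios[i] says how many of column i's units make one unit of the
    column to its left; the leftmost ratio is None (no further carry)."""
    result = []
    carry = 0
    for col in range(len(ratios) - 1, -1, -1):   # right to left
        col_sum = sum(q[col] for q in quantities) + carry
        if ratios[col] is None:                  # topmost column
            result.append(col_sum)
        else:
            carry, digit = divmod(col_sum, ratios[col])
            result.append(digit)
    return result[::-1]

# 3 yd 2 ft 6 in + 1 yd 2 ft 8 in, with 12 inches to a foot, 3 feet to a yard
print(compound_add([(3, 2, 6), (1, 2, 8)], [None, 3, 12]))  # [5, 2, 2]
```

The same routine handles the pounds-and-ounces sum with the ratio list [None, 16], or the dozens with [None, 12]; only the list of ratios need change, which is exactly why a single ratio of 10 would be such a mercy.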
Now why on Earth should our unit ratios vary all over the lot, when our number system is so firmly based on 10?
There are many reasons (valid in their time) for the use of odd ratios like 2, 3, 4, 8, 12, 16, and 20, but surely we are now advanced and sophisticated enough to use 10 as the exclusive (or nearly exclusive) ratio. If we could do so, we could with such pleasure forget about compound addition - and compound subtraction, compound multiplication, compound division, too. (They also exist, of course.)
To be sure, there are times when nature makes the universal ten impossible. In measuring time, the day and the year have their lengths fixed for us by astronomical conditions and neither unit of time can be abandoned. Compound addition and the rest will have to be retained for such special cases, alas.
But who in blazes says we must measure things in firkins and pottles and Flemish ells? These are purely man-made measurements, and we must remember that measures were made for man and not man for measures.
It so happens that there is a system of measurement based exclusively on ten in this world. It is called the metric system and it is used all over the civilized world except for certain English-speaking nations such as the United States and Great Britain.
By not adopting the metric system, we waste our time, for we gain nothing, not one thing, by learning our own measurements. The loss in time (which is expensive indeed) is balanced by not one thing I can imagine. (To be sure, it would be expensive to convert existing instruments and tools, but it would have been nowhere nearly as expensive if we had done it a century ago, as we should have.)
There are those, of course, who object to violating our long-used cherished measures. They have given up cooms and chaldrons but imagine there is something about inches and feet and pints and quarts and pecks and bushels that is "simpler" or "more natural" than meters and liters.
There may even be people who find something dangerously foreign and radical (oh, for that vanished word of opprobrium, "Jacobin") in the metric system - yet it was the United States that led the way.
In 1786, thirteen years before the wicked French revolutionaries designed the metric system, Thomas Jefferson (a notorious "Jacobin," according to the Federalists, at least) saw a suggestion of his adopted by the infant United States. The nation established a decimal currency.
What we had been using was British currency, and that is a fearsome and wonderful thing. Just to point out how preposterous it is, let me say that the British people - who, over the centuries, have, with monumental patience, taught themselves to endure anything at all provided it was "traditional" - are now sick and tired of their currency and are debating converting it to the decimal system. (They can't agree on the exact details of the change.)
But consider the British currency as it has been. To begin with, 4 farthings make 1 penny; 12 pennies make 1 shilling, and 20 shillings make 1 pound. In addition, there is a virtual farrago of terms, if not always actual coins, such as ha'pennies and thruppences and sixpences and crowns and half-crowns and florins and guineas and heaven knows what other devices with which to cripple the mental development of the British schoolchild and line the pockets of British tradesmen whenever tourists come to call and attempt to cope with the currency.
Needless to say, Pike gives careful instruction on how to manipulate pounds, shillings, and pence-and very special instructions they are. Try dividing 5 pounds, 13 shillings, 7 pence by 3. Quick now!
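Lest I be accused of shirking, here is that very division worked out. The routine (sketched in Python, though the method is exactly the schoolroom one) is to divide the pounds, push the remainder down into shillings, and so on:

```python
# Divide 5 pounds 13 shillings 7 pence by 3, compound-division style:
# 20 shillings to the pound, 12 pence to the shilling.
pounds, shillings, pence = 5, 13, 7
divisor = 3

q_pounds, rem = divmod(pounds, divisor)                   # 1, remainder 2 pounds
q_shillings, rem = divmod(rem * 20 + shillings, divisor)  # 53 s -> 17, rem 2 s
q_pence, rem = divmod(rem * 12 + pence, divisor)          # 31 d -> 10, rem 1 d
print(q_pounds, q_shillings, q_pence, rem)  # 1 17 10 1
```

That is 1 pound, 17 shillings, 10 pence, with a penny left over - and notice that the divisor-ratio of 20 gives way to 12 partway through, which is just the sort of thing that makes the system a torment.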
In the United States, the money system, as originally established, is as follows: 10 mills make 1 cent; 10 cents make 1 dime; 10 dimes make 1 dollar; 10 dollars make 1 eagle. Actually, modern Americans, in their calculations, stick to dollars and cents only.
The result? American money can be expressed in decimal form and can be treated as any other decimal. An American child who has learned decimals need only be taught to recognize the dollar sign and he is all set. In the time it takes him to do so, a British child has barely mastered the fact that thruppence ha'penny equals 14 farthings.
What a pity that when, thirteen years later, in 1799, the metric system came into being, our original anti-British, pro-French feelings had not lasted just long enough to allow us to adopt it. Had we done so, we would have been as happy to forget our foolish pecks and ounces, as we are now happy to have forgotten our pence and shillings.
(After all, would you like to go back to British currency in preference to our own?)
What I would like to see is one form of money do for all the world. Everywhere. Why not?
I appreciate the fact that I may be accused, because of this, of wanting to pour humanity into a mold and of being a conformist. Of course, I am not a conformist (heavens!).
I have no objection to local customs and local dialects and local dietaries. In fact, I insist on them for I constitute a locality all by myself. I just don't want to keep provincialisms that were well enough in their time but that interfere with human well-being in a world which is now 90 minutes in circumference.
If you think provincialism is cute and gives humanity color and charm, let me quote to you once more from Pike.
"Federal Money" (dollars and cents) had been introduced eleven years before Pike's second edition, and he gives the exact wording of the law that established it and discusses it in detail - under the decimal system and not under compound addition.
Naturally, since other systems than the Federal were still in use, rules had to be formulated and given for converting (or "reducing") one system to another. Here is the list. I won't give you the actual rules, just the list of reductions that were necessary, exactly as he lists them:
I. To reduce New Hampshire, Massachusetts, Rhode Island, Connecticut, and Virginia currency:
1. To Federal Money
2. To New York and North Carolina currency
3. To Pennsylvania, New Jersey, Delaware, and Maryland currency
4. To South Carolina and Georgia currency
5. To English money
6. To Irish money
7. To Canada and Nova Scotia currency
8. To Livres Tournois (French money)
9. To Spanish milled dollars
II. To reduce Federal Money to New England and Virginia currency.
III. To reduce New Jersey, Pennsylvania, Delaware, and Maryland currency:
1. To New Hampshire, Massachusetts, Rhode Island, Connecticut, and Virginia currency
2. To New York and…
Oh, the heck with it. You get the idea.
Can anyone possibly be sorry that all that cute provincial flavor has vanished? Are you sorry that every time you travel out of state you don't have to throw yourself into fits of arithmetical discomfort whenever you want to make a purchase? Or into similar fits every time someone from another state invades yours and tries to dicker with you? What a pleasure to have forgotten all that.
Then tell me what's so wonderful about having fifty sets of marriage and divorce laws?
In 1752, Great Britain and her colonies (some two centuries later than Catholic Europe) abandoned the Julian calendar and adopted the astronomically more correct Gregorian calendar (see Chapter 1). Nearly half a century later, Pike was still giving rules for solving complex calendar-based problems for the Julian calendar as well as for the Gregorian. Isn't it nice to have forgotten the Julian calendar?
Wouldn't it be nice if we could forget most of the calendrical complications by adopting a rational calendar that would tie the day of the month firmly to the day of the week and have a single three-month calendar serve as a perpetual one, repeating itself over and over every three months? There is a world calendar proposed which would do just this.
It would enable us to do a lot of useful forgetting.
I would like to see the English language come into worldwide use. Not necessarily as the only language or even as the major language. It would just be nice if everyone - whatever his own language was - could also speak English fluently. It would help in communications and perhaps, eventually, everyone would just choose to speak English.
That would save a lot of room for other things.
Why English? Well, for one thing more people speak English as either first or second language than any other language on Earth, so we have a head start. Secondly, far more science is reported in English than in any other language and it is communication in science that is critical today and will be even more critical tomorrow.
To be sure, we ought to make it as easy as possible for people to speak English, which means we should rationalize its spelling and grammar.
English, as it is spelled today, is almost a set of Chinese ideograms. No one can be sure how a word is pronounced by looking at the letters that make it up. How do you pronounce: rough, through, though, cough, hiccough, and lough; and why is it so terribly necessary to spell all those sounds with the mad letter combination "ough"?
It looks funny, perhaps, to spell the words ruff, throo, thoh, cawf, hiccup, and lokh; but we already write hiccup and it doesn't look funny. We spell colour, color; centre, center; shew, show; and grey, gray. The result looks funny to a Britisher but we are used to it. We can get used to the rest, too, and save a lot of wear and tear on the brain. We would all become more intelligent, if intelligence is measured by proficiency at spelling, and we'll not have lost one thing.
And grammar? Who needs the eternal hair-splitting arguments about "shall" and "will" or "which" and "that"?
The uselessness of it can be demonstrated by the fact that virtually no one gets it straight anyway. Aside from losing valuable time, blunting a child's reasoning faculties, and instilling in him or her a ravening dislike for the English language, what do you gain?
If there be some who think that such blurring of fine distinctions will ruin the language, I would like to point out that English, before the grammarians got hold of it, had managed to lose its gender and its declensions almost everywhere except among the pronouns. The fact that we have only one definite article (the) for all genders and cases and times instead of three, as in French (le, la, les) or six, as in German (der, die, das, dem, den, des) in no way blunts the English language, which remains an admirably flexible instrument. We cherish our follies only because we are used to them and not because they are not really follies.
We must make room for expanding knowledge, or at least make as much room as possible. Surely it is as important to forget the old and useless as it is to learn the new and important.
Forget it, I say, forget it more and more. Forget it!
But why am I getting so excited? No one is listening to a word I say.
In the previous chapter, I spoke of a variety of things; among them, Roman numerals. These seem, even after five centuries of obsolescence, to exert a peculiar fascination over the inquiring mind.
It is my theory that the reason for this is that Roman numerals appeal to the ego. When one passes a cornerstone which says: "Erected MCMXVIII," it gives one a sensation of power to say, "Ah, yes, nineteen eighteen" to one's self. Whatever the reason, they are worth further discussion.
The notion of number and of counting, as well as the names of the smaller and more-often-used numbers, date back to prehistoric times and I don't believe that there is a tribe of human beings on Earth today, however primitive, that does not have some notion of number.
With the invention of writing (a step which marks the boundary line between "prehistoric" and "historic"), the next step had to be taken-numbers had to be written.
One can, of course, easily devise written symbols for the words that represent particular numbers, as easily as for any other word. In English we can write the number of fingers on one hand as "five" and the number of digits on all four limbs as "twenty."
Early in the game, however, the kings' tax-collectors, chroniclers, and scribes saw that numbers had the peculiarity of being ordered. There was one set way of counting numbers and any number could be defined by counting up to it. Therefore, why not make marks which need merely be counted up to the proper number?
Thus, if we let "one" be represented as |, "two" as ||, and "three" as |||, we can then work out the number indicated by a given symbol without trouble. You can see, for instance, that the symbol ||||||||||||||||||||||| stands for "twenty-three." What's more, such a symbol is universal.
Whatever language you count in, the symbol stands for the number "twenty-three" in whatever sound your particular language uses to represent it.
It gets hard to read too many marks in an unbroken row, so it is only natural to break it up into smaller groups. If we are used to counting on the fingers of one hand, it seems natural to break up the marks into groups of five.
"Twenty-three" then becomes ||||| ||||| ||||| ||||| |||. If we are more sophisticated and use both hands in counting, we would write it |||||||||| |||||||||| |||. If we go barefoot and use our toes, too, we might break numbers into twenties.
All three methods of breaking up number symbols into more easily handled groups have left their mark on the various number systems of mankind, but the favorite was division into ten. Twenty symbols in one group are, on the whole, too many for easy grasping, while five symbols in one group produce too many groups as numbers grow larger. Division into ten is the happy compromise.
It seems a natural thought to go on to indicate groups of ten by a separate mark. There is no reason to insist on writing out a group of ten as |||||||||| every time, when a separate mark, let us say -, can be used for the purpose.
In that case "twenty-three" could be written as - - |||.
Once you've started this way, the next steps are clear.
By the time you have ten groups of ten (a hundred), you can introduce another symbol, for instance +. Ten hundreds, or a thousand, can become = and so on. In that case, the number "four thousand six hundred seventy-five" can be written = = = = + + + + + + - - - - - - - |||||.
To make such a set of symbols more easily graspable, we can take advantage of the ability of the eye to form a pattern. (You know how you can tell the numbers displayed by a pack of cards or a pair of dice by the pattern itself.)
We could therefore write "four thousand six hundred seventy-five" as

= =    + + +    - - -    | |
= =    + + +    - -      | |
                - -      |
And, as a matter of fact, the ancient Babylonians used just this system of writing numbers, but they used cuneiform wedges to express it.
The Greeks, in the earlier stages of their development, used a system similar to that of the Babylonians, but in later times an alternate method grew popular. They made use of another ordered system-that of the letters of the alphabet.
It is natural to correlate the alphabet and the number system. We are taught both at about the same time in childhood, and the two ordered systems of objects naturally tend to match up. The series "ay, bee, see, dee…" comes as glibly as "one, two, three, four…" and there is no difficulty in substituting one for the other.
If we use an undifferentiated symbol such as ||||||| for "seven," all the components of the symbol are identical, and all must be included without exception if the symbol is to mean "seven" and nothing else. On the other hand, if "ABCDEFG" stands for "seven" (count the letters and see) then, since each symbol is different, only the last need be written. You can't confuse the fact that G is the seventh letter of the alphabet and therefore stands for "seven." In this way, a one-component symbol does the work of a seven-component symbol. Furthermore, |||||| (six) looks very much like ||||||| (seven); whereas F (six) looks nothing at all like G (seven).
The Greeks used their own alphabet, of course, but let's use our own alphabet here for the complete demonstration:
A = one, B = two, C = three, D = four, E = five, F = six, G = seven, H = eight, I = nine, and J = ten.
We could let the letter K go on to equal "eleven," but at that rate our alphabet will only help us up through "twenty-six." The Greeks had a better system. The Babylonian notion of groups of ten had left its mark. If J = ten, then J equals not only ten objects but also one group of tens. Why not, then, continue the next letters as numbering groups of tens?
In other words, J = ten, K = twenty, L = thirty, M = forty, N = fifty, O = sixty, P = seventy, Q = eighty, R = ninety. Then we can go on to number groups of hundreds: S = one hundred, T = two hundred, U = three hundred, V = four hundred, W = five hundred, X = six hundred, Y = seven hundred, Z = eight hundred. It would be convenient to go on to nine hundred, but we have run out of letters. However, in old-fashioned alphabets the ampersand (&) was sometimes placed at the end of the alphabet, so we can say that & = nine hundred.
The first nine letters, in other words, represent the units from one to nine, the second nine letters represent the tens groups from one to nine, the third nine letters represent the hundreds groups from one to nine. (The Greek alphabet, in classic times, had only twenty-four letters where twenty-seven are needed, so the Greeks made use of three archaic letters to fill out the list.)
This system possesses its advantages and disadvantages over the Babylonian system. One advantage is that any number under a thousand can be given in three symbols.
For instance, by the system I have just set up with our alphabet, six hundred seventy-five is XPE, while eight hundred sixteen is ZJF.
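For the curious, the whole scheme can be put into a dozen lines of Python (the dictionary trick is my own; the letter values are exactly those assigned above, with & as nine hundred):

```python
# A-I are the units one to nine, J-R the tens, S-Z and & the hundreds.
letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ&"
value = {ch: (i % 9 + 1) * 10 ** (i // 9) for i, ch in enumerate(letters)}

def to_alphabet(n):
    """Write a number from 1 to 999 in the alphabet system."""
    out = ""
    for place in (100, 10, 1):
        digit = n // place % 10
        if digit:  # a zero digit simply gets no letter at all
            out += next(ch for ch, v in value.items() if v == digit * place)
    return out

print(to_alphabet(675), to_alphabet(816))  # XPE ZJF
```

Notice that a zero digit needs no symbol whatever; the letter itself carries its place along with it, which is at once the system's charm and its dead end.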
One disadvantage of the Greek system, however, is that the significance of twenty-seven different symbols must be carefully memorized for the use of numbers to a thousand, whereas in the Babylonian system only three different symbols must be memorized.
Furthermore, the Greek system comes to a natural end when the letters of the alphabet are used up. Nine hundred ninety-nine (&RI) is the largest number that can be written without introducing special markings to indicate that a particular symbol indicates groups of thousands, tens of thousands, and so on. I will get back to this later.
A rather subtle disadvantage of the Greek system was that the same symbols were used for numbers and words so that the mind could be easily distracted. For instance, the Jews of Graeco-Roman times adopted the Greek system of representing numbers but, of course, used the Hebrew alphabet - and promptly ran into a difficulty. The number "fifteen" would naturally be written as "ten-five."
In the Hebrew alphabet, however, "ten-five" represents a short version of the ineffable name of the Lord, and the Jews, uneasy at the sacrilege, allowed "fifteen" to be repre sented as "nine-six" instead.
Worse yet, words in the Greek-Hebrew system look like numbers. For instance, to use our own alphabet, WRA is "five hundred ninety-one." In the alphabet system it doesn't usually matter in which order we place the symbols (though, as we shall see, this came to be untrue for the Roman numerals, which are alphabetic), and WAR also means "five hundred ninety-one." (After all, we can say "five hundred one-and-ninety" if we wish.) Consequently, it is easy to believe that there is something warlike, martial, and of ominous import in the number "five hundred ninety-one."
The Jews, poring over every syllable of the Bible in their effort to copy the word of the Lord with the exactness that reverence required, saw numbers in all the words, and in New Testament times a whole system of mysticism rose over the numerical interrelationships within the Bible. This was the nearest the Jews came to mathematics, and they called this numbering of words gematria, which is a distortion of the Greek geometria. We now call it "numerology."
Some poor souls, even today, assign numbers to the different letters and decide which names are lucky and which unlucky, and which boy should marry which girl, and so on. It is one of the more laughable pseudo-sciences.
In one case, a piece of gematria had repercussions in later history. This bit of gematria is to be found in "The Revelation of St. John the Divine," the last book of the New Testament - a book which is written in a mystical fashion that defies literal understanding. The reason for the lack of clarity seems quite clear to me. The author of Revelation was denouncing the Roman government and was laying himself open to a charge of treason and to subsequent crucifixion if he made his words too clear. Consequently, he made an effort to write in such a way as to be perfectly clear to his "in-group" audience, while remaining completely meaningless to the Roman authorities.
In the thirteenth chapter he speaks of beasts of diabolical powers, and in the eighteenth verse he says, "Here is wisdom. Let him that hath understanding count the number of the beast: for it is the number of a man; and his number is Six hundred threescore and six."
Clearly, this is designed not to give the pseudo-science of gematria holy sanction, but merely to serve as a guide to the actual person meant by the obscure imagery of the chapter. Revelation, as nearly as is known, was written only a few decades after the first great persecution of Christians under Nero. If Nero's name ("Neron Caesar") is written in Hebrew characters the sum of the numbers represented by the individual letters does indeed come out to be six hundred sixty-six, "the number of the beast."
Of course, other interpretations are possible. In fact, if Revelation is taken as having significance for all time as well as for the particular time in which it was written, it may also refer to some anti-Christ of the future. For this reason, generation after generation, people have made attempts to show that, by the appropriate jugglings of the spelling of a name in an appropriate language, and by the appropriate assignment of numbers to letters, some particular personal enemy could be made to possess the number of the beast.
If the Christians could apply it to Nero, the Jews themselves might easily have applied it in the next century to Hadrian, if they had wished. Five centuries later it could be (and was) applied to Mohammed. At the time of the Reformation, Catholics calculated Martin Luther's name and found it to be the number of the beast, and Protestants returned the compliment by making the same discovery in the case of several popes.
Later still, when religious rivalries were replaced by nationalistic ones, Napoleon Bonaparte and William II were appropriately worked out. What's more, a few minutes' work with my own system of alphabet-numbers shows me that "Herr Adolif Hitler" has the number of the beast. (I need that extra "I" to make it work.)
The Roman system of number symbols had similarities to both the Greek and Babylonian systems. Like the Greeks, the Romans used letters of the alphabet. However, they did not use them in order, but used just a few letters, which they repeated as often as necessary-as in the Babylonian system. Unlike the Babylonians, the Romans did not invent a new symbol for every tenfold increase of number, but (more primitively) used new symbols for fivefold increases as well.
Thus, to begin with, the symbol for "one" is I, and "two," "three," and "four," can be written II, III, and IIII.
The symbol for five, then, is not IIIII, but V. People have amused themselves no end trying to work out the reasons for the particular letters chosen as symbols, but there are no explanations that are universally accepted.
However, it is pleasant to think that I represents the upheld finger and that V might symbolize the hand itself with all five fingers-one branch of the V would be the outheld thumb, the other, the remaining fingers. For "six," "seven," "eight," and "nine," we would then have VI, VII, VIII, and VIIII.
For "ten" we would then have X, which (some peo ple think) represents both hands held wrist to wrist.
"Twenty-three" would be XXIII, "forty-eight" would be XXXXVIII, and so on.
The symbol for "fifty" is L, for "one hundred" is C, for "five hundred" is D, and for "one thousand" is M. The C and M are easy to understand, for C is the first letter of centum (meaning "one hundred") and M is the first letter of rnille (one thousand).
For that very reason, however, those symbols are suspicious. As initials they may have come to oust the original less-meaningful symbols for those numbers. For instance, an alternative symbol for "thousand" looks something like this: (I). Half of a thousand, or "five hundred," is the right half of the symbol, or I), and this may have been converted into D. As for the L which stands for "fifty," I don't know why it is used.
Now, then, we can write nineteen sixty-four, in Roman numerals, as follows: MDCCCCLXIIII.
One advantage of writing numbers according to this system is that it doesn't matter in which order the numbers are written. If I decided to write nineteen sixty-four as CDCLIIMXCICI, it would still represent nineteen sixty-four if I add up the number values of each letter. However, it is not likely that anyone would ever scramble the letters in this fashion. If the letters were written in strict order of decreasing value, as I did the first time, it would then be much simpler to add the values of the letters. And, in fact, this order of decreasing value is (except for special cases) always used.
Once the order of writing the letters in Roman numerals is made an established convention, one can make use of deviations from that set order if it will help simplify matters. For instance, suppose we decide that when a symbol of smaller value follows one of larger value, the two are added; while if the symbol of smaller value precedes one of larger value, the first is subtracted from the second. Thus VI is "five" plus "one," or "six," while IV is "five" minus "one," or "four." (One might even say that IIV is "three," but it is conventional to subtract no more than one symbol.) In the same way LX is "sixty" while XL is "forty"; CX is "one hundred ten," while XC is "ninety"; MC is "one thousand one hundred," while CM is "nine hundred."
The value of this "subtractive principle" is that two symbols can do the work of five. Why write VIIII if you can write IX; or DCCCC if you can write CM? The year nineteen sixty-four, instead of being written MDCCCCLXIIII (twelve symbols), can be written MCMLXIV (seven symbols). On the other hand, once you make the order of writing letters significant, you can no longer scramble them even if you wanted to. For instance, if MCMLXIV is scrambled to MMCLXVI it becomes "two thousand one hundred sixty-six."
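Both ways of writing Roman numerals amount to a greedy procedure: keep taking the largest symbol (or, in the subtractive case, the largest symbol or pair) that still fits. A small Python sketch of the two schemes:

```python
# Additive and subtractive Roman numerals, as described above.
ADDITIVE = [(1000, "M"), (500, "D"), (100, "C"), (50, "L"),
            (10, "X"), (5, "V"), (1, "I")]
SUBTRACTIVE = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
               (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
               (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int, table) -> str:
    """Greedily take the largest entry that still fits."""
    out = []
    for value, symbol in table:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

print(to_roman(1964, ADDITIVE))     # MDCCCCLXIIII (twelve symbols)
print(to_roman(1964, SUBTRACTIVE))  # MCMLXIV (seven symbols)
```

The subtractive table simply adds the five permitted subtractive pairs (CM, CD, XC, XL, IX, IV) as compound symbols; everything else is unchanged.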
The subtractive principle was used on and off in ancient times but was not regularly adopted until the Middle Ages.
One interesting theory for the delay involves the simplest use of the principle-that of IV ("four"). These are the first letters of IVPITER, the chief of the Roman gods, and the Romans may have had a delicacy about writing even the beginning of the name. Even today, on clockfaces bearing Roman numerals, "four" is represented as IIII and never as IV. This is not because the clockface does not accept the subtractive principle, for "nine" is represented as IX and never as VIIII.
With the symbols already given, we can go up to the number "four thousand nine hundred ninety-nine" in Roman numerals. This would be MMMMDCCCCLXXXXVIIII or, if the subtractive principle is used, MMMMCMXCIX. You might suppose that "five thousand" (the next number) could be written MMMMM, but this is not quite right. Strictly speaking, the Roman system never requires a symbol to be repeated more than four times. A new symbol is always invented to prevent that: IIIII = V; XXXXX = L; and CCCCC = D. Well, then, what is MMMMM?
No letter was decided upon for "five thousand." In ancient times there was little need in ordinary life for numbers that high. And if scholars and tax collectors had occasion for larger numbers, their systems did not percolate down to the common man.
One method of penetrating to "five thousand" and beyond is to use a bar to represent thousands. Thus, V with a bar over it would represent not "five" but "five thousand." And sixty-seven thousand four hundred eighty-two would be LXVII, with a bar over it, followed by CDLXXXII.
But another method of writing large numbers harks back to the primitive symbol (I) for "thousand." By adding to the curved lines we can increase the number by ratios of ten. Thus "ten thousand" would be ((I)), and "one hundred thousand" would be (((I))). Then just as "five hundred" was I) or D, "five thousand" would be I)) and "fifty thousand" would be I))).
Just as the Romans made special marks to indicate thousands, so did the Greeks. What's more, the Greeks made special marks for ten thousands and for millions (or at least some of the Greek writers did). That the Romans didn't carry this to the logical extreme is no surprise. The Romans prided themselves on being non-intellectual. That the Greeks missed it also, however, will never cease to astonish me.
Suppose that instead of making special marks for large numbers only, one were to make special marks for every type of group from the units on. If we stick to the system I introduced at the start of the chapter-that is, the one in which ' stands for units, - for tens, + for hundreds, and = for thousands-then we could get by with but one set of nine symbols. We could write every number with a little heading, marking off the type of groups: = + - '. Then for "two thousand five hundred eighty-one" we could get by with only the letters from A to I and write it BEHA. What's more, for "five thousand five hundred fifty-five" we could write EEEE. There would be no confusion with all the E's, since the symbol above each E would indicate that one was a "five," another a "fifty," another a "five hundred," and another a "five thousand." By using additional symbols for ten thousands, hundred thousands, millions, and so on, any number, however large, could be written in this same fashion.
Yet it is not surprising that this would not be popular.
Even if a Greek had thought of it he would have been repelled by the necessity of writing those tiny symbols. In an age of hand-copying, additional symbols meant additional labor and scribes would resent that furiously.
Of course, one might easily decide that the symbols weren't necessary. The groups, one could agree, could always be written right to left in increasing values. The units would be at the right end, the tens next on the left, the hundreds next, and so on. In that case, BEHA would be "two thousand five hundred eighty-one" and EEEE would be "five thousand five hundred fifty-five" even without the little symbols on top.
Here, though, a difficulty would creep in. What if there were no groups of ten, or perhaps no units, in a particular number? Consider the number "ten" or the number "one hundred and one." The former is made up of one group of ten and no units, while the latter is made up of one group of hundreds, no groups of tens, and one unit. Using symbols over the columns, the numbers could be written A and A A, but now you would not dare leave out the little symbols. If you did, how could you differentiate A meaning "ten" from A meaning "one," or AA meaning "one hundred and one" from AA meaning "eleven" or AA meaning "one hundred and ten"?
You might try to leave a gap so as to indicate "one hundred and one" by A A. But then, in an age of hand-copying, how quickly would that become AA, or, for that matter, how quickly might AA become A A? Then, too, how would you indicate a gap at the end of a symbol? No, even if the Greeks thought of this system, they must obviously have come to the conclusion that the existence of gaps in numbers made this attempted simplification impractical.
They decided it was safer to let J stand for "ten" and SA for "one hundred and one" and to Hades with little symbols.
What no Greek ever thought of-not even Archimedes himself-was that it wasn't absolutely necessary to work with gaps. One could fill the gap with a symbol by letting one stand for nothing-for "no groups." Suppose we use $ as such a symbol. Then, if "one hundred and one" is made up of one group of hundreds, no groups of tens, and one unit, it can be written A$A. If we do that sort of thing, all gaps are eliminated and we don't need the little symbols on top. "One" becomes A, "ten" becomes A$, "one hundred" becomes A$$, "one hundred and one" becomes A$A, "one hundred and ten" becomes AA$, and so on.
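This positional scheme is exactly our own decimal notation with the digits relabeled. A minimal Python sketch, using A through I for 1 through 9 and $ as the placeholder for "no groups":

```python
# Positional notation with a symbol for "no groups": digits 1-9 become
# A-I, and $ marks an empty column.
DIGITS = " ABCDEFGHI"  # index 1..9 maps to A..I; index 0 is unused

def to_alpha(n: int) -> str:
    """Write n positionally, with $ standing for a zero digit."""
    return "".join("$" if d == "0" else DIGITS[int(d)] for d in str(n))

print(to_alpha(101))   # A$A
print(to_alpha(110))   # AA$
print(to_alpha(2581))  # BEHA
print(to_alpha(5555))  # EEEE
```

Since each column's meaning is fixed by its position, no overhead symbols are needed, and the ambiguity between "ten," "eleven," and "one hundred and ten" disappears.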
Any number, however large, can be written with the use of exactly nine letters plus a symbol for nothing. Surely this is the simplest thing in the world-after you think of it.
Yet it took men about five thousand years, counting from the beginning of number symbols, to think of a symbol for nothing. The man who succeeded (one of the most creative and original thinkers in history) is unknown. We know only that he was some Hindu who lived no later than the ninth century.
The Hindus called the symbol sunya, meaning "empty."
This symbol for nothing was picked up by the Arabs, who termed it sifr, which in their language meant "empty." This has been distorted into our own words "cipher" and, by way of zefirum, into "zero."
Very slowly, the new system of numerals (called "Arabic numerals" because the Europeans learned of them from the Arabs) reached the West and replaced the Roman system.
Because the Arabic numerals came from lands which did not use the Roman alphabet, the shape of the numerals was nothing like the letters of the Roman alphabet and this was good, too. It removed word-number confusion and reduced gematria from the everyday occupation of anyone who could read, to a burdensome folly that only a few would wish to bother with.
The Arabic numerals as now used by us are, of course, 1, 2, 3, 4, 5, 6, 7, 8, 9, and the all-important 0. Such is our reliance on these numerals (which are internationally accepted) that we are not even aware of the extent to which we rely on them. For instance, if this chapter has seemed vaguely queer to you, perhaps it was because I had deliberately refrained from using Arabic numerals all through.
We all know the great simplicity Arabic numerals have lent to arithmetical computation. The unnecessary load they took off the human mind, all because of the presence of the zero, is simply incalculable. Nor has this fact gone unnoticed in the English language. The importance of the zero is reflected in the fact that when we work out an arithmetical computation we are (to use a term now slightly old-fashioned) "ciphering." And when we work out some code, we are "deciphering" it.
So if you look once more at the title of this chapter, you will see that I am not being cynical. I mean it literally.
Nothing counts! The symbol for nothing makes all the difference in the world.
If ever an equation has come into its own it is Einstein's e = mc². Everyone can rattle it off now, from the highest to the lowest; from the rarefied intellectual height of the science-fiction reader, through nuclear physicists, college students, newspaper reporters, housewives, busboys, all the way down to congressmen.
Rattling it off is not, of course, the same as understanding it; any more than a quick paternoster (from which, incidentally, the word "patter" is derived) is necessarily evidence of deep religious devotion.
So let's take a look at the equation. Each letter is the initial of a word representing the concept it stands for.
Thus, e is the initial letter of "energy" and m of "mass."
As for c, that is the speed of light in a vacuum, and if you ask why c, the answer is that it is the initial letter of celeritas, the Latin word meaning "speed."
This is not all, however. For any equation to have meaning in physics, there must be an understanding as to the units being used. It is meaningless to speak of a mass of 2.3, for instance. It is necessary to say 2.3 grams or 2.3 pounds or 2.3 tons; 2.3 alone is worthless.
Theoretically, one can choose whatever units are most convenient, but as a matter of convention, one system used in physics is to start with "grams" for mass, "centimeters" for distance, and "seconds" for time; and to build up, as far as possible, other units out of appropriate combinations of these three fundamental ones.
Therefore, the m in Einstein's equation is expressed in grams, abbreviated gm. The c represents a speed-that is, a distance traveled in a certain time. Using the fundamental units, this means the number of centimeters traveled in a certain number of seconds. The units of c are therefore centimeters per second, or cm/sec.
(Notice that the word "per" is represented by a fraction line. The reason for this is that to get a speed represented in lowest terms, that is, the number of centimeters traveled in one second, you must divide the number of centimeters traveled by the number of seconds of traveling. If you travel 24 centimeters in 8 seconds, your speed is 24 centimeters divided by 8 seconds, or 3 cm/sec.)
But, to get back to our subject, c occurs as its square in the equation. If you multiply c by c, you get c². It is, however, insufficient to multiply the numerical value of c by itself. You must also multiply the unit of c by itself.
A common example of this is in connection with measurements of area. If you have a tract of land that is 60 feet by 60 feet, the area is not 60 x 60, or 3600 feet. It is 60 feet x 60 feet, or 3600 square feet.
Similarly, in dealing with c², you must multiply cm/sec by cm/sec and end with the units cm²/sec² (which can be read as centimeters squared per seconds squared).
The next question is: What is the unit to be used for e?
Einstein's equation itself will tell us, if we remember to treat units as we treat any other algebraic symbols. Since e = mc², that means the unit of e can be obtained by multiplying the unit of m by the unit of c². Since the unit of m is gm and that of c² is cm²/sec², the unit of e is gm x cm²/sec². In algebra we represent a x b as ab; consequently, we can leave the multiplication sign out of the unit of e and make it simply gm cm²/sec² (which is read "gram centimeter squared per second squared").
As it happens, this is fine, because long before Einstein worked out his equation it had been decided that the unit of energy on the gram-centimeter-second basis had to be gm cm²/sec². I'll explain why this should be.
The unit of speed is, as I have said, cm/sec, but what happens when an object changes speed? Suppose that at a given instant, an object is traveling at 1 cm/sec, while a second later it is traveling at 2 cm/sec; and another second later it is traveling at 3 cm/sec. It is, in other words, "accelerating" (also from the Latin word celeritas).
In the case I've just cited, the acceleration is 1 centimeter per second every second, since each successive second it is going 1 centimeter per second faster. You might say that the acceleration is 1 cm/sec per second. Since we are letting the word "per" be represented by a fraction mark, this may be represented as 1 cm/sec/sec.
As I said before, we can treat the units by the same manipulations used for algebraic symbols. An expression like a/b/b is equivalent to (a/b)/b, which is in turn equivalent to a/b x 1/b, which is in turn equivalent to a/b². By the same reasoning, 1 cm/sec/sec is equivalent to 1 cm/sec², and it is cm/sec² that is therefore the unit of acceleration.
A "force" is defined, in Newtonian physics, as some thing that will bring about an acceleration. By Newton's First Law of Motion any object in motion, left to itself, will travel at constant speed in a constant direction forever.
A speed in a particular direction is referred to as a "velocity," so we might, more simply, say that an object in motion, left to itself, will travel at constant velocity forever.
This velocity may well be zero, so that Newton's First Law also says that an object at rest, left to itself, will remain at rest forever.
As soon as a force, which may be gravitational, electromagnetic, mechanical, or anything, is applied, however, the velocity is changed. This means that its speed of travel or its direction of travel or both is changed.
The quantity of force applied to an object is measured by the amount of acceleration induced, and also by the mass of the object, since the force applied to a massive object produces less acceleration than the same force applied to a light object. (If you want to check this for yourself, kick a beach ball with all your might and watch it accelerate from rest to a good speed in a very short time. Next kick a cannon ball with all your might and observe-while hopping in agony-what an unimpressive acceleration you have imparted to it.)
to assure yourself, first, of a supply of nine hundred quintillion ergs.
This sounds impressive. Nine hundred quintillion ergs, wow!
But then, if you are cautious, you might stop and think:
An erg is an unfamiliar unit. How large is it anyway?
After all, in Al Capp's Lower Slobbovia, the sum of a billion slobniks sounds like a lot-until you find that the rate of exchange is ten billion slobniks to the dollar.
So-How large is an erg?
Well, it isn't large. As a matter of fact, it is quite a small unit. It is forced on physicists by the logic of the gram-centimeter-second system of units, but it ends in being so small a unit as to be scarcely useful. For instance, consider the task of lifting a pound weight one foot against gravity.
That's not difficult and not much energy is expended. You could probably lift a hundred pounds one foot without completely incapacitating yourself. A professional strong man could do the same for a thousand pounds.
Nevertheless, the energy expended in lifting one pound one foot is equal to 13,558,200 ergs. Obviously, if any trifling bit of work is going to involve ergs in the tens of millions, we need other and larger units to keep the numerical values conveniently low.
For instance, there is an energy unit called a joule, which is equal to 10,000,000 ergs.
This unit is derived from the name of the British physicist James Prescott Joule, who inherited wealth and a brewery but spent his time in research. From 1840 to 1849 he ran a series of meticulous experiments which demonstrated conclusively the quantitative interconversion of heat and work and brought physics an understanding of the law of conservation of energy. However, it was the German scientist Hermann Ludwig Ferdinand von Helmholtz who first put the law into actual words in a paper presented in 1847, so that he consequently gets formal credit for the discovery.
(The word "joule," by the way, is most commonly pro nounced "jowl," although Joule himself probably pro 167 nounced his name "jool." In any case, I have heard over precise people pronounce the word "zhool" under the im pression that it is a French word, which it isn't. These are the same people who pronounce "centigrade" and "centri fuge" with a strong nasal twang as "sontigrade" and "son trifugp,," under the impression that these, too, are French words. Actually, they are from the Latin and no pseudo French pronunciation is required. There is some justifica tion for pronouncing "centimeter" as "sontimeter," since that'is a French word to begin with, but in that case one should either stick to English or go French all the way and pronounce it "sontimettre," with a light accent on the third syllable.)
Anyway, notice the usefulness of the joule in everyday affairs. Lifting a pound mass a distance of one foot against gravity requires energy to the amount, roughly, of 1.36 joules-a nice, convenient figure.
Meanwhile, physicists who were studying heat had invented a unit that would be convenient for their purposes.
This was the "calorie" (from the Latin word color meaning "heat"). It can be abbreviated as cal. A calorie is the amount of heat required to raise the temperature of I gram of water from 14.5' C. to 15.5' C. (The amount of heat necessary to raise a gram of water one Celsius degree varies slightly for different temperatures, which is why one must carefully specify the 14.5 to 15.5 business.)
Once it was demonstrated that all other forms of energy and all forms of work can be quantitatively converted to heat, it could be seen that any unit that was suitable for heat would be suitable for any other kind of energy or work.
By actual measurement it was found (by Joule) that 4.185 joules of energy or work could be converted into precisely 1 calorie of heat. Therefore, we can say that 1 cal equals 4.185 joules equals 41,850,000 ergs.
Although the calorie, as defined above, is suitable for physicists, it is a little too small for chemists. Chemical reactions usually release or absorb heat in quantities that, under the conventions used for chemical calculations, result in numbers that are too large. For instance, 1 gram of carbohydrate burned to carbon dioxide and water (either in a furnace or the human body, it doesn't matter) liberates roughly 4000 calories. A gram of fat would, on burning, liberate roughly 9000 calories. Then again, a human being, doing the kind of work I do, would use up about 2,500,000 calories per day.
The figures would be more convenient if a larger unit were used, and for that purpose a larger calorie was invented, one that would represent the amount of heat required to raise the temperature of 1000 grams (1 kilogram) of water from 14.5° C. to 15.5° C. You see, I suppose, that this larger calorie is a thousand times as great as the smaller one. However, because both units are called "calorie," no end of confusion has resulted.
Sometimes the two have been distinguished as "small calorie" and "large calorie"; or "gram-calorie" and "kilogram-calorie"; or even "calorie" and "Calorie." (The last alternative is a particularly stupid one, since in speech-and scientists must occasionally speak-there is no way of distinguishing a C and a c by pronunciation alone.)
My idea of the most sensible way of handling the matter is this: In the metric system, a kilogram equals 1000 grams; a kilometer equals 1000 meters, and so on. Let's call the large calorie a kilocalorie (abbreviated kcal) and set it equal to 1000 calories.
In summary, then, we can say that 1 kcal equals 1000 cal or 4185 joules or 41,850,000,000 ergs.
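That summary can be checked mechanically. A quick sketch in Python, treating 1 cal = 4.185 joules as exact (as the text does):

```python
# The energy-unit ladder: kcal -> cal -> joules -> ergs.
ERGS_PER_JOULE = 10_000_000   # 1 joule = 10 million ergs
JOULES_PER_CAL = 4.185        # Joule's measured conversion figure
CAL_PER_KCAL = 1000

joules_per_kcal = CAL_PER_KCAL * JOULES_PER_CAL
ergs_per_kcal = joules_per_kcal * ERGS_PER_JOULE

print(round(joules_per_kcal, 3))   # 4185.0
print(f"{ergs_per_kcal:.4g}")      # 4.185e+10, i.e. 41,850,000,000 ergs
```

The chain is pure multiplication, which is exactly why the metric-style prefix "kilo" keeps the bookkeeping painless.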
Another type of energy unit arose in a roundabout way, via the concept of "power." Power is the rate at which work is done. A machine might lift a ton of mass one foot against gravity in one minute or in one hour. In each case the energy consumed in the process is the same, but it takes a more powerful heave to lift that ton in one minute than in one hour.
To raise one pound of mass one foot against gravity takes one foot-pound (abbreviated 1 ft-lb) of energy. To expend that energy in one second is to deliver 1 foot-pound per second (1 ft-lb/sec), and the ft-lb/sec is therefore a permissible unit of power.
The first man to make a serious effort to measure power accurately was James Watt (1736-1819). He compared the power of the steam engine he had devised with the power delivered by a horse, thus measuring his machine's rate of delivering energy in horsepower (or hp). In doing so, he first measured the power of a horse in ft-lb/sec and decided that 1 hp equals 550 ft-lb/sec, a conversion figure which is now standard and official.
The use of foot-pounds per second and horsepower is perfectly legitimate and, in fact, automobile and airplane engines have their power rated in horsepower. The trouble with these units, however, is that they don't tie in easily with the gram-centimeter-second system. A foot-pound is 1.35582 joules and a horsepower is 10.688 kilocalories per minute. These are inconvenient numbers to deal with.
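Those awkward conversion figures follow from the two facts already given. A sketch in Python, assuming 1 ft-lb = 1.35582 joules (a rounded standard value) and 1 hp = 550 ft-lb/sec:

```python
# Deriving the horsepower conversions from 550 ft-lb/sec.
FT_LB_IN_JOULES = 1.35582      # assumed rounded value for one foot-pound
HP_IN_FT_LB_PER_SEC = 550      # Watt's figure for one horsepower
JOULES_PER_KCAL = 4185

hp_in_watts = HP_IN_FT_LB_PER_SEC * FT_LB_IN_JOULES   # joules per second
hp_in_kcal_per_min = hp_in_watts * 60 / JOULES_PER_KCAL

print(round(hp_in_watts, 1))         # roughly 745.7 watts
print(round(hp_in_kcal_per_min, 2))  # roughly 10.69 kcal per minute
```

The small mismatch with the text's 10.688 comes from rounding in the foot-pound figure; the point stands that neither number is convenient.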
The ideal gram-centimeter-second unit of power would be ergs per second (erg/sec). However, since the erg is such a small unit, it is more convenient to deal with joules per second (joule/sec). And since 1 joule is equal to 10,000,000 ergs, 1 joule/sec equals 10,000,000 erg/sec, or 10,000,000 gm cm²/sec³.
Now we need a monosyllable to express the unit joule/sec, and what better monosyllable than the monosyllabic name of the gentleman who first tried to measure power?
So 1 joule/sec was set equal to 1 watt. The watt may be defined as representing the delivery of 1 joule of energy per second.
Now if power is multiplied by time, you are back to energy. For instance, if 1 watt is multiplied by 1 second, you have 1 watt-sec. Since 1 watt equals 1 joule/sec, 1 watt-sec equals 1 joule/sec x sec, or 1 joule sec/sec. The secs cancel, as you would expect in the ordinary algebraic manipulation to which units can be subjected, and you end with the statement that 1 watt-sec is equal to 1 joule and is, therefore, a unit of energy.
A larger unit of energy of this sort is the kilowatt-hour (or kw-hr). A kilowatt is equal to 1000 watts and an hour is equal to 3600 seconds. Therefore a kw-hr is equal to 1000 x 3600 watt-sec, or to 3,600,000 joules, or to 36,000,000,000,000 ergs.
Furthermore, since there are 4185 joules in a kilocalorie (kcal), 1 kw-hr is equal to 860 kcal or to 860,000 cal.
A human being who is living on 2500 kcal/day is delivering (in the form of heat, eventually) about 104 kcal/hr, which is equal to 0.120 kw-hr/hr or 120 watts. Next time you're at a crowded cocktail party (or a crowded subway train or a crowded theater audience) on a hot evening in August, think of that as each additional person walks in.
Each entrance is equivalent to turning on another one-hundred-twenty-watt electric bulb. It will make you feel a lot hotter and help you appreciate the new light of understanding that science brings.
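The arithmetic behind that cocktail-party figure is just a unit conversion, easily sketched:

```python
# A person dissipating 2500 kcal/day expressed as watts (joules per second).
JOULES_PER_KCAL = 4185
SECONDS_PER_DAY = 24 * 3600

kcal_per_day = 2500
watts = kcal_per_day * JOULES_PER_KCAL / SECONDS_PER_DAY
print(round(watts))  # about 121 watts -- the "120-watt bulb" of the text
```

The answer comes out a shade over 120 watts; the text's round figure is the sensible one to remember.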
But back to the subject. Now, you see, we have a variety of units into which we can translate the amount of energy resulting from the complete conversion of 1 gram of mass.
That gram of mass will liberate:
900,000,000,000,000,000,000 ergs, or
90,000,000,000,000 joules, or
21,500,000,000,000 calories, or
21,500,000,000 kilocalories, or
25,000,000 kilowatt-hours.

Which brings us to the conclusion that although the erg is indeed a tiny unit, nine hundred quintillion of them still mount up most impressively. Convert a mere one gram of mass into energy and use it with perfect efficiency and you can keep a thousand-watt electric light bulb running for 25,000,000 hours, which is equivalent to 2850 years, or the time from the days of Homer to the present.
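The whole chain can be verified in a few lines, taking the rounded value c = 3 x 10¹⁰ cm/sec as the text implicitly does:

```python
# Total conversion of one gram of mass, e = mc^2, with c = 3e10 cm/sec.
c = 3e10                       # speed of light, cm/sec (rounded)
ergs = 1 * c**2                # 1 gram times c^2 gives gm cm^2/sec^2 = ergs
joules = ergs / 1e7            # 1 joule = 10 million ergs
kcal = joules / 4185           # 1 kcal = 4185 joules
kwh = joules / 3_600_000       # 1 kw-hr = 3,600,000 joules
bulb_hours = joules / 1000 / 3600   # hours of a 1000-watt bulb

print(f"{ergs:.3e} ergs")                   # 9.000e+20
print(f"{kwh:,.0f} kw-hr")                  # 25,000,000
print(f"{bulb_hours:,.0f} hours of light")  # 25,000,000
```

Every figure in the list above drops out of one multiplication and a few divisions, which is the kind of bookkeeping the gram-centimeter-second system was built for.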
How's that for solving the fuel problem?
We could work it the other way around, too. We might ask: How much mass need we convert to produce 1 kilowatt-hour of energy?
Well, if 1 gram of mass produces 25,000,000 kilowatt-hours of energy, then 1 kilowatt-hour of energy is produced by 1/25,000,000 gram.
You can see that this sort of calculation is going to take us into small mass units indeed. Suppose we choose a unit smaller than the gram, say the microgram. This is equal to a millionth of a gram, i.e., 10⁻⁶ gram. We can then say that 1 kilowatt-hour of energy is produced by the conversion of 0.04 micrograms of mass.
Even the microgram is an inconveniently large unit of mass if we become interested in units of energy smaller than the kilowatt-hour. We could therefore speak of a micromicrogram (or, as it is now called, a picogram). This is a millionth of a millionth of a gram (10⁻¹² gram) or a trillionth of a gram. Using that as a unit, we can say that:
1 kilowatt-hour is equivalent to 40,000 picograms
1 kilocalorie is equivalent to 46.5 picograms
1 calorie is equivalent to 0.0465 picograms
1 joule is equivalent to 0.0195 picograms
1 erg is equivalent to 0.00000000195 picograms
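The larger entries in that table can be reproduced by running e = mc² backward, again with the rounded c = 3 x 10¹⁰ cm/sec:

```python
# Mass converted per unit of energy, in picograms (1 pg = 1e-12 gram).
C_SQUARED = (3e10) ** 2        # cm^2/sec^2; grams times C_SQUARED gives ergs
PG_PER_GRAM = 1e12

def picograms_for(ergs: float) -> float:
    """Mass in picograms whose total conversion yields the given ergs."""
    return ergs / C_SQUARED * PG_PER_GRAM

print(picograms_for(3.6e13))    # 1 kw-hr = 3.6e13 ergs -> 40,000 pg
print(picograms_for(4.185e10))  # 1 kcal = 4.185e10 ergs -> about 46.5 pg
print(picograms_for(4.185e7))   # 1 cal -> about 0.0465 pg
```

This reproduces the kilowatt-hour and calorie rows of the table; the function is just the inverse of the one-gram calculation made earlier.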
To give you some idea of what this means, the mass of a typical human cell is about 1000 picograms. If, under conditions of dire emergency, the body possessed the ability to convert mass to energy, the conversion of the contents of 125 selected cells (which the body, with 50,000,000,000,000 cells or so, could well afford) would supply the body with 2500 kilocalories and keep it going for a full day.
The amount of mass which, upon conversion, yields 1 erg of energy (and the erg, after all, is the proper unit of energy in the gram-centimeter-second system) is an inconveniently small fraction even in terms of picograms.
We need units smaller still, so suppose we turn to the picopicogram (10⁻²⁴ gram), which is a trillionth of a trillionth of a gram, or a septillionth of a gram. Using the picopicogram, we find that it takes the conversion of 1950 picopicograms of mass to produce an erg of energy.
And the significance? Well, a single hydrogen atom has a mass of about 1.66 picopicograms. A uranium-235 atom has a mass of about 400 picopicograms. Consequently, an erg of energy is produced by the total conversion of 1200 hydrogen atoms or by 5 uranium-235 atoms.
In ordinary fission, only 1/1000 of the mass is converted to energy, so it takes about 3000 fissioning uranium atoms to produce 1 erg of energy. In hydrogen fusion, 1/100 of the mass is converted to energy, so it takes about 67,000 fusing hydrogen atoms to produce 1 erg of energy.
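The atom-counting is easily verified. A sketch, using the hydrogen-atom mass and the round 1/1000 and 1/100 conversion fractions quoted above:

```python
C = 2.9979e10                # speed of light, cm/s
H_MASS = 1.66e-24            # grams, one hydrogen atom
U235_MASS = 235 * 1.66e-24   # grams, one uranium-235 atom

grams_per_erg = 1.0 / C**2   # mass whose total conversion yields 1 erg

h_atoms = grams_per_erg / H_MASS      # hydrogen atoms, total conversion
u_atoms = grams_per_erg / U235_MASS   # uranium-235 atoms, total conversion
print(f"total conversion: {h_atoms:.0f} hydrogen atoms or {u_atoms:.1f} uranium atoms")

# Fission converts only ~1/1000 of the mass; fusion only ~1/100
print(f"fission: {u_atoms * 1000:.0f} uranium atoms per erg")
print(f"fusion: {h_atoms * 100:.0f} hydrogen atoms per erg")
```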
And with that, we can let e = mc² rest for the nonce.
When my book I, Robot was reissued by the estimable gentlemen of Doubleday & Company, it was with a great deal of satisfaction that I noted certain reviewers (possessing obvious intelligence and good taste) beginning to refer to it as a "classic."
"Classic" is derived in exactly the same way, and has precisely the same meaning, as our own "first-class" and our colloquial "classy"; and any of these words represents my own opinion of 1, Robot, too; except that (owing to my modesty) I would rather die than admit it. I mention it here only because I am speaking confidentially.
However, "classic" has a secondary meaning that dis pleases me. The word came into its own when the literary men of the Renaissance used it to refer to those works of the ancient Greeks and Romans on which they were model ing their own efforts. Consequently, "classic" has come to mean not only good, but also old.
Now I, Robot first appeared a number of years ago and some of the material in it was written… Well, never mind. The point is that I have decided to feel a little hurt at being considered old enough to have written a classic, and therefore I will devote this chapter to the one field where "classic" is rather a term of insult.
Naturally, that field must be one where to be old is, almost automatically, to be wrong and incomplete. One may talk about Modern Art or Modern Literature or Modern Furniture and sneer as one speaks, comparing each, to their disadvantage, with the greater work of earlier ages. When one speaks of Modern Science, however, one removes one's hat and places it reverently upon the breast.
In physics, particularly, this is the case. There is Modern Physics and there is (with an offhand, patronizing half smile) Classical Physics. To put it into Modern Terminology, Modern Physics is in, man, in, and Classical Physics is like squaresville.
What's more, the division in physics is sharp. Everything after 1900 is Modern; everything before 1900 is Classical.
That looks arbitrary, I admit; a strictly parochial twentieth-century outlook. Oddly enough, though, it is perfectly legitimate. The year 1900 saw a major physical theory entered into the books and nothing has been quite the same since.
By now you have guessed that I am going to tell you about it.
The problem began with the German physicist Gustav Robert Kirchhoff who, with Robert Wilhelm Bunsen (popularizer of the Bunsen burner), pioneered in the development of spectroscopy in 1859. Kirchhoff discovered that each element, when brought to incandescence, gave off certain characteristic frequencies of light; and that the vapor of that element, exposed to radiation from a source hotter than itself, absorbed just those frequencies it itself emitted when radiating. In short, a material will absorb those frequencies which, under other conditions, it will radiate; and will radiate those frequencies which, under other conditions, it will absorb.
But suppose that we consider a body which will absorb all frequencies of radiation that fall upon it-absorb them completely. It will then reflect none and will therefore appear absolutely black. It is a "black body." Kirchhoff pointed out that such a body, if heated to incandescence, would then necessarily have to radiate all frequencies of radiation. Radiation over a complete range in this manner would be "black-body radiation."
Of course, no body was absolutely black. In the 1890s, however, a German physicist named Wilhelm Wien thought of a rather interesting dodge to get around that.
Suppose you had a furnace with a small opening. Any radiation that passes through the opening is either absorbed by the rough wall opposite or reflected. The reflected radiation strikes another wall and is again partially absorbed. What is reflected strikes another wall, and so on. Virtually none of the radiation survives to find its way out the small opening again. That small opening, then, absorbs the radiation and, in a manner of speaking, reflects none. It is a black body. If the furnace is heated, the radiation that streams out of that small opening should be black-body radiation and should, by Kirchhoff's reasoning, contain all frequencies.
Wien proceeded to study the characteristics of this black-body radiation. He found that at any temperature, a wide spread of frequencies was indeed included, but the spread was not an even one. There was a peak in the middle. Some intermediate frequency was radiated to a greater extent than other frequencies either higher or lower than that peak frequency. Moreover, as the temperature was increased, this peak was found to move toward the higher frequencies. If the absolute temperature were doubled, the frequency at the peak would also double.
But now the question arose: Why did black-body radiation distribute itself like this?
To see why the question was puzzling, let's consider infrared light, visible light, and ultraviolet light. The frequency range of infrared light, to begin with, is from one hundred billion (100,000,000,000) waves per second to four hundred trillion (400,000,000,000,000) waves per second. In order to make the numbers easier to handle, let's divide by a hundred billion and number the frequency not in individual waves per second but in hundred-billion-wave packets per second. In that case the range of infrared would be from 1 to 4000.
Continuing to use this system, the range of visible light would be from 4000 to 8000; and the range of ultraviolet light would be from 8000 to 300,000.
Now it might be supposed that if a black body absorbed all radiation with equal ease, it ought to give off all radiation with equal ease. Whatever its temperature, the energy it had to radiate might be radiated at any frequency, the particular choice of frequency being purely random.
But suppose you were choosing numbers, any numbers with honest randomness, from 1 to 300,000. If you did this repeatedly, trillions of times, 1.3 per cent of your numbers would be less than 4000; another 1.3 per cent would be between 4000 and 8000, and 97.4 per cent would be between 8000 and 300,000.
This is like saying that a black body ought to radiate 1.3 per cent of its energy in the infrared, 1.3 per cent in visible light, and 97.4 per cent in the ultraviolet. If the temperature went up and it had more energy to radiate, it ought to radiate more at every frequency but the relative amounts in each range ought to be unchanged.
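The percentages are just a matter of counting how much of the range from 1 to 300,000 each band occupies. A sketch of that bookkeeping (band edges as given in the text; the ultraviolet share comes out at 97.3 per cent before rounding):

```python
TOTAL = 300_000  # numbers drawn at random from 1 to 300,000

# Band edges in units of hundred-billion waves per second, as in the text
BANDS = {
    "infrared": (1, 4000),
    "visible": (4000, 8000),
    "ultraviolet": (8000, 300_000),
}

def fraction(low, high):
    """Share of uniform random draws falling between low and high."""
    return (high - low) / TOTAL

for name, (low, high) in BANDS.items():
    print(f"{name}: {100 * fraction(low, high):.1f} per cent")
```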
And this is only if we confine ourselves to nothing of still higher frequency than ultraviolet. If we include the x-ray frequencies, it would turn out that just about nothing should come off in the visible light at any temperature. Everything would be in ultraviolet and x-rays.
An English physicist, Lord Rayleigh (1842-1919), worked out an equation which showed exactly this. The radiation emitted by a black body increased steadily as one went up the frequencies. However, in actual practice, a frequency peak was reached after which, at higher frequencies still, the quantity of radiation decreased again.
Rayleigh's equation was interesting but did not reflect reality.
Physicists referred to this prediction of the Rayleigh equation as the "Violet Catastrophe"-the fact that every body that had energy to radiate ought to radiate practically all of it in the ultraviolet and beyond.
Yet the whole point is that the Violet Catastrophe does not take place. A radiating body concentrated its radiation in the low frequencies. It radiated chiefly in the infrared at temperatures below, say, 1000° C., and radiated mainly in the visible region even at a temperature as high as 6000° C., the temperature of the solar surface.
Yet Rayleigh's equation was worked out according to the very best principles available anywhere in physical theory-at the time. His work was an ornament of what we now call Classical Physics.
Wien himself worked out an equation which described the frequency distribution of black-body radiation in the high-frequency range, but he had no explanation for why it worked there, and besides it only worked for the high-frequency range, not for the low-frequency.
Black, black, black was the color of the physics mood all through the later 1890s.
But then arose in 1899 a champion, a German physicist, Max Karl Ernst Ludwig Planck. He reasoned as follows…
If beautiful equations worked out by impeccable reasoning from highly respected physical foundations do not describe the truth as we observe it, then either the reasoning or the physical foundations or both are wrong.
And if there is nothing wrong about the reasoning (and nothing wrong could be found in it), then the physical foundations had to be altered.
The physics of the day required that all frequencies of light be radiated with equal probability by a black body, and Planck therefore proposed that, on the contrary, they were not radiated with equal probability. Since the equal probability assumption required that more and more light of higher and higher frequency be radiated, whereas the reverse was observed, Planck further proposed that the probability of radiation ought to decrease as frequency increased.
In that case, we would now have two effects. The first effect would be a tendency toward randomness which would favor high frequencies and increase radiation as frequency was increased. Second, there was the new Planck effect of decreasing probability of radiation as frequency went up. This would favor low frequencies and decrease radiation as frequency was increased.
In the low-frequency range the first effect is dominant, but in the high-frequency range the second effect increasingly overpowers the first. Therefore, in black-body radiation, as one goes up the frequencies, the amount of radiation first increases, reaches a peak, then decreases again-exactly as is observed.
Next, suppose the temperature is raised. The first effect can't be changed, for randomness is randomness. But suppose that as the temperature is raised, the probability of emitting high-frequency radiation increases. The second effect, then, is steadily weakened as the temperature goes up. In that case, the radiation continues to increase with increasing frequency for a longer and longer time before it is overtaken and repressed by the gradually weakening second effect. The peak radiation, consequently, moves into higher and higher frequencies as the temperature goes up-precisely as Wien had discovered.
On this basis, Planck was able to work out an equation that described black-body radiation very nicely both in the low-frequency and high-frequency range.
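The shape of that curve can be sketched numerically. The energy-density form below is written in modern notation (Boltzmann's constant enters through the absolute temperature); it illustrates the rise-peak-fall behavior and Wien's doubling rule, and is not Planck's original derivation:

```python
import math

H = 6.6256e-27   # Planck's constant, erg-seconds
K = 1.3805e-16   # Boltzmann's constant, ergs per degree (absolute)
C = 2.9979e10    # speed of light, cm/s

def planck(nu, temp):
    """Black-body energy density per unit frequency interval (Planck's law)."""
    return (8 * math.pi * H * nu**3 / C**3) / math.expm1(H * nu / (K * temp))

# Scan the spectrum at 6000 absolute degrees: radiation rises, peaks, falls
temp = 6000.0
freqs = [n * 1.0e13 for n in range(1, 200)]
peak = max(freqs, key=lambda nu: planck(nu, temp))
print(f"peak near {peak:.1e} waves per second at {temp:.0f} degrees")

# Wien's observation: doubling the absolute temperature doubles the peak
peak2 = max(freqs, key=lambda nu: planck(nu, 2 * temp))
print(f"at {2 * temp:.0f} degrees the peak sits near {peak2:.1e}")
```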
However, it is all very well to say that the higher the frequency the lower the probability of radiation, but why?
There was nothing in the physics of the time to explain that, and Planck had to make up something new.
Suppose that energy did not flow continuously, as physicists had always assumed, but was given off in pieces.
Suppose there were "energy atoms" and these increased in size as frequency went up. Suppose, still further, that light of a particular frequency could not be emitted unless enough energy had been accumulated to make up an "energy atom" of the size required by that frequency.
The higher the frequency the larger the "energy atom" and the smaller the probability of its accumulation at any given instant of time. Most of the energy would be lost as radiation of lower frequency, where the "energy atoms" were smaller and more easily accumulated. For that reason, an object at a temperature of 400° C. would radiate its heat in the infrared entirely. So few "energy atoms" of visible light size would be accumulated that no visible glow would be produced.
As temperature went up, more energy would be generally available and the probabilities of accumulating a high-frequency "energy atom" would increase. At 6000° C. most of the radiation would be in "energy atoms" of visible light, but the still larger "energy atoms" of ultraviolet would continue to be formed only to a minor extent.
But how big is an "energy atom"? How much energy does it contain? Since this "how much" is a key question, Planck, with admirable directness, named the "energy atom" a quantum, which is Latin for "how much?" The plural is quanta.
For Planck's equation for the distribution of black-body radiation to work, the size of the quantum had to be directly proportional to the frequency of the radiation. To express this mathematically, let us represent the size of the quantum, or the amount of energy it contains, by e (for energy). The frequency of radiation is invariably represented by physicists by means of the Greek letter nu (v).
If energy (e) is proportional to frequency (v), then e must be equal to v multiplied by some constant. This constant, called Planck's constant, is invariably represented as h. The equation, giving the size of a quantum for a particular frequency of radiation, becomes:

e = hv (Equation 1)
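Equation 1 in action. A sketch that evaluates e = hv at a few frequencies (the three sample frequencies are my own round illustrative choices):

```python
H = 6.6256e-27  # Planck's constant, erg-seconds

def quantum_size(frequency):
    """Energy of a single quantum, e = h * v, in ergs."""
    return H * frequency

# Representative frequencies in waves per second (round illustrative values)
samples = {
    "infrared": 1.0e13,
    "visible light": 6.0e14,
    "ultraviolet": 3.0e16,
}

for name, nu in samples.items():
    print(f"{name}: one quantum = {quantum_size(nu):.2e} erg")
```

The higher the frequency, the bigger the quantum, just as the proportionality demands.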
It is this equation, presented to the world in 1900, which is the Continental Divide that separates Classical Physics from Modern Physics. In Classical Physics, energy was considered continuous; in Modern Physics it is considered to be composed of quanta. To put it another way, in Classical Physics the value of h is considered to be 0; in Modern Physics it is considered to be greater than 0.
It is as though there were a sudden change from considering motion as taking place in a smooth glide, to motion as taking place in a series of steps.
There would be no confusion if steps were long galumphing strides. It would be easy, in that case, to distinguish steps from a glide. But suppose one minced along in microscopic little tippy-steps, each taking a tiny fraction of a second. A careless glance could not distinguish that from a glide. Only a painstaking study would show that your head was bobbing slightly with each step. The smaller the steps, the harder to detect the difference from a glide.
In the same way, everything would depend on just how big individual quanta were; on how "grainy" energy was.
The size of the quanta depends on the size of Planck's constant, so let's consider that for a while.
If we solve Equation 1 for h, we get:

h = e/v (Equation 2)

Energy is very frequently measured in ergs (see Chapter 13). Frequency is measured as "so many per second" and its units are therefore "reciprocal seconds" or "1/second."
We must treat the units of h as we treat h itself. We get h by dividing e by v; so we must get the units of h by dividing the units of e by the units of v. When we divide ergs by 1/second we are multiplying ergs by seconds, and we find the units of h to be "erg-seconds." A unit which is the result of multiplying energy by time is said, by physicists, to be one of "action." Therefore, Planck's constant is expressed in units of action.
Since the nature of the universe depends on the size of Planck's constant, we are all dependent on the size of the piece of action it represents. Planck, in other words, had sought and found the piece of the action. (I understand that others have been searching for a piece of the action ever since, but where's the point since Planck has found it?)
And what is the exact size of h? Planck found it had to be very small indeed. The best value, currently accepted, is: 0.0000000000000000000000000066256 erg-seconds, or 6.6256 × 10⁻²⁷ erg-seconds.
Now let's see if I can find a way of expressing just how small this is. The human body, on an average day, consumes and expends about 2500 kilocalories in maintaining itself and performing its tasks. One kilocalorie is equal to 1000 calories, so the daily supply is 2,500,000 calories.
One calorie, then, is a small quantity of energy from the human standpoint. It is 1/2,500,000 of your daily store. It is the amount of energy contained in 1/113,000 of an ounce of sugar, and so on.
Now imagine you are faced with a book weighing one pound and wish to lift it from the floor to the top of a bookcase three feet from the ground. The energy expended in lifting one pound through a distance of three feet against gravity is just about 1 calorie.
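That figure is easy to confirm: the work done against gravity is mass times the acceleration of gravity times height. A sketch in SI units (the pound and the three feet converted to kilograms and meters):

```python
MASS = 0.4536     # one pound, in kilograms
G = 9.81          # acceleration of gravity, m/s^2
HEIGHT = 0.9144   # three feet, in meters

work_joules = MASS * G * HEIGHT       # work done against gravity
work_calories = work_joules / 4.185   # 1 calorie = 4.185 joules
print(f"{work_joules:.2f} joules, or {work_calories:.2f} calories")
```

The answer comes out a shade under 1 calorie, which is "just about" close enough.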
Suppose that Planck's constant were of the order of a calorie-second in size. The universe would be a very strange place indeed. If you tried to lift the book, you would have to wait until enough energy had been accumulated to make up the tremendously sized quanta made necessary by so large a piece of action. Then, once it was accumulated, the book would suddenly be three feet in the air.
But a calorie-second is equal to 41,850,000 erg-seconds, and since Planck's constant is such a minute fraction of one erg-second, a single calorie-second equals 6,316,400,000,000,000,000,000,000,000,000,000 Planck's constants, or 6.3164 × 10³³ Planck's constants, or about six and a third decillion Planck's constants. However you slice it, a calorie-second is equal to a tremendous number of Planck's constants.
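The decillion figure is one division. A sketch (with h = 6.6256 × 10⁻²⁷ erg-seconds the quotient comes out near 6.32 × 10³³):

```python
H = 6.6256e-27             # Planck's constant, erg-seconds
CALORIE_SECOND = 4.185e7   # one calorie-second, in erg-seconds

ratio = CALORIE_SECOND / H
print(f"1 calorie-second = {ratio:.4e} Planck's constants")
```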
Consequently, in any action such as the lifting of a one-pound book, matters are carried through in so many trillions of trillions of steps, each one so tiny, that motion seems a continuous glide.
When Planck first introduced his "quantum theory" in 1900, it caused remarkably little stir, for the quanta seemed to be pulled out of midair. Even Planck himself was dubious-not over his equation describing the distribution of black-body radiation, to be sure, for that worked well; but about the quanta he had introduced to explain the equation.
Then came 1905, and in that year a 26-year-old theoretical physicist, Albert Einstein, published five separate scientific papers on three subjects, any one of which would have been enough to establish him as a first-magnitude star in the scientific heavens.
In two, he worked out the theoretical basis for "Brownian motion" and, incidentally, produced the machinery by which the actual size of atoms could be established for the first time. It was one of these papers that earned him his Ph.D.
In the third paper, he dealt with the "photoelectric effect" and showed that although Classical Physics could not explain it, Planck's quantum theory could.
This really startled physicists. Planck had invented quanta merely to account for black-body radiation, and here it turned out to explain the photoelectric effect, too, something entirely different. For quanta to strike in two different places like this, it seemed suddenly very reasonable to suppose that they (or something very like them) actually existed.
(Einstein's fourth and fifth papers set up a new view of the universe which we call "The Special Theory of Relativity." It is in these papers that he introduced his famous equation e = mc²; see Chapter 13.
These papers on relativity, expanded into a "General Theory" in 1915, are the achievements for which Einstein is known to people outside the world of physics. Just the same, in 1921, when he was awarded the Nobel Prize for Physics, it was for his work on the photoelectric effect and not for his theory of relativity.)
The value of h is so incredibly small that in the ordinary world we can ignore it. The ordinary gross events of everyday life can be considered as though energy were a continuum. This is a good "first approximation."
However, as we deal with smaller and smaller energy changes, the quantum steps by which those changes must take place become larger and larger in comparison. Thus, a flight of stairs consisting of treads 1 millimeter high and 3 millimeters deep would seem merely a slightly roughened ramp to a six-foot man. To a man the size of an ant, however, the steps would seem respectable individual obstacles to be clambered over with difficulty. And to a man the size of a bacterium, they would be mountainous precipices. In the same way, by the time we descend into the world within the atom the quantum step has become a gigantic thing. Atomic physics cannot, therefore, be described in Classical terms, not even as an approximation.
The first to realize this clearly was the Danish physicist Niels Bohr. In 1913 Bohr pointed out that if an electron absorbed energy, it had to absorb it a whole quantum at a time and that to an electron a quantum was a large piece of energy that forced it to change its relationship to the rest of the atom drastically and all at once.
Bohr pictured the electron as circling the atomic nucleus in a fixed orbit. When it absorbed a quantum of energy, it suddenly found itself in an orbit farther from the nucleus - there was no in-between, it was a one-step proposition.
Since only certain orbits were possible, according to Bohr's treatment of the subject, only quanta of certain size could be absorbed by the atom-only quanta large enough to raise an electron from one permissible orbit to another.
When the electrons dropped back down the line of permissible orbits, they emitted radiations in quanta. They emitted just those frequencies which went along with the size of quanta they could emit in going from one orbit to another.
In this way, the science of spectroscopy was rationalized. Men understood a little more deeply why each element (consisting of one type of atom with one type of energy relationships among the electrons making up that type of atom) should radiate certain frequencies, and certain frequencies only, when incandescent. They also understood why a substance that could absorb certain frequencies should also emit those same frequencies under other circumstances.
In other words, Kirchhoff had started the whole problem and now it had come around full circle to place his empirical discoveries on a rational basis.
Bohr's initial picture was oversimple; but he and other men gradually made it more complicated, and capable of explaining finer and finer points of observation. Finally, in 1926, the Austrian physicist Erwin Schrödinger worked out a mathematical treatment that was adequate to analyze the workings of the particles making up the interior of the atom according to the principles of the quantum theory. This was called "quantum mechanics," as opposed to the "classical mechanics" based on Newton's three laws of motion, and it is quantum mechanics that is the foundation of Modern Physics.
There are fashions in science as in everything else. Conduct an experiment that brings about an unusual success and before you can say, "There are a dozen imitations!" there are a dozen imitations!
Consider the element xenon (pronounced zee'non), discovered in 1898 by William Ramsay and Morris William Travers. Like other elements of the same type it was isolated from liquid air. The existence of these elements in air had remained unsuspected through over a century of ardent chemical analysis of the air, so when they finally dawned upon the chemical consciousness they were greeted as strange and unexpected newcomers. Indeed, the name, xenon, is the neutral form of the Greek word for "strange," so that xenon is "the strange one" in all literalness.
Xenon belongs to a group of elements commonly known as the "inert gases" (because they are chemically inert) or the "rare gases" (because they are rare), or "noble gases" because the standoffishness that results from chemical inertness seems to indicate a haughty sense of self-importance.
Xenon is the rarest of the stable inert gases and, as a matter of fact, is the rarest of all the stable elements on Earth. Xenon occurs only in the atmosphere, and there it makes up about 5.3 parts per million by weight. Since the atmosphere weighs about 5,500,000,000,000,000 (five and a half quadrillion) tons, this means that the planetary supply of xenon comes to just about 30,000,000,000 (thirty billion) tons. This seems ample, taken in full, but picking xenon atoms out of the overpoweringly more common constituents of the atmosphere is an arduous task and so xenon isn't a common substance and never will be.
What with one thing and another, then, xenon was not a popular substance in the chemical laboratories. Its chemical, physical, and nuclear properties were worked out, but beyond that there seemed little worth doing with it. It remained the little strange one and received cold shoulders and frosty smiles.
Then, in 1962, an unusual experiment involving xenon was announced, whereupon from all over the world broad smiles broke out across chemical countenances, and little xenon was led into the test tube with friendly solicitude.
"Welcome, stranger!" was the cry everywhere, and now you can't open a chemical journal anywhere without find ing several papers on xenon.
What happened?
If you expect a quick answer, you little know me. Let me take my customary route around Robin Hood's barn and begin by stating, first of all, that xenon is a gas.
Being a gas is a matter of accident. No substance is a gas intrinsically, but only insofar as temperature dictates.
On Venus, water and ammonia are both gases. On Earth, ammonia is a gas, but water is not. On Titan, neither ammonia nor water is a gas.
So I'll have to set up an arbitrary criterion to suit my present purpose. Let's say that any substance that remains a gas at -100° C. (-148° F.) is a Gas with a capital letter, and concentrate on those. This is a temperature that is never reached on Earth, even in an Antarctic winter of extraordinary severity, so that no Gas is ever anything but gaseous on Earth (except occasionally in chemical laboratories).
Now why is a Gas a Gas?
I can start by saying that every substance is made up of atoms, or of closely knit groups of atoms, said groups being called molecules. There are attractive forces between atoms or molecules which make them "sticky" and tend to hold them together. Heat, however, lends these atoms or molecules a certain kinetic energy (energy of motion) which tends to drive them apart, since each atom or molecule has its own idea of where it wants to go.*
The attractive forces among a given set of atoms or molecules are relatively constant, but the kinetic energy varies with the temperature. Therefore, if the temperature is raised high enough, any group of atoms or molecules will fly apart and the material becomes a gas. At temperatures over 6000° C. all known substances are gases.
Of course, there are only a few exceptional substances with interatomic or intermolecular forces so strong that it takes 6000° C. to overcome them. Some substances, on the other hand, have such weak intermolecular attractive forces that the warmth of a summer day supplies enough kinetic energy to convert them to gas (the common anesthetic, ether, is an example).
Still others have intermolecular attractive forces so much weaker still that there is enough heat at a temperature of -100° C. to keep them gases, and it is these that are the Gases I am talking about.
The intermolecular or interatomic forces arise out of the distribution of electrons within the atoms or molecules.
The electrons are distributed among various "electron shells," according to a system we can accept without detailed explanation. For instance, the aluminum atom contains 13 electrons, which are distributed as follows: 2 in the innermost shell, 8 in the next shell, and 3 in the next shell. We can therefore signify the electron distribution in the aluminum atom as 2,8,3.
The most stable and symmetrical distribution of the electrons among the electron shells is that distribution in which the outermost shell holds either all the electrons it can hold, or 8 electrons-whichever is less. The innermost electron shell can hold only 2, the next can hold 8, and each of the rest can hold more than 8. Except for the situation where only the innermost shell contains electrons, then, the stable situation consists of 8 electrons in the outermost shell.

* No, I am not implying that atoms know what they are doing and have consciousness. This is just my teleological way of talking. Teleology is forbidden in scientific articles, but it so happens I enjoy sinning.
There are exactly six elements known in which this situation of maximum stability exists:

Element     Symbol   Electron Distribution   Electron Total
helium      He       2                       2
neon        Ne       2,8                     10
argon       Ar       2,8,8                   18
krypton     Kr       2,8,18,8                36
xenon       Xe       2,8,18,18,8             54
radon       Rn       2,8,18,32,18,8          86
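The table's bookkeeping can be checked in a few lines; summing each shell distribution recovers the element's electron total, and each distribution ends in a stable outer shell:

```python
# Shell distributions of the six inert gases, as in the table above
INERT_GASES = {
    "He": [2],
    "Ne": [2, 8],
    "Ar": [2, 8, 8],
    "Kr": [2, 8, 18, 8],
    "Xe": [2, 8, 18, 18, 8],
    "Rn": [2, 8, 18, 32, 18, 8],
}

for symbol, shells in INERT_GASES.items():
    # The outermost shell holds 8 electrons (or a filled 2, for helium alone)
    assert shells[-1] == 8 or shells == [2]
    print(f"{symbol}: {','.join(map(str, shells))} -> {sum(shells)} electrons")
```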
Other atoms without this fortunate electronic distribution are forced to attempt to achieve it by grabbing additional electrons, or getting rid of some they already possess, or sharing electrons. In so doing, they undergo chemical reactions. The atoms of the six elements listed above, however, need do nothing of this sort and are sufficient unto themselves. They have no need to shift electrons in any way and that means they take part in no chemical reactions and are inert. (At least, this is what I would have said prior to 1962.)
The atoms of the inert gas family listed above are so self-sufficient, in fact, that the atoms even ignore one another. There is little interatomic attraction, so that all are gases at room temperature and all but radon are Gases.
To be sure, there is some interatomic attraction (for no atoms or molecules exist among which there is no attraction at all). If one lowers the temperature sufficiently, a point is reached where the attractive forces become dominant over the disruptive effect of kinetic energy, and every single one of the inert gases will, eventually, become an inert liquid.
What about other elements? As I said, these have atoms with electron distributions of less than maximum stability and each has a tendency to alter that distribution in the direction of stability. For instance, the sodium atom (Na) has a distribution of 2,8,1. If it could get rid of the outermost electron, what would be left would have the stable 2,8 configuration of neon. Again, the chlorine atom (Cl) has a distribution of 2,8,7. If it could gain an electron, it would have the 2,8,8 distribution of argon.
Consequently, if a sodium atom encounters a chlorine atom, the transfer of an electron from the sodium atom to the chlorine atom satisfies both. However, the loss of a negatively charged electron leaves the sodium atom with a deficiency of negative charge or, which is the same thing, an excess of positive charge. It becomes a positively charged sodium ion (Na+). The chlorine atom, on the other hand, gaining an electron, gains an excess of negative charge and becomes a negatively charged chloride ion* (Cl-).

* The charged chlorine atom is called "chloride ion" and not "chlorine ion" as a convention of chemical nomenclature we might just as well accept with a weary sigh. Anyway, the "d" is not a typographical error.
Opposite charges attract, so the sodium ion attracts all the chloride ions within reach and vice versa. These strong attractions cannot be overcome by the kinetic energy induced at ordinary temperatures, and so the ions hold together firmly enough for "sodium chloride" (common salt) to be a solid. It does not become a gas, in fact, until a temperature of 1413° C. is reached.

Next, consider the carbon atom (C). Its electron distribution is 2,4. If it lost 4 electrons, it would gain the 2 helium configuration; if it gained 4 electrons, it would gain the 2,8 neon configuration. Losing or gaining that many electrons is not easy, so the carbon atom shares electrons instead. It can, for instance, contribute one of its electrons to a "shared pool" of two electrons, a pool to which a neighboring carbon atom also contributes an electron. With its second electron it can form another shared pool with a second neighbor, and with its third and fourth, two more pools with two more neighbors. Each neighbor can set up additional pools with other neighbors. In this way, each carbon atom is surrounded by four other carbon atoms.
These shared electrons fit into the outermost electron shells of each carbon atom that contributes. Each carbon atom has 4 electrons of its own in that outermost shell and 4 electrons contributed (one apiece) by four neighbors.
Now, each carbon atom has the 2,8 configuration of neon, but only at the price of remaining close to its neighbors.
The result is a strong interatomic attraction, even though electrical charge is not involved. Carbon is a solid and is not a gas until a temperature of 4200° C. is reached.
The atoms of metallic elements also stick together strongly, for similar reasons, so that tungsten, for instance, is not a gas until a temperature of 5900° C. is reached.
We cannot, then, expect to have a Gas when atoms achieve stable electron distribution by transferring electrons in such a manner as to gain an electric charge; or by sharing electrons in so complicated a fashion that vast numbers of atoms stick together in one piece.
What we need is something intermediate. We need a situation where atoms achieve stability by sharing electrons (so that no electric charge arises) but where the total number of atoms involved in the sharing is very small so that only small molecules result. Within the molecules, attractive forces may be large, and the molecules may not be shaken apart without extreme temperature. The attractive forces between one molecule and its neighbor, however, may be small, and that will do.
Let's consider the hydrogen atom, for instance. It has but a single electron. Two hydrogen atoms can each contribute its single electron to form a shared pool. As long as they stay together, each can count both electrons in its outermost shell and each will have the stable helium configuration. Furthermore, neither hydrogen atom will have any electrons left to form pools with other neighbors, hence the molecule will end there. Hydrogen gas will consist of two-atom molecules (H2).
The attractive force between the atoms in the molecule is large, and it takes temperatures of more than 2000° C. to shake even a small fraction of the hydrogen molecules into single atoms. There will, however, be only weak attractions among separate hydrogen molecules, each of which, under the new arrangement, will have reached a satisfactory pitch of self-sufficiency. Hydrogen, therefore, will be a Gas not made up of separate atoms as is the case with the inert gases, but of two-atom molecules.
Something similar will be true in the case of fluorine (electronic distribution 2,7), oxygen (2,6) and nitrogen (2,5). The fluorine atom can contribute an electron and form a shared pool of two electrons with a neighboring fluorine atom which also contributes an electron. Two oxygen atoms can contribute two electrons apiece to form a shared pool of four electrons, and two nitrogen atoms can contribute three electrons each and form a shared pool of six electrons.
In each case, the atoms will achieve the 2,8 distribution of neon at the cost of forming paired molecules. As a result, enough stability is achieved so that fluorine (F2), oxygen (O2), and nitrogen (N2) are all Gases.
The oxygen atom can also form a shared pool of two electrons with each of two neighbors, and those two neighbors can form another shared pool of two electrons among themselves. The result is a combination of three oxygen atoms (O3), each with a neon configuration. This combination, O3, is called ozone, and it is a Gas too.
Oxygen, nitrogen, and fluorine can form mixed molecules, too. For instance, a nitrogen and an oxygen atom can combine to achieve the necessary stability for each.
Nitrogen may also form shared pools of two electrons with each of three fluorine atoms, while oxygen may do so with each of two. The resulting compounds: nitrogen oxide (NO), nitrogen trifluoride (NF3), and oxygen difluoride (OF2) are all Gases.
Atoms which, by themselves, will not form Gases may do so if combined with either hydrogen, oxygen, nitrogen, or fluorine. For instance, two chlorine atoms (2,8,7, remember) will form a shared pool of two electrons so that both achieve the 2,8,8 argon configuration. Chlorine (Cl2) is therefore a gas at room temperature, with intermolecular attractions, however, large enough to keep it from being a Gas. Yet if a chlorine atom forms a shared pool of two electrons with a fluorine atom, the result, chlorine fluoride (ClF), is a Gas.
The boron atom (2,3) can form a shared pool of two electrons with each of three fluorine atoms, and the carbon atom a shared pool of two electrons with each of four fluorine atoms. The resulting compounds, boron trifluoride (BF3) and carbon tetrafluoride (CF4), are Gases.
A carbon atom can form a shared pool of two electrons with each of four hydrogen atoms, or a shared pool of four electrons with an oxygen atom, and the resulting compounds, methane (CH4) and carbon monoxide (CO), are Gases. A two-carbon combination may set up a shared pool of two electrons with each of four hydrogen atoms (and a shared pool of four electrons with one another); a silicon atom may set up a shared pool of two electrons with each of four hydrogen atoms. The compounds, ethylene (C2H4) and silane (SiH4), are Gases.
Altogether, then, I can list twenty Gases which fall into the following categories:
(1) Five elements made up of single atoms: helium, neon, argon, krypton, and xenon.
(2) Four elements made up of two-atom molecules: hydrogen, nitrogen, oxygen, and fluorine.
(3) One elemental form, made up of three-atom molecules: ozone (a form of oxygen).
(4) Ten compounds, with molecules built up of two different elements, at least one of which falls into category (2).
The twenty Gases are listed in order of increasing boiling point in the accompanying table, and that boiling point is given in both the Celsius scale (° C.) and the Absolute scale (° K.).
The five inert gases on the list are scattered among the fifteen other Gases. To be sure, two of the three lowest-boiling Gases are helium and neon, but argon is seventh, krypton is tenth, and xenon is seventeenth. It would not be surprising if all the Gases, then, were as inert as the inert gases.
The Twenty Gases
Substance              Formula   B.P. (° C.)   B.P. (° K.)
Helium                 He        -268.9        4.2
Hydrogen               H2        -252.8        20.3
Neon                   Ne        -245.9        27.2
Nitrogen               N2        -195.8        77.3
Carbon monoxide        CO        -192          81
Fluorine               F2        -188          85
Argon                  Ar        -185.7        87.4
Oxygen                 O2        -183.0        90.1
Methane                CH4       -161.5        111.6
Krypton                Kr        -152.9        120.2
Nitrogen oxide         NO        -151.8        121.3
Oxygen difluoride      OF2       -144.8        128.3
Carbon tetrafluoride   CF4       -128          145
Nitrogen trifluoride   NF3       -120          153
Ozone                  O3        -111.9        161.2
Silane                 SiH4      -111.8        161.3
Xenon                  Xe        -107.1        166.0
Ethylene               C2H4      -103.9        169.2
Boron trifluoride      BF3       -101          172
Chlorine fluoride      ClF       -100.8        172.3
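Since the table quotes each boiling point on both scales, the two columns can be cross-checked mechanically. Here is a minimal Python sketch using the rounded values from the table and the standard offset of 273.15 between the scales:

```python
# The twenty Gases, with boiling points in degrees C and degrees K,
# transcribed from the table above.
GASES = [
    ("Helium", -268.9, 4.2), ("Hydrogen", -252.8, 20.3),
    ("Neon", -245.9, 27.2), ("Nitrogen", -195.8, 77.3),
    ("Carbon monoxide", -192.0, 81.0), ("Fluorine", -188.0, 85.0),
    ("Argon", -185.7, 87.4), ("Oxygen", -183.0, 90.1),
    ("Methane", -161.5, 111.6), ("Krypton", -152.9, 120.2),
    ("Nitrogen oxide", -151.8, 121.3), ("Oxygen difluoride", -144.8, 128.3),
    ("Carbon tetrafluoride", -128.0, 145.0), ("Nitrogen trifluoride", -120.0, 153.0),
    ("Ozone", -111.9, 161.2), ("Silane", -111.8, 161.3),
    ("Xenon", -107.1, 166.0), ("Ethylene", -103.9, 169.2),
    ("Boron trifluoride", -101.0, 172.0), ("Chlorine fluoride", -100.8, 172.3),
]

# Each Kelvin entry should equal the Celsius entry plus 273.15,
# to within the table's rounding.
consistent = all(abs((c + 273.15) - k) < 0.5 for _, c, k in GASES)

# The list should already be sorted by increasing boiling point.
sorted_ok = [c for _, c, _ in GASES] == sorted(c for _, c, _ in GASES)
```

Both checks pass against the table as printed, within the half-degree rounding used.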
Perhaps they might be at that, if the smug, self-sufficient molecules that made them up were permanent, unbreakable affairs, but they are not. All the molecules can be broken down under certain conditions, and the free atoms (those of fluorine and oxygen particularly) are active indeed.
This does not show up in the Gases themselves. Suppose a fluorine molecule breaks up into two fluorine atoms, and these find themselves surrounded only by fluorine molecules? The only possible result is the re-formation of a fluorine molecule, and nothing much has happened. If, however, there are molecules other than fluorine present, a new molecular combination of greater stability than F2 is possible (indeed, almost certain in the case of fluorine), and a chemical reaction results.
The fluorine molecule does have a tendency to break apart (to a very small extent) even at ordinary temperatures, and this is enough. The free fluorine atom will attack virtually anything non-fluorine in sight, and the heat of reaction will raise the temperature, which will bring about a more extensive split in fluorine molecules, and so on. The result is that molecular fluorine is the most chemically active of all the Gases (with chlorine fluoride almost on a par with it and ozone making a pretty good third).
The oxygen molecule is torn apart with greater difficulty and therefore remains intact (and inert) under conditions where fluorine will not. You may think that oxygen is an active element, but for the most part this is only true under elevated temperatures, where more energy is available to tear it apart. After all, we live in a sea of free oxygen without damage. Inanimate substances such as paper, wood, coal, and gasoline, all considered flammable, can be bathed by oxygen for indefinite periods without perceptible chemical reaction, until heated.
Of course, once heated, oxygen does become active and combines easily with other Gases such as hydrogen, carbon monoxide, and methane which, by that token, can't be considered particularly inert either.
The nitrogen molecule is torn apart with still more difficulty and, before the discovery of the inert gases, nitrogen was the inert gas par excellence. It and carbon tetrafluoride are the only Gases on the list, other than the inert gases themselves, that are respectably inert, but even they can be torn apart.
Life depends on the fact that certain bacteria can split the nitrogen molecule; and important industrial processes arise out of the fact that man has learned to do the same thing on a large scale. Once the nitrogen molecule is torn apart, the individual nitrogen atom is quite active, bounces around in all sorts of reactions and, in fact, is the fourth most common atom in living tissue and is essential to all its workings.
In the case of the inert gases, all is different. There are no molecules to pull apart. We are dealing with the self-sufficient atom itself, and there seemed little likelihood that combination with any other atom would produce a situation of greater stability. Attempts to get inert gases to form compounds, at the time they were discovered, failed, and chemists were quickly satisfied that this made sense.
To be sure, chemists continued to try, now and again, but they also continued to fail. Until 1962, then, the only successes chemists had had in tying the inert gas atoms to other atoms was in the formation of "clathrates." In a clathrate, the atoms making up a molecule form a cagelike structure and, sometimes, an extraneous atom, even an inert gas atom, is trapped within the cage as it forms.
The inert gas is then tied to the substance and cannot be liberated without breaking down the molecule. However, the inert gas atom is only physically confined; it has not formed a chemical bond.
And yet, let's reason things out a bit. The boiling point of helium is 4.2° K.; that of neon is 27.2° K., that of argon 87.4° K., that of krypton 120.2° K., that of xenon 166.0° K. The boiling point of radon, the sixth and last inert gas and the one with the most massive atom, is 211.3° K. (-61.8° C.) Radon is not even a Gas, but merely a gas.
Furthermore, as the mass of the inert gas atoms increases, the ionization potential (a quantity which measures the ease with which an electron can be removed altogether from a particular atom) decreases. The increasing boiling point and decreasing ionization potential both indicate that the inert gases become less inert as the mass of the individual atoms rises.
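The two trends can be put side by side. The boiling points below are the ones quoted in this chapter; the first ionization potentials (in electron volts) are approximate handbook values that I am supplying for illustration, not figures from the text:

```python
# Inert gases in order of increasing atomic mass.
# Boiling points (degrees K) are from the table in this chapter;
# ionization potentials (eV) are approximate handbook values.
INERT = [
    ("Helium",    4.2, 24.6),
    ("Neon",     27.2, 21.6),
    ("Argon",    87.4, 15.8),
    ("Krypton", 120.2, 14.0),
    ("Xenon",   166.0, 12.1),
    ("Radon",   211.3, 10.7),
]

bps = [bp for _, bp, _ in INERT]
ips = [ip for _, _, ip in INERT]

# Boiling point rises, ionization potential falls, as the mass increases:
bp_rises = all(a < b for a, b in zip(bps, bps[1:]))
ip_falls = all(a > b for a, b in zip(ips, ips[1:]))
```

Both trends are strictly monotonic down the column, which is the whole of the argument for trying radon (or, failing that, xenon) first.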
By this reasoning, radon would be the least inert of the inert gases and efforts to form compounds should concentrate upon it as offering the best chance. However, radon is a radioactive element with a half-life of less than four days, and is so excessively rare that it can be worked with only under extremely specialized conditions. The next best bet, then, is xenon. This is very rare, but it is available and it is, at least, stable.
Then, if xenon is to form a chemical bond, with what other atom might it be expected to react? Naturally, the most logical bet would be to choose the most reactive substance of all: fluorine or some fluorine-containing compound. If xenon wouldn't react with that, it wouldn't react with anything.
(This may sound as though I am being terribly wise after the event, and I am. However, there are some who were legitimately wise. I am told that Linus Pauling reasoned thus in 1932, well before the event, and that a gentleman named A. von Antropoff did so in 1924.)
In 1962, Neil Bartlett and others at the University of British Columbia were working with a very unusual compound, platinum hexafluoride (PtF6). To their surprise, they discovered that it was a particularly active compound.
Naturally, they wanted to see what it could be made to do, and one of the thoughts that arose was that here might be something that could (just possibly) finally pin down an inert gas atom.
So Bartlett mixed the vapors of PtF6 with xenon and, to his astonishment, obtained a compound which seemed to be XePtF6, xenon platinum hexafluoride. The announcement of this result left a certain area of doubt, however.
Platinum hexafluoride was a sufficiently complex compound to make it just barely possible that it had formed a clathrate and trapped the xenon.
A group of chemists at Argonne National Laboratory in Chicago therefore tried the straight xenon-plus-fluorine experiment, heating one part of xenon with five parts of fluorine under pressure at 400° C. in a nickel container.
They obtained xenon tetrafluoride (XeF4), a straightforward compound of an inert gas, with no possibility of a clathrate. (To be sure, this experiment could have been tried years before, but it is no disgrace that it wasn't. Pure xenon is very hard to get and pure fluorine is very dangerous to handle, and no chemist could reasonably have been expected to undergo the expense and the risk for so slim-chanced a catch as an inert gas compound until after Bartlett's experiment had increased that "slim chance" tremendously.)
And once the Argonne results were announced, all Hades broke loose. It looked as though every inorganic chemist in the world went gibbering into the inert gas field. A whole raft of xenon compounds, including not only XeF4, but also XeF2, XeF6, XeOF2, XeOF3, XeOF4, XeO3, H4XeO4, and H6XeO6, have been reported.
Enough radon was scraped together to form radon tetrafluoride (RnF4). Even krypton, which is more inert than xenon, has been tamed, and krypton difluoride (KrF2) and krypton tetrafluoride (KrF4) have been formed.
The remaining three inert gases, argon, neon, and helium (in order of increasing inertness), as yet remain untouched.
They are the last of the bachelors, but the world of chemistry has the sound of wedding bells ringing in its ears, and it is a bad time for bachelors.
As an old (and cautious) married man, I can only say to this-no comment.
When I first began writing about science for the general public-far back in medieval times-I coined a neat phrase about the activity of a "light-fingered magical catalyst."
My editor stiffened as he came across that phrase, but not with admiration (as had been my modestly confident expectation). He turned on me severely and said, "Nothing in science is magical. It may be puzzling, mysterious, inexplicable, but it is never magical."
It pained me, as you can well imagine, to have to learn a lesson from an editor, of all people, but the lesson seemed too good to miss and, with many a wry grimace, I learned it.
That left me, however, with the problem of describing the workings of a catalyst, without calling upon magical power for an explanation.
Thus, one of the first experiments conducted by any beginner in a high school chemistry laboratory is to prepare oxygen by heating potassium chlorate. If it were only potassium chlorate he were heating, oxygen would be evolved but slowly and only at comparatively high temperatures. So he is instructed to add some manganese dioxide first. When he heats the mixture, oxygen comes off rapidly at comparatively low temperatures.
What does the manganese dioxide do? It contributes no oxygen. At the conclusion of the reaction it is all still there, unchanged. Its mere presence seems sufficient to hasten the evolution of oxygen. It is a haste-maker or, more properly, a catalyst.
And how can one explain influence by mere presence?
Is it a kind of molecular action at a distance, an extrasensory perception on the part of potassium chlorate that the influential aura of manganese dioxide is present? Is it telekinesis, a para-natural action at a distance on the part of the manganese dioxide? Is it, in short, magic?
Well, let's see…
To begin at the beginning, as I almost invariably do, the first and most famous catalyst in scientific history never existed.
The alchemists of old sought methods for turning base metals into gold. They failed, and so it seemed to them that some essential ingredient was missing in their recipes. The more imaginative among them conceived of a substance which, if added to the mixture they were heating (or whatever) would bring about the production of gold. A small quantity would suffice to produce a great deal of gold and it could be recovered and used again, no doubt.
No one had ever seen this substance but it was described, for some reason, as a dry, earthy material. The ancient alchemists therefore called it xerion, from a Greek word meaning "dry."
In the eighth century the Arabs took over alchemy and called this gold-making catalyst "the xerion" or, in Arabic, al-iksir. When West Europeans finally learned Arabic alchemy in the thirteenth century, al-iksir became "elixir."
As a further tribute to its supposed dry, earthy properties, it was commonly called, in Europe, "the philosopher's stone." (Remember that as late as 1800, a "natural philosopher" was what we would now call a "scientist.")
The amazing elixir was bound to have other marvelous properties as well, and the notion arose that it was a cure for all diseases and might very well confer immortality.
Hence, alchemists began to speak of "the elixir of life."
For centuries, the philosopher's stone and/or the elixir of life was searched for but not found. Then, when finally a catalyst was found, it brought about the formation not of lovely, shiny gold, but messy, dangerous sulfuric acid.
Wouldn't you know?
Before 1740, sulfuric acid was hard to prepare. [That's all right, though. Sulfuric acid may not be as costly as gold, but it is, conservatively speaking, a trillion times as intrinsically useful.] In theory, it was easy. You burn sulfur, combining it with oxygen to form sulfur dioxide (SO2). You burn sulfur dioxide further to make sulfur trioxide (SO3). You dissolve sulfur trioxide in water to make sulfuric acid (H2SO4). The trick, though, was to make sulfur dioxide combine with oxygen.
That could only be done slowly and with difficulty.
In the 1740s, however, an English sulfuric acid manufacturer named Joshua Ward must have reasoned that saltpeter (potassium nitrate), though nonflammable itself, caused carbon and sulfur to burn with great avidity. (In fact, carbon plus sulfur plus saltpeter is gunpowder.) Consequently, he added saltpeter to his burning sulfur and found that he now obtained sulfur trioxide without much trouble and could make sulfuric acid easily and cheaply.
The most wonderful thing about the process was that, at the end, the saltpeter was still present, unchanged. It could be used over and over again. Ward patented the process and the price of sulfuric acid dropped to 5 per cent of what it was before.
Magic? Well, no.
In 1806, two French chemists, Charles Bernard Désormes and Nicholas Clément, advanced an explanation that contained a principle which is accepted to this day.
It seems, you see, that when sulfur and saltpeter burn together, sulfur dioxide combines with a portion of the saltpeter molecule to form a complex. The oxygen of the saltpeter portion of the complex transfers to the sulfur dioxide portion, which now breaks away as sulfur trioxide.
What's left (the saltpeter fragment minus oxygen) proceeds to pick up that missing oxygen, very readily, from the atmosphere. The saltpeter fragment, restored again, is ready to combine with an additional molecule of sulfur dioxide and pass along oxygen. It is the saltpeter's task simply to pass oxygen from air to sulfur dioxide as fast as it can. It is a middleman, and of course it remains unchanged at the end of the reaction.
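The middleman idea can be caricatured in a few lines of Python. This is a toy simulation, not chemistry: the carrier picks up oxygen from an unlimited supply of air and hands it to sulfur dioxide, and the point is simply that the carrier count is the same at the end as at the start:

```python
# Toy simulation of a catalytic middleman, in the spirit of the
# saltpeter story above: each carrier hands one oxygen to one SO2
# molecule per cycle, then immediately reloads from the air.
def run_cycles(so2, carriers, cycles):
    so3 = 0
    for _ in range(cycles):
        for _ in range(carriers):
            if so2 == 0:
                break
            so2 -= 1          # SO2 + O (from carrier) -> SO3
            so3 += 1          # carrier reloads from the air, unchanged
    return so2, so3, carriers

left, made, carriers_after = run_cycles(so2=1000, carriers=10, cycles=50)
```

Ten carriers turn over five hundred molecules in fifty cycles, and the ten carriers are all still there at the end, which is the whole of the "mystery."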
In fact, the wonder is not that a catalyst hastens a reaction while remaining apparently unchanged, but that anyone should suspect even for a moment that anything "magical" is involved. If we were to come across the same phenomenon in the more ordinary affairs of life, we would certainly not make that mistake of assuming magic.
For instance, consider a half-finished brick wall and, five feet from it, a heap of bricks and some mortar. If that were all, then you would expect no change in the situation between 9 A.M. and 5 P.M. except that the mortar would dry out.
Suppose, however, that at 9 A.M. you observed one factor in addition: a man, in overalls, standing quietly between the wall and the heap of bricks with his hands empty. You observed matters again at 5 P.M. and the same man is standing there, his hands still empty. He has not changed. However, the brick wall is now completed and the heap of bricks is gone.
The man clearly fulfills the role of catalyst. A reaction has taken place as a result, apparently, of his mere presence and without any visible change or diminution in him.
Yet would we dream for a moment of saying "Magic!"?
We would, instead, take it for granted that had we observed the man in detail all day, we would have caught him transferring the bricks from the heap to the wall one at a time. And what's not magic for the bricklayer is not magic for the saltpeter, either.
With the birth and progress of the nineteenth century, more examples of this sort of thing were discovered. In 1812, for instance, the Russian chemist Gottlieb Sigismund Kirchhoff…
And here I break off and begin a longish digression for no other reason than that I want to; relying, as I always do, on the infinite patience and good humor of the Gentle Readers.
It may strike you that in saying "the Russian chemist, Gottlieb Sigismund Kirchhoff" I have made a humorous error. Surely no one with a name like Gottlieb Sigismund Kirchhoff can be a Russian! It depends, however, on whether you mean a Russian in an ethnic or in a geographic sense.
To explain what I mean, let's go back to the beginning of the thirteenth century. At that time, the regions of Courland and Livonia, along the southeastern shores of the Baltic Sea (the modern Latvia and Estonia) were inhabited by virtually the last group of pagans in Europe. It was the time of the Crusades, and the Germans to the southwest felt it a pious duty to slaughter the poorly armed and disorganized pagans for the sake of their souls.
The crusading Germans were of the "Order of the Knights of the Sword" (better known by the shorter and more popular name of "Livonian Knights"). They were joined in 1237 by the Teutonic Knights, who had first established themselves in the Holy Land. By the end of the thirteenth century the Baltic shores had been conquered, with the German expeditionary forces in control.
The Teutonic Knights, as a political organization, did not maintain control for more than a couple of centuries.
They were defeated by the Poles in the 1460s. The Swedes, under Gustavus Adolphus, took over in the 1620s, and in the 1720s the Russians, under Peter the Great, replaced the Swedes.
Nevertheless, however the political tides might shift and whatever flag flew and to whatever monarch the loyal inhabitants might drink toasts, the land itself continued to belong to the "Baltic barons" (or "Balts") who were the German-speaking descendants of the Teutonic Knights.
Peter the Great was an aggressive Westernizer who built a new capital, St. Petersburg,* at the very edge of the Livonian area, and the Balts were a valued group of subjects indeed.
This remained true all through the eighteenth and nineteenth centuries when the Balts possessed an influence within the Russian Empire out of all proportion to their numbers. Their influence in Russian science was even more lopsided.
The trouble was that public education within Russia lagged far behind its status in western Europe. The Tsars saw no reason to encourage public education and make trouble for themselves. No doubt they felt instinctively that a corrupt and stupid government is only really safe with an uneducated populace.

* The city was named for his name-saint and not for himself. Whatever Tsar Peter was, a saint he was not.
This meant that even elite Russians who wanted a secular education had to go abroad, especially if they wanted a graduate education in science. Going abroad was not easy, either, for it meant learning a new language and new ways. What's more, the Russian Orthodox Church viewed all Westerners as heretics and little better than heathens. Contact with heathen ways (such as science) was at best dangerous and at worst damnation. Consequently, for a Russian to travel West for an education meant the overcoming of religious scruples as well.
The Balts, however, were German in culture and Lutheran in religion and had none of these inhibitions. They shared, with the Germans of Germany itself, in the heightening level of education (in particular, of scientific education) through the eighteenth and nineteenth centuries.
So it follows that among the great Russian scientists of the nineteenth century we not only have a man with a name like Gottlieb Sigismund Kirchhoff, but also others with names like Friedrich Konrad Beilstein, Karl Ernst von Baer, and Wilhelm Ostwald.
This is not to say that there weren't Russian scientists in this period with Russian names. Examples are Mikhail Vasilievich Lomonosov, Aleksandr Onufrievich Kovalevski, and Dmitri Ivanovich Mendeléev.
However, Russian officialdom actually preferred the Balts (who supported the Tsarist government under which they flourished) to the Russian intelligentsia itself (which frequently made trouble and had vague notions of reform).
In addition, the Germans were the nineteenth-century scientists par excellence, and to speak Russian with a German accent probably lent distinction to a scientist.
(And before you sneer at this point of view, just think of the American stereotype of a rocket scientist. He has a thick German accent, nicht wahr? And this despite the fact that the first rocketman, and the one whose experiments started the Germans on the proper track [Robert Goddard], spoke with a New England twang.)
So it happened that the Imperial Academy of Sciences of the Russian Empire (the most prestigious scientific organization in the land) was divided into a "German party" and a "Russian party," with the former dominant.
In 1880 there was a vacancy in the chair of chemical technology at the Academy, and two names were proposed.
The German party proposed Beilstein, and the Russian party proposed Mendeléev. There was no comparison really. Beilstein spent years of his life preparing an encyclopedia of the properties and methods of preparation of many thousands of organic compounds which, with numerous supplements and additions, is still a chemical bible.
This is a colossal monument to his thorough, hard-working competence, but it is no more. Mendeléev, who worked out the periodic table of the elements, was, on the other hand, a chemist of the first magnitude, an undoubted genius in the field.
Nevertheless, government officials threw their weight behind Beilstein, who was elected by a vote of ten to nine.
It is no wonder, then, that in recent years, when the Russians have finally won a respected place in the scientific sun, they tend to overdo things a bit. They've got a great deal of humiliation to make up for.
That ends the digression, so I'll start over. As the nineteenth century wore on, more examples of haste-making were discovered. In 1812, for instance, the Russian chemist Gottlieb Sigismund Kirchhoff found that if he boiled starch in water to which a small amount of sulfuric acid had been added, the starch broke down to a simple form of sugar, one that is now called glucose. This would not happen in the absence of acid. When it did happen in the presence of acid, that acid was not consumed but was still present at the end.
Then, in 1816, the English chemist Humphry Davy found that certain organic vapors, such as those of alcohol, combined with oxygen more easily in the presence of metals such as platinum. Hydrogen combined more easily with oxygen in the presence of platinum also.
Fun and games with platinum started at once. In 1823 a German chemist, Johann Wolfgang Döbereiner, set up a hydrogen generator which, on turning an appropriate stopcock, would allow a jet of hydrogen to shoot out against a strip of platinum foil. The hydrogen promptly burst into flame and "Döbereiner's lamp" was therefore the first cigarette lighter. Unfortunately, impurities in the hydrogen gas quickly "poisoned" the expensive bit of platinum and rendered it useless.
In 1831 an English chemist, Peregrine Phillips, reasoned that if platinum could bring about the combination of hydrogen and of alcohol with oxygen, why should it not do the same for sulfur dioxide? Phillips found it would and patented the process. It was not for years afterward, however, that methods were discovered for delaying the poisoning of the metal, and it was only after that that a platinum catalyst could be profitably used in sulfuric acid manufacture to replace Ward's saltpeter.
In 1836 such phenomena were brought to the attention of the Swedish chemist Jöns Jakob Berzelius who, during the first half of the nineteenth century, was the uncrowned king of chemistry. It was he who suggested the words "catalyst" and "catalysis" from Greek words meaning "to break down" or "to decompose." Berzelius had in mind such examples of catalytic action as the decomposition of the large starch molecule into smaller sugar molecules by the action of acid.
But platinum introduced a new glamor to the concept of catalysis. For one thing, it was a rare and precious metal. For another, it enabled people to begin suspecting magic again.
Can platinum be expected to behave as a middleman as saltpeter does?
At first blush, the answer to that would seem to be in the negative. Of all substances, platinum is one of the most inert. It doesn't combine with oxygen or hydrogen under any normal circumstances. How, then, can it cause the two to combine?
If our metaphorical catalyst is a bricklayer, then platinum can only be a bricklayer tightly bound in a straitjacket.
Well, then, are we reduced to magic? To molecular action at a distance?
Chemists searched for something more prosaic. The suspicion grew during the nineteenth century that the inertness of platinum is, in one sense at least, an illusion. In the body of the metal, platinum atoms are attached to each other in all directions and are satisfied to remain so. In bulk, then, platinum will not react with oxygen or hydrogen (or most other chemicals, either).
On the surface of the platinum, however, atoms on the metal boundary and immediately adjacent to the air have no other platinum atoms, in the air-direction at least, to attach themselves to. Instead, then, they attach themselves to whatever atoms or molecules they find handy: oxygen atoms, for instance. This forms a thin film over the surface, a film one molecule thick. It is completely invisible, of course, and all we see is a smooth, shiny, platinum surface, which seems completely nonreactive and inert.
As parts of a surface film, oxygen and hydrogen react more readily than they do when making up bulk gas.
Suppose, then, that when a water molecule is formed by the combination of hydrogen and oxygen on the platinum surface, it is held more weakly than an oxygen molecule would be. The moment an oxygen molecule struck that portion of the surface it would replace the water molecule in the film. Now there would be the chance for the formation of another water molecule, and so on.
The platinum does act as a middleman after all, through its formation of the monomolecular gaseous film.
Furthermore, it is also easy to see how a platinum catalyst can be poisoned. Suppose there are molecules to which the platinum atoms will cling even more tightly than to oxygen. Such molecules will replace oxygen wherever it is found on the film and will not themselves be replaced by any gas in the atmosphere. They are on the platinum surface to stay, and any catalytic action involving hydrogen or oxygen is killed.
Since it takes very little substance to form a layer merely one molecule thick over any reasonable stretch of surface, a catalyst can be quickly poisoned by impurities that are present in the working mixture of gases, even when those impurities are present only in trace amounts.

If this is all so, then anything which increases the amount of surface in a given weight of metal will also increase the catalytic efficiency. Thus, powdered platinum, with a great deal of surface, is a much more effective catalytic agent than the same weight of bulk platinum. It is perfectly fair, therefore, to speak of "surface catalysis."
But what is there about a surface film that hastens the process of, let us say, hydrogen-oxygen combination? We still want to remove the suspicion of magic.
To do so, it helps to recognize what catalysts can't do.
For instance, in the 1870s, the American physicist Josiah Willard Gibbs painstakingly worked out the application of the laws of thermodynamics to chemical reactions.
He showed that there is a quantity called "free energy" which always decreases in any chemical reaction that is spontaneous, that is, one that proceeds without any input of energy.
Thus, once hydrogen and oxygen start reacting, they keep on reacting for as long as neither gas is completely used up, and as a result of the reaction water is formed. We explain this by saying that the free energy of the water is less than the free energy of the hydrogen-oxygen mixture.
The reaction of hydrogen and oxygen to form water is analogous to sliding down an "energy slope."
But if that is so, why don't hydrogen and oxygen molecules combine with each other as soon as they are mixed? Why do they linger for indefinite periods at the top of the energy slope after being mixed, and react and slide downward only after being heated?
Apparently, before hydrogen and oxygen molecules (each composed of a pair of atoms) can react, one or the other must be pulled apart into individual atoms. That requires an energy input. It represents an upward energy slope that must be climbed before the downward slope can be entered. It is an "energy hump," so to speak. The amount of energy that must be put into a reacting system to get it over that energy hump is called the "energy of activation," and the concept was first advanced in 1889 by the Swedish chemist Svante August Arrhenius.
When hydrogen and oxygen molecules are colliding at ordinary temperature, only the tiniest fraction happen to possess enough energy of motion to break up on collision.
That tiniest fraction, which does break up and does react, then liberates enough energy, as it slides down the energy slope, to break up additional molecules. However, so little energy is produced at any one time that it is radiated away before it can do any good. The net result is that hydrogen and oxygen mixed at room temperature do not react. If the temperature is raised, molecules move more rapidly and a larger proportion of them possess the necessary energy to break up on collision. (More, in other words, can slide over the energy hump.) More and more energy is released, and there comes a particular temperature when more energy is released than can be radiated away. The temperature is therefore further raised, which produces more energy, which raises the temperature still further, and hydrogen and oxygen proceed to react with an explosion.
In 1894 the Russian chemist Wilhelm Ostwald pointed out that a catalyst could not alter the free energy relationships. It cannot really make a reaction go that would not go without it, though it can make a reaction go rapidly that in its absence would proceed with only imperceptible speed.
In other words, hydrogen and oxygen combine in the absence of platinum but at an imperceptible rate, and the platinum haste-maker accelerates that combination. For water to decompose to hydrogen and oxygen at room temperature (without the input of energy in the form of an electric current, for instance) is impossible, for that would mean spontaneously moving up an energy slope. Neither platinum nor any other catalyst could make a chemical reaction move up an energy slope. If we found one that did so, then that would be magic.
Or else we would have to modify the laws of thermodynamics.
But how does platinum hasten the reaction it does hasten? What does it do to the molecules in the film?
Ostwald's suggestion (accepted ever since) is that catalysts hasten reactions by lowering the energy of activation of the reaction, flattening out the hump. At any given temperature, then, more molecules can cross over the hump and slide downward, and the rate of the reaction increases, sometimes enormously.
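That relationship can be made concrete with the Arrhenius rate law, which grew out of the activation-energy concept: the rate constant is k = A·e^(−Ea/RT). Here is a minimal sketch with invented numbers; the frequency factor and both activation energies are illustrative assumptions, not measured values for any real reaction:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rate_constant(A, Ea, T):
    """Arrhenius rate law: k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Illustrative (assumed) numbers: the same reaction at room temperature,
# with and without a catalyst that lowers the activation energy.
T = 298.0                    # kelvin (room temperature)
A = 1.0e13                   # frequency factor, per second (assumed)
Ea_uncatalyzed = 200_000.0   # J/mol (assumed)
Ea_catalyzed = 100_000.0     # J/mol (assumed: the catalyst halves the hump)

k1 = rate_constant(A, Ea_uncatalyzed, T)
k2 = rate_constant(A, Ea_catalyzed, T)

print(f"uncatalyzed k = {k1:.3e} per second")
print(f"catalyzed   k = {k2:.3e} per second")
print(f"speedup       = {k2 / k1:.3e}")
```

Because the activation energy sits inside an exponential, halving the hump in this sketch multiplies the rate by a factor of more than 10^17, which is why a catalyst can turn an imperceptibly slow reaction into a brisk one without adding any energy to the system.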
For instance, the two oxygen atoms in an oxygen molecule hold together with a certain, rather strong, attachment, and it is not easy to split them apart. Yet such splitting is necessary if a water molecule is to be formed.
When an oxygen molecule is attached to a platinum atom and forms part of a surface film, however, the situation changes. Some of the bond-forming capability of the oxygen molecule is used up in forming the attachment to the platinum, and less is available for holding the two oxygen atoms together. The oxygen molecule might be said to be "strained."
If a hydrogen molecule happens to strike such an oxygen molecule, strained in the film, it is more likely to knock it apart into individual oxygen atoms (and react with one of them) than would be the case if it collided with an oxygen molecule free in the body of the gas. The fact that the oxygen molecule is strained means that it is easier to break apart, and that the energy of activation for the hydrogen-oxygen combination has been lowered.
Or we can try a metaphor again. Imagine a brick resting on the upper reaches of a cement incline. The brick should, ideally, slide down the incline. To do so, however, it must overcome frictional forces which hold it in place against the pull of gravity. The frictional forces are here analogous to the forces holding the oxygen molecule together.
To overcome the frictional force one must give the brick an initial push (the energy of activation), and then it slides down.
Now, however, we will try a little "surface catalysis." We will coat the slide with wax. If we place the brick on top of such an incline, the merest touch will start it moving downward. It may move downward without any help from us at all.
In waxing the cement incline we haven't increased the force of gravity, or added energy to the system. We have merely decreased the frictional forces (that is, the energy hump), and bricks can be delivered down such a waxed incline much more easily and much more rapidly than down an unwaxed incline.
So you see that on inspection, the magical clouds of glory fade into the light of common day, and the wonderful word "catalyst" loses all its glamor. In fact, nothing is left to it but to serve as the foundation for virtually all of chemical industry and, in the form of enzymes, the foundation of all of life, too.
And, come to think of it, that ought to be glory enough for any reasonable catalyst.
Alas, the evidences of mortality are all about us; the other day our little parakeet died. As nearly as we could make out, it was a trifle over five years old, and we had always taken the best of care of it. We had fed it, watered it, kept its cage clean, allowed it to leave the cage and fly about the house, taught it a small but disreputable vocabulary, permitted it to ride about on our shoulders and eat at will from dishes at the table. In short, we encouraged it to think of itself as one of us humans.
But alas, its aging process remained that of a parakeet.
During its last year, it slowly grew morose and sullen; mentioned its improper words but rarely; took to walking rather than flying. And finally it died. And, of course, a similar process is taking place within me.
This thought makes me petulant. Each year I break my own previous record and enter new high ground as far as age is concerned, and it is remarkably cold comfort to think that everyone else is doing exactly the same thing.
The fact of the matter is that I resent growing old. In my time I was a kind of mild infant prodigy, you know, the kind that teaches himself to read before he is five and enters college at fifteen and is writing for publication at eighteen and all like that there. As you might expect, I came in for frequent curious inspection as a sort of ludicrous freak, and I invariably interpreted this inspection as admiration and loved it.
But such behavior carries its own punishment, for the moving finger writes, as Edward Fitzgerald said Omar Khayyam said, and having writ, moves on. And what that means is that the bright, young, bouncy, effervescent infant prodigy becomes a flabby, paunchy, bleary, middle-aged non-prodigy, and age sits twice as heavily on such as these.
It happens quite often that some huge, hulking, rawboned fellow, cheeks bristling with black stubble, comes to me and says in his bass voice, "I've been reading you ever since I learned to read; and I've collected all the stuff you wrote before I learned to read and I've read that, too."
My impulse then is to hit him a stiff right cross to the side of the jaw, and I might do so if only I were quite sure he would respect my age and not hit back.
So I see nothing for it but to find a way of looking at the bright side, if any exists…
How long do organisms live anyway? We can only guess.
Statistics on the subject have been carefully kept only in the last century or so, and then only for Homo sapiens, and then only in the more "advanced" parts of the world.
So most of what is said about longevity consists of quite rough estimates. But then, if everyone is guessing, I can guess, too; and as lightheartedly as the next person, you can bet.
In the first place, what do we mean by length of life?
There are several ways of looking at this, and one is to consider the actual length of time (on the average) that actual organisms live under actual conditions. This is the "life expectancy."
One thing we can be certain of is that life expectancy is quite trifling for all kinds of creatures. If a codfish or an oyster produces millions or billions of eggs and only one or two happen to produce young that are still alive at the end of the first year, then the average life expectancy of all the coddish or oysterish youngsters can be measured in weeks, or possibly even days. I imagine that thousands upon thousands of them live no more than minutes.
Matters are not so extreme among birds and mammals where there is a certain amount of infant care, but I'll bet relatively few of the smaller ones live out a single year.
From the cold-blooded view of species survival, this is quite enough, however. Once a creature has reached sexual maturity, and contributed to the birth of a litter of young which it sees through to puberty or near-puberty, it has done its bit for species survival and can go its way. If it survives and produces additional litters, well and good, but it doesn't have to.
There is, obviously, considerable survival value in reaching sexual maturity as early as possible, so that there is time to produce the next generation before the first is gone.
Meadow mice reach puberty in three weeks and can bear their first litter six weeks after birth. Even an animal as large as a horse or cow reaches the age of puberty after one year, and the largest whales reach puberty at two.
Some large land animals can afford to be slower about it.
Bears are adolescent only at six and elephants only at ten.
The large carnivores can expect to live a number of years, if only because they have relatively few enemies (always excepting man) and need not expect to be anyone's dinner. The largest herbivores, such as elephants and hippopotami, are also safe; while smaller ones such as baboons and water buffaloes achieve a certain safety by traveling in herds.
Early man falls into this category. He lived in small herds and he cared for his young. He had, at the very least, primitive clubs and eventually gained the use of fire. The average man, therefore, could look forward to a number of years of life. Even so, with undernourishment, disease, the hazards of the chase, and the cruelty of man to man, life was short by modern standards. Naturally, there was a limit to how short life could be. If men didn't live long enough, on the average, to replace themselves, the race would die out. However, I should guess that in a primitive society a life expectancy of 18 would be ample for species survival. And I rather suspect that the actual life expectancy of man in the Stone Age was not much greater.
As mankind developed agriculture and as he domesticated animals, he gained a more dependable food supply.
As he learned to dwell within walled cities and to live under a rule of law, he gained greater security against human enemies from without and within. Naturally, life expectancy rose somewhat. In fact, it doubled.
However, throughout ancient and medieval times, I doubt that life expectancy ever reached 40. In medieval
England, the life expectancy is estimated to have been 35, so that if you did reach the age of 40 you were a revered sage. What with early marriage and early childbirth, you were undoubtedly a grandfather, too.
This situation still existed into the twentieth century in some parts of the world. In India, for instance, as of 1950, the life expectancy was about 32; in Egypt, as of 1938, it was 36; in Mexico, as of 1940, it was 38.
The next great step was medical advance, which brought infection and disease under control. Consider the United
States. In 1850, life expectancy for American white males was 38.3 (not too much different from the situation in medieval England or ancient Rome). By 1900, however, after Pasteur and Koch had done their work, it was up to
48.2; then 56.3 in 1920; 60.6 in 1930; 62.8 in 1940; 66.3 in 1950; 67.3 in 1959; and 67.8 in 1961.
All through, females had a bit the better of it (being the tougher sex). In 1850, they averaged two years longer life than males; and by 1961, the edge had risen to nearly seven years. Non-whites in the United States don't do quite as well, not for any inborn reason, I'm sure, but because they generally occupy a position lower on the economic scale. They run some seven years behind whites in life expectancy. (And if anyone wonders why Negroes are restless these days, there's seven years of life apiece that they have coming to them. That might do as a starter.)
Even if we restrict ourselves to whites, the United States does not hold the record in life expectancy. I rather think
Norway and Sweden do. The latest figures I can find (the middle 1950s) give Scandinavian males a life expectancy of 71, and females one of 74.
This change in life expectancy has introduced certain changes in social custom. In past centuries, the old man was a rare phenomenon-an unusual repository of long memories and a sure guide to ancient traditions. Old age was revered, and in some societies where life expectancy is still low and old men still exceptional, old age is still revered.
It might also be feared. Until the nineteenth century there were particular hazards to childbirth, and few women survived the process very often (puerperal fever and all that). Old women were therefore even rarer than old men, and with their wrinkled cheeks and toothless gums were strange and frightening phenomena. The witch mania of early modern times may have been a last expression of that.
Nowadays, old men and women are very common and the extremes of both good and evil are spared them. Perhaps that's just as well.
One might suppose, what with the steady rise in life expectancy in the more advanced portions of the globe, that we need merely hold on another century to find men routinely living a century and a half. Unfortunately, this is not so. Unless there is a remarkable biological breakthrough in geriatrics, we have gone just about as far as we can go in raising the life expectancy.
I once read an allegory that has haunted me all my adult life. I can't repeat it word for word; I wish I could. But it goes something like this. Death is an archer and life is a bridge. Children begin to cross the bridge gaily, skipping along and growing older, while Death shoots at them. His aim is miserable at first, and only an occasional child is transfixed and falls off the bridge into the cloud-enshrouded mists below. But as the crowd moves farther along, Death's aim improves and the numbers thin. Finally, when Death aims at the aged who totter nearly to the end of the bridge, his aim is perfect and he never misses. And not one man ever gets across the bridge to see what lies on the other side.
This remains true despite all the advances in social structure and medical science throughout history. Death's aim has worsened through early and middle life, but those last perfectly aimed arrows are the arrows of old age, and even now they never miss. All we have done to wipe out war, famine, and disease has been to allow more people the chance of experiencing old age. When life expectancy was
35, perhaps one in a hundred reached old age; nowadays nearly half the population reaches it-but it is the same old old age. Death gets us all, and with every scrap of his ancient efficiency.
In short, putting life expectancy to one side, there is a "specific age" which is our most common time of death from inside, without any outside push at all; the age at which we would die even if we avoided accident, escaped disease, and took every care of ourselves.
Three thousand years ago, the psalmist testified as to the specific age of man (Ps. 90:10), saying: "The days of our years are threescore years and ten; and if by reason of strength they be fourscore years, yet is their strength labor and sorrow; for it is soon cut off, and we fly away."
And so it is today; three millennia of civilization and three centuries of science have not changed it. The commonest time of death by old age lies between 70 and 80.
But that is just the commonest time. We don't all die on our 75th birthday; some of us do better, and it is undoubtedly the hope of each one of us that we ourselves, personally, will be one of those who will do better. So what we have our eye on is not the specific age but the maximum age we can reach.
Every species of multicellular creature has a specific age and a maximum age; and of the species that have been studied to any degree at all, the maximum age would seem to be between 50 and 100 per cent longer than the specific age. Thus, the maximum age for man is considered to be about 115.
There have been reports of older men, to be sure. The most famous is the case of Thomas Parr ("Old Parr"), who was supposed to have been born in 1481 in England and to have died in 1635 at the age of 154. The claim is not believed to be authentic (some think it was a put-up job involving three generations of the Parr family), nor are any other claims of the sort. The Soviet Union reports numerous centenarians in the Caucasus, but all were born in a region and at a time when records were not kept. The old man's age rests only upon his own word, therefore, and ancients are notorious for a tendency to lengthen their years. Indeed, we can make it a rule, almost, that the poorer the recording of vital statistics in a particular region, the older the centenarians claim to be.
In 1948, an English woman named Isabella Shepheard died at the reported age of 115. She was the last survivor, within the British Isles, from the period before the compulsory registration of births, so one couldn't be certain to the year. Still, she could not have been younger by more than a couple of years. In 1814, a French Canadian named Pierre Joubert died and he, apparently, had reliable records to show that he was born in 1701, so that he died at 113.
Let's accept 115 as man's maximum age, then, and ask whether we have a good reason to complain about this.
How does the figure stack up against maximum ages for other types of living organisms?

If we compare plants with animals, there is no question that plants bear off the palm of victory. Not all plants generally, to be sure. To quote the Bible again (Ps. 103:15-16), "As for man his days are as grass: as a flower of the field, so he flourisheth. For the wind passeth over it, and it is gone; and the place thereof shall know it no more."
This is a spine-tingling simile representing the evanescence of human life, but what if the psalmist had said that as for man his days are as the oak tree; or better still, as the giant sequoia? Specimens of the latter are believed to be over three thousand years old, and no maximum age is known for them.
However, I don't suppose any of us wants long life at the cost of being a tree. Trees live long, but they live slowly, passively, and in terribly, terribly dull fashion. Let's see what we can do with animals.
Very simple animals do surprisingly well and there are reports of sea-anemones, corals, and such-like creatures passing the half-century mark, and even some tales (not very reliable) of centenarians among them. Among more elaborate invertebrates, lobsters may reach an age of 50 and clams one of 30. But I think we can pass invertebrates, too. There is no reliable tale of a complex invertebrate living to be 100 and even if giant squids, let us say, did so, we don't want to be giant squids.
What about vertebrates? Here we have legends, particularly about fish. Some tell us that fish never grow old but live and grow forever, not dying till they are killed. Individual fish are reported with ages of several centuries.
Unfortunately, none of this can be confirmed. The oldest age reported for a fish by a reputable observer is that of a lake sturgeon which is supposed to be well over a century old, going by a count of the rings on the spiny ray of its pectoral fin.
Among amphibia the record holder is the giant salamander, which may reach an age of 50. Reptiles are better.
Snakes may reach an age of 30 and crocodiles may attain 60, but it is the turtles that hold the record for the animal kingdom. Even small turtles may reach the century mark, and at least one larger turtle is known, with reasonable certainty, to have lived 152 years. It may be that the large Galapagos turtles can attain an age of 200.
But then turtles live slowly and dully, too. Not as slowly as plants, but too slowly for us. In fact, there are only two classes of living creatures that live intensely and at peak level at all times, thanks to their warm blood, and these are the birds and the mammals. (Some mammals cheat a little and hibernate through the winter and probably extend their life span in that manner.) We might envy a tiger or an eagle if they lived a long, long time and even, as the shades of old age closed in, wish we could trade places with them. But do they live a long, long time?
Of the two classes, birds on the whole do rather better than mammals as far as maximum age is concerned. A pigeon can live as long as a lion and a herring gull as long as a hippopotamus. In fact, we have long-life legends about some birds, such as parrots and swans, which are supposed to pass the century mark with ease.
Any devotee of the Dr. Dolittle stories (weren't you?) must remember Polynesia, the parrot, who was in her third century. Then there is Tennyson's poem Tithonus, about that mythical character who was granted immortality but, through an oversight, not freed from the incubus of old age, so that he grew older and older and was finally, out of pity, turned into a grasshopper. Tennyson has him lament that death comes to all but him. He begins by pointing out that men and the plants of the field die, and his fourth line is an early climax, going, "And after many a summer dies the swan." In 1939, Aldous Huxley used the line as a title for a book that dealt with the striving for physical immortality.
However, as usual, these stories remain stories. The oldest confirmed age reached by a parrot is 73, and I imagine that swans do not do much better. An age of 115 has been reported for carrion crows and for some vultures, but this is with a pronounced question mark.
Mammals interest us most, naturally, since we are mammals, so let me list the maximum ages for some mammalian types. (I realize, of course, that the word "rat" or "deer" covers dozens of species, each with its own aging pattern, but I can't help that. Let's say the typical rat or the typical deer.)
Elephant      77    Cat          20
Whale         60    Pig          20
Hippopotamus  49    Dog          18
Donkey        46    Goat         17
Gorilla       45    Sheep        16
Horse         40    Kangaroo     16
Chimpanzee    39    Bat          15
Zebra         38    Rabbit       15
Lion          35    Squirrel     15
Bear          34    Fox          14
Cow           30    Guinea Pig    7
Monkey        29    Rat           4
Deer          25    Mouse         3
Seal          25    Shrew         2
The maximum age, be it remembered, is reached only by exceptional individuals. While an occasional rabbit may make 15, for instance, the average rabbit would die of old age before it was 10 and might have an actual life ex pectancy of only 2 or 3 years.
In general, among all groups of organisms sharing a common plan of structure, the large ones live longer than the small. Among plants, the giant sequoia tree lives longer than the daisy. Among animals, the giant sturgeon lives longer than the herring, the giant salamander lives longer than the frog, the giant alligator lives longer than the lizard, the vulture lives longer than the sparrow, and the elephant lives longer than the shrew.
Indeed, in mammals particularly, there seems to be a strong correlation between longevity and size. There are exceptions, to be sure, some startling ones. For instance, whales are extraordinarily short-lived for their size. The age of 60 I have given is quite exceptional. Most cetaceans are doing very well indeed if they reach 30. This may be because life in the water, with the continuous loss of heat and the never-ending necessity of swimming, shortens life.
But much more astonishing is the fact that man has a longer life than any other mammal, much longer than the elephant or even than the closely allied gorilla. When a human centenarian dies, of all the animals in the world alive on the day that he was born, the only ones that remain alive on the day of his death (as far as we know) are a few sluggish turtles, an occasional ancient vulture or sturgeon, and a number of other human centenarians. Not one non-human mammal that came into this world with him has remained. All, without exception (as far as we know), are dead.
If you think this is remarkable, wait! It is more remarkable than you suspect.
The smaller the mammal, the faster the rate of its metabolism; the more rapidly, so to speak, it lives. We might well suppose that while a small mammal doesn't live as long as a large one, it lives more rapidly and more intensely. In some subjective manner, the small mammal might be viewed as living just as long in terms of sensation as does the more sluggish large mammal. As concrete evidence of this difference in metabolism among mammals, consider the heartbeat rate. The following table lists some rough figures for the average number of heartbeats per minute in different types of mammal.
Shrew 1000 Sheep 75
Mouse 550 Man 72
Rat 430 Cow 60
Rabbit 150 Lion 45
Cat 130 Horse 38
Dog 95 Elephant 30
Pig 75 Whale 17
For the fourteen types of animals listed we have the heartbeat rate (approximate) and the maximum age (approximate), and by appropriate multiplications, we can determine the maximum age of each type of creature, not in years but in total heartbeats. The result follows:
Shrew 1,050,000,000
Mouse 950,000,000
Rat 900,000,000
Rabbit 1,150,000,000
Cat 1,350,000,000
Dog 900,000,000
Pig 800,000,000
Sheep 600,000,000
Lion 830,000,000
Horse 800,000,000
Cow 950,000,000
Elephant 1,200,000,000
Whale 630,000,000
Allowing for the approximate nature of all my figures, I look at this final table through squinting eyes from a distance and come to the following conclusion: A mammal can, at best, live for about a billion heartbeats and when those are done, it is done.
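The multiplication behind that table can be sketched in a few lines. The heartbeat rates and maximum ages are the rough figures from the two tables above (the mouse's age is taken as 3 years), so the products only roughly match the totals listed:

```python
# Maximum age in total heartbeats: rate (beats per minute) times
# minutes per year times maximum age in years.
MINUTES_PER_YEAR = 60 * 24 * 365  # 525,600

animals = {
    # name: (heartbeats per minute, maximum age in years)
    "Shrew":    (1000, 2),
    "Mouse":    (550, 3),
    "Rat":      (430, 4),
    "Rabbit":   (150, 15),
    "Cat":      (130, 20),
    "Dog":      (95, 18),
    "Pig":      (75, 20),
    "Sheep":    (75, 16),
    "Lion":     (45, 35),
    "Horse":    (38, 40),
    "Cow":      (60, 30),
    "Elephant": (30, 77),
    "Whale":    (17, 60),
}

for name, (bpm, years) in animals.items():
    beats = bpm * MINUTES_PER_YEAR * years
    print(f"{name:9s} {beats / 1e9:.2f} billion heartbeats")
```

Despite a five-hundredfold spread in heartbeat rate, every product comes out in the rough neighborhood of a billion, which is the whole point of the squint.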
But you'll notice that I have left man out of the table.
That's because I want to treat him separately. He lives at the proper speed for his size. His heartbeat rate is about that of other animals of similar weight. It is faster than the heartbeat of larger animals, slower than the heartbeat of smaller animals. Yet his maximum age is 115 years, and that means his maximum number of heartbeats is about 4,350,000,000.
An occasional man can live for over 4 billion heartbeats!
In fact, the life expectancy of the American male these days is 2.5 billion heartbeats. Any man who passes the quarter-century mark has gone beyond the billionth heartbeat mark and is still young, with the prime of life ahead.
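Both figures can be checked with the same multiplication, using the 72 beats per minute from the heartbeat table and the 1961 life expectancy of 67.8 years quoted earlier:

```python
# A man's maximum and expected lifetimes, converted from years
# to total heartbeats at roughly 72 beats per minute.
MINUTES_PER_YEAR = 60 * 24 * 365  # 525,600

BPM = 72
max_age = 115            # maximum human age, in years
life_expectancy = 67.8   # American male, 1961, from the figures above

max_beats = BPM * MINUTES_PER_YEAR * max_age
expected_beats = BPM * MINUTES_PER_YEAR * life_expectancy

print(f"maximum:  {max_beats / 1e9:.2f} billion heartbeats")
print(f"expected: {expected_beats / 1e9:.2f} billion heartbeats")
```

The maximum comes to about 4.35 billion and the expectancy to about 2.57 billion, in agreement with the round figures in the text.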
Why? It is not just that we live longer than other mammals. Measured in heartbeats, we live four times as long!
Why??
Upon what meat doth this, our species, feed, that we are grown so great? Not even our closest non-human relatives match us in this. If we assume the chimpanzee to have our heartbeat rate and the gorilla to have a slightly slower one, each lives for a maximum of about 1.5 billion heartbeats, which isn't very much out of line for mammals generally. How then do we make it to 4 billion?
What secret in our hearts makes those organs work so much better and last so much longer than any other mammalian heart in existence? Why does the moving finger write so slowly for us, and for us only?
Frankly, I don't know, but whatever the answer, I am comforted. If I were a member of any other mammalian species my heart would be stilled long years since, for it has gone well past its billionth beat. (Well, a little past.)
But since I am Homo sapiens, my wonderful heart beats even yet with all its old fire; and speeds up in proper fashion at all times when it should speed up, with a verve and efficiency that I find completely satisfying.
Why, when I stop to think of it, I am a young fellow, a child, an infant prodigy. I am a member of the most unusual species on earth, in longevity as well as brain power, and I laugh at birthdays.
(Let's see now. How many years to 115?)