APPENDIX: Science and Science Fiction

Writers, readers and critics of science fiction often seem unable to produce a workable definition of the field, but one of the things they usually agree on is the existence of a particular branch that is usually termed “hard” science fiction. People who like this branch will tell you it is the only subdivision that justifies the word science, and that everything else is simple fantasy; and they will use words like “authentic,” “scientifically accurate,” “extrapolative,” and “inventive” to describe it. People who don’t like it say it is dull and bland, and use words like “characterless,” “mechanical,” “gadgetry,” or “rockets and rayguns” to describe it. Some people can’t stand hard SF, others will read nothing else.

Hard science fiction can be defined in several different ways. My favorite definition is an operational one: if you can take the science and scientific speculation away from a story, and not do it serious injury, then it was not hard SF to begin with. Here is another definition that I like rather less well: in a hard SF story, the scientific techniques of observation, analysis, logical theory, and experimental test must be applied, no matter where or when the story takes place. My problem with this definition is that it would classify many mystery and fantasy stories as hard science fiction.

Whatever the exact definition, there is usually little difficulty deciding whether a particular story is “hard” or “soft” science fiction. And although a writer never knows quite what he or she has written, and readers often pull things out of a story that were never consciously put in, I certainly think of the book you are holding as probably the hardest SF that I write. Each story revolves around some element of science, and without that element the story would collapse. If the stories reflect any common theme, it is my own interest in science, particularly astronomy and physics. Because of this, and because the science is what I have elsewhere termed “borderland science” (Borderlands of Science: How to Think Like A Scientist and Write Science Fiction; Baen Books, 1999), I feel a responsibility to the reader. It is one that derives from my own early experiences with science fiction.

I discovered the field for myself as a teenager (as did almost everyone else I knew — in school we were tormented with Wordsworth and Bunyan, while Clarke and Heinlein had to be private after-school pleasures). Knowing at the time a negligible amount of real science, I swallowed whole and then regurgitated to my friends everything presented as science in the SF magazines. That quickly built me a reputation as a person stuffed with facts and theories — many of them wrong and some of them decidedly weird. The writers didn’t bother to distinguish the scientific theories that they borrowed, from the often peculiarly unscientific theories that they made up for the story. Neither did I.

I knew all about the canals on Mars, the dust pools on the Moon, and the swamps on Venus, about the Dean drive and dianetics and the Hieronymus machine. I believed that men and pigs were more closely related than men and monkeys; that atoms were miniature solar systems; that you could shoot men to the moon with a cannon (a belief that didn’t survive my first course in dynamics); that the pineal gland was certainly a rudimentary third eye and probably the seat of parapsychological powers; that Rhine’s experiments at Duke University had made telepathy an unquestioned part of modern science; that with a little ingenuity and a few electronic bits and pieces you could build in your backyard a spacecraft to take you to the moon; and that, no matter what alien races might have developed on other worlds and be scattered around the Galaxy, humans would prove to be the smartest, most resourceful, and most wonderful species to be found anywhere.

That last point may even be true. As Pogo remarked long ago, true or false, either way it’s a mighty sobering thought.

What I needed was a crib sheet. We had them in school for the works of Shakespeare. They were amazingly authoritative, little summaries that outlined the plot, told us just who did what and why, and even informed us exactly what was in Shakespeare’s head when he was writing the play. If they didn’t say what he had for lunch that day, it was only because that subject never appeared on examination papers. Today’s CliffsNotes are less authoritative, but only, I suspect, because the changing climate of political correctness encourages commentators to be as bland as possible.

I didn’t know it at the time, but the crib sheets were what I was missing in science fiction. Given the equivalent type of information about SF, I would not have assured my friends (as I did) that the brains of industrial robots made use of positrons, that the work of Dirac and Blackett would lead us to a faster-than-light drive, or that the notebooks of Leonardo da Vinci gave all the details needed to construct a moon rocket.

As Mark Twain remarked, it’s not what we don’t know that causes the trouble, it’s the things we know that ain’t so. (This is an example of the problem. I was sure this was said by Mark Twain, but when I looked it up I found it was a Josh Billings line. Since then I have seen it attributed to Artemus Ward.) What follows, then, is my crib sheet for this book. This Appendix sorts out the real science, based on and consistent with today’s theories (but probably not tomorrow’s), from the “science” that I made up for these stories. I have tried to provide a clear dividing line, at the threshold where fact stops and fiction takes over. But even the invented material is designed to be consistent with and derived from what is known today. It does not contradict current theories, although you will not find papers about it in the Physical Review or the Astrophysical Journal.

The reader may ask, which issues of these publications? That’s a very fair question. After all, these stories were written over a twenty-year period. In that time, science has advanced, and it’s natural to ask how much of what I wrote still has scientific acceptance.

I reread each story with that in mind, and so far as I know everything still fits with current knowledge. A few things have even gained in plausibility. For example, when I wrote “Rogueworld” we had no direct evidence of any extra-solar planets. Now reports come in every month or two of another world around some other star, based not on direct observation of the planet but on small observed perturbations in the apparent position of the star itself. The idea of vacuum energy extraction, first introduced to science fiction in “All the Colors of the Vacuum,” has proceeded from wild science fiction idea to funded research. Black holes, which at the time I wrote “Killing Vector” were purely theoretical entities, form a standard part of modern cosmology. A big black hole, about 2.5 million times the mass of the Sun, is believed to lie at the center of our own galaxy. Radiating black holes, which in 1977 were another way-out idea, are now firmly accepted. The Oort cloud, described in “The Manna Hunt,” is a standard part of today’s physical model of the extended Solar System.

So has there been nothing new in science in the past twenty years? Not at all. Molecular biology has changed so fast and so much since the 1970s that the field seen from that earlier point of view is almost unrecognizable, and the biggest changes still lie in the future. Computers have become smaller, more powerful, and ubiquitous, beyond what anyone predicted twenty years ago. We also stand today on the verge of quantum computation, which takes advantage of the fact that at the quantum level a system can exist in several states simultaneously. The long-term potential of that development is staggering.

Finally, in the very week that I write this, a report has appeared of the first successful experiment in “quantum teleportation.” Via a process known as “entanglement,” which couples the quantum state of two widely separated systems, a Caltech team “teleported” a pattern of information from one location to another, independent of the speed of light. If there isn’t a new hard SF story in that report, I don’t know where you’ll find one.


* * *

Kernels, black holes, and singularities.

Kernels feature most prominently in the first chronicle, but they are assumed and used in all the others, too. A kernel is actually a Ker-N-le, which is shorthand for Kerr-Newman black hole.

To explain Kerr-Newman black holes, it is best to follow McAndrew’s technique, and go back a long way in time. We begin in 1915. In that year, Albert Einstein published the field equations of general relativity in their present form. He had been trying different possible formulations since about 1908, but he was not satisfied with any of them before the 1915 set. His final statement consisted of ten coupled, nonlinear, partial differential equations, relating the curvature of space-time to the presence of matter.

The equations are very elegant and can be written down in tensor form as a single short line of algebra. But written out in full they are horrendously long and complex — so much so that Einstein himself did not expect to see any exact solutions, and thus perhaps didn’t look very hard. When Karl Schwarzschild, just the next year, produced an exact solution to the “one-body problem” (he found the gravitational field produced by an isolated mass particle), Einstein was reportedly quite surprised.
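For reference, here is that single short line of algebra, written in modern textbook notation (this display is my addition, not a quotation from Einstein):

$$ R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\,T_{\mu\nu} $$

The left-hand side is built from the metric of space-time and measures its curvature; the right-hand side is the stress-energy tensor that describes the matter present; letting the indices run over the four space-time coordinates generates the ten coupled equations. Schwarzschild’s solution is the simplest exact solution of this set, the field outside a single, isolated, non-rotating spherical mass.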

This “Schwarzschild solution” was for many years considered mathematically interesting, but of no real physical importance. People were much more interested in looking at approximate solutions of Einstein’s field equations that could provide possible tests of the theory. Everyone wanted to compare Einstein’s ideas on gravity with those introduced two hundred and fifty years earlier by Isaac Newton, to see where there might be detectable differences. The “strong field” case covered by the Schwarzschild solution seemed less relevant to the real world.

For the next twenty years, little was discovered to lead us toward kernels. Soon after Schwarzschild published his solution, Reissner and Nordstrom solved the general relativity equations for a spherical mass particle that also carried an electric charge. This included the Schwarzschild solution as a special case, but it was considered to have no physical significance and it too remained a mathematical curiosity.

The situation finally changed in 1939. In that year, Oppenheimer and Snyder were studying the collapse of a star under gravitational forces — a situation which certainly did have physical significance, since it is a common stellar occurrence.

Two remarks made in their summary are worth quoting directly: “Unless fission due to rotation, the radiation of mass, or the blowing off of mass by radiation, reduce the star’s mass to the order of the sun, this contraction will continue indefinitely.” In other words, not only can a star collapse, but if it is heavy enough there is no way that the collapse and contraction can be stopped. And “the radius of the star approaches asymptotically its gravitational radius; light from the surface of the star is progressively reddened, and can escape over a progressively narrower range of angles.” This is the first modern picture of a black hole, a body with a gravitational field so strong that light cannot escape from it. (We have to say “modern picture” because before 1800 it had been noted as a curiosity that a sufficiently massive body could have an escape velocity from its surface that exceeded the speed of light; in a sense, the black hole was predicted more than two hundred years ago.)

Notice that the collapsing body does not have to contract indefinitely if it is the size of the Sun or smaller, so we do not have to worry that the Earth, say, or the Moon, will shrink indefinitely to become a black hole. Notice also that there is a reference to the “gravitational radius” of the black hole. This was something that came straight out of the Schwarzschild solution, the distance where the reddening of light became infinite, so that any light coming from inside that radius could never be seen by an outside observer. The gravitational radius for the Sun is only about three kilometers, so if the Sun were squeezed down to that size, conditions inside the collapsed body would defy the imagination: the density of matter would be about twenty billion tons per cubic centimeter.
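As a rough check of those figures (the sketch below is mine, not part of the original argument), the gravitational radius is r_s = 2GM/c², and the quoted density follows from squeezing a solar mass into a sphere of that radius:

```python
# Rough check of the Sun's gravitational (Schwarzschild) radius, r_s = 2*G*M/c**2,
# and of the density implied by squeezing a solar mass into a sphere of that size.
from math import pi

G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8      # speed of light, m/s
M_sun = 1.989e30     # mass of the Sun, kg

r_s = 2 * G * M_sun / c**2                         # ~2.95e3 m, about three kilometers
volume = (4.0 / 3.0) * pi * r_s**3                 # m^3
density_tons_per_cc = M_sun / volume / 1e3 / 1e6   # metric tons per cubic centimeter

print(f"gravitational radius: {r_s / 1e3:.1f} km")
print(f"density: {density_tons_per_cc:.1e} tons per cm^3")   # ~2e10, about twenty billion
```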

You might think that Oppenheimer and Snyder’s paper, with its apparently bizarre conclusions, would have produced a sensation. In fact, it aroused little notice for a long time. It too was looked at as a mathematical oddity, a result that physicists needn’t take too seriously.

What was going on here? The Schwarzschild solution had been left on the shelf for a generation, and now the Oppenheimer results were in their turn regarded with no more than mild interest.

One could argue that in the 1920s the attention of leading physicists was elsewhere, as they tried to drink from the fire-hose flood of theory and experiment that established quantum theory. But what about the 1940s and 1950s? Why didn’t whole groups of physicists explore the consequences for general relativity and astrophysics of an indefinitely collapsing stellar mass?

Various explanations could be offered, but I favor one that can be stated in a single word: Einstein. He was a gigantic figure, stretching out over everything in physics for the first half of this century. Even now, he casts an enormous shadow over the whole field. Until his death in 1955, researchers in general relativity and gravitation felt a constant awareness of his presence, of his genius peering over their shoulder. If Einstein had not been able to penetrate the mystery, went the unspoken argument, what chance do the rest of us have? Not until after his death was there a resurgence of interest and spectacular progress in general relativity. And it was one of the leaders of that resurgence, John Wheeler, who in 1967 gave the Schwarzschild solution the inspired name needed to capture everyone’s fancy: the black hole.

We still have not reached the kernel. The black hole that Wheeler named was still the Schwarzschild black hole, the object that McAndrew spoke of with such derision. It had a mass, and possibly an electric charge, but that was all. The next development came in 1963, and it was a big surprise to everyone working in the field.

Roy Kerr, at that time associated with the University of Texas at Austin, had been exploring a particular set of Einstein’s field equations that assumed an unusually simple form for the metric (the metric is the thing that defines distances in a curved space-time). The analysis was highly mathematical and seemed wholly abstract, until Kerr found that he could produce a form of exact solution to the equations. The solution included the Schwarzschild solution as a special case, but there was more; it provided in addition another quantity that Kerr was able to associate with spin.

In the Physical Review Letters of September, 1963, Kerr published a one-page paper with the not-too-catchy title, “Gravitational field of a spinning mass as an example of algebraically special metrics.” In this paper he described the Kerr solution for a spinning black hole. I think it is fair to say that everyone, probably including Kerr himself, was astonished.

The Kerr black hole has a number of fascinating properties, but before we get to them let us take the one final step needed to reach the kernel. In 1965 Ezra Newman and colleagues at the University of Pittsburgh published a short note in the Journal of Mathematical Physics, pointing out that the Kerr solution could be generated from the Schwarzschild solution by a curious mathematical trick, in which a real coordinate was replaced by a complex one. They also realized that the same trick could be applied to the charged black hole, and thus they were able to provide a solution for a rotating, charged black hole: the Kerr-Newman black hole, that I call the kernel.

The kernel has all the nice features admired by McAndrew. Because it is charged, you can move it about using electric and magnetic fields. Because you can add and withdraw rotational energy, you can use it as a power source and a power reservoir. A Schwarzschild black hole lacks these desirable qualities. As McAndrew says, it just sits there.

One might think that this is just the beginning. There could be black holes that have mass, charge, spin, axial asymmetry, dipole moments, quadrupole moments, and many other properties. It turns out that this is not the case. The only properties that a black hole can possess are mass, charge, spin and magnetic moment — and the last one is uniquely fixed by the other three.

This strange result, often stated as the theorem “A black hole has no hair” (i.e., it has no detailed structure), was established to most people’s satisfaction in a powerful series of papers in 1967-1972 by Werner Israel, Brandon Carter, and Stephen Hawking. A black hole is fixed uniquely by its mass, spin, and electric charge. Kernels are the end of the line, and they represent the most general kind of black hole that physics permits.

After 1965, more people were working on general relativity and gravitation, and other properties of the Kerr-Newman black holes rapidly followed. Some of them were very strange. For example, the Schwarzschild black hole has a characteristic surface associated with it, a sphere where the reddening of light becomes infinite, and from within which no information can ever be sent to the outside world. This surface has various names: the surface of infinite red shift, the trapping surface, the one-way membrane, and the event horizon. But the Kerr-Newman black holes turn out to have two characteristic surfaces associated with them, and the surface of infinite red shift is in this case distinct from the event horizon.

To visualize these surfaces, take a hamburger bun and hollow out the inside enough to let you put a round hamburger patty entirely within it. For a Kerr-Newman black hole, the outer surface of the bread (which is a sort of ellipsoid in shape) is the surface of infinite red shift, the “static limit” within which no particle can remain at rest, no matter how hard its rocket engines work. Inside the bun, the surface of the meat patty is a sphere, the “event horizon,” from which no light or particle can ever escape. We can never find out anything about what goes on within the meat patty’s surface, so its composition must be a complete mystery (you may have eaten hamburgers that left the same impression). For a rotating black hole, these bun and patty surfaces touch only at the north and south poles of the axis of rotation (the top and bottom centers of the bun). The really interesting region, however, is that between these two surfaces — the remaining bread, usually called the ergosphere. It has a property which allows the kernel to become a power kernel.
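For those who prefer formulas to hamburgers, the two surfaces have simple expressions in the standard (Boyer-Lindquist) coordinates. This is textbook material rather than anything taken from the chronicles. In units with G = c = 1, for a hole of mass M, spin parameter a, and charge Q,

$$ r_{\mathrm{horizon}} = M + \sqrt{M^{2} - a^{2} - Q^{2}}, \qquad r_{\mathrm{static}}(\theta) = M + \sqrt{M^{2} - a^{2}\cos^{2}\theta - Q^{2}}. $$

The first is the spherical event horizon (the meat patty); the second, which depends on the angle θ measured from the rotation axis, is the static limit (the outer surface of the bun). At the poles the two expressions coincide, just as bun and patty touch at top and bottom; everywhere else the static limit lies outside the horizon, and the region between them is the ergosphere.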

Roger Penrose (whom we will meet again in a later chronicle) pointed out in 1969 that it is possible for a particle to dive in towards a Kerr black hole, split in two when it is inside the ergosphere, and then have part of it ejected in such a way that it has more total energy than the whole particle that went in. If this is done, we have extracted energy from the black hole.

Where has that energy come from? Black holes may be mysterious, but we still do not expect that energy can be created from nothing.

Note that we said a Kerr black hole — not a Schwarzschild black hole. The energy we extract comes from the rotational energy of the spinning black hole, and if a hole is not spinning, no energy can possibly be extracted from it in this way. As McAndrew remarked, a Schwarzschild hole is dull, a dead object that cannot be used to provide power. A Kerr black hole, on the other hand, is one of the most efficient energy sources imaginable, better by far than most nuclear fission or fusion processes. (A Kerr-Newman black hole allows the same energy extraction process, but we have to be a little more careful, since only part of the ergosphere can be used.)

If a Kerr-Newman black hole starts out with only a little spin energy, the energy-extraction process can be worked in reverse, to provide more rotational energy — the process that McAndrew referred to as “spin-up” of the kernel. “Spin-down” is the opposite process, the one that extracts energy. A brief paper by Christodoulou in the Physical Review Letters of 1970 discussed the limits on this process, and pointed out that you could spin a kernel up only to a certain limit, termed an “extreme” Kerr solution. Past that limit (which can never be reached using the Penrose process) a solution to the Einstein field equations can still be written down. This was done by Tomimatsu and Sato, and presented in 1972 in another one-page paper in Physical Review Letters. It is a very odd solution indeed. It has no event horizon, which means that activities there are not shielded from the rest of the Universe as they are for the usual kernels. And it has what is referred to as a “naked singularity” associated with it, where cause and effect relationships no longer apply. This bizarre object was discussed by Gibbons and Russell-Clark, in 1973, in yet another paper in Physical Review Letters.
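Christodoulou’s limit can also be put into numbers. The sketch below is my own illustration, using his standard “irreducible mass” relation for an uncharged Kerr hole (in units with G = c = 1, M_irr² = ½(M² + √(M⁴ − J²))); the rotational energy available for extraction is M − M_irr.

```python
# Rotational energy extractable from an uncharged Kerr black hole, using the
# Christodoulou irreducible-mass relation (geometrized units, G = c = 1):
#   M_irr**2 = 0.5 * (M**2 + sqrt(M**4 - J**2))
from math import sqrt

def extractable_fraction(j_over_m2):
    """j_over_m2 = J / M**2, from 0 (Schwarzschild) to 1 (extreme Kerr)."""
    M = 1.0
    J = j_over_m2 * M**2
    M_irr = sqrt(0.5 * (M**2 + sqrt(M**4 - J**2)))
    return (M - M_irr) / M

for spin in (0.0, 0.5, 0.9, 1.0):
    print(f"J/M^2 = {spin:.1f}: extractable fraction = {extractable_fraction(spin):.3f}")
# The extreme Kerr case gives 1 - 1/sqrt(2), about 0.29: nearly thirty percent of the
# hole's total mass-energy, compared with a fraction of a percent for nuclear fusion.
```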

That seems to leave us in pretty good shape. Everything so far has been completely consistent with current physics. We have kernels that can be spun up and spun down by well-defined procedures — and if we allow that McAndrew could somehow take a kernel past the extreme form, we would indeed have something with a naked singularity. It seems improbable that such a physical situation could exist, but if it did, space-time there would be highly peculiar. The existence of certain space-time symmetry directions — called Killing vectors — that we find for all usual Kerr-Newman black holes would not be guaranteed. Everything is fine.

Or is it?

Oppenheimer and Snyder pointed out that black holes are created when big masses, larger than the Sun, contract under gravitational collapse. The kernels that we want are much smaller than that. We need to be able to move them around the solar system, and the gravitational field of an object the mass of the Sun would tear the system apart. Unfortunately, there was no prescription in Oppenheimer’s work, or elsewhere, to allow us to make small black holes.

Stephen Hawking finally came to the rescue. Apart from being created by collapsing stars, he said, black holes could also be created in the extreme conditions of pressure that existed during the Big Bang that started our Universe. Small black holes, weighing no more than a hundredth of a milligram, could have been born then. Over billions of years, these could interact with each other to produce more massive black holes, of any size you care to mention. We seem to have the mechanism that will produce the kernels of the size we need.

Unfortunately, what Hawking gave he soon took away. In perhaps the biggest surprise of all in black hole theory, he showed that black holes are not black.

General relativity and quantum theory were both developed in this century, but they have never been combined in a satisfactory way. Physicists have known this and been uneasy about it for a long time. In attempting to move towards what John Wheeler terms the “fiery marriage of general relativity with quantum theory,” Hawking studied quantum mechanical effects in the vicinity of a black hole. He found that particles and radiation can (and must) be emitted from the hole. The smaller the hole, the faster the rate of radiation. He was able to relate the mass of the black hole to a temperature, and as one would expect a “hotter” black hole pours out radiation and particles much faster than a “cold” one. For a black hole the mass of the Sun, the associated temperature is lower than the background temperature of the Universe. Such a black hole receives more than it emits, so it will steadily increase in mass. However, for a small black hole, with the few billion tons of mass that we want in a kernel, the temperature is so high (ten billion degrees) that the black hole will radiate itself away in a gigantic and rapid burst of radiation and particles. Furthermore, a rapidly spinning kernel will preferentially radiate particles that decrease its spin, and a highly charged one will prefer to radiate charged particles that reduce its overall charge.
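To put rough numbers on this, the sketch below uses the standard Hawking temperature formula, T = ħc³/(8πGMk_B); the two masses chosen are my own illustrations.

```python
# Hawking temperature as a function of black hole mass,
#   T = hbar * c**3 / (8 * pi * G * M * k_B)
from math import pi

hbar = 1.055e-34   # Planck's constant over 2*pi, J s
c    = 2.998e8     # speed of light, m/s
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
k_B  = 1.381e-23   # Boltzmann's constant, J/K

def hawking_temperature(mass_kg):
    return hbar * c**3 / (8 * pi * G * mass_kg * k_B)

M_sun    = 1.989e30   # a solar-mass hole: far colder than the 2.7 K background
M_kernel = 1.0e13     # ten billion metric tons, a kernel-sized hole
print(f"solar-mass hole:   {hawking_temperature(M_sun):.1e} K")
print(f"kernel-sized hole: {hawking_temperature(M_kernel):.1e} K")   # ~1e10 K, ten billion degrees
```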

These results are so strange that in 1972 and 1973 Hawking spent a lot of time trying to find the mistake in his own analysis. Only when he had performed every check that he could think of was he finally forced to accept the conclusion: black holes aren’t black after all; and the smallest black holes are the least black.

That gives us a problem when we want to use power kernels in a story. First, the argument that they are readily available, as leftovers from the birth of the Universe, has been destroyed. Second, a Kerr-Newman black hole is a dangerous object to be near. It gives off high energy radiation and particles.

This is the point where the science of Kerr-Newman black holes stops and the science fiction begins. I assume in these stories that there is some as-yet-unknown natural process which creates sizeable black holes on a continuing basis. They can’t be created too close to Earth, or we would see them. However, there is plenty of room outside the known Solar System — perhaps in the region occupied by the long-period comets, from beyond the orbit of Pluto out to perhaps a light-year from the Sun.

Second, I assume that a kernel can be surrounded by a shield (not of matter, but of electromagnetic fields) which is able to reflect all the emitted particles and radiation back into the black hole. Humans can thus work close to the kernels without being fried in a storm of radiation and high-energy particles.

Even surrounded by such a shield, a rotating black hole would still be noticed by a nearby observer. Its gravitational field would still be felt, and it would also produce a curious effect known as “inertial dragging.”

We have pointed out that the inside of a black hole is completely shielded from the rest of the Universe, so that we can never know what is going on there. It is as though the inside of a black hole is a separate Universe, possibly with its own different physical laws. Inertial dragging adds to that idea. We are used to the notion that when we spin something around, we do it relative to a well-defined and fixed reference frame. Newton pointed out in his Principia Mathematica that a rotating bucket of water, from the shape of the water’s surface, provides evidence of an “absolute” rotation relative to the stars. This is true here on Earth, or over in the Andromeda Galaxy, or out in the Virgo Cluster. It is not true, however, near a rotating black hole. The closer that we get to one, the less that our usual absolute reference frame applies. The kernel defines its own absolute frame, one that rotates with it. Closer than a certain distance to the kernel (the “static limit” mentioned earlier) everything must revolve — dragged along and forced to adopt the rotating reference frame defined by the spinning black hole.


* * *

The McAndrew balanced drive.

This device makes a first appearance in the second chronicle, and is assumed in all the subsequent stories.

Let us begin with well-established science. Again it starts at the beginning of the century, in the work of Einstein. In 1908, he wrote as follows:

“We… assume the complete physical equivalence of a gravitational field and of a corresponding acceleration of the reference system…”

And in 1913:

“An observer enclosed in an elevator has no way to decide whether the elevator is at rest in a static gravitational field or whether the elevator is located in gravitation-free space in an accelerated motion that is maintained by forces acting on the elevator (equivalence hypothesis).”

This equivalence hypothesis, or equivalence principle, is central to general relativity. If you could be accelerated in one direction at a thousand gees, and simultaneously pulled in the other direction by an intense gravitational force producing a thousand gees, you would feel no force whatsoever. It would be just the same as if you were in free fall.

As McAndrew said, once you realize that fact, the rest is mere mechanics. You take a large circular disk of condensed matter (more on that in a moment), sufficient to produce a gravitational acceleration of, say, 50 gees on a test object (such as a human being) sitting on the middle of the plate. You also provide a drive that can accelerate the plate away from the human at 50 gees. The net force on the person at the middle of the plate is then zero. If you increase the acceleration of the plate gradually, from zero to 50 gees, then to remain comfortable the person must also be moved in gradually, starting well away from the disk and finishing in contact with it. The life capsule must thus move in and out along the axis of the disk, depending on the ship’s acceleration: high acceleration, close to the disk; low acceleration, far from the disk.

There is one other variable of importance, and that is the tidal forces on the human passenger. These are caused by the changes in gravitational force with distance — it would be no good having a person’s head feeling a force of one gee, if his feet felt a force of thirty gees. Let us therefore insist that the rate of change of acceleration be no more than one gee per meter when the acceleration caused by the disk is 50 gees.

The gravitational acceleration produced along the axis of a thin circular disk of matter of total mass M and radius R is a textbook problem of classical potential theory. Taking the radius of the disk to be 50 meters, the gravitational acceleration acting on a test object at the center of the disk to be 50 gees, and the tidal force there to be one gee per meter, we can solve for the total mass M, together with the gravitational and tidal forces acting on a body at different distances Z along the axis of the disk.

We find that at a distance of 246 meters from the center of the plate, the plate’s gravitational pull on the passengers is 1 gee, so with the drive off they feel a net force of 1 gee; at zero meters (on the plate itself) the pull is 50 gees, so with the drive accelerating them at 50 gees they feel as though they are in free fall. The tidal force reaches its maximum value of one gee per meter when the passengers are closest to the disk.

This device will actually work as described, with no science fiction involved at all, if you can provide the plate of condensed matter and the necessary drive. Unfortunately, this turns out to be nontrivial. All the distances are reasonable, and so are the tidal forces. What is much less reasonable is the mass of the disk that we have used. It is a little more than 9 trillion tons; such a disk 100 meters across and one meter thick would have an average density of 1,170 tons per cubic centimeter.
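For anyone who wants to check that arithmetic, the sketch below (mine, not anything from the chronicle) reproduces the numbers from the textbook formula for the on-axis gravitational acceleration of a thin uniform disk, g(z) = (2GM/R²)(1 − z/√(z² + R²)).

```python
# Check of the balanced-drive numbers, using the on-axis field of a thin uniform disk:
#   g(z) = (2*G*M/R**2) * (1 - z / sqrt(z**2 + R**2))
from math import sqrt, pi

G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
gee = 9.81        # one gee, m/s^2
R   = 50.0        # disk radius, m

# Require g(0) = 2*G*M/R**2 = 50 gees at the center of the disk face:
M = 50 * gee * R**2 / (2 * G)
print(f"disk mass: {M:.2e} kg (~{M / 1e3 / 1e12:.1f} trillion metric tons)")   # ~9.2e15 kg

# Average density if the disk is 100 m across and 1 m thick:
density = M / (pi * R**2 * 1.0)                              # kg per m^3
print(f"density: {density / 1e3 / 1e6:,.0f} tons per cm^3")  # ~1,170

def g_disk(z):
    return (2 * G * M / R**2) * (1 - z / sqrt(z**2 + R**2))

# Distance at which the disk's pull has fallen to one gee (the drive-off position):
print(f"pull at z = 246 m: {g_disk(246.0) / gee:.2f} gee")

# Tidal gradient at the disk face: dg/dz at z = 0 is 2*G*M/R**3 = g(0)/R in magnitude,
# i.e. one gee per meter when g(0) is 50 gees and R is 50 meters.
print(f"tidal gradient at z = 0: {2 * G * M / R**3 / gee:.2f} gee per meter")
```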

This density is modest compared with that found in a neutron star, and tiny compared with what we find in a black hole. Thus we know that such densities do exist in the Universe. However, no materials available to us on Earth today even come close to such high values — they have densities that fall short by a factor of more than a million. And the massplate would not work as described, without the dense matter. We have a real problem.

It’s science fiction time again: let us assume that in a couple of hundred years we will be able to compress matter to very high densities, and hold it there using powerful electromagnetic fields. If that is the case, the massplate needed for McAndrew’s drive can be built. It’s certainly massive, but that shouldn’t be a limitation — the Solar System has plenty of spare matter available for construction materials. And although a 9 trillion ton mass may sound a lot, it’s tiny by celestial standards, less than the mass of a modest asteroid.

With that one extrapolation of today’s science it sounds as though we can have the McAndrew balanced drive. We can even suggest how that extrapolation might be performed, with plausible use of present physics.

Unfortunately, things are not as nice as they seem. There is a much bigger piece of science fiction that must be introduced before the McAndrew drive can exist as a useful device. We look at that next, and note that it is a central concern of the third chronicle.

Suppose that the drive mechanism is the most efficient one consistent with today’s physics. This would be a photon drive, in which any fuel is completely converted to radiation and used to propel the ship. There is certainly nothing in present science that suggests such a drive is theoretically impossible, and some analysis of matter-antimatter reactions indicates that the photon drive could one day be built. Let us assume that we know how to construct it. Then, even with this “ultimate” drive, McAndrew’s ship would have problems. It’s not difficult to calculate that with a fifty gee drive, the conversion of matter to radiation needed to keep the drive going will quickly consume the ship’s own mass. More than half the mass will be gone in a few days, and McAndrew’s ship will disappear from under him.
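That claim follows from the standard relativistic photon-rocket relation, m(τ)/m₀ = exp(−aτ/c), where τ is shipboard time; the little check below is my own.

```python
# Mass consumption of an ideal photon rocket at constant proper acceleration a:
# the remaining mass fraction after proper time tau is m(tau)/m0 = exp(-a * tau / c).
from math import log

c = 2.998e8        # speed of light, m/s
a = 50 * 9.81      # 50 gees, m/s^2

# Proper (shipboard) time for the ship to burn away half of its own mass:
tau_half = (c / a) * log(2.0)
print(f"time to lose half the mass: {tau_half / 86400:.1f} days")   # ~4.9 days
```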

Solution of this problem calls for a lot more fictional science than the simple task of producing stable condensed matter. We have to go back to present physics and look for loopholes. We need to find inconsistencies in the overall picture of the Universe provided by present day physics, and exploit these as necessary.

The best place to seek inconsistencies is where we already know we will find them — in the meeting of general relativity and quantum theory. If we calculate the energy associated with an absence of matter in quantum theory, the “vacuum state,” we do not, as common sense would suggest, get zero. Instead we get a large, positive value per unit volume. In classical thinking, one could argue that the zero point of energy is arbitrary, so that one can simply start measuring energies from the vacuum state value. But if we accept general relativity, this option is denied to us. Energy, of any form, produces space-time curvature. We are therefore not allowed to change the definition of the origin of the energy scale. Once this is accepted, the energy of the vacuum state cannot be talked out of existence. It is real, if elusive, and its presence provides the loophole that we need.

Again, we are at the point where the science fiction enters. If the vacuum state has an energy associated with it, I assume that this energy is capable of being tapped. Doesn’t this then, according to relativity (E = mc²), suggest that there is also mass associated with the vacuum, contrary to what we think of as vacuum? Yes, it does, and I’m sorry about that, but the paradox is not of my creation. It is implicit in the contradictions that arise as soon as you try to put general relativity and quantum theory together.

Richard Feynman, one of the founders of quantum electrodynamics, addressed the question of the vacuum energy, and computed an estimate for the equivalent mass per unit volume. The estimate came out to two billion tons per cubic centimeter. The energy in two billion tons of matter is more than enough to boil all Earth’s oceans (powerful stuff, vacuum). Feynman, commenting on his vacuum energy estimate, remarks:

“Such a mass density would, at first sight at least, be expected to produce very large gravitational effects which are not observed. It is possible that we are calculating in a naive manner, and, if all of the consequences of the general theory of relativity (such as the gravitational effects produced by the large stresses implied here) were included, the effects might cancel out; but nobody has worked all this out yet. It is possible that some cutoff procedure that not only yields a finite energy for the vacuum state but also provides relativistic invariance may be found. The implications of such a result are at present completely unknown.”

With that degree of uncertainty at the highest levels of present-day physics, I feel not at all uncomfortable in exploiting the troublesome vacuum energy to service McAndrew’s drive.
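Incidentally, the “boil all Earth’s oceans” remark a few paragraphs back is simple arithmetic. In the sketch below, the ocean mass and the heat needed per kilogram are round figures that I have supplied for illustration; only the two-billion-ton estimate comes from Feynman.

```python
# Rough check: energy equivalent of two billion tons of matter (E = m * c**2),
# compared with the energy needed to heat and boil away Earth's oceans.
c = 2.998e8                        # speed of light, m/s
m_vac = 2e9 * 1e3                  # two billion metric tons, in kg
E_vac = m_vac * c**2               # ~1.8e29 J

ocean_mass  = 1.4e21               # kg, approximate total mass of the oceans (assumed)
heat_per_kg = 85 * 4186 + 2.26e6   # J/kg: warm from ~15 C to 100 C, then vaporize (assumed)
E_boil = ocean_mass * heat_per_kg  # ~3.7e27 J

print(f"vacuum-equivalent energy in one cm^3: {E_vac:.1e} J")
print(f"energy to boil the oceans:            {E_boil:.1e} J")
print(f"ratio: about {E_vac / E_boil:.0f} to 1")
```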

The third chronicle introduces two other ideas that are definitely science fiction today, even if they become science fact a few years from now. If there are ways to isolate the human central nervous system and keep it alive independently of the body, we certainly don’t know much about them. On the other hand, I see nothing that suggests this idea is impossible in principle — heart transplants were not feasible forty years ago, and until this century blood transfusions were rare and highly dangerous. A century hence, today’s medical impossibilities should be routine.

The Sturm Invocation for vacuum survival is also invented, but I believe that it, like the Izaak Walton introduced in the seventh chronicle, is a logical component of any space-oriented future. Neither calls for technology beyond what we have today. The hypnotic control implied in the Invocation, though advanced for most practitioners, could already be achieved. And any competent engineering shop could build a Walton for you in a few weeks — I am tempted to patent the idea, but fear that it would be rejected as too obvious or inevitable a development.


* * *

Life in space and the Oort cloud.

Most chronicles take place at least partly in the Halo, or the Outer Solar System, which I define to extend from the distance of Pluto from the Sun, out to a little over a light-year. Within this radius, the Sun is still the primary gravitational influence, and controls the orbits of objects moving out there.

To give an idea of the size of the Halo, we note that Pluto lies at an average distance of about 6 billion kilometers from the Sun. This is about forty astronomical units, where the astronomical unit, usually abbreviated to AU, is defined as the mean distance of the Earth from the Sun. The AU provides a convenient yardstick for measurements within the Solar System. One light-year is about 63,000 AU (inches in a mile, is how I remember it). So the volume of space in the Halo is 4 billion times as large as the sphere enclosing the nine known planets.
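The factor of 4 billion is just the cube of the ratio of the two radii, as a one-line check (mine, not part of the original argument) confirms:

```python
# Volume of the Halo (radius ~1 light-year ~ 63,000 AU) compared with the volume of a
# sphere enclosing the planets (radius ~40 AU): volumes scale as the cube of the radius.
ratio = (63_000 / 40) ** 3
print(f"{ratio:.1e}")   # ~3.9e9, about 4 billion
```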

By Solar System standards, the Halo is thus a huge region. But beyond Neptune and Pluto, we know little about it. There are a number of “trans-Neptunian objects,” but no one knows how many. Some of them may be big enough to qualify as planets. The search for Pluto was inspired early this century by differences between theory and observation in the orbits of Uranus and Neptune. When Pluto was found, it soon became clear that it was not nearly heavy enough to produce the observed irregularities. The obvious explanation is yet another planet, farther out than the ones we know.

Calculations of the orbit and size of a tenth planet needed to reconcile observation and theory for Uranus and Neptune suggest a rather improbable object, out of the orbital plane that all the other planets move in and about seventy times the mass of the Earth. I don’t believe this particular object exists.

On the other hand, observational equipment and techniques for faint objects are improving rapidly. The number of known trans-Neptunian objects increases almost every month.

The other thing we know for sure about the Halo is that it is populated by comets. The Halo is often called the Oort cloud, since the Dutch astronomer Oort suggested thirty years ago that the entire Solar System is enveloped by a cloud of cometary material, to a radius of perhaps a light-year. He regarded this region as a “cometary reservoir,” containing perhaps a hundred billion comets. Close encounters between comets out in the Halo would occasionally disturb the orbit of one of them enough to divert it to the Inner System, where it would appear as a long-period comet when it came close enough to the Sun. Further interactions with Jupiter and the other planets would then sometimes convert the long-period comet to a short-period comet, such as Halley’s or Encke’s comet, which we observe repeatedly each time they swing by close to the Sun.

Most comets, however, continue their lonely orbits out in the cloud, never approaching the Inner System. They do not have to be small to be invisible to us. The amount of sunlight a body receives is inversely proportional to the square of its distance from the Sun; the apparent area it presents to our telescopes is also inversely proportional to the square of its distance from Earth. For bodies in the Halo, the reflected light that we receive from them thus varies as the inverse fourth power of their distance from the Sun. A planet with the size and composition of Uranus, but half a light-year away, would be seven trillion times as faint. And we should remember that Uranus itself is faint enough that it was not discovered until 1781, when high-quality telescopes were available. So far as present-day detection powers are concerned, there could be almost anything out there in the Halo.
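The “seven trillion” figure follows directly from that inverse fourth-power law. In the check below, the only inputs are the 19.2 AU distance of Uranus from the Sun and the 63,000 AU light-year; the calculation is mine.

```python
# Reflected brightness of a distant body falls off as the fourth power of its distance.
# Compare Uranus at ~19.2 AU with the same body at half a light-year (~31,500 AU).
AU_PER_LIGHT_YEAR = 63_000

d_uranus = 19.2                        # AU
d_halo   = 0.5 * AU_PER_LIGHT_YEAR     # AU

faintness_factor = (d_halo / d_uranus) ** 4
print(f"{faintness_factor:.1e}")       # ~7e12, about seven trillion times fainter
```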

One of the things that may be there is life. In a carefully argued but controversial theory developed over the past thirty years, Hoyle and Wickramasinghe have advanced the idea that space is the natural place for the creation of “pre-biotic” molecules in large quantities. Pre-biotic molecules are compounds such as carbohydrates, amino acids, and chlorophyll, which form the necessary building blocks for the development of life. Simpler organic molecules, such as methyl cyanide and ethanol, have already been observed in interstellar clouds.

Hoyle and Wickramasinghe go further. They state explicitly: “We shall argue that primitive living organisms evolve in the mixture of organic molecules, ices and silicate smoke which make up a comet’s head.”

The science fiction of the fourth chronicle consists of these two assumptions:

1. The complex organic molecules described by Hoyle and Wickramasinghe are located in a particular region of the Halo, a “life ring” that lies between 3,200 and 4,000 AU from the Sun;

2. The “primitive living organisms” have evolved quite a bit further than Hoyle and Wickramasinghe expected, on at least one body of the Oort cloud.


* * *

Missing matter and the beginning of the Universe.

Today’s so-called “standard model” of cosmology suggests that the Universe began in a “Big Bang” somewhere between ten and twenty billion years ago. Since we have been able to study the Universe in detail for less than four hundred years (the telescope was invented about 1608), any attempt to say something about the origin of the Universe implies considerable extrapolation into the past. There is a chance of success only because the basic physical laws of the Universe that govern events on both the smallest scale (atoms and subatomic particles) and the largest scale (stars, galaxies, and clusters of galaxies) appear not to have changed since its earliest days.

The primary evidence for a finite age for the whole Universe comes from observation of distant galaxies. When we observe the light that they emit, we find, as was suggested by Carl Wirtz in 1924 and confirmed by Edwin Hubble in 1929, that more distant galaxies appear redder than nearer ones.

To be more specific, in the fainter (and therefore presumably more distant) galaxies, every wavelength of light emitted has been shifted toward a longer wavelength. The question is, what could cause such a shift?

The most plausible mechanism, to a physicist, is called the Doppler effect. According to the Doppler effect, light from a receding object will be shifted to longer (redder) wavelengths; light from an approaching object will be shifted to shorter (bluer) wavelengths. Exactly the same thing works for sound, which is why a speeding police car’s siren seems to drop in pitch as it passes by.

If we accept the Doppler effect as the cause of the reddened appearance of the galaxies, we are led (as was Hubble) to an immediate conclusion: the whole Universe must be expanding, at a close to constant rate, because the red shift of the galaxies corresponds to their brightness, and therefore to their distance.

Note that this does not mean that the Universe is expanding into some other space. There is no other space. It is the whole Universe — everything there is — that has grown over time to its present dimension.

And from this we can draw another immediate conclusion. If expansion proceeded in the past as it does today, there must have been a time when everything in the whole Universe was drawn together to a single point. It is logical to call the time that has elapsed since everything was in that infinitely dense singularity the age of the Universe. The Hubble galactic redshift allows us to calculate how long ago that happened.

Our estimate is bounded on the one hand by the constancy of the laws of physics (how far back can we go, before the Universe would be totally unrecognizable and far from the place where we believe today’s physical laws are valid?); and on the other hand by our knowledge of the distance of the galaxies, as determined by other methods.

Curiously, it is the second problem that forms the major constraint. When we say that the Universe is between ten and twenty billion years old, that uncertainty of a factor of two betrays our ignorance of galactic distances.
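The link between galactic distances and the age of the Universe runs through the Hubble constant: to first order the age of a uniformly expanding Universe is simply 1/H₀, and the factor-of-two spread in H₀ then current (roughly 50 to 100 kilometers per second per megaparsec) maps directly onto the ten-to-twenty-billion-year range. The figures in the sketch below are my illustration, not values from the text.

```python
# Age of a uniformly expanding Universe, estimated as 1/H0 (the "Hubble time").
SECONDS_PER_YEAR = 3.156e7
METERS_PER_MPC   = 3.086e22

def hubble_time_gyr(H0_km_s_per_mpc):
    H0 = H0_km_s_per_mpc * 1e3 / METERS_PER_MPC   # convert to 1/s
    return 1.0 / H0 / SECONDS_PER_YEAR / 1e9      # billions of years

for H0 in (50, 100):
    print(f"H0 = {H0:3d} km/s/Mpc  ->  ~{hubble_time_gyr(H0):.0f} billion years")
# Roughly 20 and 10 billion years: the uncertainty in galactic distances (and hence
# in H0) is exactly the factor-of-two spread in the quoted age of the Universe.
```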

It is remarkable that observation of the faint agglomerations of stars known as galaxies leads us, very directly and cleanly, to the conclusion that we live in a Universe of finite and determinable age. A century ago, no one could have offered even an approximate age for the Universe. For an upper bound, most nonreligious scientists would probably have said “forever.” For a lower bound, all they had was the age of the Earth.

Asking one question, “How old is the Universe?” inevitably leads us to another: “What was the Universe like, ten or twenty billion years ago, when it was compressed into a very small volume?”

That question was tackled by a Belgian, Georges Lemaître. Early in the 1930s Lemaître went backwards mentally in time, to a period when the whole Universe was a “primeval atom.” In this first and single atom, everything was squashed into a sphere only a few times as big as the Sun, with no space between atoms, or even between nuclei. As Lemaître saw it, this unit must then have exploded, fragmenting into the atoms and stars and galaxies and everything else in the Universe that we know today. He might justifiably have called it the Big Bang, but he didn’t. That name seems to have been coined by Fred Hoyle, whom we met in the previous chronicle.

Lemaître did not ask the next question, namely, where did the primeval atom come from? Since he was an ordained Catholic priest, he probably felt that the answer to that was a given. Lemaître also did not worry too much about the composition of his primeval atom — what was it made of? It might be thought that the easiest assumption is that everything in the Universe was already there, much as it is now. But that cannot be true, because as we go back in time, the Universe had to be hotter as well as more dense. Before a certain point, atoms as we know them could not exist, because they would be torn apart by the intense radiation that permeated the whole Universe.

The person who did worry about the composition of the primeval atom was George Gamow. In the 1940s, he conjectured that the original stuff of the Universe was nothing more than densely packed neutrons. Certainly, it seemed reasonable to suppose that the Universe at its outset had no net charge, since it seems to have no net charge today. Also, a neutron left to itself has a fifty percent chance that it will, in about thirteen minutes, decay radioactively to form an electron and a proton. One electron and one proton form an atom of hydrogen; and even today, the Universe is predominantly atomic hydrogen. So neutrons could account for most, if not all, of today’s Universe.

If the early Universe was very hot and very dense and all hydrogen, some of it ought to have fused and become helium, carbon, and other elements. The question, How much of each? was one that Gamow and his student, Ralph Alpher, set out to answer. They calculated that about a quarter of the matter in the primeval Universe should have turned to helium, a figure very consistent with the present composition of the oldest stars. They published their results on April 1, 1948. In one of physics’ best-known jokes, Hans Bethe (pronounced Bay-ter, like the Greek letter Beta) allowed his name to be added to the paper, although he had nothing to do with its writing. The authors thus became Alpher, Bethe, and Gamow.

Apart from showing how to calculate the ratio of hydrogen to helium after the Big Bang, Gamow and his colleagues did one other thing whose full significance probably escaped them. In 1948 they produced an equation that allowed one to compute the present background temperature of the Universe from its age, assuming a Universe that expanded uniformly since its beginning in the Big Bang. The background radiation, corresponding to a temperature of 2.7 degrees above absolute zero, was discovered by Arno Penzias and Robert Wilson in 1964, and made the Big Bang theory fully respectable for the first time.

We now believe that hydrogen fused to form helium when the Universe was between three and four minutes old. What about even earlier times? Let us run the clock backwards, as far as we can towards the Big Bang.

How far back do we want to start the clock? Well, when the Universe was smaller in size, it was also hotter. In a hot enough environment, atoms as we know them cannot hold together. High-energy radiation rips them apart as fast as they form. A good time to begin our backward running of the clock might then be the period when atoms could form and persist as stable units. Although stars and galaxies would not yet exist, at least the Universe would be made up of familiar components, hydrogen and helium atoms that we would recognize.

Atoms can form, and hold together, somewhere between half a million and a million years after the Big Bang. Before that time, matter and radiation interacted continuously, and the Universe was almost opaque to radiation. After it, matter and radiation “decoupled,” became near-independent, and went their separate ways. The temperature of the Universe when this happened was about 3,000 degrees. Ever since then, the expansion of the Universe has lengthened the wavelength of the background radiation, and thus lowered its temperature. The cosmic background radiation discovered by Penzias and Wilson is nothing more than the radiation at the time when it decoupled from matter, now grown old.
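That cooling is nothing more than stretching: the wavelength of the background radiation grows in proportion to the size of the Universe, so the drop from about 3,000 degrees to 2.7 degrees tells us how much expansion has taken place since decoupling. A one-line check (my own):

```python
# The background radiation cools in proportion to the expansion of the Universe.
T_decoupling = 3000.0   # K, temperature when matter and radiation decoupled
T_today      = 2.7      # K, measured temperature of the cosmic background today
print(f"expansion factor since decoupling: ~{T_decoupling / T_today:.0f}")   # ~1100
```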

Continuing backwards, even before atoms could form, helium and hydrogen nuclei and free electrons could combine to form atoms; but they could not remain in combination, because radiation broke them apart. The content of the Universe was, in effect, controlled by radiation energetic enough to prevent the formation of atoms. This situation held from about three minutes to one million years A.C. (After Creation).

If we go back to a period less than three minutes A.C., radiation was even more dominant. It prevented the build-up even of helium nuclei. As noted earlier, the fusion of hydrogen to helium requires high temperatures, such as we find in the center of stars. But fusion cannot take place if it is too hot, as it was before three minutes after the Big Bang. Before helium could form, the Universe had to “cool” to about a billion degrees. All that existed before then were electrons (and their positively charged forms, positrons), neutrons, protons, neutrinos (chargeless particles, until recently assumed to be massless but now thought to possess a tiny mass), and radiation.

Until three minutes A.C., it might seem as though radiation controlled events. But this is not the case. As we proceed farther backwards and the temperature of the primordial fireball continues to increase, we reach a point where the temperature is so high (above ten billion degrees) that large numbers of electron-positron pairs can be created from pure radiation. That happened from one second up to fourteen seconds A.C. After that, the number of electron-positron pairs decreased rapidly. Fewer were being generated than were annihilating themselves and returning to pure radiation. After the Universe cooled to ten billion degrees, neutrinos also decoupled from other forms of matter.

Still we have a long way to go, physically speaking, to the moment of creation. As we continue backwards, temperatures rise and rise. At a tenth of a second A.C., the temperature of the Universe is thirty billion degrees. The Universe is a soup of electrons, protons, neutrons, neutrinos, and radiation. As the kinetic energy of particle motion becomes greater and greater, effects caused by differences of particle mass are less important. At thirty billion degrees, an electron easily carries enough energy to convert a proton into the slightly heavier neutron. Thus in this period, free neutrons are constantly trying to decay to form protons and electrons; but energetic proton-electron collisions go on remaking neutrons.

We keep the clock running. Now the important time intervals become shorter and shorter. At one ten-thousandth of a second A.C., the temperature is one thousand billion degrees. The Universe is so small that the density of matter, everywhere, is as great as that in the nucleus of an atom today (about 100 million tons per cubic centimeter; a fair-sized asteroid, at this density, would squeeze down to fit in a match box). Modern theory says that the nucleus is best regarded not as protons and neutrons, but as quarks, elementary particles from which the neutrons and protons themselves are made. Thus at this early time, 0.0001 seconds A.C., the Universe was a sea of quarks, electrons, neutrinos, and energetic radiation. We move on, to the time, 10^-36 seconds A.C., when the Universe went through a super-rapid “inflationary” phase, growing from the size of a proton to the size of a basketball in about 5 x 10^-32 seconds. We are almost back as far as we can go. Finally we reach a time 10^-43 seconds A.C. (called the Planck time), when according to a class of theories known as supersymmetry theories, the force of gravity decoupled from everything else, and remains decoupled to this day.
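The Planck time itself comes from combining the three fundamental constants, t_P = √(ħG/c⁵); the check below is mine, not part of the chronicle.

```python
# The Planck time, t_P = sqrt(hbar * G / c**5).
from math import sqrt

hbar = 1.055e-34   # J s
G    = 6.674e-11   # m^3 kg^-1 s^-2
c    = 2.998e8     # m/s

t_planck = sqrt(hbar * G / c**5)
print(f"{t_planck:.1e} s")   # ~5.4e-44 s, of order 10^-43 seconds
```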

This may already sound like pure science fiction. It is not. It is today’s science — though it certainly may be wrong. But at last we have reached the time when McAndrew’s “hidden matter” was created. And today’s Universe seems to require that something very like it exist.

The argument for hidden matter goes as follows: The Universe is expanding. Every cosmologist today agrees on that. Will it go on expanding forever, or will it one day slow to a halt, reverse direction, and fall back in on itself to end in a Big Crunch? Or is the Universe poised on the infinitely narrow dividing line between expansion and ultimate contraction, so that it will increase more and more slowly, and finally (but after infinite time) stop its growth?

The thing that decides which of these three possibilities will occur is the total amount of mass in the Universe, or rather, since we do not care what form mass takes and mass and energy are totally equivalent, the future of the Universe is decided by the total mass-energy content per unit volume.

If the mass-energy is too big, the Universe will end in the Big Crunch. If it is too small, the Universe will fly apart forever. And only in the Goldilocks situation, where the mass-energy is “just right,” will the Universe ultimately reach a “flat” condition. The amount of matter needed to stop the expansion is not large, by terrestrial standards. It calls for only three hydrogen atoms per cubic meter.
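The “three hydrogen atoms per cubic meter” comes from the standard expression for the critical density, ρ_c = 3H₀²/(8πG). In the sketch below the Hubble constant is taken as 50 kilometers per second per megaparsec, at the low end of the range then in use; that particular choice is my own assumption.

```python
# Critical density of the Universe, rho_c = 3 * H0**2 / (8 * pi * G),
# expressed as a number of hydrogen atoms per cubic meter.
from math import pi

G              = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
METERS_PER_MPC = 3.086e22
M_HYDROGEN     = 1.67e-27    # mass of a hydrogen atom, kg

H0 = 50 * 1e3 / METERS_PER_MPC        # assumed Hubble constant, converted to 1/s
rho_c = 3 * H0**2 / (8 * pi * G)      # ~4.7e-27 kg/m^3
print(f"critical density: {rho_c:.1e} kg/m^3")
print(f"hydrogen atoms per cubic meter: {rho_c / M_HYDROGEN:.1f}")   # ~2.8, about three
```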

Is there that much available?

If we estimate the mass and energy from visible material in stars and galaxies, we find a value nowhere near the “critical density” needed to make the Universe finally flat. If we take the critical mass-energy density needed to just halt the expansion as our unit, then visible matter provides a value of only about 0.01.

There is evidence, though, from the rotation of galaxies, that there is a lot more “dark matter” present there than we see as stars. It is not clear what this dark matter is — black holes, very dim stars, clouds of neutrinos — but when we are examining the future of the Universe, we don’t care. All we worry about is the amount. And that amount, from galactic dynamics, could be at least ten times as much as the visible matter. Enough to bring the density to 0.1, or possibly even 0.2. But no more than that.

One might say, all right, that’s it. There is not enough matter in the Universe to stop the expansion, by a factor of about ten, so we have confirmed that we live in a forever-expanding Universe. Recent (1999) observations seem to confirm that result.

Unfortunately, that is not the answer that most cosmologists would really like to hear. The problem comes because the most acceptable cosmological models tell us that if the density is as much as 0.1 today, then in the past it must have been much closer to unity. For example, at one second A.C., the density would have had to be within one part in a million billion of unity, in order for it to be 0.1 today. It would be an amazing coincidence if, by accident, the actual density were so close to the critical density.

Most cosmologists therefore say that, today’s observations notwithstanding, the density of the Universe is really exactly equal to the critical value. In this case, the Universe will expand forever, but more and more slowly.

The problem, of course, is then to account for the matter that we don’t observe. Where could the “missing matter” be that makes up the other nine-tenths of the Universe?

There are several candidates. One suggestion is that the Universe is filled with energetic (“hot”) neutrinos, each with a small but non-zero mass. However, there are problems with the Hot Neutrino theory. If they are the source of the mass that stops the expansion of the Universe, the galaxies, according to today’s models, should not have developed as early as they did in the history of the Universe.

What about other candidates? Well, the class of theories already alluded to, known as supersymmetry theories, requires that as-yet undiscovered particles exist.

There are axions, which are particles that help to preserve certain symmetries (charge, parity, and time-reversal) in elementary particle physics; and there are photinos, gravitinos, and others, based on theoretical supersymmetries between particles and radiation. These candidates are slow moving (and so considered “cold”) but some of them have substantial mass. They too would have been around soon after the Big Bang. These slow-moving particles clump more easily together, so the formation of galaxies could take place earlier than with the hot neutrinos. We seem to have a better candidate for the missing matter — except that no one has yet observed the necessary particles. At least neutrinos are known to exist!

Supersymmetry, in a particular form known as superstring theory, offers another possible source of hidden mass. This one is easily the most speculative. Back at a time, 10⁻⁴³ seconds A.C., when gravity decoupled from everything else, a second class of matter may have been created that is able to interact with normal matter and radiation, today, only through the gravitational force. We can never observe such matter, in the usual sense, because our observational methods, from ordinary telescopes to radio telescopes to gamma ray detectors, all rely on electromagnetic interaction with matter.

This “shadow matter” produced at the time of gravitational decoupling lacks any such interaction with the matter of the familiar Universe. We can determine its existence only by the gravitational effects it produces, which, of course, is exactly what we need to “close the Universe,” and also exactly what we needed for the fifth chronicle.

One can thus argue that the fifth chronicle is all straight science; or, if you are more skeptical, that it and the theories on which it is based are both science fiction. I think that I prefer not to give an opinion.


* * *

Invariance and science.

In mathematics and physics, an invariant is something that does not change when certain changes of condition are made. For example, the “connectedness” or “connectivity” of an object remains the same, no matter how we deform its surface shape, provided only that no cutting or merging of surface parts is permitted. A grapefruit and a banana have the same connectedness — one of them can, with a little effort, be squashed to look like the other (at least in principle, though it does sound messy). A coffee cup with one handle and a donut have the same connectedness; but both have a different connectedness from that of a two-handled mug, or from a mug with no handle. You and I have the same connectedness — unless you happen to have had one or both of your ears pierced, or wear a ring through your nose.

The “knottedness” of a piece of rope is similarly unchanging, provided that we keep hold of the ends and don’t break the rope. There is an elaborate vocabulary of knots. A “knot of degree zero” is one that is equivalent to no knot at all, so that pulling the ends of the rope in such a case will give a straight, unknotted piece of rope — a knot trick known to every magician. But when Alexander the Great “solved” the problem of the Gordian Knot by cutting it in two with his sword, he was cheating.

Invariants may sound useless, or at best trivial. Why bother with them? Simply for this reason: they often allow us to make general statements, true in a wide variety of circumstances, where otherwise we would have to deal with lots of specific and different cases.

For example, the statement that a partial differential equation is of elliptic, parabolic, or hyperbolic type is based on a particular invariant, and it tells us a great deal about the possible solutions of such equations before we ever begin to try to solve them. And the statement that a real number is rational or irrational is invariant, independent of the number base that we are using, and it too says something profound about the nature of that number.
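
For readers who want that first invariant made explicit, here is the textbook classification criterion, written out as a LaTeX fragment; it is standard material, not anything drawn from the chronicles, and the sign of the discriminant is what stays unchanged under any smooth change of coordinates.

```latex
% Second-order linear PDE in two independent variables:
%   A u_{xx} + 2B u_{xy} + C u_{yy} + (lower-order terms) = 0.
\[
\begin{cases}
B^2 - AC < 0, & \text{elliptic (e.g.\ Laplace's equation)}\\[2pt]
B^2 - AC = 0, & \text{parabolic (e.g.\ the heat equation)}\\[2pt]
B^2 - AC > 0, & \text{hyperbolic (e.g.\ the wave equation)}
\end{cases}
\]
```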

What about the invariants of physics, which interested McAndrew? Some invariants are so obvious, we may feel they hardly justify being mentioned. For example, we certainly expect the area or volume of a solid body to be the same, no matter what coordinate system we may use to define it.

Similarly, we expect physical laws to be “invariant under translation” (so they don’t depend on the actual position of the measuring instrument) and “invariant under rotation” (it should not matter which direction our experimental system is pointing) and “invariant under time translation” (we ought to get the same results tomorrow as we did yesterday). Most scientists took such invariants for granted for hundreds of years, although each of these is actually making a profound statement about the physical nature of the Universe.

So, too, is the notion that physical laws should be “invariant under constant motion.” But assuming this, and rigorously applying it, led Einstein straight to the theory of special relativity. The idea of invariance under accelerated motion took him in turn to the theory of general relativity.

Both these theories, and the invariants that go with them, are linked inevitably with the name of one man, Albert Einstein. Another great invariant, linear momentum, is coupled in my mind with the names of two men, Galileo Galilei and Isaac Newton. Although the first explicit statement of this invariant is given in Newton’s First Law of Motion (“Every body will continue in its state of rest or of uniform motion in a straight line except in so far as it is compelled to change that state by impressed force.”), Galileo, fifty years earlier, was certainly familiar with the general principle.

Some of the other “great invariants” needed the efforts of many people before they were firmly defined and their significance was appreciated. The idea that mass was an invariant came about through the efforts of chemists, beginning with Dalton and Lavoisier, who weighed combustion products and found that the total was the same before and after. The equivalence of different forms of energy (heat, motion, potential energy, and electromagnetic energy), and the invariance of total energy of all forms, developed even later. It was a combined effort by Count Rumford, Joule, Maxwell, Lord Kelvin, Helmholtz and others. The merger of the two invariants became possible when Einstein showed the equivalence of mass and energy, after which it was only the combined mass-energy total that was conserved.

Finally, although the idea that angular momentum must be conserved seems to arise naturally in classical mechanics from the conservation of linear momentum, in quantum physics it is much more of an independent invariant because particles such as protons, neutrons, electrons, and neutrinos have an intrinsic, internal spin, whose existence is not so much seen as deduced in order to make angular momentum a conserved quantity.

This sounds rather like a circular argument, but it isn’t, because intrinsic spin couples with orbital angular momentum, and quantum theory cannot make predictions that match experiments without both of them. And as McAndrew remarks, Wolfgang Pauli in 1931 introduced the idea of a new particle to physics, the neutrino, just in order to preserve the laws of conservation of energy and momentum.

There are other important invariants in the quantum world. However, some things that “common sense” insists must be invariants may be no such thing. For example, it was widely believed that parity (which is symmetry upon reflection in a mirror) must be a conserved quantity, because the Universe should have no preference for left-handed sub-nuclear processes over right-handed ones. But in 1956, Tsung Dao Lee and Chen Ning Yang suggested this might not be the case, and their radical idea was confirmed experimentally by C.S. Wu’s team in 1957. Today, only a combination of parity, charge, and time-reversal is regarded as a fully conserved quantity.

Given the overall importance of invariants and conservation principles to science, there is no doubt that McAndrew would have pursued any suggestion of a new basic invariant. But if invariants are real, where is the fiction in the sixth chronicle? I’m afraid there isn’t any, because the nature of the new invariant is never defined.

Wait a moment, you may say. What about the Geotron?

That is not fiction science, either, at least so far as principles are concerned. Such an instrument was seriously proposed a few years ago by Robert Wilson, the former director of the Fermilab accelerator. His design called for a donut-shaped device thirty-two miles across, in which protons would be accelerated to very high energies and then strike a metal target, to produce a beam of neutrinos. The Geotron designers wanted to use the machine to probe the interior structure of the Earth, and in particular to prospect for oil, gas, and valuable deep-seated metal deposits.

So maybe there is no fiction at all in the sixth chronicle — just a little pessimism about how long it will take before someone builds a Geotron.


* * *

Rogue planets.

The Halo beyond the known Solar System offers so much scope for interesting celestial objects of every description that I assume we will find a few more there. In the second chronicle, I introduced collapsed objects, high-density bodies that are neither stars nor conventional planets. The dividing line between stars and planets is usually decided by whether or not the center of the object supports a nuclear fusion process and contains a high-density core of “degenerate” matter. Present theories place that dividing line at about a hundredth of the Sun’s mass — smaller than that, you have a planet; bigger than that, you must have a star. I assume that there are in-between bodies out in the Halo, made largely of degenerate matter but only a little more massive than Jupiter.

I also assume that there is a “kernel ring” of Kerr-Newman black holes, about 300 to 400 AU from the Sun, and that this same region contains many of the collapsed objects. Such bodies would be completely undetectable using any techniques of present-day astronomy. This is science fiction, not science.

Are rogue planets also science fiction? This brings us to Vandell’s Fifth Problem, and the seventh chronicle.

David Hilbert did indeed pose a set of mathematical problems in 1900, and they served as much more than a summary of things that were “hard to solve.” They were concise and exact statements of questions, which, if answered, would have profound implications for many other problems in mathematics. The Hilbert problems are both deep and difficult, and have attracted the attention of almost every mathematician of the twentieth century. Several problems of the set, for example, ask whether certain numbers are “transcendental” — which means they can never occur as solutions to the usual equations of algebra (more precisely, they cannot be roots of finite algebraic equations with algebraic coefficients). These questions were not disposed of until 1930, when Kusmin and Siegel proved a more general result than the one that Hilbert had posed. In 1934 Gelfond provided another generalization.

At the moment there is no such “super-problem” set defined for astronomy and cosmology. If there were, the one I invented as Vandell’s Fifth Problem would certainly be a worthy candidate, and might take generations to solve. (Hilbert’s Fifth Problem, concerning a conjecture in topological group theory, was finally solved in 1952 by Gleason, Montgomery, and Zippin.) We cannot even imagine a technique, observational instrument or procedure that would have a chance of detecting a rogue planet. The existence, frequency of occurrence, and mode of escape of rogue planets raise many questions concerning the stability of multiple-body systems moving under their mutual gravitational attractions — questions that cannot be answered yet by astronomers and mathematicians.

In general relativity, the exact solution of the “one-body problem” as given by Schwarzschild has been known for more than 80 years. The relativistic “two-body problem,” of two objects orbiting each other under mutual gravitational influence, has not yet been solved. In nonrelativistic or Newtonian mechanics, the two-body problem was disposed of three hundred years ago by Newton. But the nonrelativistic solution for more than two bodies has not been found to this day, despite three centuries of hard work.

A good deal of progress has been made for a rather simpler situation that is termed the “restricted three-body problem.” In this, a small mass (such as a planet or small moon) moves under the influence of two much larger ones (stars or large planets). The large bodies define the gravitational field, and the small body moves in this field without contributing significantly to it. The restricted three-body problem applies to the case of a planet moving in the gravitational field of a binary pair of stars, or an asteroid moving in the combined fields of the Sun and Jupiter. It also offers a good approximation for the motion of a small body moving in the combined field of the Earth and Moon. Thus the problem is of practical interest, and the list of workers who have studied it in the past 200 years includes several of history’s most famous mathematicians: Euler, Lagrange, Jacobi, Poincaré, and Birkhoff. (Lagrange in particular provided certain exact solutions that include the L-4 and L-5 points, famous today as proposed sites for large space colonies.)

The number of papers written on the subject is huge — Victor Szebehely, in a 1967 book on the topic, listed over 500 references, and restricted himself to only the major source works.

Thanks to the efforts of all these workers, a good deal is known about the possible solutions of the restricted three-body problem. One established fact is that the small object cannot be thrown away to infinity by the gravitational interactions of its two large companions. Like much of modern astronomy, this result is not established by looking at the orbits themselves. It is proved by general arguments based on a particular constant of the motion, termed the Jacobian integral.
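
For the curious, here is a minimal Python sketch of that constant of the motion (usually called the Jacobi constant) in the standard nondimensional rotating frame. The formula itself is textbook material; the Earth-Moon mass ratio in the example is just an illustrative value. When the constant is large enough, the corresponding zero-velocity curves close around the primaries, which is the kind of confinement argument referred to above.

```python
import math

def jacobi_constant(x, y, vx, vy, mu):
    """Jacobi constant of the planar circular restricted three-body problem.
    Nondimensional rotating frame: unit separation, unit angular velocity,
    primaries of mass 1-mu and mu at (-mu, 0) and (1-mu, 0)."""
    r1 = math.hypot(x + mu, y)            # distance to the larger primary
    r2 = math.hypot(x - (1.0 - mu), y)    # distance to the smaller primary
    return x*x + y*y + 2.0*(1.0 - mu)/r1 + 2.0*mu/r2 - (vx*vx + vy*vy)

# Example: a particle sitting at rest (in the rotating frame) at the L4 point
mu = 0.01215                              # approximate Earth-Moon mass parameter
C = jacobi_constant(0.5 - mu, math.sqrt(3.0)/2.0, 0.0, 0.0, mu)
print(f"Jacobi constant at L4: {C:.4f}")  # ~2.988; analytically C = 3 - mu + mu^2
```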

Unfortunately, those arguments cannot be applied in the general three-body problem, or in the N-body problem whenever N is bigger than two. It is presently conjectured by astronomers, but not generally proved, that ejection to infinity is possible whenever more than three bodies are involved. In such a situation, the lightest member of the system is most likely to be the one ejected. Thus, rogue planets can probably be produced when a stellar system has more than two stars in it. As it happens, this is rather common. Solitary stars, like the Sun, are in the minority. Once separated from its stellar parents, the chances that a rogue world will ever again be captured to form part of a star system are remote. To this point, the seventh chronicle’s discussion of solitary planets fits known theory, although it is an admittedly incomplete theory.

So how many rogue planets are there? There could conceivably be as many as there are stars, strewn thick across the Galaxy but completely undetectable to our instruments. Half a dozen may lie closer to us than the nearest star. Or they may be an endangered species, vanishingly rare among the varied bodies that comprise the celestial zoo.

In the seventh chronicle I suggest that they are rather common — and that’s acceptable to me as science fiction. Maybe they are, because certainly planets around other stars seem far more common than we used to think. Up to 1996, there was no evidence at all that even one planet existed around any star other than Sol. Now we know of a dozen or more. Every one is Jupiter’s size or bigger, but that does not imply that most planets in the universe are massive. It merely shows that our detection methods can find only big planets. Possibly there are other, smaller planets in every system where a Jupiter-sized giant has been discovered.

If we cannot actually see a planet, how can we possibly know that it exists? There are two methods. First, it is not accurate to say that a planet orbits a star. The bodies orbit around their common center of mass. That means, if the orbit lies at right angles to the direction of the star as seen from Earth, the star’s apparent position in the sky will show a variation over the period of the planetary year. That change will be tiny, but if the planet is large, the movement of the star might be large enough to measure.

The other (and to this date more successful) method of detection relies on the periodic shift in the wavelengths of light that we receive from a star and planet orbiting around their common center of gravity. When the star is approaching us because the planet is moving away from us, the light will be shifted toward the blue. When the star is moving away from us because the planet is approaching us, the star’s light will be shifted toward the red. The tiny difference between these two cases allows us, from the wavelength changes in the star’s light, to infer the existence of a planet in orbit around it.
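
To get a feeling for how small both effects are, here is a rough Python sketch for a Sun-like star with a Jupiter-like planet at 5.2 AU. The setup is my own illustrative example (including the assumed 10-parsec viewing distance), not a description of any particular detection.

```python
import math

G, c   = 6.674e-11, 2.998e8        # SI units
M_sun  = 1.989e30                  # kg
M_jup  = 1.898e27                  # kg
AU     = 1.496e11                  # m

# Reflex motion of the star about the common center of mass
a_planet = 5.2 * AU
period   = 2.0*math.pi*math.sqrt(a_planet**3 / (G*(M_sun + M_jup)))
v_planet = 2.0*math.pi*a_planet / period          # ~13 km/s
v_star   = v_planet * M_jup / M_sun               # ~12.5 m/s

# Radial-velocity (Doppler) method: fractional wavelength shift is v/c
print(f"stellar reflex speed ~{v_star:.1f} m/s, wavelength shift ~{v_star/c:.1e}")

# Astrometric method: angular wobble of the star seen from an assumed 10 parsecs
a_star_au  = 5.2 * M_jup / M_sun                  # star's orbit about the barycenter, AU
wobble_mas = a_star_au / 10.0 * 1000.0            # small-angle rule: arcsec = AU / pc
print(f"astrometric wobble ~{wobble_mas:.1f} milliarcseconds")
```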

Since both methods of detection depend for their success on the planet’s mass being an appreciable fraction of the star’s mass, it is no surprise that we are able to detect only the existence of massive planets, Jupiter-sized or bigger. And so far as rogue worlds are concerned, far from any stellar primary, our methods for the detection of extra-solar planets are no use at all.


* * *

The solar focus.

We go to general relativity again. According to that theory, the gravitational field of the Sun will bend light beams that pass by it (actually, Newtonian theory also turns out to predict a similar effect, a factor of two less in magnitude). Rays of light coming from a source at infinity and just missing the Sun will be bent the most, and they will converge at a distance from the Sun of 550 astronomical units, which is about 82.5 billion kilometers. To gain a feeling for that number, note that the average distance of the planet Pluto from the Sun is 5.9 billion kilometers; the solar focus, as the convergence point is known, is a fair distance out.

Those numbers apply for a spherical Sun. Since Sol rotates and so has a bulge at its equator, the Sun considered as a lens is slightly astigmatic.

If the source of light (or radio signal, which is simply another form of electromagnetic wave) is not at infinity, but closer, then the rays will still be converged in their passage by the Sun, but they will be drawn to a point at a different location. As McAndrew correctly points out in the eighth chronicle, a standard result in geometrical optics applies. If a lens converges a parallel beam of light at a distance F from the lens, then light starting at a distance S from the lens will be converged at a distance D beyond it, where 1/F = 1/S + 1/D.
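
Both numbers are easy to reproduce. The sketch below uses the standard general-relativistic deflection of a grazing ray, α = 4GM/(c²b), and then applies the lens relation quoted above; it is a back-of-the-envelope check under those standard formulas, not a piece of the chronicle.

```python
import math

G, c  = 6.674e-11, 2.998e8      # SI units
M_sun = 1.989e30                # kg
R_sun = 6.96e8                  # m; rays grazing the limb are bent the most
AU    = 1.496e11                # m

# Deflection of a grazing ray, and the distance at which it crosses the axis
alpha = 4.0 * G * M_sun / (c*c * R_sun)            # ~8.5e-6 rad (about 1.75 arcsec)
F = R_sun / alpha
print(f"solar focus ~{F/AU:.0f} AU, or {F/1.0e12:.1f} billion km")   # ~550 AU

# For a source at finite distance S, the relation 1/F = 1/S + 1/D gives the focus at
def focus_beyond_sun(S_au, F_au=550.0):
    return 1.0 / (1.0/F_au - 1.0/S_au)
print(f"{focus_beyond_sun(1.0e5):.0f} AU")         # a source 100,000 AU away focuses near 553 AU
```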

This much is straightforward. The more central element of this chronicle involves far more speculation. When (or, if you prefer, if) will it be possible to produce an artificial intelligence, an “AI,” that rivals or surpasses human intelligence?

How you answer that question depends on which writers you believe. Some, such as Hans Moravec, have suggested that this will happen in fifty years or less. Others, while not accepting any specific date, still feel that it is sure to come to pass. Our brains are, in Marvin Minsky’s words, “computers made of meat.” It may be difficult and take a long time, but eventually we will have an AI able to think as well as or better than we do.

However, not everyone accepts this. Roger Penrose, whom we have already mentioned in connection with energy extraction from kernels, has argued that an AI will never be achieved by the further development of computers as we know them today, because the human brain is “non-algorithmic.”

In a difficult book that was a surprising best-seller, The Emperor’s New Mind (1989), he claimed that some functions of the human brain will never be duplicated by computers developed along today’s lines. The brain, he asserts, performs some functions for which no computer program can be written.

This idea has been received with skepticism and even outrage by many workers in the field of AI and computer science. So what does Penrose say that is so upsetting to so many? He argues that human thought employs physics and procedures drawn from the world of quantum theory. In Penrose’s words, “Might a quantum world be required so that thinking, perceiving creatures, such as ourselves, can be constructed from its substance?”

His answer to his own question is, yes, a quantum world-view is required. In that world, a particle does not necessarily have a well-defined spin, speed, or position. Rather, it has a number of different possible positions or speeds or spins, and until we make an observation of it, all we can know are the probabilities associated with each possible spin, speed, and position. Only when an observation is made does the particle occupy a well-defined state, in which the measured variable is precisely known. This change, from undefined to well-defined status, is called the “collapse of the quantum mechanical wave function.” It is a well-known, if not well-understood, element of standard quantum theory.

What Penrose suggests is that the human brain itself is a kind of quantum device. In particular, the same processes that collapse the quantum mechanical wave function in sub-atomic particles are at work in the brain. When humans are considering many different possibilities, Penrose argues that we are operating in a highly parallel, quantum mechanical mode. Our thinking resolves and “collapses to a thought” at some point when the wave function collapses, and at that time the many millions or billions of possibilities become a single definite idea.

This is certainly a peculiar notion. However, when quantum theory was introduced in the 1920s, most of its ideas seemed no less strange. Now they are accepted by almost all physicists. Who is to say that in another half-century, Penrose will not be equally accepted when he asserts, “there is an essential non-algorithmic ingredient to (conscious) thought processes” and “I believe that (conscious) minds are not algorithmic entities”?

Meanwhile, almost everyone in the AI community (who, it might be argued, are hardly disinterested parties) listens to what Penrose has to say, then dismisses it as just plain wrong. Part of the problem is Penrose’s suggestion as to the mechanism employed within the brain, which seems bizarre indeed.

As he points out in a second book, Shadows of the Mind (Penrose, 1994), he is not the first to suggest that quantum effects are important to human thought. Herbert Fröhlich, in 1968, noted that there was high-frequency microwave activity in the brain, produced, he said, by a biological quantum resonance. In 1992, John Eccles proposed a brain structure called the presynaptic vesicular grid, a kind of crystalline lattice in the brain’s pyramidal cells, as a suitable site for quantum activity.

Penrose himself favors a different location and mechanism. He suggests, though not dogmatically, that the quantum world is evoked in elements of a neuron known as microtubules. A microtubule is a tiny tube, with an outer diameter of about twenty-five nanometers and an inner diameter of fourteen nanometers. The tube is made up of peanut-shaped objects called tubulin dimers. Each dimer has about ten thousand atoms in it. Penrose proposes that each dimer is a basic computational unit, operating using quantum effects. If he is right, the computing power of the brain is grossly underestimated if neurons are considered as the basic computing element. There are about ten million dimers per neuron, and because of their tiny size each one ought to operate about a million times as fast as a neuron can fire. Only with such a mechanism, Penrose argues, can the rather complex behavior of a single-celled animal such as a paramecium (which totally lacks a nervous system) be explained.
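
Taking Penrose’s two figures at face value, the implied gain is easy to quantify. In the Python sketch below, the dimer count per neuron and the speed factor come from the paragraph above; the neuron count and firing rate are my own assumed ballpark values, included only to make the comparison concrete.

```python
dimers_per_neuron = 1.0e7    # "about ten million dimers per neuron"
speed_factor      = 1.0e6    # each dimer "about a million times as fast" as a neuron

neurons     = 1.0e11         # assumed ~100 billion neurons in a human brain
neuron_rate = 1.0e3          # assumed ~1000 firings per second (a generous figure)

classical_ops = neurons * neuron_rate
tubulin_ops   = neurons * dimers_per_neuron * neuron_rate * speed_factor
print(f"{classical_ops:.0e} vs {tubulin_ops:.0e} elementary operations per second")
# The ratio is dimers_per_neuron * speed_factor ~ 1e13: hence "grossly underestimated."
```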

Penrose’s critics point out that microtubules are also found elsewhere in the body, in everything from livers to lungs. Does this mean that your spleen, big toe, and kidneys are to be credited with intelligence?

My own feeling is that Penrose’s ideas sounded a lot better before he suggested a mechanism. The microtubule idea feels weak and unpersuasive.

Fortunately I don’t have to take sides. In the eighth chronicle, I was deliberately silent on how the AI came into existence. However, as a personal observation, I would be much surprised if in our future we do not have human-level AIs, through whatever method of development, before humans routinely travel to the satellites of Jupiter and Saturn; and I believe that the latter will surely happen in less than five hundred years.


* * *

Compressed matter.


We know that compressed matter exists. In a neutron star, matter has been squeezed together so hard that the individual protons and electrons that normally make up atoms have combined to form neutrons. A neutron star with the mass of the Sun can be as little as twenty kilometers across, and a simple calculation tells us that the average density of such a body is about 475 million tons per cubic centimeter. That is still not at the limit of how far matter can be compressed. If the Sun were to become a black hole, as mentioned earlier, its Schwarzschild radius would be about three kilometers and its mean density twenty billion tons per cubic centimeter. McAndrew’s illustrious but unfortunate father developed an unspecified way of squeezing matter down to something between neutron star and black hole densities.

It is easy to calculate what it would be like if you were unwise enough to take hold of a speck of such compressed matter. And it might well be a speck. An eighteen thousand ton asteroid in normal conditions would be a substantial lump of rock about twenty meters across. Squeeze it to a density of three billion tons per cubic centimeter, and it becomes a tiny ball with radius 0.11 millimeters. Its surface gravity is almost ten thousand gees.

The gravitational force falls off rapidly with distance, so if you were a meter away from the mote of matter you would probably be unaware of its existence. It would pull you toward it with only about a ten-thousandth of a gee. But take hold of it, and that’s a different story. Ten thousand gees would suck any known material, no matter how strong, toward and into the ball. That process would continue, until either you sacrificed some skin and broke free, or you were eventually totally absorbed. In practice, I think that McAndrew’s father would have realized what was happening and found a way to free himself. He would have plenty of time, because the absorption process into the compressed matter sphere would be slow. That, however, would not have made as interesting a story.
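
The arithmetic of the previous two paragraphs is easy to verify. The Python sketch below uses only the figures quoted in the text, with a metric ton of 1000 kg as an assumption.

```python
import math

G  = 6.674e-11                     # m^3 kg^-1 s^-2
g0 = 9.81                          # one gee, m/s^2

mass    = 1.8e4 * 1.0e3            # 18,000 tons, in kg (taking 1 ton = 1000 kg)
density = 3.0e9 * 1.0e3 / 1.0e-6   # 3 billion tons per cm^3, in kg/m^3

radius = (3.0 * (mass / density) / (4.0 * math.pi)) ** (1.0 / 3.0)
print(f"radius ~{radius*1000.0:.2f} mm")                     # ~0.11 mm

print(f"surface gravity ~{G*mass/radius**2/g0:.0f} gees")    # close to 10,000 gees
print(f"pull at 1 meter ~{G*mass/1.0**2/g0:.1e} gee")        # ~1e-4 gee, a ten-thousandth
```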

The way that McAndrew’s father produced compressed matter remains pure science fiction. However, the “strong force” itself is an accepted part of modern physics, one of four basic known forces. The other three are gravity, the electromagnetic force, and the so-called “weak force” responsible for beta decay (emission of an electron or positron) in a nucleus. Although there is an adequate theory of the strong force, embodied in what is known as quantum chromodynamics, there is not the slightest hint in that theory of a method to make such a force either stronger or weaker than it is.

That’s all right. Five hundred years ago, magnetism was a curious property of certain materials, and no one knew what it was or had any way of generating it artificially. That had to wait until another strange phenomenon, electricity, had been explored, and experimenters such as Ampère, Oersted, and Faraday proved a link between electricity and magnetism. After that could come Maxwell, providing a unified theory for the two ideas that led to such practical devices as radios, dynamos, and powerful electromagnets.

It is not unreasonable to model the future on the past. A few hundred years from now, maybe we will be able to play our own games with all the known forces in the context of a unified theory, creating or modifying them as we choose. The weak force and the electromagnetic force have already been unified, work for which Glashow, Weinberg, and Salam were awarded the Nobel prize in physics in 1979.

I cannot resist a couple of personal reminiscences regarding the late Abdus Salam. He was my mathematics supervisor when I was a new undergraduate. His personal style of solving the problems that I and my supervision partner brought to him was unique. More often than not, he would look at the result to be derived and say, “Consider the following identity.” He would then write down a mathematical result which was far from obvious and usually new to us. Applying the identity certainly gave the required answer, but it didn’t help us much with our struggles.

Salam also had one endearing but disconcerting habit. He did not drink, but he must have been told that it was a tradition at Cambridge for tutors to serve sherry to their students on holiday occasions. He offered my partner and me sherry, an offer which we readily accepted. He then, unfamiliar with sherry as a drink, poured a large tumbler for each of us. We were too polite to refuse, or not to drink what we had been given, but we emerged from the supervision session much the worse for wear.

There is a throwaway comment in the ninth chronicle, that McAndrew was going off to hear a lecture entitled “Higher-dimensional complex manifolds and a new proof of the Riemann Conjecture.” This is a joke intended for mathematicians. In the nineteenth century, the great German mathematician Bernhard Riemann conjectured, but did not prove, that all the non-trivial zeroes of a function known as the zeta function lie on a particular line in the complex plane. He could not prove the result, and since then no one else has managed to do so. It remains the most important unproven conjecture in mathematics, far more central to the field than the long-unproved but finally disposed-of Fermat’s Last Theorem.

People will keep chipping away at the Riemann conjecture, precisely because it is unproven. Just as we will keep pushing for better observing instruments, more rapid and sophisticated interplanetary or interstellar probes, quantum computers, artificial intelligence, higher temperature superconductors, faster-than-light travel, treatment for all known diseases, and human life extension.

The future in which McAndrew lives is fiction, but I believe that the science and technology of the real future will be far more surprising. There will indeed be ships, built by humans and their intellectual companions, computers, headed for the stars. They will not be powered by Kerr-Newman black holes, nor employ the McAndrew balanced drive, nor will they tap the resonance modes of the vacuum zero-point energy. They will not be multi-generation arks, nor will they find life-bearing planetoids in the Oort cloud, or rogue planets in the interstellar void. What they will be, and what they will find, will be far stranger and more interesting than that. And they will make today’s boldest science fiction conjectures appear timid, near-sighted, small-scale, and lacking in imagination.

Writing of this I wish, like Benjamin Franklin, that I could be pickled in a barrel for a couple of hundred years, to experience the surprising future that I’m sure lies ahead. If I can’t do that and don’t last that long, here is a message to my descendants two centuries from now: On my behalf, make the most of it.
