Nature and Nature's laws lay hid in night; God said, Let Newton be! and all was light.
Alexander Pope
IF YOU HAD to select the least convivial scientific field trip of all time, you could certainly do worse than the French Royal Academy of Sciences’ Peruvian expedition of 1735. Led by a hydrologist named Pierre Bouguer and a soldier-mathematician named Charles Marie de La Condamine, it was a party of scientists and adventurers who traveled to Peru with the purpose of triangulating distances through the Andes.
At the time people had lately become infected with a powerful desire to understand the Earth-to determine how old it was, and how massive, where it hung in space, and how it had come to be. The French party’s goal was to help settle the question of the circumference of the planet by measuring the length of one degree of meridian (or 1/360 of the distance around the planet) along a line reaching from Yarouqui, near Quito, to just beyond Cuenca in what is now Ecuador, a distance of about two hundred miles.[3]
Almost at once things began to go wrong, sometimes spectacularly so. In Quito, the visitors somehow provoked the locals and were chased out of town by a mob armed with stones. Soon after, the expedition’s doctor was murdered in a misunderstanding over a woman. The botanist became deranged. Others died of fevers and falls. The third most senior member of the party, a man named Jean Godin, ran off with a thirteen-year-old girl and could not be induced to return.
At one point the group had to suspend work for eight months while La Condamine rode off to Lima to sort out a problem with their permits. Eventually he and Bouguer stopped speaking and refused to work together. Everywhere the dwindling party went it was met with the deepest suspicions from officials who found it difficult to believe that a group of French scientists would travel halfway around the world to measure the world. That made no sense at all. Two and a half centuries later it still seems a reasonable question. Why didn’t the French make their measurements in France and save themselves all the bother and discomfort of their Andean adventure?
The answer lies partly with the fact that eighteenth-century scientists, the French in particular, seldom did things simply if an absurdly demanding alternative was available, and partly with a practical problem that had first arisen with the English astronomer Edmond Halley many years before-long before Bouguer and La Condamine dreamed of going to South America, much less had a reason for doing so.
Halley was an exceptional figure. In the course of a long and productive career, he was a sea captain, a cartographer, a professor of geometry at the University of Oxford, deputy controller of the Royal Mint, astronomer royal, and inventor of the deep-sea diving bell. He wrote authoritatively on magnetism, tides, and the motions of the planets, and fondly on the effects of opium. He invented the weather map and actuarial table, proposed methods for working out the age of the Earth and its distance from the Sun, even devised a practical method for keeping fish fresh out of season. The one thing he didn’t do, interestingly enough, was discover the comet that bears his name. He merely recognized that the comet he saw in 1682 was the same one that had been seen by others in 1456, 1531, and 1607. It didn’t become Halley’s comet until 1758, some sixteen years after his death.
For all his achievements, however, Halley’s greatest contribution to human knowledge may simply have been to take part in a modest scientific wager with two other worthies of his day: Robert Hooke, who is perhaps best remembered now as the first person to describe a cell, and the great and stately Sir Christopher Wren, who was actually an astronomer first and architect second, though that is not generally remembered now. In 1683, Halley, Hooke, and Wren were dining in London when the conversation turned to the motions of celestial objects. It was known that planets were inclined to orbit in a particular kind of oval known as an ellipse-“a very specific and precise curve,” to quote Richard Feynman-but it wasn’t understood why. Wren generously offered a prize worth forty shillings (equivalent to a couple of weeks’ pay) to whichever of the men could provide a solution.
Hooke, who was well known for taking credit for ideas that weren’t necessarily his own, claimed that he had solved the problem already but declined now to share it on the interesting and inventive grounds that it would rob others of the satisfaction of discovering the answer for themselves. He would instead “conceal it for some time, that others might know how to value it.” If he thought any more on the matter, he left no evidence of it. Halley, however, became consumed with finding the answer, to the point that the following year he traveled to Cambridge and boldly called upon the university’s Lucasian Professor of Mathematics, Isaac Newton, in the hope that he could help.
Newton was a decidedly odd figure-brilliant beyond measure, but solitary, joyless, prickly to the point of paranoia, famously distracted (upon swinging his feet out of bed in the morning he would reportedly sometimes sit for hours, immobilized by the sudden rush of thoughts to his head), and capable of the most riveting strangeness. He built his own laboratory, the first at Cambridge, but then engaged in the most bizarre experiments. Once he inserted a bodkin-a long needle of the sort used for sewing leather-into his eye socket and rubbed it around “betwixt my eye and the bone as near to [the] backside of my eye as I could” just to see what would happen. What happened, miraculously, was nothing-at least nothing lasting. On another occasion, he stared at the Sun for as long as he could bear, to determine what effect it would have upon his vision. Again he escaped lasting damage, though he had to spend some days in a darkened room before his eyes forgave him.
For all his brilliance, real science accounted for only a part of his interests. At least half his working life was given over to alchemy and wayward religious pursuits. These were not mere dabblings but wholehearted devotions. He was a secret adherent of a dangerously heretical sect called Arianism, whose principal tenet was the belief that there had been no Holy Trinity (slightly ironic since Newton’s college at Cambridge was Trinity). He spent endless hours studying the floor plan of the lost Temple of King Solomon in Jerusalem (teaching himself Hebrew in the process, the better to scan original texts) in the belief that it held mathematical clues to the dates of the second coming of Christ and the end of the world. His attachment to alchemy was no less ardent. In 1936, the economist John Maynard Keynes bought a trunk of Newton’s papers at auction and discovered with astonishment that they were overwhelmingly preoccupied not with optics or planetary motions, but with a single-minded quest to turn base metals into precious ones. An analysis of a strand of Newton’s hair in the 1970s found it contained mercury-an element of interest to alchemists, hatters, and thermometer-makers but almost no one else-at a concentration some forty times the natural level. It is perhaps little wonder that he had trouble remembering to rise in the morning.
Set atop these odd beliefs and quirky traits, however, was the mind of a supreme genius-though even when working in conventional channels he often showed a tendency to peculiarity. As a student, frustrated by the limitations of conventional mathematics, he invented an entirely new form, the calculus, but then told no one about it for twenty-seven years. In like manner, he did work in optics that transformed our understanding of light and laid the foundation for the science of spectroscopy, and again chose not to share the results for three decades.
Quite what Halley expected to get from him when he made his unannounced visit in August 1684 we can only guess. But thanks to the later account of a Newton confidant, Abraham DeMoivre, we do have a record of one of science’s most historic encounters:
In 1684 Dr Halley came to visit at Cambridge [and] after they had some time together the Dr asked him what he thought the curve would be that would be described by the Planets supposing the force of attraction toward the Sun to be reciprocal to the square of their distance from it.
This was a reference to a piece of mathematics known as the inverse square law, which Halley was convinced lay at the heart of the explanation, though he wasn’t sure exactly how.
Sr Isaac replied immediately that it would be an [ellipse]. The Doctor, struck with joy & amazement, asked him how he knew it. ‘Why,’ saith he, ‘I have calculated it,’ whereupon Dr Halley asked him for his calculation without farther delay, Sr Isaac looked among his papers but could not find it.
This was astounding-like someone saying he had found a cure for cancer but couldn’t remember where he had put the formula. Pressed by Halley, Newton agreed to redo the calculations and produce a paper. He did as promised, but then did much more. He retired for two years of intensive reflection and scribbling, and at length produced his masterwork: the Philosophiae Naturalis Principia Mathematica or Mathematical Principles of Natural Philosophy, better known as the Principia.
Once in a great while, a few times in history, a human mind produces an observation so acute and unexpected that people can’t quite decide which is the more amazing-the fact or the thinking of it. Principia was one of those moments. It made Newton instantly famous. For the rest of his life he would be draped with plaudits and honors, becoming, among much else, the first person in Britain knighted for scientific achievement. Even the great German mathematician Gottfried von Leibniz, with whom Newton had a long, bitter fight over priority for the invention of the calculus, thought his contributions to mathematics equal to all the accumulated work that had preceded him. “Nearer the gods no mortal may approach,” wrote Halley in a sentiment that was endlessly echoed by his contemporaries and by many others since.
Although the Principia has been called “one of the most inaccessible books ever written” (Newton intentionally made it difficult so that he wouldn’t be pestered by mathematical “smatterers,” as he called them), it was a beacon to those who could follow it. It not only explained mathematically the orbits of heavenly bodies, but also identified the attractive force that got them moving in the first place-gravity. Suddenly every motion in the universe made sense.
At Principia’s heart were Newton’s three laws of motion (which state, very baldly, that a thing moves in the direction in which it is pushed; that it will keep moving in a straight line until some other force acts to slow or deflect it; and that every action has an opposite and equal reaction) and his universal law of gravitation. This states that every object in the universe exerts a tug on every other. It may not seem like it, but as you sit here now you are pulling everything around you-walls, ceiling, lamp, pet cat-toward you with your own little (indeed, very little) gravitational field. And these things are also pulling on you. It was Newton who realized that the pull of any two objects is, to quote Feynman again, “proportional to the mass of each and varies inversely as the square of the distance between them.” Put another way, if you double the distance between two objects, the attraction between them becomes four times weaker. This can be expressed with the formula
\[
F = \frac{Gmm'}{r^{2}}
\]
which is of course way beyond anything that most of us could make practical use of, but at least we can appreciate that it is elegantly compact. A couple of brief multiplications, a simple division, and, bingo, you know your gravitational position wherever you go. It was the first really universal law of nature ever propounded by a human mind, which is why Newton is regarded with such universal esteem.
Principia’s production was not without drama. To Halley’s horror, just as work was nearing completion Newton and Hooke fell into dispute over the priority for the inverse square law and Newton refused to release the crucial third volume, without which the first two made little sense. Only with some frantic shuttle diplomacy and the most liberal applications of flattery did Halley manage finally to extract the concluding volume from the erratic professor.
Halley’s traumas were not yet quite over. The Royal Society had promised to publish the work, but now pulled out, citing financial embarrassment. The year before the society had backed a costly flop called The History of Fishes, and they now suspected that the market for a book on mathematical principles would be less than clamorous. Halley, whose means were not great, paid for the book’s publication out of his own pocket. Newton, as was his custom, contributed nothing. To make matters worse, Halley at this time had just accepted a position as the society’s clerk, and he was informed that the society could no longer afford to provide him with a promised salary of £50 per annum. He was to be paid instead in copies of The History of Fishes.
Newton’s laws explained so many things-the slosh and roll of ocean tides, the motions of planets, why cannonballs trace a particular trajectory before thudding back to Earth, why we aren’t flung into space as the planet spins beneath us at hundreds of miles an hour [4]-that it took a while for all their implications to seep in. But one revelation became almost immediately controversial.
This was the suggestion that the Earth is not quite round. According to Newton’s theory, the centrifugal force of the Earth’s spin should result in a slight flattening at the poles and a bulging at the equator, which would make the planet slightly oblate. That meant that the length of a degree wouldn’t be the same in Italy as it was in Scotland. Specifically, the length would shorten as you moved away from the poles. This was not good news for those people whose measurements of the Earth were based on the assumption that the Earth was a perfect sphere, which was everyone.
For half a century people had been trying to work out the size of the Earth, mostly by making very exacting measurements. One of the first such attempts was by an English mathematician named Richard Norwood. As a young man Norwood had traveled to Bermuda with a diving bell modeled on Halley’s device, intending to make a fortune scooping pearls from the seabed. The scheme failed because there were no pearls and anyway Norwood’s bell didn’t work, but Norwood was not one to waste an experience. In the early seventeenth century Bermuda was well known among ships’ captains for being hard to locate. The problem was that the ocean was big, Bermuda small, and the navigational tools for dealing with this disparity hopelessly inadequate. There wasn’t even yet an agreed length for a nautical mile. Over the breadth of an ocean the smallest miscalculations would become magnified so that ships often missed Bermuda-sized targets by dismaying margins. Norwood, whose first love was trigonometry and thus angles, decided to bring a little mathematical rigor to navigation and to that end he determined to calculate the length of a degree.
Starting with his back against the Tower of London, Norwood spent two devoted years marching 208 miles north to York, repeatedly stretching and measuring a length of chain as he went, all the while making the most meticulous adjustments for the rise and fall of the land and the meanderings of the road. The final step was to measure the angle of the Sun at York at the same time of day and on the same day of the year as he had made his first measurement in London. From this, he reasoned he could determine the length of one degree of the Earth’s meridian and thus calculate the distance around the whole. It was an almost ludicrously ambitious undertaking-a mistake of the slightest fraction of a degree would throw the whole thing out by miles-but in fact, as Norwood proudly declaimed, he was accurate to “within a scantling”-or, more precisely, to within about six hundred yards. In metric terms, his figure worked out at 110.72 kilometers per degree of arc.
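It is easy enough to check how good this was. A degree of meridian multiplied by 360 gives the full distance around the planet through the poles, so Norwood’s figure implies:

\[
% Norwood's measured degree, scaled up to the full circle
360 \times 110.72\ \text{km} \approx 39{,}859\ \text{km}
\]

against a modern meridional circumference of roughly 40,008 kilometers-an error of only about 0.4 percent.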
In 1637, Norwood’s masterwork of navigation, The Seaman’s Practice, was published and found an immediate following. It went through seventeen editions and was still in print twenty-five years after his death. Norwood returned to Bermuda with his family, becoming a successful planter and devoting his leisure hours to his first love, trigonometry. He survived there for thirty-eight years and it would be pleasing to report that he passed this span in happiness and adulation. In fact, he didn’t. On the crossing from England, his two young sons were placed in a cabin with the Reverend Nathaniel White, and somehow so successfully traumatized the young vicar that he devoted much of the rest of his career to persecuting Norwood in any small way he could think of.
Norwood’s two daughters brought their father additional pain by making poor marriages. One of the husbands, possibly incited by the vicar, continually laid small charges against Norwood in court, causing him much exasperation and necessitating repeated trips across Bermuda to defend himself. Finally in the 1650s witch trials came to Bermuda and Norwood spent his final years in severe unease that his papers on trigonometry, with their arcane symbols, would be taken as communications with the devil and that he would be treated to a dreadful execution. So little is known of Norwood that it may in fact be that he deserved his unhappy declining years. What is certainly true is that he got them.
Meanwhile, the momentum for determining the Earth’s circumference passed to France. There, the astronomer Jean Picard devised an impressively complicated method of triangulation involving quadrants, pendulum clocks, zenith sectors, and telescopes (for observing the motions of the moons of Jupiter). After two years of trundling and triangulating his way across France, in 1669 he announced a more accurate measure of 110.46 kilometers for one degree of arc. This was a great source of pride for the French, but it was predicated on the assumption that the Earth was a perfect sphere-which Newton now said it was not.
To complicate matters, after Picard’s death the father-and-son team of Giovanni and Jacques Cassini repeated Picard’s experiments over a larger area and came up with results that suggested that the Earth was fatter not at the equator but at the poles-that Newton, in other words, was exactly wrong. It was this that prompted the Academy of Sciences to dispatch Bouguer and La Condamine to South America to take new measurements.
They chose the Andes because they needed to measure near the equator, to determine if there really was a difference in sphericity there, and because they reasoned that mountains would give them good sightlines. In fact, the mountains of Peru were so constantly lost in cloud that the team often had to wait weeks for an hour’s clear surveying. On top of that, they had selected one of the most nearly impossible terrains on Earth. Peruvians refer to their landscape as muy accidentado-“much accidented”-and this it most certainly is. The French had not only to scale some of the world’s most challenging mountains-mountains that defeated even their mules-but to reach the mountains they had to ford wild rivers, hack their way through jungles, and cross miles of high, stony desert, nearly all of it uncharted and far from any source of supplies. But Bouguer and La Condamine were nothing if not tenacious, and they stuck to the task for nine and a half long, grim, sun-blistered years. Shortly before concluding the project, they received word that a second French team, taking measurements in northern Scandinavia (and facing notable discomforts of their own, from squelching bogs to dangerous ice floes), had found that a degree was in fact longer near the poles, as Newton had promised. The Earth was forty-three kilometers stouter when measured equatorially than when measured from top to bottom around the poles.
Bouguer and La Condamine thus had spent nearly a decade working toward a result they didn’t wish to find only to learn now that they weren’t even the first to find it. Listlessly, they completed their survey, which confirmed that the first French team was correct. Then, still not speaking, they returned to the coast and took separate ships home.
Something else conjectured by Newton in the Principia was that a plumb bob hung near a mountain would incline very slightly toward the mountain, affected by the mountain’s gravitational mass as well as by the Earth’s. This was more than a curious fact. If you measured the deflection accurately and worked out the mass of the mountain, you could calculate the universal gravitational constant-that is, the basic value of gravity, known as G-and along with it the mass of the Earth.
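In modern notation, and much simplified, the reasoning runs as follows: the bob hangs along the combined pull of planet and mountain, so the tiny angle of deflection, θ, compares the one mass directly with the other.

\[
% theta: deflection of the plumb line; d: distance from bob to the
% mountain's center of mass; R: radius of the Earth
\tan\theta \approx \frac{GM_{\text{mtn}}/d^{2}}{GM_{\text{Earth}}/R^{2}} = \frac{M_{\text{mtn}}}{M_{\text{Earth}}}\left(\frac{R}{d}\right)^{2}
\]

Measure the angle, survey the mountain carefully enough to estimate its mass, and the mass of the Earth drops out; and once the Earth’s mass is known, G follows from the familiar strength of gravity at the surface.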
Bouguer and La Condamine had tried this on Peru’s Mount Chimborazo, but had been defeated by both the technical difficulties and their own squabbling, and so the notion lay dormant for another thirty years until resurrected in England by Nevil Maskelyne, the astronomer royal. In Dava Sobel’s popular book Longitude, Maskelyne is presented as a ninny and villain for failing to appreciate the brilliance of the clockmaker John Harrison, and this may be so, but we are indebted to him in other ways not mentioned in her book, not least for his successful scheme to weigh the Earth. Maskelyne realized that the nub of the problem lay with finding a mountain of sufficiently regular shape to judge its mass.
At his urging, the Royal Society agreed to engage a reliable figure to tour the British Isles to see if such a mountain could be found. Maskelyne knew just such a person-the astronomer and surveyor Charles Mason. Maskelyne and Mason had become friends eleven years earlier while engaged in a project to measure an astronomical event of great importance: the passage of the planet Venus across the face of the Sun. The tireless Edmond Halley had suggested years before that if you measured one of these passages from selected points on the Earth, you could use the principles of triangulation to work out the distance to the Sun, and from that calibrate the distances to all the other bodies in the solar system.
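Stripped to its essentials, the scheme is ordinary surveying carried into the heavens: two observers a known distance apart see Venus’s path across the Sun from slightly different angles, and that small angular difference, p, fixes the distance D just as a surveyor’s baseline fixes the distance to a far hilltop.

\[
% B: baseline between the two observing stations; p: parallax angle in radians
D \approx \frac{B}{p}
\]

In practice the angle was teased out of careful timings of the event at widely separated stations rather than sighted directly, but the triangle is the same.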
Unfortunately, transits of Venus, as they are known, are an irregular occurrence. They come in pairs eight years apart, but then are absent for a century or more, and there were none in Halley’s lifetime.[5] But the idea simmered and when the next transit came due in 1761, nearly two decades after Halley’s death, the scientific world was ready-indeed, more ready than it had been for an astronomical event before.
With the instinct for ordeal that characterized the age, scientists set off for more than a hundred locations around the globe-to Siberia, China, South Africa, Indonesia, and the woods of Wisconsin, among many others. France dispatched thirty-two observers, Britain eighteen more, and still others set out from Sweden, Russia, Italy, Germany, Ireland, and elsewhere.
It was history’s first cooperative international scientific venture, and almost everywhere it ran into problems. Many observers were waylaid by war, sickness, or shipwreck. Others made their destinations but opened their crates to find equipment broken or warped by tropical heat. Once again the French seemed fated to provide the most memorably unlucky participants. Jean Chappe spent months traveling to Siberia by coach, boat, and sleigh, nursing his delicate instruments over every perilous bump, only to find the last vital stretch blocked by swollen rivers, the result of unusually heavy spring rains, which the locals were swift to blame on him after they saw him pointing strange instruments at the sky. Chappe managed to escape with his life, but with no useful measurements.
Unluckier still was Guillaume Le Gentil, whose experiences are wonderfully summarized by Timothy Ferris in Coming of Age in the Milky Way. Le Gentil set off from France a year ahead of time to observe the transit from India, but various setbacks left him still at sea on the day of the transit-just about the worst place to be since steady measurements were impossible on a pitching ship.
Undaunted, Le Gentil continued on to India to await the next transit in 1769. With eight years to prepare, he erected a first-rate viewing station, tested and retested his instruments, and had everything in a state of perfect readiness. On the morning of the second transit, June 4, 1769, he awoke to a fine day, but, just as Venus began its pass, a cloud slid in front of the Sun and remained there for almost exactly the duration of the transit: three hours, fourteen minutes, and seven seconds.
Stoically, Le Gentil packed up his instruments and set off for the nearest port, but en route he contracted dysentery and was laid up for nearly a year. Still weakened, he finally made it onto a ship. It was nearly wrecked in a hurricane off the African coast. When at last he reached home, eleven and a half years after setting off, and having achieved nothing, he discovered that his relatives had had him declared dead in his absence and had enthusiastically plundered his estate.
In comparison, the disappointments experienced by Britain’s eighteen scattered observers were mild. Mason found himself paired with a young surveyor named Jeremiah Dixon and apparently they got along well, for they formed a lasting partnership. Their instructions were to travel to Sumatra and chart the transit there, but after just one night at sea their ship was attacked by a French frigate. (Although scientists were in an internationally cooperative mood, nations weren’t.) Mason and Dixon sent a note to the Royal Society observing that it seemed awfully dangerous on the high seas and wondering if perhaps the whole thing oughtn’t to be called off. In reply they received a swift and chilly rebuke, noting that they had already been paid, that the nation and scientific community were counting on them, and that their failure to proceed would result in the irretrievable loss of their reputations. Chastened, they sailed on, but en route word reached them that Sumatra had fallen to the French and so they observed the transit inconclusively from the Cape of Good Hope. On the way home they stopped on the lonely Atlantic outcrop of St. Helena, where they met Maskelyne, whose observations had been thwarted by cloud cover. Mason and Maskelyne formed a solid friendship and spent several happy, and possibly even mildly useful, weeks charting tidal flows.
Soon afterward, Maskelyne returned to England where he became astronomer royal, and Mason and Dixon-now evidently more seasoned-set off for four long and often perilous years surveying their way through 244 miles of dangerous American wilderness to settle a boundary dispute between the estates of William Penn and Lord Baltimore and their respective colonies of Pennsylvania and Maryland. The result was the famous Mason and Dixon line, which later took on symbolic importance as the dividing line between the slave and free states. (Although the line was their principal task, they also contributed several astronomical surveys, including one of the century’s most accurate measurements of a degree of meridian-an achievement that brought them far more acclaim in England than the settling of a boundary dispute between spoiled aristocrats.)
Back in Europe, Maskelyne and his counterparts in Germany and France were forced to the conclusion that the transit measurements of 1761 were essentially a failure. One of the problems, ironically, was that there were too many observations, which when brought together often proved contradictory and impossible to resolve. The successful charting of a Venusian transit fell instead to a little-known Yorkshire-born sea captain named James Cook, who watched the 1769 transit from a sunny hilltop in Tahiti, and then went on to chart and claim Australia for the British crown. Upon his return there was now enough information for the French astronomer Joseph Lalande to calculate that the mean distance from the Earth to the Sun was a little over 150 million kilometers. (Two further transits in the nineteenth century allowed astronomers to put the figure at 149.59 million kilometers, where it has remained ever since. The precise distance, we now know, is 149.597870691 million kilometers.) The Earth at last had a position in space.
As for Mason and Dixon, they returned to England as scientific heroes and, for reasons unknown, dissolved their partnership. Considering the frequency with which they turn up at seminal events in eighteenth-century science, remarkably little is known about either man. No likenesses exist and few written references. Of Dixon the Dictionary of National Biography notes intriguingly that he was “said to have been born in a coal mine,” but then leaves it to the reader’s imagination to supply a plausible explanatory circumstance, and adds that he died at Durham in 1777. Apart from his name and long association with Mason, nothing more is known.
Mason is only slightly less shadowy. We know that in 1772, at Maskelyne’s behest, he accepted the commission to find a suitable mountain for the gravitational deflection experiment, at length reporting back that the mountain they needed was in the central Scottish Highlands, just above Loch Tay, and was called Schiehallion. Nothing, however, would induce him to spend a summer surveying it. He never returned to the field again. His next known movement was in 1786 when, abruptly and mysteriously, he turned up in Philadelphia with his wife and eight children, apparently on the verge of destitution. He had not been back to America since completing his survey there eighteen years earlier and had no known reason for being there, or any friends or patrons to greet him. A few weeks later he was dead.
With Mason refusing to survey the mountain, the job fell to Maskelyne. So for four months in the summer of 1774, Maskelyne lived in a tent in a remote Scottish glen and spent his days directing a team of surveyors, who took hundreds of measurements from every possible position. To find the mass of the mountain from all these numbers required a great deal of tedious calculating, for which a mathematician named Charles Hutton was engaged. The surveyors had covered a map with scores of figures, each marking an elevation at some point on or around the mountain. It was essentially just a confusing mass of numbers, but Hutton noticed that if he used a pencil to connect points of equal height, it all became much more orderly. Indeed, one could instantly get a sense of the overall shape and slope of the mountain. He had invented contour lines.
Extrapolating from his Schiehallion measurements, Hutton calculated the mass of the Earth at 5,000 million million million tons, from which could reasonably be deduced the masses of all the other major bodies in the solar system, including the Sun. So from this one experiment we learned the masses of the Earth, the Sun, the Moon, the other planets and their moons, and got contour lines into the bargain-not bad for a summer’s work.
Not everyone was satisfied with the results, however. The shortcoming of the Schiehallion experiment was that it was not possible to get a truly accurate figure without knowing the actual density of the mountain. For convenience, Hutton had assumed that the mountain had the same density as ordinary stone, about 2.5 times that of water, but this was little more than an educated guess.
One improbable-seeming person who turned his mind to the matter was a country parson named John Michell, who resided in the lonely Yorkshire village of Thornhill. Despite his remote and comparatively humble situation, Michell was one of the great scientific thinkers of the eighteenth century and much esteemed for it.
Among a great deal else, he perceived the wavelike nature of earthquakes, conducted much original research into magnetism and gravity, and, quite extraordinarily, envisioned the possibility of black holes two hundred years before anyone else-a leap of intuitive deduction that not even Newton could make. When the German-born musician William Herschel decided his real interest in life was astronomy, it was Michell to whom he turned for instruction in making telescopes, a kindness for which planetary science has been in his debt ever since.[6]
But of all that Michell accomplished, nothing was more ingenious or had greater impact than a machine he designed and built for measuring the mass of the Earth. Unfortunately, he died before he could conduct the experiments and both the idea and the necessary equipment were passed on to a brilliant but magnificently retiring London scientist named Henry Cavendish.
Cavendish is a book in himself. Born into a life of sumptuous privilege-his grandfathers were dukes, respectively, of Devonshire and Kent-he was the most gifted English scientist of his age, but also the strangest. He suffered, in the words of one of his few biographers, from shyness to a “degree bordering on disease.” Any human contact was for him a source of the deepest discomfort.
Once he opened his door to find an Austrian admirer, freshly arrived from Vienna, on the front step. Excitedly the Austrian began to babble out praise. For a few moments Cavendish received the compliments as if they were blows from a blunt object and then, unable to take any more, fled down the path and out the gate, leaving the front door wide open. It was some hours before he could be coaxed back to the property. Even his housekeeper communicated with him by letter.
Although he did sometimes venture into society-he was particularly devoted to the weekly scientific soirées of the great naturalist Sir Joseph Banks-it was always made clear to the other guests that Cavendish was on no account to be approached or even looked at. Those who sought his views were advised to wander into his vicinity as if by accident and to “talk as it were into vacancy.” If their remarks were scientifically worthy they might receive a mumbled reply, but more often than not they would hear a peeved squeak (his voice appears to have been high pitched) and turn to find an actual vacancy and the sight of Cavendish fleeing for a more peaceful corner.
His wealth and solitary inclinations allowed him to turn his house in Clapham into a large laboratory where he could range undisturbed through every corner of the physical sciences-electricity, heat, gravity, gases, anything to do with the composition of matter. The second half of the eighteenth century was a time when people of a scientific bent grew intensely interested in the physical properties of fundamental things-gases and electricity in particular-and began seeing what they could do with them, often with more enthusiasm than sense. In America, Benjamin Franklin famously risked his life by flying a kite in an electrical storm. In France, a chemist named Pilatre de Rozier tested the flammability of hydrogen by gulping a mouthful and blowing across an open flame, proving at a stroke that hydrogen is indeed explosively combustible and that eyebrows are not necessarily a permanent feature of one’s face. Cavendish, for his part, conducted experiments in which he subjected himself to graduated jolts of electrical current, diligently noting the increasing levels of agony until he could keep hold of his quill, and sometimes his consciousness, no longer.
In the course of a long life Cavendish made a string of signal discoveries-among much else he was the first person to isolate hydrogen and the first to combine hydrogen and oxygen to form water-but almost nothing he did was entirely divorced from strangeness. To the continuing exasperation of his fellow scientists, he often alluded in published work to the results of contingent experiments that he had not told anyone about. In his secretiveness he didn’t merely resemble Newton, but actively exceeded him. His experiments with electrical conductivity were a century ahead of their time, but unfortunately remained undiscovered until that century had passed. Indeed the greater part of what he did wasn’t known until the late nineteenth century when the Cambridge physicist James Clerk Maxwell took on the task of editing Cavendish’s papers, by which time credit had nearly always been given to others.
Among much else, and without telling anyone, Cavendish discovered or anticipated the law of the conservation of energy, Ohm’s law, Dalton’s Law of Partial Pressures, Richter’s Law of Reciprocal Proportions, Charles’s Law of Gases, and the principles of electrical conductivity. That’s just some of it. According to the science historian J. G. Crowther, he also foreshadowed “the work of Kelvin and G. H. Darwin on the effect of tidal friction on slowing the rotation of the earth, and Larmor’s discovery, published in 1915, on the effect of local atmospheric cooling . . . the work of Pickering on freezing mixtures, and some of the work of Rooseboom on heterogeneous equilibria.” Finally, he left clues that led directly to the discovery of the group of elements known as the noble gases, some of which are so elusive that the last of them wasn’t found until 1962. But our interest here is in Cavendish’s last known experiment when in the late summer of 1797, at the age of sixty-seven, he turned his attention to the crates of equipment that had been left to him-evidently out of simple scientific respect-by John Michell.
When assembled, Michell’s apparatus looked like nothing so much as an eighteenth-century version of a Nautilus weight-training machine. It incorporated weights, counterweights, pendulums, shafts, and torsion wires. At the heart of the machine were two 350-pound lead balls, which were suspended beside two smaller spheres. The idea was to measure the gravitational deflection of the smaller spheres by the larger ones, which would allow the first measurement of the elusive force known as the gravitational constant, and from which the weight (strictly speaking, the mass)[7] of the Earth could be deduced.
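The final deduction is simple enough to sketch. Using modern round values purely for illustration-a surface gravity g of about 9.8 meters per second squared and an Earth radius R of about 6.37 × 10⁶ meters-the planet’s mass follows from G in a single line:

\[
% illustrative modern round values for g and R; G as eventually measured
M_{\text{Earth}} = \frac{gR^{2}}{G} \approx \frac{9.8\times(6.37\times10^{6})^{2}}{6.67\times10^{-11}} \approx 6\times10^{24}\ \text{kg}
\]

Once the little spheres had yielded a value for G, in other words, the mass of the Earth was the only unknown left in an equation scientists had known for a century.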
Because gravity holds planets in orbit and makes falling objects land with a bang, we tend to think of it as a powerful force, but it is not really. It is only powerful in a kind of collective sense, when one massive object, like the Sun, holds on to another massive object, like the Earth. At an elemental level gravity is extraordinarily unrobust. Each time you pick up a book from a table or a dime from the floor you effortlessly overcome the combined gravitational exertion of an entire planet. What Cavendish was trying to do was measure gravity at this extremely featherweight level.
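Newton’s formula puts a number on just how featherweight that level is. Take, purely for illustration, a 70-kilogram reader and a one-kilogram book a meter away:

\[
% made-up everyday masses; G is about 6.67 x 10^(-11) in SI units
F = \frac{Gmm'}{r^{2}} = \frac{6.67\times10^{-11}\times70\times1}{1^{2}} \approx 5\times10^{-9}\ \text{newtons}
\]

That is some two billion times feebler than the Earth’s own pull on the book, and it was an effect of roughly this fleeting order that Cavendish’s apparatus had to register.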
Delicacy was the key word. Not a whisper of disturbance could be allowed into the room containing the apparatus, so Cavendish took up a position in an adjoining room and made his observations with a telescope aimed through a peephole. The work was incredibly exacting and involved seventeen delicate, interconnected measurements, which together took nearly a year to complete. When at last he had finished his calculations, Cavendish announced that the Earth weighed a little over 13,000,000,000,000,000,000,000,000 pounds, or six billion trillion metric tons, to use the modern measure. (A metric ton is 1,000 kilograms or 2,205 pounds.)
Today, scientists have at their disposal machines so precise they can detect the weight of a single bacterium and so sensitive that readings can be disturbed by someone yawning seventy-five feet away, but they have not significantly improved on Cavendish’s measurements of 1797. The current best estimate for Earth’s weight is 5.9725 billion trillion metric tons, a difference of only about 1 percent from Cavendish’s finding. Interestingly, all of this merely confirmed estimates made by Newton 110 years before Cavendish without any experimental evidence at all.
So, by the late eighteenth century scientists knew very precisely the shape and dimensions of the Earth and its distance from the Sun and planets; and now Cavendish, without even leaving home, had given them its weight. So you might think that determining the age of the Earth would be relatively straightforward. After all, the necessary materials were literally at their feet. But no. Human beings would split the atom and invent television, nylon, and instant coffee before they could figure out the age of their own planet.
To understand why, we must travel north to Scotland and begin with a brilliant and genial man, of whom few have ever heard, who had just invented a new science called geology.
AT JUST THE time that Henry Cavendish was completing his experiments in London, four hundred miles away in Edinburgh another kind of concluding moment was about to take place with the death of James Hutton. This was bad news for Hutton, of course, but good news for science as it cleared the way for a man named John Playfair to rewrite Hutton’s work without fear of embarrassment.
Hutton was by all accounts a man of the keenest insights and liveliest conversation, a delight in company, and without rival when it came to understanding the mysterious slow processes that shaped the Earth. Unfortunately, it was beyond him to set down his notions in a form that anyone could begin to understand. He was, as one biographer observed with an all but audible sigh, “almost entirely innocent of rhetorical accomplishments.” Nearly every line he penned was an invitation to slumber. Here he is in his 1795 masterwork, A Theory of the Earth with Proofs and Illustrations, discussing . . . something:
The world which we inhabit is composed of the materials, not of the earth which was the immediate predecessor of the present, but of the earth which, in ascending from the present, we consider as the third, and which had preceded the land that was above the surface of the sea, while our present land was yet beneath the water of the ocean.
Yet almost singlehandedly, and quite brilliantly, he created the science of geology and transformed our understanding of the Earth. Hutton was born in 1726 into a prosperous Scottish family, and enjoyed the sort of material comfort that allowed him to pass much of his life in a genially expansive round of light work and intellectual betterment. He studied medicine, but found it not to his liking and turned instead to farming, which he followed in a relaxed and scientific way on the family estate in Berwickshire. Tiring of field and flock, in 1768 he moved to Edinburgh, where he founded a successful business producing sal ammoniac from coal soot, and busied himself with various scientific pursuits. Edinburgh at that time was a center of intellectual vigor, and Hutton luxuriated in its enriching possibilities. He became a leading member of a society called the Oyster Club, where he passed his evenings in the company of men such as the economist Adam Smith, the chemist Joseph Black, and the philosopher David Hume, as well as such occasional visiting sparks as Benjamin Franklin and James Watt.
In the tradition of the day, Hutton took an interest in nearly everything, from mineralogy to metaphysics. He conducted experiments with chemicals, investigated methods of coal mining and canal building, toured salt mines, speculated on the mechanisms of heredity, collected fossils, and propounded theories on rain, the composition of air, and the laws of motion, among much else. But his particular interest was geology.
Among the questions that attracted interest in that fanatically inquisitive age was one that had puzzled people for a very long time-namely, why ancient clamshells and other marine fossils were so often found on mountaintops. How on earth did they get there? Those who thought they had a solution fell into two opposing camps. One group, known as the Neptunists, was convinced that everything on Earth, including seashells in improbably lofty places, could be explained by rising and falling sea levels. They believed that mountains, hills, and other features were as old as the Earth itself, and were changed only when water sloshed over them during periods of global flooding.
Opposing them were the Plutonists, who noted that volcanoes and earthquakes, among other enlivening agents, continually changed the face of the planet but clearly owed nothing to wayward seas. The Plutonists also raised awkward questions about where all the water went when it wasn’t in flood. If there was enough of it at times to cover the Alps, then where, pray, was it during times of tranquility, such as now? Their belief was that the Earth was subject to profound internal forces as well as surface ones. However, they couldn’t convincingly explain how all those clamshells got up there.
It was while puzzling over these matters that Hutton had a series of exceptional insights. From looking at his own farmland, he could see that soil was created by the erosion of rocks and that particles of this soil were continually washed away and carried off by streams and rivers and redeposited elsewhere. He realized that if such a process were carried to its natural conclusion then Earth would eventually be worn quite smooth. Yet everywhere around him there were hills. Clearly there had to be some additional process, some form of renewal and uplift, that created new hills and mountains to keep the cycle going. The marine fossils on mountaintops, he decided, had not been deposited during floods, but had risen along with the mountains themselves. He also deduced that it was heat within the Earth that created new rocks and continents and thrust up mountain chains. It is not too much to say that geologists wouldn’t grasp the full implications of this thought for two hundred years, when finally they adopted plate tectonics. Above all, what Hutton’s theories suggested was that Earth processes required huge amounts of time, far more than anyone had ever dreamed. There were enough insights here to transform utterly our understanding of the Earth.
In 1785, Hutton worked his ideas up into a long paper, which was read at consecutive meetings of the Royal Society of Edinburgh. It attracted almost no notice at all. It’s not hard to see why. Here, in part, is how he presented it to his audience:
In the one case, the forming cause is in the body which is separated; for, after the body has been actuated by heat, it is by the reaction of the proper matter of the body, that the chasm which constitutes the vein is formed. In the other case, again, the cause is extrinsic in relation to the body in which the chasm is formed. There has been the most violent fracture and divulsion; but the cause is still to seek; and it appears not in the vein; for it is not every fracture and dislocation of the solid body of our earth, in which minerals, or the proper substances of mineral veins, are found.
Needless to say, almost no one in the audience had the faintest idea what he was talking about. Encouraged by his friends to expand his theory, in the touching hope that he might somehow stumble onto clarity in a more expansive format, Hutton spent the next ten years preparing his magnum opus, which was published in two volumes in 1795.
Together the two books ran to nearly a thousand pages and were, remarkably, worse than even his most pessimistic friends had feared. Apart from anything else, nearly half the completed work now consisted of quotations from French sources, still in the original French. A third volume was so unenticing that it wasn’t published until 1899, more than a century after Hutton’s death, and the fourth and concluding volume was never published at all. Hutton’s Theory of the Earth is a strong candidate for the least read important book in science (or at least would be if there weren’t so many others). Even Charles Lyell, the greatest geologist of the following century and a man who read everything, admitted he couldn’t get through it.
Luckily Hutton had a Boswell in the form of John Playfair, a professor of mathematics at the University of Edinburgh and a close friend, who could not only write silken prose but-thanks to many years at Hutton’s elbow-actually understood what Hutton was trying to say, most of the time. In 1802, five years after Hutton’s death, Playfair produced a simplified exposition of the Huttonian principles, entitled Illustrations of the Huttonian Theory of the Earth. The book was gratefully received by those who took an active interest in geology, which in 1802 was not a large number. That, however, was about to change. And how.
In the winter of 1807, thirteen like-minded souls in London got together at the Freemasons Tavern at Long Acre, in Covent Garden, to form a dining club to be called the Geological Society. The idea was to meet once a month to swap geological notions over a glass or two of Madeira and a convivial dinner. The price of the meal was set at a deliberately hefty fifteen shillings to discourage those whose qualifications were merely cerebral. It soon became apparent, however, that there was a demand for something more properly institutional, with a permanent headquarters, where people could gather to share and discuss new findings. In barely a decade membership grew to four hundred-still all gentlemen, of course-and the Geological was threatening to eclipse the Royal as the premier scientific society in the country.
The members met twice a month from November until June, when virtually all of them went off to spend the summer doing fieldwork. These weren’t people with a pecuniary interest in minerals, you understand, or even academics for the most part, but simply gentlemen with the wealth and time to indulge a hobby at a more or less professional level. By 1830, there were 745 of them, and the world would never see the like again.
It is hard to imagine now, but geology excited the nineteenth century-positively gripped it-in a way that no science ever had before or would again. In 1839, when Roderick Murchison published The Silurian System, a plump and ponderous study of a type of rock called greywacke, it was an instant bestseller, racing through four editions, even though it cost eight guineas a copy and was, in true Huttonian style, unreadable. (As even a Murchison supporter conceded, it had “a total want of literary attractiveness.”) And when, in 1841, the great Charles Lyell traveled to America to give a series of lectures in Boston, sellout audiences of three thousand at a time packed into the Lowell Institute to hear his tranquilizing descriptions of marine zeolites and seismic perturbations in Campania.
Throughout the modern, thinking world, but especially in Britain, men of learning ventured into the countryside to do a little “stone-breaking,” as they called it. It was a pursuit taken seriously, and they tended to dress with appropriate gravity, in top hats and dark suits, except for the Reverend William Buckland of Oxford, whose habit it was to do his fieldwork in an academic gown.
The field attracted many extraordinary figures, not least the aforementioned Murchison, who spent the first thirty or so years of his life galloping after foxes, converting aeronautically challenged birds into puffs of drifting feathers with buckshot, and showing no mental agility whatever beyond that needed to read The Times or play a hand of cards. Then he discovered an interest in rocks and became with rather astounding swiftness a titan of geological thinking.
Then there was Dr. James Parkinson, who was also an early socialist and author of many provocative pamphlets with titles like “Revolution without Bloodshed.” In 1794, he was implicated in a faintly lunatic-sounding conspiracy called “the Pop-gun Plot,” in which it was planned to shoot King George III in the neck with a poisoned dart as he sat in his box at the theater. Parkinson was hauled before the Privy Council for questioning and came within an ace of being dispatched in irons to Australia before the charges against him were quietly dropped. Adopting a more conservative approach to life, he developed an interest in geology and became one of the founding members of the Geological Society and the author of an important geological text, Organic Remains of a Former World, which remained in print for half a century. He never caused trouble again. Today, however, we remember him for his landmark study of the affliction then called the “shaking palsy,” but known ever since as Parkinson’s disease. (Parkinson had one other slight claim to fame. In 1785, he became possibly the only person in history to win a natural history museum in a raffle. The museum, in London’s Leicester Square, had been founded by Sir Ashton Lever, who had driven himself bankrupt with his unrestrained collecting of natural wonders. Parkinson kept the museum until 1805, when he could no longer support it and the collection was broken up and sold.)
Not quite as remarkable in character but more influential than all the others combined was Charles Lyell. Lyell was born in the year that Hutton died and only seventy miles away, in the village of Kinnordy. Though Scottish by birth, he grew up in the far south of England, in the New Forest of Hampshire, because his mother was convinced that Scots were feckless drunks. As was generally the pattern with nineteenth-century gentlemen scientists, Lyell came from a background of comfortable wealth and intellectual vigor. His father, also named Charles, had the unusual distinction of being a leading authority on the poet Dante and on mosses. (Orthotricium lyelli, which most visitors to the English countryside will at some time have sat on, is named for him.) From his father Lyell gained an interest in natural history, but it was at Oxford, where he fell under the spell of the Reverend William Buckland-he of the flowing gowns-that the young Lyell began his lifelong devotion to geology.
Buckland was a bit of a charming oddity. He had some real achievements, but he is remembered at least as much for his eccentricities. He was particularly noted for a menagerie of wild animals, some large and dangerous, that were allowed to roam through his house and garden, and for his desire to eat his way through every animal in creation. Depending on whim and availability, guests to Buckland’s house might be served baked guinea pig, mice in batter, roasted hedgehog, or boiled Southeast Asian sea slug. Buckland was able to find merit in them all, except the common garden mole, which he declared disgusting. Almost inevitably, he became the leading authority on coprolites-fossilized feces-and had a table made entirely out of his collection of specimens.
Even when conducting serious science his manner was generally singular. Once Mrs. Buckland found herself being shaken awake in the middle of the night, her husband crying in excitement: “My dear, I believe that Cheirotherium’s footsteps are undoubtedly testudinal.” Together they hurried to the kitchen in their nightclothes. Mrs. Buckland made a flour paste, which she spread across the table, while the Reverend Buckland fetched the family tortoise. Plunking it onto the paste, they goaded it forward and discovered to their delight that its footprints did indeed match those of the fossil Buckland had been studying. Charles Darwin thought Buckland a buffoon-that was the word he used-but Lyell appeared to find him inspiring and liked him well enough to go touring with him in Scotland in 1824. It was soon after this trip that Lyell decided to abandon a career in law and devote himself to geology full-time.
Lyell was extremely shortsighted and went through most of his life with a pained squint, which gave him a troubled air. (Eventually he would lose his sight altogether.) His other slight peculiarity was the habit, when distracted by thought, of taking up improbable positions on furniture-lying across two chairs at once or “resting his head on the seat of a chair, while standing up” (to quote his friend Darwin). Often when lost in thought he would slink so low in a chair that his buttocks would all but touch the floor. Lyell’s only real job in life was as professor of geology at King’s College in London from 1831 to 1833. It was around this time that he produced The Principles of Geology, published in three volumes between 1830 and 1833, which in many ways consolidated and elaborated upon the thoughts first voiced by Hutton a generation earlier. (Although Lyell never read Hutton in the original, he was a keen student of Playfair’s reworked version.)
Between Hutton’s day and Lyell’s there arose a new geological controversy, which largely superseded, but is often confused with, the old Neptunian-Plutonian dispute. The new battle became an argument between catastrophism and uniformitarianism-unattractive terms for an important and very long-running dispute. Catastrophists, as you might expect from the name, believed that the Earth was shaped by abrupt cataclysmic events-floods principally, which is why catastrophism and neptunism are often wrongly bundled together. Catastrophism was particularly comforting to clerics like Buckland because it allowed them to incorporate the biblical flood of Noah into serious scientific discussions. Uniformitarians by contrast believed that changes on Earth were gradual and that nearly all Earth processes happened slowly, over immense spans of time. Hutton was much more the father of the notion than Lyell, but it was Lyell most people read, and so he became in most people’s minds, then and now, the father of modern geological thought.
Lyell believed that the Earth’s shifts were uniform and steady-that everything that had ever happened in the past could be explained by events still going on today. Lyell and his adherents didn’t just disdain catastrophism, they detested it. Catastrophists believed that extinctions were part of a series in which animals were repeatedly wiped out and replaced with new sets-a belief that the naturalist T. H. Huxley mockingly likened to “a succession of rubbers of whist, at the end of which the players upset the table and called for a new pack.” It was too convenient a way to explain the unknown. “Never was there a dogma more calculated to foster indolence, and to blunt the keen edge of curiosity,” sniffed Lyell.
Lyell’s oversights were not inconsiderable. He failed to explain convincingly how mountain ranges were formed and overlooked glaciers as an agent of change. He refused to accept Louis Agassiz’s idea of ice ages-“the refrigeration of the globe,” as he dismissively termed it-and was confident that mammals “would be found in the oldest fossiliferous beds.” He rejected the notion that animals and plants suffered sudden annihilations, and believed that all the principal animal groups-mammals, reptiles, fish, and so on-had coexisted since the dawn of time. On all of these he would ultimately be proved wrong.
Yet it would be nearly impossible to overstate Lyell’s influence. The Principles of Geology went through twelve editions in Lyell’s lifetime and contained notions that shaped geological thinking far into the twentieth century. Darwin took a first edition with him on the Beagle voyage and wrote afterward that “the great merit of the Principles was that it altered the whole tone of one’s mind, and therefore that, when seeing a thing never seen by Lyell, one yet saw it partially through his eyes.” In short, he thought him nearly a god, as did many of his generation. It is a testament to the strength of Lyell’s sway that in the 1980s when geologists had to abandon just a part of it to accommodate the impact theory of extinctions, it nearly killed them. But that is another chapter.
Meanwhile, geology had a great deal of sorting out to do, and not all of it went smoothly. From the outset geologists tried to categorize rocks by the periods in which they were laid down, but there were often bitter disagreements about where to put the dividing lines-none more so than a long-running debate that became known as the Great Devonian Controversy. The issue arose when the Reverend Adam Sedgwick of Cambridge claimed for the Cambrian period a layer of rock that Roderick Murchison believed belonged rightly to the Silurian. The dispute raged for years and grew extremely heated. “De la Beche is a dirty dog,” Murchison wrote to a friend in a typical outburst.
Some sense of the strength of feeling can be gained by glancing through the chapter titles of Martin J. S. Rudwick’s excellent and somber account of the issue, The Great Devonian Controversy. These begin innocuously enough with headings such as “Arenas of Gentlemanly Debate” and “Unraveling the Greywacke,” but then proceed on to “The Greywacke Defended and Attacked,” “Reproofs and Recriminations,” “The Spread of Ugly Rumors,” “Weaver Recants His Heresy,” “Putting a Provincial in His Place,” and (in case there was any doubt that this was war) “Murchison Opens the Rhineland Campaign.” The fight was finally settled in 1879 with the simple expedient of coming up with a new period, the Ordovician, to be inserted between the two.
Because the British were the most active in the early years, British names are predominant in the geological lexicon. Devonian is of course from the English county of Devon. Cambrian comes from the Roman name for Wales, while Ordovician and Silurian recall ancient Welsh tribes, the Ordovices and Silures. But with the rise of geological prospecting elsewhere, names began to creep in from all over. Jurassic refers to the Jura Mountains on the border of France and Switzerland. Permian recalls the former Russian province of Perm in the Ural Mountains. For Cretaceous (from the Latin for “chalk”) we are indebted to a Belgian geologist with the perky name of J. J. d’Omalius d’Halloy.
Originally, geological history was divided into four spans of time: primary, secondary, tertiary, and quaternary. The system was too neat to last, and soon geologists were contributing additional divisions while eliminating others. Primary and secondary fell out of use altogether, while quaternary was discarded by some but kept by others. Today only tertiary remains as a common designation everywhere, even though it no longer represents a third period of anything.
Lyell, in his Principles, introduced additional units known as epochs or series to cover the period since the age of the dinosaurs, among them Pleistocene (“most recent”), Pliocene (“more recent”), Miocene (“moderately recent”), and the rather endearingly vague Oligocene (“but a little recent”). Lyell originally intended to employ “-synchronous” for his endings, giving us such crunchy designations as Meiosynchronous and Pleiosynchronous. The Reverend William Whewell, an influential man, objected on etymological grounds and suggested instead an “-eous” pattern, producing Meioneous, Pleioneous, and so on. The “-cene” terminations were thus something of a compromise.
Nowadays, and speaking very generally, geological time is divided first into four great chunks known as eras: Precambrian, Paleozoic (from the Greek meaning “old life”), Mesozoic (“middle life”), and Cenozoic (“recent life”). These four eras are further divided into anywhere from a dozen to twenty subgroups, usually called periods though sometimes known as systems. Most of these are also reasonably well known: Cretaceous, Jurassic, Triassic, Silurian, and so on.[8]
Then come Lyell’s epochs-the Pleistocene, Miocene, and so on-which apply only to the most recent (but paleontologically busy) sixty-five million years, and finally we have a mass of finer subdivisions known as stages or ages. Most of these are named, nearly always awkwardly, after places: Illinoian, Desmoinesian, Croixian, Kimmeridgian, and so on in like vein. Altogether, according to John McPhee, these number in the “tens of dozens.” Fortunately, unless you take up geology as a career, you are unlikely ever to hear any of them again.
Further confusing the matter is that the stages or ages in North America have different names from the stages in Europe and often only roughly intersect in time. Thus the North American Cincinnatian stage mostly corresponds with the Ashgillian stage in Europe, plus a tiny bit of the slightly earlier Caradocian stage.
Also, all this changes from textbook to textbook and from person to person, so that some authorities describe seven recent epochs, while others are content with four. In some books, too, you will find the tertiary and quaternary taken out and replaced by periods of different lengths called the Palaeogene and Neogene. Others divide the Precambrian into two eras, the very ancient Archean and the more recent Proterozoic. Sometimes too you will see the term Phanerozoic used to describe the span encompassing the Cenozoic, Mesozoic, and Paleozoic eras.
Moreover, all this applies only to units of time. Rocks are divided into quite separate units known as systems, series, and stages. A distinction is also made between late and early (referring to time) and upper and lower (referring to layers of rock). It can all get terribly confusing to nonspecialists, but to a geologist these can be matters of passion. “I have seen grown men glow incandescent with rage over this metaphorical millisecond in life’s history,” the British paleontologist Richard Fortey has written with regard to a long-running twentieth-century dispute over where the boundary lies between the Cambrian and Ordovician.
At least today we can bring some sophisticated dating techniques to the table. For most of the nineteenth century geologists could draw on nothing more than the most hopeful guesswork. The frustrating position then was that although they could place the various rocks and fossils in order by age, they had no idea how long any of those ages were. When Buckland speculated on the antiquity of an Ichthyosaurus skeleton he could do no better than suggest that it had lived somewhere between “ten thousand, or more than ten thousand times ten thousand” years earlier.
Although there was no reliable way of dating periods, there was no shortage of people willing to try. The most well known early attempt was in 1650 when Archbishop James Ussher of the Church of Ireland made a careful study of the Bible and other historical sources and concluded, in a hefty tome called Annals of the Old Testament, that the Earth had been created at midday on October 23, 4004 B.C., an assertion that has amused historians and textbook writers ever since.[9]
There is a persistent myth, incidentally-and one propounded in many serious books-that Ussher’s views dominated scientific beliefs well into the nineteenth century, and that it was Lyell who put everyone straight. Stephen Jay Gould, in Time’s Arrow, cites as a typical example this sentence from a popular book of the 1980s: “Until Lyell published his book, most thinking people accepted the idea that the earth was young.” In fact, no. As Martin J. S. Rudwick puts it, “No geologist of any nationality whose work was taken seriously by other geologists advocated a timescale confined within the limits of a literalistic exegesis of Genesis.” Even the Reverend Buckland, as pious a soul as the nineteenth century produced, noted that nowhere did the Bible suggest that God made Heaven and Earth on the first day, but merely “in the beginning.” That beginning, he reasoned, may have lasted “millions upon millions of years.” Everyone agreed that the Earth was ancient. The question was simply how ancient.
One of the better early attempts at dating the planet came from the ever-reliable Edmond Halley, who in 1715 suggested that if you divided the total amount of salt in the world’s seas by the amount added each year, you would get the number of years that the oceans had been in existence, which would give you a rough idea of Earth’s age. The logic was appealing, but unfortunately no one knew how much salt was in the sea or by how much it increased each year, which rendered the experiment impracticable.
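Reduced to modern notation, Halley's clock is a single division. The sketch below is only that, a sketch, with symbols standing in for the two measurements that no one in 1715 possessed:

    t_ocean \approx \frac{M_{\text{salt}}}{F_{\text{salt}}}

where M is the total mass of salt dissolved in the oceans and F is the mass the rivers deliver each year. The scheme also quietly assumes that salt only ever accumulates and that the inflow never varies, which is where such calculations would eventually come to grief.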
The first attempt at measurement that could be called remotely scientific was made by the Frenchman Georges-Louis Leclerc, Comte de Buffon, in the 1770s. It had long been known that the Earth radiated appreciable amounts of heat-that was apparent to anyone who went down a coal mine-but there wasn’t any way of estimating the rate of dissipation. Buffon’s experiment consisted of heating spheres until they glowed white hot and then estimating the rate of heat loss by touching them (presumably very lightly at first) as they cooled. From this he guessed the Earth’s age to be somewhere between 75,000 and 168,000 years old. This was of course a wild underestimate, but a radical notion nonetheless, and Buffon found himself threatened with excommunication for expressing it. A practical man, he apologized at once for his thoughtless heresy, then cheerfully repeated the assertions throughout his subsequent writings.
By the middle of the nineteenth century most learned people thought the Earth was at least a few million years old, perhaps even some tens of millions of years old, but probably not more than that. So it came as a surprise when, in 1859 in On the Origin of Species, Charles Darwin announced that the geological processes that created the Weald, an area of southern England stretching across Kent, Surrey, and Sussex, had taken, by his calculations, 306,662,400 years to complete. The assertion was remarkable partly for being so arrestingly specific but even more for flying in the face of accepted wisdom about the age of the Earth.[10] It proved so contentious that Darwin withdrew it from the third edition of the book. The problem at its heart remained, however. Darwin and his geological friends needed the Earth to be old, but no one could figure out a way to make it so.
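For the curious, Darwin's arithmetic can be reconstructed from the assumptions he stated in the Origin, namely that the Weald is about twenty-two miles across and that the sea would cut back a cliff 500 feet high at roughly one inch per century, slowed in proportion for the Weald's thicker strata of about 1,100 feet:

    \frac{22 \times 63{,}360 \text{ inches}}{\tfrac{500}{1100} \text{ inch per century}} = 3{,}066{,}624 \text{ centuries} = 306{,}662{,}400 \text{ years}

The arresting precision, in other words, was an artifact of the inputs, not of any real knowledge.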
Unfortunately for Darwin, and for progress, the question came to the attention of the great Lord Kelvin (who, though indubitably great, was then still just plain William Thomson; he wouldn’t be elevated to the peerage until 1892, when he was sixty-eight years old and nearing the end of his career, but I shall follow the convention here of using the name retroactively). Kelvin was one of the most extraordinary figures of the nineteenth century-indeed of any century. The German scientist Hermann von Helmholtz, no intellectual slouch himself, wrote that Kelvin had by far the greatest “intelligence and lucidity, and mobility of thought” of any man he had ever met. “I felt quite wooden beside him sometimes,” he added, a bit dejectedly.
The sentiment is understandable, for Kelvin really was a kind of Victorian superman. He was born in 1824 in Belfast, the son of a professor of mathematics at the Royal Academical Institution who soon after transferred to Glasgow. There Kelvin proved himself such a prodigy that he was admitted to Glasgow University at the exceedingly tender age of ten. By the time he had reached his early twenties, he had studied at institutions in London and Paris, graduated from Cambridge (where he won the university’s top prizes for rowing and mathematics, and somehow found time to launch a musical society as well), been elected a fellow of Peterhouse, and written (in French and English) a dozen papers in pure and applied mathematics of such dazzling originality that he had to publish them anonymously for fear of embarrassing his superiors. At the age of twenty-two he returned to Glasgow University to take up a professorship in natural philosophy, a position he would hold for the next fifty-three years.
In the course of a long career (he lived till 1907 and the age of eighty-three), he wrote 661 papers, accumulated 69 patents (from which he grew abundantly wealthy), and gained renown in nearly every branch of the physical sciences. Among much else, he suggested the method that led directly to the invention of refrigeration, devised the scale of absolute temperature that still bears his name, invented the boosting devices that allowed telegrams to be sent across oceans, and made innumerable improvements to shipping and navigation, from the invention of a popular marine compass to the creation of the first depth sounder. And those were merely his practical achievements.
His theoretical work, in electromagnetism, thermodynamics, and the wave theory of light, was equally revolutionary.[11] He had really only one flaw and that was an inability to calculate the correct age of the Earth. The question occupied much of the second half of his career, but he never came anywhere near getting it right. His first effort, in 1862 for an article in a popular magazine called Macmillan’s, suggested that the Earth was 98 million years old, but cautiously allowed that the figure could be as low as 20 million years or as high as 400 million. With remarkable prudence he acknowledged that his calculations could be wrong if “sources now unknown to us are prepared in the great storehouse of creation”-but it was clear that he thought that unlikely.
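The physics behind that first figure was, at heart, a cooling problem, and the standard textbook form of the argument (a sketch of the kind of calculation Kelvin made, not his exact working) runs as follows. Treat the Earth as a body that began uniformly molten at some temperature T_0 and has been losing heat by conduction ever since; Fourier's theory then says that the temperature gradient G measured near the surface falls off with time as G = T_0 / \sqrt{\pi \kappa t}, where \kappa is the thermal diffusivity of rock. Solving for the age gives

    t \approx \frac{T_0^2}{\pi \kappa G^2}

Plug in an assumed molten temperature, a diffusivity measured in the laboratory, and the gradient observed down mineshafts, and out come a few tens of millions of years. The weak link, invisible at the time, was the premise that no new heat is being generated inside the Earth, precisely the "sources now unknown" that Kelvin himself had allowed for.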
With the passage of time Kelvin would become more forthright in his assertions and less correct. He continually revised his estimates downward, from a maximum of 400 million years, to 100 million years, to 50 million years, and finally, in 1897, to a mere 24 million years. Kelvin wasn’t being willful. It was simply that there was nothing in physics that could explain how a body the size of the Sun could burn continuously for more than a few tens of millions of years at most without exhausting its fuel. Therefore it followed that the Sun and its planets were relatively, but inescapably, youthful.
The problem was that nearly all the fossil evidence contradicted this, and suddenly in the nineteenth century there was a lot of fossil evidence.
IN 1787, SOMEONE in New Jersey-exactly who now seems to be forgotten-found an enormous thighbone sticking out of a stream bank at a place called Woodbury Creek. The bone clearly didn’t belong to any species of creature still alive, certainly not in New Jersey. From what little is known now, it is thought to have belonged to a hadrosaur, a large duck-billed dinosaur. At the time, dinosaurs were unknown.
The bone was sent to Dr. Caspar Wistar, the nation’s leading anatomist, who described it at a meeting of the American Philosophical Society in Philadelphia that autumn. Unfortunately, Wistar failed completely to recognize the bone’s significance and merely made a few cautious and uninspired remarks to the effect that it was indeed a whopper. He thus missed the chance, half a century ahead of anyone else, to be the discoverer of dinosaurs. Indeed, the bone excited so little interest that it was put in a storeroom and eventually disappeared altogether. So the first dinosaur bone ever found was also the first to be lost.
That the bone didn’t attract greater interest is more than a little puzzling, for its appearance came at a time when America was in a froth of excitement about the remains of large, ancient animals. The cause of this froth was a strange assertion by the great French naturalist the Comte de Buffon-he of the heated spheres from the previous chapter-that living things in the New World were inferior in nearly every way to those of the Old World. America, Buffon wrote in his vast and much-esteemed Histoire Naturelle, was a land where the water was stagnant, the soil unproductive, and the animals without size or vigor, their constitutions weakened by the “noxious vapors” that rose from its rotting swamps and sunless forests. In such an environment even the native Indians lacked virility. “They have no beard or body hair,” Buffon sagely confided, “and no ardor for the female.” Their reproductive organs were “small and feeble.”
Buffon’s observations found surprisingly eager support among other writers, especially those whose conclusions were not complicated by actual familiarity with the country. A Dutchman named Corneille de Pauw announced in a popular work called Recherches Philosophiques sur les Américains that native American males were not only reproductively unimposing, but “so lacking in virility that they had milk in their breasts.” Such views enjoyed an improbable durability and could be found repeated or echoed in European texts till near the end of the nineteenth century.
Not surprisingly, such aspersions were indignantly met in America. Thomas Jefferson incorporated a furious (and, unless the context is understood, quite bewildering) rebuttal in his Notes on the State of Virginia, and induced his New Hampshire friend General John Sullivan to send twenty soldiers into the northern woods to find a bull moose to present to Buffon as proof of the stature and majesty of American quadrupeds. It took the men two weeks to track down a suitable subject. The moose, when shot, unfortunately lacked the imposing horns that Jefferson had specified, but Sullivan thoughtfully included a rack of antlers from an elk or stag with the suggestion that these be attached instead. Who in France, after all, would know?
Meanwhile in Philadelphia-Wistar’s city-naturalists had begun to assemble the bones of a giant elephant-like creature known at first as “the great American incognitum” but later identified, not quite correctly, as a mammoth. The first of these bones had been discovered at a place called Big Bone Lick in Kentucky, but soon others were turning up all over. America, it appeared, had once been the home of a truly substantial creature-one that would surely disprove Buffon’s foolish Gallic contentions.
In their keenness to demonstrate the incognitum’s bulk and ferocity, the American naturalists appear to have become slightly carried away. They overestimated its size by a factor of six and gave it frightening claws, which in fact came from a Megalonyx, or giant ground sloth, found nearby. Rather remarkably, they persuaded themselves that the animal had enjoyed “the agility and ferocity of the tiger,” and portrayed it in illustrations as pouncing with feline grace onto prey from boulders. When tusks were discovered, they were forced into the animal’s head in any number of inventive ways. One restorer screwed the tusks in upside down, like the fangs of a saber-toothed cat, which gave it a satisfyingly aggressive aspect. Another arranged the tusks so that they curved backwards on the engaging theory that the creature had been aquatic and had used them to anchor itself to trees while dozing. The most pertinent consideration about the incognitum, however, was that it appeared to be extinct-a fact that Buffon cheerfully seized upon as proof of its incontestably degenerate nature.
Buffon died in 1788, but the controversy rolled on. In 1795 a selection of bones made their way to Paris, where they were examined by the rising star of paleontology, the youthful and aristocratic Georges Cuvier. Cuvier was already dazzling people with his genius for taking heaps of disarticulated bones and whipping them into shapely forms. It was said that he could describe the look and nature of an animal from a single tooth or scrap of jaw, and often name the species and genus into the bargain. Realizing that no one in America had thought to write a formal description of the lumbering beast, Cuvier did so, and thus became its official discoverer. He called it a mastodon (which means, a touch unexpectedly, “nipple-teeth”).
Inspired by the controversy, in 1796 Cuvier wrote a landmark paper, Note on the Species of Living and Fossil Elephants, in which he put forward for the first time a formal theory of extinctions. His belief was that from time to time the Earth experienced global catastrophes in which groups of creatures were wiped out. For religious people, including Cuvier himself, the idea raised uncomfortable implications since it suggested an unaccountable casualness on the part of Providence. To what end would God create species only to wipe them out later? The notion was contrary to the belief in the Great Chain of Being, which held that the world was carefully ordered and that every living thing within it had a place and purpose, and always had and always would. Jefferson for one couldn’t abide the thought that whole species would ever be permitted to vanish (or, come to that, to evolve). So when it was put to him that there might be scientific and political value in sending a party to explore the interior of America beyond the Mississippi he leapt at the idea, hoping the intrepid adventurers would find herds of healthy mastodons and other outsized creatures grazing on the bounteous plains. Jefferson’s personal secretary and trusted friend Meriwether Lewis was chosen co-leader and chief naturalist for the expedition. The person selected to advise him on what to look out for with regard to animals living and deceased was none other than Caspar Wistar.
In the same year-in fact, the same month-that the aristocratic and celebrated Cuvier was propounding his extinction theories in Paris, on the other side of the English Channel a rather more obscure Englishman was having an insight into the value of fossils that would also have lasting ramifications. William Smith was a young supervisor of construction on the Somerset Coal Canal. On the evening of January 5, 1796, he was sitting in a coaching inn in Somerset when he jotted down the notion that would eventually make his reputation. To interpret rocks, there needs to be some means of correlation, a basis on which you can tell that those carboniferous rocks from Devon are younger than these Cambrian rocks from Wales. Smith’s insight was to realize that the answer lay with fossils. At every change in rock strata certain species of fossils disappeared while others carried on into subsequent levels. By noting which species appeared in which strata, you could work out the relative ages of rocks wherever they appeared. Drawing on his knowledge as a surveyor, Smith began at once to make a map of Britain’s rock strata, which would be published after many trials in 1815 and would become a cornerstone of modern geology. (The story is comprehensively covered in Simon Winchester’s popular book The Map That Changed the World.)
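Smith's principle lends itself to a little mechanical illustration. The toy sketch below, with wholly invented strata and fossil names, matches beds at two sites by the species they share, which is essentially what correlation by fossils amounts to:

    # A toy illustration of Smith's faunal correlation: strata from two
    # sites are matched by the fossil species they share. All strata and
    # species names here are invented for the example.

    site_a = {                      # listed from oldest to youngest
        "greywacke": {"trilobite_x", "brachiopod_y"},
        "limestone": {"brachiopod_y", "crinoid_z"},
        "chalk":     {"ammonite_q", "urchin_r"},
    }
    site_b = {
        "shale":    {"trilobite_x", "brachiopod_y"},
        "mudstone": {"ammonite_q", "urchin_r"},
    }

    def correlate(a, b):
        """Pair each stratum at one site with the stratum at the other
        site that shares the most fossil species with it, if any."""
        pairs = []
        for stratum, fossils in a.items():
            match, shared = max(b.items(), key=lambda kv: len(fossils & kv[1]))
            if fossils & shared:
                pairs.append((stratum, match))
        return pairs

    print(correlate(site_a, site_b))
    # [('greywacke', 'shale'), ('limestone', 'shale'), ('chalk', 'mudstone')]

Real correlation is of course far messier than this, but the logic, that shared fossils mean comparable age, is Smith's.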
Unfortunately, having had his insight, Smith was curiously uninterested in understanding why rocks were laid down in the way they were. “I have left off puzzling about the origin of Strata and content myself with knowing that it is so,” he recorded. “The whys and wherefores cannot come within the Province of a Mineral Surveyor.”
Smith’s revelation regarding strata heightened the moral awkwardness concerning extinctions. To begin with, it confirmed that God had wiped out creatures not occasionally but repeatedly. This made Him seem not so much careless as peculiarly hostile. It also made it inconveniently necessary to explain how some species were wiped out while others continued unimpeded into succeeding eons. Clearly there was more to extinctions than could be accounted for by a single Noachian deluge, as the Biblical flood was known. Cuvier resolved the matter to his own satisfaction by suggesting that Genesis applied only to the most recent inundation. God, it appeared, hadn’t wished to distract or alarm Moses with news of earlier, irrelevant extinctions.
So by the early years of the nineteenth century, fossils had taken on a certain inescapable importance, which makes Wistar’s failure to see the significance of his dinosaur bone all the more unfortunate. Suddenly, in any case, bones were turning up all over. Several other opportunities arose for Americans to claim the discovery of dinosaurs but all were wasted. In 1806 the Lewis and Clark expedition passed through the Hell Creek formation in Montana, an area where fossil hunters would later literally trip over dinosaur bones, and even examined what was clearly a dinosaur bone embedded in rock, but failed to make anything of it. Other bones and fossilized footprints were found in the Connecticut River Valley of New England after a farm boy named Plinus Moody spied ancient tracks on a rock ledge at South Hadley, Massachusetts. Some of these at least survive-notably the bones of an Anchisaurus, which are in the collection of the Peabody Museum at Yale. Found in 1818, they were the first dinosaur bones to be examined and saved, but unfortunately weren’t recognized for what they were until 1855. In that same year, 1818, Caspar Wistar died, but he did gain a certain unexpected immortality when a botanist named Thomas Nuttall named a delightful climbing shrub after him. Some botanical purists still insist on spelling it wistaria.
By this time, however, paleontological momentum had moved to England. In 1812, at Lyme Regis on the Dorset coast, an extraordinary child named Mary Anning-aged eleven, twelve, or thirteen, depending on whose account you read-found a strange fossilized sea monster, seventeen feet long and now known as the ichthyosaurus, embedded in the steep and dangerous cliffs along the English Channel.
It was the start of a remarkable career. Anning would spend the next thirty-five years gathering fossils, which she sold to visitors. (She is commonly held to be the source for the famous tongue twister “She sells seashells on the seashore.”) She would also find the first plesiosaurus, another marine monster, and one of the first and best pterodactyls. Though none of these was technically a dinosaur, that wasn’t terribly relevant at the time since nobody then knew what a dinosaur was. It was enough to realize that the world had once held creatures strikingly unlike anything we might now find.
It wasn’t simply that Anning was good at spotting fossils-though she was unrivaled at that-but that she could extract them with the greatest delicacy and without damage. If you ever have the chance to visit the hall of ancient marine reptiles at the Natural History Museum in London, I urge you to take it, for there is no other way to appreciate the scale and beauty of what this young woman achieved working virtually unaided with the most basic tools in nearly impossible conditions. The plesiosaur alone took her ten years of patient excavation. Although untrained, Anning was also able to provide competent drawings and descriptions for scholars. But even with the advantage of her skills, significant finds were rare and she passed most of her life in poverty.
It would be hard to think of a more overlooked person in the history of paleontology than Mary Anning, but in fact there was one who came painfully close. His name was Gideon Algernon Mantell and he was a country doctor in Sussex.
Mantell was a lanky assemblage of shortcomings-he was vain, self-absorbed, priggish, neglectful of his family-but never was there a more devoted amateur paleontologist. He was also lucky to have a devoted and observant wife. In 1822, while he was making a house call on a patient in rural Sussex, Mrs. Mantell went for a stroll down a nearby lane and in a pile of rubble that had been left to fill potholes she found a curious object-a curved brown stone, about the size of a small walnut. Knowing her husband’s interest in fossils, and thinking it might be one, she took it to him. Mantell could see at once it was a fossilized tooth, and after a little study became certain that it was from an animal that was herbivorous, reptilian, extremely large-tens of feet long-and from the Cretaceous period. He was right on all counts, but these were bold conclusions since nothing like it had been seen before or even imagined.
Aware that his finding would entirely upend what was understood about the past, and urged by his friend the Reverend William Buckland-he of the gowns and experimental appetite-to proceed with caution, Mantell devoted three painstaking years to seeking evidence to support his conclusions. He sent the tooth to Cuvier in Paris for an opinion, but the great Frenchman dismissed it as being from a hippopotamus. (Cuvier later apologized handsomely for this uncharacteristic error.) One day while doing research at the Hunterian Museum in London, Mantell fell into conversation with a fellow researcher who told him the tooth looked very like those of animals he had been studying, South American iguanas. A hasty comparison confirmed the resemblance. And so Mantell’s creature became Iguanodon, after a basking tropical lizard to which it was not in any manner related.
Mantell prepared a paper for delivery to the Royal Society. Unfortunately it emerged that another dinosaur had been found at a quarry in Oxfordshire and had just been formally described-by the Reverend Buckland, the very man who had urged him not to work in haste. It was the Megalosaurus, and the name was actually suggested to Buckland by his friend Dr. James Parkinson, the would-be radical and eponym for Parkinson’s disease. Buckland, it may be recalled, was foremost a geologist, and he showed it with his work on Megalosaurus. In his report, for the Transactions of the Geological Society of London, he noted that the creature’s teeth were not attached directly to the jawbone as in lizards but placed in sockets in the manner of crocodiles. But having noticed this much, Buckland failed to realize what it meant: Megalosaurus was an entirely new type of creature. So although his report demonstrated little acuity or insight, it was still the first published description of a dinosaur, and so to him rather than the far more deserving Mantell goes the credit for the discovery of this ancient line of beings.
Unaware that disappointment was going to be a continuing feature of his life, Mantell continued hunting for fossils-he found another giant, the Hylaeosaurus, in 1833-and purchasing others from quarrymen and farmers until he had probably the largest fossil collection in Britain. Mantell was an excellent doctor and equally gifted bone hunter, but he found it impossible to sustain both callings at once. As his collecting mania grew, he neglected his medical practice. Soon fossils filled nearly the whole of his house in Brighton and consumed much of his income. Much of the rest went to underwriting the publication of books that few cared to own. Illustrations of the Geology of Sussex, published in 1827, sold only fifty copies and left him £300 out of pocket-an uncomfortably substantial sum for the times.
In some desperation Mantell hit on the idea of turning his house into a museum and charging admission, then belatedly realized that such a mercenary act would ruin his standing as a gentleman, not to mention as a scientist, and so he allowed people to visit the house for free. They came in their hundreds, week after week, disrupting both his practice and his home life. Eventually he was forced to sell most of his collection to pay off his debts. Soon after, his wife left him, taking their four children with her.
Remarkably, his troubles were only just beginning.
In the district of Sydenham in south London, at a place called Crystal Palace Park, there stands a strange and forgotten sight: the world’s first life-sized models of dinosaurs. Not many people travel there these days, but once this was one of the most popular attractions in London-in effect, as Richard Fortey has noted, the world’s first theme park. Quite a lot about the models is not strictly correct. The iguanodon’s thumb has been placed on its nose, as a kind of spike, and it stands on four sturdy legs, making it look like a rather stout and awkwardly overgrown dog. (In life, the iguanodon did not crouch on all fours, but was bipedal.) Looking at them now you would scarcely guess that these odd and lumbering beasts could cause great rancor and bitterness, but they did. Perhaps nothing in natural history has been at the center of fiercer and more enduring hatreds than the line of ancient beasts known as dinosaurs.
At the time of the dinosaurs’ construction, Sydenham was on the edge of London and its spacious park was considered an ideal place to re-erect the famous Crystal Palace, the glass and cast-iron structure that had been the centerpiece of the Great Exhibition of 1851, and from which the new park naturally took its name. The dinosaurs, built of concrete, were a kind of bonus attraction. On New Year’s Eve 1853 a famous dinner for twenty-one prominent scientists was held inside the unfinished iguanodon. Gideon Mantell, the man who had found and identified the iguanodon, was not among them. The person at the head of the table was the greatest star of the young science of paleontology. His name was Richard Owen and by this time he had already devoted several productive years to making Gideon Mantell’s life hell.
Owen had grown up in Lancaster, in the north of England, where he had trained as a doctor. He was a born anatomist and so devoted to his studies that he sometimes illicitly borrowed limbs, organs, and other parts from cadavers and took them home for leisurely dissection. Once while carrying a sack containing the head of a black African sailor that he had just removed, Owen slipped on a wet cobble and watched in horror as the head bounced away from him down the lane and through the open doorway of a cottage, where it came to rest in the front parlor. What the occupants had to say upon finding an unattached head rolling to a halt at their feet can only be imagined. One assumes that they had not formed any terribly advanced conclusions when, an instant later, a fraught-looking young man rushed in, wordlessly retrieved the head, and rushed out again.
In 1825, aged just twenty-one, Owen moved to London and soon after was engaged by the Royal College of Surgeons to help organize their extensive, but disordered, collections of medical and anatomical specimens. Most of these had been left to the institution by John Hunter, a distinguished surgeon and tireless collector of medical curiosities, but had never been catalogued or organized, largely because the paperwork explaining the significance of each had gone missing soon after Hunter’s death.
Owen swiftly distinguished himself with his powers of organization and deduction. At the same time he showed himself to be a peerless anatomist with instincts for reconstruction almost on a par with the great Cuvier in Paris. He became such an expert on the anatomy of animals that he was granted first refusal on any animal that died at the London Zoological Gardens, and these he would invariably have delivered to his house for examination. Once his wife returned home to find a freshly deceased rhinoceros filling the front hallway. He quickly became a leading expert on all kinds of animals living and extinct-from platypuses, echidnas, and other newly discovered marsupials to the hapless dodo and the extinct giant birds called moas that had roamed New Zealand until eaten out of existence by the Maoris. He was the first to describe the archaeopteryx after its discovery in Bavaria in 1861 and the first to write a formal epitaph for the dodo. Altogether he produced some six hundred anatomical papers, a prodigious output.
But it was for his work with dinosaurs that Owen is remembered. He coined the term dinosauria in 1841. It means “terrible lizard” and was a curiously inapt name. Dinosaurs, as we now know, weren’t all terrible-some were no bigger than rabbits and probably extremely retiring-and the one thing they most emphatically were not was lizards, which are actually of a much older (by thirty million years) lineage. Owen was well aware that the creatures were reptilian and had at his disposal a perfectly good Greek word, herpeton, but for some reason chose not to use it. Another, more excusable error (given the paucity of specimens at the time) was his failure to notice that dinosaurs constitute not one but two orders of reptiles: the bird-hipped ornithischians and the lizard-hipped saurischians.
Owen was not an attractive person, in appearance or in temperament. A photograph from his late middle years shows him as gaunt and sinister, like the villain in a Victorian melodrama, with long, lank hair and bulging eyes-a face to frighten babies. In manner he was cold and imperious, and he was without scruple in the furtherance of his ambitions. He was the only person Charles Darwin was ever known to hate. Even Owen’s son (who later took his own life) referred to his father’s “lamentable coldness of heart.”
His undoubted gifts as an anatomist allowed him to get away with the most barefaced dishonesties. In 1857, the naturalist T. H. Huxley was leafing through a new edition of Churchill’s Medical Directory when he noticed that Owen was listed as Professor of Comparative Anatomy and Physiology at the Government School of Mines, which rather surprised Huxley as that was the position he held. Upon inquiring how Churchill’s had made such an elemental error, he was told that the information had been provided to them by Dr. Owen himself. A fellow naturalist named Hugh Falconer, meanwhile, caught Owen taking credit for one of his discoveries. Others accused him of borrowing specimens, then denying he had done so. Owen even fell into a bitter dispute with the Queen’s dentist over the credit for a theory concerning the physiology of teeth.
He did not hesitate to persecute those whom he disliked. Early in his career Owen used his influence at the Zoological Society to blackball a young man named Robert Grant whose only crime was to have shown promise as a fellow anatomist. Grant was astonished to discover that he was suddenly denied access to the anatomical specimens he needed to conduct his research. Unable to pursue his work, he sank into an understandably dispirited obscurity.
But no one suffered more from Owen’s unkindly attentions than the hapless and increasingly tragic Gideon Mantell. After losing his wife, his children, his medical practice, and most of his fossil collection, Mantell moved to London. There in 1841-the fateful year in which Owen would achieve his greatest glory for naming and identifying the dinosaurs-Mantell was involved in a terrible accident. While crossing Clapham Common in a carriage, he somehow fell from his seat, grew entangled in the reins, and was dragged at a gallop over rough ground by the panicked horses. The accident left him bent, crippled, and in chronic pain, with a spine damaged beyond repair.
Capitalizing on Mantell’s enfeebled state, Owen set about systematically expunging Mantell’s contributions from the record, renaming species that Mantell had named years before and claiming credit for their discovery for himself. Mantell continued to try to do original research but Owen used his influence at the Royal Society to ensure that most of his papers were rejected. In 1852, unable to bear any more pain or persecution, Mantell took his own life. His deformed spine was removed and sent to the Royal College of Surgeons where-and now here’s an irony for you-it was placed in the care of Richard Owen, director of the college’s Hunterian Museum.
But the insults had not quite finished. Soon after Mantell’s death an arrestingly uncharitable obituary appeared in the Literary Gazette. In it Mantell was characterized as a mediocre anatomist whose modest contributions to paleontology were limited by a “want of exact knowledge.” The obituary even removed the discovery of the iguanodon from him and credited it instead to Cuvier and Owen, among others. Though the piece carried no byline, the style was Owen’s and no one in the world of the natural sciences doubted the authorship.
By this stage, however, Owen’s transgressions were beginning to catch up with him. His undoing began when a committee of the Royal Society-a committee of which he happened to be chairman-decided to award him its highest honor, the Royal Medal, for a paper he had written on an extinct mollusc called the belemnite. “However,” as Deborah Cadbury notes in her excellent history of the period, Terrible Lizard, “this piece of work was not quite as original as it appeared.” The belemnite, it turned out, had been discovered four years earlier by an amateur naturalist named Chaning Pearce, and the discovery had been fully reported at a meeting of the Geological Society. Owen had been at that meeting, but failed to mention this when he presented a report of his own to the Royal Society-in which, not incidentally, he rechristened the creature Belemnites owenii in his own honor. Although Owen was allowed to keep the Royal Medal, the episode left a permanent tarnish on his reputation, even among his few remaining supporters.
Eventually Huxley managed to do to Owen what Owen had done to so many others: he had him voted off the councils of the Zoological and Royal societies. As a final insult Huxley became the new Hunterian Professor at the Royal College of Surgeons.
Owen would never again do important research, but the latter half of his career was devoted to one unexceptionable pursuit for which we can all be grateful. In 1856 he became head of the natural history section of the British Museum, in which capacity he became the driving force behind the creation of London’s Natural History Museum. The grand and beloved Gothic heap in South Kensington, opened in 1881, is almost entirely a testament to his vision.
Before Owen, museums were designed primarily for the use and edification of the elite, and even then it was difficult to gain access. In the early days of the British Museum, prospective visitors had to make a written application and undergo a brief interview to determine if they were fit to be admitted at all. They then had to return a second time to pick up a ticket-that is assuming they had passed the interview-and finally come back a third time to view the museum’s treasures. Even then they were whisked through in groups and not allowed to linger. Owen’s plan was to welcome everyone, even to the point of encouraging workingmen to visit in the evening, and to devote most of the museum’s space to public displays. He even proposed, very radically, to put informative labels on each display so that people could appreciate what they were viewing. In this, somewhat unexpectedly, he was opposed by T. H. Huxley, who believed that museums should be primarily research institutes. By making the Natural History Museum an institution for everyone, Owen transformed our expectations of what museums are for.
Still, his altruism in general toward his fellow man did not deflect him from more personal rivalries. One of his last official acts was to lobby against a proposal to erect a statue in memory of Charles Darwin. In this he failed-though he did achieve a certain belated, inadvertent triumph. Today his statue commands a masterly view from the staircase of the main hall in the Natural History Museum, while Darwin and T. H. Huxley are consigned somewhat obscurely to the museum coffee shop, where they stare gravely over people snacking on cups of tea and jam doughnuts.
It would be reasonable to suppose that Richard Owen’s petty rivalries marked the low point of nineteenth-century paleontology, but in fact worse was to come, this time from overseas. In America in the closing decades of the century there arose a rivalry even more spectacularly venomous, if not quite as destructive. It was between two strange and ruthless men, Edward Drinker Cope and Othniel Charles Marsh.
They had much in common. Both were spoiled, driven, self-centered, quarrelsome, jealous, mistrustful, and ever unhappy. Between them they changed the world of paleontology.
They began as mutual friends and admirers, even naming fossil species after each other, and spent a pleasant week together in 1868. However, something then went wrong between them-nobody is quite sure what-and by the following year they had developed an enmity that would grow into consuming hatred over the next thirty years. It is probably safe to say that no two people in the natural sciences have ever despised each other more.
Marsh, the elder of the two by eight years, was a retiring and bookish fellow, with a trim beard and dapper manner, who spent little time in the field and was seldom very good at finding things when he was there. On a visit to the famous dinosaur fields of Como Bluff, Wyoming, he failed to notice the bones that were, in the words of one historian, “lying everywhere like logs.” But he had the means to buy almost anything he wanted. Although he came from a modest background-his father was a farmer in upstate New York-his uncle was the supremely rich and extraordinarily indulgent financier George Peabody. When Marsh showed an interest in natural history, Peabody had a museum built for him at Yale and provided funds sufficient for Marsh to fill it with almost whatever took his fancy.
Cope was born more directly into privilege-his father was a rich Philadelphia businessman-and was by far the more adventurous of the two. In the summer of 1876 in Montana while George Armstrong Custer and his troops were being cut down at Little Big Horn, Cope was out hunting for bones nearby. When it was pointed out to him that this was probably not the most prudent time to be taking treasures from Indian lands, Cope thought for a minute and decided to press on anyway. He was having too good a season. At one point he ran into a party of suspicious Crow Indians, but he managed to win them over by repeatedly taking out and replacing his false teeth.
For a decade or so, Marsh and Cope’s mutual dislike primarily took the form of quiet sniping, but in 1877 it erupted into grandiose dimensions. In that year a Colorado schoolteacher named Arthur Lakes found bones near Morrison while out hiking with a friend. Recognizing the bones as coming from a “gigantic saurian,” Lakes thoughtfully dispatched some samples to both Marsh and Cope. A delighted Marsh sent Lakes a hundred dollars for his trouble and asked him not to tell anyone of his discovery, especially Cope. Confused, Lakes now asked Cope to pass the bones on to Marsh. Cope did so, but it was an affront that he would never forget.
It also marked the start of a war between the two that became increasingly bitter, underhand, and often ridiculous. Their rival teams of diggers were sometimes reduced to throwing rocks at each other. Cope was caught at one point jimmying open crates that belonged to Marsh. They insulted each other in print and each poured scorn on the other’s results. Seldom-perhaps never-has science been driven forward more swiftly and successfully by animosity. Over the next several years the two men between them increased the number of known dinosaur species in America from 9 to almost 150. Nearly every dinosaur that the average person can name-stegosaurus, brontosaurus, diplodocus, triceratops-was found by one or the other of them.[12] Unfortunately, they worked in such reckless haste that they often failed to note that a new discovery was something already known. Between them they managed to “discover” a species called Uintatheres anceps no fewer than twenty-two times. It took years to sort out some of the classification messes they made. Some are not sorted out yet.
Of the two, Cope’s scientific legacy was much the more substantial. In a breathtakingly industrious career, he wrote some 1,400 learned papers and described almost 1,300 new species of fossil (of all types, not just dinosaurs)-more than double Marsh’s output in both cases. Cope might have done even more, but unfortunately he went into a rather precipitate descent in his later years. Having inherited a fortune in 1875, he invested unwisely in silver and lost everything. He ended up living in a single room in a Philadelphia boarding house, surrounded by books, papers, and bones. Marsh by contrast finished his days in a splendid mansion in New Haven. Cope died in 1897, Marsh two years later.
In his final years, Cope developed one other interesting obsession. It became his earnest wish to be declared the type specimen for Homo sapiens-that is, that his bones would be the official set for the human race. Normally, the type specimen of a species is the first set of bones found, but since no first set of Homo sapiens bones exists, there was a vacancy, which Cope desired to fill. It was an odd and vain wish, but no one could think of any grounds to oppose it. To that end, Cope willed his bones to the Wistar Institute, a learned society in Philadelphia endowed by the descendants of the seemingly inescapable Caspar Wistar. Unfortunately, after his bones were prepared and assembled, it was found that they showed signs of incipient syphilis, hardly a feature one would wish to preserve in the type specimen for one’s own race. So Cope’s petition and his bones were quietly shelved. There is still no type specimen for modern humans.
As for the other players in this drama, Owen died in 1892, a few years before Cope or Marsh. Buckland ended up by losing his mind and finished his days a gibbering wreck in a lunatic asylum in Clapham, not far from where Mantell had suffered his crippling accident. Mantell’s twisted spine remained on display at the Hunterian Museum for nearly a century before being mercifully obliterated by a German bomb in the Blitz. What remained of Mantell’s collection after his death passed on to his children, and much of it was taken to New Zealand by his son Walter, who emigrated there in 1840. Walter became a distinguished Kiwi, eventually attaining the office of Minister of Native Affairs. In 1865 he donated the prime specimens from his father’s collection, including the famous iguanodon tooth, to the Colonial Museum (now the Museum of New Zealand) in Wellington, where they have remained ever since. The iguanodon tooth that started it all-arguably the most important tooth in paleontology-is no longer on display.
Of course dinosaur hunting didn’t end with the deaths of the great nineteenth-century fossil hunters. Indeed, to a surprising extent it had only just begun. In 1898, the year that fell between the deaths of Cope and Marsh, a trove greater by far than anything found before was discovered-noticed, really-at a place called Bone Cabin Quarry, only a few miles from Marsh’s prime hunting ground at Como Bluff, Wyoming. There, hundreds and hundreds of fossil bones were to be found weathering out of the hills. They were so numerous, in fact, that someone had built a cabin out of them-hence the name. In just the first two seasons, 100,000 pounds of ancient bones were excavated from the site, and tens of thousands of pounds more came in each of the half dozen years that followed.
The upshot is that by the turn of the twentieth century, paleontologists had literally tons of old bones to pick over. The problem was that they still didn’t have any idea how old any of these bones were. Worse, the agreed ages for the Earth couldn’t comfortably support the numbers of eons and ages and epochs that the past obviously contained. If Earth were really only twenty million years old or so, as the great Lord Kelvin insisted, then whole orders of ancient creatures must have come into being and gone out again practically in the same geological instant. It just made no sense.
Other scientists besides Kelvin turned their minds to the problem and came up with results that only deepened the uncertainty. Samuel Haughton, a respected geologist at Trinity College in Dublin, announced an estimated age for the Earth of 2,300 million years-way beyond anything anybody else was suggesting. When this was drawn to his attention, he recalculated using the same data and put the figure at 153 million years. John Joly, also of Trinity, decided to give Edmond Halley’s ocean salts idea a whirl, but his method was based on so many faulty assumptions that he was hopelessly adrift. He calculated that the Earth was 89 million years old-an age that fit neatly enough with Kelvin’s assumptions but unfortunately not with reality.
Such was the confusion that by the close of the nineteenth century, depending on which text you consulted, you could learn that the number of years that stood between us and the dawn of complex life in the Cambrian period was 3 million, 18 million, 600 million, 794 million, or 2.4 billion-or some other number within that range. As late as 1910, one of the most respected estimates, by the American George Becker, put the Earth’s age at perhaps as little as 55 million years.
Just when matters seemed most intractably confused, along came another extraordinary figure with a novel approach. He was a bluff and brilliant New Zealand farm boy named Ernest Rutherford, and he produced pretty well irrefutable evidence that the Earth was at least many hundreds of millions of years old, probably rather more.
Remarkably, his evidence was based on alchemy-natural, spontaneous, scientifically credible, and wholly non-occult, but alchemy nonetheless. Newton, it turned out, had not been so wrong after all. And exactly how that came to be is of course another story.
CHEMISTRY AS AN earnest and respectable science is often said to date from 1661, when Robert Boyle of Oxford published The Sceptical Chymist-the first work to distinguish between chemists and alchemists-but it was a slow and often erratic transition. Into the eighteenth century scholars could feel oddly comfortable in both camps-like the German Johann Becher, who produced an unexceptionable work on mineralogy called Physica Subterranea, but who also was certain that, given the right materials, he could make himself invisible.
Perhaps nothing better typifies the strange and often accidental nature of chemical science in its early days than a discovery made by a German named Hennig Brand in 1675. Brand became convinced that gold could somehow be distilled from human urine. (The similarity of color seems to have been a factor in his conclusion.) He assembled fifty buckets of human urine, which he kept for months in his cellar. By various recondite processes, he converted the urine first into a noxious paste and then into a translucent waxy substance. None of it yielded gold, of course, but a strange and interesting thing did happen. After a time, the substance began to glow. Moreover, when exposed to air, it often spontaneously burst into flame.
The commercial potential for the stuff-which soon became known as phosphorus, from Greek and Latin roots meaning “light bearing”-was not lost on eager businesspeople, but the difficulties of manufacture made it too costly to exploit. An ounce of phosphorus retailed for six guineas-perhaps five hundred dollars in today’s money-or more than gold.
At first, soldiers were called on to provide the raw material, but such an arrangement was hardly conducive to industrial-scale production. In the 1750s a Swedish chemist named Karl (or Carl) Scheele devised a way to manufacture phosphorus in bulk without the slop or smell of urine. It was largely because of this mastery of phosphorus that Sweden became, and remains, a leading producer of matches.
Scheele was both an extraordinary and extraordinarily luckless fellow. A poor pharmacist with little in the way of advanced apparatus, he discovered eight elements-chlorine, fluorine, manganese, barium, molybdenum, tungsten, nitrogen, and oxygen-and got credit for none of them. In every case, his finds were either overlooked or made it into publication after someone else had made the same discovery independently. He also discovered many useful compounds, among them ammonia, glycerin, and tannic acid, and was the first to see the commercial potential of chlorine as a bleach-all breakthroughs that made other people extremely wealthy.
Scheele’s one notable shortcoming was a curious insistence on tasting a little of everything he worked with, including such notoriously disagreeable substances as mercury and prussic acid (another of his discoveries), the compound better known as hydrocyanic acid-so famously poisonous that 150 years later Erwin Schrödinger chose it as his toxin of choice in a famous thought experiment (see page 146). Scheele’s rashness eventually caught up with him. In 1786, aged just forty-three, he was found dead at his workbench surrounded by an array of toxic chemicals, any one of which could have accounted for the stunned and terminal look on his face.
Were the world just and Swedish-speaking, Scheele would have enjoyed universal acclaim. Instead credit has tended to lodge with more celebrated chemists, mostly from the English-speaking world. Scheele discovered oxygen in 1772, but for various heartbreakingly complicated reasons could not get his paper published in a timely manner. Instead credit went to Joseph Priestley, who discovered the same element independently, but latterly, in the summer of 1774. Even more remarkable was Scheele’s failure to receive credit for the discovery of chlorine. Nearly all textbooks still attribute chlorine’s discovery to Humphry Davy, who did indeed find it, but thirty-six years after Scheele had.
Although chemistry had come a long way in the century that separated Newton and Boyle from Scheele and Priestley and Henry Cavendish, it still had a long way to go. Right up to the closing years of the eighteenth century (and in Priestley’s case a little beyond) scientists everywhere searched for, and sometimes believed they had actually found, things that just weren’t there: vitiated airs, dephlogisticated marine acids, phloxes, calxes, terraqueous exhalations, and, above all, phlogiston, the substance that was thought to be the active agent in combustion. Somewhere in all this, it was thought, there also resided a mysterious élan vital, the force that brought inanimate objects to life. No one knew where this ethereal essence lay, but two things seemed probable: that you could enliven it with a jolt of electricity (a notion Mary Shelley exploited to full effect in her novel Frankenstein) and that it existed in some substances but not others, which is why we ended up with two branches of chemistry: organic (for those substances that were thought to have it) and inorganic (for those that did not).
Someone of insight was needed to thrust chemistry into the modern age, and it was the French who provided him. His name was Antoine-Laurent Lavoisier. Born in 1743, Lavoisier was a member of the minor nobility (his father had purchased a title for the family). In 1768, he bought a practicing share in a deeply despised institution called the Ferme Générale (or General Farm), which collected taxes and fees on behalf of the government. Although Lavoisier himself was by all accounts mild and fair-minded, the company he worked for was neither. For one thing, it did not tax the rich but only the poor, and then often arbitrarily. For Lavoisier, the appeal of the institution was that it provided him with the wealth to follow his principal devotion, science. At his peak, his personal earnings reached 150,000 livres a year-perhaps $20 million in today’s money.
Three years after embarking on this lucrative career path, he married the fourteen-year-old daughter of one of his bosses. The marriage was a meeting of hearts and minds both. Madame Lavoisier had an incisive intellect and soon was working productively alongside her husband. Despite the demands of his job and busy social life, they managed to put in five hours of science on most days-two in the early morning and three in the evening-as well as the whole of Sunday, which they called their jour de bonheur (day of happiness). Somehow Lavoisier also found the time to be commissioner of gunpowder, supervise the building of a wall around Paris to deter smugglers, help found the metric system, and coauthor the handbook Méthode de Nomenclature Chimique, which became the bible for agreeing on the names of the elements.
As a leading member of the Académie Royale des Sciences, he was also required to take an informed and active interest in whatever was topical-hypnotism, prison reform, the respiration of insects, the water supply of Paris. It was in such a capacity in 1780 that Lavoisier made some dismissive remarks about a new theory of combustion that had been submitted to the academy by a hopeful young scientist. The theory was indeed wrong, but the scientist never forgave him. His name was Jean-Paul Marat.
The one thing Lavoisier never did was discover an element. At a time when it seemed as if almost anybody with a beaker, a flame, and some interesting powders could discover something new-and when, not incidentally, some two-thirds of the elements were yet to be found-Lavoisier failed to uncover a single one. It certainly wasn’t for want of beakers. Lavoisier had thirteen thousand of them in what was, to an almost preposterous degree, the finest private laboratory in existence.
Instead he took the discoveries of others and made sense of them. He threw out phlogiston and mephitic airs. He identified oxygen and hydrogen for what they were and gave them both their modern names. In short, he helped to bring rigor, clarity, and method to chemistry.
And his fancy equipment did in fact come in very handy. For years, he and Madame Lavoisier occupied themselves with extremely exacting studies requiring the finest measurements. They determined, for instance, that a rusting object doesn’t lose weight, as everyone had long assumed, but gains weight-an extraordinary discovery. Somehow as it rusted the object was attracting elemental particles from the air. It was the first realization that matter can be transformed but not eliminated. If you burned this book now, its matter would be changed to ash and smoke, but the net amount of stuff in the universe would be the same. This became known as the conservation of mass, and it was a revolutionary concept. Unfortunately, it coincided with another type of revolution-the French one-and for this one Lavoisier was entirely on the wrong side.
Not only was he a member of the hated Ferme Générale, but he had enthusiastically built the wall that enclosed Paris-an edifice so loathed that it was the first thing attacked by the rebellious citizens. Capitalizing on this, in 1791 Marat, now a leading voice in the National Assembly, denounced Lavoisier and suggested that it was well past time for his hanging. Soon afterward the Ferme Générale was shut down. Not long after this Marat was murdered in his bath by an aggrieved young woman named Charlotte Corday, but by this time it was too late for Lavoisier.
In 1793, the Reign of Terror, already intense, moved into a higher gear. In October Marie Antoinette was sent to the guillotine. The following month, as Lavoisier and his wife were making tardy plans to slip away to Scotland, Lavoisier was arrested. In May he and thirty-one fellow farmers-general were brought before the Revolutionary Tribunal (in a courtroom presided over by a bust of Marat). Eight were granted acquittals, but Lavoisier and the others were taken directly to the Place de la Révolution (now the Place de la Concorde), site of the busiest of French guillotines. Lavoisier watched his father-in-law beheaded, then stepped up and accepted his fate. Less than three months later, on July 27, Robespierre himself was dispatched in the same way and in the same place, and the Reign of Terror swiftly ended.
A hundred years after his death, a statue of Lavoisier was erected in Paris and much admired until someone pointed out that it looked nothing like him. Under questioning the sculptor admitted that he had used the head of the mathematician and philosopher the Marquis de Condorcet-apparently he had a spare-in the hope that no one would notice or, having noticed, would care. In the second regard he was correct. The statue of Lavoisier-cum-Condorcet was allowed to remain in place for another half century until the Second World War when, one morning, it was taken away and melted down for scrap.
In the early 1800s there arose in England a fashion for inhaling nitrous oxide, or laughing gas, after it was discovered that its use “was attended by a highly pleasurable thrilling.” For the next half century it would be the drug of choice for young people. One learned body, the Askesian Society, was for a time devoted to little else. Theaters put on “laughing gas evenings” where volunteers could refresh themselves with a robust inhalation and then entertain the audience with their comical staggerings.
It wasn’t until 1846 that anyone got around to finding a practical use for nitrous oxide, as an anesthetic. Goodness knows how many tens of thousands of people suffered unnecessary agonies under the surgeon’s knife because no one thought of the gas’s most obvious practical application.
I mention this to make the point that chemistry, having come so far in the eighteenth century, rather lost its bearings in the first decades of the nineteenth, in much the way that geology would in the early years of the twentieth. Partly it was to do with the limitations of equipment-there were, for instance, no centrifuges until the second half of the century, severely restricting many kinds of experiments-and partly it was social. Chemistry was, generally speaking, a science for businesspeople, for those who worked with coal and potash and dyes, and not gentlemen, who tended to be drawn to geology, natural history, and physics. (This was slightly less true in continental Europe than in Britain, but only slightly.) It is perhaps telling that one of the most important observations of the century, Brownian motion, which established the active nature of molecules, was made not by a chemist but by a Scottish botanist, Robert Brown. (What Brown noticed, in 1827, was that tiny grains of pollen suspended in water remained indefinitely in motion no matter how long he gave them to settle. The cause of this perpetual motion-namely the actions of invisible molecules-was long a mystery.)
Things might have been worse had it not been for a splendidly improbable character named Count von Rumford, who, despite the grandeur of his title, began life in Woburn, Massachusetts, in 1753 as plain Benjamin Thompson. Thompson was dashing and ambitious, “handsome in feature and figure,” occasionally courageous and exceedingly bright, but untroubled by anything so inconveniencing as a scruple. At nineteen he married a rich widow fourteen years his senior, but at the outbreak of revolution in the colonies he unwisely sided with the loyalists, for a time spying on their behalf. In the fateful year of 1776, facing arrest “for lukewarmness in the cause of liberty,” he abandoned his wife and child and fled just ahead of a mob of anti-Royalists armed with buckets of hot tar, bags of feathers, and an earnest desire to adorn him with both.
He decamped first to England and then to Germany, where he served as a military advisor to the government of Bavaria, so impressing the authorities that in 1791 he was named Count von Rumford of the Holy Roman Empire. While in Munich, he also designed and laid out the famous park known as the English Garden.
In between these undertakings, he somehow found time to conduct a good deal of solid science. He became the world’s foremost authority on thermodynamics and the first to elucidate the principles of the convection of fluids and the circulation of ocean currents. He also invented several useful objects, including a drip coffeemaker, thermal underwear, and a type of range still known as the Rumford fireplace. In 1805, during a sojourn in France, he wooed and married Madame Lavoisier, widow of Antoine-Laurent. The marriage was not a success and they soon parted. Rumford stayed on in France, where he died, universally esteemed by all but his former wives, in 1814.
But our purpose in mentioning him here is that in 1799, during a comparatively brief interlude in London, he founded the Royal Institution, yet another of the many learned societies that popped into being all over Britain in the late eighteenth and early nineteenth centuries. For a time it was almost the only institution of standing to actively promote the young science of chemistry, and that was thanks almost entirely to a brilliant young man named Humphry Davy, who was appointed the institution’s professor of chemistry shortly after its inception and rapidly gained fame as an outstanding lecturer and productive experimentalist.
Soon after taking up his position, Davy began to bang out new elements one after another-potassium, sodium, magnesium, calcium, strontium, and aluminum or aluminium, depending on which branch of English you favor.[13] He discovered so many elements not so much because he was serially astute as because he developed an ingenious technique of applying electricity to a molten substance-electrolysis, as it is known. Altogether he discovered a dozen elements, a fifth of the known total of his day. Davy might have done far more, but unfortunately as a young man he developed an abiding devotion to the buoyant pleasures of nitrous oxide. He grew so attached to the gas that he drew on it (literally) three or four times a day. Eventually, in 1829, the habit is thought to have killed him.
Fortunately more sober types were at work elsewhere. In 1808, a dour Quaker named John Dalton became the first person to intimate the nature of an atom (progress that will be discussed more completely a little further on), and in 1811 an Italian with the splendidly operatic name of Lorenzo Romano Amadeo Carlo Avogadro, Count of Quarequa and Cerreto, made a discovery that would prove highly significant in the long term-namely, that two equal volumes of gases of any type, if kept at the same pressure and temperature, will contain identical numbers of molecules.
Two things were notable about Avogadro’s Principle, as it became known. First, it provided a basis for more accurately measuring the size and weight of atoms. Using Avogadro’s mathematics, chemists were eventually able to work out, for instance, that a typical atom had a diameter of 0.00000008 centimeters, which is very little indeed. And second, almost no one knew about Avogadro’s appealingly simple principle for almost fifty years.[14]
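A back-of-the-envelope version of that calculation, for the curious, using the modern value of Avogadro’s number (a precision nineteenth-century chemists did not yet have): a mole of water weighs eighteen grams and fills about eighteen cubic centimeters, so a single molecule occupies roughly

$$ \frac{18\ \text{cm}^3}{6.022\times10^{23}} \approx 3\times10^{-23}\ \text{cm}^3, \qquad \sqrt[3]{3\times10^{-23}\ \text{cm}^3} \approx 3\times10^{-8}\ \text{cm}, $$

which is the same order of magnitude as the diameter quoted above.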
Partly this was because Avogadro himself was a retiring fellow-he worked alone, corresponded very little with fellow scientists, published few papers, and attended no meetings-but also it was because there were no meetings to attend and few chemical journals in which to publish. This is a fairly extraordinary fact. The Industrial Revolution was driven in large part by developments in chemistry, and yet as an organized science chemistry barely existed for decades.
The Chemical Society of London was not founded until 1841 and didn’t begin to produce a regular journal until 1848, by which time most learned societies in Britain-Geological, Geographical, Zoological, Horticultural, and Linnaean (for naturalists and botanists)-were at least twenty years old and often much more. The rival Institute of Chemistry didn’t come into being until 1877, a year after the founding of the American Chemical Society. Because chemistry was so slow to get organized, news of Avogadro’s important breakthrough of 1811 didn’t begin to become general until the first international chemistry congress, in Karlsruhe, in 1860.
Because chemists for so long worked in isolation, conventions were slow to emerge. Until well into the second half of the century, the formula H₂O₂ might mean water to one chemist but hydrogen peroxide to another. C₂H₄ could signify ethylene or marsh gas. There was hardly a molecule that was uniformly represented everywhere.
Chemists also used a bewildering variety of symbols and abbreviations, often self-invented. Sweden’s J. J. Berzelius brought a much-needed measure of order to matters by decreeing that the elements be abbreviated on the basis of their Greek or Latin names, which is why the abbreviation for iron is Fe (from the Latin ferrum) and that for silver is Ag (from the Latin argentum). That so many of the other abbreviations accord with their English names (N for nitrogen, O for oxygen, H for hydrogen, and so on) reflects English’s Latinate nature, not its exalted status. To indicate the number of atoms in a molecule, Berzelius employed a superscript notation, as in H²O. Later, for no special reason, the fashion became to render the number as subscript: H₂O.
Despite the occasional tidyings-up, chemistry by the second half of the nineteenth century was in something of a mess, which is why everybody was so pleased by the rise to prominence in 1869 of an odd and crazed-looking professor at the University of St. Petersburg named Dmitri Ivanovich Mendeleyev.
Mendeleyev (also sometimes spelled Mendeleev or Mendeléef) was born in 1834 at Tobolsk, in the far west of Siberia, into a well-educated, reasonably prosperous, and very large family-so large, in fact, that history has lost track of exactly how many Mendeleyevs there were: some sources say there were fourteen children, some say seventeen. All agree, at any rate, that Dmitri was the youngest. Luck was not always with the Mendeleyevs. When Dmitri was small his father, the headmaster of a local school, went blind and his mother had to go out to work. Clearly an extraordinary woman, she eventually became the manager of a successful glass factory. All went well until 1848, when the factory burned down and the family was reduced to penury. Determined to get her youngest child an education, the indomitable Mrs. Mendeleyev hitchhiked with young Dmitri four thousand miles to St. Petersburg-that’s equivalent to traveling from London to Equatorial Guinea-and deposited him at the Institute of Pedagogy. Worn out by her efforts, she died soon after.
Mendeleyev dutifully completed his studies and eventually landed a position at the local university. There he was a competent but not terribly outstanding chemist, known more for his wild hair and beard, which he had trimmed just once a year, than for his gifts in the laboratory.
However, in 1869, at the age of thirty-five, he began to toy with a way to arrange the elements. At the time, elements were normally grouped in two ways-either by atomic weight (using Avogadro’s Principle) or by common properties (whether they were metals or gases, for instance). Mendeleyev’s breakthrough was to see that the two could be combined in a single table.
As is often the way in science, the principle had actually been anticipated three years previously by an amateur chemist in England named John Newlands. He suggested that when elements were arranged by weight they appeared to repeat certain properties-in a sense to harmonize-at every eighth place along the scale. Slightly unwisely, for this was an idea whose time had not quite yet come, Newlands called it the Law of Octaves and likened the arrangement to the octaves on a piano keyboard. Perhaps there was something in Newlands’s manner of presentation, but the idea was considered fundamentally preposterous and widely mocked. At gatherings, droller members of the audience would sometimes ask him if he could get his elements to play them a little tune. Discouraged, Newlands gave up pushing the idea and soon dropped from view altogether.
Mendeleyev used a slightly different approach, placing his elements into groups of seven, but employed fundamentally the same principle. Suddenly the idea seemed brilliant and wondrously perceptive. Because the properties repeated themselves periodically, the invention became known as the periodic table.
Mendeleyev was said to have been inspired by the card game known as solitaire in North America and patience elsewhere, wherein cards are arranged by suit horizontally and by number vertically. Using a broadly similar concept, he arranged the elements in horizontal rows called periods and vertical columns called groups. This instantly showed one set of relationships when read up and down and another when read from side to side. Specifically, the vertical columns put together chemicals that have similar properties. Thus copper sits on top of silver and silver sits on top of gold because of their chemical affinities as metals, while helium, neon, and argon are in a column made up of gases. (The actual, formal determinant in the ordering is something called their electron valences, for which you will have to enroll in night classes if you wish an understanding.) The horizontal rows, meanwhile, arrange the chemicals in ascending order by the number of protons in their nuclei-what is known as their atomic number.
The structure of atoms and the significance of protons will come in a following chapter, so for the moment all that is necessary is to appreciate the organizing principle: hydrogen has just one proton, and so it has an atomic number of one and comes first on the chart; uranium has ninety-two protons, and so it comes near the end and has an atomic number of ninety-two. In this sense, as Philip Ball has pointed out, chemistry really is just a matter of counting. (Atomic number, incidentally, is not to be confused with atomic weight, which is essentially the number of protons plus the number of neutrons in a given atom.) There was still a great deal that wasn’t known or understood. Hydrogen is the most common element in the universe, and yet no one would guess as much for another thirty years. Helium, the second most abundant element, had only been found the year before-its existence hadn’t even been suspected before that-and then not on Earth but in the Sun, where it was found with a spectroscope during a solar eclipse, which is why it honors the Greek sun god Helios. It wouldn’t be isolated until 1895. Even so, thanks to Mendeleyev’s invention, chemistry was now on a firm footing.
For most of us, the periodic table is a thing of beauty in the abstract, but for chemists it established an immediate orderliness and clarity that can hardly be overstated. “Without a doubt, the Periodic Table of the Chemical Elements is the most elegant organizational chart ever devised,” wrote Robert E. Krebs in The History and Use of Our Earth’s Chemical Elements, and you can find similar sentiments in virtually every history of chemistry in print.
Today we have “120 or so” known elements-ninety-two naturally occurring ones plus a couple of dozen that have been created in labs. The actual number is slightly contentious because the heavy, synthesized elements exist for only millionths of seconds and chemists sometimes argue over whether they have really been detected or not. In Mendeleyev’s day just sixty-three elements were known, but part of his cleverness was to realize that the elements as then known didn’t make a complete picture, that many pieces were missing. His table predicted, with pleasing accuracy, where new elements would slot in when they were found.
No one knows, incidentally, how high the number of elements might go, though anything beyond an atomic number of 168 is considered “purely speculative”; what is certain is that anything that is found will fit neatly into Mendeleyev’s great scheme.
The nineteenth century held one last great surprise for chemists. It began in 1896 when Henri Becquerel in Paris carelessly left a packet of uranium salts on a wrapped photographic plate in a drawer. When he took the plate out some time later, he was surprised to discover that the salts had burned an impression in it, just as if the plate had been exposed to light. The salts were emitting rays of some sort.
Considering the importance of what he had found, Becquerel did a very strange thing: he turned the matter over to a graduate student for investigation. Fortunately the student was a recent émigré from Poland named Marie Curie. Working with her new husband, Pierre, Curie found that certain kinds of rocks poured out constant and extraordinary amounts of energy, yet without diminishing in size or changing in any detectable way. What she and her husband couldn’t know-what no one could know until Einstein explained things the following decade-was that the rocks were converting mass into energy in an exceedingly efficient way. Marie Curie dubbed the effect “radioactivity.” In the process of their work, the Curies also found two new elements-polonium, which they named after her native country, and radium. In 1903 the Curies and Becquerel were jointly awarded the Nobel Prize in physics. (Marie Curie would win a second prize, in chemistry, in 1911, the only person to win in both chemistry and physics.)
At McGill University in Montreal the young New Zealand-born Ernest Rutherford became interested in the new radioactive materials. With a colleague named Frederick Soddy he discovered that immense reserves of energy were bound up in these small amounts of matter, and that the radioactive decay of these reserves could account for most of the Earth’s warmth. They also discovered that radioactive elements decayed into other elements-that one day you had an atom of uranium, say, and the next you had an atom of lead. This was truly extraordinary. It was alchemy, pure and simple; no one had ever imagined that such a thing could happen naturally and spontaneously.
Ever the pragmatist, Rutherford was the first to see that there could be a valuable practical application in this. He noticed that in any sample of radioactive material, it always took the same amount of time for half the sample to decay-the celebrated half-life-and that this steady, reliable rate of decay could be used as a kind of clock. By calculating backwards from how much radiation a material had now and how swiftly it was decaying, you could work out its age. He tested a piece of pitchblende, the principal ore of uranium, and found it to be 700 million years old-very much older than the age most people were prepared to grant the Earth.
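For readers who want to see the clock’s workings, the arithmetic runs as follows (the notation is modern, not Rutherford’s): if a sample started with $N_0$ radioactive atoms, now holds $N$, and the substance’s half-life is $t_{1/2}$, then its age is

$$ t = \frac{t_{1/2}}{\ln 2}\,\ln\frac{N_0}{N}. $$

Setting $N = N_0/2$ obligingly returns one half-life, just as it should.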
In the spring of 1904, Rutherford traveled to London to give a lecture at the Royal Institution-the august organization founded by Count von Rumford only 105 years before, though that powdery and periwigged age now seemed a distant eon compared with the roll-your-sleeves-up robustness of the late Victorians. Rutherford was there to talk about his new disintegration theory of radioactivity, as part of which he brought out his piece of pitchblende. Tactfully-for the aging Kelvin was present, if not always fully awake-Rutherford noted that Kelvin himself had suggested that the discovery of some other source of heat would throw his calculations out. Rutherford had found that other source. Thanks to radioactivity the Earth could be-and self-evidently was-much older than the twenty-four million years Kelvin’s calculations allowed.
Kelvin beamed at Rutherford’s respectful presentation, but was in fact unmoved. He never accepted the revised figures and to his dying day believed his work on the age of the Earth his most astute and important contribution to science-far greater than his work on thermodynamics.
As with most scientific revolutions, Rutherford’s new findings were not universally accepted. John Joly of Dublin strenuously insisted well into the 1930s that the Earth was no more than eighty-nine million years old, and was stopped only then by his own death. Others began to worry that Rutherford had now given them too much time. But even with radiometric dating, as decay measurements became known, it would be decades before we got within a billion years or so of Earth’s actual age. Science was on the right track, but still way out.
Kelvin died in 1907. That year also saw the death of Dmitri Mendeleyev. Like Kelvin’s, his productive work was far behind him, but his declining years were notably less serene. As he aged, Mendeleyev became increasingly eccentric-he refused to acknowledge the existence of radiation or the electron or anything else much that was new-and difficult. His final decades were spent mostly storming out of labs and lecture halls all across Europe. In 1955, element 101 was named mendelevium in his honor. “Appropriately,” notes Paul Strathern, “it is an unstable element.”
Radiation, of course, went on and on, literally and in ways nobody expected. In the early 1900s Pierre Curie began to experience clear signs of radiation sickness-notably dull aches in his bones and chronic feelings of malaise-which doubtless would have progressed unpleasantly. We shall never know for certain because in 1906 he was fatally run over by a carriage while crossing a Paris street.
Marie Curie spent the rest of her life working with distinction in the field, helping to found the celebrated Radium Institute of the University of Paris in 1914. Despite her two Nobel Prizes, she was never elected to the Academy of Sciences, in large part because after the death of Pierre she conducted an affair with a married physicist that was sufficiently indiscreet to scandalize even the French-or at least the old men who ran the academy, which is perhaps another matter.
For a long time it was assumed that anything so miraculously energetic as radioactivity must be beneficial. For years, manufacturers of toothpaste and laxatives put radioactive thorium in their products, and at least until the late 1920s the Glen Springs Hotel in the Finger Lakes region of New York (and doubtless others as well) featured with pride the therapeutic effects of its “Radioactive mineral springs.” Radioactivity wasn’t banned in consumer products until 1938. By this time it was much too late for Madame Curie, who died of leukemia in 1934. Radiation, in fact, is so pernicious and long lasting that even now her papers from the 1890s-even her cookbooks-are too dangerous to handle. Her lab books are kept in lead-lined boxes, and those who wish to see them must don protective clothing.
Thanks to the devoted and unwittingly high-risk work of the first atomic scientists, by the early years of the twentieth century it was becoming clear that Earth was unquestionably venerable, though another half century of science would have to be done before anyone could confidently say quite how venerable. Science, meanwhile, was about to get a new age of its own-the atomic one.