23


The Genius of the Experiment


The scientific revolution ‘outshines everything since the rise of Christianity and reduces the Renaissance and Reformation to the rank of mere episodes, mere internal displacements within the system of medieval Christendom.’ These are the words of Herbert Butterfield, the British historian, in his book The Origins of Modern Science, 1300–1800, published in 1949.1 They typify one view of ‘the scientific revolution’, that the changes which took place between Copernicus’ publication of his book on the solar system, in 1543, and Sir Isaac Newton’s Principia Mathematica, some 144 years later, in 1687, transformed our understanding of nature fundamentally and for all time – modern science was born. The Aristotelian view of the world was thrown out, to be replaced by the Newtonian view. (Newton, complained his contemporaries, some of them anyway, had destroyed the romance of the rainbow and killed the need for angels.) It was now that austere, cumulative, mathematical rationality replaced the fuzzy, haphazard, supernatural speculation of the Middle Ages. As Butterfield also insisted, this was the most important change in thinking since the rise of ethical monotheism.

This argument has come under attack in the last quarter of a century. The assault has a great deal to do with the discovery, mentioned in the Introduction to this book, of certain papers belonging to Newton, which were first discussed publicly by John Maynard Keynes. These papers showed that, besides his interest in physics and mathematics, Newton had an abiding fascination with alchemy and theology, in particular biblical chronology. This has led certain modern scholars – Betty Jo Teeter Dobbs and I. Bernard Cohen, for example – to question whether, with such interests as these, Newton and some of his contemporaries can be said to have had truly modern minds. Dobbs and Cohen remind us that Newton sought to demonstrate the laws of ‘divine activity’ in nature, in order to show ‘the existence and providential care of the Deity’, and they have therefore cast doubt on whether the transformation in thought was really so profound. They also point out that the change to modern chemistry came well after Newton, in the eighteenth century, and therefore, they argue, we cannot really speak of a scientific ‘revolution’, if by that we mean ‘a change that is sudden, radical, and complete’.2 They point out, further, that Copernicus was a ‘timid conservative’ in his private life – hardly a revolutionary – that there were barely ten ‘heliocentrists’ in the world in 1600, and that Kepler was a ‘tortured mystic’. None of these ‘heroes’ was a cold rationalist. The reader is warned therefore that the version of events which follows is very much in contention. I shall return to this discussion at the end of the chapter.

We are now, it could be said, living amid a second scientific revolution. This began just over a hundred years ago, at the turn of the twentieth century, with the near-simultaneous discovery of the quantum, the gene and the unconscious. The first scientific revolution stemmed from a similar set of simultaneous and equally momentous events: the discovery of the heliocentric view of the heavens, the identification of universal gravitation, and important advances in the understanding of light, of the vacuum, of gases, of the body and of microscopic life.3 It is still not entirely clear why these advances all came together at much the same time. Protestantism, itself a revolutionary cause, with an emphasis on private conscience, surely had something to do with it. One of the other effects of the Reformation was to persuade reflective people that if there were so many, on all sides, who were convinced of their divine inspiration, they could not all be right; claims of divine inspiration, it followed, must often be wrong. Capitalism was a factor too, with its emphasis on materialism, money and interest, and its focus on calculation. The growing capacity in the world for precision in all walks of life also played a role. The discovery of the New World, with its very different geography, botany and humanity, contributed much. A final general background factor may have been the fall of Constantinople in 1453, which removed the last living link with ancient Greek culture, and what it had to offer. Not long before the city fell, the Sicilian manuscript dealer and collector Giovanni Aurispa brought back, after just one visit, no fewer than 238 Greek manuscripts, introducing Westerners to Aeschylus, Sophocles and Plato.4

Toby Huff has also drawn attention to the ways in which non-European sciences dropped behind. As late as the eleventh century there had been ‘hundreds’ of libraries in the Muslim Middle East, with one, in Shiraz, said to contain 360 rooms.5 But under Islam astronomers and mathematicians usually had other roles, as muwaqqit, time-keepers and calendar-makers in mosques – they were thus hardly motivated to come up with new ideas that might have been threatening to the faith. Huff makes the point that Arab astronomers knew all the astronomy that Kepler knew but never thought it through to the heliocentric system.6 The Chinese and Arabs never developed the ‘equals’ sign (=), and the Chinese, in fact, never believed that empirical investigation could completely explain physical phenomena. In the thirteenth century there were, Huff says, the same number of scholars in Europe as in the Muslim world, or in China, but the latter two civilisations, because scholarship was validated centrally, either by the state or by masters, never developed organised or corporate scepticism and, ultimately, this is what counted. This is a question also addressed by the twentieth-century philosopher Ernst Cassirer, in his book The Philosophy of Symbolic Forms. He notes, for example, that in some African tribes the word for ‘five’ actually means ‘completes the hand’, whereas ‘six’ means literally ‘jump’ – i.e., to the other hand. Elsewhere number is not divorced from the object it is qualifying: ‘two canoes’ for instance is different from ‘two coconuts’, and with others the counting is simply organised as ‘one’, ‘two’, ‘many’. With such a system, Cassirer says, the breakthrough to advanced mathematics is highly unlikely.7

In the sixteenth century, understanding the heavens was regarded as the most important aim of science, by which people chiefly meant physics. In a religious society, ‘The whole fate of life and everything else was tied up with the movement of the heavens: the heavens ruled the earth. Therefore, whoever understood how the heavens worked, would understand everything on earth.’8 One of the chief effects of the scientific revolution – and it was clear by the time Newton’s work had been assimilated – was to show that the heavens do not rule the earth. As J. D. Bernal says, the scientists of the day came to realise that the problem was actually not very important, and this of course downgraded the standing of the heavens. In the process, however, the new science of dynamics had been discovered, with its own mathematics, the mathematics of differential equations. This has been the bedrock of theoretical physics ever since.

Nicholas Copernicus, a Pole, was fortunate in having an uncle who was a bishop, who took a great interest in his nephew and paid for his education in Italy. Copernicus was what we probably would call overeducated: he studied law, medicine, philosophy and belles lettres, and was also knowledgeable about astronomy and navigation.9 He was fascinated by Columbus’ discoveries but he would not have made a good navigator himself on Columbus’ fleet, because Copernicus was in fact a weak astronomer – his observations were notoriously inaccurate. But these drawbacks were more than offset by one simple insight: that the traditional way of explaining the heavens was in disarray. Copernicus became convinced that Ptolemy had to be wrong because he sensed that nature would never have organised herself into a complex set of ‘epicycles’ and ‘eccentrics’ as the Greek maintained. Copernicus applied himself to this disarray, with a view to simplifying the explanation. He described his approach as follows: ‘After I had addressed myself to this very difficult and almost insoluble problem, the suggestion at length came to me how it could be solved with fewer and much simpler constructions than were formerly used, if some assumptions (which are called axioms) were granted me. They follow in this order. 1. There is no one centre of all the celestial circles. 2. The centre of the earth is not the centre of the universe, but only of gravity and of the lunar sphere. 3. All the spheres revolve about the sun as their mid-point, and therefore the sun is the centre of the Universe. 4. The ratio of the earth’s distance from the sun to the height of the firmament [in other words, the fixed stars] is so much smaller than the ratio of the earth’s radius to its distance from the sun that the distance from the earth to the sun is imperceptible in comparison with the height of the firmament.’10

Everyone remembers that Copernicus displaced the earth as the centre of the universe but, as can be seen from his words above, two other things stand out. The first is that he was in essence reviving what Aristarchus had proposed – and Archimedes had reported – nearly two thousand years before. Second, and no less important theologically than his displacement of the earth as the centre of the universe, was his claim that the heavens – the realm of the stars – were much, much further away than anyone thought. This was shocking and disconcerting but, unlike Aristarchus, Copernicus was – before too long – believed. One reason for his high credibility was a further set of arguments that fitted well with people’s observations, namely that the earth has three different motions. In the first place, the planet revolves every year in a great circle around the sun. Second, it spins on its own axis. And third, there is a variation in the attitude of the earth to the sun. All of this, Copernicus said, meant that the apparent motion of the sun is not uniform. In some ways, this was his cleverest piece of reasoning: people had been puzzled for centuries as to why summer on earth does not last the same length of time as winter, and why the equinoxes do not occur half-way through the year, or half-way between solstices. The real answer of course was that the planets, including the earth, orbit not in circles but in ellipses. But that crucial insight – which we shall come to – would not have been possible without Copernicus’ observation about the relative movements of the earth and sun.

Copernicus’ new ideas, systematised in his On the Revolution of the Celestial Orbs, commonly referred to by its Latin title De revolutionibus, had some holes in them. For example, he still believed the medieval idea that the planets were fixed on the surfaces of a set of gigantic hollow concentric crystal balls. That apart, however, Copernicus had succeeded in his aim of dispensing with the disarray and replacing Ptolemy’s complicated epicycles.11

Though De revolutionibus was revolutionary, it was not immediately seen as incendiary. When Copernicus finally put pen to paper and sent the manuscript to the pope, the pontiff circulated it among fellow scholars, who recommended that it be printed. And although it was published by a Protestant printer, Copernicus’ new ideas were regarded as ‘perfectly respectable’ all the way through the sixteenth century. It was not until 1615 that anyone complained that they contravened conventional theology.12

By then Copernicus’ work was already being built on by the Danish nobleman Tycho Brahe. The Brahe family fortune came from a share in the toll which the Danes imposed on every ship going in or out of the Baltic through the Oresund, the straits between Denmark and Sweden. Tycho was an argumentative soul who, once, in a duel, had the end of his nose snipped off, and thereafter always had to appear in public with a neat silver tip glinting in the light. But the Danish Crown realised that Brahe was a talented scientist and granted him an island of his own in the Oresund where there were few opportunities for argument and where he was allowed to set up ‘the first scientific institution of modern times’, called Uraniborg, or Heaven’s Gate.13 The laboratory included an observatory.

Brahe may not have had as original a mind as Copernicus but he was a much better astronomer and, from his Oresund lab, he made many accurate astronomical measurements. He took these observations with him when, in 1599, he quit Denmark and transferred to Prague, where he was appointed chief mathematician to the Holy Roman Emperor, Rudolf II, a highly eccentric man who was fascinated by alchemy and astrology. In Prague Brahe was joined by his no less talented assistant Johannes Kepler, who inherited the measurements on Brahe’s death in 1601 and set about the task of trying to marry them to Copernicus’ theories.

Kepler was dogged and diligent and a keen observer. Like Copernicus he started with the belief that the planets were arranged, as traditionally thought, on a series of concentric crystal balls. Gradually, however, he was forced to dispense with this theory, when he found that Brahe’s observations could not be reconciled with it. His breakthrough came when, instead of trying to fit all the planets into a system, he concentrated on Mars.14 Mars is particularly useful for astronomers because it can be observed almost all the time, and using Brahe’s measurements, Kepler came to realise that, in its journey around the sun, Mars described not a circle but an ellipse. Once this breakthrough had been made, Kepler soon showed that all planets that orbit the sun do so elliptically and that even the moon’s orbit of the earth is an ellipse. There were two immediate implications of this, one physical and mathematical, the other theological. In terms of science, an ellipse, though a relatively simple shape, is nowhere near as straightforward as a circle and would take a great deal more explaining – how and why should an orbiting planet be further away from the sun at some points than others? Thus the discovery of elliptical orbits stimulated the study of gravity and dynamics. At the same time, what did the existence of ellipses do to the idea that the heavens consisted of a series of hollow concentric crystal balls? It made such an idea untenable.

Yet an elliptical orbit did explain why the seasons were of unequal length. An ellipse implied that the earth did not move around the sun at constant speed, but travelled faster when the planet was nearer the sun and slower when it was further away. There was, however, a constancy in the system, as Kepler found. The velocity multiplied by the radius vector (broadly, the planet’s distance from the sun) remained the same.15 After his work with Mars, and Earth, and still using Brahe’s observations, Kepler was able to calculate the orbits, speeds and distances of the other planets, all in relation to the sun. He found that there was a constancy here too: the square of a planet’s period of revolution was proportional to the cube of its distance from the sun. There was thus a new and definite harmony to the heavens and, as Thomas Kuhn says, whether or not it pointed to God, ‘it certainly pointed to gravity’.
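
In modern notation – a later formalisation, not Kepler’s own symbols – his three laws can be written as follows. The orbit is an ellipse with the sun at one focus:

\[ r(\theta) = \frac{a(1-e^2)}{1 + e\cos\theta}; \]

the radius vector sweeps out equal areas in equal times:

\[ \frac{dA}{dt} = \tfrac{1}{2}\,r^2\,\frac{d\theta}{dt} = \text{constant}; \]

and the square of the period goes as the cube of the mean distance:

\[ T^2 \propto a^3. \]

Strictly, the ‘velocity times radius vector’ rule quoted above holds exactly only at the two ends of the ellipse’s long axis; the equal-areas law is the precise statement.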

The fourth of the great heroes of the scientific revolution, after Copernicus, Brahe and Kepler, was Galileo. Professor of mathematics and military engineering, first at Pisa and later at Padua, Galileo somehow got his hands on a Dutch invention that, because of the Dutch wars with Spain, was regarded as a military secret. This was the telescope. Though he was well aware of the military applications of the device (in helping one side count the enemy before they could themselves be counted), his own interest lay in an exploration of the heavens. And when he pointed his telescope at the night sky, he received one of the greatest shocks in all history. It was immediately clear that the heavens comprised far more stars than anyone had seen previously. There are, roughly speaking, two thousand stars in the sky at night that are visible to the naked eye. Galileo saw that, through the telescope, there are myriads more. Again, this had profound implications for the size of the universe and was therefore theologically challenging. But that wasn’t all. With his telescope, Galileo also noticed three and then four ‘stars’ or ‘moons’ moving about Jupiter, just as the planets moved around the sun. This supported the Copernican theory of the heavens but at the same time provided Galileo with an example of what was in effect a celestial clock. These bodies were so far away that their movement was unaffected by the movement of the earth, thus providing a sense of absolute time. It offered navigators a way of finding longitude at sea.16

As a professor of military engineering, another interest of Galileo’s, naturally enough, was weapons – in particular what we call ballistics. At that point, as with much else, the basic understanding of dynamics (of which ballistics was a part) was essentially Aristotelian. Aristotle’s theory of spear-throwing, for example, was that a spear, when thrown, moved through the air, and the air displaced from the tip of the spear somehow went round to the back of the shaft and pushed it along. But a spear did not shoot through the air for ever, because it got ‘tired’ and dropped to the ground. This was clearly unsatisfactory as an explanation of movement but, for two thousand years, no one had been able to come up with a better one. That began to change after observations on another relatively new weapon – the cannon ball.17 Part of the point of a cannon was that its angle of attack could be varied. As the gun barrel was raised from parallel with the ground, the range increased and went on increasing until 45°, after which it began to fall off again. It was this behaviour of cannon balls which provoked Galileo’s interest in the laws of moving bodies, though another factor was the storms which periodically rocked Pisa and Florence, during which he noticed that the chandeliers and hanging lamps would sway and swing. Using his own pulse as a measuring device, he timed the swaying of the lamps and found that the time of each swing depended not on how wide the swing was but on the length of the pendulum, varying as the square root of that length. This became his square-root law.18
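
In the modern formulation (which came after Galileo, who established only the proportionality), the period of a pendulum swinging through small arcs is

\[ T = 2\pi\sqrt{\frac{L}{g}}, \qquad\text{so that}\qquad T \propto \sqrt{L}: \]

a pendulum four times as long takes twice as long over each swing, whatever the weight of the bob or, within limits, the width of the arc.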

Galileo produced two famous treatises, The Two Chief Systems (1632) and The Two New Sciences (1638). Both were written in Italian (rather than Latin) and were in the form of dialogues – plays almost – designed to introduce his ideas to a wider audience. In the first, the relative merits of the Ptolemaic and Copernican systems were discussed between three men: Salviati (a scientist and scholar), Sagredo (an intelligent layman) and Simplicio (an obtuse Aristotelian). In the dialogue Galileo left little doubt as to where his sympathies lay but he also (and indirectly) satirised the pope. This led to his famous trial before the Inquisition, and to the house arrest under which he spent his remaining years. During that confinement, however, he prepared The Two New Sciences, a dialogue between the same three men, concerning dynamics. It was in this second book that he set out his views on projectiles and was able to show that the path of a projectile, disregarding air resistance, is a parabola.19 A parabola is a section of a cone, as is an ellipse. For two thousand years, conics had been studied in the abstract: now, all of a sudden, two applications in the real world had emerged virtually simultaneously. Yet more harmony of the heavens had been revealed.
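
In modern notation (Galileo himself argued geometrically), a projectile launched at speed v and angle θ follows, once air resistance is disregarded, the path

\[ y = x\tan\theta - \frac{g\,x^2}{2v^2\cos^2\theta}, \]

which is a parabola; its range on level ground, \( v^2\sin 2\theta / g \), is greatest at θ = 45° – exactly the behaviour the gunners had found by trial and error.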

It was ironic that The Two New Sciences was written under house arrest. Galileo’s confinement had been designed to keep the lid on the Copernican revolution. In fact, it provided Galileo with the opportunity to reflect and to write the work which led on to Newton and struck the greatest blow against religion.

According to a list of the most influential people in history, published in 1993, Isaac Newton ranked as number 2, after Muhammad and ahead of Jesus Christ.20 Born in the same year that Galileo died, 1642, Newton grew up in an atmosphere where science was regarded as a quite normal occupation or interest. This is already very different from the world inhabited by Copernicus, Kepler or Galileo, where religion and metaphysics mattered most.21 At the same time, Newton shared with them certain heroic qualities, in particular an ability to work almost entirely on his own. This was just as well because much of his ground-breaking labour was carried out in forced isolation in 1665 when London was devastated by the plague and he sought refuge in the village where he was born, Woolsthorpe in Lincolnshire. This was, in the words of Carl Boyer, in his history of mathematics, ‘the most productive period of mathematical discovery ever reported’, and was reflected later in Wordsworth’s lines: ‘a mind forever / voyaging through strange seas of thought alone.’22

At first Newton was interested in chemistry, rather than mathematics or physics.23 But, at Trinity College, Cambridge, he started reading Euclid, attended the lectures of Isaac Barrow, the (first) Lucasian professor, and became acquainted with the work of Galileo and others. The early seventeenth century was a time when mathematics became modern, taking a form that resembles what it has now.24 Gottfried Leibniz (1646–1716) and Nicholas Mercator (1620–1687) were near-contemporaries of Newton (1642–1727), while René Descartes (1596–1650), Pierre de Fermat (1601–1665) and Blaise Pascal (1623–1662) were not long dead by the time he graduated.25 Among the new mathematical techniques were symbolic expression, the use of letters, the working out of mathematical series, and a number of new ideas in geometry. But most of all, there was the introduction of logarithms, and the calculus.

Some form of decimals had been used by both the Chinese and the Arabs and, in 1585, the French mathematician François Viète had urged their introduction in the West. But it was Simon Stevin, of Bruges, who, in the same year, published in Flemish De thiende (‘The Tenth’; French title La disme), which explained decimals in a way that more or less everyone could understand. Stevin did not use the decimal point, however: he followed each digit with a circled numeral marking its rank, so that the value of π, pi, for instance, was set out as 3⓪ 1① 4② 1③ 6④. Instead of the words ‘tenth’, ‘hundredth’ and so on, he used ‘prime’, ‘second’ etc. It wasn’t until 1617 that John Napier, referring to Stevin’s method, proposed a point or comma as the decimal separatrix.26 The decimal point became standard in Britain but the comma was (and is) widely used elsewhere.

Napier (or Neper) was not a professional mathematician but an anti-Catholic Scottish laird, the laird of Merchiston, who wrote on many topics. He was interested in mathematics, especially trigonometry, and had conceived logarithms some twenty years before he published anything. Logarithm takes its name from two Greek words, logos (ratio) and arithmos (number). Napier had been thinking about sequences of numbers since 1594, and while he was ruminating on the problem he was visited by a Dr John Craig, physician to James VI of Scotland (the future James I of England), who told him of the use of prosthaphaeresis in Denmark. Craig, almost certainly, had been with James when he crossed the North Sea to meet his bride-to-be, Anne of Denmark. A storm had forced the party ashore not far from Tycho Brahe’s observatory and, while awaiting an improvement in the weather, they had been entertained by the astronomer and the device of prosthaphaeresis had been mentioned.27 This term, derived from the Greek for ‘addition’ and ‘subtraction’, denoted a set of rules for converting the product (i.e., multiplication) of functions into a sum or difference. This is essentially what logarithms do: each number is represented by another, its logarithm, chosen so that the multiplication of two numbers becomes the addition of their logarithms, making calculation much, much easier. The tables Napier started were completed and refined by Henry Briggs, the first Savilian professor of mathematics at Oxford, who eventually produced logarithms for all numbers up to 100,000.28
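
The principle, put in modern notation (later than Napier, whose own construction was geometric and kinematic), is that

\[ \log(ab) = \log a + \log b, \qquad \log\left(\frac{a}{b}\right) = \log a - \log b. \]

With Briggs’ base-10 tables, to multiply 3.4 by 7.2 one adds log 3.4 ≈ 0.5315 and log 7.2 ≈ 0.8573, giving 1.3888, and the table then yields the number whose logarithm this is: 24.48, the correct product.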

It is no criticism of Newton’s genius to say, therefore, that he was fortunate to be the intellectual heir of so many illustrious predecessors. The air had, so to speak, been primed. Of his many sparkling achievements we may begin with pure mathematics, where his greatest innovation was the binomial theorem, which led to his idea of the infinitesimal calculus.29 The calculus is essentially an algebraic method for understanding (i.e., calculating and measuring) the variation in properties (such as velocities) which may be altered in infinitesimal differences, that is, in properties that are continuous. In our study at home we may have 200 books or 2,000, or 2,001, but we don’t have 200¾ books, or 2,001½. A train’s speed, on the other hand, can vary continuously, infinitesimally, from 0 mph to 186 mph (if it is Eurostar). The calculus concerns such infinitesimal differences and is important because it helps explain the way so much of our universe varies.
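
A minimal modern illustration, in Leibniz’s notation (the version that survived) rather than Newton’s fluxions: if s(t) is the distance a train has covered by time t, its speed at any instant is the derivative

\[ v = \frac{ds}{dt} = \lim_{\Delta t \to 0} \frac{s(t+\Delta t) - s(t)}{\Delta t}, \]

so that for s = t² the speed is v = 2t – defined at every instant, something no finite table of positions could supply.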

The measure of Newton’s advance may be seen from the fact that, for a time, he was the only person who could ‘differentiate’ – that is, calculate the instantaneous rate at which a continuously changing quantity varies. The technique was so unfamiliar that when he wrote his greatest book, the Principia, he avoided differential notation, thinking no one would understand it. Published in 1687, Philosophiae naturalis principia mathematica, to give the book its full title, has been described as ‘the most admired scientific treatise of all times’.30

But Newton’s main achievement was his theory of gravitation. As J. D. Bernal points out, although Copernicus’ theory was accepted widely by this time, ‘it was not in any way explained’. One problem had been pointed up by Galileo: if the earth really was spinning, as Copernicus had argued, ‘why was there not a terrific wind blowing all round, blowing in the opposite direction to that in which the earth was rotating, from west to east?’31 At the speed the earth was alleged to be rotating, the wind generated should destroy everything. There was at that stage no conception of the atmosphere, so Galileo’s objection seemed reasonable.32 Then there was the problem of inertia. If the planet was spinning, what was pushing it? Some people proposed that it was pushed by angels but that didn’t satisfy Newton. Aware of Galileo’s work on pendulums, he took up the notion of centrifugal force.33 Galileo had begun with the swinging pendulum before moving on to circular pendulums. And it was this, the circular pendulum, which led to the concept of centrifugal force which, in turn, led Newton to his idea that it was gravity which held the planets in, while they swung around perfectly freely. (In the case of the circular pendulum, gravity is represented by the weight of the bob and its tendency towards the centre.)

The beauty of Newton’s solution to the problem of gravity is astounding to modern mathematicians, but we should not overlook the fact that the theory was itself part of the changing attitudes in the wider society. Although no serious thinker any longer believed in astrology, the central problem in astronomy had been to understand the workings of the divine mind. By Newton’s day, however, the aim was much less theological and rather more practical: the calculation of longitude. Galileo had already used the satellites of Jupiter as a form of clock, but Newton wanted to understand the more fundamental laws of motion. Though his main interest was in these fundamentals, he was not blind to the fact that a set of tables – based on them – would be very practical.

The genesis of the idea has been recreated by historians of science. To begin with, G. A. Borelli, an Italian, introduced the notion of something he called gravity, as a balancing force against the centrifugal force – otherwise, he said, the planets would just fly off at a tangent. Newton had grasped this too, but he went further, arguing that, to account for an elliptical orbit, where a planet moves faster the closer it gets to the sun, the force of gravity ‘must increase to balance the increased centrifugal force’. It follows that gravity is a function of the distance. But what function? Robert Hooke, the talented son of a clergyman from the Isle of Wight, who was in charge of the plans to rebuild the City of London after the Great Fire of 1666, had gone so far as to measure the weight of different objects deep in a mine shaft, and at the very summit of a church steeple. But his instruments were nowhere near accurate enough to confirm what he was looking for. From France Descartes, who had sought his own copy of Galileo’s Two Systems, had earlier come up with the idea of the solar system as a form of whirlpool or vortex: as objects approach the centre of the whirlpool, so they are sucked in, unless they have enough spin to keep them out.34 These ideas were all close to the truth but not the real thing. The breakthrough came with Edmund Halley. A passionate astronomer, he had sailed as far south as St Helena to observe the heavens of the southern hemisphere. Halley, who was to help pay for the printing of the Principia, urged several scientists, among them Hooke, Wren and Newton, to work on the proof of the inverse square law. Ever since Kepler, several scientists had suspected that an attraction weakening with the square of the distance lay behind the relation between a planet’s orbital period and its distance from the sun, but no one had done the work to prove the exact relationship. At least, no one had published anything. In fact, Newton, sitting in Cambridge, hard at work on what he considered the much more important problems of prisms, had already solved the inverse square law but, not sharing the modern scientist’s urge to publish, had kept the results to himself. Goaded by Halley, however, he finally divulged his findings. He sat down and wrote the Principia, ‘the bible of science as a whole and in particular the bible of physics’.35
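
The nub of the connection can be reconstructed in a few lines of modern algebra (notation Newton did not use), for the simplified case of a circular orbit: gravity must supply exactly the centripetal force that keeps the planet turning, so

\[ \frac{GMm}{r^2} = \frac{mv^2}{r}, \qquad v = \frac{2\pi r}{T}, \]

from which \( T^2 = \dfrac{4\pi^2}{GM}\,r^3 \) – Kepler’s square-to-cube rule exactly. Showing that the same inverse square law also yields Kepler’s ellipses required the full machinery of the Principia.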

Like Copernicus’ major work, the Principia is not an easy book to read but there is a clarity of understanding that underlies the more complex prose. In explaining ‘the system of the world’, by which he meant the solar system, Newton identified mass, density of matter – an intrinsic property – and an ‘innate force’, what we now call inertia. In Principia the universe is, intellectually speaking, systematised, stabilised and demystified. The heavens had been tamed and had become part of nature. The music of the spheres had been described in all its beauty. But it had told man nothing of God. Sacred history had become natural history.

It is now accepted by most historians of science that Leibniz discovered the calculus entirely unaware that Newton had discovered it too, nine years earlier. The German (he was born in Leipzig) was no less versatile than his English counterpart: he discovered/invented binary arithmetic (representing all numbers as combinations of 0s and 1s), an early form of relativity, the notion that matter and energy are fundamentally the same, and entropy (the idea that the universe will one day run out of energy), not to mention his concept of ‘monads’ (from the Greek μονάς, meaning ‘unit’) – the constituent parts of matter, not just atoms but, incorporating a primitive idea of cells, the parts of which organisms too are made up. In the case of both Leibniz and Newton, however, it is the calculus that represents their highest achievement. ‘Any development of physics beyond the point reached by Newton would have been virtually impossible without the calculus.’36
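
Binary arithmetic rests on writing every number as a sum of powers of two; a modern illustration (not Leibniz’s own example):

\[ 13 = 8 + 4 + 1 = 1\cdot 2^3 + 1\cdot 2^2 + 0\cdot 2^1 + 1\cdot 2^0 = 1101_2, \]

so that two symbols, 0 and 1, suffice for all arithmetic – the principle on which digital computers would one day run.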

Beautiful and complete as they were, in their way, the Principia and the calculus represented but two of Newton’s achievements. His other great body of work was in optics. Optics, for the Greeks, involved the study of shadows and mirrors, in particular the concave mirror, which formed an image but could also be used as a burning glass.37 In the late Middle Ages lenses and spectacles had been invented and later still, in the Renaissance, the Dutch had developed the telescope, from which the microscope derived.

Newton combined two of these inventions in the reflecting telescope. He had noticed that images in mirrors never showed the coloured fringes that stars usually had when seen directly through telescopes, and he wondered why the fringes occurred in the first place. It was this which led him to experiment with the telescope, which in turn led on to his exploration of the properties of the prism. Prisms were originally objects of fascination because of their link to the rainbow which, in medieval times, had a religious significance. However, anyone with a scientific bent could observe that the colours of the rainbow were produced by the sun’s light passing through water drops in the sky.38 Subsequently it had been observed that the make-up of the rainbow was related to the elevation of the sun, with red rays being bent less than purple ones. In other words, refraction had been identified as a phenomenon but was imperfectly understood.39

Newton’s first experiments with light involved making a small hole in the wooden shutter of his rooms in Trinity College, Cambridge. This let in a narrow shaft of light, which he so arranged that it struck a prism and was then refracted on to the wall opposite. Newton observed two things. One, the image was upside down; and two, the light was broken up into its constituent colours. To him it was clear from this that light consisted of rays, and that the different colours were affected by the prism to a different extent. The ancients had had their own concept of light rays but it had been the opposite of Newton’s idea. Previously, light was believed to travel from the observer’s eye to the object being observed. But for Newton light was itself a kind of projectile, shot this way and that from the object looked at: he had in effect identified what we now call photons. In his next experiment, he arranged for the light to come in from the window and pass through a prism, which cast a rainbow of light on to a lens which, in turn, focused the coloured rays on to a second prism which cancelled the effect of the first.40 In other words, given the right equipment, white light could be broken up and put back together again at will. As with his work on the calculus, Newton didn’t rush into print but once his findings were published (by the Royal Society) their wider importance was soon realised. For example, it had been observed since antiquity (in Egypt especially) that stars near the horizon take longer to set and rise sooner than expected. This could be explained if it were assumed that, near Earth, there was some substance that caused light to bend. At that stage there was no understanding of the concept of the atmosphere but it is to Newton’s credit that his observations kick-started this notion. In the same way, he noticed that both diamond and oils refracted light, which made him think that diamond ‘must contain oily material’. He was right, of course, in that diamond consists largely of carbon. This too was a forerunner of modern ideas – the twentieth-century techniques of spectrography and X-ray crystallography.41

Tycho Brahe’s laboratory, on the Danish island of Hveen, has already featured in this story. In 1671 it featured again, when the French astronomer Jean Picard arrived there, to find that the whole place had been destroyed by ignorant locals. As he wandered around, however, traipsing through the ruins, he met a young man who seemed different from the others. Olaus Römer appeared very interested in – and knowledgeable about – astronomy. Touched that the man had worked so hard to better his knowledge, Picard invited Römer back to France. There, under Picard’s guidance, the young man began his own observations of the heavens and, very early on, and to his considerable amazement, he discovered that Galileo’s famous celestial clock, based on the orbits of the ‘moons’ of Jupiter, was flawed. The timing of the ‘moons’ was not constant, as Galileo had assumed, but appeared to vary systematically according to the time of the year. When Römer sat back and considered his data quietly, he realised that the timing seemed to be related to how far Jupiter was from the earth. It was this observation which led to Römer’s fantastic insight – that light had a speed. A lot of people took some convincing but the idea did have a precedent of sorts. By watching cannon fired on battlefields, soldiers knew all too well that sound had a speed: they saw the smoke from the gun well before they heard the sound of the shot. If sound had a speed, was it so far-fetched that light might too?42
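
Römer’s reasoning can be recast with rounded modern figures that he did not possess: the eclipses of Jupiter’s innermost moon run roughly a quarter of an hour late when the earth is on the far side of its orbit from Jupiter, compared with the near side – the time light needs to cross the orbit’s diameter. Taking about 1,000 seconds for a diameter of some 3 × 10¹¹ metres gives

\[ c \approx \frac{3 \times 10^{11}\ \text{m}}{10^{3}\ \text{s}} \approx 3 \times 10^{8}\ \text{m/s}. \]

Römer’s own data implied a somewhat lower figure, but the essential insight – that light has a finite speed – was secure.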

These were enormous advances in physics, reflecting a continuous period of innovation and creative thought. Newton himself, comparing his achievement with that of Descartes, famously wrote in a letter to Robert Hooke: ‘If I have seen further it is by standing on the shoulders of Giants.’43 But in one respect Newton was wrong, and wrong in an important way. He conceived of matter as made up of hard, permanent, unchanging atoms, and set out his view as follows: ‘All these things being consider’d, it seems probable to me, that God in the Beginning form’d Matter in solid, massy, hard, impenetrable, movable Particles, of such Sizes and such Figures, and with such other Properties, and in such Proportion to Space, as most conduced to the End for which he form’d them; and that the primitive Particles being Solids are incomparably harder than any porous Bodies compounded of them; even so very hard, as never to wear or break in pieces . . . But . . . compound Bodies being apt to break, not in the midst of solid Particles, but where those Particles are laid together, and only touch in a few points.’44

As we have seen, Democritus had proposed that matter consisted of atoms two thousand years before Newton. His ideas had been elaborated on and introduced into western Europe by Pierre Gassendi, a Provençal priest. Newton had built on this but despite all the innovations he had made, his view of the universe and the atoms within it did not include the concept of change or evolution. As much as he had improved our understanding of the solar system, the idea that it might have a history was beyond him.

In 1543, the year in which Copernicus finally published De revolutionibus orbium coelestium, Andreas Vesalius presented to the world in printed form his book on the structure of the human body. Arguably, this was even more important. Copernicus’ theory never had much direct influence on the thought of the sixteenth century – its theological ramifications would spark controversy only much later. For biology, on the other hand, 1543 marks the end of one epoch and the beginning of another, for Vesalius’ observations had an immediate influence.45 Everyone was curious about their own make-up (Vesalius’ students begged him to make charts of the veins and arteries) and it was by no means unusual in the sixteenth century to see anatomical plates of skeletons displayed in barber shops and public baths. Vesalius’ extremely meticulous study of anatomy also prompted philosophical speculation about man’s purpose.46

His advances have to be placed in context. Until he published his book, the dominant intellectual force in human biology was still Galen (131–201). It will be recalled from Chapter 9 that Galen was one of the monumental figures in the history of medicine, the last of the great anatomists of antiquity, but one who worked under unfavourable conditions. Ever since Herophilus (born c. 320 BC) and Erasistratus (born c. 304 BC), dissection of the human body had been proscribed, and Galen had been forced to make deductions based on his observations of dogs, swine, oxen and the Barbary ape.47 For more than a millennium, almost no advances had been made beyond him. Change had begun only in the time of Frederick II (1194–1250), king of Sicily and Holy Roman Emperor. A general concern for his subjects, combined with a genuine interest in knowledge, led Frederick to decree in 1231 ‘that no surgeon be admitted to practise unless he be learned in the anatomy of the human body’. The emperor backed this with a law that provided for the public dissection of the human body ‘at least once in five years’, at Salerno. This, the initial legislation for dissection, was followed by other states in due course. Early in the following century, the college of medicine for Venice, which was located at Padua, was authorised to dissect a human body once every year. In the early decades of the sixteenth century, Vesalius travelled to Padua for his training.48

That attitudes to the body were changing is shown by the drawings of Leonardo da Vinci, mostly executed around 1510, or three decades before Vesalius. There is a memorandum of the artist which shows that he had conceived a book on the ‘human body’ as early as 1489 (though this, like much else of his, was never completed).49 What seems clear from the memorandum, and from Leonardo’s drawings, is that he had studied anatomy professionally even before he joined forces with the anatomist Marcantonio della Torre, and that Leonardo continued to make dissections long after their relations were severed about 1506. The artist made more than seven hundred sketches showing the architecture of the heart and the layout of the vascular system, bones drawn from different aspects, the muscles and their attachments, cross-sections of the leg at different levels, and of the brain and nerves. The detail was sufficient not just for artists, but for medical students as well.50 According to one source, by 1510 Leonardo had dissected no fewer than thirty human cadavers, of both sexes.

Born in Brussels on New Year’s Eve 1514, Andreas Vesalius came from a family of physicians but was given a wide-ranging education. As a young man, he published a translation from the Greek of a medical book by Rhazes. Vesalius went from Brussels to the Universities of Louvain and Paris, returning home to become a military surgeon, serving in Belgium’s wars. Finally, he moved to Padua, drawn by the relatively free access to bodies. In 1537, when he was still only twenty-three, he was placed in charge of anatomy teaching, and it was there, in the course of repeated dissections, that he began to see where Galen had gone wrong. This soon led him to reject Galen entirely and Vesalius began to teach only what he himself had uncovered. This proved enormously popular and students flocked to his lectures, five hundred at a time according to some accounts.51

After five years in Padua and while he was still barely twenty-eight, he produced The Structure of the Human Body, with a dedication to Charles V. Published in Basle, it contained many plates and woodcuts.52 (The illustrations were drawn by his fellow countryman John Stephen de Calcar, a pupil of Titian.) To the modern eye, de Calcar’s images are bizarre: in an attempt to soften the sheer rawness of what he was depicting, the artist put his skeletons in lifelike poses, and arrayed them, for example, in picturesque landscapes. Bizarre or not, no drawings of such vivid detail had been seen before and the impact was immense and immediate. ‘Vesalius corrected more than two hundred anatomical errors of Galen.’53 Many contemporaries denounced him for this, but Vesalius had done the work and nothing they said could trump that. For example, he showed that the jawbone in man is a single bone, not divided as it is in the dog and other lower mammals. He proved that the thigh bone is straight, not curved as it is in the dog. He proved that the sternum is made up of three bones, not eight, as was thought. There were some who tried to argue that human anatomy had developed since Galen’s day, or that ‘the fashion for narrow trousers had caused man’s leg bones to straighten’. Theologians also remained unconvinced. ‘It was a widely accepted dogma that man had one less rib on one side, because from the scriptural account Eve was formed from one of Adam’s ribs. Vesalius, however, found an equal number of ribs on each side.’54 But this was the mid-sixteenth century, the Reformation and Counter-Reformation were under way and the Church was implacable. The attacks on Vesalius got so bad that he resigned his professorship in Padua and accepted a position as court physician to the emperor Charles V, then living in Spain.

‘But what Vesalius had begun, nothing could stop.’55 The main figure to follow him was the Englishman William Harvey. Born at Folkestone in 1578, he studied for five years at King’s School, Canterbury, and then went up to Cambridge at the age of sixteen. Like Newton he did not shine early on (he was very young) and he studied mainly Latin and Greek, and an elementary level of physics. However, after graduating at nineteen, he immediately set out for Italy, and for Padua, which suggests he had already conceived an interest in medicine. There he studied under Fabricius, a famous teacher of the day.56 Sixty-one when Harvey arrived, Fabricius was just then refining his understanding of the valves of the veins, though he also showed that the pupils of the eye responded to light. Fabricius’ own knowledge was dated but he did stimulate in Harvey a great enthusiasm for medicine, which he took back home in 1602, having gained his doctorate. He went back to Cambridge, this time to earn an MD, which was necessary if he wanted to practise in Britain. He opened up shop in London and, within barely a decade, was appointed a lecturer at the Royal College of Physicians.57 There is written evidence – in his own spindly hand – that he was teaching the doctrine of the circulation of the blood within a year of his arrival at the Royal College, in 1616. But he was rather less forward than Vesalius who – remember – had published his anatomical observations when he was just twenty-eight. Harvey, we now know, had been lecturing on the circulation of the blood for a good twelve years before he committed himself to print. When his great classic, The Movement of the Heart and the Blood, appeared in 1628, Harvey was already fifty.

His observations were nothing if not thorough. In De motu cordis et sanguinis, to give the book its Latin title, he refers to forty animals in which he had seen the heart beating. These animals included fish, reptiles, birds, mammals and several invertebrates.58 At one point he confides as follows: ‘I have also observed that almost all animals have truly a heart, not only (as Aristotle says) the larger red-blooded creatures, but also the higher pale-blooded crustacea and shell fish, such as slugs, snails, mussels, shrimps, crabs, crayfish and many others; nay, even in wasps, hornets and flies, with the aid of magnifying glasses (perspicilli), and at the upper part of what is called the tail, I have seen the heart pulsating myself, and have shown it to many others.’59 The book is only seventy-eight pages long, is much more clearly written than either Newton’s or Copernicus’ masterpieces, and its argument is plain enough for even the layman to grasp: all the blood in the body moves in a circuit and the propelling force is supplied by the beating of the heart.60 In order to conceive the circulation of the blood, Harvey must have deduced that something very like capillaries existed, connecting the arteries and veins. But he himself never observed a capillary network. He saw very clearly that the blood passes from arteries to veins ‘and moves in a kind of circle’, but he preferred the idea that arterial blood filtered through the tissues in reaching the veins. It was only in 1660 that Marcello Malpighi, using lenses, observed the movement of the blood through the capillaries in transparent animal tissues.

Harvey’s discovery of the circulation of the blood was the fruit of a clear mind and some beautiful observation. He used ligatures to show the direction of the blood currents – towards the heart in veins and away from the heart in arteries. And he calculated the volume of the blood being carried, to show that the heart was capable of the role he assigned to it. Observing the heart carefully, he demonstrated that its contraction expels blood into the arteries and creates the pulse. In particular, he showed that the amount of blood which leaves the left side of the heart must return, since in just under half-an-hour the heart, by successive beats, delivers into the arterial system more than the total volume of blood in the body.61 It was because of Harvey, and his experiments, that people came to realise that, in fact, it was the blood which played the prime role in physiology. This change in perspective created modern medicine. Without it we would have no understanding of respiration, gland secretion (as with hormones) or chemical changes in tissues.
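
Harvey’s crucial calculation can be reconstructed roughly as follows (the figures are illustrative modern roundings; Harvey’s own estimates were, if anything, more conservative): if each beat expels about two ounces, roughly 60 g, of blood, and the heart beats some 72 times a minute, then in half an hour it pumps

\[ 60\ \text{g} \times 72\ \text{per minute} \times 30\ \text{minutes} \approx 130\ \text{kg} \]

of blood – perhaps twenty-five times the total amount in the body. The only escape from the absurdity is that the same blood goes round and round.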

In the 1840s the English archaeologist Austen Layard discovered a lens-shaped rock crystal in the ruins of the palace at Nineveh in what is now Iraq. For some, this was ‘a quartz lens of great antiquity’, dating from 720–700 BC.62 Few people believe this any longer – more likely it was a ‘burning glass’, used to create fire, which we know was done in antiquity. Seneca, in his Natural Questions (AD 63), says: ‘I may now add that every object much exceeds its natural size when seen through water. Letters however small and dim are comparatively large when seen through a glass globe filled with water.’ Even this, which does show a familiarity with magnification, is no longer taken as evidence that magnifying appliances were used in ancient times.63 The first accepted reference comes in the writing of Alhazen, the Arab physician, in a manuscript of 1052. The manuscript deals not only with the human eye and optical principles; Alhazen also refers to globules of glass or crystal through which, he observes, objects appear enlarged. Roger Bacon (1214–1294) in his Opus majus (1267) says much the same, but there is no evidence that Bacon ever made either a telescope or a microscope.

This situation had changed by the end of the sixteenth century. We know that spectacle makers were common at the time in the Netherlands, Italy and Germany and it did not take long for people to happen upon a combination of lenses inserted into tubes. The Englishman Leonard Digges (1571) and the Dutchman Zacharias Jansen (1590) both flirted with telescopes, but it was very possibly Galileo who first used the telescope and the compound microscope fruitfully.64 Following his first telescope in 1608, which has already been mentioned, a year later he made microscopical observations of tiny objects. In 1637, when Descartes published his Discourse on Method, it contained an appendix with printed pictures of microscopes.

This was all prologue. The first clear descriptions of minute living organisms were published by Athanasius Kircher in his Ars magna lucis et umbrae, released in 1646. There, he says that with the aid of two convex lenses, held together in a tube, he observed ‘minute “worms” in all decaying substances’ – in milk, in the blood of persons stricken with fever, and in the spittle ‘of an old man who had lived soberly’.65 In this way Kircher anticipated the germ theory of disease. He was followed by the Dutchman Antony van Leeuwenhoek of Delft, who in the course of his life made several hundred microscopes, some of which, it was said, could achieve magnification of up to 270 times.66 At his death Leeuwenhoek left a couple of dozen of his instruments to the Royal Society of London, which had published a good deal of his work, and where he was elected a Fellow.67 These microscopes account for his great success as an observer. Beginning in 1673, when Leeuwenhoek was forty-one years of age, and throughout his career, he sent 375 letters to the Royal Society.68 Out of these, William Locy tells us, three in particular stand out. ‘These are his discovery of protozoa, of bacteria, and his observation on the circulation of the blood.’ ‘In the year 1675,’ Leeuwenhoek wrote, ‘I discover’d living creatures in Rain water, which had stood but a few days in a new earthern pot, glazed blew within. This invited me to view the water with great attention, especially those little animals appearing to me ten thousand times less than those represented by Mons. Swammerdam, and by him called Water-fleas or Water-lice, which may be perceived in water with the naked eye . . . The first sorte by me discover’d in the said water, I divers times observed to consist of 5, 6, 7, or 8 clear globules, without being able to discern any film that held them together, or contained them. When these animalcula or living Atoms did move, they put forth two little horns, continually moving themselves . . .’ Regarding size, Leeuwenhoek said that some of the ‘animalcula’ in question were ‘more than 25 times less than a globul of blood’. One philosophical implication of this was that it seemed to supply the long looked-for bridge between visible organisms and inanimate nature.69 Other observers soon followed and, by 1693, the world was given the first drawings of protozoa. For quite some time, little distinction was made between protozoa, bacteria and rotifers and even in the eighteenth century Linnaeus, who did not use the microscope, completely misconceived micro-organisms, placing them together in a single group which he called ‘Chaos’.70

But in 1683, Leeuwenhoek discovered an even smaller form of life – bacteria. He had first observed them two years before but made careful drawings before he dared publish his discovery. (These too appeared in the Philosophical Transactions of the Royal Society.) The drawings were essential because they make it clear that he had indeed observed the chief forms of bacteria – round, rod-shaped and spiral forms.71 Here are some details from his letter: ‘Tho my teeth are kept usually very clean, nevertheless when I view them with a Magnifying Glass, I find growing between them a little white matter as thick as a wetted flower: in this substance tho I could not perceive any motion, I judge there might probably be living Creatures. I therefore took some of this flower and mixt it either with pure rain water wherein were no animals; or else with some of my Spittle (having no Air bubbles to cause a motion in it) and then to my great surprise perceived that the aforesaid matter contained very many small living Animals, which moved themselves very extravagantly.’72

Leeuwenhoek’s final triumph was his visual confirmation of the circulation of the blood. (Harvey, remember, had never actually seen the circulation of the blood through the capillaries. He had attempted to fit the final piece of the jigsaw – via the comb of a young cock, for example, the ears of a rabbit, the membranous wing of a bat. But that final observation had always eluded him.73) Then, in 1688, Leeuwenhoek trained his microscope on the transparent tail of the tadpole. ‘A sight presented itself more delightful than any mine eyes had ever beheld; for here I discovered more than fifty circulations of the blood in different places, while the animal lay quiet in the water, and I could bring it before my microscope to my wish. For I saw that not only in many places the blood was conveyed through exceedingly minute vessels, from the middle of the tail toward the edges, but that each of the vessels had a curve or turning, and carried the blood back toward the middle of the tail, in order to be again conveyed to the heart.’74 Nor should we overlook Leeuwenhoek’s discovery, in 1677, of spermatozoa, though it would be another century before their true role was identified. Leeuwenhoek was the first person to make biologists aware of the vast realms of microscopic life.75

In biology, the seventeenth century proved to be as fertile as it was in physics. In 1668 Francesco Redi showed that insects were not the result of spontaneous generation, as had been thought, but developed from eggs laid by fertilised females. As early as 1672 Nehemiah Grew had speculated on the role of pollen as an agent in fertilisation in plants but it was not until 1694 that Rudolf Jakob Camerarius demonstrated, in his De sexu plantarum epistola, that anthers are the male sex organs in plants, and confirmed through experimentation that pollen – very often carried by the wind – was needed for fertilisation. Camerarius showed himself well aware that sexual reproduction in plants was just the same in principle as in animals.76

Francis Bacon (1561–1626) and René Descartes (1596–1650) are both intermediary figures, in the sense that they lived their entire lives between the publication of Copernicus’ De revolutionibus and Newton’s Principia Mathematica. But they were not intermediate in any other sense: both were radical thinkers who used the scientific findings of their own day to move philosophy forward to accommodate the recent discoveries, and in so doing anticipated much of the world that Newton finally identified.

As Richard Tarnas, among others, has pointed out, there have been three great epochs in Western philosophy. During the classical era, philosophy – though influenced by the science and religion of the day – was a largely autonomous activity, serving as a definer and judge of all other modes of activity. Then, with the advent of Christianity, theology assumed a pre-eminent role and philosophy became subordinate to it. With the coming of science, however, philosophy transferred its allegiance from theology to science – and this is still more or less where we are today.77 Bacon and Descartes were the main figures in bringing about this latest phase.

Francis Bacon wrote a number of works in which, in effect, he proposed a society of scientists, exploring the world together by experiment and showing no especial concern for theory (and none at all for traditional theory). Chief among these books were the Advancement of Learning (1605, dedicated to James I), the Novum Organum (1620), and the New Atlantis (1626). Socrates had equated knowledge with virtue but for Bacon, a man of the world as well as a philosopher, it was to be associated with power – he had a very practical view of knowledge and this in itself changed beliefs about and attitudes to philosophy. For Bacon, science in itself became an almost religious obligation and, since his view was that history is not cyclical but progressive, he looked forward to a new, scientific civilisation. This was his concept of ‘The Great Instauration’, the Great Renovation, ‘a total reconstruction of the sciences, arts, and all human knowledge, raised upon the proper foundations’.78 Bacon shared the view of many contemporaries that knowledge could only be built up by the observation of nature (rather than through intuition or ‘revealed’ knowledge), starting from concrete data rather than abstractions that had just occurred to someone. This was his main criticism of both the ancients and the schoolmen, and it was what he most wanted to jettison before moving on. ‘To discover nature’s true order, the mind must be purified of all its internal obstacles.’79 But Bacon also thought that the understanding of the High Middle Ages and of the Renaissance – that the study of nature would reveal God, by disclosing the parallels between man’s mind and God’s – was wrong. Matters of faith, he felt, were appropriate to theology but matters of nature were different, with their own set of rules. Philosophy, therefore, had to dispense with theology and go back to basics, examining the detailed findings of science and using those as the basis for further reasoning. This ‘marriage’, between the human mind and nature, was the basis of the modern philosophical approach. Bacon’s view had a major influence on the fledgling Royal Society. ‘It has been estimated that nearly 60 per cent of the problems handled by the Royal Society in its first thirty years were prompted by practical needs of public use, and only 40 per cent were problems in pure science.’80

Descartes was no less a child of his time than Bacon, though in many ways he was very different from the Englishman. He was, for a start, a considerable mathematician. He received a thorough Jesuit education, spent some time in the military, and wrote La géométrie, which introduced analytical geometry to his contemporaries.81 This was not published separately, however, but as one of three appendices to the Discours de la méthode, which explained Descartes’ general philosophical approach. The other two appendices were La dioptrique, which included the first publication of the law of refraction (actually discovered by Willebrord Snell), and Les météores, which contained among other things the first generally satisfactory quantitative explanation of the rainbow.82 It was by no means clear why Descartes had included these appendices in the book, except that they showed the high place he accorded science in philosophy.83

His philosophy was in fact much influenced by the then-current vogue for scepticism. This had been partly stimulated by the rediscovery of Sextus Empiricus’ classical defence of scepticism, which had been seized upon by Montaigne, who argued that all doctrine is ‘humanly invented’, that nothing was certain because belief was determined by tradition or custom, because the senses could deceive, and because there was no way of knowing if nature matched the processes of the human mind. Descartes brought his own brand of scepticism to bear on this. Geometry and arithmetic offered certainty, he said; observation of nature was free of contradiction; and, in practical terms, life went on, with certain events at least being predictable. This was common sense. And when he looked about him, he realised that one thing was clear. The one thing that could not be doubted – because he was certain of it – was his own doubt. (This ‘Pentecost of reason’, Daniel Boorstin says, took place on the night of 10 November 1619.84) It was doubt that gave rise to Descartes’ famous saying ‘Cogito, ergo sum’ – I am thinking, therefore I am. But Descartes also believed that, since God was perfect, he would not deceive man, and therefore what could be worked out by reason ‘was in fact so’. This led Descartes to his famous distinction between res cogitans – subjective experience, consciousness, the interior life, which is certain – and res extensa – matter, physical things, the exterior objective world, the universe ‘out there’. Thus was conceived Descartes’ famous dualism, in which soul is understood as mind. It was a bigger change than we might imagine today for, at a stroke, Descartes denied that objects in the world – stones and streams, which at one stage had been worshipped, machines and mountains, everything physical – had any human qualities, or any form of consciousness. God, he said, had created the universe but, after that, it moved on its own, composed of non-vital, atomistic matter. ‘The laws of mechanics,’ he said, ‘are identical with those of nature,’ and so the basic understanding of the universe would be discovered via mathematics, which was available to human reason. This was a major transformation, for underneath it all (but not buried in any way) Descartes was saying that God had been established by human reason, rather than the other way round. Revelation, which had once been a form of knowledge with equal authority to science, now began to slip: from here on, the truths of revelation needed to be reaffirmed by reason.

And so finally, after a long night of two thousand years since classical Greece, the twin forces of empiricism and rationalism were back at the forefront of human activity. ‘After Newton, science reigned as the authoritative definer of the universe, and philosophy defined itself in relation to science.’ The universe ‘out there’ was devoid of human or spiritual properties, nor was it especially Christian.85 After Bacon and Descartes (building on Copernicus and Galileo, and anticipating Newton and Leibniz), the world was set for a new view of humanity: that fulfilment would come, not from the revelations of a religious nature, but from an increasingly fruitful engagement with the natural world.

While all these events were taking place, England was going through a civil war which resulted in the king losing his head. In the run-up to that event, the war produced some bizarre side-effects. At one point, for example, King Charles was forced to make his headquarters in Oxford. The professors and Fellows of the Oxford colleges proved very loyal to his majesty, but that backfired when he was driven out and they were all condemned by the rebels as ‘security risks’. Removed from their positions, they were replaced by more republican-minded men from Cambridge and London. Several of these were scientists and, as a result and for a while, science at Oxford blossomed. As part of this, a number of distinguished scientists began to meet in each other’s rooms to discuss their problems. This was a new practice that was occurring all over Europe. In Italy, for instance, in the early years of the seventeenth century, the Accademia dei Lincei (the Academy of the Lynx-Eyed) was formed, with Galileo as its sixth member. There was a similar group in Florence, and in Paris the Académie Royale des Sciences was founded formally in 1666, though men such as Descartes, Pascal and Pierre de Fermat had been meeting informally since about 1630.86

In Britain there were two groups. One set formed around John Wallis, a mathematician, and met weekly at Gresham College in London from about 1645. (Wallis was a particular favourite of Oliver Cromwell because he had used his mathematical gift to break enemy ciphers.) The second group comprised the republican-minded men who centred in Oxford on the Hon. Robert Boyle, son of the Earl of Cork; Boyle had spent some years in Puritan Geneva. He was a physicist interested in the vacuum and in gases. A rich aristocrat, Boyle was helped by his assistant Robert Hooke, who made the instruments and actually did the experiments. (Boyle called his group the Invisible College.) It may well have been Hooke who first had the idea of the inverse-square law of gravity.87 Wallis and his group were among those who were put in place at Oxford by Cromwell, where they met up with Boyle and his Invisible College. This enlarged group turned into the Royal Society, founded in 1660 and granted its royal charter in 1662, though for some time the Fellows of the new society were still known as Gresham Philosophers. Charles II, who was persuaded to start the society by John Evelyn, the diarist, must have thought the whole process somewhat odd because, as recent scholarship has shown, out of sixty-eight early Fellows, no fewer than forty-two were Puritans.88 On the other hand, this make-up gave the society its complexion – such men showed an indifference to the authority of the past.

Among the other early Fellows of the Royal Society was Christopher Wren, better known as the architect of St Paul’s and many London churches. There was also Thomas Sprat, later bishop of Rochester, who wrote what he called a ‘history’ of the Royal Society in 1667, only seven years after it had been founded, though it was more a defence of the so-called ‘new experimental philosophy’ and skipped over the awkward political colour of some of its members. (The frontispiece, besides showing the royal patron, also shows Francis Bacon.) After denouncing a number of dogmatic (speculative/metaphysical) philosophers, Sprat went on: ‘The Third sort of new Philosophers, have been those, who have not onely disagreed from the Antients, but have also propos’d to themselves the right course of slow, and sure Experimenting . . . For now the Genius of Experimenting is so much dispers’d . . . All places and corners are now busie . . .’ And he described some of the members. ‘The principal and most constant of them were Seth Ward, the present Lord Bishop of Exeter, Mr Boyle, Dr Wilkins, Sir William Petty, Mr Mathew Wren, Dr Wallis [a mathematician], Dr Goddard, Dr Willis [a physician], Dr Theodore Haak, Dr Christopher Wren and Mr Hooke.’89

Sir William Petty was a pioneer of statistical methods (though he was also a professor of anatomy at Oxford, where he carried out many dissections, and at one stage was credited with inventing the water closet, now thought to have been introduced in Elizabethan times). Once described as ‘being bored with three quarters of what he knows’, in 1662 Petty published a Treatise on Taxes and Contributions which was one of the first works to show an awareness that value in an economy derives not from its store of treasure but from its capacity for production.90 In the same year, with Petty’s help, John Graunt, another early FRS, published Observations on the Bills of Mortality of the City of London, which became the basis for life-insurance tables. These illustrate the very practical bent of the early Royal Society Fellows and their many-sided nature. None more so than Robert Hooke, the society’s curator of experiments, whom history has treated unkindly. Hooke invented the balance spring of the modern watch, produced one of the first books to publish drawings of microscopic animals, Micrographia (a ‘jolting revelation’), laid out the meridian at Greenwich, and had the idea, along with others, that gravitation extended throughout the solar system and held it together. As we have seen, it was discussions between Hooke, Wren and Halley that induced Halley to approach Newton, which resulted in the Principia. Hooke has been relatively forgotten because he quarrelled with Newton over the interpretation of the results of Newton’s optics experiments. Lately, however, Hooke has been rehabilitated.91

It was the Fellows of the Royal Society who developed the familiar form of scientific publication. One of Hooke’s jobs, as an employee of the Society, was to help earn its keep by publishing the Philosophical Transactions and selling them. Fellows, and other scientists, had begun writing in to the Society with their discoveries, and in this way the Society became a clearing house, and then the publisher, of the Transactions, which formed a model for subsequent scientific communication. In their hard-headed, practical way, the Fellows demanded good English in these papers, even going so far as to appoint the poet John Dryden to a committee to oversee the writing style of scientists.

It has often been claimed that the early universities played little role in the development of modern science – that most of the academies and societies were private or ‘royal’ affairs. Mordechai Feingold has recently cast doubt on this. He shows that there was a big increase in the university population between 1550 and 1650 (at least in England), that the Lucasian chair in mathematics was founded at Cambridge in 1663, and that the Savilian chairs in geometry and astronomy had been founded at Oxford in 1619.92 John Bainbridge, an early Savilian professor of astronomy, undertook expeditions to see eclipses and other phenomena, and when Henry Briggs, the logarithm expert, died in 1630, his funeral was attended by all the heads of Oxford colleges. Feingold identified the correspondence of several individuals – Henry Savile himself, William Camden, Patrick Young, Thomas Crane, Richard Madox – who each formed part of a Europe-wide network of scientists, linked to such figures as Brahe, Kepler, Scaliger and Gassendi. He shows that students were exposed to scientific results and that textbooks were modified in the light of those results.93 Overall, the picture he paints is of the universities as part of the scientific revolution but without producing any great names of their own or major innovations. This is not perhaps a very dramatic or striking contribution, but Feingold insists it wasn’t negligible either. Nor should we forget that Newton was a Cambridge man, Galileo a professor at Pisa, and that Harvey and Vesalius both developed their ideas in a university context.

These few details about the early days of the Royal Society and the universities bring us back to the beginning of this chapter and the question as to whether or not we may speak of a scientific revolution. It is certainly true that 144 years elapsed between the publication of Copernicus’ De revolutionibus and Newton’s Principia Mathematica, and that no less a figure than Newton himself was interested in alchemy and numerology, subjects or practices that were dying out. But, as Thomas Sprat’s book shows, the men of the time did feel that they were taking part in something new, in a venture that needed defending from its critics, and they took as their guiding spirit Francis Bacon, rather than some figure from antiquity. Experimentation, as Sprat said, was proliferating.

There is little doubt too that knowledge was being reorganised in new and more modern ways. Peter Burke, for example, has described this reorganisation in the sixteenth and seventeenth centuries. The word ‘research’ was first used in Étienne Pasquier’s Recherches de la France in 1560.94 Libraries were revamped in the seventeenth century, with a more secular layout, subjects like mathematics, geography and dictionaries being promoted at the expense of theology.95 The Catholic Index was alphabetised, an essentially artificial and non-theological arrangement, and Graunt and Petty’s work on early statistics was augmented by the plague episodes of 1575 and 1630, which stimulated yet more counting of people – and by a royal census of trees in France.96

Richard Westfall has outlined what are perhaps the more important ways in which ideas changed during the scientific revolution. Beforehand, he says, theology was queen of all the sciences – now, it is ‘not allowed on the premises any more’.97 ‘A once Christian culture has become a scientific one . . . Scientists of today can read and recognise works done after 1687. It takes a historian to comprehend those written before 1543.’98 ‘. . . in its most general terms, the Scientific Revolution was the replacement of Aristotelian natural philosophy, which aside from its earlier career had completely dominated thought about nature in western Europe during the previous four centuries.’99 ‘We have to look carefully . . . to find experiments before the seventeenth century. Experiment had not yet been considered the distinctive procedure of natural philosophy; by the end of the century it was so recognised . . . The elaboration and expansion of the set of available instruments was closely allied to experimentation. I have been collecting information on the scientists from this period that appear in the Dictionary of Scientific Biography, 631 in all. One hundred fifty-six of them, only a small decimal short of one-quarter, either made instruments or developed new ones. They are spread over every field of investigation.’100

In the end, Westfall thought it all came down to the relationship between Christianity and science. He quotes the episode, early in the seventeenth century, when the Catholic Church, in particular Cardinal Bellarmino, condemned Copernican astronomy because it conflicted with certain overt passages in the scriptures. Sixty-five years later Newton engaged in a correspondence with a certain Thomas Burnet, who claimed that the scriptural account of the Creation was a fiction, composed by Moses for political purposes. Newton defended Genesis, arguing that it stated what science – chemistry – would lead us to expect. ‘Where Bellarmino had employed Scripture to judge a scientific opinion, both Burnet and Newton used science to judge the validity of Scripture.’ This was a huge transformation. Theology had become subordinate to science, the very opposite of the earlier position and, as Westfall concluded, that hierarchy has never been reversed.101

In historical terms, sixty-five years is a very brief time-span. Without question, the changes wrought by science in the seventeenth century were ‘sudden, radical, and complete’. In short, they were a revolution.
