A fictional speculation: suppose a ‘time machine’ allowed us to send one succinct ‘tweet’ to great scientists of the past—Newton or Archimedes, for instance. What message would most enlighten them and transform their vision of the world? I think it would be the marvellous realisation that we ourselves, and everything in the everyday world, are made from fewer than one hundred different kinds of atoms—lots of hydrogen, oxygen, and carbon; small but crucial admixtures of iron, phosphorus, and other elements. All materials—living and nonliving—owe their structures to the intricate patterns in which atoms stick together, and how they react. The whole of chemistry is determined by the interactions between the positively charged nuclei of atoms and the negatively charged swarm of electrons in which they’re embedded.
Atoms are simple; we can write down the equations of quantum mechanics (Schrödinger’s equation) that describe their properties. So, on the cosmic scale, are black holes, for which we can solve Einstein’s equations. These ‘basics’ are well enough understood to enable engineers to design all the objects of the modern world. (Einstein’s theory of general relativity has found practical use in GPS satellites; their clocks would lose accuracy if they weren’t properly corrected for the effects of gravity.)
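The size of that GPS correction can be estimated from standard textbook figures (the numbers below are not drawn from this book): gravity makes an orbiting clock run fast by roughly 45 microseconds per day, while its orbital speed slows it by roughly 7, and the two effects do not cancel.

```latex
% Approximate net clock drift on a GPS satellite, and the ranging
% error it would imply if left uncorrected:
\Delta t \approx 45\,\mu\mathrm{s} - 7\,\mu\mathrm{s} \approx 38\,\mu\mathrm{s}\ \text{per day},
\qquad
c\,\Delta t \approx (3\times10^{8}\,\mathrm{m/s})\times(38\times10^{-6}\,\mathrm{s}) \approx 11\,\mathrm{km}
```

Uncorrected, the drift would accumulate into kilometres of position error every day, which is why the relativistic adjustment is built into the system.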
The intricate structure of all living things testifies that layer on layer of complexity can emerge from the operation of underlying laws. Mathematical games can help to develop our awareness of how simple rules, reiterated over and over again, can indeed have surprisingly complex consequences.
John Conway, now at Princeton University, is one of the most charismatic figures in mathematics.[1] When he taught at Cambridge, students created a ‘Conway appreciation society’. His academic research deals with a branch of mathematics known as group theory. But he reached a wider audience and achieved a greater intellectual impact through developing the Game of Life.
In 1970 Conway was experimenting with patterns on a Go board; he wanted to devise a game that would start with a simple pattern and use basic rules to iterate again and again. He discovered that by adjusting the rules of his game and the starting patterns, some arrangements produce incredibly complicated results—seemingly from nowhere because the rules of the game are so basic. ‘Creatures’ emerged, moving around the board, that seemed to have a life of their own. The simple rules merely specify when a white square turns into a black square (and vice versa), but, applied over and over again, a fascinating variety of complicated patterns is created. Devotees of the game identified objects such as the ‘glider’, the ‘glider gun’, and other reproducing patterns.
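The rules themselves fit in a few lines of code. Here is a minimal sketch (my own illustrative implementation, not Conway’s notation): a dead cell is ‘born’ when exactly three of its eight neighbours are live, and a live cell survives with two or three.

```python
# A minimal sketch of Conway's rules: cells are (x, y) pairs in a set;
# birth on exactly three live neighbours, survival on two or three.
from collections import Counter

def step(live):
    """Advance the set of live cells by one generation."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The famous 'glider': after four generations the same five-cell shape
# reappears shifted one square diagonally, so it seems to crawl.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(sorted(g) == sorted((x + 1, y + 1) for (x, y) in glider))  # True
```

Nothing in the rules mentions motion, yet the glider ‘moves’: an emergent property of iteration, exactly the point Conway’s game makes.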
Conway indulged in a lot of ‘trial and error’ before he came up with a simple ‘virtual world’ that allowed for interesting emergent variety. He used pencil and paper, before the days of personal computers, but the implications of the Game of Life only emerged when the greater speed of computers could be harnessed. Likewise, early PCs enabled Benoit Mandelbrot and others to plot out the marvellous patterns of fractals—showing how simple mathematical formulas can encode intricate apparent complexity.
Most scientists resonate with the perplexity expressed in a classic essay by the physicist Eugene Wigner, titled ‘The Unreasonable Effectiveness of Mathematics in the Natural Sciences’.[2] And also with Einstein’s dictum that ‘the most incomprehensible thing about the universe is that it is comprehensible’. We marvel that the physical world isn’t anarchic—that atoms obey the same laws in distant galaxies as in our laboratories. As I’ve already noted (section 3.5), if we ever discover aliens and want to communicate with them, mathematics, physics, and astronomy would be perhaps the only shared culture. Mathematics is the language of science—and has been ever since the Babylonians devised their calendar and predicted eclipses. (Some of us would likewise regard music as the language of religion.)
Paul Dirac, one of the pioneers of quantum theory, showed how the internal logic of mathematics can point the way towards new discoveries. Dirac averred that ‘the most powerful method of advance is to employ all the resources of pure mathematics in attempts to perfect and generalise the mathematical formalism that forms the existing basis of theoretical physics and—after each success in this direction—to try to interpret the new mathematical features in terms of physical entities’.[3] It was this approach—following the mathematics where it leads—that led Dirac to the idea of antimatter: ‘antielectrons’, now known as positrons, were discovered just a few years after he formulated an equation that would have seemed ugly without them.
Present-day theorists, with the same motives as Dirac, are hoping to understand reality at a deeper level by exploring concepts such as string theory, involving scales far smaller than any we can directly probe. Likewise, at the other extreme, some are exploring cosmological theories that offer intimations that the universe is vastly more extensive than the ‘patch’ we can observe with our telescopes (see section 4.3).
Every structure in the universe is composed of basic ‘building blocks’ governed by mathematical laws. However, the structures are generally too complicated for even the most powerful computers to calculate. But perhaps in the far-distant future, posthuman intelligence (not in organic form, but in autonomously evolving objects) will develop hypercomputers with the processing power to simulate living things—even entire worlds. Perhaps advanced beings could use hypercomputers to simulate a ‘universe’ that is not merely patterns on a chequerboard (like Conway’s game) or even like the best ‘special effects’ in movies or computer games. Suppose they could simulate a universe fully as complex as the one we perceive ourselves to be in. A disconcerting thought (albeit a wild speculation) then arises: perhaps that’s what we really are!
Possibilities once in the realms of science fiction have shifted into serious scientific debate. From the very first moments of the big bang to the possibilities for alien life, scientists are led to worlds even weirder than most fiction writers envision. At first sight one might think it presumptuous to claim—or even seek—to understand the remote cosmos when there’s so much that baffles us closer at hand. But that’s not necessarily a fair assessment. There is nothing paradoxical about the whole being simpler than its parts. Imagine an ordinary brick—its shape can be described in a few numbers. But if you shatter it, the fragments can’t be described so succinctly.
Scientific progress seems patchy. Odd though it may seem, some of the best-understood phenomena are far away in the cosmos. Even in the seventeenth century, Newton could describe the ‘clockwork of the heavens’; eclipses could be both understood and predicted. But few other things are so predictable, even when we understand them. For instance, it’s hard to forecast, even a day before, whether those who travel to view an eclipse will encounter clouds or clear skies. Indeed, in most contexts, there’s a fundamental limit to how far ahead we can predict. That’s because tiny contingencies—like whether or not a butterfly flaps its wings—have consequences that grow exponentially. For reasons like this, even the most fine-grained computation cannot normally forecast British weather even a few days ahead. (But—and this is important—this doesn’t stymie predictions of long-term climate change, nor weaken our confidence that it will be colder next January than it is in July.)
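The exponential growth of tiny contingencies is easy to exhibit in a toy model. The sketch below uses the chaotic logistic map (my own illustrative choice, not an example from this book): two ‘forecasts’ start one part in a billion apart.

```python
# Toy illustration of sensitive dependence on initial conditions:
# iterate the chaotic logistic map x -> 4x(1 - x) from two starting
# states that differ by one part in a billion.
x, y = 0.400000000, 0.400000001
for step in range(1, 51):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: difference = {abs(x - y):.3e}")
# The gap roughly doubles each step, so after ~30 steps the two
# trajectories bear no relation to each other.
```

Weather models behave analogously, which is why forecast skill decays within days even as statistical properties, like climate, remain predictable.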
Today, astronomers can convincingly attribute tiny vibrations in a gravitational-wave detector to a ‘crash’ between two black holes more than a billion light years from Earth.[4] In contrast, our grasp of some familiar matters that interest us all—diet and child care, for instance—is still so meagre that ‘expert’ advice changes from year to year. When I was young, milk and eggs were thought to be good; a decade later they were deemed dangerous because of their high cholesterol content; and now they seem again to be deemed harmless. So lovers of chocolate and cheese may not have to wait long before being told those foods are good for them. And there is still no cure for many of the commonest ailments.
But it actually isn’t paradoxical that we’ve achieved confident understanding of arcane and remote cosmic phenomena while being flummoxed by everyday things. It’s because astronomy deals with phenomena far less complex than the biological and human sciences (even than ‘local’ environmental sciences).
So how should we define or measure complexity? A formal definition was suggested by the Russian mathematician Andrey Kolmogorov: an object’s complexity depends on the length of the shortest computer programme that could generate a full description of it.
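In symbols (this is the standard formulation; the notation here is mine), the definition reads:

```latex
% Kolmogorov complexity of an object x, relative to a universal
% computer U: the length of the shortest programme p that makes U
% output x.
K_U(x) = \min\{\, |p| \;:\; U(p) = x \,\}
```

The choice of reference computer U shifts the value by at most an additive constant, which is why the definition is meaningful at all.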
Something made of only a few atoms cannot be very complicated. Big things need not be complex either. Consider, for instance, a crystal—even if it were large it wouldn’t be called complex. The recipe for (for instance) a salt crystal is short: take sodium and chlorine atoms and pack them together, over and over again, to make a cubical lattice. Conversely, if you take a large crystal and chop it up, there is little change until it is broken down to the scale of single atoms. Despite its vastness, a star is fairly simple too. Its core is so hot that no chemicals can exist (complex molecules get torn apart); it is basically an amorphous gas of atomic nuclei and electrons. Indeed, black holes, exotic though they seem, are among the simplest entities in nature. They can be described precisely by equations no more complicated than those that describe a single atom.
Our high-tech objects are complex. For instance, a silicon chip with a billion transistors has structure on all levels down to a few atoms. But most complex of all are living things. An animal has interlinked internal structure on several different scales—from the proteins in single cells, right up to limbs and major organs. It doesn’t preserve its essence if it is chopped up. It dies. Humans are more complex than atoms or stars (and, incidentally, midway between them in mass; it takes about as many human bodies to make up the Sun as there are atoms in each of us). The genetic recipe for a human being is encoded in three billion links of DNA. But we are not fully determined by our genes; we are moulded by our environment and experiences. The most complex things we know about in the universe are our own brains. Thoughts and memories (coded by neurons in the brain) are far more varied than genes.
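That ‘midway in mass’ aside survives a rough order-of-magnitude check, using illustrative round numbers of my own (a 70-kilogram person, atoms averaging a few proton masses):

```latex
% Number of human bodies in the Sun versus number of atoms in a body:
\frac{M_{\odot}}{M_{\mathrm{person}}} \approx \frac{2\times10^{30}\,\mathrm{kg}}{70\,\mathrm{kg}} \approx 3\times10^{28},
\qquad
N_{\mathrm{atoms}} \approx \frac{70\,\mathrm{kg}}{7\times(1.7\times10^{-27}\,\mathrm{kg})} \approx 6\times10^{27}
```

Both figures are within a factor of a few of 10²⁸, so a human is indeed roughly the geometric mean of an atom and a star.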
There’s an important difference, however, between ‘Kolmogorov complexity’ and whether something actually looks complicated. For instance, Conway’s Game of Life leads to complicated-looking structures. But these can all be described by a short programme: take a particular starting position, and then iterate, over and over again, according to the simple rules of the game. The intricate fractal pattern of Mandelbrot’s set is likewise the result of a simple algorithm. But these are exceptions. Most things in our everyday environment are too complicated to be predicted, or even fully described in detail. But much of their essence can nonetheless be captured by a few key insights. Our perspective has been transformed by great unifying ideas. The concept of continental drift (plate tectonics) helps us to fit together a whole raft of geological and ecological patterns across the globe. Darwin’s insight—evolution via natural selection—reveals the overarching unity of the entire web of life on this planet. And the double helix of DNA reveals the universal basis for heredity. There are patterns in nature. There are even patterns in how we humans behave—in how cities grow, how epidemics spread, and how technologies like computer chips develop. The more we understand the world, the less bewildering it becomes and the more we’re able to change it.
The sciences can be viewed as a hierarchy, ordered like the floors of a building, with those dealing with more complex systems higher up: particle physics in the basement, then the rest of physics, then chemistry, then cell biology, then botany and zoology, and then the behavioural and human sciences (with the economists claiming the penthouse).
The ordering of the sciences in this hierarchy is not controversial. What is controversial is the sense in which the ‘ground-floor’ sciences—particle physics in particular—are deeper or more fundamental than the others. In one sense they truly are. As the physicist Steven Weinberg has pointed out: ‘The arrows all point downward’. Put another way, if you go on asking Why? Why? Why? you end up at the particle level. Scientists are nearly all reductionists in Weinberg’s sense; they feel confident that everything, however complex, is a solution of Schrödinger’s equation—unlike the ‘vitalists’ of earlier eras, who thought that living things were infused with some special ‘essence’. But this reductionism isn’t conceptually useful. As another great physicist, Philip Anderson, emphasised, ‘more is different’; macroscopic systems that contain large numbers of particles manifest ‘emergent’ properties and are best understood in terms of new concepts appropriate to the level of the system.
Even a phenomenon as un-mysterious as the flow of water in pipes or rivers is understood in terms of ‘emergent’ concepts like viscosity and turbulence. Specialists in fluid mechanics don’t care that water is actually made up of H₂O molecules; they see water as a continuum. Even if they had a hypercomputer that could solve Schrödinger’s equation for the flow, atom by atom, the resultant simulation wouldn’t provide any insight into how waves break, or what makes a flow become turbulent. And new irreducible concepts are even more crucial to our understanding of really complicated phenomena—for instance, migrating birds or human brains. Phenomena on different levels of the hierarchy are understood in terms of different concepts—turbulence, survival, alertness, and so forth. The brain is an assemblage of cells; a painting is an assemblage of pigments. But what is important and interesting is the pattern and structure—the emergent complexity.
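One standard example of such an emergent quantity (textbook fluid mechanics, not spelled out in this passage) is the dimensionless Reynolds number, which predicts when a flow turns turbulent without any reference to individual molecules:

```latex
% rho = fluid density, v = flow speed, L = characteristic length,
% mu = viscosity; flows with Re above roughly a few thousand are
% typically turbulent.
\mathrm{Re} = \frac{\rho\, v\, L}{\mu}
```

A single continuum-level ratio does the explanatory work; Schrödinger’s equation, though ‘underneath’ it all, never appears.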
That’s why the analogy with a building is a poor one. The entire structure of a building is imperilled by weak foundations. In contrast, the ‘higher level’ sciences dealing with complex systems aren’t vulnerable to an insecure base, as a building is. Each science has its own distinct concepts and modes of explanation. Reductionism is true in a sense. But it’s seldom true in a useful sense. Only about 1 percent of scientists are particle physicists or cosmologists. The other 99 percent work on ‘higher’ levels of the hierarchy. They’re challenged by the complexity of their subject—not by any deficiencies in our understanding of subnuclear physics.
The Sun formed 4.5 billion years ago, but it’s got around 6 billion years more before its fuel runs out. It will then flare up, engulfing the inner planets. And the expanding universe will continue—perhaps forever—destined to become ever colder, ever emptier. To quote Woody Allen, eternity is very long, especially towards the end.
Any creatures witnessing the Sun’s demise won’t be human—they’ll be as different from us as we are from a bug. Posthuman evolution—here on Earth and far beyond—could be as prolonged as the Darwinian evolution that has led to us—and even more wonderful. And evolution is now accelerating; it can happen via ‘intelligent design’ on a technological time-scale, operating far faster than natural selection and driven by advances in genetics and in artificial intelligence (AI). The long-term future probably lies with electronic rather than organic ‘life’ (see section 3.3).
In cosmological terms (or indeed in a Darwinian time frame) a millennium is but an instant. So let us ‘fast forward’ not for a few centuries, or even for a few millennia, but for an ‘astronomical’ timescale millions of times longer than that. The ‘ecology’ of stellar births and deaths in our galaxy will gradually slow, until jolted by the ‘environmental shock’ of an impact with the Andromeda Galaxy, maybe four billion years hence. The debris of our galaxy, Andromeda, and their smaller companions—which now make up what is called the Local Group—will thereafter aggregate into one amorphous swarm of stars.
On the cosmic scale, gravitational attraction is overwhelmed by a mysterious force latent in empty space that pushes galaxies apart from each other. Galaxies accelerate away and disappear over a horizon—rather like an inside-out version of what happens when something falls into a black hole. All that will be left in view, after a hundred billion years, will be the dead and dying stars of our Local Group. But these could continue for trillions of years—time enough, perhaps, for the long-term trend for living systems to gain complexity and ‘negative entropy’ to reach a culmination. All the atoms that were once in stars and gas could be transformed into structures as intricate as a living organism or a silicon chip—but on a cosmic scale. Against the darkening background, protons may decay and dark matter particles annihilate; there will be occasional flashes as black holes evaporate—and then silence.
In 1979, Freeman Dyson (already mentioned in section 2.1) published a now-classic article whose aim was ‘to establish numerical bounds within which the universe’s destiny must lie’.[5] Even if all material were optimally converted into a computer or superintelligence, would there still be limits on how much information could be processed? Could an unbounded number of thoughts be thought? The answer depends on the cosmology. It takes less energy to carry out computations at low temperatures. For the universe we seem to be in, Dyson’s limit would be finite, but would be maximised if the ‘thinkers’ stayed cool and thought slowly.
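The physics behind ‘thinking cool’ can be made concrete with Landauer’s bound, a standard result in the thermodynamics of computation (consistent with, though not quoted from, Dyson’s paper):

```latex
% Minimum energy dissipated in erasing one bit of information at
% temperature T (k_B is Boltzmann's constant):
E_{\min} = k_B\, T \ln 2
```

Since the cost per elementary operation scales with temperature, a fixed energy reserve buys ever more thoughts as the thinkers, and the universe around them, grow colder.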
Our knowledge of space and time is incomplete. Einstein’s relativity (describing gravity and the cosmos) and the quantum principle (crucial for understanding the atomic scale) are the two pillars of twentieth-century physics, but a theory that unifies them is unfinished business. Current ideas suggest that progress will depend on fully understanding what might seem the simplest entity of all—‘mere’ empty space (the vacuum) is the arena for everything that happens; it may have a rich texture, but on scales a trillion trillion times smaller than an atom. According to string theory, each ‘point’ in ordinary space might, if viewed with this magnification, be revealed as a tightly folded origami in several extra dimensions.
The same fundamental laws apply throughout the entire domain we can survey with telescopes. Were that not so—were atoms ‘anarchic’ in their behaviour—we’d have made no progress in understanding the observable universe. But this observable domain may not be all of physical reality; some cosmologists speculate that ‘our’ big bang wasn’t the only one—that physical reality is grand enough to encompass an entire ‘multiverse’.
We can only see a finite volume—a finite number of galaxies. That’s essentially because there’s a horizon, a shell around us, delineating the greatest distance from which light can reach us. But that shell has no more physical significance than the circle that delineates your horizon if you’re in the middle of the ocean. Even conservative astronomers are confident that the volume of space-time within range of our telescopes—what astronomers have traditionally called ‘the universe’—is only a tiny fraction of the aftermath of the big bang. We’d expect far more galaxies to lie beyond the horizon, unobservable, each of which (along with any intelligences it hosts) will evolve rather like our own.
It’s a familiar idea that if enough monkeys were given enough time, they would write the works of Shakespeare (and indeed all other books, along with every conceivable string of gobbledygook). This statement is mathematically correct. But the number of ‘failures’ that would precede eventual success is a number with about ten million digits. Even the number of atoms in the visible universe has only eighty digits. If all the planets in our galaxy were crawling with monkeys, who had been typing ever since the first planets formed, then the best they would have done is typed a single sonnet (their output would include short coherent stretches from all the world’s literatures, but no single complete work). To produce a specific set of letters as long as a book is so immensely improbable that it wouldn’t have happened even once within the observable universe. When we throw dice we eventually get a long succession of sixes, but (unless they are biased) we wouldn’t expect to get more than a hundred in a row even if we went on for a billion years.
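The size of that ‘failures’ number can be checked with illustrative round figures of my own (roughly five million characters in Shakespeare’s complete works, a thirty-key typewriter):

```latex
% Number of equally likely typescripts of the complete works:
30^{\,5\times10^{6}} = 10^{\,5\times10^{6}\,\log_{10}30} \approx 10^{\,7.4\times10^{6}}
```

The expected number of failures before one exact success is of this order, a number with several million digits, consistent with the figure quoted above and dwarfing the mere eighty digits that count the atoms of the visible universe.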
However, if the universe stretches far enough, everything could happen—somewhere far beyond our horizon there could even be a replica of Earth. This requires space to be VERY big—described by a number not merely with a million digits but with 10 to the power of 100 digits: a one followed by one hundred zeroes. Ten to the power of 100 is called a googol, and a number with a googol of zeros is a googolplex.
Given enough space and time, all conceivable chains of events could be played out somewhere, though almost all of these would occur far out of range of any observations we could conceivably make. The combinatorial options could encompass replicas of ourselves, taking all possible choices. Whenever a choice has to be made, one of the replicas will take each option. You may feel that a choice you make is ‘determined’. But it may be a consolation that, somewhere far away (far beyond the horizon of our observations) you have an avatar who has made the opposite choice.
All this could be encompassed within the aftermath of ‘our’ big bang, which could extend over a stupendous volume. But that’s not all. What we’ve traditionally called ‘the universe’—the aftermath of ‘our’ big bang—may be just one island, just one patch of space and time, in a perhaps infinite archipelago. There may have been many big bangs, not just one. Each constituent of this ‘multiverse’ could have cooled down differently, maybe ending up governed by different laws. Just as Earth is a very special planet among zillions of others, so—on a far grander scale—could our big bang have been a rather special one. In this hugely expanded cosmic perspective, the laws of Einstein and the quantum could be mere parochial bylaws governing our cosmic patch. So, not only could space and time be intricately ‘grainy’ on a submicroscopic scale, but also, at the other extreme—on scales far larger than astronomers can probe—it may have a structure as intricate as the fauna of a rich ecosystem. Our current concept of physical reality could be as constricted, in relation to the whole, as the perspective of the Earth available to a plankton whose ‘universe’ is a spoonful of water.
Could this be true? A challenge for twenty-first-century physics is to answer two questions. First, are there many ‘big bangs’ rather than just one? Second—and this is even more interesting—if there are many, are they all governed by the same physics?
If we’re in a multiverse, it would imply a fourth and grandest Copernican revolution; we’ve had the Copernican revolution itself, then the realisation that there are billions of planetary systems in our galaxy; then that there are billions of galaxies in our observable universe. But now that’s not all. The entire panorama that astronomers can observe could be a tiny part of the aftermath of ‘our’ big bang, which is itself just one bang among a perhaps infinite ensemble.
(At first sight, the concept of parallel universes might seem too arcane to have any practical impact. But it may [in one of its variants] actually offer the prospect of an entirely new kind of computer: the quantum computer, which can transcend the limits of even the fastest digital processor by, in effect, sharing the computational burden among a near infinity of parallel universes.)
Fifty years ago, we weren’t sure whether there had been a big bang. My Cambridge mentor Fred Hoyle, for instance, contested the concept, favouring a ‘steady state’ cosmos that was eternal and unchanging. (He was never fully converted—in his later years he espoused a compromise idea that might be called a ‘steady bang’.) Now we have enough evidence to delineate cosmic history back to the ultradense first nanosecond—with as much confidence as a geologist inferring the early history of Earth. So in fifty more years, it is not overoptimistic to hope that we may have a ‘unified’ physical theory, corroborated by experiment and observation in the everyday world, that is broad enough to describe what happened in the first trillionth of a trillionth of a trillionth of a second—where the densities and energies were far higher than the range in which current theories apply. If that future theory were to predict multiple big bangs we should take that prediction seriously, even though it can’t be directly verified (just as we give credence to what Einstein’s theory tells us about the unobservable insides of black holes, because the theory has survived many tests in domains we can observe).
We may, by the end of this century, be able to ask whether or not we live in a multiverse, and how much variety its constituent ‘universes’ display. The answer to this question will determine how we should interpret the ‘biofriendly’ universe in which we live (sharing it with any aliens with whom we might one day make contact).
My 1997 book, Before the Beginning,[6] speculated about a multiverse. Its arguments were partly motivated by the seemingly ‘biophilic’ and fine-tuned character of our universe. This would occasion no surprise if physical reality embraced a whole ensemble of universes that ‘ring the changes’ on the basic constants and laws. Most would be stillborn or sterile, but we would find ourselves in one of those where the laws permitted emergent complexity. This idea had been bolstered by the ‘cosmic inflation’ theory of the 1980s, which offered new insights into how our entire observable universe could have ‘sprouted’ from an event of microscopic size. It gained further serious attention when string theorists began to favour the possibility of many different vacuums—each an arena for microphysics governed by different laws.
Ever since, I’ve had a close-up view of this shift in opinion and the emergence of these (admittedly speculative) ideas. In 2001, I helped organise a conference on this theme. It took place in Cambridge, but not in the university. I hosted it at my home, a farmhouse on the edge of the city, in a converted barn that offered a somewhat austere location for our discussions. Some years later, we had a follow-up conference. This time the location was very different: a rather grand room in Trinity College, with a portrait of Newton (the college’s most famous alumnus) behind the podium.
The theorist Frank Wilczek (famous for his role, while still a student, in formulating what is called the ‘standard model’ of particle physics) attended both meetings. When he spoke at the second, he contrasted the atmosphere at the two gatherings.
He described physicists at the first meeting as ‘fringe’ voices in the wilderness who had for many years promoted strange arguments about conspiracies among fundamental constants and alternative universes. Their concerns and approaches seemed totally alien to the consensus vanguard of theoretical physics, which was busy successfully constructing a unique and mathematically perfect universe. But at the second meeting, he noted that ‘the vanguard had marched off to join the prophets in the wilderness’.
Some years ago, I was on a panel at Stanford University where we were asked by the chairman: ‘On the scale, “would you bet your goldfish, your dog, or your life,” how confident are you about the multiverse concept?’ I said that I was nearly at the dog level. Andrei Linde, a Russian cosmologist who had spent twenty-five years promoting a theory of ‘eternal inflation’, said he’d almost bet his life. Later, on being told this, the eminent theorist Steven Weinberg said he’d happily bet Martin Rees’s dog and Andrei Linde’s life.
Andrei Linde, my dog, and I will all be dead before this is settled. It’s not metaphysics. It’s highly speculative. But it’s exciting science. And it may be true.
A feature of science is that as the frontiers of our knowledge are extended, new mysteries, just beyond the frontiers, come into sharper focus. Unexpected discoveries have been perennially exciting in my own subject of astronomy. In every subject there will, at every stage, be ‘unknown unknowns’. (Donald Rumsfeld was mocked for saying this in a different context—but of course he was right, and it might have been better for the world had he become a philosopher.) But there is a deeper question. Are there things that we’ll never know, because they are beyond the power of human minds to grasp? Are our brains matched to an understanding of all key features of reality?
We should actually marvel at how much we have understood. Human intuition evolved to cope with the everyday phenomena our remote ancestors encountered on the African savanna. Our brains haven’t changed much since that time, so it is remarkable that they can grasp the counterintuitive behaviours of the quantum world and the cosmos. I conjectured earlier that answers to many current mysteries will come into focus in the coming decades. But maybe not all of them; some key features of reality may be beyond our conceptual grasp. We may sometime ‘hit the buffers’; there may be phenomena, crucial to our long-term destiny and to a full understanding of physical reality, that we are not aware of, any more than a monkey comprehends the nature of stars and galaxies. If aliens exist, some may have ‘brains’ that structure their consciousness in a fashion that we can’t conceive and that have a quite different perception of reality.
We are already being aided by computational power. In the ‘virtual world’ inside a computer, astronomers can mimic galaxy formation, or crash another planet into the Earth to see if that’s how the Moon might have formed; meteorologists can simulate the atmosphere, for weather forecasts and to predict long-term climatic trends; brain scientists can simulate how neurons interact. Just as video games get more elaborate as their consoles get more powerful, so, as computer power grows, these ‘virtual’ experiments become more realistic and useful.
Furthermore, there is no reason why computers can’t actually make discoveries that have eluded unaided human brains. For example, some substances are perfect conductors of electricity when cooled to very low temperatures (superconductors). There is a continuing quest to find the ‘recipe’ for a superconductor that works at ordinary room temperatures (the highest superconducting temperature achieved so far is about −135 degrees Celsius at normal pressures and somewhat higher, about −70 degrees, for hydrogen sulphide at very high pressure). This would allow lossless transcontinental transmission of electricity, and efficient ‘mag-lev’ trains.
The quest involves a lot of ‘trial and error’. But it’s becoming possible to calculate the properties of materials, and to do this so fast that millions of alternatives can be computed, far more quickly than actual experiments could be performed. Suppose that a machine came up with a unique and successful recipe. It might have succeeded in the same way as AlphaGo. But it would have achieved something that would earn a scientist a Nobel prize. It would have behaved as though it had insight and imagination within its rather specialised universe—just as AlphaGo flummoxed and impressed human champions with some of its moves. Likewise, searches for the optimal chemical composition for new drugs will increasingly be done by computers rather than by real experiments, just as for many years aeronautical engineers have simulated air flow over wings by computer calculations rather than depending on wind-tunnel experiments.
Equally important is the capability to discern small trends or correlations by ‘crunching’ huge data sets. To take an example from genetics, qualities like intelligence and height are determined by combinations of genes. To identify these combinations would require a machine fast enough to scan large samples of genomes to identify small correlations. Similar procedures are used by financial traders in seeking out market trends and responding rapidly to them, so that their investors can top-slice funds from the rest of us.
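A hedged sketch of that kind of scan follows; the data are synthetic and the marker indices made up, purely to show the shape of the computation (a single vectorised pass over a people-by-markers matrix).

```python
# Illustrative genome-style scan: which of many markers correlate,
# weakly but consistently, with a measured trait?
import numpy as np

rng = np.random.default_rng(0)
n_people, n_markers = 5_000, 2_000
genotypes = rng.integers(0, 3, size=(n_people, n_markers)).astype(float)

# Build a trait that depends weakly on a few 'causal' markers plus noise.
causal = [7, 42, 1234]
trait = genotypes[:, causal].sum(axis=1) * 0.2 + rng.normal(size=n_people)

# Standardise, then correlate every marker with the trait at once.
g = (genotypes - genotypes.mean(axis=0)) / genotypes.std(axis=0)
t = (trait - trait.mean()) / trait.std()
corr = g.T @ t / n_people

top = np.argsort(-np.abs(corr))[:5]
print(list(top), corr[top].round(3))  # the planted markers surface at the top
```

Each planted marker explains only a few percent of the trait’s variance, far too little to spot by eye, yet one matrix multiplication pulls all three out of the noise.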
My claim that there are limits to what human brains can understand was, incidentally, contested by David Deutsch, a physicist who has pioneered key concepts of ‘quantum computing’. In his provocative and excellent book The Beginning of Infinity,[7] he pointed out that any process is in principle computable. This is true. However, being able to compute something is not the same as having an insightful comprehension of it. Consider an example from geometry, where points in the plane are designated by two numbers, the distance along the x-axis and along the y-axis. Anyone who has studied geometry at school would recognise the equation x² + y² = 1 as describing a circle. The famous Mandelbrot set is described by an algorithm that can be written down in a few lines. And its shape can be plotted by even a modestly powered computer—its ‘Kolmogorov complexity’ isn’t high. But no human who is just given the algorithm can grasp and visualise this immensely complicated ‘fractal’ pattern in the same sense that they can visualise a circle.
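For concreteness, here is that few-line algorithm, in the standard escape-time form (the grid size and iteration cap below are my illustrative choices): a point c belongs to the set if iterating z → z² + c from z = 0 never escapes.

```python
# The few-line algorithm behind the Mandelbrot set.
def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:           # escaped: c lies outside the set
            return False
    return True                  # still bounded after max_iter steps

# Crude character plot of the region [-2, 1] x [-1.25, 1.25]:
for row in range(24):
    y = 1.25 - row * (2.5 / 23)
    print("".join(
        "#" if in_mandelbrot(complex(-2 + col * (3 / 63), y)) else " "
        for col in range(64)
    ))
```

Anyone can verify each arithmetic step, yet no amount of staring at these few lines conveys what the resulting filigree looks like; that is exactly the gap between computing something and comprehending it.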
We can expect further dramatic advances in the sciences during this century. Many questions that now perplex us will be answered, and new questions will be posed that we can’t even conceive today. We should nonetheless be open-minded about the possibility that despite all our efforts, some fundamental truths about nature could be too complex for unaided human brains to fully grasp. Indeed, perhaps we’ll never understand the mystery of these brains themselves—how atoms can assemble into ‘grey matter’ that can become aware of itself and ponder its origins. Or perhaps any universe complicated enough to have allowed our emergence is for just that reason too complicated for our minds to understand.
Whether the long-range future lies with organic posthumans or with intelligent machines is a matter for debate. But we would be too anthropocentric if we believed that a full understanding of physical reality is within humanity’s grasp, and that no enigmas will remain to challenge our posthuman descendants.
If the number one question astronomers are asked is, Are we alone?, the number two question is surely, Do you believe in God? My conciliatory answer is that I do not, but that I share a sense of wonder and mystery with many who do.
The interface between science and religion still engenders controversy, even though there has been no essential change since the seventeenth century. Newton’s discoveries triggered a range of religious (and antireligious) responses. So, even more, did Charles Darwin in the nineteenth century. Today’s scientists evince a variety of religious attitudes; there are traditional believers as well as hard-line atheists among them. My personal view—a boring one for those who wish to promote constructive dialogue (or even just unconstructive debate) between science and religion—is that, if we learn anything from the pursuit of science, it is that even something as basic as an atom is quite hard to understand. This should induce scepticism about any dogma, or any claim to have achieved more than a very incomplete and metaphorical insight into any profound aspect of existence. As Darwin said, in a letter to the American biologist Asa Gray: ‘I feel most deeply that the whole subject is too profound for the human intellect. A dog might as well speculate on the mind of Newton. Let each man hope and believe as he can’.[8]
Creationists believe that God created the Earth more or less as it is—leaving no scope for emergence of new species or enhanced complexity and paying little regard to the wider cosmos. It is impossible to refute, by pure logic, even someone who claims that the universe was created an hour ago, along with all our memories and all vestiges of earlier history. ‘Creationist’ concepts still hold sway among many US evangelicals and in parts of the Muslim world. In Kentucky there is a ‘creation museum’ with what its promoters describe as a ‘full-size’ Noah’s Ark, 510 feet long, built at a cost of $150 million.
A more sophisticated variant—‘intelligent design’—is now more fashionable. This concept accepts evolution but denies that random natural selection can account for the immensely long chain of events that led to our emergence. Much is made of stages where a key component of living things seems to have required a series of evolutionary steps rather than a single leap, but where the intermediate steps would in themselves confer no survival advantage. But this style of argument is akin to traditional creationism. The ‘believer’ focuses on some details (and there are many) that are not yet understood and argues that the seeming mystery constitutes a fundamental flaw in the theory. Anything can be ‘explained’ by invoking supernatural intervention. So, if success is measured by having an explanation, however ‘flip’, then the ‘intelligent designers’ will always win.
But an explanation only has value insofar as it integrates disparate phenomena and relates them to a single underlying principle or unified idea. Such a principle is Darwinian natural selection as expounded in On the Origin of Species, a book Darwin described as ‘one long argument’. Actually, the first great unifying idea was Newton’s law of gravity, identifying the familiar gravitational pull that holds us on the ground and makes an apple fall with the force that holds the Moon and planets in their orbits. Because of Newton, we need not record the fall of every apple.
Intelligent design dates back to classic arguments: a design needs a designer. Two centuries ago, the theologian William Paley introduced the now-well-known metaphor of the watch and the watchmaker—adducing the eye, the opposable thumb, and so forth as evidence of a benign Creator.[9] We now view any biological contrivance as the outcome of prolonged evolutionary selection and symbiosis with its surroundings. Paley’s arguments have fallen from favour even among theologians.[10]
Paley’s view of astronomy was that it was not the most fruitful science for yielding evidence of design, but ‘that being proved, it shows, above all others, the scale of [the Creator’s] operations’. Paley might have reacted differently if he’d known about the providential-seeming physics that led to galaxies, stars, planets, and the distinctive elements of the periodic table. The universe evolved from a simple beginning—a ‘big bang’—specified by quite a short recipe. But the physical laws are ‘given’ rather than having evolved. Claims that this recipe seems rather special can’t be so readily dismissed as Paley’s biological ‘evidences’ (and a possible explanation in terms of a multiverse is mentioned in section 4.3).
A modern counterpart of Paley, the ex-mathematical physicist John Polkinghorne, interprets our fine-tuned habitat as ‘the creation of a Creator who wills that it should be so’.[11] I have had genial public debates with Polkinghorne; he taught me physics when I was a Cambridge student. The line I take is that his theology is too anthropocentric and constricted to be credible. He doesn’t espouse ‘intelligent design’ but believes that God can influence the world by giving a nudge or tweak at places and times when the outcome is especially responsive to small changes—maximum impact with a minimal and readily concealed effort.
When meeting Christian clergy (or their counterparts in other faiths), I try to enquire about what they consider the ‘bottom line’—‘the theoretical minimum’ that must be accepted by their adherents. It’s clear that many Christians regard the resurrection as a historical and physical event. Polkinghorne certainly does; he dresses it up as physics, saying that Christ transitioned to an exotic material state that will befall the rest of us when the apocalypse comes. And in his 2018 Easter message, the Archbishop of Canterbury, Justin Welby, said that if the resurrection is ‘just a story or metaphor, frankly, I should resign from my job’. But how many Catholics really believe in the two miracles—the ‘practical’ part of the examination—that a potential candidate must achieve in order to qualify for sainthood? I’m genuinely perplexed that so many have a faith that has such literal content.
I would describe myself as a practising but unbelieving Christian. The parallel concept is familiar among Jews: there are many who follow traditional observances—lighting candles on Friday nights and so forth. But this need not mean that they accord their religion any primacy, still less that they claim it has any unique truth. They may even describe themselves as atheists. Likewise, as a ‘cultural Christian’, I’m content to participate (albeit irregularly) in the rituals of the Anglican church with which I’ve been familiar since early childhood.
Hard-line atheists focus too much, however, on religious dogma and on what is called ‘natural theology’—seeking evidence of the supernatural in the physical world. They must surely be aware of ‘religious’ people who are manifestly neither unintelligent nor naive. By attacking mainstream religion, rather than striving for peaceful coexistence with it, they weaken the alliance against fundamentalism and fanaticism. They also weaken science. If a young Muslim or evangelical Christian is told that they can’t have their God and accept evolution, they will opt for their God and be lost to science. Adherents of most religions accord high importance to their faith’s communal and ritual aspects—indeed many of them might prioritise ritual over belief. When so much divides us, and change is disturbingly fast, such shared ritual offers bonding within a community. And religious traditions, linking adherents with past generations, should strengthen our concern that we should not leave a degraded world for generations to come.
This line of thought segues into my final theme: how should we respond to the challenges of the twenty-first century and narrow the gap between the world as it is and the world we’d like to live in and share with the rest of ‘creation’?