39


‘THE BEST IDEA, EVER’

Narborough is a small village about ten miles south of Leicester, in the British East Midlands. Late on the evening of 21 November 1983 a fifteen-year-old girl, Lynda Mann, was sexually assaulted and strangled, her body left in a field not too far from her home. A manhunt was launched, but the investigation revealed nothing. Interest in the case died down until the summer of 1986, when on 2 August the body of another fifteen-year-old, Dawn Ashworth, was discovered in a thicket of blackthorn bushes, also near Narborough. She too had been strangled, after being sexually assaulted.

The manhunt this time soon produced a suspect, Richard Buckland, a porter in a nearby hospital.1 He was arrested exactly one week after Dawn’s body was found, following his confession. The similarities in the victims’ ages, the method of killing, and the proximity to Narborough naturally made the police wonder whether Richard Buckland might also be responsible for the death of Lynda Mann, and with this in mind they called upon the services of a scientist who had just developed a new technique, which had become known to police and public alike as ‘genetic fingerprinting.’2 This advance was the brainchild of Professor Alec Jeffreys of Leicester University. Like so many scientific discoveries, Jeffreys’s breakthrough came in the course of his investigation of something else – he was looking to identify the myoglobin gene, which governs the tissues that carry oxygen from the blood to the muscles. Jeffreys was in fact using the myoglobin gene to look for ‘markers,’ characteristic formations of DNA that would identify, say, certain families and would help scientists see how populations varied genetically from village to village, and country to country. What Jeffreys found was that on this gene one section of DNA was repeated over and over again. He soon found that the same observation – repeated sections – was being made in other experiments, investigating other chromosomes. What he realised, and no one else did, was that there seemed to be a widespread weakness in DNA that caused this pointless duplication to take place. As Walter Bodmer and Robin McKie describe it, the process is analogous to a stutterer who repeatedly stammers over the same letter. Moreover, this weakness differed from person to person. The crucial repeated segment was about fifteen base pairs long, and Jeffreys set about identifying it in such a way that it could be seen by eye with the aid of just a microscope. He first froze the blood sample, then thawed it, which broke down the membranes of the red blood cells, but not those of the white cells that contain DNA. With the remains of the red blood cells washed away, an enzyme called proteinase K was added, exploding the white cells and freeing the DNA coils. These were then treated with another enzyme, known as HinfI, which separates out the ribbons of DNA that contain the repeated sequences. Finally, by a process known as electrophoresis, the DNA fragments were sorted into bands of different length and transferred to nylon sheets, where radioactive or luminescent techniques obtained images unique to individuals.3
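The logic of the comparison can be illustrated with a short sketch. This is only an illustration of the band-matching principle, not Jeffreys’s actual laboratory procedure: the fragment lengths, the tolerance, and the function are all invented for the example.

```python
# Toy illustration of DNA-profile comparison. Each profile is reduced to the
# lengths (in base pairs) of its repeated fragments after enzyme digestion and
# electrophoresis; two profiles 'match' if their band patterns coincide within
# a small measurement tolerance. All values here are invented.

def bands_match(profile_a, profile_b, tolerance=10):
    """Return True if every band in one profile has a counterpart in the other."""
    if len(profile_a) != len(profile_b):
        return False
    return all(abs(a - b) <= tolerance
               for a, b in zip(sorted(profile_a), sorted(profile_b)))

crime_scene = [1540, 2310, 3875, 5120]   # fragment lengths from the crime sample
suspect_a = [1240, 2650, 3410, 4980]     # hypothetical suspect A
suspect_b = [1538, 2305, 3880, 5115]     # hypothetical suspect B

print(bands_match(crime_scene, suspect_a))   # False - patterns differ
print(bands_match(crime_scene, suspect_b))   # True  - patterns coincide
```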

Jeffreys was called in to try this technique with Richard Buckland. He was sent samples of semen taken from the bodies of both Lynda Mann and Dawn Ashworth, together with a few cubic centimetres of Buckland’s blood. Jeffreys later described the episode as one of the tensest moments of his life. Until that point he had used his technique simply to test whether immigrants who came to Britain and were admitted on the basis of a law that allowed entry only to close relatives of those already living in the country really were as close as they claimed. A double murder case would clearly attract far more attention. When he went into his lab late one night to get the results, because he couldn’t bear hanging on until the next morning, he got a shock. He lifted the film from its developing fluid, and could immediately see that the semen taken from Lynda and Dawn came from the same man – but that killer wasn’t Richard Buckland.4 The police were infuriated when he told them. Buckland had confessed. To the police mind, that meant the new technique had to be flawed. Jeffreys was dismayed, but when an independent test by Home Office forensic experts confirmed his findings, the police were forced to think again, and Buckland was eventually acquitted, the first person ever to benefit in this way from DNA testing. Once they had adjusted to the surprising result, the police mounted a campaign to test the DNA of all the men in the Narborough area. Despite 4,000 men coming forward, no match was obtained, not until Ian Kelly, a baker who lived some distance from Narborough, revealed to friends that he had taken the test on behalf of a friend, Colin Pitchfork, who did live in the vicinity of the village. Worried by this deception, one of Kelly’s friends alerted the police. Pitchfork was arrested and DNA-tested. The friend had been right to be worried: tests showed that Pitchfork’s DNA matched the semen found on Lynda and Dawn. In January 1988, Pitchfork became the first person to be convicted after genetic fingerprinting. He went to prison for life.5

DNA fingerprinting was the most visible aspect of the revolution in molecular biology. Throughout the late 1980s it came into widespread use, for testing immigrants and men in paternity suits, as well as in rape cases. Its practical successes, so soon after the structure of the double helix had been identified, underlined the new intellectual climate initiated by techniques to clone and sequence genetic material. In tandem with these practical developments, a great deal of theorising about genetics revised and refined our understanding of evolution. In particular, much light was thrown on the stages of evolutionary progress, working forward from the moment life had been created, and on the philosophical implications of evolution.

In 1985 a Glasgow-based chemist, A. G. Cairns-Smith, published Seven Clues to the Origin of Life.6 Cairns-Smith was in some ways a maverick, and his book gave a totally different view of how life began from the one most biologists preferred. The traditional view about the origins of life had been summed up by a series of experiments carried out in the 1950s by S. L. Miller and H. C. Urey. They had assumed a primitive atmosphere on early Earth, consisting of ammonia, methane, and steam (but no oxygen – we shall come back to that). Into this early atmosphere they had introduced ‘lightning’ in the form of electrical discharges, and produced a ‘rich brew’ of organic chemicals, much richer than had been expected, including quite a large yield of amino acids, the building blocks of proteins. Somehow, from this rich brew, the ‘molecules of life’ formed. Graham Cairns-Smith thought this view nonsense because DNA molecules are extremely complicated, too complicated architecturally and in an engineering sense to have been produced accidentally, as the Miller-Urey reactions demanded. In one celebrated part of his book, he calculated that for nucleotides to have been invented, something like 140 separate operations would have needed to evolve at the same time, and that the chances of this having occurred were one in 10¹⁰⁹. Since this is more than the number of electrons in the universe, calculated as 10⁸⁰, Cairns-Smith argued that there had simply not been enough time, or that the universe is not big enough, for nucleotides to have evolved in this way.7
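One way to see how the arithmetic reaches a number of that size is the back-of-envelope reconstruction below; the one-in-six chance per step is an assumption chosen purely for illustration, not a figure Cairns-Smith gives in this account.

```latex
% Treat the ~140 required chemical steps as independent events, each with an
% assumed probability p of arising spontaneously. Then
P(\text{all 140 steps}) \;=\; p^{140} \;\approx\; \left(\tfrac{1}{6}\right)^{140} \;\approx\; 10^{-109},
% i.e. odds of one in 10^{109}, a number larger than the estimated 10^{80}
% electrons in the observable universe.
```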

His own version was startlingly different. He argued that evolution arrived before life as we know it, that there were chemical ‘organisms’ on earth before biochemical ones, and that they provided the architecture that made complex molecules like DNA possible. Looking about him, he saw that there are, in nature, several structures that, in effect, grow and reproduce – the crystal structures in certain clays, which form when water reaches saturation point. These crystals grow, sometimes break up into smaller units, and continue growing again, a process that can be called reproduction.8 Such crystals form different shapes – long columns, say, or flat mats – and since these have formed because they are suited to their micro-environments, they may be said to be adapted and to have evolved. No less important, the mats of crystal can form into layers that differ in ionisation, and it was between these layers, Cairns-Smith believed, that amino acids may have formed, in minute amounts, created by the action of sunlight, in effect photosynthesis. This process would have incorporated carbon atoms into inorganic organisms – there are many substances, such as titanium dioxide, that under sunshine can fix nitrogen into ammonia. By the same process, under ultraviolet light, certain iron salts dissolved in water can fix carbon dioxide into formic acid. The crystal structure of the clays was related to their outward appearance (their phenotype), all of which would have been taken over by carbon-based structures.9 As Linus Pauling’s epic work showed, carbon is amazingly symmetrical and stable, and this is how (and why), Cairns-Smith said, inorganic reproducing organisms were taken over by organic ones.

It is a plausible and original idea, but there are problems. The next step in the chain of life was the creation of cellular organisms, bacteria, for which a skin was required. Here the best candidates are what are known as lipid vesicles, tiny bubbles that form membranes automatically. These chemicals were found naturally occurring in meteorites, which, many people argue, brought the first organic compounds to the very young Earth. On this reasoning then, life in at least some of its elements had an extraterrestrial beginning. Another problem was that the most primitive bacteria, which are indeed little more than rods or discs of activity, surrounded by a skin, are chiefly found around volcanic vents on the ocean floor, where the hot interior of the earth erupts in the process that, as we have already seen, contributes to sea-floor spreading (some of these bacteria can only thrive in temperatures above boiling point, so that one might say life began in hell). It is therefore difficult to reconcile this with the idea that life originally began as a result of sunlight acting on clay-crystal structures in much shallower bodies of water.10

Whatever the actual origin of life (generally regarded as having occurred around 3,800 million years ago), there is no question that the first bacterial organisms were anaerobes, operating only in the absence of oxygen. Given that the early atmosphere of the earth contained very little or no oxygen, this is not so surprising. Around 2,500 million years ago, however, we begin to see in the earth’s rocks the accumulation of haematite, an oxidised form of iron. This appears to mean that oxygen was being produced, but was at first ‘used up’ by other minerals in the world. The best candidate for an oxygen-producer is a blue-green bacterium that, in shallower reaches of water where the sun could get at it and with the light acting on chlorophyll, broke carbon dioxide down into carbon, which it utilised for its own purposes, and oxygen – in other words, photosynthesis. For a time the minerals of the earth soaked up what oxygen was going (limestone rocks captured oxygen as calcium carbonate, iron rusted, and so on), but eventually the mineral world became saturated, and after that, over a thousand million years, billions of bacteria poured out tiny puffs of oxygen, gradually transforming the earth’s atmosphere.11

According to Richard Fortey, in his history of the earth, the next advance was the formation of slimy communities of microbes, structured into ‘mats,’ almost two-dimensional layers. These are still found even today on saline flats in the tropics where the absence of grazing animals allows their survival, though fossilised forms have also been found in rocks dating to more than 3,500 million years old in South Africa and Australia. These structures are known as stromatolites.12 Resembling ‘layered cabbages,’ they could grow to immense lengths – 30 feet was normal, and 100 metres not unknown. But they were made up of prokaryotes, or cells without nuclei, which reproduced simply by splitting. The advent of nuclei was the next advance; as the American biologist Lynn Margulis has pointed out, one bacterium cannibalised another, which became an organelle within another organism, and eventually formed the nucleus.13 A chloroplast is another such organelle, performing photosynthesis within a cell. The development of the nucleus and organelles was a crucial step, allowing more complex structures to be formed. This, it is believed, was followed by the evolution of sex, which seems to have occurred about 2,000 million years ago. Sex occurred because it allowed the possibility of genetic variation, giving a boost to evolution which, at that time, would have speeded up (the fossil records do become gradually more varied then). Cells became larger, more complex – and slimes appeared. Slimes can take on various forms, and can also on occasion move over the surface of other objects. In other words, they are both animate and inanimate, showing the development of rudimentary specialised tissues, behaving in ways faintly resembling animals.

By 700 million years ago, the Ediacara had appeared.14 These, the most primitive form of animal, have been discovered in various parts of the world, from Leicester, England, to the Flinders Mountains in South Australia. They take many exotic forms but in general are characterised by radial symmetry, skin walls only two cells thick, with primitive stomachs and mouths, like primitive jellyfish in appearance, and therefore not unimaginably far from slime. Though they were the first truly multicellular organisms, the Ediacara did not survive to the present day. For some reason they became extinct, despite their multifarious forms, and this may have been ultimately because they lacked a skeleton. The acquisition of a skeleton seems to have been the next important moment in evolution. Palaeontologists can say this with some confidence because, about 500 million years ago, there was a revolution in animal life on Earth. This is what became known as the Cambrian Explosion. Over the course of only 15 million years, animals with shells appeared, and in forms that are familiar even today. These were the trilobites – some with jointed legs and grasping claws, some with rudimentary dorsal nerves, some with early forms of eye, others with features so strange they are hard to describe.15

And so, by the mid- to late 1980s a new evolutionary synthesis began to emerge, one that filled in the order of important developments and provided more accurate dating. Moving forward in geological time, we can leap ahead from the Cambrian Explosion by more than 400 million years, to approximately 65 million years ago. One of the effects of the landing on the Moon, and the subsequent space probes, was that geology went from being a discipline with a single planet to study to one where there was suddenly a much richer base of data. One of the ways that the moon and other planets differ from Earth is that they seem to have far more craters on them, these craters being formed by impacts from asteroids or meteorites: bodies from space.16 This was important in geology because, by the 1970s, the discipline had become used to a slow-moving chronology, measured in millions of years. There was, however, one great exception to this rule, and that became known as the K/T boundary, the boundary between the Cretaceous and Tertiary geological periods, occurring about 65 million years ago, when the fossil records showed a huge and very sudden disruption, the chief feature of which was that many forms of life on Earth suddenly disappeared.17 The most notable of these extinctions was that of the dinosaurs, dominant large animals for about 150 million years before that, and completely absent from the fossil record afterward. Traditionally, geologists and palaeontologists considered that the mass extinctions were due to climate change or a fall in sea level. For many, however, this process would have been too slow – plants and animals would have adjusted, whereas in fact about half the life forms on Earth suddenly disappeared between the Cretaceous and the Tertiary. After the study of so many craters on other moons and planets, some palaeontologists began to consider whether a similarly catastrophic event might not have caused the mass extinctions seen on earth 65 million years ago. In this way there began an amazing scientific detective story that was not fully resolved until 1991.

For a meteorite or asteroid to cause such a devastating impact, it needed to have been a certain minimum size, so the crater it caused ought to have been difficult to overlook.18 No immediate candidate suggested itself, but the first breakthrough came when scientists realised that meteorites have a different chemical structure to that of Earth, in particular with regard to the platinum group of elements. This is because these elements are absorbed by iron, and the earth has a huge iron core. Meteorite dust, on the other hand, would be rich in these elements, such as iridium. Sure enough, by testing rocky outcrops dating from the Cretaceous/Tertiary border, Luis and Walter Alvarez, from the University of California at Berkeley, discovered that iridium was present in concentrations ninety times richer than they should have been if no impact had taken place.19 It was this discovery, in June 1978, that set off this father-and-son (and subsequently daughter-in-law) team on the quest that took them more than a decade. The second breakthrough came in 1981, in Nature, when Jan Smit, a Dutch scientist, reported his discoveries at a K/T boundary site at Caravaca in Spain.20 He described some small round objects, the size of a sand grain, called spherules, which he said were common at these sites and on analysis were shown to have crystals of a ‘feathery’ shape, made of sanidine, a form of potassium feldspar.21 These spherules, it was shown, had developed from earlier structures made of olivine, pyroxene, and calcium-rich feldspar, and their significance lay in the fact that they are characteristic of basalt, the main rock that forms the earth’s crust under the oceans. In other words, the meteorite had slammed into the earth in the ocean and not on land.

This was both good news and bad news. It was good news in that it confirmed there had been a massive impact 65 million years ago. It was bad news in the sense that it led scientists to look for a crater in the oceans, and also to look for evidence of the massive tsunami, or tidal wave, that must have followed. Calculations showed that such a wave would have been a kilometre high as it approached continental shorelines. Both of these searches proved fruitless, and although evidence for an impact began to accumulate throughout the 1980s, with more than 100 areas located that showed iridium anomalies, as they were called, the actual site of the impact still remained elusive. It was not until 1988, when Alan Hildebrand, a Canadian attached to the University of Arizona, first began studying the Brazos River in Texas, that the decade-long search moved into its final stage.22 It had been known for some time that in one place near Waco the Brazos passes over some rapids associated with a hard sandy bed, and this bed, it was recognised, was the remnant of a tsunami inundation. Hildebrand looked hard at Brazos and then went in search of evidence that would link it, in a circular fashion, with other features in the area. By examining maps, and gravity anomalies, he finally found a circular structure, which might be an impact crater, on the floor of the Caribbean, north of Colombia, but also extending into the Yucatán Peninsula in Mexico. Other palaeontologists were sceptical at first, but when Hildebrand brought in help from geologists more familiar with Yucatán, they soon confirmed the area as the impact site. The reason everyone had been so confused was that the crater – known as Chicxulub – was buried under more recent rocks.23 When Hildebrand and his colleagues published their paper in 1991, it caused a sensation, at least to geologists and palaeontologists, who now had to revise their whole attitude: catastrophic events could have an impact on evolution.24

The discovery of Chicxulub produced other surprises. First, it turned out that the crater was to an extent responsible for the distribution of cenotes, small, spring-fed lakes that provided the fresh water that made the Mayan civilisation possible.25 Second, three other mass extinctions are now recognised by palaeontologists, occurring at 365, 250, and 205 million years ago. The disappearance of the dinosaurs also proved to have had a liberating effect on mammals. Until the K/T boundary, mammals were small creatures. This may have helped their survival after the impact – because they were so numerous – but in any event the larger mammals did not emerge until after the K/T, and in the absence of competition from Tyrannosaurus rex, Triceratops, and their brothers and sisters. There would probably have been no humans had the K/T meteorite not collided with Earth.

So far as the origins of humanity were concerned, the 1980s provided one or two crucial excavations, but the period was really a golden age of interpretation and analysis rather than of discovery.

‘Turkana Boy,’ discovered by the Leakeys near Kenya’s Lake Turkana in August 1984, was much taller than people expected and quite slender, the first hominid to approach modern man in his dimensions.26 He had a narrow spinal canal and a thorax that tapered upward, which suggested to anatomists that Turkana Boy had only limited nerve signals being sent to the thorax, giving him less command of respiration than would have been needed if he were to speak as we do. In other words, Turkana Boy had no language. At the same time the tapered thorax meant that his arms would be closer together, making it easier to hang in trees. Assigning him to Homo erectus, the Leakeys dated Turkana Boy to 1.6 million years ago. Two years later their archrival Don Johanson discovered a skeleton at Olduvai, attributed to Homo habilis and only 200,000 or so years older. This was very different – short and squat with long arms very like those of an ape.27 The idea that more than one hominid type was alive at the same time around 2 million years ago was not accepted by all palaeontologists, but it did seem plausible that this was the time when the change occurred that caused hominids to leave the forest. Elisabeth Vrba, from Yale, argued that around 2.5 million years ago other changes induced evolutionary developments.28 For instance, polar glaciation reduced the temperature of the earth, lowering sea levels and making the climate more arid, reducing vegetation. This was supported by the observation that fossils of forest antelopes become rare at this time, to be replaced by a variety that grazed on dry, open savannahs.29 Stone tools appeared around 2.5 million years ago, suggesting that hominids left the forests between, say, 2.5 and 1.5 million years ago, growing taller and more graceful in the process, and using primitive tools. More ‘prepared’ tools are seen at about 200,000 years ago, roughly the time when the Neanderthals appeared. Opinions on them changed, too. We now know that their brains were as large as ours, though ‘behind’ the face rather than ‘above’ it. They appeared to bury their dead, decorate their bodies with ochre, and support disabled members of their communities.30 In other words, they were not the savages the Victorians imagined, and they coexisted with Homo sapiens from about 50,000 to 28,000 years ago.31

These and other finds made between 1975 and 1995, consolidated in Ian Tattersall’s compilation of fossils, suggested the following revised chronology for hominid evolution:

4–3 million years ago: bipedalism
2.5 million years ago: early tool-using
1.5 million years ago: fire (for cooking food, which implies hunting)
1 million years ago: emigration of hominids from Africa
200,000 years ago: more refined tools; Neanderthal Man appears
50,000–100,000 years ago: Homo sapiens appears
28,000 years ago: Neanderthals disappear

And why did the Neanderthals disappear? Many palaeontologists think there can be only one answer: Homo sapiens developed the ability to speak. Language gave modern man such an advantage in the competition for food and other resources that his rival was swiftly wiped out.

Within cells there are organelles known as mitochondria, which contain their own DNA – mitochondrial DNA. These organelles lie outside the nucleus and are in effect cell batteries – they produce a substance known as adenosine triphosphate, or ATP. In January 1987 in Nature, Allan Wilson and Rebecca Cann, from Berkeley, revealed a groundbreaking analysis of mitochondrial DNA used in an archaeological context. The particular property of mitochondrial DNA that interested Wilson and Cann was that it is inherited only through the mother – it therefore does not change as nuclear DNA changes, through mating. Mitochondrial DNA can therefore only change, much more slowly, through mutation. Wilson and Cann had the clever idea of comparing the mitochondrial DNA among people from different populations, on the reasoning that the more different they were, the longer ago they must have diverged from whatever common ancestor we all share. Mutations are known to occur at a fairly constant pace, so the degree of difference should also give an idea of how long ago various groups of people diverged.32
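A minimal sketch of the molecular-clock reasoning follows. The sequences and the substitution rate are invented for illustration; Wilson and Cann’s actual calibration was considerably more involved.

```python
# Toy molecular clock: count the differences between two mitochondrial
# sequences and convert them into a rough divergence time, assuming mutations
# accumulate at a constant rate. All numbers here are illustrative only.

def divergence_years(seq_a, seq_b, subs_per_site_per_myr=0.02):
    """Estimate years since two lineages shared a common ancestor."""
    assert len(seq_a) == len(seq_b)
    diffs = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    p = diffs / len(seq_a)                  # proportion of sites that differ
    # Differences accumulate along BOTH lineages, hence the factor of two.
    myr = p / (2 * subs_per_site_per_myr)
    return myr * 1_000_000

seq_x = "ACGTACGTACGTACGTACGT"
seq_y = "ACGTACGAACGTACCTACGT"             # 2 differences out of 20 sites
print(f"{divergence_years(seq_x, seq_y):,.0f} years")   # about 2,500,000 years
```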

To begin with, Wilson and Cann found that the world is broken down into two major groups – Africans on the one hand, and everyone else on the other. Second, Africans had slightly more mutations than anyone else, confirming the palaeontological results that humanity is older in Africa, very probably began there, and then spread from that continent to populate the rest of the world. Finally, by studying the rate of mutations and working backward, Wilson and Cann were able to show that humanity as we know it is no more than 200,000 years old, again broadly confirming the evidence of the fossils.33

One reason the Wilson and Cann paper attracted the attention it did was that its results agreed well not only with what the palaeontologists were discovering in Africa, but also with recent work in linguistics and archaeology. As long ago as 1786, Sir William Jones, a British judge serving in India at the High Court in Calcutta, discovered that Sanskrit bore an unmistakable resemblance to both Latin and Greek.34 This observation gave him the idea of the ‘mother tongue,’ the notion that there was once, many years ago, a single language from which all other languages are derived. Beginning in 1956, Joseph Greenberg re-examined Sir William Jones’s hypothesis as applied to the Americas. In 1987 he completed a massive study of native American languages, from southern South America to the Eskimos in the north, published as Language in the Americas, which concluded that, at base, the American languages could be divided into three.35 The first and earliest was ‘Amerind’, which covers South America and the southern states of the US, and shows much more variation than the other, northern languages, suggesting that it is much older. The second group was Na-dene, and the third Aleut-Eskimo, covering Canada and Alaska. Na-dene is more varied than Aleut-Eskimo, all of which, says Greenberg, points to three migrations into America, by groups speaking three different languages. He believes, on the basis of ‘mutations’ in words, that Amerind speakers arrived on the continent before 11,000 years ago, Na-denes around 9,000 years ago, and that the Aleuts and Eskimos diverged about 4,000 years ago.36

Greenberg’s conclusions are highly controversial but agree quite well with evidence from dental studies and surveys of genetic variation, in particular the highly original work of Professor Luca Cavalli-Sforza of Stanford University. In a series of books – Cultural Transmission and Evolution (1981), African Pygmies (1986), The Great Human Diasporas (1993), and History and Geography of Human Genes (1994) – Cavalli-Sforza and his colleagues have examined the variability of both blood, especially the rhesus factor, and genes around the world. This has led to fairly good agreement on the dates when early humans spread out across the globe. It has also led to a number of extraordinary possibilities in our longue durée history. For example, it seems that the Na-dene, Sino-Tibetan, Caucasian and Basque languages may be related in a very primitive way, and once belonged to a superfamily that was broken up by other peoples, shunting this superfamily into backwaters, and expelling Na-dene speakers into the Americas. The evidence also shows great antiquity for Basque speakers, whose language and blood are quite different from those around them. Cavalli-Sforza notes the contiguity between the Basque nation and the early sites of cave art in Europe, and wonders whether this is evidence for an ancient people who recorded their hunter-gatherer techniques on cave walls and resisted the spread of farming peoples from the Middle East.37

Finally, Cavalli-Sforza attempted to answer two of the most fascinating questions of all – when did language first appear, and was there ever a single ancestral language, a true mother tongue? We saw earlier that some palaeontologists believe that the Neanderthals died out about 28,000 years ago because they did not have language. Against that, Cavalli-Sforza points out that the region in our brains responsible for language lies behind the eye, on the left side, making the cranium slightly asymmetrical. This asymmetry is absent in apes but present in skulls of Homo habilis dated to 2 million years ago. Furthermore, our brain case ceased to grow about 300,000 years ago, and so on this basis it seems that language might be older than many palaeontologists think.38 On the other hand, studies of the way languages change over time (a rate that is known, roughly) point back to between 20,000 and 40,000 years ago, when the main superfamilies split. This discrepancy has not been resolved.
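The dating implied by ‘a rate that is known, roughly’ usually rests on some form of glottochronology. The classic Swadesh formula is shown below purely as an illustration; the 86 percent retention rate per millennium and the 70 percent cognate share are conventional textbook values, not figures taken from Cavalli-Sforza.

```latex
% t = time depth in millennia, c = proportion of shared core-vocabulary cognates,
% r = proportion of core vocabulary retained per millennium (about 0.86).
t \;=\; \frac{\ln c}{2\ln r}, \qquad\text{e.g.}\quad
t \;=\; \frac{\ln 0.70}{2\ln 0.86} \;\approx\; 1.2 \text{ millennia}.
```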

Regarding the mother tongue, Cavalli-Sforza relies on Greenberg, who claims that there is at least one word that seems to be common to all languages. This is the root word tik.

Family or Language     Forms            Meaning
Nilo-Saharan           tok-tek-dik      one
Caucasian              titi, tito       finger, single
Uralic                 ik-odik-itik     one
Indo-European          dik-deik         to indicate/point
Japanese               te               hand
Eskimo                 tik              index finger
Sino-Tibetan           tik              one
Austroasiatic          ti               hand, arm
Indo-Pacific           tong-tang-ten    finger, hand, arm
Na-dene                tek-tiki-tak     one
Amerind                tik              finger39

For the Indo-European languages, those stretching from western Europe to India, Greenberg’s approach has been taken further by Colin Renfrew, the Cambridge archaeologist who rationalised the effects of the carbon-14 revolution on dating. Renfrew’s aim, in Archaeology and Language (1987), was not simply to examine language origins but to compare those findings with others from archaeology, to see if a consistent picture could be arrived at and, most controversially, to identify the earliest homeland of the Indo-European peoples, to see what light this threw on human development overall. After introducing the idea of regular sound shifts, according to nation–

‘milk’: French lait, Italian latte, Spanish leche
‘fact’: French fait, Italian fatto, Spanish hecho

Renfrew went on to study the rates of change of language and to consider what the earliest vocabulary might have been. Comparing variations in the use of key words (like eye, rain, and dry), together with an analysis of early pottery and a knowledge of farming methods, Renfrew examined the spread of farming through Europe and adjacent areas. He concluded that the central homeland for the Indo-Europeans, the place where the mother tongue, ‘proto-Indo-European,’ was located, was in central and eastern Anatolia about 6500 BC and that the distribution of this language was associated with the spread of farming.40

The surprising thing about all this is the measure of agreement between archaeology, linguistics and genetics. The spread of peoples around the globe, the demise of the Neanderthals, the arrival of humanity in the Americas, the rise of language, its spread associated with art and with agriculture, its link to pottery, and the different tongues we see about us today all fall into a particular order, the beginnings of the last chapter in the evolutionary synthesis.

Against such a strong research/empirical background, it is not surprising that theoretical work on evolution should flourish. What is perhaps surprising is that writing about biology in the 1980s and 1990s became a literary phenomenon. A clutch of authors – biologists, palaeontologists, philosophers – wrote dozens of books that became best-sellers and filled the shelves of good bookshops, marking a definite change in taste, matched only by an equivalent development in physics and mathematics, which we shall come to in a later chapter. In alphabetical order the main authors in this renaissance of Darwinian studies were: Richard Dawkins, Daniel Dennett, Niles Eldredge, Stephen Jay Gould, Richard Lewontin, Steven Pinker, Steven Rose, John Maynard Smith, and E. O. Wilson. The group was known collectively as the neo-Darwinists, and they aroused enthusiasm and hostility in equal measure: their books sold well, but Dawkins at one point, in 1998, was described as ‘the most dangerous man in Britain.’41 The message of the neo-Darwinists was twofold. One view was represented by Wilson, Dawkins, Maynard Smith and Dennett, the other by Eldredge, Gould, Lewontin and Rose. Wilson himself produced two kinds of books. There were, first, as we have seen, Sociobiology (1975), On Human Nature (1978), and Consilience (1998). These books all had in common a somewhat stern neo-Darwinism, centred on Wilson’s conviction that ‘the genes hold culture on a leash.’42 Wilson wanted above all to bridge C. P. Snow’s two cultures, which he believed existed, and to show how science could penetrate human nature so as to explain culture: ‘The essence of the argument, then, is that the brain exists because it promotes the survival and multiplication of the genes that direct its assembly.’43 Wilson believed that biology will eventually be able to explain anthropology, psychology, sociology, and economics, that all these disciplines will become blended in ever closer ways. In On Human Nature he expanded on Sociobiology, with more aspects of human experience that could be explained in adaptive terms. He described, for example, the notion of hypergamy, the practice of females marrying men of equal or greater wealth and status; he pointed to the ways in which the great civilisations around the world, although they were not in touch with each other, developed similar features often in much the same order; he believes that chronic meat shortages may have determined the great religions, in that as early man moved away from game-rich areas, the elites invented religious rules to confine meat-eating to a religious caste; and he quotes the example of inmates in the Federal Reformatory for Women, Alderson, West Virginia, where it has been observed that the females form themselves into family-like units centred on a sexually active pair who call themselves ‘husband’ and ‘wife,’ with other women being added, known as ‘brothers’ and ‘sisters,’ and older inmates serving as ‘aunts’ and ‘uncles.’ He points out that male prisoners never organise in this way.44 Wilson’s chief aim all the way through his work was to show how the cultural and even ethical life of humanity can be explained biologically, genetically, and though his tone was cheerful and optimistic, it was uncompromising.

In the second strand of his work, particularly in Biophilia: The Human Bond with Other Species (1984), Wilson’s aim was to show that humankind’s bond with nature can help explain and enrich our lives as no other approach can.45 Besides arguing that biophilia may explain aesthetics (why we like savannah-type landscapes, rather than urban ones), why scientific understanding of animal life may enrich the reading of nature poems, why all peoples have learned to fear the snake (because it is dangerous; no need to invoke Freud), he takes the reader on his own journeys of scientific discovery, to show not only how intellectually exciting it may be but how it may offer meaning (a partial meaning admittedly) for life. He shows us, for example, how he demonstrated that the size of an island is related to the number of species it can bear, and how this deepens our understanding of conservation. Biophilia struck a chord, generating much research, which was all brought together ten years later at a special conference convened at the Woods Hole Oceanographic Institution in Massachusetts in August 1992. Here, more systematic studies were reported which showed, for example, that, given a choice, people prefer unspectacular countryside landscapes in which to live; one prison study was reported that showed that prisoners whose cells faced fields reported sick less often than those whose cells faced the parade ground; a list of biota that produce psychosomatic illness (flies, lizards, vultures) was prepared, and these were found to be associated with food taboos. The symposium also examined James Lovelock’s Gaia theory, which had been published in 1979 and argued that the whole of the earth’s biota is one interregulated system, more akin to physiology than to physics (i.e., that the gases of the atmosphere, the salinity and alkalinity of the oceans, are regulated to keep the maximum number of things alive, like a gigantic organism). Biophilia was an extension of sociobiology, a less iconoclastic version which didn’t catch on to the same extent.46

Second only to Wilson in the passion with which he advances a neo-Darwinian view of the world is Richard Dawkins. Dawkins won the Royal Society of Literature Award in 1987 for his 1986 book The Blind Watchmaker, and in 1995 he became Charles Simonyi Professor of the Public Understanding of Science at Oxford. His other books were The Extended Phenotype (1982), River out of Eden (1995), and Climbing Mount Improbable (1996), with The Selfish Gene being reissued in 1989. There is a relentless quality about The Blind Watchmaker, as there is about many of Dawkins’s books, a reflection of his desire once and for all to dispel every fuzzy notion about evolution.47 One of the arguments of the antievolutionists is to say: if evolution is a fact, why aren’t there intermediate forms of life, and how did complex organisms, like eyes or wings, form without intermediate organisms also occurring? Surely only a designer, like God, could arrange all this? And so Dawkins spends time demolishing such objections. Take wings: ‘There are animals alive today that beautifully illustrate every stage in the continuum. There are frogs that glide with big webs between their toes, tree-snakes with flattened bodies that catch the air, lizards with flaps along their bodies, and several different kinds of mammals that glide with membranes stretched between their limbs, showing us the kind of way bats must have got their start. Contrary to the Creationist literature, not only are animals with “half a wing” common, so are animals with a quarter of a wing, three quarters of a wing, and so on.’48 Dawkins’s second aim is to emphasise that natural selection really does happen, and his technique here is to quote some telling examples, one of the best being the cicadas, whose life cycles are always prime numbers (thirteen or seventeen years), the point being that such locusts reach maturity at an unpredictable time, meaning that the species they feed on can never adjust to their arrival – it is mathematically random! But Dawkins’s main original contribution was his notion of ‘memes,’ a neologism to describe the cultural equivalent of genes.49 Dawkins argued that as a result of human cognitive evolution, such things as ideas, books, tunes, and cultural practices come to resemble genes in that the more successful – those that help their possessors thrive – live on, and so will ‘reproduce’ and be used by later generations.
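The prime-number point about cicada life cycles can be made concrete with a few lines of arithmetic; the cycle lengths of the other species are arbitrary choices for the example, a sketch of the numerical point rather than of any particular ecological model.

```python
# A brood emerging every N years coincides with a species on an M-year cycle
# once every lcm(N, M) years; a prime N keeps that interval long for most small M.
from math import lcm

other_cycles = [2, 3, 4, 5, 6]
for cicada_cycle in (12, 13, 17):              # composite versus prime cycles
    coincidences = {m: lcm(cicada_cycle, m) for m in other_cycles}
    print(cicada_cycle, coincidences)

# A 12-year brood meets the 2-, 3-, 4- and 6-year cyclers every 12 years;
# 13- and 17-year broods keep every short cycler waiting 26 to 102 years.
```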

Daniel Dennett, a philosopher from Tufts University in Medford, near Boston, is another uncompromising neo-Darwinist. In Darwin’s Dangerous Idea: Evolution and the Meanings of Life (1995), Dennett states baldly, ‘If I were to give an award for the single best idea anyone has ever had, I’d give it to Darwin, ahead of Newton and Einstein and everyone else. In a single stroke, the idea of evolution by natural selection unifies the realm of life, meaning, and purpose with the realm of space, time, cause and effect, mechanism and physical law.’50 Like Wilson and Dawkins, Dennett is concerned to drum evolutionary theory’s opponents out of town: ‘Darwin’s dangerous idea is reductionism incarnate.’51 His book is an attempt to explain how life, intelligence, language, art, and ultimately consciousness are, in essence, no more than ‘engineering problems.’ We haven’t got there yet, when it comes to explaining all the small steps that have been taken in the course of natural selection, but Dennett has no doubt we will some day. Perhaps the heart of his book (one heart anyway; it is very rich) is an examination of the ideas of Stuart Kauffman in his 1993 book The Origins of Order: Self-Organisation and Selection in Evolution.52 Kauffman’s idea was an attack on natural selection insofar as he argued that the similarity between organisms did not necessarily imply descent; it could just as easily be due to the fact that there are only a small number of design solutions to any problem, and that these ‘inherent’ solutions shape the organisms.53 Dennett concedes that Kauffman has a point, far more than any others who offer rival theories to natural selection, but he argues that these ‘constraints over design’ in fact only add to the possibilities in evolution, using poetry as an analogy. When poetry is written to rhyme, he points out, the poet finds many more juxtapositions than he or she would have found had he or she just been writing a shopping list. In other words, order may begin as a constraint, but it can end up by being liberating. Dennett’s other main aim, beyond emphasising life as a physical-engineering phenomenon, shaped by natural selection, is to come to grips with what is at the moment the single most important mystery still outstanding in the biological sciences – consciousness. This will be discussed more fully later in this chapter.

John Maynard Smith, emeritus professor of biology at the University of Sussex, is the doyen of the neo-Darwinists, publishing his first book as long ago as 1956. Less of a populariser than the others, he is one of the most original thinkers and uncompromising theorists. In 1995, in conjunction with Eörs Szathmáry, he published The Major Transitions in Evolution, where the chapter titles neatly summarise the bones of the argument:

Chemical evolution

The evolution of templates

The origin of translation and the genetic code

The origin of protocells

The origin of eukaryotes

The origin of sex and the nature of species

Symbiosis

The development of spatial patterns

The origin of societies

The origin of language54

In the same year that Maynard Smith and Szathmáry were putting together their book, Steven Pinker, professor of brain and cognitive sciences at MIT, released The Language Instinct. Maynard Smith’s book, and Pinker’s, finally put to rest the Skinner versus Chomsky debate, both concluding that the greater part of language ability is inherited.55 Mainly this was done by reference to the effects on language ability of various forms of brain injury, the development of language in children, and its relation to known maturational changes in the child’s nervous system, the descent of later languages from earlier ones, the similarity in the skulls of various primates, not to mention certain areas of chimpanzee brains that equate to human brains and seem to account for the reception of warning sounds and other calls from fellow chimpanzees. Pinker also presented evidence of language disabilities that have run in families (particularly dyslexia), and a new technique, called positron emission tomography, in which a volunteer inhales a mildly radioactive gas and then puts his head inside a ring of gamma ray detectors. Computers can then calculate which parts of the brain ‘light up.’56 There seems no doubt now that language is an instinct, or at least has a strong genetic component. In fact, the evidence is so strong, one wonders why it was ever doubted.

*

Set alongside – and sometimes against – Wilson, Dawkins, Dennett, and Co. is a second set of biologists who agree with them about most things, but disagree on a handful of fundamental topics. This second group includes Stephen Jay Gould and Richard Lewontin of Harvard, Niles Eldredge at the American Museum of Natural History in New York, and Steven Rose at the Open University in England.

Pride of place in this group must go to Gould. A prolific author, Gould specialises in books with ebullient, almost avuncular titles: Ever Since Darwin (1977), The Panda’s Thumb (1980), The Mismeasure of Man (1981), Hen’s Teeth and Horse’s Toes (1983), The Flamingo’s Smile (1985), Wonderful Life (1989), Bully for Brontosaurus (1991), Eight Little Piggies (1993), and Leonardo’s Mountain of Clams and the Diet of Worms (1999). There are four areas where Gould and his colleagues differ from Dawkins, Dennett, and the others. The first concerns a concept known as ‘punctuated equilibrium.’ This idea dates from 1972, when Eldredge and Gould published a paper in a book on palaeontology entitled ‘Punctuated Equilibria: An Alternative to Phyletic Gradualism.’57 The thrust of this was that an examination of fossils showed that whereas all orthodox Darwinians tended to see evolutionary change as gradual, in fact there were in the past long periods of stasis, where nothing happened, followed by sudden and rapid periods of dramatic change. This, they said, helped account for why there weren’t intermediate forms, and also explained speciation, how new species arise – suddenly, when the habitat changes dramatically. For a while, the theory also gained adherents as a metaphor for sudden revolution as a form of social change (Gould’s father had been a well-known Marxist). However, after nearly thirty years, punctuated equilibrium has lost a lot of its force. ‘Sudden’ in geological terms is not really sudden in human terms – it involves hundreds of thousands if not a few million years. The rate of evolution can be expected to vary from time to time.

The second area of disagreement arose in 1979, in a paper by Gould and Lewontin in the Proceedings of the Royal Society, entitled ‘The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme.’58 The central point of this paper, which explains the strange architectural reference, is that a spandrel, the tapering triangular space formed by the intersection of two rounded arches at a right angle, isn’t really a design feature. Gould and Lewontin had seen these features at San Marco in Venice and concluded that they were inevitable by-products of other, more important features – i.e., the arches. Though harmonious, they were not really ‘adaptations’ to the structure, but simply what was left when the main design was put in place. Gould and Lewontin thought there were parallels to be drawn with regard to biology, that not all features seen in nature were direct adaptations – that, they said, was Panglossian. Instead, there were biological spandrels that were also by-products. As with punctuated equilibrium, Gould and Lewontin thought that the spandrel approach was a radical revision of Darwinism. A claim was even made for language being a biological spandrel, an emergent phenomenon that came about by accident, in the course of the brain’s development in other directions. This was too much, and too important, to be left alone by Dawkins, Dennett, and others. It was shown that even in architecture a spandrel isn’t inevitable – there are other ways of treating what happens where two arches meet at right angles – and again, like punctuated equilibrium, the idea of language as a spandrel, a by-product of some other set of adaptations, has not really stood the test of time.

The third area where Gould differed from his colleagues came in 1989 in his book Wonderful Life.59 This was a reexamination and retelling of the story of the Burgess Shale, a fossil-rich rock formation in British Columbia, Canada, which has been well known to geologists and palaeontologists since the turn of the century. The lesson that Gould drew from these studies was that an explosion of life forms occurred in the Cambrian period, ‘far surpassing in variety of bodily forms today’s entire animal kingdom. Most of these forms were wiped out in mass extinctions; but one of the survivors was the ancestor of the vertebrates, and of the human race.’ Gould went on to say that if the ‘tape’ of evolution were to be run again, it need not turn out in the same way – a different set of survivors would be here now. This was a notable heresy, and once again the prevailing scientific opinion is now against Gould. As we saw in the section on Dennett and Kauffman, only a certain number of design solutions exist to any problem, and the general feeling now is that, if one could run evolution all over again, something very like humans would result. Even Gould’s account of the Burgess Shale has been attacked. In The Crucible of Creation, published in 1998, Simon Conway Morris, part of the palaeontological group from Cambridge that has spent decades studying the Shale, concluded that in fact the vast army of trilobites does fit with accepted notions of evolution; comparisons can be made with living animal families, although we may have made mistakes with certain groupings.60

One might think that the repeated rebuffs which Gould received to his attempts to reshape classical Darwinism would have dampened his enthusiasm. Not a bit of it. And in any case, the fourth area where he, Lewontin, and others have differed from their neo-Darwinist colleagues has had a somewhat different history. Between 1981 and 1991, Gould and Lewontin published three books that challenged in general the way ‘the doctrine of DNA,’ as Lewontin put it, had been used, again to quote Lewontin, to ‘justify inequalities within and between societies and to claim that those inequalities can never be changed.’ In The Mismeasure of Man (1981), Gould looked at the history of the controversy over IQ, what it means, and how it is related to class and race.61 In 1984 Lewontin and two others, Steven Rose and Leon J. Kamin, published Not in Our Genes: Biology, Ideology and Human Nature, in which they rooted much biology in a bourgeois political mentality of the nineteenth century, arguing that the quantification of such things as the IQ is crude and that attempts to describe mental illness only as a biochemical illness avoid certain politically inconvenient facts.62 Lewontin took this further in 1991 in The Doctrine of DNA, where he argued that DNA fits perfectly into the prevailing ideology; that the link between cause and effect is simple, mainly one on one; that for the present DNA research holds out no prospect of a cure for the major illnesses that affect mankind – for example, cancer, heart disease and stroke – and that the whole edifice is more designed to reward scientists than help science, or patients. Most subversive of all, he writes, ‘It has been clear since the first discoveries in molecular biology that “genetic engineering,” the creation to order of genetically altered organisms, has an immense possibility for producing private profit…. No prominent molecular biologist of my acquaintance is without a financial stake in the biotechnology business.’63 He believes that human nature, as described by the evolutionary biologists such as E. O. Wilson, is a ‘made-up story,’ designed to fit the theories the theorists already hold.

Given the approach of Gould and Lewontin in particular, it comes as no surprise to find them fully embroiled in yet another (but very familiar) biological controversy, which erupted in 1994. This was the publication of Richard J. Herrnstein and Charles Murray’s The Bell Curve: Intelligence and Class Structure in American Life.64

Ten years in the making, the main argument of The Bell Curve was twofold. In some places, it is straight out of Michael Young’s Rise of the Meritocracy, though Herrnstein and Murray are no satirists but in deadly earnest. In the twentieth century, they say, as more and more colleges have opened up to the general population, as IQ tests have improved and been shown to be better predictors of job performance than other indicators (such as college grades, interviews, or biographical data), and as the social environment has become more uniform for most of the population, a ‘cognitive elite’ has begun to emerge in society. Three phenomena are the result of this sorting process, and mean that it will accelerate in the future: the cognitive elite is getting richer, at a time when everybody else is having to struggle to stay even; the elite is increasingly segregated physically from everyone else, especially at work and in the neighbourhoods they inhabit; and the cognitive elite is increasingly likely to intermarry.65 Herrnstein and Murray also analysed afresh the results of the National Longitudinal Study of Youth (NLSY), a database of about 4 million Americans drawn from a population that was born in the 1960s. This enables them to say, for example, that low intelligence is a stronger precursor of poverty than coming from a low socioeconomic status background, that students who drop out of school come almost entirely from the bottom quartile of the IQ distribution (i.e., the lowest 25 percent), that low-IQ people are more likely to divorce early on in married life and to have illegitimate children. They found that low-IQ parents are more likely to be on welfare and to have low-birthweight children. Low-IQ men are more likely to be in prison. Then there was the racial issue. Herrnstein and Murray spend a lot of time prefacing their remarks by saying that a high IQ does not necessarily make someone admirable or the kind to be cherished, and they concede that the racial differences in IQ are diminishing. But, after controlling for education and poverty, they still find that people of Asian stock in America outperform ‘whites,’ who outperform blacks on tests of IQ.66 They also find that recent immigrants to America have a lower IQ score than native-born Americans. And finally, they voice their concerns that the IQ level of America is declining. This is due partly, they say, to a dysgenic trend – people of lower IQ are having more children – but that is not the only reason. In practice, the American schooling system has been ‘dumbed down’ to meet the needs of average and below-average students, which means that the performance of the average students has not, contrary to popular opinion, been adversely affected. It is the brighter students who have been most affected, their SAT (Scholastic Aptitude Test) scores dropping by 41 percent between 1972 and 1993. They also blame parents, who seem not to want their children to work harder anymore, and television, which has replaced newsprint as a source of information, and the telephone, which has replaced letter writing as a form of self-expression.67 Further, they express their view that affirmative-action programs have not helped disadvantaged people, indeed have made their situation worse.
But it is the emergence of the cognitive elite, this ‘invisible migration,’ the ‘secession of the successful,’ and the blending of the interests of the affluent with the cognitive elite that Herrnstein and Murray see as the most important, and pessimistic, of their findings. This elite, they say, will fear the ‘underclass’ that is emerging, and will in effect control it with ‘kindness’ (which is basically what Murray’s rival, J. K. Galbraith had said in The Culture of Contentment). They will provide welfare for the underclass so long as it is out of sight and out of mind. They hint, though, that such measures are likely to fail: ‘racism will re-emerge in a new and more virulent form.’68

Herrnstein and Murray are traditionalists. They would like to see a return to old-fashioned families, small communities, and the familiar forms of education, where pupils are taught history, literature, arts, ethics, and the sciences in such a way as to be able to weigh, analyse, and evaluate arguments according to exacting standards.69 For them, the IQ test not only works – it is a watershed in human society. Allied to the politics of democracy and the homogenising successes of modern capitalism, the IQ aids what R. A. Fisher called runaway evolution, promoting the rapid layering of society, divided according to IQ – which, of course, is mainly inherited. We are indeed witnessing the rise of the meritocracy.

The Bell Curve provoked a major controversy on both sides of the Atlantic. This was no surprise. Throughout the century, white people – people on the ‘right’ side of the divide they were describing – have concluded that whole segments of the population were dumb. What sort of reaction did they expect? Many people countered the claims of Herrnstein and Murray, with at least six other books being produced in 1995 or 1996 to examine (and in many cases refute) the arguments of The Bell Curve. Stephen Jay Gould’s The Mismeasure of Man was reissued in 1996 with an extra chapter giving his response to The Bell Curve. His main point was that this was a debate that needed technical expertise. Too many of the reviewers who had joined the debate (and the book provoked nearly two hundred reviews or associated articles) did not feel themselves competent to judge the statistics, for example. Gould did, and dismissed them. In particular, he attacked Herrnstein and Murray’s habit of giving the form of a statistical association but not its strength. When this was examined, he said, the links they had found always explained less than 20 percent of the variance, ‘usually less than 10 percent and often less than 5 percent. What this means in English is that you cannot predict what a given person will do from his IQ score.’70 This was the conclusion Christopher Jencks had arrived at thirty years before.
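Gould’s point turns on a piece of arithmetic worth spelling out: for a simple linear association, the share of variance explained is the square of the correlation coefficient, so even a seemingly respectable correlation leaves most individual variation unexplained. The sketch below, in Python, uses an invented correlation of 0.3 that is not drawn from The Bell Curve or from Gould; it is an illustration of the general principle only.

import numpy as np

# Toy illustration of 'variance explained': for a linear association with
# correlation r, the predictor accounts for r**2 of the variance in the outcome.
# The correlation and sample size below are invented purely for illustration.
rng = np.random.default_rng(42)

r = 0.3                                   # hypothetical score-outcome correlation
n = 100_000
z_score = rng.standard_normal(n)          # standardised test scores
noise = rng.standard_normal(n)
outcome = r * z_score + np.sqrt(1 - r**2) * noise   # outcome correlated r with the score

observed_r = np.corrcoef(z_score, outcome)[0, 1]
print(f"correlation ~ {observed_r:.2f}")            # about 0.30
print(f"variance explained ~ {observed_r**2:.2f}")  # about 0.09, i.e. under 10 percent

On these made-up figures, knowing someone’s score shifts the prediction only slightly; more than 90 percent of the variation in the outcome lies elsewhere, which is the substance of Gould’s complaint.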

By the time The Bell Curve rumpus erupted, the infrastructure was in place for a biological project capable of generating controversy on an even bigger scale. This was the scramble to map the human genome, to draw up a plan to describe exactly all the nucleotides that constitute man’s inheritance and that, in time, will offer at least the possibility of interfering in our genetic makeup.

Interest in this idea grew throughout the 1980s. Indeed, it could be said that the Human Genome Project (HGP), as it came to be called, had been simmering since Victor McKusick, a doctor at Johns Hopkins in Baltimore, began collecting a comprehensive record, ‘Mendelian Inheritance in Man,’ a list of all known genetic diseases, first published in 1966.71 But then, as research progressed, first one scientist and then another began to see sense in mapping the entire genome. On 7 March 1986, in Science, Renato Dulbecco, Nobel Prize-winning president of the Salk Institute, startled his colleagues by asserting that the war on cancer would be over sooner if geneticists were to sequence the human genome.72 Various U.S. government departments, including the Department of Energy and the National Institutes of Health, became interested at this point, as did scientists in Italy, the United Kingdom, Russia, Japan, and France (in roughly that order; Germany lagged behind, owing to the controversial role biology had played in Nazi times). A major conference, organised by the Howard Hughes Medical Institute, was held in Washington in July 1986 to bring together the various interested parties, and this had two effects. In February 1988 the U.S. National Research Council issued its report, Mapping and Sequencing the Human Genome, which recommended a concerted research program with a budget of $200 million a year.73 James Watson, appropriately enough, was appointed associate director of the NIH later that year, with special responsibility for human genome research. And in April 1988, HUGO, the Human Genome Organisation, was founded. This was a consortium of international scientists set up to spread the load of research and to make sure there was as little duplication as possible, the aim being to finalise the mapping as early as possible in the twenty-first century. The experience of the Human Genome Project has not been especially happy. In April 1992 James Watson resigned his position over an application by certain NIH scientists to patent their sequences. Watson, like many others, felt that the human genome should belong to everyone.74

The genome project came on stream in 1988–89. This was precisely the time that communism was collapsing in the Soviet Union and the Berlin Wall was being dismantled. A new era was beginning politically, and so too in the intellectual field. For HUGO was not the only major innovation introduced in 1988. That year also saw the birth of the Internet.

Whereas James Watson took a leading role in the genome project, his former colleague and co-discoverer of the double helix, Francis Crick, took a similar position in what is perhaps the hottest topic in biology as we enter the twenty-first century: consciousness studies. In 1994 Crick published The Astonishing Hypothesis, which advocated a research assault on this final mystery/problem.75 Consciousness studies naturally overlap with neurological studies, where there have been many advances in identifying different structures of the brain, such as language centres, and where MRI, magnetic resonance imaging, can show which areas are being used when people are merely thinking about the meaning of words. But the study of consciousness itself is still as much a matter for philosophers as biologists. As John Maddox put it in his 1998 book, What Remains to be Discovered, ‘No amount of introspection can enable a person to discover just which set of neurons in which part of his or her head is executing some thought-process. Such information seems to be hidden from the human user.’76

It should be said that some people think there is nothing to explain as regards consciousness. They believe it is an ‘emergent property’ that automatically arises when you put a ‘bag of neurons’ together. Others think this view absurd. A good explanation of an emergent property is given by John Searle, Mills Professor of Philosophy at the University of California, Berkeley, regarding the liquidity of water. The behaviour of the H2O molecules explains liquidity, but the individual molecules are not liquid. At the moment, the problem with consciousness is that our understanding is so rudimentary that we don’t even know how to talk about it – even after the ‘Decade of the Brain,’ which was adopted by the U.S. Congress on 1 January 1990.77 This inaugurated many innovations and meetings that underlined the new fashion for consciousness studies. For example, the first international symposium on the science of consciousness was held at the University of Arizona at Tucson in April 1994, attended by no fewer than a thousand delegates.78 In that same year the first issue of the Journal of Consciousness Studies was published, with a bibliography of more than 1,000 recent articles. At the same time a whole raft of books about consciousness appeared, of which the most important were: Neural Darwinism: The Theory of Neuronal Group Selection, by Gerald Edelman (1987), The Remembered Present: A Biological Theory of Consciousness, by Edelman (1989), The Emperor’s New Mind, by Roger Penrose (1989), The Problem of Consciousness, by Colin McGinn (1991), Consciousness Explained, by Daniel Dennett (1991), The Rediscovery of the Mind, by John Searle (1992), Bright Air, Brilliant Fire, by Edelman (1992), The Astonishing Hypothesis, by Francis Crick (1994), Shadows of the Mind: A Search for the Missing Science of Consciousness, by Roger Penrose (1994), and The Conscious Mind: In Search of a Fundamental Theory, by David Chalmers (1996). Other journals on consciousness were also started, and there were two international symposia on the subject at Jesus College, Cambridge, published as Nature’s Imagination (1994) and Consciousness and Human Identity (1998), both edited by John Cornwell.

Thus consciousness has been very much the flavour of the decade, and it is fair to say that those involved in the subject fall into four camps. There are those, like the British philosopher Colin McGinn, who argue that consciousness is resistant to explanation in principle and for all time.79 Philosophers we have met before – such as Thomas Nagel and Hilary Putnam – also add that at present (and maybe for all time) science cannot account for qualia, the first-person phenomenal experience that we understand as consciousness. Then there are two types of reductionist. Those like Daniel Dennett, who claim not only that consciousness can be explained by science but that the construction of an artificially intelligent machine that will be conscious is not far off, may be called the ‘hard’ reductionists.80 The soft reductionists, typified by John Searle, believe that consciousness does depend on the physical properties of the brain but think we are nowhere near solving just how these processes work, and dismiss the very idea that machines will ever be conscious.81 Finally, there are those like Roger Penrose who believe that a new kind of dualism is needed, that in effect a whole new set of physical laws may apply inside the brain, which would account for consciousness.82 Penrose’s particular contribution is the idea that quantum effects operate within tiny structures, known as microtubules, inside the nerve cells of the brain to produce – in some as yet unspecified way – the phenomena we recognise as consciousness.83 Penrose actually thinks that we live in three worlds – the physical, the mental, and the mathematical: ‘The physical world grounds the mental world, which in turn grounds the mathematical world and the mathematical world is the ground of the physical world and so on around the circle.’84 Many people who find this tantalising nonetheless don’t feel Penrose has proved anything. His speculation is enticing and original, but it is still speculation.

Instead, it is the two forms of reductionism that in the present climate attract most interest. For people like Dennett, human consciousness and identity arise from the narrative of their lives, and this can be related to specific brain states. For example, there is growing evidence that the ability to ‘apply intentional predicates to other people is a human universal’ and is associated with a specific area of the brain (the orbitofrontal cortex); in certain states of autism, this ability is defective. There is also evidence that the blood supply to the orbitofrontal cortex increases when people ‘process’ intentional verbs as opposed to non-intentional ones, and that damage to this area of the brain can lead to a failure to introspect.85 Suggestive as this is, it is also the case that the microanatomy of the brain varies quite considerably from individual to individual, and that a particular phenomenal experience is represented at several different points in the brain, which clearly require integration. Any ‘deep’ patterns relating experience to brain activity have yet to be discovered, and seem to be a long way off, though this is still the most likely way forward.

A related approach – perhaps to be expected, given other developments in recent years – is to look at the brain and consciousness in a Darwinian light. In what sense is consciousness adaptive? This approach has produced two views – one that the brain was in effect ‘jerry-built’ in evolution to accomplish very many and very different tasks. On this view, there are at base three organs: a reptilian core (the seat of our basic drives), a palaeomammalian layer, which produces such things as affection for offspring, and a neomammalian brain, the seat of reasoning, language, and other ‘higher functions.’86 The second view is that throughout evolution (and throughout our bodies) there have been emergent properties: for example, there is always a biochemical explanation underlying a physiological phenomenon – the sodium/potassium flux across a membrane being also the nerve action potential.87 In this sense, then, consciousness is nothing new in principle, even if, at the moment, we don’t fully understand it.

Studies of nerve action through the animal kingdom have also shown that nerves work by either firing or not firing; intensity is represented by the rate of firing – the more intense the stimulation, the faster the turning on and off of any particular nerve. This of course is very similar to the way computers work, in ‘bits’ of information, where everything is represented by a configuration of either 0s or 1s. The arrival of the concept of parallel processing in computing led the philosopher Daniel Dennett to consider whether an analogous process might happen in the brain between different evolutionary levels, giving rise to consciousness. Again such reasoning, though tantalising, has not gone much further than preliminary exploration. At the moment, no one seems able to think of the next step.
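The firing-or-not-firing idea can be made concrete with a toy sketch in Python. It illustrates rate coding in general, not any real neuron and not Dennett’s proposal; the maximum rate and the stimulus values are invented for illustration only.

# Toy sketch of rate coding: a nerve either fires or it does not,
# and stimulus intensity is conveyed by how often it fires.
# The maximum rate and the stimuli below are invented for illustration.

def firing_rate(intensity: float, max_rate: int = 100) -> int:
    """Map a stimulus intensity in [0, 1] to an all-or-nothing spike rate (spikes per second)."""
    clipped = max(0.0, min(1.0, intensity))
    return round(max_rate * clipped)

if __name__ == "__main__":
    for stimulus in (0.1, 0.5, 0.9):
        print(f"intensity {stimulus:.1f} -> {firing_rate(stimulus)} spikes per second")

Each individual ‘spike’ carries no graded information, just as a single bit does not; it is only the pattern and rate of spikes over time that encodes the strength of the stimulus.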

Francis Crick’s aim has been fulfilled. Consciousness is being investigated as never before. But it would be rash to predict that the new century will bring advances quickly. No less a figure than Noam Chomsky has said, ‘It is quite possible – overwhelmingly probable, one might guess – that we will always learn more about human life and personality from novels than from scientific psychology.’

