The author is grateful to the following publishers for permission to reprint brief extracts: Harvard University Press for four extracts from Nancy Wexler's article in The Code of Codes, edited by D. Kevles and L. Hood (pp. 62-9); Aurum Press for an extract from The Gene Hunters by William Cookson (p. 78); Macmillan Press for extracts from Philosophical Essays by A. J. Ayer (p. 338) and What Remains to Be Discovered by J. Maddox (p. 194); W. H. Freeman for extracts from The Narrow Roads of Gene Land by W. D. Hamilton (p. 131); Oxford University Press for extracts from The Selfish Gene by Richard Dawkins (p. 122) and Fatal Protein by Rosalind Ridley and Harry Baker (p. 285); Weidenfeld and Nicolson for an extract from One Renegade Cell by Robert Weinberg (p. 237). The author has made every effort to obtain permission for all other extracts from published work reprinted in this book.

This book was originally published in Great Britain in 1999 by Fourth Estate Limited.

GENOME. Copyright © 1999 by Matt Ridley. All rights reserved. Printed in the United States of America. No part of this book may be used or reproduced in any manner whatsoever without written permission except in the case of brief quotations embodied in critical articles and reviews. For information address HarperCollins Publishers, Inc., 10 East 53rd Street, New York, NY 10022.

HarperCollins books may be purchased for educational, business, or sales promotional use. For information please write: Special Markets Department, HarperCollins Publishers, Inc., 10 East 53rd Street, New York, NY 10022.

FIRST U.S. EDITION

Library of Congress Cataloging-in-Publication Data

Ridley, Matt.

Genome: the autobiography of a species in 23 chapters/Matt Ridley.

p. cm.

Includes bibliographical references and index.

ISBN 0-06-019497-9

1. Human genome Popular works. 2. Human genetics Popular works. I. Title.

QH431.R475 2000

599.93'5—dc21 99-40933

00 01 02 03 04 RRD 10 9 8 7 6 5 4 3 2 1


C O N T E N T S

Acknowledgements 1

Preface 3

1 Life 11

2 Species 23

3 History 38

4 Fate 54

5 Environment 65

6 Intelligence 76

7 Instinct 91

X and Y Conflict 107

8 Self-interest 122

9 Disease 136


10 Stress 147

11 Personality 161

12 Self-Assembly 173

13 Pre-History 185

14 Immortality 195

15 Sex 206

16 Memory 219

17 Death 231

18 Cures 243

19 Prevention 258

20 Politics 271

21 Eugenics 286

22 Free Will 301

Bibliography and Notes 314

Index 337


A L S O B Y M A T T R I D L E Y

The Red Queen: Sex and the Evolution of Human Nature

The Origins of Virtue:

Human Instincts and the Evolution of Cooperation


A C K N O W L E D G E M E N T S

In writing this book, I have disturbed, interrupted, interrogated, emailed and corresponded with a great variety of people, yet I have never once met anything but patience and politeness. I cannot thank everybody by name, but I would like to record my great debts of gratitude to the following: Bill Amos, Rosalind Arden, Christopher Badcock, Rosa Beddington, David Bentley, Ray Blanchard, Sam Brittan, John Burn, Francis Crick, Gerhard Cristofori, Paul Davies, Barry Dickson, Richard Durbin, Jim Edwardson, Myrna Gopnik, Anthony Gottlieb, Dean Hamer, Nick Hastie, Brett Holland, Tony Ingram, Mary James, Harmke Kamminga, Terence Kealey, Arnold Levine, Colin Merritt, Geoffrey Miller, Graeme Mitchison, Anders Moller, Oliver Morton, Kim Nasmyth, Sasha Norris, Mark Pagel, Rose Paterson, David Penny, Marion Petrie, Steven Pinker, Robert Plomin, Anthony Poole, Christine Rees, Janet Rossant, Mark Ridley, Robert Sapolsky, Tom Shakespeare, Alcino Silva, Lee Silver, Tom Strachan, John Sulston, Tim Tully, Thomas Vogt, Jim Watson, Eric Wieschaus and Ian Wilmut.

Special thanks to all my colleagues at the International Centre for Life, where we have been trying to bring the genome to life. Without the day-to-day interest and support from them in matters biological and genetic, I doubt I could have written this book. They are Alastair Balls, John Burn, Linda Conlon, Ian Fells, Irene Nyquist, Neil Sullivan, Elspeth Wills and many others.

Parts of two chapters first appeared in newspaper columns and magazine articles. I am grateful to Charles Moore of the Daily Telegraph and David Goodhart of Prospect for publishing them.

My agent, Felicity Bryan, has been enthusiasm personified throughout. Three editors had more faith in this book when it was just a proposal than (I now admit) I did: Christopher Potter, Marion Manneker and Maarten Carbo.

But to one person I give deeper and more heartfelt gratitude than to all the rest put together: my wife, Anya Hurlbert.


P R E F A C E

The human genome — the complete set of human genes - comes packaged in twenty-three separate pairs of chromosomes. Of these, twenty-two pairs are numbered in approximate order of size, from the largest (number 1) to the smallest (number 22), while the remaining pair consists of the sex chromosomes: two large X chromosomes in women, one X and one small Y in men. In size, the X comes between chromosomes 7 and 8, whereas the Y is the smallest.

The number 23 is of no significance. Many species, including our closest relatives among the apes, have more chromosomes, and many have fewer. Nor do genes of similar function and type necessarily cluster on the same chromosome. So a few years ago, leaning over a laptop computer talking to David Haig, an evolutionary biologist, I was slightly startled to hear him say that chromosome 19 was his favourite chromosome. It has all sorts of mischievous genes on it, he explained. I had never thought of chromosomes as having personalities before. They are, after all, merely arbitrary collections of genes. But Haig's chance remark planted an idea in my head and I could not get it out. Why not try to tell the unfolding story of the human genome, now being discovered in detail for the first time, chromosome by chromosome, by picking a gene from each chromosome to fit the story as it is told? Primo Levi did something similar with the periodic table of the elements in his autobiographical short stories. He related each chapter of his life to an element, one that he had had some contact with during the period he was describing.

I began to think about the human genome as a sort of autobiography in its own right — a record, written in 'genetish', of all the vicissitudes and inventions that had characterised the history of our species and its ancestors since the very dawn of life. There are genes that have not changed much since the very first single-celled creatures populated the primeval ooze. There are genes that were developed when our ancestors were worm-like. There are genes that must have first appeared when our ancestors were fish. There are genes that exist in their present form only because of recent epidemics of disease. And there are genes that can be used to write the history of human migrations in the last few thousand years.

From four billion years ago to just a few hundred years ago, the genome has been a sort of autobiography for our species, recording the important events as they occurred.

I wrote down a list of the twenty-three chromosomes and next to each I began to list themes of human nature. Gradually and painstakingly I began to find genes that were emblematic of my story. There were frequent frustrations when I could not find a suitable gene, or when I found the ideal gene and it was on the wrong chromosome. There was the puzzle of what to do with the X and Y chromosomes, which I have placed after chromosome 7, as befits the X chromosome's size. You now know why the last chapter of a book that boasts in its subtitle that it has twenty-three chapters is called Chapter 22.

It is, at first glance, a most misleading thing that I have done. I may seem to be implying that chromosome 1 came first, which it did not. I may seem to imply that chromosome 11 is exclusively concerned with human personality, which it is not. There are probably 60,000—80,000 genes in the human genome and I could not tell you about all of them, partly because fewer than 8,000 have been found (though the number is growing by several hundred a month) and partly because the great majority of them are tedious biochemical middle managers.

But what I can give you is a coherent glimpse of the whole: a whistle-stop tour of some of the more interesting sites in the genome and what they tell us about ourselves. For we, this lucky generation, will be the first to read the book that is the genome. Being able to read the genome will tell us more about our origins, our evolution, our nature and our minds than all the efforts of science to date. It will revolutionise anthropology, psychology, medicine, palaeontology and virtually every other science. This is not to claim that everything is in the genes, or that genes matter more than other factors. Clearly, they do not. But they matter, that is for sure.

This is not a book about the Human Genome Project — about mapping and sequencing techniques - but a book about what that project has found. Some time in the year 2000, we shall probably have a rough first draft of the complete human genome. In just a few short years we will have moved from knowing almost nothing about our genes to knowing everything. I genuinely believe that we are living through the greatest intellectual moment in history. Bar none. Some may protest that the human being is more than his genes. I do not deny it. There is much, much more to each of us than a genetic code. But until now human genes were an almost complete mystery. We will be the first generation to penetrate that mystery. We stand on the brink of great new answers but, even more, of great new questions. This is what I have tried to convey in this book.

P R I M E R

The second part of this preface is intended as a brief primer, a sort of narrative glossary, on the subject of genes and how they work.

I hope that readers will glance through it at the outset and return to it at intervals if they come across technical terms that are not explained. Modern genetics is a formidable thicket of jargon. I have tried hard to use the bare minimum of technical terms in this book, but some are unavoidable.

The human body contains approximately 100 trillion (million million) CELLS, most of which are less than a tenth of a millimetre across. Inside each cell there is a black blob called a NUCLEUS. Inside the nucleus are two complete sets of the human GENOME (except in egg cells and sperm cells, which have one copy each, and red blood cells, which have none). One set of the genome came from the mother and one from the father. In principle, each set includes the same 60,000-80,000 GENES on the same twenty-three CHROMOSOMES. In practice, there are often small and subtle differences between the paternal and maternal versions of each gene, differences that account for blue eyes or brown, for example. When we breed, we pass on one complete set, but only after swapping bits of the paternal and maternal chromosomes in a procedure known as RECOMBINATION.
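For the programmatically minded, the swap can be sketched in a few lines of Python. The strings and the single crossover point are simplifications of my own, since real chromosomes pair up along their whole length and may cross over more than once:

    import random

    # A toy sketch of recombination: before being passed on, the maternal
    # and paternal copies of a chromosome swap segments at a crossover
    # point. Chromosomes here are just short strings of bases.
    def recombine(maternal: str, paternal: str) -> str:
        cut = random.randrange(1, len(maternal))   # pick a crossover point
        return maternal[:cut] + paternal[cut:]     # one possible recombinant

    print(recombine("AAAAAAAAAA", "GGGGGGGGGG"))   # e.g. 'AAAAGGGGGG'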

Imagine that the genome is a book.

There are twenty-three chapters, called CHROMOSOMES.

Each chapter contains several thousand stories, called GENES.

Each story is made up of paragraphs, called EXONS, which are interrupted by advertisements called INTRONS.

Each paragraph is made up of words, called CODONS.

Each word is written in letters called BASES.

There are one billion words in the book, which makes it longer than 5,000 volumes the size of this one, or as long as 800 Bibles.

If I read the genome out to you at the rate of one word per second for eight hours a day, it would take me a century. If I wrote out the human genome, one letter per millimetre, my text would be as long as the River Danube. This is a gigantic document, an immense book, a recipe of extravagant length, and it all fits inside the microscopic nucleus of a tiny cell that fits easily upon the head of a pin.
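Sceptical readers can check the arithmetic for themselves. A few lines of Python, using rounded figures (three billion letters, and so one billion three-letter words), confirm the century:

    # Back-of-the-envelope check of the reading-time claim above.
    letters = 3_000_000_000
    words = letters // 3                  # about one billion codons

    # Reading aloud at one word per second, eight hours a day:
    seconds_per_day = 8 * 60 * 60         # 28,800 seconds
    days = words / seconds_per_day        # roughly 34,700 days
    years = days / 365
    print(f"{years:.0f} years")           # ~95 years: about a century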

The idea of the genome as a book is not, strictly speaking, even a metaphor. It is literally true. A book is a piece of digital information, written in linear, one-dimensional and one-directional form and defined by a code that transliterates a small alphabet of signs into a large lexicon of meanings through the order of their groupings. So is a genome. The only complication is that all English books read from left to right, whereas some parts of the genome read from left to right, and some from right to left, though never both at the same time.

(Incidentally, you will not find the tired word 'blueprint' in this book, after this paragraph, for three reasons. First, only architects and engineers use blueprints and even they are giving them up in the computer age, whereas we all use books. Second, blueprints are very bad analogies for genes. Blueprints are two-dimensional maps, not one-dimensional digital codes. Third, blueprints are too literal for genetics, because each part of a blueprint makes an equivalent part of the machine or building; each sentence of a recipe book does not make a different mouthful of cake.)

Whereas English books are written in words of variable length using twenty-six letters, genomes are written entirely in three-letter words, using only four letters: A, C, G and T (which stand for adenine, cytosine, guanine and thymine). And instead of being written on flat pages, they are written on long chains of sugar and phosphate called DNA molecules to which the bases are attached as side rungs. Each chromosome is one pair of (very) long DNA molecules.

The genome is a very clever book, because in the right conditions it can both photocopy itself and read itself. The photocopying is known as REPLICATION, and the reading as TRANSLATION. Replication works because of an ingenious property of the four bases: A likes to pair with T, and G with C. So a single strand of DNA can copy itself by assembling a complementary strand with Ts opposite all the As, As opposite all the Ts, Cs opposite all the Gs and Gs opposite all the Cs. In fact, the usual state of DNA is the famous DOUBLE HELIX of the original strand and its complementary pair intertwined.

To make a copy of the complementary strand therefore brings back the original text. So the sequence ACGT becomes TGCA in the copy, which transcribes back to ACGT in the copy of the copy. This enables DNA to replicate indefinitely, yet still contain the same information.
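The pairing rule is simple enough to sketch in a few lines of Python. The sketch ignores the fact that the two strands of a real double helix run in opposite directions, but it shows why two rounds of complementing restore the original text:

    # The base-pairing rule described above: A pairs with T, G with C,
    # so complementing a strand twice restores it exactly.
    PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def complement(strand: str) -> str:
        return "".join(PAIR[base] for base in strand)

    original = "ACGT"
    copy = complement(original)          # "TGCA"
    copy_of_copy = complement(copy)      # "ACGT" again
    assert copy_of_copy == original      # the information survives copying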

Translation is a little more complicated. First the text of a gene is TRANSCRIBED into a copy by the same base-pairing process, but this time the copy is made not of DNA but of RNA, a very slightly different chemical. RNA, too, can carry a linear code and it uses the same letters as DNA except that it uses U, for uracil, in place of T. This RNA copy, called the MESSENGER RNA, is then edited by the excision of all introns and the splicing together of all exons (see above).

The messenger is then befriended by a microscopic machine called a RIBOSOME, itself made partly of RNA. The ribosome moves along the messenger, translating each three-letter codon in turn into one letter of a different alphabet, an alphabet of twenty different AMINO ACIDS, each brought by a different version of a molecule called TRANSFER RNA. Each amino acid is attached to the last to form a chain in the same order as the codons. When the whole message has been translated, the chain of amino acids folds itself up into a distinctive shape that depends on its sequence. It is now known as a PROTEIN.
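The whole pipeline, text to messenger to protein, can be caricatured in a few lines of Python. The codon table here is a deliberately tiny sample of the real one, which has sixty-four entries, and the 'gene' is invented for illustration:

    # A toy sketch of transcription and translation as described above.
    CODON_TABLE = {          # messenger-RNA codons -> amino acids
        "AUG": "Met",        # also the usual 'start' signal
        "GCG": "Ala",
        "CGA": "Arg",
        "UAA": "STOP",
    }

    def transcribe(dna: str) -> str:
        """Copy the gene into messenger RNA: U takes the place of T."""
        return dna.replace("T", "U")

    def translate(mrna: str) -> list[str]:
        """Read the message three letters at a time, as a ribosome does."""
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            amino_acid = CODON_TABLE[mrna[i:i + 3]]
            if amino_acid == "STOP":
                break
            protein.append(amino_acid)
        return protein

    gene = "ATGGCGCGATAA"                 # a made-up four-codon 'gene'
    print(translate(transcribe(gene)))   # ['Met', 'Ala', 'Arg']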

Almost everything in the body, from hair to hormones, is either made of proteins or made by them. Every protein is a translated gene. In particular, the body's chemical reactions are catalysed by proteins known as ENZYMES. Even the processing, photocopying, error-correction and assembly of DNA and RNA molecules themselves — the replication and translation — are done with the help of proteins. Proteins are also responsible for switching genes on and off, by physically attaching themselves to PROMOTER and ENHANCER sequences near the start of a gene's text. Different genes are switched on in different parts of the body.

When genes are replicated, mistakes are sometimes made. A letter (base) is occasionally missed out or the wrong letter inserted. Whole sentences or paragraphs are sometimes duplicated, omitted or reversed. This is known as MUTATION. Many mutations are neither harmful nor beneficial, for instance if they change one codon to another that has the same amino acid 'meaning': there are sixty-four different codons and only twenty amino acids, so many DNA 'words' share the same meaning. Human beings accumulate about one hundred mutations per generation, which may not seem much given that there are more than a billion codons in the human genome, but in the wrong place even a single one can be fatal.
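A concrete example of shared 'meanings': the four codons beginning GC all spell alanine, so a copying mistake in the third letter changes the text without changing the protein. A toy sketch in Python, covering only this one family of codons:

    # All four GC- codons spell alanine, so a third-letter mutation
    # among them is 'synonymous': the protein comes out unchanged.
    ALANINE_CODONS = {"GCU", "GCC", "GCA", "GCG"}

    def is_synonymous(before: str, after: str) -> bool:
        # hypothetical helper, for this one amino-acid family only
        return before in ALANINE_CODONS and after in ALANINE_CODONS

    print(is_synonymous("GCG", "GCA"))   # True: same amino acid, harmless
    print(is_synonymous("GCG", "CCG"))   # False: CCG spells proline instead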

All rules have exceptions (including this one). Not all human genes are found on the twenty-three principal chromosomes; a few live inside little blobs called mitochondria and have probably done so ever since mitochondria were free-living bacteria. Not all genes are made of DNA: some viruses use RNA instead. Not all genes are recipes for proteins. Some genes are transcribed into RNA but not translated into protein; the RNA goes directly to work instead either as part of a ribosome or as a transfer RNA. Not all reactions are catalysed by proteins; a few are catalysed by RNA instead. Not every protein comes from a single gene; some are put together from several recipes. Not all of the sixty-four three-letter codons specify an amino acid: three signify STOP commands instead. And finally, not all DNA spells out genes. Most of it is a jumble of repetitive or random sequences that is rarely or never transcribed: the so-called junk DNA.

That is all you need to know. The tour of the human genome can begin.


C H R O M O S O M E 1

L i f e

All forms that perish other forms supply,

(By turns we catch the vital breath and die)

Like bubbles on the sea of matter borne,

They rise, they break, and to that sea return.

Alexander Pope, An Essay on Man

In the beginning was the word. The word proselytised the sea with its message, copying itself unceasingly and forever. The word discovered how to rearrange chemicals so as to capture little eddies in the stream of entropy and make them live. The word transformed the land surface of the planet from a dusty hell to a verdant paradise.

The word eventually blossomed and became sufficiently ingenious to build a porridgy contraption called a human brain that could discover and be aware of the word itself.

My porridgy contraption boggles every time I think this thought.

In four thousand million years of earth history, I am lucky enough to be alive today. In five million species, I was fortunate enough to be born a conscious human being. Among six thousand million people on the planet, I was privileged enough to be born in the country where the word was discovered. In all of the earth's history, biology and geography, I was born just five years after the moment when, and just two hundred miles from the place where, two members of my own species discovered the structure of DNA and hence uncovered the greatest, simplest and most surprising secret in the universe. Mock my zeal if you wish; consider me a ridiculous materialist for investing such enthusiasm in an acronym. But follow me on a journey back to the very origin of life, and I hope I can convince you of the immense fascination of the word.

'As the earth and ocean were probably peopled with vegetable productions long before the existence of animals; and many families of these animals long before other families of them, shall we conjecture that one and the same kind of living filaments is and has been the cause of all organic life?' asked the polymathic poet and physician Erasmus Darwin in 1794.1 It was a startling guess for the time, not only in its bold conjecture that all organic life shared the same origin, sixty-five years before his grandson Charles' book on the topic, but for its weird use of the word 'filaments'. The secret of life is indeed a thread.

Yet how can a filament make something live? Life is a slippery thing to define, but it consists of two very different skills: the ability to replicate, and the ability to create order. Living things produce approximate copies of themselves: rabbits produce rabbits, dandelions make dandelions. But rabbits do more than that. They eat grass, transform it into rabbit flesh and somehow build bodies of order and complexity from the random chaos of the world. They do not defy the second law of thermodynamics, which says that in a closed system everything tends from order towards disorder, because rabbits are not closed systems. Rabbits build packets of order and complexity called bodies but at the cost of expending large amounts of energy. In Erwin Schrödinger's phrase, living creatures 'drink orderliness' from the environment.

The key to both of these features of life is information. The ability to replicate is made possible by the existence of a recipe, the information that is needed to create a new body. A rabbit's egg carries the instructions for assembling a new rabbit. But the ability to create order through metabolism also depends on information - the instructions for building and maintaining the equipment that creates the order. An adult rabbit, with its ability to both reproduce and metabolise, is prefigured and presupposed in its living filaments in the same way that a cake is prefigured and presupposed in its recipe. This is an idea that goes right back to Aristotle, who said that the 'concept' of a chicken is implicit in an egg, or that an acorn was literally 'informed' by the plan of an oak tree. When Aristotle's dim perception of information theory, buried under generations of chemistry and physics, re-emerged amid the discoveries of modern genetics, Max Delbrück joked that the Greek sage should be given a posthumous Nobel prize for the discovery of DNA.2

The filament of DNA is information, a message written in a code of chemicals, one chemical for each letter. It is almost too good to be true, but the code turns out to be written in a way that we can understand. Just like written English, the genetic code is a linear language, written in a straight line. Just like written English, it is digital, in that every letter bears the same importance. Moreover, the language of DNA is considerably simpler than English, since it has an alphabet of only four letters, conventionally known as A, C, G and T.

Now that we know that genes are coded recipes, it is hard to recall how few people even guessed such a possibility. For the first half of the twentieth century, one question reverberated unanswered through biology: what is a gene? It seemed almost impossibly mysterious. Go back not to 1953, the year of the discovery of DNA's symmetrical structure, but ten years further, to 1943. Those who will do most to crack the mystery, a whole decade later, are working on other things in 1943. Francis Crick is working on the design of naval mines near Portsmouth. At the same time James Watson is just enrolling as an undergraduate at the precocious age of fifteen at the University of Chicago; he is determined to devote his life to ornithology. Maurice Wilkins is helping to design the atom bomb in the United States. Rosalind Franklin is studying the structure of coal for the British government.



In Auschwitz in 1943, Josef Mengele is torturing twins to death in a grotesque parody of scientific inquiry. Mengele is trying to understand heredity, but his eugenics proves not to be the path to enlightenment. Mengele's results will be useless to future scientists.

In Dublin in 1943, a refugee from Mengele and his ilk, the great physicist Erwin Schrödinger is embarking on a series of lectures at Trinity College entitled 'What is life?' He is trying to define a problem. He knows that chromosomes contain the secret of life, but he cannot understand how: 'It is these chromosomes . . . that contain in some kind of code-script the entire pattern of the individual's future development and of its functioning in the mature state.' The gene, he says, is too small to be anything other than a large molecule, an insight that will inspire a generation of scientists, including Crick, Watson, Wilkins and Franklin, to tackle what suddenly seems like a tractable problem. Having thus come tantalisingly close to the answer, though, Schrödinger veers off track. He thinks that the secret of this molecule's ability to carry heredity lies in his beloved quantum theory, and is pursuing that obsession down what will prove to be a blind alley. The secret of life has nothing to do with quantum states. The answer will not come from physics.3

In New York in 1943, a sixty-six-year-old Canadian scientist, Oswald Avery, is putting the finishing touches to an experiment that will decisively identify DNA as the chemical manifestation of heredity. He has proved in a series of ingenious experiments that a pneumonia bacterium can be transformed from a harmless to a virulent strain merely by absorbing a simple chemical solution. By 1943, Avery has concluded that the transforming substance, once purified, is DNA. But he will couch his conclusions in such cautious language for publication that few will take notice until much later.

In a letter to his brother Roy written in May 1943, Avery is only slightly less cautious:4

If we are right, and of course that's not yet proven, then it means that nucleic acids [DNA] are not merely structurally important but functionally active substances in determining the biochemical activities and specific characteristics of cells — and that by means of a known chemical substance it is possible to induce predictable and hereditary changes in cells. That is something that has long been the dream of geneticists.

Avery is almost there, but he is still thinking along chemical lines.

'All life is chemistry', said Jan Baptista van Helmont in 1648, guessing.

At least some life is chemistry, said Friedrich Wöhler in 1828 after synthesising urea from ammonium chloride and silver cyanate, thus breaking the hitherto sacrosanct divide between the chemical and biological worlds: urea was something that only living things had produced before. That life is chemistry is true but boring, like saying that football is physics. Life, to a rough approximation, consists of the chemistry of three atoms, hydrogen, carbon and oxygen, which between them make up ninety-eight per cent of all atoms in living beings. But it is the emergent properties of life — such as heritability — not the constituent parts that are interesting. Avery cannot conceive what it is about DNA that enables it to hold the secret of heritable properties. The answer will not come from chemistry.

In Bletchley, in Britain, in 1943, in total secrecy, a brilliant mathematician, Alan Turing, is seeing his most incisive insight turned into physical reality. Turing has argued that numbers can compute numbers. To crack the Lorenz encoding machines of the German forces, a computer called Colossus has been built based on Turing's principles: it is a universal machine with a modifiable stored program.

Nobody realises it at the time, least of all Turing, but he is probably closer to the mystery of life than anybody else. Heredity is a modifiable stored program; metabolism is a universal machine. The recipe that links them is a code, an abstract message that can be embodied in a chemical, physical or even immaterial form. Its secret is that it can cause itself to be replicated. Anything that can use the resources of the world to get copies of itself made is alive; the most likely form for such a thing to take is a digital message - a number, a script or a word.5

In New Jersey in 1943, a quiet, reclusive scholar named Claude Shannon is ruminating about an idea he had first had at Princeton a few years earlier. Shannon's idea is that information and entropy are opposite faces of the same coin and that both have an intimate link with energy. The less entropy a system has, the more information it contains. The reason a steam engine can harness the energy from burning coal and turn it into rotary motion is because the engine has high information content — information injected into it by its designer. So does a human body. Aristotle's information theory meets Newton's physics in Shannon's brain. Like Turing, Shannon has no thoughts about biology. But his insight is of more relevance to the question of what is life than a mountain of chemistry and physics. Life, too, is digital information written in DNA.6

In the beginning was the word. The word was not DNA. That came afterwards, when life was already established, and when it had divided the labour between two separate activities: chemical work and information storage, metabolism and replication. But DNA contains a record of the word, faithfully transmitted through all subsequent aeons to the astonishing present.

Imagine the nucleus of a human egg beneath the microscope. Arrange the twenty-three chromosomes, if you can, in order of size, the biggest on the left and the smallest on the right. Now zoom in on the largest chromosome, the one called, for purely arbitrary reasons, chromosome 1. Every chromosome has a long arm and a short arm separated by a pinch point known as a centromere. On the long arm of chromosome 1, close to the centromere, you will find, if you read it carefully, that there is a sequence of 120 letters - As, Cs, Gs and Ts - that repeats over and over again. Between each repeat there lies a stretch of more random text, but the 120-letter paragraph keeps coming back like a familiar theme tune, in all more than 100 times. This short paragraph is perhaps as close as we can get to an echo of the original word.

This 'paragraph' is a small gene, probably the single most active gene in the human body. Its 120 letters are constantly being copied into a short filament of RNA. The copy is known as 5S RNA. It sets up residence with a lump of proteins and other RNAs, carefully intertwined, in a ribosome, a machine whose job is to translate DNA recipes into proteins. And it is proteins that enable DNA to replicate. To paraphrase Samuel Butler, a protein is just a gene's way of making another gene; and a gene is just a protein's way of making another protein. Cooks need recipes, but recipes also need cooks. Life consists of the interplay of two kinds of chemicals: proteins and DNA.

Protein represents chemistry, living, breathing, metabolism and behaviour - what biologists call the phenotype. DNA represents information, replication, breeding, sex - what biologists call the genotype. Neither can exist without the other. It is the classic case of chicken and egg: which came first, DNA or protein? It cannot have been DNA, because DNA is a helpless, passive piece of mathematics, which catalyses no chemical reactions. It cannot have been protein, because protein is pure chemistry with no known way of copying itself accurately. It seems impossible either that DNA invented protein or vice versa. This might have remained a baffling and strange conundrum had not the word left a trace of itself faintly drawn on the filament of life. Just as we now know that eggs came long before chickens (the reptilian ancestors of all birds laid eggs), so there is growing evidence that RNA came before proteins.

RNA is a chemical substance that links the two worlds of DNA and protein. It is used mainly in the translation of the message from the alphabet of DNA to the alphabet of proteins. But in the way it behaves, it leaves little doubt that it is the ancestor of both. RNA was Greece to DNA's Rome: Homer to her Virgil.

RNA was the word. RNA left behind five little clues to its priority over both protein and DNA. Even today, the ingredients of DNA are made by modifying the ingredients of RNA, not by a more direct route. Also DNA's letter Ts are made from RNA's letter Us. Many modern enzymes, though made of protein, rely on small molecules of RNA to make them work. Moreover, RNA, unlike DNA and protein, can copy itself without assistance: give it the right ingredients and it will stitch them together into a message.

Wherever you look in the cell, the most primitive and basic functions require the presence of RNA. It is an RNA-dependent enzyme that takes the message, made of RNA, from the gene. It is an RNA-containing machine, the ribosome, that translates that message, and it is a little RNA molecule that fetches and carries the amino acids for the translation of the gene's message. But above all, RNA — unlike DNA — can act as a catalyst, breaking up and joining other molecules including RNAs themselves. It can cut them up, join the ends together, make some of its own building blocks, and elongate a chain of RNA. It can even operate on itself, cutting out a chunk of text and splicing the free ends together again.7

The discovery of these remarkable properties of RNA in the early 1980s, made by Thomas Cech and Sidney Altman, transformed our understanding of the origin of life. It now seems probable that the very first gene, the 'ur-gene', was a combined replicator-catalyst, a word that consumed the chemicals around it to duplicate itself. It may well have been made of RNA. By repeatedly selecting random RNA molecules in the test tube based on their ability to catalyse reactions, it is possible to 'evolve' catalytic RNAs from scratch — almost to rerun the origin of life. And one of the most surprising results is that these synthetic RNAs often end up with a stretch of RNA text that reads remarkably like part of the text of a ribosomal RNA gene such as the 5S gene on chromosome 1.

Back before the first dinosaurs, before the first fishes, before the first worms, before the first plants, before the first fungi, before the first bacteria, there was an RNA world — probably somewhere around four billion years ago, soon after the beginning of planet earth's very existence and when the universe itself was only ten billion years old. We do not know what these 'ribo-organisms' looked like. We can only guess at what they did for a living, chemically speaking. We do not know what came before them. We can be pretty sure that they once existed, because of the clues to RNA's role that survive in living organisms today.8

These ribo-organisms had a big problem. RNA is an unstable substance, which falls apart within hours. Had these organisms ventured anywhere hot, or tried to grow too large, they would have faced what geneticists call an error catastrophe - a rapid decay of the message in their genes. One of them invented by trial and error a new and tougher version of RNA called DNA and a system for making RNA copies from it, including a machine we'll call the proto-ribosome. It had to work fast and it had to be accurate. So it stitched together genetic copies three letters at a time, the better to be fast and accurate. Each threesome came flagged with a tag to make it easier for the proto-ribosome to find, a tag that was made of amino acid. Much later, those tags themselves became joined together to make proteins and the three-letter word became a form of code for the proteins - the genetic code itself. (Hence to this day, the genetic code consists of three-letter words, each spelling out a particular one of twenty amino acids as part of a recipe for a protein.) And so was born a more sophisticated creature that stored its genetic recipe on DNA, made its working machines of protein and used RNA to bridge the gap between them.

Her name was Luca, the Last Universal Common Ancestor. What did she look like, and where did she live? The conventional answer is that she looked like a bacterium and she lived in a warm pond, possibly by a hot spring, or in a marine lagoon. In the last few years it has been fashionable to give her a more sinister address, since it became clear that the rocks beneath the land and sea are impregnated with billions of chemical-fuelled bacteria. Luca is now usually placed deep underground, in a fissure in hot igneous rocks, where she fed on sulphur, iron, hydrogen and carbon. To this day, the surface life on earth is but a veneer. Perhaps ten times as much organic carbon as exists in the whole biosphere is in thermophilic bacteria deep beneath the surface, where they are possibly responsible for generating what we call natural gas.9

There is, however, a conceptual difficulty about trying to identify the earliest forms of life. These days it is impossible for most creatures to acquire genes except from their parents, but that may not always have been so. Even today, bacteria can acquire genes from other bacteria merely by ingesting them. There might once have been widespread trade, even burglary, of genes. In the deep past chromosomes were probably numerous and short, containing just one gene each, which could be lost or gained quite easily. If this was so, Carl Woese points out, the organism was not yet an enduring entity. It was a temporary team of genes. The genes that ended up in all of us may therefore have come from lots of different 'species' of creature and it is futile to try to sort them into different lineages. We are descended not from one ancestral Luca, but from the whole community of genetic organisms. Life, says Woese, has a physical history, but not a genealogical one.10

You can look on such a conclusion as a fuzzy piece of comforting, holistic, communitarian philosophy - we are all descended from society, not from an individual species - or you can see it as the ultimate proof of the theory of the selfish gene: in those days, even more than today, the war was carried on between genes, using organisms as temporary chariots and forming only transient alliances; today it is more of a team game. Take your pick.

Even if there were lots of Lucas, we can still speculate about where they lived and what they did for a living. This is where the second problem with the thermophilic bacteria arises. Thanks to some brilliant detective work by three New Zealanders published in 1998, we can suddenly glimpse the possibility that the tree of life, as it appears in virtually every textbook, may be upside down. Those books assert that the first creatures were like bacteria, simple cells with single copies of circular chromosomes, and that all other living things came about when teams of bacteria ganged together to make complex cells. It may much more plausibly be the exact reverse.

The very first modern organisms were not like bacteria; they did not live in hot springs or deep-sea volcanic vents. They were much more like protozoa: with genomes fragmented into several linear chromosomes rather than one circular one, and 'polyploid' — that is, with several spare copies of every gene to help with the correction of spelling errors. Moreover, they would have liked cool climates.

As Patrick Forterre has long argued, it now looks as if bacteria came later, highly specialised and simplified descendants of the Lucas, long after the invention of the DNA-protein world. Their trick was to drop much of the equipment of the RNA world specifically to enable them to live in hot places. It is we that have retained the primitive molecular features of the Lucas in our cells; bacteria are much more 'highly evolved' than we are.

This strange tale is supported by the existence of molecular 'fossils' - little bits of RNA that hang about inside the nucleus of your cells doing unnecessary things such as splicing themselves out of genes: guide RNA, vault RNA, small nuclear RNA, small nucleolar RNA, self-splicing introns. Bacteria have none of these, and it is more parsimonious to believe that they dropped them rather than we invented them. (Science, perhaps surprisingly, is supposed to treat simple explanations as more probable than complex ones unless given reason to think otherwise; the principle is known in logic as Occam's razor.) Bacteria dropped the old RNAs when they invaded hot places like hot springs or subterranean rocks where temperatures can reach 170 °C — to minimise mistakes caused by heat, it paid to simplify the machinery. Having dropped the RNAs, bacteria found their new streamlined cellular machinery made them good at competing in niches where speed of reproduction was an advantage - such as parasitic and scavenging niches. We retained those old RNAs, relics of machines long superseded, but never entirely thrown away.

Unlike the massively competitive world of bacteria, we — that is, all animals, plants and fungi — never came under such fierce competition to be quick and simple. We put a premium instead on being complicated, in having as many genes as possible, rather than a streamlined machine for using them.11

The three-letter words of the genetic code are the same in every creature. CGA means arginine and GCG means alanine - in bats, in beetles, in beech trees, in bacteria. They even mean the same in the misleadingly named archaebacteria living at boiling temperatures in sulphurous springs thousands of feet beneath the surface of the Atlantic ocean or in those microscopic capsules of deviousness called viruses. Wherever you go in the world, whatever animal, plant, bug or blob you look at, if it is alive, it will use the same dictionary and know the same code. All life is one. The genetic code, bar a few tiny local aberrations, mostly for unexplained reasons in the ciliate protozoa, is the same in every creature. We all use exactly the same language.

This means - and religious people might find this a useful argument - that there was only one creation, one single event when life was born. Of course, that life might have been born on a different planet and seeded here by spacecraft, or there might even have been thousands of kinds of life at first, but only Luca survived in the ruthless free-for-all of the primeval soup. But until the genetic code was cracked in the 1960s, we did not know what we now know: that all life is one; seaweed is your distant cousin and anthrax one of your advanced relatives. The unity of life is an empirical fact.

Erasmus Darwin was outrageously close to the mark: 'One and the same kind of living filaments has been the cause of all organic life.'

In this way simple truths can be read from the book that is the genome: the unity of all life, the primacy of RNA, the chemistry of the very earliest life on the planet, the fact that large, single-celled creatures were probably the ancestors of bacteria, not vice versa.

We have no fossil record of the way life was four billion years ago. We have only this great book of life, the genome. The genes in the cells of your little finger are the direct descendants of the first replicator molecules; through an unbroken chain of tens of billions of copyings, they come to us today still bearing a digital message that has traces of those earliest struggles of life. If the human genome can tell us things about what happened in the primeval soup, how much more can it tell us about what else happened during the succeeding four million millennia. It is a record of our history written in the code for a working machine.


C H R O M O S O M E 2

S p e c i e s

Man with all his noble qualities still bears in his bodily frame the indelible stamp of his lowly origin.

Charles Darwin

Sometimes the obvious can stare you in the face. Until 1955, it was agreed that human beings had twenty-four pairs of chromosomes. It was just one of those facts that everybody knew was right. They knew it was right because in 1921 a Texan named Theophilus Painter had sliced thin sections off the testicles of two black men and one white man castrated for insanity and 'self-abuse', fixed the slices in chemicals and examined them under the microscope. Painter tried to count the tangled mass of unpaired chromosomes he could see in the spermatocytes of the unfortunate men, and arrived at the figure of twenty-four. 'I feel confident that this is correct,' he said.

Others later repeated his experiment in other ways. All agreed the number was twenty-four.

For thirty years, nobody disputed this 'fact'. One group of scientists abandoned their experiments on human liver cells because they could only find twenty-three pairs of chromosomes in each cell. Another researcher invented a method of separating the chromosomes, but still he thought he saw twenty-four pairs. It was not until 1955, when an Indonesian named Joe-Hin Tjio travelled from Spain to Sweden to work with Albert Levan, that the truth dawned.

Tjio and Levan, using better techniques, plainly saw twenty-three pairs. They even went back and counted twenty-three pairs in photographs in books where the caption stated that there were twenty-four pairs. There are none so blind as do not wish to see.1

It is actually rather surprising that human beings do not have twenty-four pairs of chromosomes. Chimpanzees have twenty-four pairs of chromosomes; so do gorillas and orangutans. Among the apes we are the exception. Under the microscope, the most striking and obvious difference between ourselves and all the other great apes is that we have one pair less. The reason, it immediately becomes apparent, is not that a pair of ape chromosomes has gone missing in us, but that two ape chromosomes have fused together in us. Chromosome 2, the second biggest of the human chromosomes, is in fact formed from the fusion of two medium-sized ape chromosomes, as can be seen from the pattern of black bands on the respective chromosomes.

Pope John-Paul II, in his message to the Pontifical Academy of Sciences on 22 October 1996, argued that between ancestral apes and modern human beings, there was an 'ontological discontinuity' — a point at which God injected a human soul into an animal lineage. Thus can the Church be reconciled to evolutionary theory. Perhaps the ontological leap came at the moment when two ape chromosomes were fused, and the genes for the soul lie near the middle of chromosome 2.

The pope notwithstanding, the human species is by no means the pinnacle of evolution. Evolution has no pinnacle and there is no such thing as evolutionary progress. Natural selection is simply the process by which life-forms change to suit the myriad opportunities afforded by the physical environment and by other life-forms. The black-smoker bacterium, living in a sulphurous vent on the floor of the Atlantic ocean and descended from a stock of bacteria that parted company with our ancestors soon after Luca's day, is arguably more highly evolved than a bank clerk, at least at the genetic level. Given that it has a shorter generation time, it has had more time to perfect its genes.

This book's obsession with the condition of one species, the human species, says nothing about that species' importance. Human beings are of course unique. They have, perched between their ears, the most complicated biological machine on the planet. But complexity is not everything, and it is not the goal of evolution.

Every species on the planet is unique. Uniqueness is a commodity in oversupply. None the less, I propose to try to probe this human uniqueness in this chapter, to uncover the causes of our idiosyncrasy as a species. Forgive my parochial concerns. The story of a briefly abundant hairless primate originating in Africa is but a footnote in the history of life, but in the history of the hairless primate it is central. What exactly is the unique selling point of our species?

Human beings are an ecological success. They are probably the most abundant large animal on the whole planet. There are nearly six billion of them, amounting collectively to something like 300 million tons of biomass. The only large animals that rival or exceed this quantity are ones we have domesticated - cows, chickens and sheep - or that depend on man-made habitats: sparrows and rats. By contrast, there are fewer than a thousand mountain gorillas in the world and even before we started slaughtering them and eroding their habitat there may not have been more than ten times that number. Moreover, the human species has shown a remarkable capacity for colonising different habitats, cold or hot, dry or wet, high or low, marine or desert.

Ospreys, barn owls and roseate terns are the only other large species to thrive in every continent except Antarctica and they remain strictly confined to certain habitats. No doubt, this ecological success of the human being comes at a high price and we are doomed to catastrophe soon enough: for a successful species we are remarkably pessimistic about the future. But for now we are a success.

Yet the remarkable truth is that we come from a long line of failures. We are apes, a group that almost went extinct fifteen million years ago in competition with the better-designed monkeys. We are primates, a group of mammals that almost went extinct forty-five million years ago in competition with the better-designed rodents. We are synapsid tetrapods, a group of reptiles that almost went extinct 200 million years ago in competition with the better-designed dinosaurs. We are descended from limbed fishes, which almost went extinct 360 million years ago in competition with the better-designed ray-finned fishes. We are chordates, a phylum that survived the Cambrian era 500 million years ago by the skin of its teeth in competition with the brilliantly successful arthropods. Our ecological success came against humbling odds.

In the four billion years since Luca, the word grew adept at building what Richard Dawkins has called 'survival machines': large, fleshy entities known as bodies that were good at locally reversing entropy the better to replicate the genes within them. They had done this by a venerable and massive process of trial and error, known as natural selection. Trillions of new bodies had been built, tested and enabled to breed only if they met increasingly stringent criteria for survival. At first, this had been a simple business of chemical efficiency: the best bodies were cells that found ways to convert other chemicals into D N A and protein. This phase lasted for about three billion years and it seemed as if life on earth, whatever it might do on other planets, consisted of a battle between competing strains of amoebae. Three billion years during which trillions of trillions of single-celled creatures lived, each one reproducing and dying every few days or so, amounts to a big heap of trial and error.

But it turned out that life was not finished. About a billion years ago, there came, quite suddenly, a new world order, with the invention of bigger, multicellular bodies, a sudden explosion of large creatures. Within the blink of a geological eye (the so-called Cambrian explosion may have lasted a mere ten or twenty million years), there were vast creatures of immense complexity: scuttling trilobites nearly a foot long; slimy worms even longer; waving algae half a yard across. Single-celled creatures still dominated, but these great unwieldy forms of giant survival machines were carving out a niche for themselves. And, strangely, these multicellular bodies had hit upon a sort of accidental progress. Although there were occasional setbacks caused by meteorites crashing into the earth from space, which had an unfortunate tendency to extirpate the larger and more complex forms, there was a trend of sorts discernible. The longer animals existed, the more complex some of them became. In particular, the brains of the brainiest animals were bigger and bigger in each successive age: the biggest brains in the Paleozoic were smaller than the biggest in the Mesozoic, which were smaller than the biggest in the Cenozoic, which were smaller than the biggest present now.

The genes had found a way to delegate their ambitions, by building bodies capable not just of survival, but of intelligent behaviour as well. Now, if a gene found itself in an animal threatened by winter storms, it could rely on its body to do something clever like migrate south or build itself a shelter.

Our breathless journey from four billion years ago brings us to just ten million years ago. Past the first insects, fishes, dinosaurs and birds to the time when the biggest-brained creature on the planet (corrected for body size) was probably our ancestor, an ape.

At that point, ten million years before the present, there probably lived at least two species of ape in Africa, though there may have been more. One was the ancestor of the gorilla, the other the common ancestor of the chimpanzee and the human being. The gorilla's ancestor had probably taken to the montane forests of a string of central African volcanoes, cutting itself off from the genes of other apes. Some time over the next five million years the other species gave rise to two different descendant species in the split that led to human beings and to chimpanzees.

The reason we know this is that the story is written in the genes. As recently as 1950 the great anatomist J. Z. Young could write that it was still not certain whether human beings descended from a common ancestor with apes, or from an entirely different group of primates separated from the ape lineage more than sixty million years ago. Others still thought the orangutan might prove our closest cousin.2 Yet we now know not only that chimpanzees separated from the human line after gorillas did, but that the chimp—human split occurred not much more than ten, possibly even less than five, million years ago. The rate at which genes randomly accumulate spelling changes gives a firm indication of relationships between species. The spelling differences between gorilla and chimp are greater than the spelling differences between chimp and human being — in every gene, protein sequence or random stretch of DNA that you care to look at. At its most prosaic this means that a hybrid of human and chimpanzee DNA separates into its constituent strands at a higher temperature than do hybrids of chimp and gorilla DNA, or of gorilla and human DNA.

Calibrating the molecular clock to give an actual date in years is much more difficult. Because apes are long-lived and breed at a comparatively advanced age, their molecular clocks tick rather slowly (the spelling mistakes are picked up mostly at the moment of replication, at the creation of an egg or sperm). But it is not clear exactly how much to correct the clock for this factor; nor do all genes agree. Some stretches of D N A seem to imply an ancient split between chimps and human beings; others, such as the mitochondria, suggest a more recent date. The generally accepted range is five to ten million years.3

Apart from the fusion of chromosome 2, visible differences between chimp and human chromosomes are few and tiny. In thirteen chromosomes no visible differences of any kind exist. If you select at random any 'paragraph' in the chimp genome and compare it with the comparable 'paragraph' in the human genome, you will find very few 'letters' are different: on average, less than two in every hundred. We are, to a ninety-eight per cent approximation, chimpanzees, and they are, with ninety-eight per cent confidence limits, human beings. If that does not dent your self-esteem, consider that chimpanzees are only ninety-seven per cent gorillas; and humans are also ninety-seven per cent gorillas. In other words we are more chimpanzee-like than gorillas are.
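The comparison itself is easy to caricature in Python. The two 'paragraphs' below are invented, and real comparisons must also cope with inserted and deleted letters, but the principle of counting mismatches is the same:

    # A rough sketch: line up two equal-length stretches of DNA text
    # and count the letters that differ.
    def percent_difference(seq_a: str, seq_b: str) -> float:
        assert len(seq_a) == len(seq_b)
        mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
        return 100 * mismatches / len(seq_a)

    human = "ACGTTGCAACGTGGCCTTAG"   # made-up 'paragraphs' for illustration
    chimp = "ACGTTGCATCGTGGCCTTAG"   # one letter in twenty differs
    print(percent_difference(human, chimp))   # 5.0 (the real figure: under 2%)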

How can this be? The differences between me and a chimp are immense. It is hairier, it has a different shaped head, a different shaped body, different limbs, makes different noises. There is nothing about chimpanzees that looks ninety-eight per cent like me. Oh really? Compared with what? If you took two Plasticine models of a mouse and tried to turn one into a chimpanzee, the other into a human being, most of the changes you would make would be the same. If you took two Plasticine amoebae and turned one into a chimpanzee, the other into a human being, almost all the changes you would make would be the same. Both would need thirty-two teeth, five fingers, two eyes, four limbs and a liver. Both would need hair, dry skin, a spinal column and three little bones in the middle ear. From the perspective of an amoeba, or for that matter a fertilised egg, chimps and human beings are ninety-eight per cent the same.

There is no bone in the chimpanzee body that I do not share. There is no known chemical in the chimpanzee brain that cannot be found in the human brain. There is no known part of the immune system, the digestive system, the vascular system, the lymph system or the nervous system that we have and chimpanzees do not, or vice versa.

There is not even a brain lobe in the chimpanzee brain that we do not share. In a last, desperate defence of his species against the theory of descent from the apes, the Victorian anatomist Sir Richard Owen once claimed that the hippocampus minor was a brain lobe unique to human brains, so it must be the seat of the soul and the proof of divine creation. He could not find the hippocampus minor in the freshly pickled brains of gorillas brought back from the Congo by the adventurer Paul du Chaillu. Thomas Henry Huxley furiously responded that the hippocampus minor was there in ape brains.

'No, it wasn't', said Owen. 'Was, too', said Huxley. Briefly, in 1861, the 'hippocampus question' was all the rage in Victorian London and found itself satirised in Punch and Charles Kingsley's novel The Water Babies. Huxley's point - of which there are loud modern echoes - was more than just anatomy:4 'It is not I who seek to base Man's dignity upon his great toe, or insinuate that we are lost if an Ape has a hippocampus minor. On the contrary, I have done my best to sweep away this vanity.' Huxley, by the way, was right.

After all, it is less than 300,000 human generations since the common ancestor of both species lived in central Africa. If you held hands with your mother, and she held hands with hers, and she with hers, the line would stretch only from New York to Washington before you were holding hands with the 'missing link' - the common ancestor with chimpanzees. Five million years is a long time, but evolution works not in years but in generations. Bacteria can pack in that many generations in just twenty-five years.

What did the missing link look like? By scratching back through the fossil record of direct human ancestors, scientists are getting remarkably close to knowing. The closest they have come is probably a little ape-man skeleton called Ardipithecus from just over four million years ago. Although a few scientists have speculated that Ardipithecus predates the missing link, it seems unlikely: the creature had a pelvis designed chiefly for upright walking; to modify that back to the gorilla-like pelvis design in the chimpanzee's lineage would have been drastically improbable. We need to find a fossil several million years older to be sure we are looking at a common ancestor of us and chimps. But we can guess, from Ardipithecus, what the missing link looked like: its brain was probably smaller than a modern chimp's. Its body was at least as agile on two legs as a modern chimp's. Its diet, too, was probably like a modern chimp's: mostly fruit and vegetation. Males were considerably bigger than females. It is hard, from the perspective of human beings, not to think of the missing link as more chimp-like than human-like.

Chimps might disagree, of course, but none the less it seems as if our lineage has seen grosser changes than theirs.

Like every ape that had ever lived, the missing link was probably a forest creature: a model, modern, Pliocene ape at home among the trees. At some point, its population became split in half. We know this because the separation of two parts of a population is often the event that sparks speciation: the two daughter populations gradually diverge in genetic make-up. Perhaps it was a mountain range, or a river (the Congo river today divides the chimpanzee from its sister species, the bonobo), or the creation of the western Rift Valley itself about five million years ago that caused the split, leaving human ancestors on the dry, eastern side. The French palaeontologist Yves Coppens has called this latter theory 'East Side Story'.

Perhaps, and the theories are getting more far-fetched now, it was the newly formed Sahara desert that isolated our ancestor in North Africa, while the chimp's ancestor remained to the south. Perhaps the sudden flooding, five million years ago, of the then-dry Mediterranean basin by a gigantic marine cataract at Gibraltar, a cataract one thousand times the volume of Niagara, suddenly isolated a small population of missing links on some large Mediterranean island, where they took to a life of wading in the water after fish and shellfish. This 'aquatic hypothesis' has all sorts of things going for it except hard evidence.

Whatever the mechanism, we can guess that our ancestors were a small, isolated band, while those of the chimpanzees were the main race. We can guess this because we know from the genes that human beings went through a much tighter genetic bottleneck (i.e., a small population size) than chimpanzees ever did: there is much less random variability in the human genome than the chimp genome.5

So let us picture this isolated group of animals on an island, real or virtual. Becoming inbred, flirting with extinction, exposed to the forces of the genetic founder effect (by which small populations can have large genetic changes thanks to chance), this little band of apes shares a large mutation: two of their chromosomes have become fused. Henceforth they can breed only with their own kind, even when the 'island' rejoins the 'mainland'. Hybrids between them and their mainland cousins are infertile. (I'm guessing again - but scientists show remarkably little curiosity about the reproductive isolation of our species: can we breed with chimps or not?) By now other startling changes have begun to come about. The shape of the skeleton has changed to allow an upright posture and a bipedal method of walking, which is well suited to long distances in even terrain; the knuckle-walking of other apes is better suited to shorter distances over rougher terrain. The skin has changed, too.

It is becoming less hairy and, unusually for an ape, it sweats profusely in the heat. These features, together with a mat of hair to shade the head and a radiator-shunt of veins in the scalp, suggest that our ancestors were no longer in a cloudy and shaded forest; they were walking in the open, in the hot equatorial sun.6

Speculate as much as you like about the ecology that selected such a dramatic change in our ancestral skeleton. Few suggestions can be ruled out or in. But by far the most plausible cause of these changes is the isolation of our ancestors in a relatively dry, open grassland environment. The habitat had come to us, not vice versa: in many parts of Africa the savannah replaced the forest about this time. Some time later, about 3.6 million years ago, on freshly wetted volcanic ash recently blown from the Sadiman volcano in what is now Tanzania, three hominids walked purposefully from south to north, the larger one in the lead, the middle-sized one stepping in the leader's footsteps and the small one, striding out to keep up, just a little to the left of the others. After a while, they paused and turned to the west briefly, then walked on, as upright as you or me.

The Laetoli fossilised footprints tell as plain a tale of our ancestors' upright walking as we could wish for.

Yet we still know too little. Were the Laetoli ape-people a male, a female and a child, or a male and two females? What did they eat? What habitat did they prefer? Eastern Africa was certainly growing drier as the Rift Valley interrupted the circulation of moist winds from the west, but that does not mean they sought dry places. Indeed, our need for water, our tendency to sweat, our peculiar adaptation to a diet rich in the oils and fats of fish and other factors (even our love of beaches and water sports) hint at something of an aquatic preference. We are really rather good at swimming. Were we at first to be found in riverine forests or at the edges of lakes?

In due time, human beings would turn dramatically carnivorous. A whole new species of ape-man, indeed several species, would appear before that, descendants of Laetoli-like creatures, but not ancestors of people, and probably dedicated vegetarians. They are called the robust australopithecines. The genes cannot help us here, because the robusts were dead ends. Just as we would never have known about our close cousinship with chimps if we could not read genes, so we would never have been aware of the existence of our many and closer australopithecine cousins if we had not found fossils (by 'we', I mean principally the Leakey family, Donald Johanson and others). Despite their robust name (which refers only to their heavy jaws), robust australopithecines were little creatures, smaller than chimps and stupider, but erect of posture and heavy of face: equipped with massive jaws supported by giant muscles. They were into chewing - probably grasses and other tough plants. They had lost their canine teeth the better to chew from side to side. Eventually, they became extinct, some time around a million years ago. We may never know much more about them. Perhaps we ate them.

After all, by then our ancestors were bigger animals, as big as modern people, maybe slightly bigger: strapping lads who would grow to nearly six foot, like the famous skeleton of the Nariokotome boy of 1.6 million years ago described by Alan Walker and Richard Leakey.7 They had begun to use stone tools as substitutes for tough teeth. Perfectly capable of killing and eating a defenceless robust australopithecine — in the animal world, cousins are not safe: lions kill leopards and wolves kill coyotes - these thugs had thick craniums and stone weapons (the two probably go together). Some competitive impulse was now marching the species towards its future explosive success, though nobody directed it - the brain just kept getting bigger and bigger. Some mathematical masochist has calculated that the brain was adding 150 million brain cells every hundred thousand years, the sort of useless statistic beloved of a tourist guide.

Big brains, meat eating, slow development, the 'neotenised' retention into adulthood of childhood characters (bare skin, small jaws and a domed cranium) - all these went together. Without the meat, the protein-hungry brain was an expensive luxury. Without the neotenised skull, there was no cranial space for the brain. Without the slow development, there was no time for learning to maximise the advantages of big brains.

Driving the whole process, perhaps, was sexual selection. Besides the changes to brains, another remarkable change was going on. Females were getting bigger relative to males. Whereas in modern chimpanzees and australopithecines and the earliest ape-men fossils, males were one-and-a-half times the size of females, in modern people the ratio is much less. The steady decline of that ratio in the fossil record is one of the most overlooked features of our pre-history. What it means is that the mating system of the species was changing. The promiscuity of the chimp, with its short sexual liaisons, and the harem polygamy of the gorilla, were being replaced with something much more monogamous: a declining ratio of sexual dimorphism is unambiguous evidence for that. But in a more monogamous system, there would now be pressure on each sex to choose its mate carefully; in polygamy, only the female is choosy. Long pair-bonds shackled each ape-man to its mate for much of its reproductive life: quality rather than quantity was suddenly important. For males it was suddenly vital to choose young mates, because young females had longer reproductive lives ahead of them. A preference for youthful, neotenous characters in either sex meant a preference for the large, domed cranium of youth, so it would have begun the drive towards bigger brains and all that followed therefrom.

Pushing us towards habitual monogamy, or at least pulling us further into it, was the sexual division of labour over food. Like no other species on the planet, we had invented a unique partnership between the sexes. By sharing plant food gathered by women, men had won the freedom to indulge the risky luxury of hunting for meat. By sharing hunted meat gathered by men, women had won access to high-protein, digestible food without having to abandon their young in seeking it. It meant that our species had a way of living on the dry plains of Africa that cut the risk of starvation; when meat was scarce, plant food filled the gap; when nuts and fruits were scarce, meat filled the gap. We had therefore acquired a high-protein diet without developing an intense specialisation for hunting the way the big cats did.

The habit acquired through the sexual division of labour had spread to other aspects of life. We had become compulsively good at sharing things, which had the new benefit of allowing each individual to specialise. It was this division of labour among specialists, unique to our species, that was the key to our ecological success, because it allowed the growth of technology. Today we live in societies that express the division of labour in ever more inventive and global ways.

From the here and now, these trends have a certain coherence. Big brains needed meat (vegans today avoid protein-deficiency only by eating pulses); food sharing allowed a meaty diet (because it freed the men to risk failure in pursuit of game); food sharing demanded big brains (without detailed calculating memories, you could be easily cheated by a freeloader); the sexual division of labour promoted monogamy (a pair-bond being now an economic unit); monogamy led to neotenous sexual selection (by putting a premium on youthfulness in mates). And so on, round and round the theories we go in a spiral of comforting justification, proving how we came to be as we are. We have built a scientific house of cards on the flimsiest foundations of evidence, but we have reason to believe that it will one day be testable. The fossil record will tell us only a little about behaviour; the bones are too dry and random to speak. But the genetic record will tell us more. Natural selection is the process by which genes change their sequences. In the process of changing, though, those genes laid down a record of our four-billion-year biography as a biological lineage. They are, if we only know how to read them, a more valuable source of information on our past than the manuscripts of the Venerable Bede. In other words, a record of our past is etched into our genes.

Some two per cent of the genome tells the story of our different ecological and social evolution from that of chimpanzees, and theirs from us. When the genome of a typical human being has been fully transcribed into our computers, when the same has been done for the average chimpanzee, when the active genes have been extracted from the noise, and when the differences come to be listed, we will have an extraordinary glimpse of the pressures of the Pleistocene era on two different species derived from a common stock. The genes that will be the same will be the genes for basic biochemistry and body planning. Probably the only differences will be in genes for regulating growth and hormonal development. Somehow in their digital language, these genes will tell the foot of a human foetus to grow into a flat object with a heel and a big toe, whereas the same genes in a chimpanzee tell the foot of a chimp foetus to grow into a more curved object with less of a heel and longer, more prehensile toes.

It is mind-boggling even to try to imagine how that can be done - science still has only the vaguest clues about how growth and form are generated by genes - but that genes are responsible is not in doubt. The differences between human beings and chimpanzees are genetic differences and virtually nothing else. Even those who would stress the cultural side of the human condition and deny or doubt the importance of genetic differences between human individuals or races, accept that the differences between us and other species are primarily genetic. Suppose the nucleus of a chimpanzee cell were injected into an enucleated human egg and that egg were implanted into a human womb, and the resulting baby, if it survived to term, were reared in a human family. What would it look like? You do not even need to do the (highly unethical) experiment to know the answer: a chimpanzee. Although it started with human cytoplasm, used a human placenta and had a human upbringing, it would not look even partly human.

Photography provides a helpful analogy. Imagine you take a photograph of a chimpanzee. To develop it you must put it in a bath of developer for the requisite time, but no matter how hard you try, you cannot develop a picture of a human being on the negative by changing the formula of the developer. The genes are the negative; the womb is the developer. Just as a photograph needs to be immersed in a bath of developer before the picture will appear, so the recipe for a chimpanzee, written in digital form in the genes of its egg, needs the correct milieu to become an adult - the nutrients, the fluids, the food and the care - but it already has the information to make a chimpanzee.

The same is not quite true of behaviour. The typical chimpanzee's hardware can be put together in the womb of a foreign species, but its software would be a little awry. A baby chimpanzee would be as socially confused if reared by human beings as Tarzan would be if reared by chimps. Tarzan, for instance, would not learn to speak, and a human-reared chimp would not learn precisely how to appease dominant animals and intimidate subordinates, to make tree nests or to fish for termites. In the case of behaviour, genes are not sufficient, at least in apes.

But they are necessary. If it is mind-boggling to imagine how small differences in linear digital instructions can direct the two per cent difference between a human body and a chimpanzee body, how much more mind-boggling is it to imagine that a few changes in the same instructions can alter the behaviour of a chimpanzee so precisely. I wrote glibly of the mating system of different apes - the promiscuous chimpanzee, the harem-polygamous gorilla and the long-pair-bond human being. In doing so I assumed, even more glibly, that every species behaves in a characteristic way, which, further, assumes that it is somehow at least partly genetically constrained or influenced. How can a bunch of genes, each one a string of quaternary code, make an animal polygamous or monogamous?

Answer: I do not have the foggiest idea, but that it can do so I have no doubt. Genes are recipes for both anatomy and behaviour.


CHROMOSOME 3

History

We've discovered the secret of life.

Francis Crick, 28 February 1953

Though he was only forty-five in 1902, Archibald Garrod was already a pillar of the British medical establishment. He was the son of a knighted professor, the famous Sir Alfred Baring Garrod, whose treatise on that most quintessential of upper-class afflictions, gout, was reckoned a triumph of medical research. His own career was effortlessly distinguished and in due course the inevitable knighthood (for medical work in Malta during the First World War) would be followed by one of the most glittering prizes of all: the Regius professorship of medicine at Oxford in succession to the great Sir William Osler.

You can just picture him, can you not? The sort of crusty and ceremonious Edwardian who stood in the way of scientific progress, stiff in collar, stiff in lip and stiff in mind. You would be wrong.

In that year, 1902, Archibald Garrod risked a conjecture that would reveal him to be a man far ahead of his time and somebody who had all but unknowingly put his finger on the answer to the greatest biological mystery of all time: what is a gene? Indeed, so brilliant was his understanding of the gene that he would be long dead before anybody got the point of what he was saying: that a gene was a recipe for a single chemical. What is more, he thought he had found one.

In his work at St Bartholomew's Hospital and Great Ormond Street in London, Garrod had come across a number of patients with a rare and not very serious disease, known as alkaptonuria. Among other more uncomfortable symptoms such as arthritis, their urine and ear wax turned reddish or inky black on exposure to the air, depending on what they had been eating. In 1901, the parents of one of these patients, a little boy, had a fifth child who also had the affliction. That set Garrod to thinking about whether the problem ran in families. He noticed that the two children's parents were first cousins. So he went back and re-examined the other cases: three of the four families were first-cousin marriages, and of the seventeen alkaptonuria cases he saw, eight were second cousins of each other. But the affliction was not simply passed on from parent to child. Most sufferers had normal children, but the disease could reappear later in their descendants. Luckily, Garrod was abreast of the latest biological thinking. His friend William Bateson was one of those excited by the rediscovery just two years before of the experiments of Gregor Mendel, and was writing tomes to popularise and defend the new creed of Mendelism, so Garrod knew he was dealing with a Mendelian recessive - a character that could be carried by one generation but would only be expressed if inherited from both parents. He even used Mendel's botanical terminology, calling such people 'chemical sports'.

This gave Garrod an idea. Perhaps, he thought, the reason that the disease only appeared in those with a double inheritance was that something was missing. Being well versed not only in genetics but also in chemistry, he knew that the black urine and ear wax were caused by a build-up of a substance called homogentisate. Homogentisate might be a normal product of the body's chemistry set, but one that was in most people then broken down and disposed of. The reason for the build-up, Garrod supposed, was that the catalyst that was meant to be breaking down the homogentisate was not working. That catalyst, he thought, must be an enzyme made of protein, and must be the sole product of an inherited factor (or gene, as we would now say). In the afflicted people, the gene produced a defective enzyme; in the carriers this did not matter because the gene inherited from the other parent could compensate.

Thus was born Garrod's bold hypothesis of the 'inborn errors of metabolism', with its far-reaching assumption that genes were there to produce chemical catalysts, one gene to each highly specialised catalyst. Perhaps that was what genes were: devices for making proteins. 'Inborn errors of metabolism', Garrod wrote, 'are due to the failure of a step in the metabolic sequence due to loss or malfunction of an enzyme.' Since enzymes are made of protein, they must be the 'seat of chemical individuality'. Garrod's book, published in 1909, was widely and positively reviewed, but his reviewers comprehensively missed the point. They thought he was talking about rare diseases, not something fundamental to all life. The Garrod theory lay neglected for thirty-five years and had to be rediscovered afresh. By then, genetics was exploding with new ideas and Garrod had been dead for a decade.1

We now know that the main purpose of genes is to store the recipe for making proteins. It is proteins that do almost every chemical, structural and regulatory thing that is done in the body: they generate energy, fight infection, digest food, form hair, carry oxygen and so on and on. Every single protein in the body is made from a gene by a translation of the genetic code. The same is not quite true in reverse: there are genes that are never translated into protein, such as the ribosomal-RNA gene of chromosome 1, but even that is involved in making other proteins. Garrod's conjecture is basically correct: what we inherit from our parents is a gigantic list of recipes for making proteins and for making protein-making machines - and little more.

Garrod's contemporaries may have missed his point, but at least they honoured him. The same could not be said of the man on whose shoulders he stood, Gregor Mendel. You could hardly imagine a more different background from Garrod's than Mendel's. Christened Johann Mendel, he was born in the tiny village of Heinzendorf (now Hynčice) in northern Moravia in 1822. His father, Anton, was a smallholder who paid his rent in work for his landlord; his health and livelihood were shattered by a falling tree when Johann was sixteen and doing well at the grammar school in Troppau. Anton sold the farm to his son-in-law so he could afford the fees for his son at school and then at university in Olmütz. But it was a struggle and Johann needed a wealthier sponsor, so he became an Augustinian friar, taking the name Brother Gregor. He trundled through theological college in Brünn (now Brno) and emerged a priest. He did a stint as a parish priest, but it was not a success. He tried to become a science teacher after studying at Vienna University, but failed the examination.

Back to Brünn he went, a thirty-one-year-old nonentity, fit only for monastic life. He was good at mathematics and chess playing, had a decent head for figures and possessed a cheerful disposition. He was also a passionate gardener, having learnt from his father how to graft and breed fruit trees. It is here, in the folk knowledge of the peasant culture, that the roots of his insight truly lay. The rudiments of particulate inheritance were dimly understood already by the breeders of cattle and apples, but nobody was being systematic. 'Not one [experiment]', wrote Mendel, 'has been carried out to such an extent and in such a way as to make it possible to determine the number of different forms with certainty according to their separate generations, or definitely to ascertain their statistical relations.' You can hear the audience dozing off already.

So Father Mendel, aged thirty-four, started a series of experiments on peas in the monastery gardens that were to last eight years, involve the planting of over 30,000 different plants - 6,000 in 1860 alone - and eventually change the world forever. Afterwards, he knew what he had done, and published it clearly in the proceedings of the Brünn society for the study of natural science, a journal that found its way to all the best libraries. But recognition never came and Mendel gradually lost interest in the gardens as he rose to become the abbot of Brünn, a kindly, busy and maybe not very pious friar (good food gets more mention in his writing than God).

His last years were taken up with an increasingly bitter and lonely campaign against a new tax levied on monasteries by the government, Mendel being the last abbot to pay it. Perhaps his greatest claim to fame, he might have reflected in old age, was that he made Leoš Janáček, a talented nineteen-year-old boy in the choir school, the choirmaster of Brünn.

In the garden, Mendel had been hybridising: crossing different varieties of pea plant. But this was no amateur gardener playing at science; this was a massive, systematic and carefully thought-out experiment. Mendel chose seven pairs of varieties of peas to cross. He crossed round-seeded peas with wrinkled ones; yellow cotyledons with green ones; inflated seed pods with wrinkled seed pods; grey seed coats with white seed coats; green unripe pods with yellow unripe pods; axial flowers with terminal flowers; tall stems with dwarf stems. How many more he tried we do not know; all of these not only breed true, but are due to single genes, so he must have chosen them knowing already from preliminary work what result to expect. In every case, the resulting hybrids were always like just one parent. The other parent's essence seemed to have vanished. But it had not: Mendel allowed the hybrids to self-fertilise and the essence of the missing grandparent reappeared intact in roughly one-quarter of the cases. He counted and counted - 19,959 plants in the second generation, with the dominant characters outnumbering the recessives by 14,949 to 5,010, or 2.98 to 1. It was, as Sir Ronald Fisher pointed out in the next century, too suspiciously close to a ratio of three. Mendel, remember, was good at mathematics and he knew well before the experiments were over what equation his peas were obeying.2

Like a man possessed, Mendel turned from peas to fuchsias, maize and other plants. He found the same results. He knew that he had discovered something profound about heredity: characteristics do not mix. There is something hard, indivisible, quantum and particulate at the heart of inheritance. There is no mingling of fluids, no blending of blood; there is instead a temporary joining together of lots of little marbles. In retrospect, this was obvious all along. How else could people account for the fact that a family might contain a child with blue eyes and a child with brown? Darwin, who none the less based his theory on blending inheritance, hinted at the problem several times. 'I have lately been inclined to speculate', he wrote to Huxley in 1857, 'very crudely and indistinctly, that propagation by true fertilisation will turn out to be a sort of mixture, and not true fusion, of two distinct individuals . . . I can understand on no other view the way in which crossed forms go back to so large an extent to ancestral forms.'3

Darwin was not a little nervous on the subject. He had recently come under attack from a fierce Scottish professor of engineering, strangely named Fleeming Jenkin, who had pointed out the simple and unassailable fact that natural selection and blending inheritance did not mix. If heredity consisted of blended fluids, then Darwin's theory probably would not work, because each new and advantageous change would be lost in the general dilution of descent. Jenkin illustrated his point with the story of a white man attempting to convert an island of black people to whiteness merely by breeding with them. His white blood would soon be diluted to insignificance.

In his heart Darwin knew Jenkin was right, and even the usually ferocious Thomas Henry Huxley was silenced by Jenkin's argument, but Darwin also knew that his own theory was right. He could not square the two. If only he had read Mendel.

Many things are obvious in retrospect, but still take a flash of genius to become plain. Mendel's achievement was to reveal that the only reason most inheritance seems to be a blend is because it involves more than one particle. In the early nineteenth century John Dalton had proved that water was actually made up of billions of hard, irreducible little things called atoms and had defeated the rival continuity theorists. So now Mendel had proved the atomic theory of biology. The atoms of biology might have been called all sorts of things: among the names used in the first years of this century were factor, gemmule, plastidule, pangene, biophor, id and idant. But it was 'gene' that stuck.


For four years, starting in 1866, Mendel sent his papers and his ideas to Karl-Wilhelm Nägeli, professor of botany in Munich. With increasing boldness he tried to point out the significance of what he had found. For four years Nägeli missed the point. He wrote back to the persistent monk polite but patronising letters, and told him to try breeding hawkweed. He could not have given more mischievous advice if he had tried: hawkweed is apomictic, that is, it needs pollen to breed but does not incorporate the genes of the pollinating partner, so cross-breeding experiments give strange results. After struggling with hawkweed Mendel gave up and turned to bees. The results of his extensive experiments on the breeding of bees have never been found. Did he discover their strange 'haplo-diploid' genetics?

Nägeli meanwhile published an immense treatise on heredity that not only failed to mention Mendel's discovery; it also gave a perfect example of it from Nägeli's own work - and still missed the point. Nägeli knew that if you crossed an angora cat with another breed, the angora coat disappeared completely in the next generation, but re-emerged intact in the kittens of the third generation. A clearer example of a Mendelian recessive could hardly be found.

Yet even in his lifetime Mendel came tantalisingly close to full recognition. Charles Darwin, normally so diligent at gleaning ideas from the work of others, even recommended to a friend a book, by W. O. Focke, that contained fourteen different references to Mendel's paper. Yet he seems not to have noticed them himself.

Mendel's fate was to be rediscovered, in 1900, long after his own and Darwin's deaths. It happened almost simultaneously in three different places. Each of his rediscoverers - Hugo de Vries, Carl Correns and Erich von Tschermak, all botanists - had laboriously duplicated Mendel's work on different species before finding Mendel's paper.

Mendelism took biology by surprise. Nothing about evolutionary theory demanded that heredity should come in lumps. Indeed, the notion seemed to undermine everything that Darwin had striven to establish. Darwin said that evolution was the accumulation of slight and random changes through selection. If genes were hard things that could emerge intact from a generation in hiding, then how could they change gradually or subtly? In many ways, the early twentieth century saw the triumph of Mendelism over Darwinism.

William Bateson expressed the views of many when he hinted that particulate inheritance at least put limits on the power of natural selection. Bateson was a man with a muddled mind and a leaden prose style. He believed that evolution occurred in large leaps from one form to another, leaving no intermediates. In pursuit of this eccentric notion, he had published a book in 1894 arguing that inheritance was particulate and had been furiously attacked by 'true' Darwinists ever since. Little wonder he welcomed Mendel with open arms and was the first to translate his papers into English. 'There is nothing in Mendelian discovery which runs counter to the cardinal doctrine that species have arisen [by natural selection]', wrote Bateson, sounding like a theologian claiming to be the true interpreter of St Paul. 'Nevertheless, the result of modern inquiry has unquestionably been to deprive that principle of those supernatural attributes with which it has sometimes been invested . . . It cannot in candour be denied that there are passages in the works of Darwin which in some measure give countenance to these abuses of the principle of Natural Selection, but I rest easy in the certainty that had Mendel's paper come into his hands, those passages would have been immediately revised.'4

But the very fact that the dreaded Bateson was Mendelism's champion led European evolutionists to be suspicious of it. In Britain, the bitter feud between Mendelians and 'biometricians' persisted for twenty years. As much as anything this passed the torch to the United States, where the argument was less polarised. In 1903 an American geneticist called Walter Sutton noticed that chromosomes behave just like Mendelian factors: they come in pairs, one from each parent. Thomas Hunt Morgan, the father of American genetics, promptly became a late convert to Mendelism, so Bateson, who disliked Morgan, gave up being right and fought against the chromosomal theory. By such petty feuds is the history of science often decided. Bateson sank into obscurity while Morgan went on to great things as the founder of a productive school of genetics and the man who lent his name to the unit of genetic distance: the centimorgan. In Britain, it was not until the sharp, mathematical mind of Ronald Fisher was brought to bear upon the matter in 1918 that Darwinism and Mendelism were at last reconciled: far from contradicting Darwin, Mendel had brilliantly vindicated him.

'Mendelism', said Fisher, 'supplied the missing parts of the structure erected by Darwin.'

Yet the problem of mutation remained. Darwinism demanded variety upon which to feed. Mendelism supplied stability instead. If genes were the atoms of biology, then changing them was as heretical as alchemy. The breakthrough came with the first artificial induction of mutation by somebody as different from Garrod and Mendel as could be imagined. Alongside an Edwardian doctor and an Augustinian friar we must place the pugnacious Hermann Joe Muller.

Muller was typical of the many brilliant, Jewish scientific refugees crossing the Atlantic in the 1930s in every way except one: he was heading east. A native New Yorker, son of the owner of a small metal-casting business, he had been drawn to genetics at Columbia University, but fell out with his mentor, Morgan, and moved to the University of Texas in 1920. There is a whiff of anti-semitism about Morgan's attitude to the brilliant Muller, but the pattern was all too typical. Muller fought with everybody all his life. In 1932, his marriage on the rocks and his colleagues stealing his ideas (so he said), he attempted suicide, then left Texas for Europe.

Muller's great discovery, for which he was to win the Nobel prize, was that genes are artificially mutable. It was like Ernest Rutherford's discovery a few years before that atomic elements were transmutable and that the word 'atom', meaning in Greek uncuttable, was inappropriate. In 1926, he asked himself, '[Is] mutation unique among biological processes in being itself outside the reach of modification or control - that it occupies a position similar to that till recently characteristic of atomic transmutation in physical science?' The following year he answered the question. By bombarding fruit flies with X-rays, Muller caused their genes to mutate so that their offspring sported new deformities. Mutation, he wrote, 'does not stand as an unreachable god playing its pranks upon us from some impregnable citadel in the germplasm.' Like atoms, Mendel's particles must have some internal structure, too. They could be changed by X-rays. They were still genes after mutation, but not the same genes.

Artificial mutation kick-started modern genetics. Using Muller's X-rays, in 1940 two scientists named George Beadle and Edward Tatum created mutant versions of a bread mould called Neurospora. They then worked out that the mutants failed to make a certain chemical because they lacked the working version of a certain enzyme. They proposed a law of biology, which caught on and has proved to be more or less correct: one gene specifies one enzyme.

Geneticists began to chant it under their breath: one gene, one enzyme. It was Garrod's old conjecture in modern, biochemical detail. Three years later came Linus Pauling's remarkable deduction that a nasty form of anaemia afflicting mostly black people, in which the red cells turned into sickle shapes, was caused by a fault in the gene for the protein haemoglobin. That fault behaved like a true Mendelian mutation. Things were gradually falling into place: genes were recipes for proteins; mutations were altered proteins made by altered genes.

Muller, meanwhile, was out of the picture. In 1932 his fervent socialism and his equally fervent belief in the selective breeding of human beings, eugenics (he wanted to see children carefully bred with the character of Marx or Lenin, though in later editions of his book he judiciously altered this to Lincoln and Descartes), led him across the Atlantic to Europe. He arrived in Berlin just a few months before Hitler came to power. He watched, horrified, as the Nazis smashed the laboratories of his boss, Oskar Vogt, for not expelling the Jews under his charge.

Muller went east once more, to Leningrad, arriving in the laboratory of Nikolay Vavilov just before the anti-Mendelist Trofim Lysenko caught the ear of Stalin and began his persecution of Mendelian geneticists in support of his own crackpot theories: that wheat plants, like Russian souls, could be trained rather than bred to new regimes; and that those who believed otherwise should not be persuaded, but shot. Vavilov died in prison. Ever hopeful, Muller sent Stalin a copy of his latest eugenic book but, hearing it had not gone down well, found an excuse to get out of the country just in time. He went to the Spanish Civil War, where he worked in the blood bank of the International Brigade, and thence to Edinburgh, arriving with his usual ill luck just in time for the outbreak of the Second World War. He found it hard to do science in a blacked-out Scottish winter, wearing gloves in the laboratory, and he tried desperately to return to America. But nobody wanted a belligerent, prickly socialist who lectured ineptly and had been living in Soviet Russia.

Eventually Indiana University gave him a job. The following year he won the Nobel prize for his discovery of artificial mutation.

But still the gene itself remained an inaccessible and mysterious thing, its ability to specify precise recipes for proteins made all the more baffling by the fact that it must itself be made of protein; nothing else in the cell seemed complicated enough to qualify. True, there was something else in chromosomes: that dull little nucleic acid called DNA. It had first been isolated, from the pus-soaked bandages of wounded soldiers, in the German town of Tübingen in 1869 by a Swiss doctor named Friedrich Miescher. Miescher himself guessed that DNA might be the key to heredity, writing to his uncle in 1892 with amazing prescience that DNA might convey the hereditary message 'just as the words and concepts of all languages can find expression in 24-30 letters of the alphabet'. But DNA had few fans; it was known to be a comparatively monotonous substance: how could it convey a message in just four varieties?5

Drawn by the presence of Muller, there arrived in Bloomington, Indiana, a precocious and confident nineteen-year-old, already equipped with a bachelor's degree, named James Watson. He must have seemed an unlikely solution to the gene problem, but the solution he was. Trained at Indiana University by the Italian émigré Salvador Luria (predictably, Watson did not hit it off with Muller), Watson developed an obsessive conviction that genes were made of DNA, not protein. In search of vindication, he went to Denmark, then, dissatisfied with the colleagues he found there, to Cambridge in October 1951. Chance threw him together in the Cavendish laboratory with a mind of equal brilliance captivated by the same conviction about the importance of DNA, Francis Crick.

The rest is history. Crick was the opposite of precocious. Already thirty-five, he still had no PhD (a German bomb had destroyed the apparatus at University College, London, with which he was supposed to have measured the viscosity of hot water under pressure - to his great relief), and his sideways lurch into biology from a stalled career in physics was not, so far, a conspicuous success. He had already fled from the tedium of one Cambridge laboratory, where he was employed to measure the viscosity of cells forced to ingest particles, and was busy learning crystallography at the Cavendish. But he did not have the patience to stick to his own problems, or the humility to stick to small questions. His laugh, his confident intelligence and his knack of telling people the answers to their own scientific questions were getting on people's nerves at the Cavendish. Crick was also vaguely dissatisfied with the prevailing obsession with proteins. The structure of the gene was the big question and DNA, he suspected, was a part of the answer. Lured by Watson, he played truant from his own research to indulge in DNA games. So was born one of the great, amicably competitive and therefore productive collaborations in the history of science: the young, ambitious, supple-minded American who knew some biology and the effortlessly brilliant but unfocused older Briton who knew some physics. It was an exothermic reaction.

Within a few short months, using other people's laboriously gathered but under-analysed facts, they had made possibly the greatest scientific discovery of all time, the structure of DNA. Not even Archimedes leaping from his bath had been granted greater reason to boast, as Francis Crick did in the Eagle pub on 28 February 1953: 'We've discovered the secret of life.' Watson was mortified; he still feared that they might have made a mistake.


But they had not. All was suddenly clear: DNA contained a code written along the length of an elegant, intertwined staircase of a double helix, of potentially infinite length. That code copied itself by means of chemical affinities between its letters and spelt out the recipes for proteins by means of an as yet unknown phrasebook linking DNA to protein. The stunning significance of the structure of DNA was how simple it made everything seem and yet how beautiful. As Richard Dawkins has put it,6 'What is truly revolutionary about molecular biology in the post-Watson-Crick era is that it has become digital . . . the machine code of the genes is uncannily computer-like.'

A month after the Watson-Crick structure was published, Britain crowned a new queen and a British expedition conquered Mount Everest on the same day. Apart from a small piece in the News Chronicle, the double helix did not make the newspapers. Today most scientists consider it the most momentous discovery of the century, if not the millennium.

Many frustrating years of confusion were to follow the discovery of DNA's structure. The code itself, the language by which the gene expressed itself, stubbornly retained its mystery. Finding the structure had been, for Watson and Crick, almost easy - a mixture of guesswork, good physics and inspiration. Cracking the code required true brilliance. It was a four-letter code, obviously: A, C, G and T. And it was translated into the twenty-letter code of amino acids that make up proteins, almost certainly. But how? Where? And by what means?

Most of the best ideas that led to the answer came from Crick, including what he called the adaptor molecule - what we now call transfer RNA. Independently of all evidence, Crick arrived at the conclusion that such a molecule must exist. It duly turned up. But Crick also had an idea that was so good it has been called the greatest wrong theory in history. Crick's 'comma-free' code is more elegant than the one Mother Nature uses. It works like this. Suppose that the code uses three letters in each word (if it uses two, that only gives sixteen combinations, which is too few). Suppose that it has no commas, and nogapsbetweenthewords. Now suppose that it excludes all words that can be misread if you start in the wrong place. So, to take an analogy used by Brian Hayes, imagine all three-letter English words that can be written with the four letters A, S, E and T: ass, ate, eat, sat, sea, see, set, tat, tea and tee. Now eliminate those that can be misread as another word if you start in the wrong place. For example, the phrase ateateat can be misread as 'a tea tea t' or as 'at eat eat' or as 'ate ate at'. Only one of these three words can survive in the code.

Crick did the same with A, C, G and T. He eliminated AAA, CCC, GGG and TTT for a start. He then grouped the remaining sixty words into threes, each group containing the same three letters in the same rotating order. For example, ACT, CTA and TAC are in one group, because C follows A, T follows C, and A follows T in each; while ATC, TCA and CAT are in another group. Only one word in each group survived. Exactly twenty are left - and there are twenty amino acid letters in the protein alphabet! A four-letter code gives a twenty-letter alphabet.

Crick cautioned in vain against taking his idea too seriously. 'The arguments and assumptions which we have had to employ to deduce this code are too precarious for us to feel much confidence in it on purely theoretical grounds. We put it forward because it gives the magic number - twenty - in a neat manner and from reasonable physical postulates.' But the double helix did not have much evidence going for it at first, either. Excitement mounted. For five years everybody assumed it was right.

But the time for theorising was past. In 1961, while everybody else was thinking, Marshall Nirenberg and Johann Matthaei decoded a 'word' of the code by the simple means of making a piece of RNA out of pure U (uracil - the equivalent of DNA's T) and putting it in a solution of amino acids. The ribosomes made a protein by stitching together lots of phenylalanines. The first word of the code had been cracked: UUU means phenylalanine. The comma-free code was wrong, after all. Its great beauty had been that it cannot have what are called reading-shift mutations, in which the loss of one letter makes nonsense of all that follows. Yet the version that Nature has instead chosen, though less elegant, is more tolerant of other kinds of errors. It contains much redundancy, with many different three-letter words meaning the same thing.7

By 1965 the whole code was known and the age of modern genetics had begun. The pioneering breakthroughs of the 1960s became the routine procedures of the 1990s. And so, in 1995, science could return to Archibald Garrod's long-dead patients with their black urine and say with confidence exactly what spelling mistakes occurred in which gene to cause their alkaptonuria. The story is twentieth-century genetics in miniature. Alkaptonuria, remember, is a very rare and not very dangerous disease, fairly easily treated by dietary advice, so it had lain untouched by science for many years.

In 1995, lured by its historical significance, two Spaniards took up the challenge. Using a fungus called Aspergillus, they eventually created a mutant that accumulated a purple pigment in the presence of phenylalanine: homogentisate. As Garrod suspected, this mutant had a defective version of the protein called homogentisate dioxygenase.

By breaking up the fungal genome with special enzymes, identifying the bits that were different from normal and reading the code therein, they eventually pinned down the gene in question. They then searched through a library of human genes hoping to find one similar enough to stick to the fungal DNA. They found it, on the long arm of chromosome 3, a 'paragraph' of DNA 'letters' that shares fifty-two per cent of its letters with the fungal gene. Fishing out the gene in people with alkaptonuria and comparing it with those who do not have it reveals that they have just one different letter that counts, either the 690th or the 901st. In each case just a single letter change messes up the protein so it can no longer do its job.8
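
That 'fifty-two per cent' is the kind of figure a letter-by-letter comparison produces. A toy sketch with invented sequences (real comparisons must first align the two genes, gaps and all):

    def percent_identity(seq_a, seq_b):
        # Fraction of positions at which two aligned sequences agree.
        matches = sum(a == b for a, b in zip(seq_a, seq_b))
        return 100 * matches / len(seq_a)

    human = "ATGGCCATTGTAATG"     # invented 'human' stretch
    fungal = "ATGGACATCGTTATG"    # invented 'fungal' stretch
    print(percent_identity(human, fungal))   # 80.0 on this toy pair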

This gene is the epitome of a boring gene, doing a boring chemical job in boring parts of the body, causing a boring disease when broken. Nothing about it is surprising or unique. It cannot be linked with IQ or homosexuality, it tells us nothing about the origin of life, it is not a selfish gene, it does not disobey Mendel's laws, it cannot kill or maim. It is to all intents and purposes exactly the same gene in every creature on the planet - even bread mould has it and uses it for precisely the same job that we do. Yet the gene for homogentisate dioxygenase deserves its little place in history, for its story is in microcosm the story of genetics itself. And even this dull little gene now reveals a beauty that would have dazzled Gregor Mendel, because it is a concrete expression of his abstract laws: a story of microscopic, coiled, matching helices that work in pairs, of four-letter codes, and the chemical unity of life.


CHROMOSOME 4

Fate

Sir, what ye're telling us is nothing but scientific Calvinism.
Anonymous Scottish soldier to William Bateson after a popular lecture1

Open any catalogue of the human genome and you will be confronted not with a list of human potentialities, but a list of diseases, mostly ones named after pairs of obscure central-European doctors. This gene causes Niemann-Pick disease; that one causes Wolf-Hirschhorn syndrome. The impression given is that genes are there to cause diseases. 'New gene for mental illness', announces a website on genes that reports the latest news from the front: 'The gene for early-onset dystonia. Gene for kidney cancer isolated. Autism linked to serotonin transporter gene. A new Alzheimer's gene. The genetics of obsessive behaviour.'

Yet to define genes by the diseases they cause is about as absurd as defining organs of the body by the diseases they get: livers are there to cause cirrhosis, hearts to cause heart attacks and brains to cause strokes. It is a measure, not of our knowledge but of our ignorance, that this is the way the genome catalogues read. It is literally true that the only thing we know about some genes is that their malfunction causes a particular disease. This is a pitifully small thing to know about a gene, and a terribly misleading one. It leads to the dangerous shorthand that runs as follows: 'X has got the Wolf-Hirschhorn gene.' Wrong. We all have the Wolf-Hirschhorn gene, except, ironically, people who have Wolf-Hirschhorn syndrome. Their sickness is caused by the fact that the gene is missing altogether. In the rest of us, the gene is a positive, not a negative force. The sufferers have the mutation, not the gene.

Wolf-Hirschhorn syndrome is so rare and so serious - its gene is so vital - that its victims die young. Yet the gene, which lies on chromosome 4, is actually the most famous of all the 'disease' genes because of a very different disease associated with it: Huntington's chorea. A mutated version of the gene causes Huntington's chorea; a complete lack of the gene causes Wolf-Hirschhorn syndrome. We know very little about what the gene is there to do in everyday life, but we now know in excruciating detail how and why and where it can go wrong and what the consequence for the body is. The gene contains a single 'word', repeated over and over again: CAG, CAG, CAG, CAG . . . The repetition continues sometimes just six times, sometimes thirty, sometimes more than a hundred times. Your destiny, your sanity and your life hang by the thread of this repetition.

If the 'word' is repeated thirty-five times or fewer, you will be fine. Most of us have about ten to fifteen repeats. If the 'word' is repeated thirty-nine times or more, you will in mid-life slowly start to lose your balance, grow steadily more incapable of looking after yourself and die prematurely. The decline begins with a slight deterioration of the intellectual faculties, is followed by jerking limbs and descends into deep depression, occasional hallucination and delusions. There is no appeal: the disease is incurable. But it takes between fifteen and twenty-five horrifying years to run its course. There are few worse fates. Indeed, many of the early psychological symptoms of the disease are just as bad in those who live in an affected family but do not get the disease: the strain and stress of waiting for it to strike are devastating.


The cause is in the genes and nowhere else. Either you have the Huntington's mutation and will get the disease or not. This is determinism, predestination and fate on a scale of which Calvin never dreamed. It seems at first sight to be the ultimate proof that the genes are in charge and that there is nothing we can do about it. It does not matter if you smoke, or take vitamin pills, if you work out or become a couch potato. The age at which the madness will appear depends strictly and implacably on the number of repetitions of the 'word' CAG in one place in one gene. If you have thirty-nine, you have a ninety per cent probability of dementia by the age of seventy-five and will on average get the first symptoms at sixty-six; if forty, on average you will succumb at fifty-nine; if forty-one, at fifty-four; if forty-two, at thirty-seven; and so on until those who have fifty repetitions of the 'word' will lose their minds at roughly twenty-seven years of age. The scale is this: if your chromosomes were long enough to stretch around the equator, the difference between health and insanity would be less than one extra inch.2
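
The dose-response is stark enough to write down as a bare table. A toy sketch using only the figures quoted above (not a clinical calculator):

    # Average onset ages quoted above, keyed by CAG repeat count.
    ONSET_AGE = {39: 66, 40: 59, 41: 54, 42: 37, 50: 27}

    def prognosis(repeats):
        if repeats <= 35:
            return "fine"
        if repeats < 39:
            return "borderline; the text quotes no figure here"
        if repeats in ONSET_AGE:
            return f"average onset around age {ONSET_AGE[repeats]}"
        return "affected; onset earlier the longer the repeat"

    for count in (15, 39, 42, 50):
        print(count, "repeats:", prognosis(count))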

No horoscope matches this accuracy. No theory of human causality, Freudian, Marxist, Christian or animist, has ever been so precise. No prophet in the Old Testament, no entrail-gazing oracle in ancient Greece, no crystal-ball gipsy clairvoyant on the pier at Bognor Regis ever pretended to tell people exactly when their lives would fall apart, let alone got it right. We are dealing here with a prophecy of terrifying, cruel and inflexible truth. There are a billion three-letter 'words' in your genome. Yet the length of just this one little motif is all that stands between each of us and mental illness.

Huntington's disease, which became notorious when it killed the folk singer Woody Guthrie in 1967, was first diagnosed by a doctor, George Huntington, in 1872 on the eastern tip of Long Island. He noticed that it seemed to run in families. Later work revealed that the Long Island cases were part of a much larger family tree originating in New England. In twelve generations of this pedigree more than a thousand cases of the disease could be found. All were descended from two brothers who emigrated from Suffolk in 1630.

Several of their descendants were burnt as witches in Salem in 1693, possibly because of the alarming nature of the disease. But because the mutation only makes itself manifest in middle age, when people have already had children, there is little selective pressure on it to die out naturally. Indeed, in several studies, those with the mutations appear to breed more prolifically than their unaffected siblings.3

Huntington's was the first completely dominant human genetic disease to come to light. That means it is not like alkaptonuria in which you must have two copies of the mutant gene, one from each parent, to suffer the symptoms. Just one copy of the mutation will do. The disease seems to be worse if inherited from the father and the mutation tends to grow more severe, by the lengthening of the repeat, in the children of progressively older fathers.

In the late 1970s, a determined woman set out to find the Huntington gene. Following Woody Guthrie's terrible death from the disease, his widow started the Committee to Combat Huntington's Chorea; she was joined by a doctor named Milton Wexler whose wife and three brothers-in-law were suffering from the disease. Wexler's daughter, Nancy, knew she stood a fifty per cent chance of having the mutation herself and she became obsessed with finding the gene.

She was told not to bother. The gene would prove impossible to find. It would be like looking for a needle in a haystack the size of America. She should wait a few years until the techniques were better and there was a realistic chance. 'But', she wrote, 'if you have Huntington's disease, you do not have time to wait.' Acting on the report of a Venezuelan doctor, Americo Negrette, in 1979 she flew to Venezuela to visit three rural villages called San Luis, Barranquitas and Laguneta on the shores of Lake Maracaibo. Actually a huge, almost landlocked gulf of the sea, Lake Maracaibo lies in the far west of Venezuela, beyond the Cordillera de Merida.

The area contained a vast, extended family with a high incidence of Huntington's disease. The story they told each other was that the affliction came from an eighteenth-century sailor, and Wexler was able to trace the family tree of the disease back to the early nineteenth century and a woman called, appropriately, Maria Conception. She lived in the Pueblos de Agua, villages of houses built on stilts over the water. A fecund ancestor, she had 11,000 descendants in eight generations, 9,000 of whom were still alive in 1981. No less than 371 of them had Huntington's disease when Wexler first visited and 3,600 carried a risk of at least a quarter that they would develop the disease, because at least one grandparent had the symptoms.

Wexler's courage was extraordinary, given that she too might have the mutation. 'It is crushing to look at these exuberant children', she wrote,4 'full of hope and expectation, despite poverty, despite illiteracy, despite dangerous and exhausting work for the boys fishing in small boats in the turbulent lake, or for even the tiny girls tending house and caring for ill parents, despite a brutalising disease robbing them of parents, grandparents, aunts, uncles, and cousins - they are joyous and wild with life, until the disease attacks.'

Wexler started searching the haystack. First she collected blood from over 500 people: 'hot, noisy days of drawing blood'. Then she sent it to Jim Gusella's laboratory in Boston. He began to test genetic markers in search of the gene: randomly chosen chunks of DNA that might or might not turn out to be reliably different in the affected and unaffected people. Fortune smiled on him and by mid-1983 he had not only isolated a marker close to the gene affected, but pinned it down to the tip of the short arm of chromosome 4. He knew which three-millionth of the genome it was in. Home and dry? Not so fast. The gene lay in a region of the text one million 'letters' long. The haystack was smaller, but still vast. Eight years later the gene was still mysterious: 'The task has been arduous in the extreme', wrote Wexler,4 sounding like a Victorian explorer, 'in this inhospitable terrain at the top of chromosome 4. It has been like crawling up Everest over the past eight years.'

The persistence paid off. In 1993, the gene was found at last, its text was read and the mutation that led to the disease identified.

The gene is the recipe for a protein called huntingtin: the protein was discovered after the gene - hence its name. The repetition of the 'word' CAG in the middle of the gene results in a long stretch of glutamines in the middle of the protein (CAG means glutamine in 'genetish'). And, in the case of Huntington's disease, the more glutamines there are at this point, the earlier in life the disease begins.5

It seems a desperately inadequate explanation of the disease. If the huntingtin gene is damaged, then why does it work all right for the first thirty years of life? Apparently, the mutant form of huntingtin very gradually accumulates in aggregate chunks. Like Alzheimer's disease and BSE, it is this accumulation of a sticky lump of protein within the cell that causes the death of the cell, perhaps because it induces the cell to commit suicide. In Huntington's disease this happens mostly within one of the brain's movement-control centres, the striatum, with the result that movement becomes progressively less easy or controlled.6

The most unexpected feature of the stuttering repetition of the word CAG is that it is not confined to Huntington's disease. There are five other neurological diseases caused by so-called 'unstable CAG repeats' in entirely different genes. Cerebellar ataxia is one.

There is even a bizarre report that a long CAG repeat deliberately inserted into a random gene in a mouse caused a late-onset, neurological disease rather like Huntington's disease. CAG repeats may therefore cause neurological disease whatever the gene in which they appear. Moreover, there are other diseases of nerve degeneration caused by other stuttering repeats of 'words' and in every case the repeated 'word' begins with C and ends in G. Six different CAG diseases are known. CCG or CGG repeated more than 200 times near the beginning of a gene on the X chromosome causes 'fragile X', a variable but unusually common form of mental retardation (fewer than sixty repeats is normal; up to a thousand is possible).

CTG repeated from fifty to one thousand times in a gene on chromosome 19 causes myotonic dystrophy. More than a dozen human diseases are caused by expanded three-letter 'word' repeats, among them the so-called polyglutamine diseases. In all cases the elongated protein has a tendency to accumulate in indigestible lumps that cause their cells to die. The different symptoms are caused by the fact that different genes are switched on in different parts of the body.

What is so special about the 'word' C*G, apart from the fact that it means glutamine? A clue comes from a phenomenon known as anticipation. It has been known for some time that those with a severe form of Huntington's disease or fragile X are likely to have children in whom the disease is worse or begins earlier than it did in themselves. Anticipation means that the longer the repetition, the longer it is likely to grow when copied for the next generation. We know that these repeats form little loopings of DNA called hairpins. The DNA likes to stick to itself, forming a structure like a hairpin, with the Cs and Gs of the C*G 'words' sticking together across the pin. When the hairpins unfold, the copying mechanism can slip and more copies of the word insert themselves.8

A simple analogy might be helpful. If I repeat a word six times in this sentence - cag, cag, cag, cag, cag, cag - you will count it fairly easily. But if I repeat it thirty-six times - cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag, cag - I am willing to bet you lose count. So it is with the DNA. The more repeats there are, the more likely the copying mechanism is to insert an extra one. Its finger slips and loses its place in the text. An alternative (or possibly additional) explanation is that the checking system, called mismatch repair, is good at catching small changes, but not big ones in C*G repeats.9
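The same counting job, handed to a computer, is trivial. Here is a minimal Python sketch (my illustration, not anything from a laboratory pipeline) that measures the longest unbroken run of the triplet in a sequence:

import re

def longest_cag_run(sequence: str) -> int:
    # Find every unbroken run of CAG triplets and return the longest,
    # measured in repeats rather than letters.
    runs = re.findall(r"(?:CAG)+", sequence.upper())
    return max((len(run) // 3 for run in runs), default=0)

print(longest_cag_run("cct" + "cag" * 6 + "gga"))   # 6
print(longest_cag_run("cct" + "cag" * 36 + "gga"))  # 36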

This may explain why the disease develops late in life. Laura Mangiarini at Guy's Hospital in London created transgenic mice, equipped with copies of part of the Huntington's gene that contained more than one hundred repeats. As the mice grew older, so the length of the gene increased in all their tissues save one. Up to ten extra CAG 'words' were added to it. The one exception was the cerebellum, the hindbrain responsible for controlling movement. The cells of the cerebellum do not need to change during life once the mice have learnt to walk, so they never divide. It is when cells and genes divide that copying mistakes are made. In human beings, the number of repeats in the cerebellum falls during life, though it increases in other tissues. In the cells from which sperm are made, the CAG repeats grow, which explains why there is a relationship between the onset of Huntington's disease and the age of the father: older fathers have sons who get the disease more severely and at a younger age. (Incidentally, it is now known that the mutation rate, throughout the genome, is about five times as high in men as it is in women, because of the repeated replication needed to supply fresh sperm cells throughout life.)10

Some families seem to be more prone to the spontaneous appearance of the Huntington's mutation than others. The reason seems to be not only that they have a repeat number just below the threshold (say between twenty-nine and thirty-five), but that it jumps above the threshold about twice as easily as it does in other people with similar repeat numbers. The reason for that is again a simple matter of letters. Compare two people: one has thirty-five CAGs followed by a bunch of CCAs and CCGs. If the reader slips and adds an extra CAG, the repeat number grows by one. The other person has thirty-five CAGs, followed by a CAA then two more CAGs. If the reader slips and misreads the CAA as a CAG, the effect is to add not one but three to the repeat number, because of the two CAGs already waiting.11
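The arithmetic of that comparison is easy to check. In the sketch below (again purely illustrative), a single misread letter turns the CAA into a CAG and merges the runs on either side of it:

def leading_cag_repeats(seq: str) -> int:
    # Count CAG triplets from the start of the repeat region.
    n = 0
    while seq[3 * n: 3 * n + 3] == "CAG":
        n += 1
    return n

interrupted = "CAG" * 35 + "CAA" + "CAG" * 2  # thirty-five repeats, a CAA, two more CAGs
misread = interrupted.replace("CAA", "CAG")   # the copier misreads one letter
print(leading_cag_repeats(interrupted))  # 35
print(leading_cag_repeats(misread))      # 38: one wrong letter adds three repeats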

Though I seem to be getting carried away, and deluging you with details about CAGs in the huntingtin gene, consider: almost none of this was known five years ago. The gene had not been found, the CAG repeat had not been identified, the huntingtin protein was unknown, the link with other neurodegenerative diseases was not even guessed at, the mutation rates and causes were mysterious, the paternal age effect was unexplained. From 1872 to 1993 virtually nothing was known about Huntington's disease except that it was genetic. This mushroom of knowledge has grown up almost overnight since then, a mushroom vast enough to require days in a library merely to catch up. The number of scientists who have published papers on the Huntington's gene since 1993 is close to 100. All about one gene. One of 60,000-80,000 genes in the human genome. If you still need convincing of the immensity of Pandora's box that James Watson and Francis Crick opened that day in 1953, the Huntington's story will surely persuade you. Compared with the knowledge to be gleaned from the genome, the whole of the rest of biology is but a thimbleful.

And yet not a single case of Huntington's disease has been cured. The knowledge that I celebrate has not even suggested a remedy for the affliction. If anything, in the heartless simplicity of the CAG repeats, it has made the picture look even bleaker for those seeking a cure. There are 100 billion cells in the brain. How can we go in and shorten the CAG repeats in the huntingtin genes of each and every one?

Nancy Wexler relates a story about a woman in the Lake Maracaibo study. She came to Wexler's hut to be tested for neurological signs of the disease. She seemed fine and well but Wexler knew that small hints of Huntington's can be detected by certain tests long before the patient herself sees signs. Sure enough this woman showed such signs. But unlike most people, when the doctors had finished their examination, she asked them what their conclusion was. Did she have the disease? The doctor replied with a question: What do you think? She thought she was all right. The doctors avoided saying what they thought, mentioning the need to get to know people better before they gave diagnoses. As soon as the woman left the room, her friend came rushing in, almost hysterical. What did you tell her? The doctors recounted what they had said. 'Thank God', replied the friend, and explained: the woman had said to the friend that she would ask for the diagnosis and if it turned out that she had Huntington's disease, she would immediately go and commit suicide.

There are several things about that story that are disturbing. The first is the falsely happy ending. The woman does have the mutation. She faces a death sentence, whether by her own hand or, much more slowly, by the disease. She cannot escape her fate, however nicely she is treated by the experts. And surely the knowledge about her condition is hers to do with as she wishes. If she wishes to act on it and kill herself, who are the doctors to withhold the information? Yet they did the 'right thing', too. Nothing is more sensitive than the results of a test for a fatal disease; telling people the result starkly and coldly may well not be the best thing to do - for them. Testing without counselling is a recipe for misery. But above all the tale drives home the uselessness of diagnosing without curing. The woman thought she was all right. Suppose she had five more years of happy ignorance ahead of her; there is no point in telling her that after that she faces lurching madness.

A person who has watched her mother die from Huntington's disease knows she has a fifty per cent chance of contracting it. But that is not right, is it? No individual can have fifty per cent of this disease. She either has a one hundred per cent chance or zero chance, and the probability of each is equal. So all that a genetic test does is unpackage the risk and tell her whether her ostensible fifty per cent is actually one hundred per cent or is actually zero.

Nancy Wexler fears that science is now in the position of Tiresias, the blind seer of Thebes. By accident Tiresias saw Athena bathing and she struck him blind. Afterwards she repented and, unable to restore his sight, gave him the power of soothsaying. But seeing the future was a terrible fate, since he could see it but not change it. 'It is but sorrow', said Tiresias to Oedipus, 'to be wise when wisdom profits not.' Or as Wexler puts it, 'Do you want to know when you are going to die, especially if you have no power to change the outcome?' Many of those at risk from Huntington's disease, who since 1986 can have themselves tested for the mutation, choose ignorance. Only about twenty per cent of them choose to take the test. Curiously, but perhaps understandably, men are three times as likely to choose ignorance as women. Men are more concerned with themselves than with their progeny.12

Even if those at risk choose to know, the ethics are byzantine. If one member of a family takes the test, he or she is in effect testing the whole family. Many parents take the test reluctantly but for the sake of their children. And misconceptions abound, even in textbooks and medical leaflets. Half your children may suffer, says one, addressing parents with the mutation. Not so: each child has a fifty per cent chance, which is very different. How the result of the test is presented is also immensely sensitive. Psychologists have found that people feel better about being told they have a three-quarter chance of an unaffected baby than if they are told they have a one-quarter chance of an affected one. Yet they are the same thing.

Huntington's disease is at the far end of a spectrum of genetics. It is pure fatalism, undiluted by environmental variability. Good living, good medicine, healthy food, loving families or great riches can do nothing about it. Your fate is in your genes. Like a pure Augustinian, you go to heaven by God's grace, not by good works. It reminds us that the genome, great book that it is, may give us the bleakest kind of self-knowledge: the knowledge of our destiny, not the kind of knowledge that you can do something about, but the curse of Tiresias.

Yet Nancy Wexler's obsession with finding the gene was driven by her desire to mend it or cure it when she did find it. And she is undoubtedly closer to that goal now than ten years ago. 'I am an optimist', she writes,4 'Even though I feel this hiatus in which we will be able only to predict and not to prevent will be exceedingly difficult... I believe the knowledge will be worth the risks.'

What of Nancy Wexler herself? Several times in the late 1980s, she and her elder sister Alice sat down with their father Milton to discuss whether either of the women should take the test. The debates were tense, angry and inconclusive. Milton was against taking the test, stressing its uncertainties and the danger of a false diagnosis. Nancy had been determined that she wanted the test, but her determination gradually evaporated in the face of a real possibility. Alice chronicled the discussions in a diary that later became a soul-searching book called Mapping Fate. The result was that neither woman took the test. Nancy is now the same age as her mother was when she was diagnosed.13


CHROMOSOME 5

Environment

Errors, like straws, upon the surface flow;

He who would search for pearls must dive below.

John Dryden, All for Love

It is time for a cold shower. Reader, the author of this book has been misleading you. He has repeatedly used the word 'simple' and burbled on about the surprising simplicity at the heart of genetics. A gene is just a sentence of prose written in a very simple language, he says, preening himself at the metaphor. Such a simple gene on chromosome 3 is the cause, when broken, of alkaptonuria. Another gene on chromosome 4 is the cause, when elongated, of Huntington's chorea. You either have mutations, in which case you get these genetic diseases, or you don't. No need for waffle, statistics or fudge. It is a digital world, this genetics stuff, all particulate inheritance. Your peas are either wrinkled or they are smooth.

You have been misled. The world is not like that. It is a world of greys, of nuances, of qualifiers, of 'it depends'. Mendelian genetics is no more relevant to understanding heredity in the real world than Euclidean geometry is to understanding the shape of an oak tree.



Unless you are unlucky enough to have a rare and serious genetic condition - and most of us do not - the impact of genes upon our lives is a gradual, partial, blended sort of thing. You are not tall or dwarf, like Mendel's pea plants, you are somewhere in between. You are not wrinkled or smooth, but somewhere in between. This comes as no great surprise, because just as we know it is unhelpful to think of water as a lot of little billiard balls called atoms, so it is unhelpful to think of bodies as the products of single, discrete genes. We know in our folk wisdom that genes are messy. There is a hint of your father's looks in your face, but it blends with a hint of your mother's looks, too, and yet is not the same as your sister's - there is something unique about your own looks.

Welcome to pleiotropy and pluralism. Your looks are affected not by a single 'looks' gene, but by lots of them, and by non-genetic factors as well, fashion and free will prominently among them.

Chromosome 5 is a good place to start muddying the genetic waters by trying to build a picture that is a little more complicated, a little more subtle and a little more grey than I have painted so far. But I shall not stray too far into this territory yet. I must take things one step at a time, so I will still talk about a disease, though not a very clear-cut one and certainly not a 'genetic' one. Chromosome 5 is the home of several of the leading candidates for the title of the 'asthma gene'. But everything about them screams out pleiotropy - a technical term for genes with multiple effects. Asthma has proved impossible to pin down in the genes. It is maddeningly resistant to being simplified. It remains all things to all people.

Almost everybody gets it or some other kind of allergy at some stage in their life. You can support almost any theory about how or why they do so. And there is plenty of room for allowing your political viewpoint to influence your scientific opinion. Those fighting pollution are keen to blame pollution for the increase in asthma. Those who think we have gone soft attribute asthma to central heating and fitted carpets. Those who mistrust compulsory education can lay the blame for asthma at the feet of playground colds. Those who don't like washing their hands can blame excessive hygiene. Asthma, in other words, is much more like real life.

Asthma, moreover, is the tip of an iceberg of 'atopy'. Most asthmatics are also allergic to something. Asthma, eczema, allergy and anaphylaxis are all part of the same syndrome, caused by the same 'mast' cells in the body, alerted and triggered by the same immunoglobulin-E molecules. One person in ten has some form of allergy, the consequences in different people ranging from the mild inconvenience of a bout of hay fever to the sudden and fatal collapse of the whole body caused by a bee sting or a peanut. Whatever factor is invoked to explain the increase in asthma must also be capable of explaining other outbreaks of atopy. Children with a serious allergy to peanuts whose allergy fades in later life are less likely to have asthma.

Yet just about every statement you care to make about asthma can be challenged, including the assertion that it is getting worse. One study asserts that asthma incidence has grown by sixty per cent in the last ten years and that asthma mortality has trebled. Peanut allergy is up by seventy per cent in ten years. Another study, published just a few months later, asserts with equal confidence that the increase is illusory. People are more aware of asthma, more ready to go to the doctor with mild cases, more prepared to define as asthma something that would once have been called a cold. In the 1870s, Armand Trousseau included a chapter on asthma in his Clinique Medicale. He described twin brothers whose asthma was bad in Marseilles and other places but who were cured as soon as they went to Toulon. Trousseau thought this very strange. His emphasis hardly suggests a rare disease. Still, the balance of probability is that asthma and allergy are getting worse and that the cause is, in a word, pollution.

But what kind of pollution? Most of us inhale far less smoke than our ancestors, with their wood fires and poor chimneys, would have done. So it seems unlikely that general smoke can have caused the recent increase. Some modern, synthetic chemicals can cause dramatic and dangerous attacks of asthma. Transported about the countryside in tankers, used in the manufacture of plastics and leaked into the air we breathe, chemicals such as isocyanates, trimellitic anhydride and phthalic anhydride are a new form of pollution and a possible cause of asthma. When one such tanker spilled its load of isocyanate in America it turned the policeman who directed traffic around the wreck into an acute and desperate asthmatic for the remainder of his life. Yet there is a difference between acute, concentrated exposure and the normal levels encountered in everyday life.

So far there is no link between low-level exposure to such chemicals and asthma. Indeed, asthma appears in communities that never encounter them. Occupational asthma can be triggered in people who work in much more low-tech, old-fashioned professions, such as grooms, coffee roasters, hairdressers or metal grinders. There are more than 250 defined causes of occupational asthma. By far the commonest asthma trigger — which accounts for about half of all cases - is the droppings of the humble dust mite, a creature that likes our fondness for central-heated indoor winter stuffiness and makes its home inside our carpets and bedding.

The list of asthma triggers given by the American Lung Association covers all walks of life: pollen, feathers, moulds, foods, colds, emotional stress, vigorous exercise, cold air, plastics, metal vapours, wood, car exhaust, cigarette smoke, paint, sprays, aspirin, heart drugs - even, in one kind of asthma, sleep. There is material here for anybody to grind any axe they wish. For instance, asthma is largely an urban problem, as proved by its sudden appearance in places becoming urban for the first time. Jimma, in south-west Ethiopia, is a small city that has sprung up in the last ten years. Its local asthma epidemic is ten years old. Yet the meaning of this fact is uncertain. Urban centres are generally more polluted with car exhaust and ozone, true, but they are also somewhat sanitised.

One theory holds that people who wash themselves as children, or encounter less mud in everyday life, are more likely to become asthmatics: that hygiene, not lack of it, is the problem. Children with elder siblings are less likely to get asthma, perhaps because their siblings bring dirt into the house. In a study of 14,000 children near Bristol, it emerged that those who washed their hands five times a day or more and bathed twice a day stood a twenty-five per cent chance of having asthma, while those who washed less than three times a day and bathed every other day had slightly over half that risk of asthma. The theory goes that dirt contains bacteria, especially mycobacteria, which stimulate one part of the immune system, whereas routine vaccination stimulates a different part of the immune system. Since these two parts of the immune system (the Th1 cells and the Th2 cells respectively) normally inhibit each other, the modern, sanitised, disinfected and vaccinated child is bequeathed a hyperactive Th2 system, and the Th2 system is specially designed to flush parasites from the wall of the gut with a massive release of histamine. Hence hay fever, asthma and eczema. Our immune systems are set up in such a way that they 'expect' to be educated by soil mycobacteria early in childhood; when they are not, the result is an unbalanced system prone to allergy. In support of this theory, asthmatic attacks can be staved off in mice that have been made allergic to egg-white proteins by the simple remedy of forcing them to inhale mycobacteria. Among Japanese schoolchildren, all of whom receive the BCG inoculation against tuberculosis but only sixty per cent of whom become immune as a result, the immune ones are much less likely to develop allergies and asthma than the non-immune ones. This may imply that giving the Th1 cells some stimulation with a mycobacterial inoculation enables them to suppress the asthmatic effects of their Th2 colleagues. Throw away that bottle steriliser and seek out mycobacteria.1

Another, somewhat similar, theory holds that asthma is the unleashed frustration of the worm-fighting element in the immune system. Back in the rural Stone Age (or the Middle Ages, for that matter), the immunoglobulin-E system had its hands full fighting off roundworms, tapeworms, hookworms and flukes. It had no time for being precious about dust mites and cat hairs. Today, it is kept less busy and gets up to mischief instead. This theory rests on a slightly dubious assumption about the ways in which the body's immune system works, but it has quite a lot of support. There is no dose of hay fever that a good tapeworm cannot cure, but then which would you rather have?

Another theory holds that the connection with urbanisation is actually a connection with prosperity. Wealthy people stay indoors, heat their houses and sleep on feather pillows infested with dust mites. Yet another theory is based on the undoubted fact that mild, casual-contact viruses (things like common colds) are increasingly common in societies with rapid transport and compulsory education. Schoolchildren harvest new viruses from the playground at an alarming rate, as every parent knows. When nobody travelled much, the supply of new viruses soon ran out, but today, with parents jetting off to foreign lands or meeting strangers at work all the time, there is an endless supply of new viruses to sample at the saliva-rich, germ-amplifying stations we call primary schools. Over 200 different kinds of virus can cause what is collectively known as the common cold. There is a definite connection between childhood infection with mild viruses, such as respiratory syncytial virus, and asthma susceptibility. The latest vogue theory is that a bacterial infection, which causes non-specific urethritis in women and has been getting commoner at roughly the same rate as asthma, may set up the immune system in such a way that it responds aggressively to allergens in later life. Take your pick. My favourite theory, for what it is worth, is the hygiene hypothesis, though I wouldn't go to the stake for it. The one thing you cannot argue is that asthma is on the increase because 'asthma genes' are on the increase. The genes have not changed that quickly.

So why do so many scientists persist in emphasising that asthma is at least partly a 'genetic disease'? What do they mean? Asthma is a constriction of the airways, which is triggered by histamines, which are in turn released by mast cells, whose transformation is triggered by their immunoglobulin-E proteins, whose activation is caused by the arrival of the very molecule to which they have been sensitised. It is, as biological chains of cause and effect go, a fairly simple concatenation of events. The multiplicity of causes is effected by the design of immunoglobulin E, a protein specially designed to come in many forms, any one of which can fit on to almost any outside molecule or allergen. Although one person's asthma may be triggered by dust mites and another's by coffee beans, the underlying mechanism is still the same: the activation of the immunoglobulin-E system.

Where there are simple chains of biochemical events, there are genes. Every protein in the chain is made by a gene, or, in the case of immunoglobulin E, two genes. Some people are born with, or develop, immunological hair-triggers, presumably because their genes are subtly different from those of other people, thanks to certain mutations.

That much is clear from the fact that asthma tends to run in families (a fact known, incidentally, to the twelfth-century Jewish sage of Cordoba, Maimonides). In some places, by accident of history, asthma mutations are unusually frequent. One such place is the isolated island of Tristan da Cunha, which must have been populated by descendants of an asthma-susceptible person. Despite a fine maritime climate, over twenty per cent of the inhabitants have overt symptoms of asthma. In 1997 a group of geneticists funded by a biotechnology company made the long sea voyage to the island and collected the blood of 270 of the 300 islanders to seek the mutations responsible.

Find those mutant genes and you have found the prime cause and underlying mechanism of asthma, and with it all sorts of possibilities for a cure. Although hygiene or dust mites can explain why asthma is increasing on average, only differences in genes may explain why one person in a family gets asthma and another does not.

Except, of course, here for the first time we encounter the difficulty with words like 'normal' and 'mutant'. In the case of alkaptonuria it is pretty obvious that one version of the gene is normal and the other one is 'abnormal'. In the case of asthma, it is by no means so obvious. Back in the Stone Age, before feather pillows, an immune system that fired off at dust mites was no handicap, because dust mites were not a pressing problem in a temporary hunting camp on the savannah. And if that same immune system was especially good at killing gut worms, then the theoretical 'asthmatic' was normal and natural; it was the others who were the abnormals and 'mutants', since they had genes that made them more vulnerable to worm infestations. Those with sensitive immunoglobulin-E systems were probably more resistant to worm infestations than those without. One of the dawning realisations of recent decades is just how hard it is to define what is 'normal' and what is mutant.

In the late 1980s, off went various groups of scientists in confident pursuit of the 'asthma gene'. By mid-1998 they had found not one, but fifteen. There were eight candidate genes on chromosome 5 alone, two each on chromosomes 6 and 12, and one on each of chromosomes 11, 13 and 14. This does not even count the fact that two parts of immunoglobulin E, the molecule at the centre of the process, are made by two genes on chromosome 1. The genetics of asthma could be underwritten by all of these genes in varying orders of importance or by any combination of them and others, too.

Each gene has its champion and feelings run high. William Cookson, an Oxford geneticist, has described how his rivals reacted to his discovery of a link between asthma-susceptibility and a marker on chromosome 11. Some were congratulatory. Others rushed into print contradicting him, usually with flawed or small sample sizes. One wrote haughty editorials in medical journals mocking his 'logical disjunctions' and 'Oxfordshire genes'. One or two turned vitriolic in their public criticism and one anonymously accused him of fraud. (To the outside world the sheer nastiness of scientific feuds often comes as something of a surprise; politics, by contrast, is a relatively polite affair.) Things were not improved by a sensational story exaggerating Cookson's discovery in a Sunday newspaper, followed by a television programme attacking the newspaper story and a complaint to the broadcasting regulator by the newspaper. 'After four years of constant scepticism and disbelief', says Cookson mildly,2 'we were all feeling very tired.'

This is the reality of gene hunting. There is a tendency among ivory-towered moral philosophers to disparage such scientists as gold-diggers seeking fame and fortune. The whole notion of 'genes for' such things as alcoholism and schizophrenia has been mocked, because such claims have often been later retracted. The retraction is taken not as evidence against that genetic link but as a condemnation of the whole practice of seeking genetic links. And the critics have a point. The simplistic headlines of the press can be very misleading. Yet anybody who gets evidence of a link between a disease and a gene has a duty to publish it. If it proves an illusion, little harm is done. Arguably, more damage has been done by false negatives (true genes that have been prematurely ruled out on inadequate data) than by false positives (suspicions of a link that later prove unfounded).

Cookson and his colleagues eventually got their gene and pinned down a mutation within it that the asthmatics in their sample had more often than others did. It was an asthma gene of sorts. But it only accounted for fifteen per cent of the explanation of asthma and it has proved remarkably hard to replicate the finding in other subjects, a maddening feature of asthma-gene hunting that has recurred with distressing frequency. By 1994 one of Cookson's rivals, David Marsh, was suggesting a strong link between asthma and the gene for interleukin 4, on chromosome 5, based on a study of eleven Amish families. That, too, proved hard to replicate. By 1997 a group of Finns was comprehensively ruling out a connection between asthma and the same gene. That same year a study of a mixed-race population in America concluded that eleven chromosomal regions could be linked to susceptibility to asthma, of which ten were unique to only one racial or ethnic group. In other words, the gene that most defined susceptibility to asthma in blacks was not the same gene that most defined susceptibility to asthma in whites, which was different again from the gene that most defined susceptibility to asthma in Hispanics.3

Gender differences are just as pronounced as racial ones. According to research by the American Lung Association, whereas ozone from petrol-burning cars triggers asthma in men, particulates from diesel engines are more likely to trigger asthma in women. As a rule, males seem to have an early bout of allergy and to outgrow it, while females develop allergies in their mid or late twenties and do not outgrow them (though rules have exceptions, of course, including the rule that rules have exceptions). This could explain something peculiar about asthma inheritance: people often appear to inherit it from allergic mothers, but rarely from their fathers. This could just mean that the father's asthma was long ago in his youth and has been largely forgotten.

The trouble seems to be that there are so many ways of altering the sensitivity of the body to asthma triggers, all along the chain of reactions that leads to the symptoms, that all sorts of genes can be 'asthma genes', yet no single one can explain more than a handful of cases. ADRB2, for example, lies on the long arm of chromosome 5. It is the recipe for a protein called the beta-2-adrenergic receptor, which controls bronchodilation and bronchoconstriction - the tightening and relaxing of the airways that is the actual, direct symptom of asthma.

The commonest anti-asthma drugs work by attacking this receptor.

So surely a mutation in ADRB2 would be a prime 'asthma gene'?

The gene was pinned down first in cells derived from the Chinese hamster: a fairly routine 1,239-letter long recipe of DNA. Sure enough a promising spelling difference between some severe nocturnal asthmatics and some non-nocturnal asthmatics soon emerged: letter number 46 was G instead of A. But the result was far from conclusive. Approximately eighty per cent of the nocturnal asthmatics had a G, while fifty-two per cent of the non-nocturnal asthmatics had G. The scientists suggested that this difference was sufficient to prevent the damping down of the allergic system that usually occurs at night.4
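To put a number on how weak an association that is, the two percentages can be folded into an odds ratio, a standard epidemiological summary; the little calculation below is my illustration, not one taken from the original study.

def odds_ratio(p_group_a: float, p_group_b: float) -> float:
    # Odds of carrying the G spelling in one group divided by the odds
    # of carrying it in the other group.
    return (p_group_a / (1 - p_group_a)) / (p_group_b / (1 - p_group_b))

# Figures quoted above: 80% of nocturnal asthmatics carried G,
# against 52% of non-nocturnal asthmatics.
print(round(odds_ratio(0.80, 0.52), 1))  # about 3.7: suggestive, not decisive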

But nocturnal asthmatics are a small minority. To muddy the waters still further, the very same spelling difference has since been linked to a different asthmatic problem: resistance to asthma drugs.

Those with the letter G at the same forty-sixth position in the same gene on both copies of chromosome 5 are more likely to find that their asthma drugs, such as formoterol, gradually become ineffective over a period of weeks or months than those with a letter A on both copies.

'More likely'... 'probably'... 'in some of': this is hardly the language of determinism I used for Huntington's disease on chromosome 4. The A to G change at position 46 on the ADRB2 gene plainly has something to do with asthma susceptibility, but it cannot be called the 'asthma gene', nor used to explain why asthma strikes some people and not others. It is at best a tiny part of the tale, applicable in a small minority or having a small influence easily overridden by other factors. You had better get used to such indeterminacy. The more we delve into the genome the less fatalistic it will seem. Grey indeterminacy, variable causality and vague predisposition are the hallmarks of the system. This is not because what I said in previous chapters about simple, particulate inheritance is wrong, but because simplicity piled upon simplicity creates complexity. The genome is as complicated and indeterminate as ordinary life, because it is ordinary life. This should come as a relief. Simple determinism, whether of the genetic or environmental kind, is a depressing prospect for those with a fondness for free will.


CHROMOSOME 6

Intelligence

The hereditarian fallacy is not the simple claim that IQ is to some degree 'heritable' [but] the equation of 'heritable' with 'inevitable'.

Stephen Jay Gould

I have been misleading you, and breaking my own rule into the bargain. I ought to write it out a hundred times as punishment: GENES ARE NOT THERE TO CAUSE DISEASES.

Even if a gene causes a disease by being 'broken', most genes are not 'broken' in any of us, they just come in different flavours. The blue-eyed gene is not a broken version of the brown-eyed gene, or the red-haired gene a broken version of the brown-haired gene. They are, in the jargon, different alleles - alternative versions of the same genetic 'paragraph', all equally fit, valid and legitimate. They are all normal; there is no single definition of normality.

Time to stop beating about the bush. Time to plunge headlong into the most tangled briar of the lot, the roughest, scratchiest, most impenetrable and least easy of all the brambles in the genetic forest: the inheritance of intelligence.

Chromosome 6 is the best place to find such a thicket. It was on chromosome 6, towards the end of 1997, that a brave or perhaps foolhardy scientist first announced to the world that he had found a gene 'for intelligence'. Brave, indeed, for however good his evidence, there are plenty of people out there who refuse to admit that such things could exist, let alone do. Their grounds for scepticism are not only a weary suspicion, bred by politically tainted research over many decades, of anybody who even touches the subject of hereditary intelligence, but also a hefty dose of common sense.

Mother Nature has plainly not entrusted the determination of our intellectual capacities to the blind fate of a gene or genes; she gave us parents, learning, language, culture and education to program ourselves with.

Yet this is what Robert Plomin announced that he and his colleagues had discovered. A group of especially gifted teenage children, chosen from all over America because they are close to genius in their capacity for schoolwork, are brought together every summer in Iowa. They are twelve- to fourteen-year-olds who have taken exams five years early and come in the top one per cent. They have an IQ of about 160. Plomin's team, reasoning that such children must have the best versions of just about every gene that might influence intelligence, took a blood sample from each of them and went fishing in their blood with little bits of DNA from human chromosome 6. (He chose chromosome 6 because he had a hunch based on some earlier work.) By and by, he found a bit on the long arm of chromosome 6 of the brainboxes which was frequently different from the sequence in other people. Other people had a certain sequence just there, but the clever kids had a slightly different one: not always, but often enough to catch the eye. The sequence lies in the middle of the gene called IGF2R.1

The history of IQ is not uplifting. Few debates in the history of science have been conducted with such stupidity as the one about intelligence. Many of us, myself included, come to the subject with a mistrustful bias. I do not know what my IQ is. I took a test at school, but was never told the result. Because I did not realise the test was against the clock, I finished little of it and presumably scored low. But then not realising that the test is against the clock does not especially suggest brilliance in itself. The experience left me with little respect for the crudity of measuring people's intelligence with a single number. To be able to measure such a slippery thing in half an hour seems absurd.

Indeed, the early measurement of intelligence was crudely prejudiced in motivation. Francis Galton, who pioneered the study of twins to tease apart innate and acquired talents, made no bones about why he did so:2

My general object has been to take note of the varied hereditary faculties of different men, and of the great differences in different families and races, to learn how far history may have shown the practicability of supplanting inefficient human stock by better strains, and to consider whether it might not be our duty to do so by such efforts as may be reasonable, thus exerting ourselves to further the ends of evolution more rapidly and with less distress than if events were left to their own course.

In other words he wanted to selectively cull and breed people as if they were cattle.

But it was in America that intelligence testing turned really nasty.

H. H. Goddard took an intelligence test invented by the Frenchman Alfred Binet and applied it to Americans and would-be Americans, concluding with absurd ease that not only were many immigrants to America 'morons', but that they could be identified as such at a glance by trained observers. His IQ tests were ridiculously subjective and biased towards middle-class or western cultural values. How many Polish Jews knew that tennis courts had nets in the middle?

He was in no doubt that intelligence was innate:3 'the consequent grade of intellectual or mental level for each individual is determined by the kind of chromosomes that come together with the union of the germ cells: that it is but little affected by any later influences except such serious accidents as may destroy part of the mechanism.'

With views like these, Goddard was plainly a crank. Yet he prevailed upon national policy sufficiently to be allowed to test immigrants as they arrived at Ellis Island and was followed by others with even more extreme views. Robert Yerkes persuaded the United States army to let him administer intelligence tests to millions of recruits in the First World War, and although the army largely ignored the results, the experience provided Yerkes and others with the platform and the data to support their claim that intelligence testing could be of commercial and national use in sorting people quickly and easily into different streams. The army tests had great influence in the debate leading to the passage in 1924 by Congress of an Immigration Restriction Act setting strict quotas for southern and eastern Europeans on the grounds that they were stupider than the 'Nordic' types that had dominated the American population prior to 1890. The Act's aims had little to do with science. It was more an expression of racial prejudice and union protectionism. But it found its excuses in the pseudoscience of intelligence testing.

The story of eugenics will be left for a later chapter, but it is little wonder that this history of intelligence testing has left most academics, especially those in the social sciences, with a profound distrust of anything to do with IQ tests. When the pendulum swung away from racism and eugenics just before the Second World War, the very notion of hereditarian intelligence became almost a taboo.

People like Yerkes and Goddard had ignored environmental influences on ability so completely that they had tested non-English speakers with English tests and illiterate people with tests requiring them to wield a pencil for the first time. Their belief in heredity was so wishful that later critics generally assumed they had no case at all. Human beings are capable of learning, after all, and their IQ can be influenced by their education; so perhaps psychology should start from the assumption that there is no hereditary element at all in intelligence: that it is all a matter of training.

Science is supposed to advance by erecting hypotheses and testing them by seeking to falsify them. But it does not. Just as the genetic determinists of the 1920s looked always for confirmation of their ideas and never for falsification, so the environmental determinists of the 1960s looked always for supporting evidence and averted their eyes from contrary evidence, when they should have been actively seeking it. Paradoxically, this is a corner of science where the 'expert' has usually been more wrong than the layman. Ordinary people have always known that education matters, but equally they have always believed in some innate ability. It is the experts who have taken extreme and absurd positions at either end of the spectrum.

There is no accepted definition of intelligence. Is it thinking speed, reasoning ability, memory, vocabulary, mental arithmetic, mental energy or simply the appetite of somebody for intellectual pursuits that marks them out as intelligent? Clever people can be amazingly dense about some things — general knowledge, cunning, avoiding lamp-posts or whatever. A soccer player with a poor school record may be able to size up in a split second the opportunity and way to make a telling pass. Music, fluency with language and even the ability to understand other people's minds are capacities and talents that frequently do not seem necessarily to go together. Howard Gardner has argued forcefully for a theory of multiple intelligence that recognises each talent as a separate ability. Robert Sternberg has suggested instead that there are essentially three separate kinds of intelligence - analytic, creative and practical. Analytic problems are ones formulated by other people, clearly defined, that come accompanied by all the information required to solve them, have only one right answer, are disembedded from ordinary experience and have no intrinsic interest: a school exam, in short. Practical problems require you to recognise and formulate the problem itself, are poorly defined, lacking in some relevant information, may or may not have a single answer but spring directly out of everyday life. Brazilian street children who have failed badly at mathematics in school are none the less sophisticated at the kind of mathematics they need in their ordinary lives. IQ is a singularly poor predictor of the ability of professional horse-race handicappers. And some Zambian children are as good at IQ tests that use wire models as they are bad at ones requiring pencil and paper - English children the reverse.

Almost by definition, school concentrates on analytic problems and so do IQ tests. However varied they may be in form and content, IQ tests are inherently biased towards certain kinds of minds. And yet they plainly measure something. If you compare people's performance on different kinds of IQ tests, there is a tendency for them to co-vary. The statistician Charles Spearman first noticed this in 1904 - that a child who does well in one subject tends to do well in others and that, far from being independent, different intelligences do seem well correlated. Spearman called this general intelligence, or, with admirable brevity, 'g'. Some statisticians argue that 'g' is just a statistical quirk - one possible solution among many to the problem of measuring different performances. Others think it is a direct measurement of a piece of folklore: the fact that most people can agree on who is 'clever' and who is not. Yet there is no doubt that 'g' works. It is a better predictor of a child's later performance in school than almost any other measure. There is also some genuinely objective evidence for 'g': the speed with which people perform tasks involving the scanning and retrieval of information correlates with their IQ. And general IQ remains surprisingly constant at different ages: between six and eighteen, your intelligence increases rapidly, of course, but your IQ relative to your peers changes very little. Indeed, the speed with which an infant habituates to a new stimulus correlates quite strongly with later IQ, as if it were almost possible to predict the adult IQ of a baby when only a few months old, assuming certain things about its education. IQ scores correlate strongly with school test results. High-IQ children seem to absorb more of the kind of things that are taught in school.4

Not that this justifies fatalism about education: the enormous inter-school and international differences in average achievement at mathematics or other subjects show how much can still be achieved by teaching. 'Intelligence genes' cannot work in a vacuum; they need environmental stimulation to develop.

So let us accept the plainly foolish definition of intelligence as the thing that is measured by the average of several intelligence tests - 'g' - and see where it gets us. The fact that IQ tests were so crude and bad in the past and are still far from perfect at pinning down something truly objective makes it more remarkable, not less, that they are so consistent. If a correlation between IQ and certain genes shows through what Mark Philpott has called 'the fog of imperfect tests',5 that makes it all the more likely that there is a strongly heritable element to intelligence. Besides, modern tests have been vastly improved in their objectivity and their insensitivity to cultural background or specific knowledge.

In the heyday of eugenic IQ testing in the 1920s, there was no evidence for heritability of IQ. It was just an assumption of the practitioners. Today, that is no longer the case. The heritability of IQ (whatever IQ is) is a hypothesis that has been tested on two sets of people: twins and adoptees. The results, however you look at them, are startling. No study of the causes of intelligence has failed to find a substantial heritability.

There was a fashion in the 1960s for separating twins at birth, especially when putting them up for adoption. In many cases this was done with no particular thought, but in others it was deliberately done with concealed scientific motives: to test and (it was hoped) demonstrate the prevailing orthodoxy - that upbringing and environment shaped personality and genes did not. The most famous case was that of two New York girls named Beth and Amy, separated at birth by an inquisitive Freudian psychologist. Amy was placed in the family of a poor, overweight, insecure and unloving mother; sure enough, Amy grew up neurotic and introverted, just as Freudian theory would predict. But so - down to the last details - did Beth, whose adoptive mother was rich, relaxed, loving and cheerful. The differences between Amy's and Beth's personalities were almost undetectable when they rediscovered each other twenty years later. Far from demonstrating the power of upbringing to shape our minds, the study proved the very opposite: the power of instinct.6

Started by environmental determinists, the study of twins reared apart was later taken up by those on the other side of the argument, in particular Thomas Bouchard of the University of Minnesota. Beginning in 1979, he collected pairs of separated twins from all over the world and reunited them while testing their personalities and IQs. Other studies, meanwhile, concentrated on comparing the IQs of adopted people with those of their adoptive parents and their biological parents or their siblings. Put all such studies together, totting up the IQ tests of tens of thousands of individuals, and the table looks like this. In each case the number is a percentage correlation, one hundred per cent correlation being perfect identity and zero per cent being random difference.

The same person tested twice 87
Identical twins reared together 86
Identical twins reared apart 76
Fraternal twins reared together 55
Biological siblings 47
Parents and children living together 40
Parents and children living apart 31
Adopted children living together 0
Unrelated people living apart 0

Not surprisingly, the highest correlation is between identical twins reared together. Sharing the same genes, the same womb and the same family, they are indistinguishable from the same person taking the test twice. Fraternal twins, who share a womb but are genetically no more similar than two siblings, are much less similar, but they are more similar than ordinary brothers, implying that things experienced in the womb or early family life can matter a little. But the astonishing result is the correlation between the scores of adopted children reared together: zero. Being in the same family has no discernible effect on IQ at all.7
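
A rough sense of what these numbers imply can be had from a standard behaviour-genetics shortcut known as Falconer's formula (my gloss, not the book's): since identical twins share all their genes and fraternal twins on average half, doubling the gap between their correlations estimates heritability:

$$h^2 \approx 2\,(r_{\text{identical}} - r_{\text{fraternal}}) = 2\,(0.86 - 0.55) = 0.62$$

The identical twins reared apart give an even more direct reading, since shared genes are nearly all they have in common: $h^2 \approx 0.76$. Both figures sit in the range quoted later in this chapter.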

The importance of the womb has only recently been appreciated. According to one study, twenty per cent of the similarity in intelligence of a pair of twins can be accounted for by events in the womb, while only five per cent of the similarity in intelligence of a pair of siblings can be accounted for by events in the womb. The difference is that twins share the same womb at the same time, whereas siblings do not. The influence upon our intelligence of events that happened in the womb is three times as great as anything our parents did to us after our birth. Thus even that proportion of our intelligence that can be attributed to 'nurture' rather than nature is actually determined by a form of nurture that is immutable and firmly in the past. Nature, on the other hand, continues to express genes throughout youth. It is nature, not nurture, that demands we do not make fatalistic decisions about children's intelligence too young.8

This is positively bizarre. It flies in the face of common sense: surely our intelligence is influenced by the books and conversations found in our childhood homes? Yes, but that is not the question.

After all, heredity could conceivably account for the fact that both parents and children from the same home like intellectual pursuits.

No studies have been done - other than twin and adoption studies - that discriminate between the hereditary and the parental-home explanations. The twin and adoption studies are unambiguous at present in favouring the hereditary explanation for the coincidence of parents' and children's IQs. It remains possible that the twin and adoption studies are misleading because they come from too narrow a range of families. These are mostly white, middle-class families; very few poor or black families are included in the samples.

Perhaps it is no surprise that the range of books and conversations found in all middle-class, American, white families is roughly the same. When a study of trans-racial adoptees was done, a small correlation was found between the children's IQ and that of their adoptive parents (nineteen per cent).

But it is still a small effect. The conclusion that all these studies converge upon is that about half of your IQ was inherited, and less than a fifth was due to the environment you shared with your siblings - the family. The rest came from the womb, the school and outside influences such as peer groups. But even this is misleading.

Not only does your IQ change with age, but so does its heritability. As you grow up and accumulate experiences, the influence of your genes increases. What? Surely, it falls off? No: the heritability of childhood IQ is about forty-five per cent, whereas in late adolescence it rises to seventy-five per cent. As you grow up, you gradually express your own innate intelligence and leave behind the influences stamped on you by others. You select the environments that suit your innate tendencies, rather than adjusting your innate tendencies to the environments you find yourself in. This proves two vital things: that genetic influences are not frozen at conception and that environmental influences are not inexorably cumulative. Heritability does not mean immutability.

Francis Galton, right at the start of this long debate, used an analogy that may be fairly apt. 'Many a person has amused himself,' he wrote, 'with throwing bits of stick into a tiny brook and watching their progress; how they are arrested, first by one chance obstacle, then by another; and again, how their onward course is facilitated by a combination of circumstances. He might ascribe much importance to each of these events, and think how largely the destiny of the stick had been governed by a series of trifling accidents. Nevertheless, all the sticks succeed in passing down the current, and in the long run, they travel at nearly the same rate.' So the evidence suggests that intensively exposing children to better tuition has a dramatic effect on their IQ scores, but only temporarily. By the end of elementary school, children who have been in Head Start programmes are no further ahead than children who have not.

If you accept the criticism that these studies mildly exaggerate heritability because they are of families from a single social class, then it follows that heritability will be greater in an egalitarian society than in an unequal one. Indeed, the definition of the perfect meritocracy, ironically, is a society in which people's achievements depend on their genes because their environments are equal. We are fast approaching such a state with respect to height: in the past, poor nutrition resulted in many children not reaching their 'genetic' height as adults. Today, with generally better childhood nutrition, more of the differences in height between individuals are due to genes: the heritability of height is, therefore, I suspect, rising. The same cannot yet be said of intelligence with certainty, because environmental variables - such as school quality, family habits, or wealth - may be growing more unequal in some societies, rather than more equal. But it is none the less a paradox: in egalitarian societies, genes matter more.

These heritability estimates apply to the differences between individuals, not those between groups. IQ heritability does seem to be about the same in different populations or races, which might not have been the case. But it is logically false to conclude that, because the difference between the IQ of one person and another is approximately fifty per cent heritable, the difference between the average IQs of blacks and whites or between whites and Asians is due to genes. Indeed, the implication is not only logically false, it so far looks empirically wrong, too. Thus does a large pillar of support for part of the thesis of the recent book The bell curve9 crumble. There are differences between the average IQ scores of blacks and whites, but there is no evidence that these differences are themselves heritable. Indeed, the evidence from cases of cross-racial adoption suggests that the average IQ of blacks reared by and among whites is no different from that of whites.

If IQ is fifty per cent heritable individually, then some genes must influence it. But it is impossible to tell how many. The only thing one can say with certainty is that some of the genes that influence it are variable - that is to say, they exist in different versions in different people. Heritability and determinism are very different things. It is entirely possible that the most important genes affecting intelligence are actually non-varying, in which case there would be no heritability for differences caused by those genes, because there would be no such differences. For instance, I have five fingers on each hand and so do most people. The reason is that I inherited a genetic recipe that specified five fingers. Yet if I went around the world looking for people with four fingers, about ninety-five per cent of the people I found, possibly more, would be people who had lost fingers in accidents. I would find that having four fingers is something with very low heritability: it is nearly always caused by the environment. But that does not imply that genes had nothing to do with determining finger number. A gene can determine a feature of our bodies that is the same in different people just as surely as it can determine features that are different in different people. Robert Plomin's gene-fishing expeditions for IQ genes will only find genes that come in different varieties, not genes that show no variation. They might therefore miss some important genes.
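
The five-finger argument can be made vivid with a little simulation (a hypothetical sketch of mine, not anything from the book): give everyone the same genetic recipe and let rare accidents supply all the variation, and the trait shows zero heritability despite being wholly gene-specified.

```python
# A trait can be entirely gene-specified yet have zero heritability,
# because heritability only measures *variation* between individuals.
import random

random.seed(1)

def finger_count():
    genetic_recipe = 5                      # identical in every individual
    accident = random.random() < 0.001      # rare environmental loss
    return genetic_recipe - (1 if accident else 0)

population = [finger_count() for _ in range(100_000)]
# Every difference in this population is environmental, so the
# heritability of finger-number differences is zero - even though
# genes set the trait itself.
print(sum(1 for f in population if f < 5), "people with fewer than five fingers")
```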

Plomin's first gene, the IGF2R gene on the long arm of chromosome 6, is at first sight an unlikely candidate for an 'intelligence gene'. Its main claim to fame before Plomin linked it with intelligence was that it was associated with liver cancer. It might have been called a 'liver-cancer gene', thus neatly demonstrating the foolishness of identifying genes by the diseases they cause. At some point we may have to decide if its cancer-suppressing function is its main task and its ability to influence intelligence a side-effect, or vice versa. In fact, they could both be side-effects. The function of the protein it encodes is mystifyingly dull: 'the intracellular trafficking of phosphorylated lysosomal enzymes from the Golgi complex and the cell surface to the lysosomes'. It is a molecular delivery van. Not a word about speeding up brain waves.

IGF2R is an enormous gene, with 7,473 letters in all, but the sense-containing message is spread out over a 98,000-letter stretch of the genome, interrupted forty-eight times by nonsense sequences called introns (rather like one of those irritating magazine articles interrupted by forty-eight advertisements). There are repetitive stretches in the middle of the gene that are inclined to vary in length, perhaps affecting the difference between one person's intelligence and another's. Since it seems to be a gene vaguely connected with insulin-like proteins and the burning of sugar, it is perhaps relevant that another study has found that people with high IQs are more 'efficient' at using glucose in their brains. While learning to play the computer game called Tetris, high-IQ people show a greater fall in their glucose consumption as they get more practised than do low-IQ people. But this is to clutch at straws. Plomin's gene, if it proves real at all, will be one of many that can influence intelligence in many different ways.10

The chief value of Plomin's discovery lies in the fact that, while people may still dismiss the studies of twins and adoptees as too indirect to prove the existence of genetic influences on intelligence, they cannot argue with a direct study of a gene that co-varies with intelligence. One form of the gene is about twice as common in the superintelligent Iowan children as in the rest of the population, a result extremely unlikely to be accidental. But its effect must be small: this version of the gene can only add four points to your IQ, on average. It is emphatically not a 'genius gene'. Plomin hints at up to ten more 'intelligence genes' to come from his Iowa brainboxes. Yet the return of heritable IQ to scientific respectability is greeted with dismay in many quarters. It raises the spectre of eugenic abuse that so disfigured science in the 1920s and 1930s. As Stephen Jay Gould, a severe critic of excessive hereditarianism, has put it: 'A partially inherited low IQ might be subject to extensive improvement through proper education. And it might not. The mere fact of its heritability permits no conclusion.' Indeed. But that is exactly the trouble. It is by no means inevitable that people will react to genetic evidence with fatalism. The discovery of genetic mutations behind conditions like dyslexia has not led teachers to abandon such conditions as incurable - quite the reverse; it has encouraged them to single out dyslexic children for special teaching.11

Indeed, the most famous pioneer of intelligence testing, the Frenchman Alfred Binet, argued fervently that its purpose was not to reward gifted children but to give special attention to less gifted ones. Plomin cites himself as a perfect example of the system at work. As the only one of thirty-two cousins from a large family in Chicago to go to college, he credits his fortune to good results on an intelligence test, which persuaded his parents to send him to a more academic school. America's fondness for such tests is in remarkable contrast to Britain's horror of them. The short-lived and notorious eleven-plus exam, predicated on probably-faked data produced by Cyril Burt, was Britain's only mandatory intelligence test. Whereas in Britain the eleven-plus is remembered as a disastrous device that condemned perfectly intelligent children to second-rate schools, in meritocratic America similar tests are the passports to academic success for the gifted but impoverished.

Perhaps the heritability of IQ implies something entirely different, something that once and for all proves that Galton's attempt to discriminate between nature and nurture is misconceived. Consider this apparently fatuous fact. People with high IQs, on average, have more symmetrical ears than people with low IQs. Their whole bodies seem to be more symmetrical: foot breadth, ankle breadth, finger length, wrist breadth and elbow breadth each correlates with IQ.

In the early 1990s an old interest in bodily symmetry revived, because of what it can reveal about the body's development during early life. Some asymmetries in the body are consistent: the heart is on the left side of the chest, for example, in most people.

But other, smaller asymmetries can go randomly in either direction.

In some people the left ear is larger than the right; in others, vice versa. The magnitude of this so-called fluctuating asymmetry is a sensitive measure of how much stress the body was under when developing, stress from infections, toxins or poor nutrition. The fact that people with high I Q s have more symmetrical bodies suggests that they were subject to fewer developmental stresses in the womb or in childhood. Or rather, that they were more resistant to such stresses. And the resistance may well be heritable. So the heritability of IQ might not be caused by direct 'genes for intelligence' at all, but by indirect genes for resistance to toxins or infections — genes in other words that work by interacting with the environment. You inherit not your IQ but your ability to develop a high IQ under certain environmental circumstances. How does one parcel that one into nature and nurture? It is frankly impossible.12
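
For concreteness, a composite asymmetry score of the kind such studies compute can be sketched as follows (a hypothetical construction with made-up numbers; the actual indices in the literature vary in detail):

```python
# Composite fluctuating-asymmetry index: average the absolute left-right
# differences, scaled by trait size, across several paired measurements.
def fluctuating_asymmetry(pairs):
    """pairs: list of (left, right) measurements for one person, in mm."""
    return sum(abs(l - r) / ((l + r) / 2) for l, r in pairs) / len(pairs)

# ear breadth, foot breadth, wrist breadth for one invented person
person = [(34.1, 33.8), (92.0, 93.1), (55.2, 55.0)]
print(round(fluctuating_asymmetry(person), 4))  # smaller = more symmetrical
```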

Support for this idea comes from the so-called Flynn effect. A New Zealand-based political scientist, James Flynn, noticed in the 1980s that IQ is increasing in all countries all the time, at an average rate of about three IQ points per decade. Quite why is hard to determine. It might be for the same reason that height is increasing: improved childhood nutrition. When two Guatemalan villages were given ad-lib protein supplements for several years, the IQ of children, measured ten years later, had risen markedly: a Flynn effect in miniature. But IQ scores are still rising just as rapidly in well-nourished western countries. Nor can school have much to do with it, because interruptions to schooling have demonstrably temporary effects on IQ and because the tests that show the most rapid rises are the ones that have least to do with what is taught in school. It is the ones that test abstract reasoning ability that show the steepest improvements. One scientist, Ulric Neisser, believes that the cause of the Flynn effect is the intense modern saturation of everyday life with sophisticated visual images - cartoons, advertisements, films, posters, graphics and other optical displays - often at the expense of written messages. Children experience a much richer visual environment than once they did, which helps develop their skills in visual puzzles of the kind that dominate IQ tests.13

But this environmental effect is, at first sight, hard to square with the twin studies suggesting such a high heritability for I Q . As Flynn himself notes, an increase of fifteen IQ points in five decades implies either that the world was full of dunces in 1950 or that it is full of geniuses today. Since we are not experiencing a cultural renaissance, he concludes that IQ measures nothing innate. But if Neisser is right, then the modern world is an environment that encourages the development of one form of intelligence - facility with visual symbols. This is a blow to 'g', but it does not negate the idea that these different kinds of intelligence are at least partly heritable. After two million years of culture, in which our ancestors passed on learnt local traditions, human brains may have acquired (through natural selection) the ability to find and specialise in those particular skills that the local culture teaches, and that the individual excels in. The environment that a child experiences is as much a consequence of the child's genes as it is of external factors: the child seeks out and creates his or her own environment. If she is of a mechanical bent, she practises mechanical skills; if a bookworm, she seeks out books. The genes may create an appetite, not an aptitude.

After all, the high heritability of short-sightedness is accounted for not just by the heritability of eye shape, but by the heritability of literate habits. The heritability of intelligence may therefore be about the genetics of nurture, just as much as the genetics of nature. What a richly satisfying end to the century of argument inaugurated by Galton.


C H R O M O S O M E 7

I n s t i n c t

The tabula of human nature was never rasa.

W. D. Hamilton

Nobody doubts that genes can shape anatomy. The idea that they also shape behaviour takes a lot more swallowing. Yet I hope to persuade you that on chromosome 7 there lies a gene that plays an important part in equipping human beings with an instinct, and an instinct, moreover, that lies at the heart of all human culture.

Instinct is a word applied to animals: the salmon seeking the stream of its birth; the digger wasp repeating the behaviour of its long-dead parents; the swallow migrating south for the winter - these are instincts. Human beings do not have to rely on instinct; they learn instead; they are creative, cultural, conscious creatures. Everything they do is the product of free will, giant brains and brainwashing parents.

So goes the conventional wisdom that has dominated psychology and all other social sciences in the twentieth century. To think otherwise, to believe in innate human behaviour, is to fall into the trap of determinism, and to condemn individual people to a heartless fate written in their genes before they were born. No matter that the social sciences set about reinventing much more alarming forms of determinism to take the place of the genetic form: the parental determinism of Freud; the socio-economic determinism of Marx; the political determinism of Lenin; the peer-pressure cultural determinism of Franz Boas and Margaret Mead; the stimulus-response determinism of John Watson and B. F. Skinner; the linguistic determinism of Edward Sapir and Benjamin Whorf. In one of the great diversions of all time, for nearly a century social scientists managed to persuade thinkers of many kinds that biological causality was determinism while environmental causality preserved free will; and that animals had instincts, but human beings did not.

Between 1950 and 1990 the edifice of environmental determinism came tumbling down. Freudian theory fell the moment lithium first cured a manic depressive, where twenty years of psychoanalysis had failed. (In 1995 a woman sued her former therapist on the grounds that three weeks on Prozac had achieved more than three years of therapy.) Marxism fell when the Berlin wall was built, though it took until the wall came down before some people realised that subservience to an all-powerful state could not be made enjoyable however much propaganda accompanied it. Cultural determinism fell when Margaret Mead's conclusions (that adolescent behaviour was infinitely malleable by culture) were discovered by Derek Freeman to be based on a combination of wishful prejudice, poor data collection and adolescent prank-playing by her informants. Behaviourism fell with a famous 1950s experiment in Wisconsin in which orphan baby monkeys became emotionally attached to cloth models of their mothers even when fed only from wire models, thus refusing to obey the theory that we mammals can be conditioned to prefer the feel of anything that gives us food - a preference for soft mothers is probably innate.1

In linguistics, the first crack in the edifice was a book by Noam Chomsky, Syntactic structures, which argued that human language, the most blatantly cultural of all our behaviours, owes as much to instinct as it does to culture. Chomsky resurrected an old view of language, which had been described by Darwin as an 'instinctive tendency to acquire an art'. The early psychologist William James, brother of the novelist Henry, was a fervent protagonist of the view that human behaviour showed evidence of more separate instincts than animals, not fewer. But his ideas had been ignored for most of the twentieth century. Chomsky brought them back to life.

By studying the way human beings speak, Chomsky concluded that there were underlying similarities to all languages that bore witness to a universal human grammar. We all know how to use it, though we are rarely conscious of that ability. This must mean that part of the human brain comes equipped by its genes with a specialised ability to learn language. Plainly, the vocabulary could not be innate, or we would all speak one, unvarying language. But perhaps a child, as it acquired the vocabulary of its native society, slotted those words into a set of innate mental rules. Chomsky's evidence for this notion was linguistic: he found regularities in the way we spoke that were never taught by parents and could not be inferred from the examples of everyday speech without great difficulty. For example, in English, to make a sentence into a question we bring the main verb to the front of the statement. But how do we know which verb to bring? Consider the sentence, 'A unicorn that is eating a flower is in the garden.' You can turn that sentence into a question by moving the second 'is' to the front: 'Is a unicorn that is eating a flower in the garden?' But you make no sense if you move the first 'is': 'Is a unicorn that eating a flower is in the garden?' The difference is that the first 'is' is part of a noun phrase, buried in the mental image conjured by not just any unicorn, but any unicorn that is eating a flower. Yet four-year-olds can comfortably use this rule, never having been taught about noun phrases. They just seem to know the rule. And they know it without ever having used or heard the phrase 'a unicorn that is eating a flower' before. That is the beauty of language - almost every statement we make is a novel combination of words.
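
To make the point concrete, here is a toy sketch (mine, not Chomsky's): if the sentence is stored as a structure in which the noun phrase counts as a single unit, then the rule 'move the main-clause auxiliary to the front' automatically picks the right 'is'; a rule stated over the bare string of words ('move the first is') would not.

```python
# Structure-dependent question formation: the auxiliary that moves is the
# one attached to the main clause, not the first "is" in the word string.
sentence = {
    # the whole noun phrase, including its embedded "is", is one unit
    "subject": ["a", "unicorn", "that", "is", "eating", "a", "flower"],
    "aux": "is",                          # main-clause auxiliary
    "predicate": ["in", "the", "garden"],
}

def make_question(s):
    # Front the main-clause auxiliary; the "is" buried inside the
    # subject noun phrase is never a candidate for movement.
    return " ".join([s["aux"].capitalize()] + s["subject"] + s["predicate"]) + "?"

print(make_question(sentence))
# Is a unicorn that is eating a flower in the garden?
```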
