• CHAPTER II •
THE SETTING
I
If we were somehow to bring the Reverend Thomas Marsham back to life and restore him to his rectory, what would probably most surprise him—apart from being here at all, of course—would be to find that the house has become, as it were, invisible. Today it stands in a dense private woodland that gives it a determinedly secluded air, but in 1851, when it was brand-new, it would have stood starkly in open countryside, a pile of red bricks in a bare field.
In most other respects, however, and allowing for a little aging and the introduction of some electrical wires and a television aerial, it remains largely unchanged from 1851. It is now, as it was then, manifestly a house. It looks the way a house should look. It has a homely air.
So it is perhaps slightly surprising to reflect that nothing about this house, or any house, is inevitable. Everything had to be thought of—doors, windows, chimneys, stairs—and a good deal of that, as we are about to see, took far more time and experimentation than you might ever have thought.
Houses are really quite odd things. They have almost no universally defining qualities: they can be of practically any shape, incorporate virtually any material, be of almost any size. Yet wherever we go in the world we recognize domesticity the moment we see it. This aura of homeliness is, it turns out, extremely ancient, and the first hint of that remarkable fact was uncovered by chance just at the time the Old Rectory was being built, in the winter of 1850, when a mighty storm blew into Britain.
It was one of the worst storms in decades and it caused widespread devastation. At the Goodwin Sands, off the Kent coast, five ships were dashed to pieces with the loss of all hands. Off Worthing, in Sussex, eleven men going to the aid of a distressed ship drowned when their lifeboat was upended by a giant wave. At a place called Kilkee, an Irish sailing ship named Edmund, bound for America, lost its steering, and passengers and crew watched helplessly as the ship drifted onto rocks and was smashed to splinters. Ninety-six people drowned, though a few managed to struggle ashore, including one elderly lady clinging to the back of the brave captain, whose name was Wilson and who was, the Illustrated London News noted with grim satisfaction, English. Altogether more than two hundred people lost their lives in waters around the British Isles that night.
In London, at the half-built Crystal Palace rising in Hyde Park, newly installed glass panes lifted and banged but stayed in place, and the building itself withstood the battering winds with barely a groan, much to the relief of Joseph Paxton, who had promised that it was stormproof but appreciated the confirmation.
Seven hundred miles to the north, on the Orkney Islands of Scotland, the storm raged for two days. At a place called the Bay o’ Skaill the gale stripped the grassy covering off a large irregular knoll, of a type known locally as a howie, which had stood as a landmark for as long as anyone had known it. When at last the storm cleared and the islanders came upon their newly reconfigured beach, they were astounded to find that where the howie had stood were now revealed the remains of a compact, ancient stone village, roofless but otherwise marvelously intact. Consisting of nine houses, all still holding many of their original contents, the village dates from five thousand years ago. It is older than Stonehenge and the Great Pyramids, older than all but a handful of built structures on Earth. It is immensely rare and important. It is known as Skara Brae.
Thanks to its completeness and preservation, Skara Brae offers a scene of intimate, almost eerie domesticity. Nowhere is it possible to get a more potent sense of household life in the Stone Age. As everyone remarks, it is as if the inhabitants have only just left. What never fails to astonish at Skara Brae is the sophistication. These were the dwellings of Neolithic people, but the houses had locking doors, a system of drainage and even, it seems, elemental plumbing with slots in the walls to sluice away wastes. The interiors were capacious. The walls, still standing, were up to ten feet high, so they afforded plenty of headroom, and the floors were paved. Each house has built-in stone dressers, storage alcoves, boxed enclosures presumed to be beds, water tanks, and damp courses that would have kept the interiors snug and dry. The houses are all of one size and built to the same plan, suggesting a kind of genial commune rather than a conventional tribal hierarchy. Covered passageways ran between the houses and led to a paved open area—dubbed “the marketplace” by early archaeologists—where tasks could be done in a social setting.
Life appears to have been pretty good for the Skara Brae residents. They had jewelry and pottery. They grew wheat and barley, and enjoyed bounteous harvests of shellfish and fish, including a codfish that weighed seventy-five pounds. They kept cattle, sheep, pigs, and dogs. The one thing they lacked was wood. They burned seaweed for warmth, and seaweed makes a most reluctant fuel, but that chronic challenge for them was good news for us. Had they been able to build their houses of wood, nothing would remain of them and Skara Brae would have gone forever unimagined.
It is impossible to overstate Skara Brae’s rarity and value. Prehistoric Europe was a largely empty place. As few as two thousand people may have lived in the whole of the British Isles fifteen thousand years ago. By the time of Skara Brae, the number had risen to perhaps twenty thousand, but that is still just one person per three thousand acres, so to come across any sign of Neolithic life is always an excitement. It would have been pretty exciting even then.
Skara Brae offered some oddities, too. One dwelling, standing slightly apart from the others, could be bolted only from the outside, indicating that anyone within was being confined, which rather mars the impression of a society of universal serenity. Why it was necessary to detain someone in such a small community is obviously a question that cannot be answered over such a distance of time. Also slightly mystifying are the watertight storage containers found in each dwelling. The common explanation is that these were used to hold limpets, a hard-shelled mollusk that abounds in the vicinity, but why anyone would want a stock of fresh limpets near at hand is a question not easy to answer even with the luxury of conjecture, for limpets are a terrible food, providing only about one calorie apiece and being so rubbery as to be practically inedible anyway; they actually take more energy to chew than they return in the form of nutrition.
We don’t know anything at all about these people—where they came from, what language they spoke, what led them to settle on such a lonesome outpost on the treeless edge of Europe—but from all the evidence it appears that Skara Brae enjoyed six hundred years of uninterrupted comfort and tranquillity. Then one day in about 2500 BC the occupants vanished—quite suddenly, it seems. In the passageway outside one dwelling ornamental beads, almost certainly precious to the owner, were found scattered, suggesting that a necklace had broken and the owner had been too panicked or harried to retrieve them. Why Skara Brae’s happy idyll came to a sudden end is, like so much else, impossible to say.
Remarkably, after Skara Brae’s discovery more than three quarters of a century passed before anyone got around to having a good look at it. William Watt, from nearby Skaill House, salvaged a few items; more horrifyingly, a later house party, armed with spades and other implements, emerged from Skaill House and cheerfully plundered the site one weekend in 1913, taking away goodness knows what as souvenirs, but that was about all the attention Skara Brae attracted. Then in 1924 another storm swept a section of one of the houses into the sea, after which it was decided that the site should be formally examined and made secure. The job fell to an interestingly odd but brilliant Australian-born Marxist professor from the University of Edinburgh who loathed fieldwork and didn’t really like going outside at all if he could possibly help it. His name was Vere Gordon Childe.
Childe wasn’t a trained archaeologist. Few people in the early 1920s were. He had read classics and philology at the University of Sydney, where he had also developed a deep and abiding attachment to communism, a passion that blinded him to the excesses of Joseph Stalin but colored his archaeology in interesting and surprisingly productive ways. In 1914, he came to the University of Oxford as a graduate student, and there he began the reading and thinking that led to his becoming the foremost authority of his day on the lives and movements of early peoples. In 1927, the University of Edinburgh appointed him to the brand-new post of Abercromby Professor of Prehistoric Archaeology. This made him the only academic archaeologist in Scotland, so when something like Skara Brae needed investigating the call went out to him. Thus it was in the summer of 1927 that Childe traveled north by train and boat to Orkney.
Vere Gordon Childe at Skara Brae, 1930 (photo credit 2.1)
Nearly every written description of Childe dwells almost lovingly on his oddness of manner and peculiar looks. His colleague Max Mallowan (now best remembered, when remembered at all, as the second husband of Agatha Christie) said he had a face “so ugly that it was painful to look at.” Another colleague recalled Childe as “tall, ungainly and ugly, eccentric in dress and often abrupt in manner [with a] curious and often alarming persona.” The few surviving photographs of Childe certainly confirm that he was no beauty—he was skinny and chinless, with squinting eyes behind owlish spectacles, and a mustache that looked as if it might at any moment stir to life and crawl away—but whatever unkind things people might say about the outside of his head, the inside was a place of golden splendor. Childe had a magnificent, retentive mind and an exceptional facility for languages. He could read at least a dozen, living and dead, which allowed him to scour texts both ancient and modern on any subject that interested him, and there was hardly a subject that didn’t. The combination of weird looks, mumbling diffidence, physical awkwardness, and intensely overpowering intellect was more than many people could take. One student recalled how in a single ostensibly sociable evening Childe had addressed those present in half a dozen languages, demonstrated how to do long division in Roman numerals, expounded critically upon the chemical basis of Bronze Age datings, and quoted lengthily from memory from a range of literary classics. Most people simply found him exhausting.
He wasn’t a born excavator, to put it mildly. A colleague, Stuart Piggott, noted almost with awe Childe’s “inability to appreciate the nature of archaeological evidence in the field, and the processes involved in its recovery, recognition and interpretation.” Nearly all his many books were based on reading rather than personal experience. Even his command of languages was only partial: although he could read them flawlessly, he used his own made-up pronunciations, which no one who spoke the languages could actually understand. In Norway, hoping to impress colleagues, he once tried to order a dish of raspberries and was brought twelve beers.
Whatever his shortcomings of appearance and manner, he was unquestionably a force for good in archaeology. Over the course of three and a half decades he produced six hundred articles and books, popular as well as academic, including the best sellers Man Makes Himself (1936) and What Happened in History (1942), which many later archaeologists said inspired them to take up the profession. Above all he was an original thinker, and at just the time that he was excavating at Skara Brae he had what was perhaps the single biggest and most original idea of twentieth-century archaeology.
The human past is traditionally divided into three very unequal epochs—the Paleolithic (or Old Stone Age), which ran from 2.5 million years ago to about 10,000 years ago; the Mesolithic (Middle Stone Age), covering the period of transition from hunter-gathering lifestyles to the widespread emergence of agriculture, from 10,000 to 6,000 years ago; and the Neolithic (New Stone Age), which covers the closing but extremely productive 2,000 years or so of prehistory, up to the Bronze Age. Within each of these periods are many further subperiods—Oldowan, Mousterian, Gravettian, and so on—that are mostly of concern to specialists and needn’t distract us here.
The important thought to hold on to is that for the first 99 percent of our history as beings we didn’t do much of anything but procreate and survive. Then people all over the world discovered farming, irrigation, writing, architecture, government, and the other refinements of being that collectively add up to what we fondly call civilization. This has been many times described as the most momentous transformation in human history, and the first person who fully recognized and conceptualized the whole complex process was Vere Gordon Childe. He called it the Neolithic Revolution.
It remains one of the great mysteries of human development. Even now scientists can tell you where it happened and when, but not why. Almost certainly (well, we think almost certainly), it had something to do with some big changes in the weather. About twelve thousand years ago, the Earth began to warm quite rapidly; then for reasons unknown it plunged back into bitter cold for a thousand years or so—a kind of last gasp of the ice ages. This period is known to scientists as the Younger Dryas. (It was named for an arctic plant, the dryas, which is one of the first to recolonize land after an ice sheet withdraws. There was an Older Dryas period, too, but it wasn’t important for human development.) When those ten centuries of biting cold finally ended, the world warmed rapidly again and has stayed comparatively warm ever since. Almost everything we have done as advanced beings has been done in this brief spell of climatological glory.
The interesting thing about the Neolithic Revolution is that it happened all over the Earth, among people who could have no idea that others in distant places were doing precisely the same things. Farming was independently invented at least seven times—in China, the Middle East, New Guinea, the Andes, the Amazon basin, Mexico, and West Africa. Cities likewise emerged in six places—China, Egypt, India, Mesopotamia, Central America, and the Andes. That all of these things happened all over, often without any possibility of shared contact, seems remarkable. As one historian has put it: “When Cortés landed in Mexico he found roads, canals, cities, palaces, schools, law courts, markets, irrigation works, kings, priests, temples, peasants, artisans, armies, astronomers, merchants, sports, theatre, art, music, and books”—all invented quite independently of similar developments on other continents. And some of it is a little uncanny, to be sure. Dogs, for instance, were domesticated at much the same time in places as far apart as England, Siberia, and North America.
It is tempting to think of this as a kind of global lightbulb moment, but that is really stretching things. Most of the developments actually involved vast periods of trial, error, and adjustment, often over the course of thousands of years. Agriculture started 11,500 years ago in the Levant, but 8,000 years ago in China and only a little over 5,000 years ago in most of the Americas. People had been living with domesticated animals for 4,000 years before it occurred to anyone to put the bigger of them to work pulling plows; Westerners used a clumsy, heavy, exceedingly inefficient straight-bladed plow for a further 2,000 years before someone introduced them to the simple curved plow the Chinese had been using since time immemorial. Mesopotamians invented and used the wheel, but neighboring Egypt waited 2,000 years before adopting it. In Central America, the Maya also independently invented the wheel but couldn’t think of any practical applications for it and so reserved it exclusively for children’s toys. The Incas didn’t have wheels at all, or money or iron or writing. The march of progress, in short, has been anything but predictable and rhythmic.
For a long time it was thought that settling down—sedentism, as it is known—and farming went hand in hand. People, it was assumed, abandoned nomadism and took up farming in order to guarantee their food supplies. Killing wild game is difficult and chancy, and hunters must often have come home empty-handed. Much better to control your food sources and have them permanently and conveniently at hand. In fact, researchers realized quite early on that sedentism was not nearly as straightforward as that. At about the time that Childe was excavating at Skara Brae, a Cambridge University archaeologist named Dorothy Garrod, working in Palestine at a place called Shuqba, discovered an ancient culture that she dubbed the Natufian, after a wadi, or dried riverbed, that lay nearby. The Natufians built the first villages and founded Jericho, which became the world’s first true city. So they were very settled people. But they didn’t farm. This was most unexpected. However, other excavations across the Middle East showed that it was not uncommon for people to settle in permanent communities long before they took up farming—sometimes by as much as eight thousand years.
So if people didn’t settle down to take up farming, why then did they embark on this entirely new way of living? We have no idea—or actually, we have lots of ideas, but we don’t know if any of them are right. According to the historian Felipe Fernández-Armesto, at least thirty-eight theories have been put forward to explain why people took to living in communities: that they were driven to it by climatic change, or by a wish to stay near their dead, or by a powerful desire to brew and drink beer, which could only be indulged by staying in one place. One theory, evidently seriously suggested (Jane Jacobs cites it in her landmark work of 1969, The Economy of Cities), was that “fortuitous showers” of cosmic rays caused mutations in grasses that made them suddenly attractive as a food source. The short answer is that no one knows why agriculture developed as it did.
Making food out of plants is hard work. The conversion of wheat, rice, corn, millet, barley, and other grasses into staple foodstuffs is one of the great achievements of human history, but also one of the more unexpected ones. You have only to consider the lawn outside your window to realize that grass in its natural state is not an obvious foodstuff for nonruminants such as ourselves. For us, making grass edible is a challenge that can be solved only with a lot of careful manipulation and protracted ingenuity. Take wheat. Wheat is useless as a food until made into something much more complex and ambitious like bread, and that takes a great deal of effort. Somebody must first separate out the grain and grind it into meal, then convert the meal into flour, then mix that with other components like yeast and salt to make dough. Then the dough must be kneaded to a particular consistency, and finally the resulting lump must be baked with precision and care. The scope for failure in the last step alone is so great that in every society in which bread has featured, baking has been turned over to professionals from the earliest stages.
It is not as if farming brought a great improvement in living standards either. A typical hunter-gatherer enjoyed a more varied diet and consumed more protein and calories than settled people, and took in five times as much vitamin C as the average person today. Even in the bitterest depths of the ice ages, we now know, nomadic people ate surprisingly well—and surprisingly healthily. Settled people, by contrast, became reliant on a much smaller range of foods, which all but ensured dietary insufficiencies. The three great domesticated crops of prehistory were rice, wheat, and maize, but all had significant drawbacks as staples. As the journalist John Lanchester explains: “Rice inhibits the activity of Vitamin A; wheat has a chemical that impedes the action of zinc and can lead to stunted growth; maize is deficient in essential amino acids and contains phytates, which prevent the absorption of iron.” The average height of people actually fell by almost six inches in the early days of farming in the Near East. Even on Orkney, where prehistoric life was probably as good as it could get, an analysis of 340 ancient skeletons showed that hardly any people lived beyond their twenties.
What killed the Orcadians was not dietary deficiency but disease. People living together are vastly more likely to spread illness from household to household, and the close exposure to animals through domestication meant that flu (from pigs or fowl), smallpox and measles (from cows and sheep), and anthrax (from horses and goats, among others) could become part of the human condition, too. As far as we can tell, virtually all of the infectious diseases have become endemic only since people took to living together. Settling down also brought a huge increase in “human commensals”—mice, rats, and other creatures that live with and off us—and these all too often acted as disease vectors.
So sedentism meant poorer diets, more illness, lots of toothache and gum disease, and earlier deaths. What is truly extraordinary is that these are all still factors in our lives today. Out of the thirty thousand types of edible plants thought to exist on Earth, just eleven—corn, rice, wheat, potatoes, cassava, sorghum, millet, beans, barley, rye, and oats—account for 93 percent of all that humans eat, and every one of them was first cultivated by our Neolithic ancestors. Exactly the same is true of husbandry. The animals we raise for food today are eaten not because they are notably delectable or nutritious or a pleasure to be around, but because they were the ones first domesticated in the Stone Age.
We are, in the most fundamental way, Stone Age people ourselves. From a dietary point of view, the Neolithic period is still with us. We may sprinkle our dishes with bay leaves and chopped fennel, but underneath it all is Stone Age food. And when we get sick, it is Stone Age diseases we suffer.
II
If, ten thousand years ago, you had been asked to guess which area of the world would be the seat of the greatest future civilizations, you would probably have settled on some part of Central or South America on the basis of the amazing things they were doing with food there. Academics call this portion of the New World Mesoamerica, an accommodatingly vague term that could fairly be defined as Central America plus as much or as little of North and South America as are needed to support a hypothesis.
Mesoamericans were the greatest cultivators in history, but of all their many horticultural innovations none was more lastingly important or unexpected than the creation of maize, or corn as it is known where I come from.* We still don’t have any idea how they did it. If you look at primitive forms of barley, rice, or wheat set beside their modern counterparts, you can see the affinities at once. But nothing in the wild remotely resembles modern corn. Genetically, its nearest relative is a wispy grass called teosinte, but beyond the level of chromosomes there is no discernible kinship. Corn grows into a hefty cob on a single stalk and its grains are encased in a stiff, protective husk. An ear of teosinte, in comparison, is less than an inch long, has no husk, and grows on a multiplicity of stems. Teosinte is almost valueless as a food; one kernel of corn is more nutritious than a whole ear of teosinte.
It is beyond us to divine how any people could have bred cobs of corn from such a thin and unpropitious plant—or even thought to try. Hoping to settle the matter once and for all, food scientists from around the world convened in 1969 at a conference on the origin of corn at the University of Illinois, but the debates grew so vituperative and bitter, and at times so personal, that the conference broke up in confusion and no papers from it were ever published. Nothing like it has been attempted since. Scientists are now pretty sure, however, that corn was first domesticated on the plains of western Mexico, and are in no doubt, thanks to the persuasive wonders of genetics, that somehow it was coaxed into being from teosinte, but how it was done remains as much of a mystery as ever.
However they did it, the Mesoamericans created the world’s first fully engineered plant—a plant so thoroughly manipulated that it is now wholly dependent on us for its survival. Corn kernels do not spontaneously disengage from their cobs, so unless they are deliberately stripped and planted, no corn will grow. Had people not been tending it continuously for these thousands of years, corn would be extinct. The inventors of corn not only created a new kind of plant, they also created—conceived from nothing really—a new type of ecosystem that existed nowhere in their world. In Mesopotamia natural meadows grew everywhere already, so cultivation was largely a matter of transforming natural grain fields into superior managed ones. In the arid scrubs of Central America, however, fields were unknown. They had to be created from scratch by people who had never seen such a thing before. It was like someone in a desert imagining lawns.
Today corn is far more indispensable than most people realize. Cornstarch is used in the manufacture of soda pop, chewing gum, ice cream, peanut butter, library paste, ketchup, automobile paint, embalming fluid, gunpowder, insecticides, deodorants, soap, potato chips, surgical dressings, nail polish, foot powder, salad dressing, and several hundred things more. To borrow from Michael Pollan, author of The Omnivore’s Dilemma, it is not so much that we have domesticated corn as that it has domesticated us.
The worry is that as crops are engineered to a state of uniform genetic perfection, they will lose their protective variability. When you drive past a field of corn today, every stalk in it is identical to every other—not just extremely similar, but eerily, molecularly identical. Replicants live in perfect harmony since none can outcompete any others. But they also have matching vulnerabilities. In 1970, the corn world suffered a real fright when a disease called southern corn-leaf blight started killing corn across America and it was realized that practically the entire national crop was planted from seeds with genetically identical cytoplasm. Had the cytoplasm been directly affected or the disease proved more virulent, food scientists all over the world might now be scratching their heads over ears of teosinte and we would all be eating potato chips and ice cream that didn’t taste quite right.
Potatoes, the other great food crop of the New World, present an almost equally intriguing batch of mysteries. Potatoes are from the nightshade family, which is of course notoriously toxic, and in their wild state they are full of poisonous glycoalkaloids—the same stuff, at lower doses, that puts the zip in caffeine and nicotine. Making any wild potatoes safe to eat required reducing the glycoalkaloid content to between one-fifteenth and one-twentieth of its normal level. This raises a lot of questions, beginning most obviously with: How did they do it? And while they were doing it, how did they know they were doing it? How do you tell that the poison content has been reduced by, say, 20 percent or 35 percent or some other intermediate figure? How do you assess progress in such a process? Above all, how did they know that the whole exercise was worth the effort and that they would get a safe and nutritious foodstuff in the end?
Of course, a nontoxic potato might equally have mutated spontaneously, saving them generations of experimental selective breeding. But if so, how did they know that it had mutated and that out of all the poisonous wild potatoes around them here at last was one that was safe to eat?
The fact is, people in the ancient world were often doing things that are not just surprising but unfathomable.
III
While Mesoamericans were harvesting corn and potatoes (and avocados and tomatoes and beans and about a hundred other plants we would be desolate to be without now), people on the other side of the planet were building the first cities. These are no less mysterious and surprising.
Just how surprising was brought home by a discovery in Turkey in 1958. One day toward the end of that year, a young British archaeologist named James Mellaart was driving through an empty corner of central Anatolia with two colleagues when he noticed an unnatural-looking earthen mound—a “thistle-covered hump”—stretching across the arid plain. It was fifty or sixty feet high and two thousand feet long. Altogether it covered about thirty-three acres—a mysteriously immense area. Returning the next year, Mellaart did some experimental digging and, to his astonishment, discovered that the mound contained the remains of an ancient city.
This wasn’t supposed to happen. Ancient cities, as even laymen knew, were phenomena of Mesopotamia and the Levant. They were not supposed to exist in Anatolia. Yet here was one of the very oldest—possibly the very oldest—bang in the middle of Turkey and of a size that was wholly unprecedented. Çatalhöyük (the name means “forked mound”) was nine thousand years old. It had been lived in continuously for well over a thousand years and at its peak had a population of eight thousand.
Mellaart called Çatalhöyük the world’s first city, a conclusion given additional weight and publicity by Jane Jacobs in her influential work The Economy of Cities, but that is incorrect on two counts. First, it wasn’t a city but really just a very large village. (The distinction to archaeologists is that cities have not just size but also a discernible administrative structure.) Even more pertinently, other communities—Jericho in Palestine, Mallaha in Israel, Abu Hureyra in Syria—are now known to be considerably older. None, however, would prove stranger than Çatalhöyük.
Vere Gordon Childe, father of the Neolithic Revolution, didn’t quite live long enough to learn about Çatalhöyük. Shortly before its discovery, he made his first visit home to Australia in thirty-five years. He had been away for well over half his lifetime. While walking in the Blue Mountains, he either fell to his death or jumped. In either case, he was found at the bottom of an eminence called Govett’s Leap. A thousand feet above, a passerby found his jacket carefully folded, with his glasses, compass, and pipe neatly arranged on top.
Childe would almost certainly have been fascinated with Çatalhöyük because almost nothing about the place made sense. The town was built without streets or lanes. The houses huddled together in a more or less solid mass. Those in the middle of the mass could be reached only by clambering over the roofs of many other houses, all of differing heights—a staggeringly inconvenient arrangement. There were no squares or marketplaces, no municipal or administrative buildings—no signs of social organization at all. Each builder put up four new walls, even when building against existing walls. It was as if the inhabitants hadn’t got the hang of collective living yet. It may well be that they hadn’t. It is certainly a vivid reminder that the nature of communities and the buildings within them is not preordained. It may seem to us natural to have doors at ground level and houses separated from one another by streets and lanes, but the people of Çatalhöyük clearly saw it another way altogether.
No roads or tracks led to or from the community either. It was built on marshy ground, on a floodplain. For miles around was nothing but space, and yet the people packed themselves densely together as if pressed by incoming tides on all sides. Nothing at all indicates why people should have congregated there in the thousands when they might have spread out across the surrounding countryside.
The people farmed—but on farms that were at least seven miles away. The land around the village provided poor grazing, and offered nothing at all in the way of fruits, nuts, or other natural sources of nutrition. There was no wood for fuel either. In short, there wasn’t any very obvious reason for people to settle there at all, and yet clearly they did in large numbers.
Çatalhöyük was not a primitive place by any means. It was strikingly advanced and sophisticated for its time—full of weavers, basketmakers, carpenters, joiners, beadmakers, bowmakers, and many others with specialized skills. The inhabitants practiced art of a high order and produced not only fabrics but also a variety of stylish weaves. They could even produce stripes—evidently not an easy thing to do. Looking good was important to them. It is remarkable to think that people thought of striped fabrics before they thought of doors and windows.
All this is just another reminder of how little we know, or can even begin to guess, about the lifestyles and habits of people from the ancient past. And with that thought in mind let’s go into the house at last and begin to see how little we know about it, too.
* In Britain corn has meant any grain since the time of the Anglo-Saxons. It also came to signify any small round object, which explains the corns on your feet. Corned beef is so called because originally it was cured in kernels of salt. Because of the importance of maize in America, the word corn became attached to maize exclusively in the early eighteenth century.