PART II WE ALMOST DIDN’T MAKE IT

6. THE AFRICAN BOTTLENECK

MOST OF US are familiar with the basic outlines of the human evolutionary story. Our distant ancestors were a group of apelike creatures who started walking upright millions of years ago in Africa, eventually developing bigger brains and scattering throughout the world to become the humans of today. But there’s another story that has received less attention. Advances in genetics have given us a sharper understanding of what happened between the “walking upright” and the “buying the latest tablet computer” chapters of the tale.

Written into our genomes is the signature left behind by an event when the early human population dwindled to such a small size that our ancient ancestors living in Africa may have come close to extinction. Population geneticists call events like these bottlenecks. They’re periods when the diversity of a species becomes so constrained that evidence of genetic culling is obvious even thousands of generations later. Sometimes the shrinking of a population is the result of mass deaths, and indeed, there is evidence that humans may have been fleeing a natural disaster when we walked out of Africa roughly 70,000 years ago. But our species probably experienced multiple genetic bottlenecks beginning as far back as 2 million years ago. And those earlier bottlenecks were caused by a force far more powerful than mass death: the process of evolution itself.

In fact, the African bottlenecks are an example of the paradoxical nature of human survival. They provide evidence that humans nearly died out many times, but also tell a story about how we evolved to survive in places very far away from our evolutionary home in Africa.

The Fundamental Mystery of Human Evolution

Given our enormous, globe-spanning population size, humans have remarkably low genetic diversity—much lower than other mammal species. All 7 billion of us are descended from a group of people who numbered in the mere tens of thousands. When population geneticists describe this peculiar situation, they talk about the difference between humanity’s actual population size and our “effective population size.” An effective population size is a subgroup of the actual population that reasonably represents the genetic diversity of the whole. Put another way, humanity is like a giant dance party full of billions of diverse people. But population geneticists, elite party animals that they are, have managed to find the one ideal VIP area that contains a small group of people who very roughly capture the diversity of the party as a whole. In theory, that room contains the party’s effective population size. If they all started randomly having sex with each other, their children might loosely reproduce the diversity and genetic drift of our actual, billions-strong population.

The weird part is that compared with our actual population size, the human effective population in that VIP area is very low. In fact, today’s human effective population size is estimated at about 10,000 people. As a point of comparison, the common house mouse is estimated to have an effective population size of 160,000. How could there be so many of us, and so little genetic diversity?
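To see why a bottleneck dominates, it helps to know that over many generations the long-term effective population size works out to roughly the harmonic mean of the sizes of each generation, and a harmonic mean is dragged down by its smallest terms. Here is a minimal sketch in Python; the population history is invented purely for illustration:

```python
# Long-term effective population size is approximately the harmonic mean
# of per-generation population sizes: N_e = t / sum(1/N_i).
# The history below is hypothetical, chosen only to show the effect.

def harmonic_ne(sizes):
    """Harmonic-mean approximation of long-term effective population size."""
    return len(sizes) / sum(1.0 / n for n in sizes)

# 1,000 generations at a 10,000-person bottleneck, then 1,000 generations
# after explosive growth to a billion people.
history = [10_000] * 1_000 + [1_000_000_000] * 1_000

print(round(harmonic_ne(history)))  # prints 20000: the bottleneck wins
```

Even though half of this hypothetical history is spent at a billion people, the effective size barely doubles the bottleneck value, which is the same logic behind a species of billions carrying the genetic signature of tens of thousands.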

This is one of the fundamental mysteries of human evolution, and is the subject of great debate among scientists. There are a few compelling theories, which we’ll discuss shortly, but there is one point that nearly all evolutionary biologists will agree on. We are descended from a group of proto-humans who were fairly diverse 2 million years ago, but whose diversity crashed and passed through a bottleneck while Homo sapiens evolved. That crash limited our gene pool, creating the small effective population size we have today. Does some kind of terrible disaster lurk in the human past? An event that could have winnowed our population down to a small group of survivors, who became our ancestors? That’s definitely one possibility. Evolutionary biologist Richard Dawkins has popularized the idea that the population crash came in the wake of the Toba catastrophe, a supervolcanic eruption that rocked Indonesia about 74,000 years ago. It’s possible this enormous blast cooled the African climate for many years, destroying local food sources and starving everybody to death before sending fearful bands of Homo sapiens running out of Africa.

But, as John Hawks, an anthropologist at the University of Wisconsin, Madison, put it to me, a careful examination of the genetic evidence doesn’t reveal anything as dramatic as a single megavolcanic wipeout. Instead of some Hollywood special-effects extravaganza, human history was more like a perilous immigration story. To understand how immigration can turn a vast population into a tiny one, we need to travel back a few million years to the place and time where we evolved.

The Human Diaspora

Humanity’s first great revolution, according to the anthropologist Ian Tattersall of the American Museum of Natural History, was when it learned to walk upright, more than 5 million years ago. At the time, we were part of a hominin group called Australopithecus that shared a very recent common ancestor with apes. Australopithecines hailed from the temperate, lush East African coast. They were short—about the size of an eight-year-old child—and covered in a light layer of fur. They may have started walking on their hind legs because it helped them hunt and find the fruits that dominated their diets. Whatever the reason, walking upright was unique to Australopithecus. Her fellow primates continued to prefer a four-legged gait, as they do today.

Over the next few million years, Australopithecus walked from the tip of what is now South Africa all the way up to where Chad and Sudan are today. Our ancestors also grew larger skulls, anticipating a trend that has continued throughout human evolution. By about 2 million years ago, Australopithecus was evolving into a very human-looking hominin called Homo ergaster (sometimes called Homo erectus). Similar in height to humans today, a couple of H. ergaster individuals could put on jeans and T-shirts and blend in fairly well on a typical city street—as long as they wore hats to hide their slightly prominent brows and sloping foreheads. Another thing that would make our H. ergasters feel perfectly comfortable loping down Market Street is the way so many in the crowd around them would be clutching small, hand-sized tools. Our tools may contain microchips whose components are the products of advanced chemical processing, but the typical smartphone’s size and heft are comparable to the carefully crafted hand axes that anthropologists have identified as a key component of H. ergaster’s tool kit. H. ergaster wouldn’t need anyone to explain the meat slowly cooking over low flames in kebab stands, either: There’s evidence that their species had mastered fire 1.5 million years ago.

In this map, you can see the different waves of human expansion out of Africa, starting over one million years ago and continuing up into the Homo sapiens diaspora about 100,000 years ago. (illustration credit ill.6)

There are many ways to tell the story of what happened to H. ergaster and her children, who eventually built those smartphones and invented the tasty perfection that is a kebab. H. ergaster was one of many bipedal, tool-using hominins roaming southern and eastern Africa who had evolved from Australopithecus. The fossil record from this time is fairly sparse, so we can’t be sure how many groups there were, what kinds of relationships they formed with each other, or even (in some cases) which ones evolved into what. But each group had its own unique collection of genes, some of which still survive today in Homo sapiens. And those are the groups whose paths we’re going to follow.

This path is both a physical and a genetic one. A visitor to the American Museum of Natural History in New York can track its progress in fossils. Glass-enclosed panoramas offer glimpses of what we know about how H. ergaster and her progeny created hand axes by striking one stone against another until enough pieces had flaked off that only a sharp blade was left. Reconstructed early human skeletons stand near sparse fossils and tools, a reminder that our ideas about these people come, literally, from mere fragments of their bodies and cultures. Ian Tattersall has spent most of his career poring over those fragments, trying to reconstruct the tangled root structure of humanity’s evolutionary tree.

One thing we know for sure is that early humans were wanderers. Not only did they spread across Africa, but they actually crossed out of it many times, starting about 2 million years ago. Anthropologists can track the journeys taken by H. ergaster and her progeny by tracing the likely paths between what remains of these peoples’ campsites and villages, often identifying the group who lived there based on the kinds of tools they used.

Tattersall believes there were at least three major radiations, or population dispersals, out of Africa. Despite the popularity of Dawkins’s Toba volcano theory, Tattersall believes there was “no environmental reason” for these immigrations. Instead, they were all spurred by evolutionary developments that allowed humans to master their environments. “The first radiation seems to have coincided with a change in body structure,” he mused. Members of H. ergaster had a more modern skeletal structure featuring longer legs than their hominin cohorts, which meant they could walk quickly and efficiently over a variety of terrains. Tattersall explained that there were environmental changes in Africa during this time, but not enough to suggest that humans fled environmental destruction to greener pastures. Instead they were simply well suited to explore “unfamiliar environments, ones very unlike their ancestral environments,” he said. H. ergaster’s rolling gait was an adaptation that allowed the species to continue adapting, by spreading into new lands where other hominins literally could not tread.

As early humans walked into new regions, they separated into different, smaller bands. Each of these bands continued to evolve in ways that suited the environments where they eventually settled. We’re going to focus on four major players in this evolutionary family drama: our early ancestor H. ergaster and three siblings she spawned—Homo erectus, Homo neanderthalensis, and Homo sapiens.

H. erectus was likely the evolutionary product of that first exodus out of Africa that Tattersall described. About 1.8 million years ago, H. erectus crossed out of Africa through what is today Egypt and spread from there all the way across Asia. These hominins soon found themselves in a very different environment from their siblings back in Africa; the climate was cold and snowy, and the steppes were full of completely unfamiliar wildlife. Over the millennia, H. erectus’s skull shape changed and so did her tool sets. We can actually track how our ancestors’ tools changed more easily than how their bodies did because stone preserves better than bone. Scientists have reconstructed the spread of H. erectus by unearthing caches of tools whose shapes are quite distinct from what other groups used. From what we can piece together, it seems that H. erectus founded cultures and communities that lasted for hundreds of thousands of years, and spread throughout China and down into Java.

Over the next million years, other groups of humans followed in H. erectus’s footsteps, walking through Egypt to take their siblings’ route out of Africa. But as the Stanford paleoanthropologist Richard Klein told me, these journeys probably weren’t distinct waves of migration. Walking in small groups, these humans were slowly expanding the boundaries of the hominin neighborhood.

Fossil remains in Europe suggest that about 500,000 to 600,000 years ago, some of H. ergaster’s progeny, on emerging from Africa, decided to go left instead of right, wandering into the western and central parts of the Eurasian continent. These Europeans evolved into H. neanderthalensis. They often set up homes in generously sized cave systems, and there’s evidence that some groups lived for dozens of generations in the same caves, scattered throughout Italy, Spain, England, Russia, and Slovenia, among other countries. Neanderthals evolved a thicker brow and more barrel-chested body to cope with the colder climate. We’ll talk more about them in the next chapter.

Back in Africa, H. ergaster was busy, too, establishing home bases all over the coasts of the continent, reaching from southern Africa all the way up to regions that are today Algeria and Morocco. By 200,000 years ago, H. ergaster’s skeletal shape was indistinguishable from that of modern humans. A species we would recognize as H. sapiens had emerged. And that’s when human beings made their next evolutionary leap—one that perfectly complemented our ability to walk upright into new domains.

How We Evolved to Tell Stories

“When Homo sapiens came along there was something totally radical about it,” Tattersall enthused. “For a hundred thousand years, Homo sapiens behaved in basically the same ways its ancestors had. But suddenly something happened that started a different pattern.” Put simply, humans started to use the giant brains they’d evolved to fit inside their gradually enlarging craniums. What changed? Tattersall said there are no easy answers, but evolution often works in fits and starts like that. For example, birds evolved feathers millions of years before they started flying, and animals had limbs long before they started walking. “We had a big brain with symbolic potential before we used it for symbolic thought,” he concluded. In what anthropologists call a cultural explosion over the past 100,000 years, humans developed complex symbolic communication, from language and art to fashion and complex tools. Instead of looking at the world as a place to avoid danger and score food, humans disassembled it into mental symbols that allowed us to imagine new worlds, or new versions of the world we lived in.

Humans’ new facility with symbols allowed us to learn about the world around us from other humans rather than starting from scratch with direct observations each time we went to a new place. Like walking, symbolic thought is an adaptation that leads to more adaptations. Modern humans could venture into new territory, discover its resources and perils, then tell other bands of humans about it. They might even pass along designs for tools that helped us gain access to foods specific to a certain area, like crushers for nuts or scoops for tubers. Aided by our new capacity for imagination, those bands of humans could familiarize themselves with alien regions before ever visiting them. For the first time in history, people could figure out how to adapt to a place before arriving there—just by hearing stories from their comrades. Symbolic thought is what allowed us to thrive in environments far from warm, coastal Africa, where we began. It was the perfect evolutionary development for a species whose body propelled us easily into new places. Indeed, one might argue that the farther we wandered, the more we evolved our skills as storytellers.

Let’s go back, for a moment, to that first radiation out of Africa, nearly 2 million years ago when H. ergaster, with her small but effective tool kit, crossed into the Sinai Peninsula. At roughly the same time, we find evidence of humanity’s first genetic bottleneck. And yet, as Tattersall and many others have pointed out, there is no evidence of a giant disaster thinning the population, leaving the survivors to flee across the Middle East and Asia. The bottleneck is clearly a sign of a population crash, but what caused it? As I said earlier, the effective population size for H. sapiens is estimated at roughly 10,000 individuals; but the University of Utah geneticist Chad Huff recently argued that soon after H. ergaster left, our effective population size was about 18,500. It’s likely this bottleneck is actually a record of human groups growing smaller as they thinned out across the Eurasian continent, meeting adversity every step of the way. At the same time, according to anthropologist John Hawks, the bottleneck is a mark of evolutionary changes that could only happen to a population that was always on the move.

It started with that first trek out of Africa, which split humanity into several groups. As Hawks explained in a paper he published with colleagues in 2000, one cause for a genetic bottleneck can be speciation, or the process of one species splitting into two or more genetically distinct groups. We’ve already touched on how H. ergaster evolved into at least three sibling groups, but that’s a dramatic oversimplification. For example, H. ergaster likely evolved into a group called Homo heidelbergensis in Africa, which later split into H. sapiens on one branch and, on another, the lineage that became the Neanderthals and their close relatives the Denisovans. There are many complexities in the lineage of H. erectus, too, especially once the group reached Asia. Evolution is a messy process, with many byways and dead ends. By the time H. ergaster reached the Sinai, the group would have undergone at least one speciation event—the one that led to early H. erectus. That means only a subset of H. ergaster’s genes survived in H. erectus, and a different subset survived in the groups who stayed behind in Africa. If these groups remained small, and there’s ample reason to believe that they did, you now have two isolated gene pools, each less diverse than the original one. That’s how speciation creates a genetic bottleneck.

But even without speciation events, humans’ habit of walking all over the place would have caused a bottleneck. The very act of wandering far from home, into many dangers, can shrink both the population and the gene pool over the course of generations. Population geneticists call this process the founder effect. To see how the founder effect works, let’s follow a band of H. erectus passing through lands edging the Mediterranean Sea and finding its way into India. Remember, this isn’t one long trek. Maybe the coast of today’s state of Gujarat appeals to a few members of H. erectus, and so a band decides to settle down for a while in that region. This settlement is called a founder group, and it has less diversity than the group it came from simply because it has fewer members. In the next generation, a new group splits off from the Gujaratis and heads south along the coast. Generally we assume that each time a band left for untouched lands, another group stayed behind. So each new band becomes a founder population in its own right, with less genetic diversity than the group back in Gujarat—even if you factor in some intermarriage between different founder groups. Multiple founder events in a row would have had the odd effect of increasing humanity’s population while decreasing human genetic diversity. Now, consider the fact that our H. erectus explorers in India are a microcosm of the way all humans spread across the Earth. After hundreds of generations of wandering, humans managed to increase their populations gradually while retaining the low diversity caused by genetic bottlenecks.
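The serial-founder process is easy to caricature in a few lines of simulation: give a source population a set of genetic variants, then let each new settlement be founded by a small band sampled from the last. The band size, locus count, and starting frequencies below are invented purely for illustration, not estimates of any real H. erectus group:

```python
import random

random.seed(1)  # fixed seed so the toy run is repeatable

def found(freqs, n_founders):
    """One founder event: n_founders settlers (2n gene copies) are drawn
    from the parent group, so each allele frequency drifts by sampling."""
    copies = 2 * n_founders
    return [sum(random.random() < p for _ in range(copies)) / copies
            for p in freqs]

def heterozygosity(freqs):
    """Mean expected heterozygosity 2p(1-p) across loci: a standard
    measure of genetic diversity."""
    return sum(2 * p * (1 - p) for p in freqs) / len(freqs)

# 500 independent loci, each variant starting at frequency 0.5.
freqs = [0.5] * 500
print(f"start: {heterozygosity(freqs):.3f}")  # start: 0.500

# Ten successive founder events of 50 individuals each, a caricature of
# bands splitting off and walking on. Diversity ratchets downward.
for step in range(10):
    freqs = found(freqs, 50)
print(f"after 10 founder events: {heterozygosity(freqs):.3f}")
```

Each founder event is expected to shave about 1/(2N) off the remaining diversity, so small bands splitting off again and again produce exactly the pattern in the text: a growing total population with steadily shrinking genetic variety.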

Back in Africa, early humans were also speciating and wandering, forming new bands, each of whose genetic diversity was lower than the last. But when a small band of hominins called H. sapiens evolved, about 200,000 years ago, something strange happened. Tattersall believes that humans underwent some kind of genetic change that spurred a cultural shift. Suddenly, between 100,000 and 50,000 years ago, the fossil record is full of sculpture, shell jewelry, complex tools made from multiple kinds of material, ochre-and-carbon cave paintings, and elaborate burial sites. Possibly, as Randall White, an anthropologist at New York University, suggests in his book Prehistoric Art, humans were using jewelry and clothing to proclaim allegiance with particular groups. H. sapiens wasn’t just interacting with the world. They were using symbols to mediate their relationship with it. But why the sudden shift from a hominin with the capacity for cultural expression to a hominin who actively created culture?

It could be that one small group of H. sapiens developed a genetic mutation that led to experiments with cultural expression. Then, the capacity to do it spread via mating between groups because storytelling and symbolic thought were invaluable survival skills for a species that regularly encountered unfamiliar environments. Using language and stories, one group could explain to another how to hunt the local animals and which plants were safe to eat. Armed with this information, humans could conquer territory more quickly. Any group that could do this would have a higher chance of surviving relocation time and again. The more those groups survived, the more able they were to pass along any genetic predisposition for symbolic communication.

Perhaps H. sapiens’ knack for symbolic culture was also a result of sexual selection, in which certain genes spread because their bearers are more attractive to the opposite sex. Put simply, these attractive people get laid more often, and therefore have more chances to spread their genes to the next generation. In his book The Mating Mind, evolutionary psychologist Geoffrey Miller argues that among ancient humans, the most attractive people were good with language and tools. The result would be a population in which sexual selection created successively more symbol-oriented people. Two anthropologists, Gregory Cochran and Henry Harpending, amplify this point. They argue that some of the genes that spread like wildfire through the human population over the past 50,000 years are associated with cranial capacity—brain size—and language ability. “Life is a breeding experiment,” Cochran and Harpending write in their book The 10,000 Year Explosion.

Our capacity for symbolism evolved quickly, partly because our mating choices would have been shaped by our needs as creatures who evolved to survive by founding new communities. Over the past million years, humans bred themselves to be the ultimate survivors, capable of both exploring the world and adapting to it by sharing stories about what we found there.

How Can We Possibly Know All This?

A lot of the evidence we have for the routes that humans took out of Africa comes from objects and places you can see with your own eyes. Paleontologists have found our ancestors’ ancient bones, as well as their tools. To figure out the ages of these tools and skeletons, we use the same kinds of dating techniques that geologists use to discover the history of rocks. In fact, when an anthropologist talks about “dating the age of fossils,” he or she isn’t actually talking about the bones themselves—to date old bones, anthropologists carefully excavate them and take samples of the rock surrounding them. Then they pin a date on those rocks, under the assumption that the bones come from roughly the same era as the rocks or sand that covered them up. Basically, we date fossils by association, which is why you’ll often hear scientists suggesting that a particular fossil might be between 100,000 and 80,000 years old. Though we can’t pin an exact month or year on each fossil discovery, we do have ample evidence that certain humans like H. ergaster came before other humans like H. erectus in evolutionary and geological time.

Over the past decade, however, the study of ancient bones has been revolutionized by new technologies for sequencing genomes, including DNA extracted from the fossils of Neanderthals and other hominins who lived in the past 50,000 years (sadly, we don’t have the ability to sequence DNA from Australopithecus or H. ergaster bones—their DNA is too decayed). At the Max Planck Institute in Leipzig, Germany, an evolutionary geneticist named Svante Pääbo and his team have developed technology to extract nearly intact genomes from Neanderthal bones. First they grind the bones to dust and chemically amplify whatever DNA molecules they can find, then analyze this genetic material using the same kinds of sequencers that decode the DNA of living creatures today. We’ll deal with the Neanderthal genome more in the next chapter, but suffice it to say that we have pretty solid evidence about the genetic relationships between H. sapiens and its sibling species H. neanderthalensis.

A lot of the evidence for humans’ low genetic diversity has been made possible by DNA-reading technologies developed since the first human genome was sequenced, in the early 2000s. Though that first human genome took over a decade to sequence, we now have machines capable of reading the entire set of letters making up one genome in just a few hours. As a result, population geneticists are accumulating a diverse sampling of sequenced human genomes, from people all over the world. Many of these genomes are collected into data sets that scientists can feed into software that does everything from making very simple comparisons between two genomes (literally analyzing the similarities and differences between one long string of letters and another) to running extremely complex simulations of how these genomes might have evolved over time.

One of the first pieces of genetic evidence for the serial-founder theory emerged when scientists had collected DNA sequences from enough people that we could start to analyze genetic diversity in different regions all over the world. Geneticists discovered a telltale pattern: People born in Africa and India tend to have much greater genetic diversity than people born elsewhere. This is precisely the kind of pattern you’d expect to see in a world population that grew out of founder groups originating in Africa. Remember, each successive founder group has less and less genetic diversity. So people descended from groups that stayed in Africa or India are from early founder groups. People in Europe, Australia, Asia, and the Americas were the result of hundreds of generations of founder effects—so we’d expect them to have less genetic diversity. When you add this genetic evidence to the physical evidence from fossils and tools left behind by people leaving Africa, you wind up with a fairly solid theory that founder effects created our genetic bottleneck.

An Eruption That Launched Humanity

Though it’s likely that the genetic bottlenecks we observe in the human population were caused mostly by founder effects and sexual selection, there is some evidence that the final human radiation out of Africa was precipitated by a catastrophe. Ancient humans had been crossing the Sinai out of Africa and into the rest of the world for over a million years, but roughly 80,000 years ago there was an extremely large migration that changed the world and every human on it. H. sapiens, a human with language, clothing, and sophisticated tools, took over Africa, then migrated beyond its borders. Certainly it’s possible that this wave of human immigrants was spurred by mass deaths in the wake of the Toba eruption. But that’s debatable.

What’s certain is another explosion that nobody denies: the one in human symbolic communication. Our capacity for culture is what allowed us to survive in the perilous lands beyond the warm, fecund East African regions where Australopithecus first stood up. We never stayed in any one place for long. We moved into new places, founding new communities. And when we evolved complex symbolic intelligence, our growing facility with tools and language made these migrations easier. We could take advantage of many kinds of environments, teaching each other about their bounties and dangers in advance.

As H. sapiens poured off the continent of our birth, we discovered lands inhabited by our sibling hominins. We had to adapt to a world that already had humans in it. What came next will take us into one of the most controversial areas of population genetics and human evolutionary history.

7. MEETING THE NEANDERTHALS

NEANDERTHALS WERE HUMANS who went extinct between 20,000 and 30,000 years ago. Though there is some debate about who these people were, there is no question that there are none left. All that remains of the hundreds of Neanderthal groups that roved across Europe and Central Asia are a handful of ambiguous funeral sites, bones, tools, and pieces of art—along with some DNA that modern humans inherited from them. How can we avoid meeting the Neanderthals’ fate? That depends on what you think wiped out these early humans in the millennia after they met H. sapiens.

By 40,000 years ago, humans had spread in waves across most of the world, from Africa to Europe, Asia, and even Australia. But these humans were not all perfectly alike. When some groups of H. sapiens poured out of Africa, they walked north, then west, into thickly forested country. There they came face-to-face with other humans, stockier and lighter skinned than themselves, who had been living for thousands of years in the cold wilds of Europe, Russia, and Central Asia. Today we call these humans Neanderthals, a name derived from the Neander Valley caves in Germany where the first Neanderthal skull was identified in the nineteenth century.

Neanderthals were not one unified group. They had spread far enough across Europe, Asia, and the Middle East that they formed regional groups, something like modern human tribes or races, who probably looked fairly different from each other. Neanderthals used tools and fire, just as H. sapiens did, and the different Neanderthal groups probably had a variety of languages and cultural traditions. But in many ways they were dramatically unlike H. sapiens, leading isolated lives in small bands of 10 to 15 people, with few resources. Their tool kit was limited but effective, including spears for hunting and sharpened flints for scraping hides, cutting meat, and cracking bones. Unlike H. sapiens, who ate a wide range of vegetables and meat, Neanderthals were mostly meat-eaters who endured often horrifically difficult seasons with very little food. Still, there is evidence that they cared for each other through hardship: fossils retrieved from Shanidar Cave in Iraq include the skeleton of a Neanderthal who had been terribly injured, with a smashed eye socket and severed arm, whose bones had nevertheless healed over time. Like humans today, these hominins nursed each other back to health after life-threatening injuries.

Roughly 10,000 years after their first meeting with H. sapiens, all the Neanderthal groups were extinct and H. sapiens was the dominant hominin on Earth. What happened during those millennia when H. sapiens lived alongside creatures who must have looked to them like humanoid aliens?

A few decades ago, most scientists would have answered that it was a nightmare. Stanford’s Richard Klein, who spent years in France comparing the tools of Neanderthals and early H. sapiens, lowered his voice a register when I recently asked him to describe the meeting between these hominin groups. “You don’t like to think about a holocaust, but it’s quite possible,” he said. He referred to the long-standing belief among many anthropologists that H. sapiens exterminated Neanderthals with superior weapons and intellect. For a long time, there seemed to be no other explanation for the rapid disappearance of Neanderthals after H. sapiens arrived in their territories.

Today, however, there is a growing body of evidence from the field of population genetics that tells a very different story about what happened when the two groups of early humans lived together, sharing the same caves and hearths. Anthropologists like Milford Wolpoff, of the University of Michigan, and John Hawks have suggested that the two groups formed a new, hybrid human culture. Instead of exterminating Neanderthals, their theory goes, H. sapiens had children with them until Neanderthals’ genetic uniqueness slowly dissolved into H. sapiens over the generations. This idea is supported by compelling evidence that modern humans carry Neanderthal genes in our DNA.

Regardless of whether H. sapiens murdered or married the Neanderthals they met in the frozen forests of Europe and Russia, the fact remains that our barrel-chested cousins no longer walk among us. They are a group of humans who went extinct. The story of how that happened is as much about survival as it is about destruction.

The Neanderthal Way of Life

We have only fragmentary evidence of what Neanderthal life was like before the arrival of H. sapiens. Though they would have looked different from H. sapiens, they were not another species. Some anthropologists call Neanderthals a “subspecies” to indicate their evolutionary divergence from us, but there is strong evidence that Neanderthals could and did interbreed with H. sapiens. Contrary to popular belief, Neanderthals probably weren’t swarthy; it’s likely that these early humans were pale-skinned, possibly with red hair. We know that they used their spears to hunt mammoths and other big game. Many Neanderthal skeletons are distorted by broken bones that healed, often crookedly; this suggests that they killed game at close quarters, sustaining many injuries in the process. They struggled with dramatic climate changes too. The European and Asian climates swung between little ice ages and warmer periods during the height of Neanderthal life, and these temperature changes would have constantly pushed the Neanderthals out of familiar hunting grounds. Many of them took shelter from the weather in roomy caves overlooking forested valleys or coastal cliffs.

Though their range extended from Western Europe to Central Asia, the Neanderthal population was probably quite small—a generous estimate would put it at 100,000 individuals total at its apex, and many scientists believe it could have been under 10,000. By examining the growth of enamel on Neanderthal teeth, anthropologists have determined that many suffered periods of extreme hunger while they were young. This problem may have been exacerbated by their meat-heavy diets. When mammoth hunting didn’t go well, or a particularly cold season left their favored game skinny or sick, the Neanderthals would have gone through months of malnutrition. Though Neanderthals buried their dead, made tools, and (at least in one case) built houses out of mammoth bones, we have no traditional evidence that they had language or culture as we know them. Usually such evidence comes in the form of art or symbolic items left behind. Neanderthals did make art and complex tools after meeting H. sapiens, but we have yet to find any art that is unambiguously Neanderthal in origin.

Still, there are intriguing hints. A 60,000-year-old Neanderthal grave recently discovered in Spain suggests that Neanderthals may have had symbolic communication before H. sapiens arrived. Researchers discovered the remains of three Neanderthals who appeared to have been gently laid in identical positions, their arms raised over their heads, then covered in rocks. The severed paws of a panther were found with the bodies, heightening the impression that the discovery represented a funeral ritual complete with “burial goods,” or symbolic items placed in the graves. Erik Trinkaus, an anthropologist at Washington University in St. Louis, says this site shows that Neanderthals might have had symbolic intelligence like modern humans.

Gravesites like these have led many scientists, including Trinkaus, to believe that Neanderthals talked or even sang. But we haven’t found enough archaeological evidence to sway the entire scientific community one way or the other.

By contrast, the H. sapiens groups who lived at the time of first contact with Neanderthals left behind ample evidence of symbolic thought. Bone needles attest to the fact that H. sapiens sewed clothing, and pierced shells suggest jewelry. Traces of red-ochre mixtures, which could have been used for anything from paint or dye to makeup, have even been found in many H. sapiens campsites. Added together, these bits of evidence suggest that H. sapiens groups weren’t just using tools for survival; they were using them for adornment. And culture as we know it probably started with those simple adornments.

Looked at from the perspective of Neanderthals, then, there might have been a vast gulf between themselves and the newly arrived H. sapiens. The newcomers not only looked different—they were taller, slimmer, and had smaller skulls—but they probably chattered in an incomprehensibly complex language and wore bizarre garments. Would Neanderthals have tried to communicate with these people, or invited them to a dinner of mammoth meat?

For anthropologists like Klein, who spoke about a Neanderthal holocaust, the answer is an emphatic no. He’s part of a school of anthropological thought that holds that H. sapiens would have met the Neanderthals with nothing but hate, disgust, and indifference to their plight. After those Neanderthals watched H. sapiens arrive, the next chapter in their lives would have been marked by bloodshed and starvation as H. sapiens murdered and outhunted them with their superior weaponry. Neanderthals were so poor, and had such a small population, that their extinction was inevitable.

This story might sound familiar to anyone versed in the colonial history of the Americas. It’s as if H. sapiens is playing the role of Europeans arriving in their ships, and Neanderthals are playing that of the soon-to-be-exterminated natives. But Klein sees a sharp contrast between Neanderthals and the natives that Europeans met in America. When H. sapiens arrived, he asserted, “there was no cultural exchange” because the Neanderthals had no culture. Imagine what might have happened if the Spanish had arrived in the Americas, but the locals had no wealth, science, sprawling cities, nor vast farms. The Neanderthals had nothing to trade with H. sapiens, and so the newcomers saw them as animals. Neanderthals may have had fleeting sexual relationships with H. sapiens here and there, admitted Klein, but “modern human males will mate with anything.” Tattersall agreed. “Maybe there was some Pleistocene hanky-panky,” he joked. But it wasn’t a sign of cultural bonding. For anthropologists like Klein and Tattersall, any noncombative relationships forged between the two human groups were more like fraternization than fraternity.

But there is a counter-narrative told by a new generation of anthropologists. Bolstered by genetic discoveries that have revealed traces of Neanderthal genes in the modern human genome, these scientists argue that there was a lot more than hanky-panky going on. Indeed, there is evidence that the arrival of H. sapiens may have dramatically transformed the impoverished Neanderthal culture. Some Neanderthal cave sites hold a mixture of traditional Neanderthal tools and H. sapiens tools. It’s hard to say whether these remains demonstrate an evolving hybrid culture, or if H. sapiens simply took over Neanderthal caves and began leaving their garbage in the same pits that the Neanderthals once used. Still, many caves that housed Neanderthals shortly before the group went extinct are full of ornaments, tools, and even paints. Were they emulating their H. sapiens counterparts? Had they become part of an early human melting pot, engaging in the very cultural exchange that Klein and Tattersall have dismissed?

Extermination and Assimilation

The complicated debate over what happened to Neanderthals can be boiled down to two dominant theories: H. sapiens either destroyed the other humans or joined up with them.

The “African replacement” theory, sometimes called the recent African origins theory, holds that H. sapiens charged out of Africa and crushed H. neanderthalensis underfoot. This fits with Klein’s account of a Neanderthal holocaust. Basically, H. sapiens groups replaced their distant cousins, probably by making war on them and taking over their territories. This theory is simple, and has the virtue of matching the archaeological evidence we find in caves where Neanderthal remains are below those of H. sapiens, as if modern humans pushed their Neanderthal counterparts out into the cold to die.

In the late 1980s, a University of Hawaii biochemist named Rebecca Cann and her colleagues found a way to support the African replacement theory with genetic evidence, too. Cann’s team published the results of an exhaustive study of mitochondrial DNA, small bits of genetic material that pass from mothers to children nearly unchanged. They discovered that all humans on Earth could trace their genetic ancestry back to a single H. sapiens woman from Africa, nicknamed Mitochondrial Eve. If all of us can trace our roots back to one African woman, then how could we be the products of crossbreeding? We must have rolled triumphantly over the Neanderthals, spreading Mitochondrial Eve’s DNA everywhere we went. But mitochondrial DNA offers us only a small part of the genetic picture. When scientists sequenced the full genomes of Neanderthals, they discovered several DNA sequences shared by modern humans and their Neanderthal cousins.

Besides, how likely is it that a group of H. sapiens nomads would attack a community of Neanderthals? These were explorers, after all, probably carrying their lives on their backs. Neanderthals may not have had a lot of tools, but they did have deadly spears they used to bring down mammoths. They had fire. Even with H. sapiens’ greater numbers, would these interlopers have had the resources to mount a civilization-erasing attack? Rather than starting a resource-intensive war against their neighbors, many H. sapiens could have opted to trade with the odd-looking locals, and eventually move in next to them. Over time, through trade (and, yes, the occasional battle) the two groups would have shared so much culturally and genetically that it would become impossible to tell them apart.

This is precisely the kind of thinking that animates what’s called the multiregional theory of human development. Popularized by Wolpoff and his colleague John Hawks, this theory fits with the same archaeological evidence that supports the African replacement theory—it’s just a very different interpretation.

Wolpoff’s idea hinges on the notion that the ancestors of Neanderthals and H. sapiens didn’t leave Africa as distinct groups, never to see each other again until the fateful meeting that Klein described with such horror. Instead, Wolpoff suggests, humans leaving Africa 1.8 million years ago forged a pathway that many other archaic humans walked—in both directions. Instead of embarking on several distinct migrations off the continent, humans expanded their territories little by little, essentially moving next door to their old communities rather than trekking thousands of kilometers to new homes. Indeed, the very notion of an “out of Africa” migration is based on an artificial political boundary between Africa and Asia, which would have been meaningless to our ancestors. They expanded to fill the tropical forests they loved, which happened to stretch across Africa and Asia during many periods in human evolution. Early humans would have been drifting back and forth between Africa, Asia, and Europe for hundreds of thousands of years. It was all just forest to Neanderthals and H. sapiens.

If scientists like Wolpoff are right—and Hawks has presented compelling genetic evidence to back them up—then H. sapiens probably didn’t march out of Africa all at once and crush all the other humans. Instead, they evolved all over the world through an extended kinship network that may have included Neanderthals as well as other early humans like Denisovans and H. erectus.

It’s important to understand that the multiregional theory does not suggest that two or three separate human lineages evolved in parallel, leading to present-day racial groups. That’s a common misinterpretation. Multiregionalism describes a human migration scenario similar to those we’re familiar with among humans today, where people cross back and forth between regions all the time. For multiregionalists, there were never two distinct waves of immigration, with one leading to Neanderthals, and the other packed with H. sapiens hundreds of thousands of years later. Instead, the migration (and evolution) of H. sapiens started 1.8 million years ago and never stopped.

Many anthropologists believe that the truth lies somewhere in between African replacement and multiregionalism. Perhaps there were a few distinct waves of migration, such anthropologists will concede, but H. sapiens didn’t “replace” the Neanderthals. Instead, H. sapiens bands probably assimilated their unusual cousins through the early human version of intermarriage.

Perhaps, when Neanderthals stood in the smooth stone entries to their caves and watched H. sapiens first entering their wooded valleys, they saw opportunity rather than a confusing threat. In this version of events, our ancient human siblings may have had few resources and lived a hardscrabble life, but they were H. sapiens’ mental equals. They exchanged ideas with the newcomers, developed ways of communicating, and raised families together. Their hybrid children deeply affected the future of our species, with a few of the most successful Neanderthal genes drifting outward into some of the H. sapiens population. Neanderthals went extinct, but their hybrid children survived by joining us.

Whether you believe that humans exterminated or assimilated Neanderthals depends a lot on what you believe about your own species. Klein doesn’t think Neanderthals were inferior humans doomed to die—he simply believes that early H. sapiens would have been more likely to kill and rape their way across Europe in a Neanderthal holocaust than to make alliances with the locals. As his comment about the sexual predilections of modern men makes clear, Klein is basing his theory on what he’s observed of H. sapiens in the contemporary world. Tattersall amplified Klein’s comments by saying that he thinks humans 40,000 years ago probably treated Neanderthals the way we treat each other today. “Today, Homo sapiens is the biggest threat to its own survival. And [the Neanderthal extinction] fits that picture,” he said. Ultimately, Tattersall believes that we wiped out the Neanderthals just the way we’re wiping ourselves out today.

Hawks, on the other hand, described a more complicated relationship between H. sapiens and Neanderthals. He believes that Neanderthals had the capacity to develop culture, but simply didn’t have the resources. “They made it in a world where very few of us would make it,” he said, referring to the incredible cold and food scarcity in the regions Neanderthals called home. Anthropologists, according to Hawks, often ask the wrong questions of our extinct siblings: “Why didn’t you invent a bow and arrow? Why didn’t you build houses? Why didn’t you do it like we would?” He thinks the answer isn’t that the Neanderthals lacked the intelligence, but that they lacked H. sapiens’ ability to share ideas between groups. Their bands were so spread out and remote that they didn’t have a chance to share information and adapt their tools to life in new environments. “They were different, but that doesn’t mean there was a gulf between us,” Hawks concluded. “They did things working with constraints that people today have trouble understanding.” Put another way, Neanderthals spent all day in often fatal battles to get enough food for their kids to eat. As a result, they didn’t have the energy to invent bows and arrows in the evening. Despite these limitations, they formed their small communities, hunted collectively, cared for each other, and honored their dead.

When H. sapiens arrived, Neanderthals finally had access to the kind of symbolic communication and technological adaptations they’d never been able to develop before. Ample archaeological evidence shows that they quickly learned the skills H. sapiens had brought with them, and started using them to adapt to a world they shared with many other groups who exchanged ideas on a regular basis. Instead of being driven into extinction, they enjoyed the wealth of H. sapiens’ culture and underwent a cultural explosion of their own. To put it another way, H. sapiens assimilated the Neanderthals. This process was no doubt partly coercive, the way assimilation so often is today.

More evidence for Hawks’s claims comes from Neanderthal DNA. Samples of their genetic material can reveal just what happened after all that Pleistocene hanky-panky. A group of geneticists at the Max Planck Institute, led by Svante Pääbo, sequenced the genomes of a few Neanderthals who had died less than 38,000 years ago. After isolating a few genetic sequences that appear unique to Neanderthals, they found evidence that a subset of these sequences entered the H. sapiens genome after the first contact between the two peoples. Though this evidence does not prove definitively that genes flowed from Neanderthals into modern humans, it’s a strong argument for an assimilationist scenario rather than extermination.

A big question for anthropologists has been whether H. sapiens comes from a “pure” lineage that springs from a single line of hominins like Mitochondrial Eve. The more the genetic evidence piles up, however, the more likely it seems that our lineage is a patchwork quilt of many peoples and cultures who intermingled as they spread across the globe. Present-day humans are the offspring of people who survived grueling immigrations, harsh climates, and Earth-shattering disasters.

Most anthropologists are comfortable admitting that we just don’t know what happened when early humans left Africa, and are used to revising their theories when new evidence presents itself. Klein’s influential textbook The Human Career is full of caveats about how many of these theories are under constant debate and revision. In 2011, for example, the anthropologist Simon Armitage published a paper suggesting that H. sapiens emerged from Africa as early as 200,000 years ago, settling in the Middle East. This flies in the face of previous theories, which hold that H. sapiens didn’t leave Africa until about 70,000 years ago. The story of how our ancestors emerged from their birthplaces in Africa turns out to be as complicated as a soap opera—and it likely includes just as much sex and death, too.

Who Survived to Tell the Tale?

Whether humans destroyed Neanderthals or merged with them, we’re left with a basic fact of anthropological history, which is that modern humans survived and Neanderthals did not. It’s possible that members of H. sapiens were better survivors than their hominin siblings because Neanderthals didn’t exchange symbolic information; they were too few, too scattered, and too impoverished to achieve a cultural critical mass the way their African counterparts did. But it seems that Neanderthals were still swept up into H. sapiens’ way of life in the end. Our Neanderthal siblings survive in modern human DNA because they formed intimate bonds with their new human neighbors.

Svante Pääbo, who led the Neanderthal DNA sequencing project, recently announced a new discovery that also sheds light on why H. sapiens might have been a better survivor than H. neanderthalensis. After analyzing a newly sequenced genome from a Denisovan, a hominin more closely related to Neanderthals than H. sapiens is, Pääbo’s team concluded that there were a few distinct regions of DNA that H. sapiens did not share with either Neanderthals or Denisovans. Several of those regions contain genes involved in the neural connections that human brains can form. In other words, it’s possible that H. sapiens’ greater capacity for symbolic thought is connected to unique strands of DNA that the Neanderthals didn’t have.

“It makes a lot of sense to speculate that what had happened is about connectivity in the brain, because … Neanderthals had just as large brains as modern humans had,” Pääbo said at a press conference in 2012 after announcing his discovery. “Relative to body size, they had even a bit larger brains [than H. sapiens]. Yet there is something special that happens with modern humans. It’s sort of this extremely rapid technological cultural development and large societal systems, and so on.” In other words, H. sapiens’ brains were wired slightly differently than their fellow hominins. And once Neanderthals merged with H. sapiens’ communities, bearing children with the new arrivals, their mixed offspring may have had brains that were wired differently, too. Looked at in this light, it’s as if H. sapiens assimilated Neanderthals both biologically and culturally into an idea-sharing tradition that facilitated rapid adaptation even to extremely harsh conditions.

Early humans evolved brains that helped us spread ideas to our compatriots even as we scattered to live among new families and communities. It’s possible that this connectedness—both neurological and social—is what allowed groups of H. sapiens to assimilate their siblings, the Neanderthals. Still, our storytelling abilities are also what allow us to remember these distant, strange ancestors today.

Humans’ greatest strength 30,000 years ago may have been an uncanny ability to assimilate other cultures. But in more recent human history, this kind of connectedness almost did us in. Once human culture scaled up to incorporate unprecedentedly enormous populations, our appetite for assimilation spread plagues throughout the modern world, almost destroying humanity many times over. And it spawned deadly famines, too. As we’ll see in the next two chapters, humanity’s old community-building habits can become pathological on a mass scale. Thousands of years after the merging of Neanderthals and H. sapiens, the practices that helped us survive in pre–ice age Europe became, in some contexts, liabilities. They wiped out whole civilizations and made it necessary for us to change the structures of human community forever.

8. GREAT PLAGUES

[Death] hath a thousand slayn this pestilence.

And, maister, er ye come in his presence,

Me thinketh that it were necessarie

For to be war of swich an adversarie.

Beth redy for to meete hym everemoore.

—Geoffrey Chaucer, “The Pardoner’s Tale,” The Canterbury Tales, 1380s

MOST PEOPLE KNOW British poet Geoffrey Chaucer because he wrote one of the earliest works of literature in English, The Canterbury Tales. What you may not know is that Chaucer came of age in a postapocalyptic world. Born in the 1340s, Chaucer would have been a little boy when the Black Death first struck England, in 1348; in the next few years it wiped out over 60 percent of the population of the British Isles. The son of a wealthy wine merchant, Chaucer grew up in London, already a bustling city where traders arriving in ships from Europe would have brought news of the “pestilence” ripping through the continent. The late 1340s marked the first great pandemic of what would later be called the bubonic plague, and the death tolls were so high that most bodies were thrown into mass graves because churchyards were overflowing. Even if there had been room for the corpses, it’s likely there were not enough clergy left to coordinate burials. The poet’s formative years unfolded in the wake of a pandemic so deadly that half the population of London perished.

We hear of the Black Death only rarely in Chaucer’s considerable body of work, most memorably in the lines I’ve quoted above from The Canterbury Tales. The corrupt Pardoner is telling his fellow travelers a tale of three angry drunks who decide to kill Death, to avenge their friend’s murder. Violently intoxicated, they demand that a little boy carrying corpses to the graveyard tell them where to find “Deeth.” The boy warns them that Death “hath a thousand slayn this pestilence,” or slain a thousand people during the last bout of plague. The boy adds that they should be ready to meet Death “everemoore,” anytime. This casual reference to “pestilence,” written over 40 years after the plague first shattered England, indicates how ordinary the specter of mass death had become for people of Chaucer’s generation. The disease had returned again and again to claim thousands of lives during the late fourteenth century, though not with the ferocity that it had in Chaucer’s boyhood. The pestilence may not have touched the poet’s writings much, but its social reverberations marked his life and those of all his countrymen.

Plague was a symptom of the problems humans had adapting to our own growing societies. By the time the Middle Ages rolled around, we were old pros at the symbolic-culture game that helped us outlast the Neanderthals, but we still had little experience with using that symbolic culture to unite large societies made up of many disparate groups. Humans first began experimenting with such societies in sprawling early empires like those of the Assyrians, the Romans, the Han, and the Inca. But these civilizations were exceptions rather than the norm for most people. During Chaucer’s lifetime in the Middle Ages, however, humanity began laying the foundations for what would over the next five centuries grow into modern, global society. And this transformation meant that for the first time, the greatest threat to humanity came not from nature, but from ourselves.

A Revolutionary Pestilence

In a country whose population was only 40 percent of what it had been in the years before his birth, Chaucer grew up with opportunities he might never have had otherwise. A man of lively intelligence, he got an uncommonly good education working as an esquire at the court of Edward III, a typical role for the child of a wealthy merchant. He managed to get good legal training by studying with attorneys who worked in the court’s “Inner Temple,” essentially a medieval law school. And then he found paying work as a representative for various members of the royal family, conducting business for them abroad (where he learned French and Italian) and eventually in London. Because Chaucer did so much business for the crown, he left a surprisingly detailed paper trail, including travel authorizations, expense accounts, promises of payment, and legal documents. Scholars have pieced together his life from these scraps of paper.

We know from these records that for twelve years, the former esquire lived with his wife and children in Aldgate, one of the wealthiest neighborhoods of London. Chaucer’s home was a fine set of rooms right above a gate in the ancient defensive walls that surrounded the city. In typical feudal fashion, the mayor had granted the dwelling to Chaucer rent-free at roughly the same time that Edward III put the future poet in charge of managing export taxes on wool in London’s Custom House. Apparently, Chaucer was quite good at his job. He did valuable accounting for the kingdom during the day, and probably made his first efforts at writing poetry during the evenings. Though Chaucer managed vast sums of money for the crown, he and his family were what would one day be called middle class. Connections to the royal family gave them just enough stature to merit a good living (his salary included a daily gallon pitcher of wine), and a nice home. One side effect of the Black Death was a dramatic reshuffling in the upper echelons of society, whose members had been thinned by the pestilence. Chaucer flitted from one good job to the next, always working closely with the crown, because there was a shortage of educated men who did not owe blood allegiance to one aristocratic family or another.

The same grisly population crash that sent Chaucer and his family up the social ladder was affecting the peasant class, too. And in 1381, Chaucer’s life was threatened by the results. It was the year of the Peasants’ Revolt, a series of violent riots fomented by peasants demanding better wages and treatment. The riots raged literally beneath the Chaucer family’s windows in Aldgate, and many of the rioters were armed with weapons and torches; the angry protesters left a number of Chaucer’s rich neighbors dead.

But why did a disease epidemic lead to riots for better pay? Call it cultural adaptation in action. Jo Hays, a historian at Loyola University whose work focuses on pandemics in the ancient world and the Middle Ages, explained that the Black Death had upset a stalemate in class relations that had lasted centuries. Peasants, long tied to the land by the feudal system, had been trapped in serfdom because there were no other options. As the peasant population ballooned, their lords could afford to grant them less and less—it was a landlord’s market, as it were. The poor starved, but had nowhere else to go. When the Black Death hit, most of the people who died were impoverished folk whose health was already compromised by lack of food. The survivors, like Chaucer, found themselves in a world where jobs were suddenly plentiful. Couple this situation with the rise of money wages (like those Chaucer’s family got as merchants) instead of land grants, and for the first time in centuries, peasants could exercise a degree of choice over their work.

An image from the British Library (circa 1385–1400), depicting the Mayor of London executing Wat Tyler, leader of the Peasants’ Revolt, while King Richard II looks on. (illustration credit ill.7)

The magnitude of the plague also called into question every form of authority in the medieval world. Though the Church called the pestilence a punishment from God, it was hard to avoid noticing that so-called godly men and women were meeting the same fate as the godless—nor did their prayers prevent the plague. Indeed, Chaucer was highly critical of Church officials in The Canterbury Tales, a sentiment he voiced because it reflected popular, though controversial, ideas in his day. Likewise, the common people began questioning government authorities in the wake of the Black Death, especially once serfs had a better bargaining position.

Still, those authorities did their best to cling to the old rules. In the wake of the Black Death, the British government reacted to the labor shortage by trying to limit wages by law. In 1351, just three years after the plague first struck England, the English Statute of Laborers stipulated that all wages should be held at the pre-pestilence levels of 1346. The situation quickly became untenable. Angry peasants demanding better wages stormed Chaucer’s neighborhood, burning homes and dragging rich people from their rooms to be killed. It’s not clear whether Chaucer was actually at home when the riots happened, but some of his friends and business associates were murdered during the mob violence. Soon after, the government repealed the statute, along with similar laws, allowing peasants to earn higher wages and achieve some autonomy from their landlords. Within days of the Peasants’ Revolt, Chaucer gave up his cozy rooms in Aldgate and moved permanently to Kent, near Canterbury, beginning a new career as an estate manager and city planner. The city where he’d grown up had transformed so radically that he no longer felt welcome there.

Similar rebellions racked France and Italy, as the surviving laborers realized how valuable they were in a depopulated Europe. Though bloody at first, this kind of rebellion led to better treatment of workers and eventually to the rise of the middle classes. The benefits of these social reforms extended even to women, a group who had almost never been part of the traditional labor force. After the Black Death, there was a rise in the number of women running their own taverns. (We see the evidence in records of grain purchases for brewing beer, which show a notable uptick in female buyers.)

The first wave of plague also led directly to some of the first city-planning efforts aimed at improving the lives of the general populace. In the wake of the pandemic, many cities established boards of public health, which by the fifteenth century were responsible for sanitation and waste disposal in cities like Florence and Milan. These boards also engaged in what today we’d call “health surveillance,” compiling weekly lists of people who died from epidemic diseases so that officials could spot a pandemic before it became widespread. Just as we do today, city officials in early Renaissance Florence would establish quarantines for people afflicted by disease, to prevent a major outbreak.

“The Poor That Cannot Be Taken Notice Of”

The scale of the fatalities from the Black Death during the 1300s was caused by the structure of the societies where it spread. Close living conditions made a deadly disease into an apocalyptic pandemic. SUNY Albany anthropologist Sharon DeWitte worked with a group of biologists who sequenced bacterial DNA from medieval victims of the epidemic. They discovered ample evidence that bubonic plague was caused by the bacterium Yersinia pestis, which typically spreads through the bites of infected fleas and close contact rather than through the air. Urban societies, with their closely packed populations, were therefore ideal breeding grounds for Y. pestis. The other major cause of the near-extinction events during Chaucer’s childhood was lack of food. In 1348, the Black Death came soon after a terrible famine had weakened the immune systems of people, mostly the poor, who had the least to eat. An epidemic became a “Black Death” in part because of how the ruling class allocated economic resources.

The plagues of the late fourteenth century called attention to one of the greatest threats to human adaptability in an urbanizing world. Put in the simplest possible terms, that threat is a stark class division between rich and poor. When many people live in close proximity, but a large portion of them are trapped in deprived, unhealthy conditions, the entire society is put at risk of extinction. Pandemics spread rapidly among the vulnerable, bringing death to everyone. But this isn’t just a matter of epidemiology. Feudalism was an economic system that kept a major part of the population locked into poverty, and it was so rigid that any perturbation of the social order threatened to bring the whole structure down. The pestilence that Chaucer described coming “everemoore” attacked a society whose rules made it both biologically and culturally vulnerable.

And yet the humans who survived one of the greatest disease apocalypses in our history did not respond with despair and a descent into savagery. There was no zombie freakout scenario, as we like to imagine today. Instead, the Peasants’ Revolt led to social reforms that improved the lot of the poor in the decades that followed. Our facility for cultural adaptation can bloom even in the wake of seeming apocalypse. Though it would be centuries before the renewed interest in science that arose with the Age of Enlightenment, let alone the germ theory of disease, the humans who survived the plague managed to lay the foundations for political structures in which every class could advocate for its own best interests. At the same time, newly created health boards stood a chance of protecting vulnerable populations, too.

As European cities grew and feudalism crumbled, the rise of the market economy forged new connections between urban societies through international trade. Humans once again raced to adapt to the dangers created by a global civilization with a massive, vulnerable population.

Though epidemics seemed to hit roughly once a generation in England, the plague summer of 1665 killed so many people that it was called simply the Great Plague. It was also a period of rapid cultural change, when class divisions again took on deadly proportions. In the diaries of a young, well-connected naval official named Samuel Pepys (pronounced peeps), we have a record of daily life during this time.

In late August of 1665, Pepys described the streets of London in his diary:

But now, how few people I see, and those walking like people that have taken leave of the world…. Thus this month ends, with great sadness upon the public through the greatness of the plague, everywhere through the Kingdom almost. Every day sadder and sadder news of its increase. In the City died this week 7496; and of them, 6102 of the plague. But it is feared that the true number of the dead this week is near 10000—partly from the poor that cannot be taken notice of through the greatness of the number, and partly from the Quakers and others that will not have any bell ring for them. As to myself, I am very well; only, in fear of the plague.

Pepys’s horror grows as the death tolls rise, even as he must continue going about his business—which, by the way, was booming in the plague year. As his neighbors fall prey to the disease, he sees “Searchers,” groups mostly of women, who inspect houses looking for evidence of the Black Death. Where they find it, the Searchers impose quarantine on the people living there, and mark the doors with red crosses. In the passage I’ve excerpted from above, Pepys describes empty streets where a few brave people outside like himself walk around in a daze. He’s wandering through an apocalyptic urban landscape, recognizable to anyone who has watched movies like 28 Days Later or seen the TV series The Walking Dead.

Without access to medicine, crowded together in densely packed slums, the London poor succumbed to plague swiftly. New York University’s literary historian Ernest Gilman has pored over writings from this era, where representatives of the Church insisted that the Black Death was a punishment from God. But, he noted, by the seventeenth century these men were in dialogue with a group sneeringly referred to as “mere naturians,” or proto-scientific thinkers who believed the plague had a purely earthly origin. Though most medicine at the time would be called quackery today, the official government position was nevertheless that the disease was contagious. It was said to spread through “the miasma,” the air. These ideas led to the practice of state-enforced quarantining, but also to people wearing face masks and even washing coins in vinegar when they changed hands. Medicine and science were ideas that had achieved some social currency during Pepys’s time; a lot had changed since Chaucer dared to make fun of the Catholic Church in The Canterbury Tales.

What had also changed was the marketplace. A new class of merchants and tradespeople had come to occupy a central place in England and Europe’s economic systems, and they established trade routes throughout the world. Cities became central to this new economy, and impoverished groups flocked to the slums of big cities like London, hoping to find their fortunes in the world of trade rather than farming. As a result, Pepys could observe the stark class division between those touched by plague and those decimated by it. Involved as he was in naval matters and trade, Pepys could also profit from the very market systems that most helped set up conditions for the Great Plague. Tragically, the first stirrings of global capitalism and disease seemed to go hand in hand.

In his diary, Pepys also noted something that’s crucial for understanding the spread of epidemics during the seventeenth century and beyond: Mortality rates among the poor were skyrocketing, and yet at the same time were not being recorded. He wrote that many believed the death toll was likely 2,500 more people than officially reported, “partly from the poor that cannot be taken notice of through the greatness of the number.”

Nothing would make that more obvious than the devastating epidemics that were sweeping the Americas while Pepys was getting rich back in London.

The Plagues of Colonialism

In Pepys’s time, Europeans had been carving out colonies in the Americas for over a century and a half. A lucrative trade in goods and people turned the Atlantic into a maze of shipping lanes, packed with cargo vessels bearing everything from gold and slaves to animals and produce. They also bore disease.

One of the enduring questions in American history is why ragtag groups of European colonists were able, in just a couple of centuries, to claim the riches of two continents packed with enormous cities in the Aztec and Inca empires, along with highly trained armies, vast farms, and millions of people. In the seventeenth century, the dominant theory would have been that God was dishing out justice to the heathen natives. Up until the mid-twentieth century, historians and anthropologists offered rationales that weren’t much better. They believed the natives were too innocent, savage, stupid, or inferior to mount a decent defense against the European invaders. In the late 1990s, Jared Diamond argued in Guns, Germs, and Steel that the Inca weren’t culturally inferior, but instead victims of historical and environmental circumstances. Diamond popularized the idea that the Inca fell to the Spanish because the Americans’ “stone age” technology and lack of writing left them unprepared to deal with the Europeans’ guns, cavalry, and greater stores of knowledge. These issues, as much as the plagues Europeans brought, were what left the Inca empire vulnerable to conquest.

Over the past decade, however, new information has emerged about the civilizations of the Americas. As Charles Mann explains in his book 1491, an exploration of new scholarship on pre-Columbian life, the Inca were technologically advanced enough to have defeated the Spanish. They had a highly developed system of writing called quipu, created by tying different kinds of knots in strings, which researchers are only now beginning to decipher (sadly, the Spanish burned most of the quipu libraries). True, the Inca did not have horses or metal weapons, but they had textile technologies that allowed them to weave massive boats from rushes, hurl flaming rocks over great distances using slings, and of course they had the advantage of a hilly terrain that was nearly impossible for horses to climb. What felled the Inca was quite simply a plague on the scale of what Chaucer witnessed as a boy, coupled with a raging civil war caused by a power vacuum left when several Inca leaders succumbed to smallpox.

By the time the conquistadors arrived in full force, South America was already riddled with plagues that spread easily on the vast trade routes of the Inca and Aztec empires. Imagine if a group of warriors, armed to the teeth, had descended on London in the wake of the Peasants’ Revolt. The city was depleted, and its inhabitants were squabbling violently over what to do next. It would have been easy for even a small band of foreign soldiers to step in and lay waste to the city. As Mann would have it, Europeans did not conquer the Americas with technology and writing. Instead, they inadvertently imported smallpox, influenza, and bubonic plague to the Americas, and those diseases destroyed native communities long before conquistadors like Francisco Pizarro could come in to claim victory over them.

Many of these new theories were first popularized in the historian Alfred Crosby’s influential 1972 book The Columbian Exchange: Biological and Cultural Consequences of 1492. In it, Crosby argues that European and American meetings constituted a vast environmental experiment, in which plants, animals, and microbes that had been separated for sometimes millions of years were suddenly thrown together. Beans, squash, potatoes, tomatoes, maize, and other native American crops were brought back to Europe; horses, pigs, and cows were brought to the Americas. Syphilis returned from the New World in the bodies of explorers, and Europe’s plagues arrived in the New World the same way.

Just as people in Pepys’s London had no scientific way to respond to plague, neither did citizens of the great cities like Tenochtitlán and Cuzco, in regions that later became Mexico and Peru. Based mostly on written accounts of the period from explorers, as well as death records in missions, many historians and anthropologists believe that as much as 90 percent of the American population was eventually felled by epidemics. Arizona State University forensic archaeologist Jane Buikstra, who has studied the remains of people who lived before colonial contact in Mexico and Peru, believes that the Columbian plagues hit populations that were already vulnerable. In the bones of people born before Europeans arrived, she said, “you see evidence of warfare and malnutrition. Some groups were highly stressed, living in constrained, unhealthy conditions with a lot of garbage around them.” Stressed groups would be more vulnerable to introduced disease—much the way the urban slum dwellers in seventeenth-century London were, or the starved peasants in the Middle Ages.

Unlike England, however, the Americas were in the process of being colonized by foreign powers. And that, according to the historian Paul Kelton, of the University of Kansas, may have meant the difference between the typical European epidemic death tolls (up to 60 percent in the wake of the Black Death) and typical American ones (up to 90 percent). Kelton has studied historical records of North American native cultures like the Cherokee, and believes that the social and economic upheavals caused by colonialism exacerbated the virulence of American plagues. Without the colonists’ trade routes linking previously isolated groups and regions together, Kelton argues, epidemics wouldn’t have spread as rapidly. The main vector of disease on these trade routes would have been slaves taken from the American population. Though largely forgotten today, the traffic in native American slaves was a thriving business in seventeenth-century America. Slavers played into already existing tensions between rival groups, encouraging the victors in battle to trade their captives with Europeans for goods ranging from guns and powder to horses and wool clothes. Slave raids, in turn, intensified warfare, and disrupted centuries-old patterns of hunting and farming. Groups decimated by slavery, their strongest warriors shipped overseas to sugar plantations in the West Indies, hid from their enemies in fortified villages and were unable to secure the food supplies they needed. When smallpox hit these villages, the death tolls were staggering.

Worse than the initial wave of epidemics was the fact that American groups had no time to recover from their losses. In England, after the Black Death hit, common people were able to continue in the same jobs and homes they’d had before the pestilence. In fact, as the Peasants’ Revolt makes clear, many were able to demand a better way of life because their labor had become more valuable. But in the Americas, new colonial governments and militias used their power to force plague-weakened groups off their lands, or tempted them into trading their livelihoods away for guns and horses. Biological catastrophe was followed by political catastrophe, which led to the kinds of displacement and poverty that can correlate with high death tolls in epidemics. In many regions, missionaries would push native groups to live in missions where men and women were separated, thus ensuring that the population couldn’t rebuild itself by forming new families and having children.

David S. Jones, a Harvard historian and medical doctor, sums up the issues succinctly in his influential paper “Virgin Soils Revisited”:

Any factor that causes mental or physical stress—displacement, warfare, drought, destruction of crops, soil depletion, overwork, slavery, malnutrition, social and economic chaos—can increase susceptibility to disease. These same social and environmental factors also decrease fertility, preventing a population from replacing its losses.

American epidemics were likely triggered by the same factors as the European ones. The main difference was that the rapid advance of colonial intrusions in the Americas prevented populations from recovering before they were hit with another wave of disease.

Survival and Survivance

What the plagues of the past 700 years reveal is that human mass societies can magnify the effects of threats that come from the environment, like disease. As our cultures grow, so, too, do our vulnerabilities to extinction. There are many failure modes when we try to adapt to our new circumstances as creatures who can no longer wander off to found a new community the way our ancestors did. Rigid class divisions and warfare are two such failure modes, and they are often accompanied by pandemics. As the University of Colorado history professor Susan Kent explains in her recent book about the 1918–19 flu epidemic—the worst in human history since Chaucer’s time—this pandemic virus quickly became more virulent because it spread with the movements of soldiers during World War I. As waves of soldiers succumbed to the flu, new ones came to replace the dying. The virus always had fresh new hosts, who were generally being shipped across the globe to new locations where the flu would take hold. Like the Black Death, the 1918–19 pandemic led to reforms in health care and indirectly sparked several colonial rebellions reminiscent of the Peasants’ Revolt.

We can excavate a grim survivor’s lesson from the piles of bones these pandemics left behind. We are currently struggling to adapt to life in a global society, where the dangers of culture-saturated, densely populated cities have replaced the dangers of the wilderness. And we are adapting. With each plague, there arise social movements that inch us closer to economic equality and clarify what’s required to take a scientific approach to public health.

The lingering social effects of the American plagues are nevertheless a reminder that there’s a lot of work that remains to be done. Those waves of colonial-era pandemics helped usher in an era of economic inequality between colonizers and the colonized, undermining civilizations that had thrived for thousands of years. Anishinaabe author and University of New Mexico American Studies professor Gerald Vizenor argues that native cultures and peoples have survived throughout the Americas, though often in dramatically altered form. They’ve done it by maintaining communities, passing on stories to younger generations, and fighting for political sovereignty when they could.

Vizenor coined the term “survivance” to describe the practices of natives today who are connected to their cultural traditions, but also living them dynamically, reshaping them to suit life in a world forever changed by colonial contact. The difference between survival and survivance is the difference between maintaining existence at a subsistence level and leading a life that is freely chosen. As we contemplate the ways humanity will endure, it’s worth keeping the idea of survivance in mind. One of the best things about H. sapiens is that we are more than the sum of our biological parts. We are minds, cultures, and civilizations. I don’t mean to say that people can live on ideals alone: That’s obviously stupid. But when we aspire to survive disaster, we are perhaps without realizing it aspiring to survive as independent beings. We aren’t aiming for a form of survival that looks like slavery or worse.

The European and American plagues changed the world, both environmentally and economically. They also revealed a basic truth: Survival is cultural as well as biological. To live, we need food and shelter. To live autonomously, we must remember who we are and where we came from. As we’ll see in the next chapter, this is especially the case when it comes to another one of the greatest threats to human survival: famine. Often, the regions most deeply stricken by hunger are also places where people have been deprived of social and economic power.

9. THE HUNGRY GENERATIONS

IT’S BEEN CALLED Black ’47, the Great Irish Famine, and the Potato Famine. From 1845 to 1850, Ireland was hit with one of the most brutal famines of the nineteenth century after several annual harvests of potatoes were ruined by blight. A million people died, and the harshness of the experience sent at least a million more to seek new homes in the United States and England. Though the Irish population was 8 million in 1841, today it hovers at roughly half of that. More than 160 years later, the country still has not recovered from the aftereffects of a disaster that changed not just Ireland but the entire way we conceive of famine.

Famines have been recorded in historical documents and religious books for almost as long as humans have been writing, and yet they are still among the most poorly understood causes of mass death among humans. The University College Dublin economist Cormac Ó Gráda has spent most of his career studying famine, and admitted to me that it’s very difficult to say how many people die of starvation when “most people in famine die of diseases.” In fact, he added, malnutrition is a bigger killer than generally believed because it leads to the scenarios we explored in the previous chapter, where people are more vulnerable to epidemic disease. It’s only in the last few centuries that we find reliable, complete records of famine that scholars like Ó Gráda can use to piece together the events that lead to masses of people dying from want of the most basic resource: food.

The Potato That Starves Us

Before the events of Black ’47, the dominant theory of famine came from the eighteenth-century demographer Thomas Malthus, who believed that epidemics and famines were natural “checks” on human populations that kept them in balance with the resources available. From the Malthusian perspective, famines should come on a regular basis, especially when a population is outstripping its ability to subsist in a particular area. When the Irish Potato Famine first began to unfold, however, journalists covering the events realized that the Malthusian explanation wasn’t adequate. This famine had its roots in politics. In the decades leading up to Black ’47, industrialization had completely reshaped the Irish landscape. Lands that were once dotted with small farms devoted to a variety of crops were taken over by landlords who converted them to pastureland, raising animals there for export. Seeking a high-yield crop that could feed their families, peasants on the land that remained available for farming switched from grain crops to potatoes. Many of these peasants were working entirely for “conacres,” or the right to grow their own subsistence food on a landlord’s property. They were living hand-to-mouth, on potatoes, and had no cash to use if their crop failed. When the blight struck, the Irish poor lost both food and money at once. Even at the time it seemed likely that the famine, rather than being a natural “check” on the island’s growing population, was the result of political and economic disaster (itself partly the result of Ireland’s colonial relationship with Britain).

In the century and a half since people began to perceive the political underpinnings of famines, it has become commonplace to view them as caused primarily by economic problems. The Nobel Prize–winning economist Amartya Sen first advanced this theory in the 1980s, in his highly influential book Poverty and Famines: An Essay on Entitlement and Deprivation. There, Sen lays out the details of his famous theory. He explains that “entitlements” are avenues by which people acquire food, and famine is always the result of how those entitlements work in a particular society. Direct entitlements refer to subsistence methods of getting food, like growing potatoes. Indirect entitlements are avenues that allow people to get food from others, usually by earning money and buying it. Transfer entitlements are ways that people get food when they have neither direct nor indirect means—generally, from famine-relief groups. Sen’s theory has allowed economists to diagnose the causes of famines by looking at the causes of “entitlement failures.” In the case of Black ’47, most Irish people suffered all three forms of entitlement failure.

But there is a missing piece in Sen’s theory of entitlements, and it’s one that is only going to become more important as we move into the next century. That piece is the environment. Evan Fraser, a geographer at the University of Guelph, in Ontario, researches food security and land use. He argues that the Irish Potato Famine reveals how poor environmental management can lead to mass death. He believes we should supplement Sen’s theory of entitlements with an understanding of how “ecological systems are vulnerable to disruption.” In other words, famines are in part the result of how we use (or abuse) our environments to extract food from them.

Many famines begin when economic or political circumstances encourage people to convert environments into what Fraser calls “specialized landscapes,” good for nothing but growing a limited set of crops. In Ireland, for example, landlords pushed farmers to transform diverse regions into landscapes that could yield only one crop: the potato. Often, this kind of farming appears to work brilliantly in the short term. Black ’47 was preceded by years of escalating productivity as Irish farmers converted grain crops to potatoes. Paradoxically, that short-term productivity made the ecosystem precarious. Any change to these “specialized landscapes,” whether a blight or a slight change in temperature or rainfall, can wipe out not just one farm but every single farm. What’s bad for one potato farm is bad for all of them. As Fraser put it, “You get one bad year, and you’re stuffed.” In the case of the Irish famine, there were at least two bad years of blight before true famine set in, during 1847.

The ecosystem vulnerabilities leading to Black ’47 could very well become common over the next century. Seemingly minor problems like temperature and rainfall changes could spell death for regions that depend on a single crop that is sensitive to such changes. The most immediate areas of concern lie in the breadbaskets of North America, a vast region of prairies that stretches from Saskatchewan to Texas. “Most societies are interested in grain, and if you think in terms of wheat, then you need a hundred and ten days of growing conditions,” Fraser said. “You need weather that’s not too hot and not too cold, and abundant but not excessive rainfall. If you get long periods of good weather, you don’t realize there could be a problem. And then you get one bad year, and it all unravels very quickly.”

During the dust-bowl famines of the 1930s, farmers saw a collapse of the Midwest ecosystem. And we’re going to see a return of the dust bowl. “We know from climate records that the Midwest experiences two-hundred- to three-hundred-year droughts. There are periods where we see centuries of below-average precipitation. The twentieth century was above average. We’ve had a long rhythm of good weather. But the next hundred years will be much drier. We’ve already seen droughts hitting in Texas. It’s going to be hard to maintain productivity then.” I spoke to Fraser in early 2012, before the worst drought in over 50 years hit the Midwest that summer, destroying crops and livelihoods. According to the weekly U.S. Drought Monitor that year, “About 62.3 percent of the contiguous U.S. (about 52.6 percent of the U.S. including Alaska, Hawaii, and Puerto Rico) was classified as experiencing moderate to exceptional drought at the end of August.” Fraser’s predictions for drought are already coming to pass, and he and his colleagues believe more is yet to come.

Does this mean we are witnessing the leading edge of a global famine? Yes and no. The issue here isn’t that people are inevitably victims of their environment, nor that mild changes in the weather always lead to famine. Trouble comes when you see a growing group of people who are extremely poor, combined with a vulnerable ecosystem that’s not diverse and therefore can’t withstand any kind of climate change or pests. A failed crop is a tragedy, but it doesn’t become a famine unless people don’t have the money to buy food elsewhere.

In Black ’47, the main way that people survived the famine was a method that spoke to both problems. They emigrated, moving themselves and their families away from a failing economy and a failing ecosystem. But during many famines, people don’t have that option.

The War That Divides Us

There were many famines during the mid-twentieth century, and most of them were related to war. Many of the deaths in those famines were probably from diseases exacerbated by conflict and malnutrition. War rationing and deprivation leave people weakened, vulnerable to dysentery and epidemic disease. But in the early 2000s, Newcastle University historian and demographer Violetta Hionidou found evidence of a terrifying period during the Axis occupation of Greece from 1941 to 1944, when 5 percent of the population died directly from want of food.

The Greek Army staged a highly successful resistance to Italian invasion in 1940, and the Greek premier refused to buckle under to Mussolini even after a protracted battle. Indeed, had it not been for the aid that Bulgaria and Germany gave to the Italians, it’s likely that Greece would have held fast. Instead, after intense German bombing, Greece fell and was occupied by troops from Italy, Bulgaria, and Germany—all of whom were taking orders from Nazi commanders in Germany, and securing the public’s docility through a Greek puppet government in Athens. In the wake of the occupation, England withdrew its support (the British had been aiding the Greek military) and set up a blockade to prevent supplies from reaching the country. Carved up into territories occupied by three different hostile nations, and cut off from its former allies, Greece was fragmented and intensely vulnerable.

That fragmentation—political, economic, and, in the case of the famine-stricken islands Syros, Mykonos, and Chios, geographical—is what led to the horrors that came next. A first wave of famine struck Athens, claiming as many as 300,000 lives after the German occupying forces requisitioned food and demanded that the Greek government pay the costs of the occupation. In one fell swoop, the people of Athens had lost what Sen would call direct and indirect entitlements, and the blockade prevented transfer entitlements from easing their suffering. Often, that is where the story of the Great Famine of Greece is said to end, with a few nods made to the fact that there were also poor harvests. But according to Hionidou, there was much more to the story than that.

First of all, the production levels from Greek farming did not actually dip below the norm. She found that the numbers historians have used to make this claim are based entirely on products that the occupying government taxed. But there was widespread resistance to paying taxes, as well as the simple fact that people couldn’t afford them. Therefore a lot of the foodstuffs that Greece produced during the famine went untaxed and unrecorded. Still, Greek citizens relied for much of their food on imports and trade; remote island areas were especially dependent on imports. The occupying forces restricted people’s movements to small areas, which meant that nearly the entire Greek population had to sneak around and participate in a black market for food. Of course, as Hionidou pointed out, “Those who couldn’t afford the black market died.”

And then there were those who had no access to the black market at all. On some islands, there was no way, physically, to sneak past the blockades and get to people selling food. On Syros, Mykonos, and Chios, for example, people had to depend entirely on the food they produced to eat. And there simply wasn’t enough of it. The mortality patterns Hionidou found during her research flew in the face of the traditional idea that death from starvation is rare. She pored over records from the time, amazed to discover that accounts from both the Axis side and the Greek side matched up. “Greek doctors were reporting the cause of death as starvation, and some could argue that they had good reason to report starvation to blame the occupying forces,” she said. “But the occupying forces produce documents talking about starvation, too. They don’t try to cover it up by saying it’s disease. They’re not denying it at all.”

The only way that people survived these outbreaks of famine was to hold out until 1942, when the blockade was loosened and food aid reached the Greek people. Some managed to escape the country into Turkey, while others got rich on the black market. But the artificial barriers that the occupation erected between people did more to starve them than any failed crop ever could. For Hionidou the lesson of the Great Famine in Greece is stark. When I asked her how a famine is stopped, she said firmly, “I think it’s political will.”

Nothing could underscore her assertion more than the greatest famine of the twentieth century, which ripped China apart just a little over a decade after World War II came to an end.

The Great Leap Forward That Set Us Back

It started as a crazy dream based on the urge to transform the world. Mao Zedong, chairman of the People’s Republic of China, wanted to secure his political power in the party and turn China into an industrial powerhouse that could rival Britain. He’d grown up on the utopian promises of Marxism, and as an adult revolutionary leader he was awed by the massive engineering projects of the Soviet Union. So when Mao informed his fellow Communist leaders at a 1957 meeting in Moscow that China would surpass Britain in the production of basic goods like grain and steel, he drew up a plan that sounded like something out of science fiction. Under the Great Leap Forward, he said, the Chinese people would turn their prodigious energies to a massive geoengineering project—damming up some of the country’s greatest rivers, halting deadly floods, and creating enough stored water to irrigate even the most arid regions in the mountains. Unfortunately, the plan turned out to be more fiction than science. Mao refused to listen to the advice of engineers, and pushed local party leaders to harness every citizen’s energies to build dams that failed and divert rivers in ways that didn’t irrigate the soil. Worst of all, these projects prevented farmers from doing crucial labor on farms.

In 1958 and 1959, as Mao moved into the next phases of the Great Leap Forward, he demanded that each province meet fantastically high quotas for agricultural and steel output. According to regional records uncovered recently by the University of Hong Kong history professor Frank Dikötter, this was when the famine began to claim millions of lives—eventually killing as many as 45 million people. People died as a result of two policies: dispossession and fruitless labor. First, the government created vast work collectives by confiscating all private property, dispossessing people of their food stores, homes, and other belongings. Then local party representatives forced the understandably reluctant members of these new collectives to engage in misguided experiments in agriculture and steel manufacture. Farmers were told to plant rice seeds very close together, extremely deep in the soil, because this was touted as a scientific method of producing a higher-yield crop. Meanwhile, to meet steel quotas and avoid punishment, people took to melting down farm equipment and anything else they could find. Needless to say, the experimental farming techniques left the collectives with little food, and the steel production often left them more impoverished than ever.

Though China wasn’t at war, Mao borrowed the language of militarization to propagandize on behalf of the policies that were starving his people. China’s Great Leap Forward spawned terms like “the People’s Army,” which Mao used to characterize the displaced masses that party leaders deployed to work on China’s industrialization projects. Dikötter suggests that Mao favored terms like this because being in a state of war—if only a metaphoric one—would inspire people to sacrifice even more for the good of the country. It’s easy to see similarities between this strategy and the situation in occupied Greece. In both cases, war was used to justify abuses that led to millions of deaths.

China’s Great Famine was the worst famine of the twentieth century, and it was entirely manufactured by human political choices, which in turn affected land use. Dikötter calls the famine a mass murder, while the Chinese government considers it to be the result of tragically misguided policies. Regardless, it’s clear that the worst famines in recent history are cultural disasters rather than natural ones.

Survivors of the Great Famine included people who were willing to bend the political rules that Mao and his representatives had imposed on them. They secreted away foods that were supposed to go to the communes, engaged in illegal forms of trade, and, in a few cases, formed armed mobs and robbed trains, communes, and other villages. It wasn’t until 1961 that Mao acknowledged the desperate conditions in some provinces and called off the programs of the Great Leap Forward. When people were allowed to live in more permanent homes and return to tried-and-true methods of farming, the famine slowly abated.

Are We Going to Kill Ourselves?

Stories of recent famines raise the same question that stories of war always do: Are we humans going to exterminate ourselves more efficiently than a megavolcano ever could? It’s undeniable that one of the greatest threats we face is ourselves. Though famine has historically been a less efficient killer than other disasters like pandemics, and our systems for dealing with it have improved immensely over time, our survival is still at risk from malnutrition caused by environmental change and by failures of what the demographer Hionidou called political will.

Evan Fraser’s predictions about environmental change in North America’s breadbaskets are already being borne out by the dire drought conditions that struck in the summer of 2012. Many farmers in Africa have suffered similar droughts for decades because they depend entirely on rainfall rather than irrigation systems.

Some of the environmental changes we’re witnessing in the grain baskets of Africa and North America are cyclical changes that have nothing to do with humans’ use of fossil fuels. But if the Intergovernmental Panel on Climate Change’s recent models of rising global temperatures from carbon emissions turn out to be accurate, we’ll soon be dealing with cyclical drought conditions, exacerbated by the heat humans are adding to the party. Many regions will suffer the same problems that farmers face in Africa every season, when drought can wreck an entire region’s hope for food and incomes. It’s very possible that our dreams for a global society in an industrialized world will have the unintended consequence of pushing most people on Earth into lives of poverty, hunger, and disease.

Leaving aside questions of environmental change, we’re still contemplating an exceptionally harsh future. As UC Berkeley economics professor Brad DeLong put it to me:

You get a famine if the price of food spikes far beyond that of some people’s means. This can be because food is short, objectively. This can be because the rich have bid the resources normally used to produce food away to other uses. You also get famines even when the price of food is moderate if the incomes of large groups collapse…. In all of this, the lesson is that a properly functioning market does not seek to advance human happiness but rather to advance human wealth. What speaks in the market is money: purchasing power. If you have no money, you have no voice in the market. The market then acts as if it does not know that you exist and does not care whether you live or die.

DeLong describes a marketplace that leaves people to die—not out of malice, but out of indifference. Coupling this idea with Sen’s entitlement theory, you might say oppression and war deprive people of the entitlements necessary to feed themselves. The problem is that the market doesn’t care if people starve or grow ill. Based on historical evidence from famines in Ireland, Greece, and China, we can reasonably expect that if our economic systems remain unchanged, we will continue to suffer periods of mass death from famine. These famines will get worse and worse while the market continues to ignore the growing impoverished class.

Of all the forms of mass death we’ve looked at so far, famine can be understood as the least natural of all disasters. The good news is that famines (often accompanied by pandemics), unlike megavolcanoes and asteroid strikes, are human-made problems with human solutions. If we consider the examples of famine we’ve explored in this chapter, there are a few common themes that emerge in the stories of survivors. All of them have to do with ways that countries have acted collectively to fight mass death. One key lesson we can draw from Black ’47 is that mobility—movement either internally or across national borders—often saves lives. A million Irish emigrants escaped death, thanks in part to other nations allowing them to relocate. Today, Somali and Ethiopian refugees are attempting to do the same thing as they stream out of regions where food supplies have dried up. By contrast, during the Great Leap Forward, Mao’s government prevented the Chinese who lived in famine-racked provinces from fleeing to other places with better food security. The death tolls that resulted were staggering. Similarly, Greeks who suffered the worst effects of famine during World War II were trapped on islands, unable to flee even if they had wanted to risk the dangers of slipping through the blockades.

Still, global cooperation did ultimately prevent the Greek famine from reaching the proportions of the Great Leap Forward famine. A few Greeks left the country, but for the most part the population was saved by humanitarian aid coming in from outside. Like immigration, food aid is a solution that requires other nations or regions to cooperatively step in. This solution to famine involves what Sen called transfer entitlements. To survive, starving regions must rely on the kindness and generosity of regions that can ship in their surplus food.

There is another lesson to be drawn from Black ’47 and the Great Leap Forward that is especially important in today’s drought-stricken times. Mass societies need to adapt better to their environments, figuring out ways to farm sustainably so that a few years of bumper crops don’t give way to decades of blight and dust bowls. It is one of history’s great tragedies that Mao’s attempt to revolutionize China’s land use was so horrifically misguided and ill-informed. He was right that farming methods needed to change radically to sustain China’s huge population. But to say that his implementation was faulty is a gross understatement. Changes in our land use have to be based on an understanding of how ecosystems actually function over the long term. Ultimately, as we’ve learned from studying both human and geological history, the safest route is to maintain diversity. Farmers need to move away from specialized landscapes and monocultures that can make a region’s food security vulnerable to climate change, plant diseases, and pests.

None of these solutions—immigration, aid, and transformed land use—is foolproof, certainly, but they can all prevent large groups from being extinguished. These are solutions that also require mass cooperation, often on a global scale. Preventing famine, like preventing pandemics, has meant changing our social structures. But those changes are always ongoing, often spurred by protests and political upheaval. We even have today’s version of the Peasants’ Revolt in the form of the Occupy movement, whose goals those London rioters in 1381 would undoubtedly have recognized and understood. Still, sometimes it feels as if change doesn’t come soon enough. Famines and their accompanying pandemics are problems that we’ve been trying desperately to solve for hundreds of years. How are we ever going to survive over the next several hundred?

In the rest of this book, we’re going to explore the answer to that question. As we’ve seen, human mass death is caused by a tangle of social and environmental factors. Our survival strategy will need to address both factors. We need a way forward based on rationally assessing likely threats, which we’ve learned about from our planet’s long geological history and our experiences as a species. But we also need a plan that’s based on an optimistic map of where we as a human civilization want to go in the future. To draw that map, we’ll take our cues from some of the survivors around us today, human and otherwise. Those survivors and their stories are what we’ll explore next.