7 The Two-Legged Host

Humanity has but three great enemies: fever, famine and war; of these by far the greatest, by far the most terrible, is fever.

—William Osler


The beauty of parasites is an inhuman one. It’s inhuman not because parasites have come from another planet to enslave us but because they have been on this planet so much longer than we have. I sometimes think about Justin Kalesto, the Sudanese boy who was so racked by sleeping sickness that he could only whimper in his bed. He was twelve years old, and on his own he’d be no match for a dynasty of parasites that have lived in almost every sort of mammal, reptile, bird, dinosaur, and amphibian—everything backboned since fish came ashore—that lived inside fish before anything walked on land, that have evolved their way into the guts of insects as well as vertebrates, that even thrive inside trees. The entire human race is a child like Justin: a young species perhaps only a few hundred thousand years old, a tender new host for trypanosomes and other parasites to make their own.

Of course parasites have never encountered a host quite like us. We can fight against them with inventions such as medicines and sewers as no animal has before. And we’ve changed the planet around us as well. After billions of years of glorious success, parasites now must live in the world we’ve made: a world of shrinking forests and swelling shanty towns, of vanishing snow leopards and multiplying chickens. But thanks to their adaptability, they’re doing well overall. We should worry about the disappearance of condors and lemurs; their extinction will show us how badly we’re stewarding the planet. But we shouldn’t worry about the extinction of parasites. The tick species that live on black rhinos will probably disappear with their hosts in the next century. But there is no danger of parasites in general disappearing from the planet during the lifetime of our species; just about all of them will probably still be here when we’re gone.

While parasites must live in the world we’ve made, the opposite is true as well. They have structured the ecosystems that we depend on, and they have sculpted the genes of their hosts for billions of years, our own included.

It is surprising just how precisely they’ve shaped us. When immunologists began studying antibodies, they found that they could sort them into categories. Some had hinged branches; some were built like five-rayed stars. Each group of antibodies has evolved to work against particular sorts of parasites. Immunoglobulin A guards the body’s mucous membranes, neutralizing invaders such as the influenza virus. The star-shaped immunoglobulin M staples its rays to bacteria like Streptococcus and Staphylococcus.

And then there was a strange little antibody called immunoglobulin E (IgE). When scientists first found this antibody, they couldn’t figure out what it was for. It would remain at barely detectable levels in most people, except during a bout of hay fever or asthma or some other allergic reaction, when it would suddenly surge through the body. Immunologists have since worked out how IgE helps trigger these reactions. When certain harmless substances get into the body—ragweed pollen, for example, or cat dander, or cotton fibers—B cells make IgE antibodies tailored to their shape. These antibodies are then anchored to special immune cells called mast cells that are found in the skin, the lungs, and the gut. Later, the harmless substance for which the IgE was made enters the body again. If it latches onto a single IgE antibody on a mast cell, nothing happens. But if it should latch onto two of them sitting side by side, it switches the mast cell into action. Suddenly the mast cell blasts out a flood of chemicals that make muscles contract, fluids pour in, and other immune cells flood the site. Hence the sneezing of hay fever, the wheezing of asthma, the red hives of a bee sting.

Since allergies serve no good purpose, immunologists could only look on IgE as one of the rare shortcomings of the immune system. But then they discovered that IgE can be good for something: fighting parasitic animals. IgE may be rare in the United States and the few other parts of the world that are now free of intestinal worms, blood flukes, and their like, but the rest of humanity (not to mention the rest of Mammalia) carry a heavy load of flukes, worms, and IgE. Experiments on rats and mice have shown that IgE is crucial for fighting these parasites; if animals are robbed of their IgE, they’re overrun by parasites.

The immune system has, in a sense, recognized that parasitic animals are different from the other creatures that live in our bodies; they’re bigger and their coats are far more complex than those of single-celled organisms. As a result, it has devised a new strategy against them that depends on the IgE antibody. Exactly how that strategy works isn’t completely clear, and it may be a bit different for each parasite. It’s been worked out best for Trichinella, the parasitic worm that grows up in muscle cells and then enters a new host in a piece of meat tumbling into the stomach.

Once Trichinella has thrashed its way free, it moves through its host’s gut by spearing through the projections that line the bowels. Immune cells in the lining of the intestines pick up some of the proteins from the parasite’s coat and travel to the lymph node that lies just behind the intestines. They present the Trichinella proteins to T cells and B cells in the node, setting off the creation of millions of cells targeting the parasite. These B and T cells then come pouring out of the lymph node and swarm through the lining of the intestines.

The B cells make antibodies, including IgE, which spread over the surface of the intestines and form a shield that Trichinella can’t penetrate to anchor itself. At the same time, the mast cells are switched on, bringing on sudden spasms and floods through the intestines. Unable to get any purchase on the intestines, the parasites are washed away.

This precise strategy against a particular parasite—and many others like it—was in place long before our first primate ancestors swung through the trees 60 million years ago. And if today’s monkeys and apes are any guide, those ancestors needed all the help they could get: primates are rife with parasites—malaria in their blood, tapeworms and other creatures in their intestines, fleas and ticks in their fur, botflies under their skin, and flukes in their veins.

At some point before 5 million years ago, our own ancestors, living somewhere in Africa, split off from those of today’s chimps. Hominids began standing on two legs and gradually moving from lush jungles to sparser forests and savannas, where they scavenged kills and gathered plants. Some of the parasites of our ancestors followed along with them, branching as their hosts branched into new species. But hominids also picked up new parasites as they shifted to a new ecology. According to Eric Hoberg, they stumbled into the life cycle of tapeworms that until then had traveled between big cats and their prey. At the same time, hominids began to spend much of their time at the few watering holes on the savannas. There they drank from the same water that many other animals did, including rats. A blood fluke that swam from snails to rats stumbled across the skin of a hominid and tried it out. It liked what it found, and gradually a new species of fluke evolved that specialized only in hominids. Ever since then, the fluke Schistosoma mansoni has lived in our veins.

Hominids began moving out of Africa about a million years ago in a series of waves, hiking out across the Old World from Spain to Java. In a popular model of evolution, none of these people have any descendants left on Earth today. Instead, all living humans descend from a final wave that came out of eastern Africa a hundred thousand years ago or so and replaced every other hominid they encountered. On these travels out of the mother continent, our ancestors escaped some parasites. Sleeping sickness depends on tsetse flies to carry trypanosomes, and the flies don’t live outside Africa, so sleeping sickness remained an African disease. But humans also became home to new parasites in their travels. In China, another blood fluke that had been living in rats, Schistosoma japonicum, moved into humans.

At least fifteen thousand years ago, some peoples headed north and east, arcing into the New World through Alaska, and there they encountered a new batch of parasites. The trypanosomes humans had left behind in Africa had existed on that continent for hundreds of millions of years. Until about 100 million years ago, South America was fused to Africa’s western flank, and the parasites swarmed across the entire landmass. But then plate tectonics tore the two continents apart and poured an ocean between them. The trypanosomes carried away with South America began evolving on their own, into Trypanosoma cruzi and other species. It was long after the split between these two branches of parasites that the first primates evolved in Africa, and for tens of millions of years our ancestors struggled only with sleeping sickness. Humans migrating out of Africa escaped that scourge, but when they finally arrived in South America, the cousins of their old parasites were already there, waiting to greet them with Chagas disease.

By ten thousand years ago, humans had colonized every continent except Antarctica, but they still lived in small groups, eating animals they hunted or wild plants they gathered. Their parasites had to live according to these rules. In those early days, parasites did best if they had reliable routes into humans—tapeworms in big game, for instance, or Plasmodium carried by a blood-hungry mosquito, or blood flukes waiting in the water. Parasites that needed close contact might have brief flashes of glory—Ebola virus racing through a band here or there in central Africa—but the sparseness of humans didn’t allow them to spread beyond that single band, so they remained rare.

That changed when humans began to domesticate wild animals and plants and eat them. The agricultural revolution sprang up independently, first in the Near East ten thousand years ago, then shortly after in China, and a couple of thousand years later in Africa and the New World. Just about every parasite boomed with the dawn of agriculture and the birth of settled towns and cities that followed. Tapeworms didn’t have to wait for humans to scavenge the right carcass or hunt down the right game; they could live in livestock. After humans ate tainted pork and passed tapeworm eggs, it didn’t take long for some snuffling pig to swallow them and let a new generation of parasites begin. By spreading cats and rats around most of the world, humans made Toxoplasma perhaps the most common parasite on Earth. Along the Andes, the houses that Incas built were ideal places for assassin bugs to live, and their llama caravans carried the insect and the parasite across much of the continent. For blood flukes, farming may have been the best thing ever to happen. With people setting up irrigation systems and rice paddies in southern Asia, huge new habitats opened up for the snail hosts of flukes, and the farmers who worked the fields were always in easy reach. Viruses and bacteria could move from person to person in the crowded, dirty conditions in the towns. And faring best of all was Plasmodium. The mosquitoes that carry malaria prefer to lay their eggs in open standing water, and as farmers cleared forests they brought exactly those sorts of pools into existence. The rising swarms of mosquitoes discovered new targets far more easily than their ancestors had: people toiling in fields during the day and clustering in villages at night.

For hundreds of millions of years, parasites have been shaping the evolution of our ancestors, and in the past ten thousand years they have not stopped. Malaria alone has done strange, profound things to our bodies. The hemoglobin that Plasmodium devours is made up of two pairs of chains, called alpha and beta, and each kind of chain is built according to instructions in our genes. We carry two genes for alpha chains—one inherited from our fathers, one from our mothers—and the same goes for the beta chains. If a mutation appears in any of those hemoglobin genes, it can damage a person’s blood. One sort of mutation in the beta chain causes a hereditary disease called sickle cell anemia. In this condition, the defective hemoglobin can’t hold its shape when it isn’t clamped around oxygen; without an oxygen molecule aboard, it collapses into needle-shaped clumps, which in turn bend the cell into a sickle shape. The sickle cells snag in small capillaries, and the blood can no longer supply as much oxygen to the body. People who inherit only one copy of this defective beta chain gene can get by on the hemoglobin made by the remaining normal copy. But people who receive two copies of the bad gene make nothing but defective hemoglobin, and they’re usually dead by the time they’re thirty.

A person who dies of sickle cell anemia is less likely to pass on the defective gene, and that means that the disease should be exceedingly rare. But it’s not—one in four hundred American blacks has sickle cell anemia, and one in ten carries a single copy of the defective gene. The only reason the gene stays in such high circulation is that it also happens to be a defense against malaria. The needle-shaped clumps of hemoglobin don’t only threaten a blood cell; they can also impale the parasite inside. And as a sickle cell collapses, it loses its ability to pump in potassium, an element Plasmodium depends on. You need only one copy of the gene in order to enjoy this protection. The lives saved from malaria by single copies of the gene balance out the ones lost when people get two copies of the gene and die. As a result, people whose ancestors lived in many places where malaria has been intense—throughout much of Asia, Africa, and the Mediterranean—carry the gene at high levels.
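This balance is the textbook case of what geneticists call heterozygote advantage, and it can be put in a worked formula (a standard population-genetics sketch; the numbers below are illustrative, not measurements). If carriers have fitness 1, people with two sickle genes have fitness 1 − s, and people with two normal genes have fitness 1 − t because of their vulnerability to malaria, the sickle gene settles at the equilibrium frequency

\[
\hat{p} = \frac{t}{s + t}.
\]

With the sickle genotype close to lethal (s ≈ 1) and malaria killing, say, one normal homozygote in twenty before reproduction (t ≈ 0.05), the gene holds steady near 5 percent, which makes roughly one person in ten a carrier, about the frequency seen among American blacks.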

Sickle cell anemia is actually just one of several blood disorders created in the fight between humans and malaria. In Southeast Asia, for example, you can find people whose blood cells have membranes so rigid that the cells can’t slip through capillaries. Called ovalocytosis, this disorder follows the same genetic rules as sickle cell anemia: it’s mild if a person inherits the defective gene from only one parent, but severe if both parents pass it on—so severe, in fact, that a baby with two genes will almost always die before it’s born. But ovalocytosis also makes red blood cells less hospitable to Plasmodium. Their membranes become so stiff that the parasite has a hard time pushing its way inside, and their rigidity seems to hamper its ability to pump in chemicals such as phosphates and sulfates that it needs to survive.

Humans have probably been fighting malaria with these sorts of changes to the blood for thousands of years, but the evidence is hard to come by. One of the few clear signs from antiquity is a condition called thalassemia, another defect of hemoglobin. People with thalassemia make the ingredients of their hemoglobin in the wrong amounts. Their genes produce too many or too few of the chains, and once the full hemoglobin molecules have been assembled from them, extra chains are left over. These end up binding together into clumps, which can wreak havoc inside a blood cell. They can grab an oxygen molecule the way normal hemoglobin can, but they can’t completely enclose it. Oxygen is a dangerously charismatic element; it can carry a powerful charge that attracts other molecules in the cell. They pull the oxygen out of the defective hemoglobin clumps and carry it away. As the oxygen roams the cell, it can react with still other molecules, wrecking them in the process.

People with severe forms of thalassemia usually die before birth, but in milder forms they can survive, although often suffering from anemia. The body of a person with thalassemia may try to compensate for its defective blood cells by making more blood in the bone marrow. The marrow swells up as a result and can spread into the surrounding bone, interfering with its growth. People with thalassemia can end up with distinctively deformed skeletons—curved, stunted arm and leg bones. And archaeologists in Israel have found bones with these deformities dating back eight thousand years.

Thalassemia has lingered for so long—and has become the most common blood disorder on Earth in that time—because it helps fight malaria. If you look at a map of a malaria-prone place like New Guinea, the rates of thalassemia match up closely with the prevalence of the parasite. While a severe form of thalassemia may kill, a milder case saves. Researchers suspect that the defective hemoglobin in a red blood cell makes life worse for the parasite inside than for the host. The loose hemoglobin strands grab oxygen, which slips free and can then damage Plasmodium. The parasites don’t seem to have any way of repairing themselves, so they can’t grow properly. When Plasmodium finally emerges from a red blood cell, it’s deformed and sluggish, and it can’t invade new cells. As a result, people with thalassemia who get malaria tend to have mild cases rather than fatal ones.

These blood disorders may do more against malaria than make life hard for the parasites. They may provide a natural vaccination program for children. Children who are bitten by a Plasmodium-laden mosquito for the first time reach a turning point in their lives: Will their naive immune systems be able to recognize the parasite and fight it off before it kills them? Stunting the growth of parasites—whether by thalassemia, ovalocytosis, or sickle cell anemia—gives the immune system more time to get beyond Plasmodium’s evasions, recognize it, and mount a response. These mild cases of malaria immunize children against the disease and let them live to adulthood.


* * *

Given how much parasites have shaped the human body, it’s tempting to wonder whether they’ve shaped human nature. Do women choose men for their parasite-proof immune systems the way a hen chooses a rooster? In 1990, a biologist named Bobbi Low at the University of Michigan reviewed the marriage systems in cultures plagued with parasites such as blood flukes, Leishmania, and trypanosomes. She found that the heavier a culture’s parasite load, the more likely the men were to have multiple wives or concubines. You might expect that sort of result from Hamilton and Zuk’s theory, since healthy men would be so highly valued in parasite-burdened places that many women would marry each one. How would women judge men for signs of parasite-proof immune systems? Men don’t have roosters’ combs, but they do have thick beards and broad shoulders, both of which depend on testosterone. The signs might not be visible at all—a huge amount of communication between people goes on through odor, and scientists have barely begun to decode it.

If there is some connection between parasites and love, it’s probably tangled up with many other evolutionary forces and slathered over with a heavy crust of cultural variations. I spoke to Marlene Zuk one afternoon about her work, which she divides between exploring the Hamilton-Zuk hypothesis and studying the songs of crickets. When I asked her what she thought of trying to apply her ideas to people, she was cautious. “It’s easy to construct these adaptive scenarios and almost impossible to test them,” she said. “I’m not saying people shouldn’t study human behavior, that there’s anything immoral about it. But I do think that there’s been some shoddy work done that’s gotten attention because people think, ‘Isn’t it cool that this thing is being applied to humans?’ When people do things with humans, they get captivated with their pet theories. But I don’t even understand what’s going on in the structure of cricket songs.”

Still, there’s no crime in speculating. Could parasites have helped drive the evolution of the human mind? Primates spend huge amounts of their day—between 10 and 20 percent—grooming each other. Like other grooming animals, they have to fend off an endless assault of lice and other skin parasites. Simply picking off these parasites is soothing, because touch releases mild narcotics in the primate brain. According to Robin Dunbar of the University of Liverpool, this parasite-driven pleasure took on a new importance when the common ancestor of monkeys and apes and humans moved into habitats with lots of predators about 20 million years ago. These primates had to huddle together in order not to be killed, but they then had to compete with one another for food. As social stresses emerged, the primates began to depend on the soothing sensation that comes from grooming, not for its previous function—getting rid of parasites—but as a kind of currency to buy the alliance of other monkeys. Grooming became political, in other words, and in order to keep track of the larger and larger groups, apes evolved larger brains and had to dedicate more of their time to grooming. Hominids eventually reached the point, at about one hundred fifty members to a band, where there wasn’t enough time in the day to groom one another to keep the band intact. And it was then, Dunbar claims, that language arose and took grooming’s place.

Defending against parasites could have played a part in the evolution of human intelligence in another way—an even more speculative one, but one that might be more significant. Perhaps medicine played a role. When a woolly bear is attacked by a parasitic fly and switches its diet from lupine to hemlock, it does so purely by instinct. It doesn’t pause on its leaf and think to itself, “I seem to have a maggot growing inside me, and it will leave me a hollow shell if I don’t do something.” Its tastes presumably just shift from one kind of plant to another. For most animals that engage in this protomedicine, the process is probably the same. But something different seems to be going on in primates, particularly chimpanzees, our closest relatives. Sick chimps will sometimes search for strange food. They will swallow certain kinds of leaves whole; they will strip the bark of other plants and eat the bitter pith inside. The plants have almost no nutrition in them, but they have another value. The leaves seem to be able to clear out worms from the intestines, and the bitter pith is used as medicine by people who share the forests with the chimps. When scientists have analyzed the plants in laboratories, they’ve discovered that the compounds in them can kill many parasites.

Chimps, in other words, may be medicating themselves. As the years go by, more evidence accrues to the chimp-doctor theory, but it has been slow to gain acceptance. It demands far more proof than a typical idea in biology, since scientists need to demonstrate that chimps are sick with particular parasites when they choose their plants, and they need to show how the plants fight the parasites. Proving this as you run to keep up with chimps racing along rainforest ridges makes for slow scientific progress. But Michael Huffman, the primatologist who has done most of the running, has indeed shown that after chimps eat certain plants, their parasite load drops and their health improves. He argues that the chimps are a lot more sophisticated in their medicine than instinct-driven woolly bears. When they select only the pith of the plant Vernonia amygdalina, casting aside the bark and leaves, they are avoiding the poisonous parts of the plant and taking only the part that contains steroid glucosides that kill nematodes and other parasites. A hungry goat will eat too much of the plant and sometimes die.

If Huffman is right, the chimps must accrue medical lore and carry the information through time by teaching and observing one another. Huffman once watched a male chimp eat some Vernonia and throw it to the ground; a baby chimp tried to pick it up, but his mother stopped him, put her foot on the pith, and carried him away. Chimps must have some remarkable cognitive sophistication if Huffman is right. They can recognize the symptoms of particular parasites and associate eating plants with their cure. They may even eat some plants preventively, which would put the association on an even more abstract plane.

You usually hear this sort of talk—abstraction, an awareness of the potential uses of things in nature—when people are discussing one of the most important steps in human evolution: the ability to make tools. Chimpanzees can strip sticks to use them to fish out termites from nests; they can smash shells between rocks; they can even fashion themselves sandals to cross expanses of low thorny bushes. As our closest primate relatives, they may embody some of the abilities of the earliest hominids 5 million years ago. Later, as our ancestors moved out of dense forests, they evolved the ability to make more sophisticated tools by flaking stones to butcher meat. The ability to connect the shape of a tool to the job it could do brought a reward of more food. This abstract thought made it possible to make better tools, and survival became even easier. Tools, in other words, may have made our brains swell.

Conceivably, that same argument could apply to medicine as well. Could the ability to recognize how plants could fight various parasites have given hominids longer lives and more children? And could that success have driven more powerful brains in order to find better cures against parasites? If that’s true, perhaps a better name for us would be Homo medicus.


* * *

In 1955, Paul Russell, a scientist with the Rockefeller Foundation, wrote a book to which he gave the title—a title he thought was entirely reasonable and realistic—Man’s Mastery of Malaria. The parasite that had taken so many lives (by some counts, half of all the people who were ever born) was on the verge of succumbing to the powers of modern medicine. “For the first time,” Russell wrote, “it is economically feasible for nations, however underdeveloped and whatever the climate, to banish malaria completely from their borders.” The end of malaria seemed so much a fait accompli that Russell closed his book warning that a population boom would hit the world when the parasite had been destroyed.

As I write these words, forty-four years later at the close of the twentieth century, a person dies of malaria every twelve seconds. In the time between Russell and me, scientists have unbraided the mystery of DNA; they have stared closely at the face of cells; they have climbed some of the chains, link by link, from genes to action. And yet, malaria still romps through the human race.

For that matter, so do many other parasites. Beyond the bacteria and viruses that Americans and Europeans may be familiar with, protozoans and animals are having a field day in their human hosts. There are more human intestinal worms than humans. Filarial worms, the parasites that cause elephantiasis, infect 120 million people; there are 200 million cases of schistosomiasis, the disease caused by blood flukes. Even a parasite limited in geography, such as the trypanosome that causes Chagas disease, infects close to 20 million people.

The toll taken by these parasites is overlooked for several reasons. One is that it happens mostly to the poorest people in the poorest countries. Another is that many of these parasites aren’t outright fatal. Although 1.3 billion people carry hookworm, only 65,000 people actually die of it each year. But the effects of chronic infections with parasites are still devastating, leaving people listless and undernourished. Parasites like hookworm and whipworm make it hard for children to learn in school; all it takes is a dose of antiwhipworm medicine to make some slow children bright again.

Epidemiologists have tried to quantify this sort of loss with something they call the disability-adjusted life year. Simply put, this unit measures the estimated value of the years of healthy life lost to a disease. It is a grim exercise in statistics, replete with the cold-hearted calculations of labor—getting blood flukes at age twenty-five counts for much more than at age fifty-five. Depending on how bad a disease is, a year lived with it counts as only a fraction of a year lived parasite-free. Roundworm may slow down a child’s growth, but if it’s caught in time, the condition is reversed and the child begins to grow again. Left too long, though, roundworm can leave the child stunted into adulthood. When considered in this way, parasites are a staggering drain on life. Malaria robs the world’s population of 35.7 million life-years every year. Parasitic worms of the gut—hookworms, roundworms, and whipworms most importantly—are far less fatal than malaria but actually rob more life: 39 million life-years. Taken together, the leading parasites destroy almost 80 million life-years a year, almost twice as many as tuberculosis claims.
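In rough code, the bookkeeping behind the unit looks something like this (a minimal sketch: the reference lifespan, disability weight, and durations below are invented placeholders for illustration, not the World Health Organization’s actual figures):

```python
# Disability-adjusted life years (DALYs) combine years of life lost to an
# early death (YLL) with years lived with disability (YLD), where a sick
# year counts as only a fraction of a healthy one.
# Every number here is an illustrative placeholder, not real epidemiology.

REFERENCE_LIFESPAN = 80.0  # assumed standard life expectancy

def years_of_life_lost(deaths: int, mean_age_at_death: float) -> float:
    """Each death costs the years the person would otherwise have lived."""
    return deaths * max(REFERENCE_LIFESPAN - mean_age_at_death, 0.0)

def years_lived_with_disability(cases: int, weight: float, years_ill: float) -> float:
    """Discount each sick year by a severity weight between 0 and 1."""
    return cases * weight * years_ill

# A death at twenty-five costs far more life than a death at fifty-five:
print(years_of_life_lost(1, 25.0))   # 55.0 life-years
print(years_of_life_lost(1, 55.0))   # 25.0 life-years

# A chronic, rarely fatal worm infection still drains life on a huge scale:
print(years_lived_with_disability(1_000_000, weight=0.1, years_ill=5.0))  # 500000.0
```

The age difference falls out of the first function, and the second captures why gut worms, which rarely kill, can still outstrip malaria in the ledger.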

In the United States, most people aren’t aware of the damage that parasites wreak (or even know what these parasites are) because they’re such a small threat to their own health today. It wasn’t always the case. Most Americans don’t know that in the 1800s, malaria’s range swept all the way up the Great Plains into North Dakota, or that in 1901, a fifth of the population of Staten Island carried the parasite. Most don’t know that people in the southern United States once had a reputation for being lazy and stupid because so many of them were being drained by hookworm. Most don’t know that in the 1930s, 25 percent of the pork sold in the United States carried Trichinella.

The United States no longer has to worry about these parasites, but not because anyone invented a magic bullet. The parasites have been overwhelmed by the slow, dogged work of public health, of building outhouses, of inspecting food, of treating infections, breaking the cycles that parasites had followed for thousands of generations. There’s still plenty of life in this simple approach. Consider the hideous case of guinea worms. Even in the middle of the twentieth century, guinea worms were fantastically successful parasites. One estimate in the 1940s had them crawling out of the legs of 48 million people every year. Today there is still no vaccine for guinea worm disease, nor is there even a medicine known to work against it. But in the early 1980s, public health workers began a campaign that may eradicate it from the face of the Earth.

Their strategy was simple. They made people in the guinea worm zone aware of the parasite’s ways. They helped set up wells in some places and issued cheesecloth in others to filter parasite-carrying copepods out of pond water. They kept the guinea worm from completing its life cycle by bandaging the abscesses the parasites formed and, as the worms were spooled out of their hosts, keeping those hosts away from water. In a matter of years the guinea worm population started to crash. In 1989, there were 892,000 reported cases (the actual number was probably far higher); by 1998, it had dropped to 80,000. Guinea worms disappeared from Pakistan altogether in 1993. It’s conceivable that within a few years, guinea worms will be completely wiped out. After smallpox, guinea worm disease would become only the second disease ever eradicated in the history of medicine.

Two other pernicious parasites also have life cycles that make them good candidates for eradication. One is Onchocerca volvulus, the worm that travels in black flies and causes river blindness. Seventeen million people carry the parasite, mostly in Africa. Short of wiping out all the flies or issuing insect spray to every African at risk, there is no way to keep people from getting infected. Like guinea worms, O. volvulus has no vaccine, but it does have a partial cure. Sheep ranchers give their animals a drug called ivermectin to cure them of intestinal worms. Ivermectin seems to paralyze the worms so that they can’t feed or swim, and they get flushed out of the body. Parasitologists have discovered that ivermectin works effectively against many other parasites as well, including O. volvulus. If a person with river blindness takes the drug, the baby worms that wander through the skin die. It’s not a complete cure, since the adult worms are left snuggled happily in their nodule, where they can give birth to thousands more baby worms. But it’s the babies that cause the worst symptoms of the disease—the agonizing itchiness and the scarring of the eye that leads to blindness. Researchers found that if an infected person took one pill a year, he would stay free of the babies. Since an adult worm can live ten years, he would have to take it ten times to be completely cured. The pharmaceutical colossus Merck has donated as much ivermectin as will be necessary to cure the world of river blindness, and 100 million doses have been handed out so far.

More recently, parasitologists have found that ivermectin works just as effectively against the filarial worms that cause elephantiasis. The filarial worms have essentially the same life cycle as O. volvulus, and the same susceptibility to ivermectin. The project is far more ambitious—120 million people throughout much of the tropical world are infected. If these campaigns succeed and the three parasites are destroyed, the world should honor the people who waged them. We can look forward to a time when it will be hard to believe that there was anything on Earth that could have caused human agony in such elaborate ways. The parasites will be the dragons and the basilisks of the twenty-second century.

Yet, in their vulnerability these three parasites are exceptions rather than the rule. Many others thrive on the poverty that most of the world lives in, and it takes more than some good intentions to stop them. Schistosomiasis is easily curable if you’ve got the twenty dollars to buy the drug praziquantel. If you’re too poor to afford it on your own but someone gives it to you free, the chances are you’ll just get sick again because you have to get your water from a pond instead of a clean well. And often the supposed cures for poverty make the lives of parasites easier. When giant dams are built and submerge vast regions of dry land, they create new homes for the snails that carry blood flukes, and new epidemics of schistosomiasis reliably follow.

The most important reason that parasites do so well today is that they evolve. Parasites are not life’s dead ends, as was once thought; they are continually adapting to their circumstances. Not only has malaria been forcing us to evolve; it has been evolving to adapt to us. And after adapting to natural human defenses for many thousands of years, Plasmodium now simply has to go up against drugs rather than some new T cell receptor.

Before the 1950s, the malaria a person contracted anywhere in the world could be treated with a few doses of the benign drug chloroquine. Chloroquine cures malaria by turning Plasmodium’s food into poison. As Plasmodium feeds on the hemoglobin in red blood cells, the parasite chops off the arms of the molecule, leaving behind the iron-rich core. This core is dangerous to the parasite, because it can lodge in Plasmodium’s membrane and disrupt the flow of molecules in and out. The parasite neutralizes the poison in two ways. It strings some of the cores into harmless hemozoin; the rest it breaks down with enzymes until they can no longer react with the membrane.

Chloroquine works its way into Plasmodium and bonds with the hemoglobin core before the parasite can neutralize it. In its new form, the compound won’t fit on the end of a hemozoin chain, and the parasite’s enzymes can no longer react with it. Instead it builds up in Plasmodium’s membrane and makes it leaky. The parasite can no longer pump in the atoms like potassium that it needs, or pump out the ones that it has to get rid of, and it eventually dies.

Now huge parts of the globe harbor malaria that’s chloroquine-proof. In the late 1950s, two chloroquine-resistant parasites were born—one in South America, the other in Southeast Asia. Researchers aren’t exactly sure what makes them so stubborn, but they suspect that they have a mutant protein that snags chloroquine before it penetrates too deeply into the parasite. These mutants have probably cropped up regularly for thousands of years, but the odd proteins they produced served no good purpose. They probably even slowed down the parasite’s feast of blood, so they were squelched by natural selection.

But starting in the 1950s, any parasite that could block chloroquine had plenty of space—human bodies—for colonizing. Year by year, the children of those two Plasmodium mutants spread from their homelands. The South American mutant spread to cover every malarial region of the entire continent. The Southeast Asian mutant, meanwhile, was even more cosmopolitan: by the 1960s it had overrun Indonesia and New Guinea to the east, while to the west it spread in the 1970s through India and the Middle East. In 1978, this Southeast Asian form was first recorded in East Africa, and by the 1980s it had made its way to most of sub-Saharan Africa. Now it’s much harder to stop the spread of malaria, because other antimalarial drugs are more expensive, and resistant strains of Plasmodium are rising up against them as well.

The resurgence of parasites like Plasmodium has made parasitologists yearn for a vaccine. But even though vaccines work well against some viruses and bacteria, there’s no commercially available vaccine against a eukaryote. None. The problem is that eukaryote parasites are complex, evasive creatures. They go through different stages within their host, one stage looking nothing like the next. Protozoans and animals are accomplished at fooling our immune systems—just consider the way trypanosomes can peel off their molecular fur and grow one with a completely different pattern of chemical stripes, the way blood flukes snatch our own molecules for a mask while producing other chemicals that turn us against ourselves.

The first attempts to make parasite vaccines were crude affairs. Scientists would destroy live parasites with radiation and then inject their remains into lab animals. They provided only a little protection. In the last twenty years, scientists have learned how to tailor their vaccines much more carefully. They’ve turned their attention from entire parasites to single molecules the parasites carry on their coats. Their hope has been to find a handful of molecules that the immune system can use to prime itself for fighting these invaders. But still the failures have kept coming. The World Health Organization mounted an aggressive campaign to create a schistosomiasis vaccine in the 1980s. It backed not one molecule but six, each tested by a squadron of immunologists. None of them offered any significant protection, so the grand scheme has been scrapped as the vaccine developers look for new molecules.

Yet, parasites do not by definition defy vaccines. It’s still possible that each has a molecule it simply can’t live without, one the immune system can identify reliably enough to use as a guide for its attacks. In 1998, human trials began for a malaria vaccine created by scientists with the United States Navy. Their vaccine is more sophisticated than any of its predecessors. They want to get the human immune system to attack Plasmodium at its early stage in the liver cell. Liver cells display bits of Plasmodium’s proteins in the major histocompatibility complex (MHC) receptors on their surface. Normally our bodies can’t fight malaria at this stage, because by the time killer T cells have recognized the fragments and multiplied into a parasite-killing army, the Plasmodium has already escaped the liver and slipped into the bloodstream.

But if killer T cells were already primed to recognize those fragments, they would be able to start destroying infected liver cells immediately. To create an army of these T cells, the navy scientists want to give people a false case of malaria. They have fashioned a sequence of DNA that they are injecting into the muscles of volunteers. The DNA makes its way into the muscle cells, where it starts making the same protein that Plasmodium makes and liver cells display. The muscle cells should, in theory, carry this vaccine protein to their own surface, and killer T cells that come across it will be ready to fight off an actual infection when it comes.

It’s a long way, though, from human trials to an actual vaccine campaign—particularly against diseases such as malaria and schistosomiasis that affect hundreds of millions of people in the poorest parts of the world. “What’s the best you could expect from a vaccine?” asks Armand Kuris, who has spent a large part of his career looking for ways to control schistosomiasis. “A molecular biologist will say, ‘It’s expensive, it will require revaccination every five to seven years, it will require perfect cold delivery.’ That means refrigeration from its manufacture to the point when you’re taking out a vial and sticking a syringe into it. Did you ever get a vaccination for smallpox? I received a vaccination on the border of Costa Rica where the nurse had the vaccine in a shot glass and tattooed me with a sewing needle. Now that’s a vaccine.” He points out that praziquantel, the cure for schistosomiasis, costs twenty dollars. “In Kenya in the villages where I work, the best-off families may be able to get the drug for a favored child. If that’s economically impossible, then if I gave you a vaccine, what the hell could you do with it? I’m not saying don’t do any research in it. The navy may have to go to a place with malaria—Peace Corps workers, diplomats … but in terms of the 200 million people who suffer from schistosomiasis, the vaccine has no chance of working. And yet my calculation is that three-quarters of the money spent on schisto in the past twenty years has been spent on vaccines.”

Even if researchers could produce a vaccine that met Kuris’s shot-glass standard, the parasites might well find a way around it. The World Health Organization has decided that even if a schistosome vaccine provided only 40 percent protection, it would be worth backing. That doesn’t mean that 40 percent of the 200 million people with schistosomiasis would be rid of their parasites. It means that each person would lose 40 percent of the worms inside his veins. It sounds like a worthy goal, but it ignores the sophistication of schistosomes. These flukes can sense how many of their fellow flukes are in their host, and as that number gets higher, each female produces fewer and fewer eggs. It’s probably a mechanism the blood flukes have evolved to keep their hosts alive. If every female were to crank out as many eggs as she possibly could, they’d cause so much scarring to the host’s liver that the host might die. A vaccine that killed 40 percent of the worms in a person might create the opposite situation: the surviving schistosomes would sense that they had less competition and ratchet up their egg production, making the disease worse. The toy model below makes that possibility concrete.
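Here is a small simulation of how the arithmetic could backfire (a sketch only: the exponential crowding curve and every number in it are assumptions chosen for illustration, not measurements of real schistosomes):

```python
import math

# Toy model of density-dependent egg-laying in blood flukes. Assume each
# female cuts her daily output exponentially as the host's worm burden rises.
# The curve and all constants are illustrative assumptions, not field data.

def eggs_per_female(worms: float, e_max: float = 300.0, crowding: float = 50.0) -> float:
    """Per-female output falls off as the worm burden climbs."""
    return e_max * math.exp(-worms / crowding)

def total_eggs(worms: float) -> float:
    return (worms / 2) * eggs_per_female(worms)  # assume half the worms are female

before = total_eggs(200)        # a heavily infected, unvaccinated host
after = total_eggs(200 * 0.6)   # the same host after a 40%-effective vaccine

print(f"daily egg output before: {before:.0f}, after: {after:.0f}")
# Under these assumptions, output roughly triples (about 550 -> 1,630):
# killing 40 percent of the worms relaxes the crowding signal, and the
# survivors more than make up the difference.
```

Whether real flukes would overshoot this far depends on the true shape of the crowding curve, which is exactly why a 40 percent vaccine is not automatically 40 percent of a cure.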

Vaccines also run the risk of tearing down our hard-earned ability to immunize ourselves. Say that the navy vaccine against the liver stage of malaria works, and that it is decided to inject it into millions of children around the world. Now say that the vaccine works brilliantly for a few years. Now say that countries let the program lapse because of civil war or because speculators sell off the national currencies. Or, if you like, say that a mutant strain of malaria sweeps through, different enough to keep T cells trained on the vaccine from recognizing it. Now the people would have no protection in their livers, and wouldn’t have had the opportunity to build up their own resistance to the blood stage of the parasite. The vaccine could then conceivably cause more harm than good.

For some parasites, it may actually make more sense to find a better coexistence than to try for eradication. In schistosomiasis, for example, the adult blood flukes themselves don’t cause much harm. They’re so well cloaked from the immune system that they don’t trigger a damaging attack, and they don’t drink much blood. It’s their eggs that are trouble, as the immune system forms giant balls of scar tissue around them in the liver. Among the many signals that immune cells trade, one has the ability to stop them from making these granulomas. Scientists have found that if they give an extra dose of this signal to mice with schistosomiasis, they don’t destroy their own livers. Conceivably, this kind of medicine could save us—not from parasites but from ourselves. Another strategy could be to keep the blood flukes from mating. Scientists have discovered that males attract females with a chemical signal. If people were vaccinated so that their immune system could destroy that signal, blood fluke love would be foiled, and no eggs would be made.

Coexistence with parasites might also be possible if we could tame them. The severity of a disease caused by a parasite has a lot to do with its evolutionary options. If a virus’s best chance for survival requires it to kill its hosts quickly, it will probably evolve into a lethal strain. But the opposite is also true: if the virus has to pay a heavy price for being virulent, more benign strains will win out. For well over ten thousand years, we’ve actually been managing a lot of evolution as we’ve bred plants and animals for the qualities we desire—docile cows, for example, and sweet apples. One of the architects of the theory of virulence, Paul Ewald of Amherst College, has proposed doing the same thing with parasites in order to fight diseases. It’s actually not hard to domesticate a parasite. In many parts of the tropics, for example, public health campaigns are supplying people with screens and bed nets to keep malaria-carrying mosquitoes from biting them as they sleep. The campaigns will save lives not only by preventing mosquito bites, Ewald suspects, but by forcing the Plasmodium inside the mosquitoes to evolve into a gentler form. As it becomes less likely that a parasite can get from one host to the next, it becomes unwise, evolutionarily speaking, to kill a host.
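Ewald’s logic can be compressed into the standard trade-off model of evolutionary epidemiology (a textbook sketch, not a formula from his own writing). A strain’s evolutionary success tracks its basic reproductive number,

\[
R_0(\alpha) = \frac{\beta(\alpha)}{\alpha + \mu + \nu},
\]

where \(\beta\) is the transmission rate, \(\alpha\) the extra host mortality the parasite causes (its virulence), \(\mu\) the host’s background death rate, and \(\nu\) the rate of recovery. Selection favors whatever virulence maximizes this ratio. When bed nets keep mosquitoes from biting the sickest, bedridden hosts, extra virulence no longer buys extra transmission (\(\beta\) stops climbing with \(\alpha\)), and the mathematics tips in favor of milder strains.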

Eradicating parasites may even create new diseases. Colitis and Crohn’s disease affect 1 million Americans today. In both cases, a person’s own immune system violently attacks the lining of the intestines. The inflammation it triggers ruins a person’s digestion, and sometimes a surgeon may have to cut out a length of the damaged bowels. Both diseases can torment a person for a lifetime, and so far there’s no cure for either. Yet, as common as they are today, you can’t find any record of colitis or Crohn’s disease before the 1930s. The first cases in the United States turned up in well-to-do Jewish families in New York City, which made doctors think they were hereditary diseases. But then whites who weren’t Jewish started getting them. Still, doctors thought the diseases were hereditary because hardly any blacks fell ill. But in the 1970s, blacks started getting the diseases as well. Looking outside the United States, you can see another peculiar pattern. In the poorer countries of the world, the diseases are practically unheard of. Yet, in Japan and Korea, two countries that have quickly gone from poverty to wealth, there are now epidemics of colitis and Crohn’s disease.

Some scientists think that the spread of these diseases was caused by the eradication of intestinal worms. The idea certainly fits their history. In the United States, they appeared first in affluent city dwellers—the people, in other words, who would have been the first to be cleared of tapeworms and the other worms living in their bowels. Later, when blacks began to emerge from poverty and moved to cities as well, they also fell ill. Intestinal parasites are still common in most of the world, but in countries where they’ve been recently eradicated, colitis and Crohn’s disease have followed fast. Even farm animals are starting to get bowel diseases as they’re treated with antiworm medicines like ivermectin.

Humans may have been protected from diseases like these by the interplay between their immune systems and intestinal parasites. Parasitologists have found that intestinal worms can nudge the immune system from a poison-spouting, cell-engulfing frenzy to a gentler sort of attack. In this mellower mood, the immune system can still keep bacteria and viruses in check, but the parasitic worms can live unmolested. This arrangement benefits the host as well. When parasitic worms are abundant, it would be dangerous to attack them over and over again. But then, in an evolutionary blink, a few hundred million people lost their parasites. Without their soothing influence, some people now swing too far the other way, their immune systems unable to stop attacking their own bodies.

In 1997, scientists at the University of Iowa put this idea into startling practice. They picked out seven people with ulcerative colitis or Crohn’s disease who had gotten no relief from any conventional treatment. They fed them eggs from an intestinal worm that normally lives in an animal, one that wouldn’t cause any disease of its own in a human gut (the scientists are keeping the species a secret until they’ve finished their research). Within a couple of weeks the eggs had hatched, the larvae had grown, and six of the seven people went into complete remission.

Parasite-free living may also be responsible for the rise of other immune disorders, such as allergies. Twenty percent of the population of the industrialized world suffers from allergies, but elsewhere they’re hard to find. Since it’s dangerous to generalize from country to country, an immunologist named Neil Lynch has done fine-grained studies of this pattern within Venezuela. He looked at people in upper-class homes with running water and toilets, and compared them with poor Venezuelans in slums. Among the upper class, 43 percent had allergies, while only 10 percent carried intestinal worms, and those infections were light. The poor had half as many allergies as the upper class but twice as many worm infections. And when Lynch studied Venezuelan Indians who live in the rain forests, the pattern was even starker: 88 percent were infected with parasites, and they had no allergies at all. Without parasitic worms exerting their influence, our immune systems may be prone to overreacting to harmless bits of cat dander and mold.

To fight these diseases, we may need to acknowledge our long marriage to parasites. That’s not to say that people with colitis should eat Trichinella eggs, unless they’d enjoy a long, agonizing death as the parasite worked its way into their muscles. But the chemicals that parasites use to manipulate our immune systems may offer protection from modern life. Perhaps someday, along with polio vaccines, children will get parasite proteins, so that their immune systems will be trained not to fly out of control. It would be a supreme final twist to the story of parasites in humans. They may not always be the disease. In some cases they may be the cure.
