Today cholera is known primarily as a disease that affects impoverished countries, but that wasn’t always so. In the nineteenth century, cholera struck the most modern, prosperous cities in the world, killing rich and poor alike, from Paris and London to New York City and New Orleans. In 1836, it felled King Charles X in Italy; in 1849, President James Polk in New Orleans; in 1893, the composer Pyotr Ilyich Tchaikovsky in St. Petersburg. Over the course of the nineteenth century, cholera sickened hundreds of millions, killing more than half of its victims. It was one of the fastest-moving, most feared pathogens in the world.
The medical and public-health advances developed to contain nineteenth-century pathogens like cholera were so eﬀective that for most of the twentieth century, the conventional wisdom among epidemiologists, medical historians, and other experts was that developed societies had vanquished infectious diseases for good.
The ﬁrst new infectious disease that struck the prosperous West and disrupted the notion of a “postinfection” era, the human immunodeﬁciency virus (HIV), appeared in the early 1980s. Although no one knew where it came from or how to treat it, many commentators exuded certainty that it was only a matter of time before medicine would vanquish the upstart virus. Drugs would cure it, vaccines would banish it. Public debate revolved around how to get the medical establishment to move quickly, not about the dire biological threat that HIV posed. In fact, early nomenclature seemed to negate the idea that HIV was an infectious disease at all. Some commentators, unwilling to accept the contagious nature of the virus (and willing to indulge in homophobic scapegoating) declared it a “gay cancer” instead. And then other infectious pathogens arrived, similarly impervious to the prevention strategies and containment measures we’d long taken for granted. Besides HIV, there was West Nile virus, SARS, Ebola, and new kinds of avian inﬂuenzas that could infect humans. Newly rejuvenated microbes learned to circumvent the medications we’d used to hold them in check: drug-resistant tuberculosis, resurgent malaria, and cholera itself.
In 2008, disease experts marked the spot where each new pathogen emerged on a world map, using red points. Crimson splashed across a band from 30–60° north of the equator to 30–40° south. The entire heart of the global economy was swathed in red: the northeastern United States, western Europe, Japan, and southeastern Australia.
“You hear this analogy that we have to win this war against microbes,” said the UCLA infectious-disease expert Brad Spellberg to a room full of colleagues in 2012. “Really? They are so numerous that they collectively outweigh us by one-hundred-thousand-fold. I don’t think so.”
In a survey by the epidemiologist Larry Brilliant, 90 percent of epidemiologists said that sometime in the next two generations, a pandemic would sicken 1 billion people, kill up to 165 million, and trigger a global recession costing up to $3 trillion.
In 2009, H1N1 ﬂu had spread quickly and widely but caused death in less than 0.005 percent of its victims.
The scene was remarkable in several ways that might explain why SARS had begun there. One was the unusual, ecologically unprecedented conglomeration of wild animals. In a natural setting, horseshoe bats, which live in caves, never rub shoulders with palm civets, a kind of cat that lives in trees. Neither would normally come within spitting distance of people, either. But all three came together in the wet market. The fact that the virus had spread from bats into civet cats had been especially critical to SARS’s emergence. The civet cats were, for some reason, especially vulnerable to the virus. This gave the virus the opportunity to amplify its numbers, like a whistle in a tunnel. With increased replication came increased opportunities to mutate and evolve, to the extent that it evolved from a microbe that inhabited horseshoe bats to one that could infect humans. Without that ampliﬁcation, it’s hard to say whether the SARS virus would have ever emerged.
For a pathogen to cause a wave of sequential infections—an epidemic or a pandemic, depending on how far the wave traveled—it must be able to spread directly from one human to another. That is to say, its “basic reproductive number” has to be greater than 1. The basic reproductive number (also known as R0, or “R-naught,” as the Anglophiles pronounce it) describes the average number of susceptible people who are infected by a single infected person (in the absence of outside interventions). Say you have a cold, and you infect your son and his friend with it. If this hypothetical scenario were typical of the entire population, your cold’s basic reproductive number would be 2. If you infect your daughter as well, your cold’s basic reproductive number would be 3. This calculation is a critical one to make in an outbreak, for it immediately predicts its future course. If, on average, each infection results in less than one additional infection—you infect your son and his friend, but each of them infects no one else—then the outbreak will die out on its own, like a population in which each family produces fewer than two children…
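The arithmetic behind that threshold can be sketched in a few lines of code. This is a toy expected-value model (not from the source) showing why an outbreak with a basic reproductive number above 1 snowballs while one below 1 fades out on its own:

```python
def expected_cases(r0, generations):
    """Toy model: expected number of new infections in each generation,
    assuming every case infects r0 others on average, with no interventions
    and an unlimited supply of susceptible people."""
    cases = 1.0  # one initial infection
    history = []
    for _ in range(generations):
        history.append(cases)
        cases *= r0
    return history

# R0 = 2, as in the cold example (you infect your son and his friend):
# 1, 2, 4, 8, ... -- the outbreak grows without limit.
growing = expected_cases(2.0, 10)

# R0 = 0.5 -- each case infects fewer than one other person on average,
# so the outbreak dwindles like a population with shrinking families.
fading = expected_cases(0.5, 10)
```

Real epidemiological models use branching processes or compartmental (SIR) equations rather than this bare multiplication, but the threshold behavior at R0 = 1 is the same.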
More than 60 percent of our newly emerged pathogens originate in the furred and winged creatures around us. Some of these new pathogens come from domesticated animals, such as pets and livestock. Most—over 70 percent—come from wild animals.
Because intimate contact between species must be prolonged in order for an animal microbe to turn into a human pathogen, we have historically fallen victim to some animals’ microbes more than others’. Many more pathogens come from the bodies of Old World creatures, with whom we’ve lived for millions of years, than from New World creatures, with whom we’ve been acquainted for just tens of thousands of years. A disproportionate number of human pathogens hail from other primates, who’ve bestowed upon us 20 percent of our most burdensome pathogens (including HIV and malaria), despite comprising just 0.5 percent of all vertebrates. It’s also why so many human pathogens date back to the dawn of agriculture ten thousand years ago, when people started domesticating other species and living in prolonged, intimate contact with them. From cows, we got measles and tuberculosis; from pigs, pertussis; from ducks, influenza.
Of 4,600 species of mammals on earth, 20 percent are bats. And, as a study in Paraguay found, certain bat species thrive in disturbed forests in even higher abundances than in intact ones. Unfortunately, bats are also good incubators for microbes that can infect humans. They live in giant colonies of millions of individuals. Some species, like the little brown bat, can survive for as long as thirty-ﬁve years. And they have unusual immune systems. For example, because their bones are hollow, like those of birds, they don’t produce immune cells in their bone marrow like the rest of us mammals do. As a result, bats host a wide range of unique microbes that are exotic to other mammals.
As avian diversity declined in the United States, specialist species like woodpeckers and rails disappeared, while generalist species like American robins and crows boomed. (Populations of American robins have grown by 50 to 100 percent over the past twenty-ﬁve years.)48 This reordering of the composition of the local bird population steadily increased the chances that the virus would reach a high enough concentration to spill over into humans.
The loss of species diversity in northeastern forests of the United States similarly allowed tickborne pathogens to spill over into humans. In the original, intact northeastern forests, a diversity of woodland animals such as chipmunks, weasels, and opossums abounded. These creatures imposed a limit on the local tick population, for a single opossum, through grooming, destroyed nearly six thousand ticks a week. But as the suburbs grew in the Northeast, the forest was fragmented into little wooded plots crisscrossed by roads and highways. Specialist species like opossums, chipmunks, and weasels vanished. Meanwhile, generalist species like deer and white-footed mice took over. But deer and white-footed mice, unlike opossums and chipmunks, don’t control local tick populations. When the opossums and the chipmunks disappeared, tick populations exploded.
The toll of pathogens like MRSA, SARS, West Nile virus, and even Ebola is relatively minor in the grander scheme of things. More people die in car accidents in the United States every year than these new pathogens have managed to fell during their collective tenures on Earth. The reason to pay attention to them regardless is that they’ve begun a journey that pathogens such as cholera completed. And we can see where that road leads.
Europeans and Americans felt that cholera, as a disease of backward Orientals, would never reach the enlightened West. Cholera was an “exotic production … developed in the uncultivated, arid plains of Asia,” an 1831 French tome declared. They pointedly referred to it as the “Asiatic” cholera to diﬀerentiate it from their ordinary diarrhea, which they called “cholera morbus.”8 France, for example, had little to fear: “In no country but England are the rules of hygiene more faithfully observed,” one French commentator proudly opined.9 Paris, where the wealthy enjoyed airy courtyards and marbled baths in perfumed water, was nothing like the swampy, mangrove-covered Sundarbans. On the contrary, Paris was the center of the Enlightenment. Students of medicine from around the world descended upon the city’s new hospitals to learn the latest techniques and discoveries from leading French physicians. And yet, slowly but surely, cholera arrived on Europe’s doorstep. By the fall of 1817, cholera had traveled sixteen hundred miles upstream on the Ganges, killing ﬁve thousand at a military camp. By 1824, cholera had radiated into China and Persia, before freezing out in Russia that winter. A second outbreak began in India a few years later. In 1827, British troops invaded Punjab; in 1830, Russian soldiers marched off to occupy Poland. Cholera followed them like a shadow.
Cholera took hold of Paris in late March 1832. Without the benefit of modern medicine, it killed one-half of those whom it infected, causing a set of uniquely horrifying symptoms. There was no tragic tubercular cough or romantic malarial fever. Within hours, cholera’s dehydrating effect shriveled victims’ faces, wrinkling skin and hollowing cheeks, drying up tear ducts. Fluid blood turned tarry, clotting in the bloodstream. Muscles, deprived of oxygen, shuddered so violently that they sometimes tore. As the organs collapsed in turn, victims fell into acute shock, all the while fully conscious and expelling massive amounts of liquid stool.
Mythic tales circulated of people who sat down to dinner and died by dessert; men who returned home from work to find a note on the door saying that the wife and family lay dying in the hospital; people riding the train who suddenly collapsed in front of their fellow passengers. And they did not just clutch their hearts and crumple to the floor, either; their bowels released uncontrollable floods. Cholera was humiliating, uncivilized, an affront to nineteenth-century sensibilities. This exotic invader, the historian Richard Evans writes, transformed enlightened Europeans into “a race of savages.”
Physicians had been arguing for centuries over the precise symptoms of death, and about the diﬀerences between what they called “apparent” and “real” death. In 1740, the prominent French physician Jean-Jacques Winslow had argued that some of the common tests for death—pinpricks and surgical incisions—lacked a certain precision. (Poor Winslow himself had been mistakenly declared dead and boxed in a coﬃn twice as a child.) Some said that the most reliable sign of death was the putrefaction of the body. But that was a stringent and stinky test for the bereaved, who might be compelled to wait around for the decay of their loved one before mourning. And even then, some argued, the corpse might still be alive, simply comatose and gangrenous.
In the 1790s, a new system implemented at Paris mortuaries required that corpses be outﬁtted with special gloves, such that if a corpse’s ﬁnger so much as trembled, a string would be pulled and a large hammer would slam down on an alarm. Guards patrolled the mortuary under the direction of local physicians, ears peeled. (Today, we surveil the living for signs of death; back then, they surveilled the dead for signs of life.) An 1803 law required a day’s delay between an apparent death and the subsequent burial, just in case someone got it wrong. In 1819, the French physician René-Théophile-Hyacinthe Laënnec developed the stethoscope, which made audible even the sound of a faintly beating heart (simultaneously freeing gallant physicians from suggestively pressing their ears to their female patients’ chests).
In the evenings during that terrible spring, Paris’s elite attended elaborate masquerade parties where, in denial and defiance of cholera’s toll, they danced to “cholera waltzes,” costumed as the ghoulish corpses many would soon become. Willis, who attended one of the so-called cholera balls, described a man dressed as cholera itself, with “skeleton armor, bloodshot eyes, and other horrible appurtenances of a walking pestilence.” Every now and then, one of the revelers would rip off his mask, face purpled, and collapse. Cholera killed them so fast they went to their graves still clothed in their costumes.19 (Paris’s cholera balls, and Willis’s reporting on them, inspired a mordant thirty-three-year-old writer in Baltimore—Edgar Allan Poe—to pen “The Masque of the Red Death,” a short story about a masquerade ball in which the entry of a masked figure “shrouded from head to foot in the habiliments of the grave” leads to the death of “revellers in the blood-bedewed halls of their revel.”)
The growth of wet markets created the conditions for the SARS virus to spill over and adapt to humans, but it was the modern air travel network and a single establishment—a nondescript business hotel called the Metropole in the middle of Kowloon in Hong Kong—that distributed it across the planet, triggering the global outbreak of 2003. SARS’s first victims in south China had been rushed to local hospitals, including the Sun Yat-Sen Memorial Hospital in Guangzhou. There, clinicians working around the clock provided whatever care they could, but they also continued living their lives. One, Dr. Liu Jianlun, finished his shift tending SARS patients, then cleaned up, changed clothes, and left Guangzhou for the ninety-mile trip south to Hong Kong to attend a wedding. A few hours later, he checked into Room 911 at the Metropole, which is where the SARS virions in his body made their escape.41 So much virus took leave of his body in that room that investigators recovered genetic evidence of the virus in the carpet months later.42 Just how SARS spread from Dr. Liu to twelve other hotel residents remains unclear. Perhaps they shared an elevator ride with him or passed through the hallway outside his room after he’d coughed or vomited. Or they touched the corridor walls after he had brushed against them with a hand he’d sneezed into. Or inhaled some of the aerosolized virus that had escaped from his room after he flushed the toilet.
One of Liu’s fellow residents in 2003 was a ﬂight attendant. She made it as far as Singapore before being hospitalized, where she passed on the virus to her doctor, who planned to ﬂy to New York, where he was to attend a medical conference. He made it as far as Frankfurt, Germany. Others exposed to Liu at the Metropole boarded planes to Singapore, Vietnam, Canada, Ireland, and the United States. Within twenty-four hours, the SARS virus from Liu had spread to ﬁve countries; ultimately, SARS appeared in thirty-two countries. Thanks to the miracle of air travel, one infected man seeded a global outbreak.
Many people worry about catching bugs during air travel, but in fact only a subset of pathogens easily spreads during ﬂights themselves. Pathogens that spread via direct contact, like HIV and Ebola, are unlikely to amplify during ﬂights.
Respiratory pathogens like SARS, however, are ideally suited. By spreading through droplets released while coughing or sneezing or through aerosols, extra-tiny droplets that can hang suspended in the air, they can turn a single infected carrier upon departure into a planeload of carriers upon arrival.
Every year, for example, hundreds of thousands of so-called medical tourists from the United States, Europe, the Middle East, and elsewhere ﬂy to countries such as India to undergo surgery. Thanks to market reforms in the early 1990s, which unleashed several decades of 8 percent annual growth in the Indian economy, modern private hospitals in India now provide the same standard of care as Western hospitals. But because poverty and low wages persist in the country (among other reasons), they can do so at a fraction of the cost. As a result, foreign patients looking for an aﬀordable organ transplant, or a knee replacement, or heart surgery, arrive in droves.
Most bacteria in Indian hospitals are gram-negative, which means they are encased in tough outer membranes that make them more resistant to antibiotics and antiseptics than the gram-positive strains that dominate in Western hospitals. (The term gets its name from Hans Christian Gram, the developer of the test that distinguishes between the two types.)
The act of excretion itself didn’t require privacy or provoke shame back then as it does now. Sixteenth- and seventeenth-century monarchs such as England’s Elizabeth I and France’s Louis XIV openly relieved themselves while holding court.12 Far from reviling human feces, medieval Europeans even began to think of it as medicinal. According to a history of sanitation by the journalist Rose George, the sixteenth-century German monk Martin Luther ate a spoonful of his own feces every day. Eighteenth-century French courtiers took a diﬀerent route, ingesting their “poudrette,” dried and powdered human feces, by sniﬃng it up their noses.13 (Was this dangerous? Quite possibly. But in contrast to more immediate threats like, say, bubonic plague, the sporadic cases of diarrhea these practices may have caused would have paled.)
To those of us who’ve grown up with indoor-plumbing systems that capture every last drop of excreta in gleaming porcelain portals and flush it miles away, the waste management crisis that engulfed nineteenth-century New York seems a curiosity from another world. But it isn’t. The sanitary revolution that replaced privies and latrines with flush toilets and running water has been selective and only partially implemented. And because pathogens that enjoy transmission opportunities in one corner of the world can easily spread elsewhere, in certain ways we are as threatened today by pathogens that spread by feculence as we were almost two hundred years ago.
While Western attitudes toward human excreta have radically changed since the days of poudrette, our attitudes toward animal excreta remain relatively cavalier. For example, in the United States, where dog ownership is common, many consider dog feces harmless. That’s why many communities allow resident pets to defecate at will in streets, yards, and parks, and dog owners think nothing of walking for miles with thin plastic bags of dog feces swinging casually at their sides. A worker in the garden center at Home Depot confided to me that he planted his award-winning tomato plants in the stuff. According to one survey, 44 percent of dog owners make no attempt to collect or contain their dog’s waste, explaining that dog poo acts as a fertilizer.49 As a result, dog waste deposited on sidewalks and yards sinks into soils, wafts into the air, and washes into waterways. About a third of the bacterial contamination in U.S. waterways originates in dog waste, and such contamination is more common in residential areas where dogs live than in commercial areas. (Scientists call this phenomenon the “Fido Hypothesis.”)50 It’s in the air, too. One study of outdoor air pollution in Chicago, Cleveland, and Detroit found that during the winter when trees are leafless (and therefore not exuding bacteria into the air), the majority of the aerosolized bacteria comes from dog feces.
For pathogens, excreta is a perfect vehicle for spreading from one person to another. Human feces, freshly emerged from the body, teems with bacteria and viruses. By weight, nearly 10 percent is composed of bacteria, and in each gram there might be up to one billion viral particles.
Ancient Romans used water to ﬂush waste far from their settlements, where it could rot undisturbed. The Romans controlled a supply of fresh water from distant, unpopulated highlands via a network of wood and lead pipes, which brought the typical resident three hundred gallons of fresh water every day, three times more water than the average water-guzzling American uses today, according to the Environmental Protection Agency. The Romans mostly used this ﬂow of water to run bathhouses and public fountains, but they also used it in communal latrines, where they sat over keyhole-shaped openings on benches situated over large drains, a gutter of fresh running water ﬂowing at their feet.
Partly, this about-face had to do with the rise of Christianity in the fourth century A.D. The Greeks and Romans, not to mention the Hindus, the Buddhists, and the Muslims, all prescribed ritualized hygiene practices. Hindus must wash after any number of acts considered “unclean,” as well as before prayer. Muslims must perform ablutions at least three times before their ﬁve-times-daily prayers, as well as on numerous other occasions. Jews were enjoined to wash before and after each meal, before praying, and after relieving themselves. In contrast, Christianity prescribed no elaborate water-based hygiene rituals. Good Christians had only to sprinkle some holy water to consecrate their bread and wine. Jesus himself, after all, had sat down to eat without washing ﬁrst. Prominent Christians openly repudiated water’s cleansing eﬀect as superﬁcial, vain, and decadent. “A clean body and a clean dress,” opined one, “means an unclean soul.” The most holy Christians, with their lice-infested hair shirts, were among the least washed people on Earth. Not surprisingly, after the Goths disabled the Roman aqueducts in 537, the unwashed leaders of Christian Europe didn’t bother rebuilding them, or any other elaborate water-delivery system.
And we allow the livestock we eat to live in conditions that would be considered medieval if people were subjected to them. The chicken coop, the pigpen, the rabbit hutch—all are piles of excreta with animals sleeping and living on top.
In the United States, livestock produce thirteen times more solid waste than the human population does.
In 2011, a batch of fenugreek seeds from Egypt caused an outbreak of disease three thousand miles away in Germany. That outbreak was noteworthy for two reasons. It showed the long reach of fecally contaminated products and how they pose a risk to everyone along the global food chain. It also showed how pathogens exploit fecally contaminated environments not only for their transmission opportunities but also to become more virulent.
Unlike creatures like us, who exchange genes “vertically,” from parents to children, microbes can exchange genes laterally, by bumping up against each other. Scientists call it “horizontal gene transfer.” Since it happens in places where microbes meet, microbe-rich, fecally contaminated environments provide a conducive setting.
As we grapple with the pathogens this new sanitary crisis imposes on us, we also face the pathogens churned out by the old sanitary crisis, which continues unabated in much of the world where poverty is rampant and governance weak. Fast-forward 178 years after New York City’s first cholera epidemic, to the island of Hispaniola, and the nation of Haiti. The majority of the population used the same waste-management methods as nineteenth-century New Yorkers. As of 2006, only 19 percent of the Haitian population had access to toilets or latrines. “When our children have to take a poop, we put them on a little bowl,” explained a resident of Cité Soleil, Haiti’s biggest slum. “Once they are done, we throw it into an empty lot.” Others used what are euphemistically called “flying toilets.” They defecate “into plastic bags which are then tossed into a mound of garbage or a nearby canal,” as watchdog journalists from the NGO Haiti Grassroots Watch wrote. And the excreta deposited into Haiti’s streets and empty lots aren’t easily moved. The flow of rainwater that might wash it into the sea is regularly blocked by garbage, such as plastic bags, Styrofoam containers, vegetable scraps, and cast-off shoes in various states of decomposition.
People who live in the slums of South Asia are similarly exposed to human excreta. Nobody pays any mind when little boys, like the one I met at the Ekta Vihar slum in New Delhi, nonchalantly squat over the open gutters that run through the illegal settlement, even as a limber sari-clad woman and her three small children crouching in the dust on the gutter’s banks eat their afternoon meal not twenty yards away. Of more than 5,000 towns in India, only 232 have sewer systems that carry away human excreta, and even those systems are partial at best. Everyone else must relieve themselves outside, in the open, like 2.6 billion other people around the world. Or they may use dry latrines of some kind or another, which are periodically emptied by India’s 1.2 million “manual scavengers,” who, like New York’s nineteenth-century night scavengers, collect the excreta with their bare hands or a piece of tin. They scrape it into a basket and carry it to a designated dumping ground, such as a nearby body of water.67 Whether collected by scavengers or carried off by sewer systems, the overwhelming majority of human excreta in the developing world ends up in the same streams, rivers, lakes, and seas that people use for domestic purposes, with microbial intensity fully intact.
For the billions of people who lack adequate sanitation, this problem is a standing public-health catastrophe. Nearly 2 million die every year from diarrhea and scores of others from diseases such as intestinal worms, the helminthic infection schistosomiasis, and blindness-causing trachoma, which sanitary waste disposal systems could prevent. But it’s not a problem only for them. It’s a problem for everyone, because neglected environments contaminated with human filth provide a back channel for pathogens to amplify, spread, and hatch new pandemics that can affect all of us.
Thanks to the housing revolution, even the most crowded cities can be healthful places to live. In general, people who live in cities today live longer than those who live in rural areas. Only a few health burdens remain—higher rates of obesity and more exposure to pollution, for example. And yet, as washed clean of their past as cities like New York may appear, the housing revolution they enjoyed, like the sanitary revolution, has been partial and selective. It hasn’t penetrated many of the poorer countries of the world, and its insights haven’t been applied to our livestock. In India, due in part to poverty and in part to a lack of governance, housing regulations are as sparse and poorly enforced as in nineteenth-century New York. In Mumbai, the densest streets in slums such as Dharavi hold 1.4 million people in each square mile, more than seven times the concentration of humans packed into nineteenth-century Five Points.
For most of history, more people lived outside cities than inside them. By 2030, experts estimate, that will change. The majority of humankind will live in large cities. Only a handful of these large metropolises will be as healthful and well regulated as the cities of Europe and North America. Many will be more like Mumbai. Two billion of us will live in slums like Dharavi. Our booming livestock population, which is larger today than the cumulative population of the last ten thousand years of domestication until 1960, lives in the animal equivalent of slums, too. More than half of the world’s pigs and chickens are raised on factory farms, and more than 40 percent of the world’s beef is produced on feedlots, where animals are crowded together by the millions.
The most transformative eﬀect of crowds lies in the way they allow pathogens to become more deadly. This has to do with the peculiar evolutionary advantages that pathogens that infest crowds enjoy. Under most circumstances, virulence is detrimental to a pathogen’s ability to spread. Consider pathogens that spread when people breathe on each other, like inﬂuenza, or when they touch each other, like cholera or Ebola. Successful transmission depends on social contact between infected and noninfected people. Uninfected people must inhale the breath of the infected or touch their bodily ﬂuids. If they don’t, the pathogen is stuck. It can’t spread.
Of all the new pathogens emerging today, novel inﬂuenza viruses like H5N1 are the ones that keep the most virologists up at night. If H5N1 or any other novel avian inﬂuenza evolved to transmit eﬀectively between humans, the death toll would be swift and substantial. Even with low mortality rates, seasonal ﬂu viruses carry oﬀ huge numbers of victims, simply because they are so good at spreading widely among us. Every year seasonal inﬂuenza kills up to half a million people around the world. That’s the toll of ﬂu viruses that have already adapted to us, and we to them. A novel inﬂuenza virus that could spread as well as the seasonal ﬂu with an even marginally higher mortality rate could level millions.
Theoretically, H5N1 or some other bird-adapted virus could spontaneously mutate in such a way that allows it to bind to human sialic acids. But there’s another, much faster way for novel flu viruses to acquire that ability, through what’s called reassortment. This is when a virus acquires a chunk of new genes from another virus, and with the newly acquired genes, all of the capabilities those new genes confer. An avian influenza virus could reassort with one that was already good at infecting people, for example one of the many influenza viruses already adapted to humans, such as the relatively mild ones that cause seasonal flu. Then the novel avian influenza virus could acquire the ability to transmit efficiently in humans, too.
Reassortment of this kind could happen only in cells coinfected with both viruses at the same time. But since human inﬂuenzas bind to human sialic acids, and avian inﬂuenzas to avian sialic acids, people aren’t easily infected with bird inﬂuenzas, and birds aren’t easily infected with human inﬂuenzas. Thus, even with the massive ﬂocks of poultry moving across international borders and thousands of people exposed to bird excreta across southern China and elsewhere, opportunities for human and avian ﬂu viruses to directly exchange genes are slim.
This is where the pigs come in. Pigs have both humanlike sialic acids on the surfaces of their cells as well as avian-like ones. That means that both kinds of virus can bind to their cells. (This is also true of quails, but given the small scale of quail farming, they do not seem to play much of a role in the epidemiology of influenza.) Pigs living in proximity to both humans and poultry flocks or wild waterfowl could be the mysterious missing link between bird viruses and human influenza pandemics. Virologists call them the perfect "mixing vessel" for novel influenza strains.
Scientists categorize inﬂuenza viruses by the type of proteins on their surfaces. Each has one of sixteen subtypes of the protein hemagglutinin (H) and one of nine subtypes of the enzyme neuraminidase (N) on its surface.
The most nightmarish ﬂu pandemic in modern times struck in 1918. The pandemic virus—H1N1—had ampliﬁed and grown virulent under the unusually crowded conditions of trench warfare during World War I. It caused more than 40 million deaths around the globe, mostly due to bacterial pneumonia, a complication of the viral infection (which would be treatable today, unless caused by a resistant strain). H1N1 sank out of sight after that. It seemed as if the virus disappeared. But it hadn’t. It had retreated into some repository somewhere, just as cholera had in New York City in the fall of 1832. And just as cholera had, it remained quiescent until a suﬃciently large crowd of susceptible humans formed, allowing it to strike out again. That happened nearly a century later, in 2009, precipitating the less deadly but still potent “swine ﬂu” pandemic of that year. The virus’s century-long hideout, the virologist Malik Peiris told me when I met him in Hong Kong, was the bodies of pigs.
The pathogen that can spill over, spread, and cause disease is a dangerous creature to be sure, but it's actually only halfway on the multistage journey toward pandemicity. The fate of the other half of its journey is determined by how societies respond. It's true that sometimes pathogens crash like a tidal wave, descending too quickly or harshly or cryptically for societies to understand what to do before it's too late. But in many cases, collective defenses of even the crudest sort—isolating the sick and warning each other of a disease's spread, say—can act like an underwater barrier, breaking the wave of death and destruction. That levels the contest between pathogens and humans.
They could have implemented quarantine. The first one had been enacted by Venice in 1374, when the city's gates and ports were shut for forty days to keep out bubonic plague (thus deriving the method's name, from quaranta giorni or "forty days" in Italian).27 That was a pretty good containment measure for a pathogen like bubonic plague, which manifests itself in visible pathology in less than forty days. After being held in quarantine for that long, people, ships, and their goods were, as one historian put it, "medically harmless."
By the end of the seventeenth century, all of the major Mediterranean ports in Western Europe had built heavily policed fortresses, called lazarettos, to hold ships, passengers, and goods in quarantine. To implement similar measures on land, lines of soldiers were positioned in what the French called cordons sanitaires, or sanitary lines. One of the largest—an army of soldiers in a twenty-mile-wide formation that stretched for twelve hundred miles across the Balkans, with orders to shoot on sight any passersby who failed to submit to quarantine—kept the plague from Turkey out of Europe in the eighteenth century.
Cholera broke out in Naples, Italy, in 1911. It was the eve of a nationwide celebration of the country's fiftieth anniversary, which was expected to draw millions of tourists. The Italian prime minister, more interested in protecting commerce and prestige than the health of his people, made his intention to flout the International Sanitary Convention clear in a telegram sent to his public-health authorities: "The aim is to obtain and maintain the greatest possible secrecy" about the unfolding cholera epidemic in the country, he instructed. "The Government will be merciless with all those who are slow or negligent."
Italian authorities paid newspapers and reporters secret monthly retainers of 50 to 150 lire to avoid mention of the dreaded "c" word; they intercepted and censored telegrams that contained the word "cholera"; they tapped the phones of those who might leak the news, threatening them with imprisonment. They conducted nighttime raids on medical societies, confiscating cholera-education materials. And while they continued to maintain records on each case that occurred, they stamped case reports with the word "secret," in bold type, and the reminder: "N.B. No official bulletin was published." Cholera victims were transported to hospitals in the dead of night, while local papers blared, "There is no cholera, and never was!"
Italy’s cover-up was among the most daring, but it was hardly the last. Political leaders continue to prioritize commerce and national reputation over public health. In 2002, Chinese authorities treated the emergence of SARS as an oﬃcial state secret. A spokesman for the Guangdong health department said that information about the brewing epidemic would be dispensed solely by “the party propaganda unit” and that any physician or journalist who reported on the disease risked persecution, the critic Mike Davis reported. Aside from a few newspaper reports from the city of Foshan—which mentioned only a spate of unexplained respiratory deaths—the international community and public-health authorities around the world had no idea about the outbreak. It was only months later, when a local resident happened to mention what was happening in Guangzhou in a message to an online acquaintance, that the international community learned of the new pathogen’s emergence.
Of the one hundred largest economies in the world, only forty-nine are countries; fifty-one are private corporations. By 2016, the richest 1 percent of people in the world will control more than half of the planet's total wealth. The influence of these private interests dwarfs that of the public institutions that might seek to regulate them. And so when private interests run contrary to those of public health, it's often public health that gets the shaft. A good example of this is in the area of antibiotic consumption. The fact that wanton consumption of antibiotics—using either more or less than the precise amount required to tame an infection—leads to the development of antibiotic-resistant pathogens has long been known. It was first outlined by Alexander Fleming, the scientist who discovered penicillin. "I would like to sound one note of warning," he said, in his speech accepting the Nobel Prize in Physiology or Medicine in 1945. "It is not difficult to make microbes resistant to penicillin in the laboratory by exposing them to concentrations not sufficient to kill them, and the same thing has occasionally happened in the body. The time may come," he went on presciently, "when penicillin can be bought by anyone in the shops. Then there is the danger that the ignorant man may easily underdose himself and by exposing his microbes to non-lethal quantities of the drug make them resistant. Here is a hypothetical illustration. Mr. X has a sore throat. He buys some penicillin and gives himself, not enough to kill the streptococci but enough to educate them to resist penicillin. He then infects his wife. Mrs. X gets pneumonia and is treated with penicillin. As the streptococci are now resistant to penicillin the treatment fails. Mrs. X dies. Who is primarily responsible for Mrs. X's death? Why Mr. X whose negligent use of penicillin changed the nature of the microbe."
While Fleming warned about the perils of underuse of antibiotics, the same risks apply with overuse. But while judicious use of antibiotics served the needs of public health, rampant consumption served those of private interests. In many countries, hospital physicians found it convenient to dose whole wards with antibiotics, indiscriminately. Patients found comfort in consuming antibiotics for colds and flus and other viral infections, for which they are useless. Farmers profited by giving antibiotics to their livestock, which for reasons that are still unclear made them grow faster and helped them thrive in factory farms. (Their provision of low-dose antibiotics to their livestock for “growth promotion” accounts for 80 percent of all antibiotic consumption in the United States.) Cosmetic companies enlarged their markets by packaging antibiotics in their soaps and hand lotions. By 2009, the people and animals of the United States were consuming upward of 35 million pounds of antibiotics annually.64 “Fleming’s warning,” one microbiologist writes, “has fallen on ears deafened by the sound of falling money.”
Antibiotics, had they been well stewarded, experts say, could have eﬀectively treated infections for hundreds of years. Instead, one by one our bacterial pathogens have ﬁgured out how to rout the onslaught of antibiotics to which they’ve been indiscriminately subjected. We now face what some experts call an era of “untreatable infections.”
The burden of drug-resistant pathogens extends beyond the people who will die from infections for which no eﬀective treatment exists. A much larger group of people will suﬀer infections for which only a select few antibiotics will work. They will show up at hospitals and clinics with what seem to be routine infections and be erroneously treated with the wrong antibiotics. Studies suggest that anywhere from 30 to 100 percent of patients with MRSA are initially treated with ineﬀective antibiotics. Delays in eﬀective treatment allow the pathogen to progress until it is too late. A simple urinary tract infection, for example, becomes a much more serious kidney infection. A kidney infection becomes a life-threatening bloodstream infection.
The particular immigrant groups blamed for cholera's spread varied over the decades. In the 1830s and 1840s, it was the Irish. "Being exceedingly dirty in their habits, much addicted to intemperance and crowded together in the worst portions of the city," the New York City board of health noted in 1832, the "low" Irish "suffered the most" from cholera. The Irish "brought the cholera this year," Philip Hone complained in his diary, "and they will always bring wretchedness and want." In 1832, fifty-seven Irish immigrants living in an isolated clearing in the woods of Pennsylvania—they had been hired to clear a path for a new rail line between Philadelphia and Pittsburgh—were quarantined and then secretly massacred, their shacks and personal belongings burned to the ground. "All were intemperate, and ALL ARE DEAD!" local papers gleefully reported.21 Investigators unearthed the workers' smashed and bullet-riddled skulls from a mass grave in 2009. In the 1850s, the wave of violence that followed cholera crashed upon Muslims, in particular, pilgrims on Hajj. Muslim religious stricture requires that all practitioners perform the Hajj pilgrimage to Arafat, about twelve miles east of the Saudi Arabian city of Mecca.
The authoritarian government of Hosni Mubarak ordered a similar slaughter, this one of Egypt's three hundred thousand pigs, during the H1N1 influenza pandemic of 2009. There was no evidence that pigs had played a role in spreading H1N1. The virus originated in pigs, which is why it was initially called "swine" flu, but it was a human pathogen: people caught it from each other. Egypt hadn't suffered even a single case of H1N1 flu at the time. Nevertheless, upon government orders, bulldozers and pickup trucks scooped up scores of swine. Some were killed with knives and clubs. "A large number of the pigs were herded into pits," The Christian Science Monitor reported, "and buried alive." The bloodbath did little to quell H1N1's spread. It did, however, destroy the livelihood of the pigs' owners, the trash collectors called zabaleen of Egypt's embattled Christian minority. In this case, scapegoating in reaction to one pathogen increased people's vulnerability to other ones. The pigs had played an important role in protecting the public's health: the zabaleen used them to consume the organic portion of the household waste they collected door-to-door. In Cairo, their pigs processed 60 percent of the city's trash. Deprived of their pigs, the zabaleen stopped collecting trash altogether. When the government's attempt to replace them failed—the international waste collection companies the government hired expected Egyptians to pack their garbage into bins for periodic pickup, which they didn't like to do—trash accumulated on the streets, threatening Egyptians with filthborne contagions. "On any given day," wrote one visiting reporter, "a given neighborhood becomes a 'no man's land' of garbage." The slaughter of the pigs, said one community leader in Cairo, "was the stupidest thing they ever did … just one more example of poorly informed decision makers."
The psychiatrist Neel Burton, who has written about the psychology of scapegoating, sees it as a form of projection. Powerlessness and complicity, he says, are uncomfortable feelings that people naturally seek to expunge or escape, and one way to do that is to project them onto others. When those others are punished, the old feelings of powerlessness and guilt are transformed into feelings of mastery or even “piety and self-righteous indignation.”
Scapegoating is particularly disruptive during epidemics because it often targets the very groups of people most likely to be able to contain epidemics and alleviate their burden.
Deep mistrust of vaccines and vaccinators has allowed once tamed pathogens to cause outbreaks in the United States and Europe as well. Despite the fact that vaccination played a decisive role in reducing cases of pertussis, measles, and chicken pox in the United States, when the government started requiring that children receive a battery of vaccinations before entering school in the 1980s, resistance to vaccines and mistrust of vaccinators mounted. Pop-music acts such as the Refusers railed against vaccination programs, as did celebrities such as the actors Jenny McCarthy and Jim Carrey. Thousands of websites assailing the risks of vaccination sprang up on the Internet. Vaccine refusals in the United States follow the same contours as they do elsewhere. The existential crisis that seems to fuel the mistrust in this case—roughly, the industrial contamination of nature—is more amorphous, but the vaccines and vaccinators targeted for reprisals are similarly imbued with malevolent power. One of the most popular antivaccine arguments is that the measles-mumps-rubella combination vaccine is endowed with the mysterious power to cause the poorly understood and increasingly common condition of autism. This claim is as exaggerated and conspiratorial as was the claim that doctors killed people with cholera to dissect their bodies in the nineteenth century, or that the polio vaccine is designed to sterilize Muslims. It’s plainly contradicted by the facts. The 1998 research paper that alleged a link between the MMR vaccine and autism has been widely debunked and was withdrawn by the journal that published it. Plus, a 2013 study found that autism can be effectively detected in children at the age of six months, well before any would have been vaccinated against measles, negating any causal link between the two. The claim continues to make the rounds regardless.
But while vaccines neither cause autism nor drive the bottom lines of drug companies, they are the concentrated result of elaborate industrial processes. For people who fear industrial contamination, that’s suﬃcient grounds to reject them. After all, vaccine skeptics who advise families to eschew vaccines don’t object to the concept of immunization, in which bodies are exposed to a weakened pathogen to prophylactically build up immunity to it. The magazine Mothering, for example, which focuses on natural parenting techniques, suggests that in lieu of vaccination against chicken pox, families throw “pox parties,” at which children infected with chicken pox purposely infect others. “Pass a whistle from the infected child to the other children at the party,” the magazine advises. What they object to is not immunization but its delivery via the vaccine, a synthetic product of industrial processes that is injected directly into the body.
As vaccine refusals have spread, the protections vaccines provided against pathogens have begun to fray. Amid a rising tide of antivaccine suspicion, nineteen U.S. states allowed parents to exempt themselves from vaccinating their school-age children for "philosophical" reasons. Fourteen states, including California, Oregon, Maryland, and Pennsylvania, passed laws making it easier for parents to exempt their children from vaccination than to actually vaccinate them. By 2011, more than 5 percent of kindergarteners in public schools in eight states had not been vaccinated. Seven percent of schoolkids in Marin County, one of the wealthiest counties in California, were unvaccinated for philosophical reasons. That's enough to undermine "herd immunity" against pathogens like measles, whereby pathogens are deprived of sufficient numbers of susceptible people to spread. Without herd immunity, pathogens can infect both unvaccinated people and those who can't be vaccinated, like infants.
Measles had been formally declared eliminated from the United States in 2000; by 2011, there’d been over a dozen new outbreaks, including one that began in late 2014 at the Disneyland theme park in California. Within two months, that outbreak had infected 140 people in seven states. (The governor of California eliminated personal and religious belief exemptions for vaccines a few months later.)
Channeling victims’ fury in court is undoubtedly a lot more constructive than acting it out in the streets. But couldn’t the judgment Joseph sought endow scapegoating with the force of law? Depending on who did the adjudicating, that group of “responsible” people could have included health-care workers in Guinea in 2014, gay people in the United States in the 1980s, and Irish immigrants in New York City in the 1830s. Even if those who introduced new pathogens were accurately pinpointed, as in Haiti, it’s unclear how much of the blame they should be forced to shoulder. Epidemics are sparked by social conditions as much as they are by introductions. Whether it’s deforestation and civil war in West Africa, the lack of sanitation and modern infrastructure in Haiti, or the crowding and ﬁlth of nineteenth-century New York City, without the right social conditions, epidemics of cholera and Ebola would have never occurred. Should health-care workers in West Africa, UN soldiers in Haiti, or Irish immigrants in nineteenth-century New York be held responsible for those, too?
The ways in which new pathogens conspire to weaken our social ties and exploit our political divisions are wide-ranging and varied. But there’s still one ﬁnal way we can defang them. It is, perhaps, the most potent one of all. We can develop speciﬁc tools to destroy or arrest them with surgical precision, tools that can be eﬀectively used by any individual with access to them, with no elaborate cooperative eﬀort required. Those tools, of course, are medicines. The right cures make all the ways we spread pathogens among us moot. With the right cures, spillovers, ﬁlth, crowding, political corruption, and social conﬂict fail to spread pathogens. Epidemics and pandemics are stillborn, and their nonevents pass unnoticed. So long as there’s a drugstore on the corner or a doctor willing to write a prescription, individuals can tame pathogens on their own.
And yet for decades, effective cures for cholera eluded them. Their failure was not due to lack of technical capacity. The cure for cholera is almost comically simple. The vibrio does not destroy tissue, like, say, blood-cell-devouring malaria parasites or the lung-destroying tubercle bacilli that cause tuberculosis. It doesn't hijack our cells and turn them against us, like HIV. As deadly as cholera is, its tenure in the body is really more like a visit from an unpleasantly demanding guest than a murderous assailant. What kills is the dehydration the vibrio causes while replicating in the gut. That means that surviving cholera requires solely that we replenish the fluids it sucks dry. The cure for cholera is clean water, plus a smattering of simple electrolytes like salts. This elementary treatment reduces cholera mortality from 50 percent to less than 1 percent. Preventing cholera by separating human waste from drinking-water supplies was similarly well within the reach of nineteenth-century technology. The aqueducts and reservoirs of the ancients could have done it.
Paradigms create expectations, and expectations limit scientists’ perceptions. Psychologists have described two common cognitive snags that occur: “conﬁrmation bias” and “change blindness.” The problem of conﬁrmation bias is that people selectively notice and remember only the subset of evidence that supports their expectations. They see what they expect to see. They also fail to notice anomalies that contradict their expectations, which is “change blindness.” In one study of change blindness, experimenters purposely violated people’s expectations by covertly switching one interviewer with a diﬀerent person while the person being interviewed was momentarily distracted. Subjects assimilated the perceptual violation to such an extent that they didn’t consciously register the change. It was as if it never happened at all.
The history of medicine is replete with examples of this phenomenon. When observations and treatments were unexpected or violated reigning paradigms—and no alternative explanation could be convincingly articulated—they were thrown out on theoretical grounds alone, no matter how well supported they were by evidence. In the seventeenth century, for example, a Dutch draper named Anton van Leeuwenhoek had handcrafted a microscope and discovered bacteria. He examined rainwater, lake water, canal water, and his own feces (among other things), and everywhere he looked he found microorganisms, which he called “animalcules.” Further inquiries could have revealed the role these microbes played in human disease, but instead the study of the body through microscopy went underground for two centuries. The idea that tiny things shaped health and the body in some mechanical fashion violated the Hippocratic paradigm of health as a holistic enterprise. The seventeenth-century physician Thomas Sydenham, known as the “English Hippocrates,” dismissed Leeuwenhoek’s microscopic observations as irrelevant. His student, the doctor and philosopher John Locke, wrote that attempting to learn about disease by examining the body through microscopy was like trying to tell time by peering into the interior of a clock.
This is just what happened to cholera’s cures in the nineteenth century. The scientists who discovered cholera’s cures were not as fully indoctrinated in the paradigms of Hippocratic medicine as the elite physicians atop the medical establishment.
In one of the most convincing demonstrations of the therapy’s eﬀectiveness, in 1832 Stevens administered salty ﬂuids to more than two hundred cholera suﬀerers at a London prison and lost less than 4 percent of his patients.
But the logic of the cure—replenishing the ﬂuids lost through vomiting and diarrhea—violated Hippocratic paradigms. According to Hippocratic principles, epidemic diseases like cholera spread through foul-smelling gases called “miasmas” that poisoned those who inhaled them. That’s why cholera patients experienced dramatic vomiting and diarrhea: their bodies were attempting to get rid of the miasmatic poison.
Nineteenth-century treatments for cholera increased its death toll from 50 to 70 percent.22 Since they considered cholera patients’ vomiting and diarrhea therapeutic, doctors treated patients with compounds that intensiﬁed the very symptoms that were killing them. They administered the toxic mercury compound mercurous chloride, or “calomel,” which induced vomiting and diarrhea.
Physicians literally poisoned their patients with it, considering treatment complete only when the patient salivated excessively, his mouth turned brown, and his breath started to smell metallic—all signs that physicians today would recognize as mercury toxicity.
That new paradigm arrived in the late nineteenth century. “Germ theory” posited the idea that microbes, not miasmas, cause contagions. The theory rested on a spate of discoveries. Microscopy had ﬁnally come back into fashion, allowing scientists to revisit the microbial world ﬁrst spied by Leeuwenhoek two centuries earlier. And then, by conducting experiments on animals, they determined the speciﬁc role these microbes played in animal diseases. The French chemist Louis Pasteur discovered the microbial culprit behind a disease of silkworms in 1870; the German microbiologist Robert Koch discovered that Bacillus anthracis caused anthrax in 1876. These ﬁndings were still incendiary to miasmatists, but they were fundamentally diﬀerent from the ones they’d rejected in the past. They didn’t arrive on the scene sporadically, seemingly out of nowhere, but with an increasingly steady regularity. And they were encased in a powerful explanatory framework. Germ theory, along with accounting for the nature of contagions, provided a radically new way to think about health and illness more generally. Rather than being the result of complex disequilibria involving amorphous external and internal factors, poor health was now discernible at the microscopic level.
The stand-off between miasmatism and germ theory continued for several more years. Then an 1892 outbreak of cholera in Hamburg sealed miasmatism's fate. According to miasmatic theory, the city's western suburb of Altona, which like Hamburg lay along the banks of the Elbe River, should have fallen prey to the miasmas that caused cholera in Hamburg as well. And yet it didn't. It was impossible for experts to deny the reason why: Altona filtered its drinking water, while Hamburg did not. Strikingly, none of the 345 residents of an apartment block called Hamburger Hof—within the political boundaries of Hamburg but receiving water from Altona's filtered supply—fell ill at all.63 With this stark vindication of Koch's claims (and the long-dead Snow's), miasmatism's last advocates were forced to surrender. Hippocratic medicine, after a two-thousand-year-long reign, had been knocked off its throne. In 1901, Pettenkofer shot himself in the head and died. A few years later, Koch received the Nobel Prize in Physiology or Medicine. The germ theory revolution was complete.
Oral rehydration therapy, by simply and quickly curing cholera and other diarrheal diseases, is considered one of the most important medical advances of the twentieth century.
But in addition to fueling the population growth, urbanization, and mobility that contribute to pandemics, the global bonﬁre of fossil fuels will heighten the likelihood of pandemics on its own, in a way that is likely to be even more consequential than all of its contributing factors put together. The voraciousness and speed with which we consumed fossil fuels—one hundred thousand times faster than they could form underground—assured that. It was like eating a lifetime’s supply of food at a single meal. The energy in fossil fuels, which derived from their carbon, had accumulated underground for millions of years. By digging it up and burning it, we released all of that ancient carbon into the atmosphere in a matter of decades, an outburst that would alter the climate and all the creatures that lived within its conﬁnes for generations.
By the early twenty-first century, the concentration of carbon dioxide in the atmosphere had increased by more than 40 percent compared to preindustrial levels. Hanging in the atmosphere like a blanket, the excess carbon steadily warmed the air below, gently heating the surface waters of the ocean. Every decade, the temperature of the seas' surface waters rose by a little over one-tenth of a degree. Newly warmed waters, sinking to the depths and splashing into planetwide flows, altered the ocean's constitution in subtle but transformative ways, like a shot of vodka in a glass of tomato juice. Currents, fueled by the temperature gradient between cool waters moving over warm ones, were transformed. Rainfall patterns around the globe shifted, as the growing cloud of vapor that wafted above the warmer seas swelled by 5 percent. The warming waters, expanding as they heated up, lapped ever higher on coastlines and beaches, inundating freshwater habitats with salt water. By 2012, in some parts of the world, the sea had risen eight inches above 1960 levels.
Fungi can be potent pathogens. Unlike viruses, which require living cells to survive, fungi can persist even after all their hosts are dead because they feed on dead and decayed organic material. They also can survive independently in the environment in the form of highly durable spores.
And yet while pathogenic bacteria and viruses regularly plague humans, aside from the odd yeast infection or case of athlete’s foot, we suﬀer from very few fungal pathogens. This may be a function of our warm-bloodedness, Casadevall says. Unlike the reptiles, plants, and insects that regularly fall prey to fungal pathogens, mammals keep their blood at a scorching temperature—more than 35° above Earth’s average ambient temperature of 61° Fahrenheit—regardless of the weather around us. Most fungi, adapted to environmental temperatures, can’t take the heat of our blood and perish in our oven-like bodies. Heat is such an eﬀective antidote for infection that reptiles try to do it, too, producing “artiﬁcial fevers” by sunning themselves and thus raising their internal temperatures when suﬀering infections. Similarly, scientists have shown that warming frogs’ bodies to 98° Fahrenheit cures them of chytrid fungus infection.
The problem is that our warm blood repels fungal pathogens only because its temperature diverges from the ambient temperature around us, to which fungi are accustomed. If fungal pathogens evolved to tolerate higher temperatures, that gradient would disappear. It is technically possible: in lab experiments, fungi that usually perish at temperatures above 82° Fahrenheit can be bred to tolerate temperatures up to 98°. Climate change could produce the same result, on a planetwide scale, slowly but inexorably training fungi to tolerate increasing temperatures, including, at some point, the heat of our blood.
There’s no straightforward record of the ancient pandemics that plagued us. They can be discerned only obliquely, by the contours of the long shadows they’ve cast. But according to evolutionary theory and a growing body of evidence from genetics and other ﬁelds, pandemics and the pathogens that cause them have shaped fundamental aspects of what it means to be human, from the way we reproduce to the way we die. They shaped the diversity of our cultures, the outcomes of our wars, and lasting ideas about beauty, not to mention our bodies themselves and their vulnerability to the pathogens of today. Their powerful and ancient inﬂuence informs the speciﬁc ways modern life provokes pandemics the way the tides shape the currents.
Disease is intrinsic to the fundamental relationship between microbes and their hosts. All it takes to conﬁrm that is a brief tour through the history of microbial life and a peek inside our own bodies. Humans dominate the planet in modern times, but in the past, it was the microbes that ruled. By the time our earliest ancestors, the ﬁrst multicellular organisms, clambered out of the sea around 700 million years ago, microbes had been colonizing the planet for nearly 3 billion years. They had radiated into every available habitat. They lived in the sea, in the soil, and deep inside Earth’s crust. They could withstand a wide range of conditions, from temperatures as low as 14°F to as high as 230°F, feeding on everything from sunlight to methane. Their hardiness allowed them to live in the most extreme and remote places. Microbes colonized the pores inside rocks, ice cores, volcanoes, and the ocean’s depths. They thrived in even the coldest and saltiest seas. For the microbes, our bodies were simply another niche to ﬁll, and as soon as we formed, they radiated into the new habitats our bodies provided. Microbes colonized our skin and the lining of our guts. They incorporated their genes into ours. Our bodies were soon home to 100 trillion microbial cells, more than ten times the number of human cells; one-third of our genomes were spiked with genes that originated in bacteria.
Did our ancestors willingly play host to the intruding microbes that colonized their interiors? Possibly. But probably not. For like the outsized military of an insecure state, we developed a swollen arsenal of weaponry to surveil, police, and destroy microbes. We shed layers of skin to rid ourselves of the microbes that would colonize its surface. We constantly blinked our eyelids to wash microbes off our eyeballs. We produced a bacteria-killing brew of hydrochloric acid and mucus in our stomachs to repel microbes that might attempt to colonize their interiors. Every cell in our body developed sophisticated methods of protecting itself from microbial invasion, and the capacity to kill itself if it failed. Specialized cells—white blood cells—coursed through our bodies with no other role but to detect, attack, and destroy intruding microbes. In the time it took you just to read these lines, a flood of them washed through your entire body, surveilling for signs of microbial invasion.
It was Hamilton who oﬀered a startling explanation that solved the mystery: sex evolved, he said, because of pathogens. Sexual reproduction requires a profound genetic sacriﬁce, he noted, but the payback is that the oﬀspring of sexual reproducers are genetically distinct from their parents. That was no big advantage in surviving hostile weather or predators, Hamilton observed, but it was a huge advantage in surviving pathogens. That’s because pathogens, unlike the weather or predators, reﬁne their attacks upon us. Imagine a pathogen that ﬁrst strikes when you’re a baby. As you develop, the pathogen goes through hundreds of thousands of generations. By the time you’re an adult (if you’ve survived the ravages of the pathogen) and are ready to reproduce, the pathogen has become far better at attacking you than you are at defending yourself against it. While your genetic makeup has stayed the same, the pathogen’s has evolved. But individuals who clone themselves provide pathogens exact replicas of the target they’ve already gotten so good at stalking. They endow their oﬀspring with the worst possible chances of surviving the pathogen’s appetites. Much better, in that case, Hamilton theorized, to produce oﬀspring that are genetically distinct from you, even if that means forsaking half of your own genes.
By forcing the evolution of sex, pathogens may have forced an additional adaptation: death. The notion that death is some optional thing that can “evolve” may seem counterintuitive. The idea that deterioration and death are inevitable is central to how most of us think about life. We think of the body as a kind of machine that inevitably wears out over time. Individual parts fail and the damage accumulates. Finally, after some critical threshold is passed, the entire machine stops working. Thus we say that nobody can “cheat death.” We even equate the word “aging,” which is simply the passing of time, with diminishment. (What we really mean is what biologists call “senescence,” a gradual deterioration of function that proceeds with the passing of time and ultimately leads to death.) But senescence and death are not inevitable facets of life. There are examples of immortality all around us. Microbes live forever. Trees don’t deteriorate with time. On the contrary, as they age they get stronger and more fertile. For microbes and many plants, immortality is the rule, not the exception. There are even some animals that don’t age: clams and lobsters, for example. Death, for them, is caused solely by external factors, not internal ones.
Scientists have found that far from being an inherently inevitable process, senescence is controlled by particular genes, variously called “suicide genes” or “death genes.” Their job is to progressively turn oﬀ the processes of self-repair that keep our bodies in good condition. They’re like a host switching oﬀ the lights at the end of a party. It happens at a certain time, no matter what.
The adaptive theory of aging posits that this was the context in which suicide genes evolved. The scenario would have gone something like this. Imagine two competing groups of sexually reproducing organisms. In one group, all are immortal. In the other, suicide genes have emerged and so some individuals slowly age and die. The ﬁrst group is like a dense forest; the second is like a regularly culled one. When the pandemic arrives, the former group will fare as poorly as the dense forest does in a forest ﬁre. The latter group will be more likely to survive, allowing suicide genes to spread. Suicide genes obviously don’t protect us entirely from the risk of famine and pandemic. But because individuals in our groups regularly age and die, “a little bit at a time,” as the antiaging researcher Joshua Mitteldorf puts it, the risk that those events will cause an extinction is much lower. We age and die, Mitteldorf contends, as a sacriﬁce to pandemics.
What does this mean for our epidemic past and future? According to classic natural selection theory, as articulated in 1859 by Charles Darwin and as taught in high school biology classes around the world, pathogens and their victims adapt to each other over time, evolving toward a less fractious relationship. The Red Queen Hypothesis says otherwise. For every adaptation on the part of one species, it holds, there’s a counteradaptation on the part of its rival. What that means is that pathogens and their victims don’t evolve toward greater harmony: they evolve increasingly sophisticated attacks on each other. They’re like spouses in a bad marriage. They run “very fast and for a long time,” but they don’t “get to somewhere else.” And that leads to the same conclusion as arguments about the nature of microbes and the immune system and the evolution of sex and death. That is, that the relationship between pathogens and their victims does not evolve toward greater accommodation. On the contrary, it’s a continuous battle in which each side evolves increasingly more sophisticated ways to crack the other’s defenses. This suggests that epidemics are not necessarily contingent on speciﬁc historic conditions at all. Even in the absence of canals and planes and slums and factory farms, pathogens and their hosts are locked in an endless cycle of epidemics. Far from being historical anomalies, epidemics are a natural feature of life in a microbial world.
Given the outsize role pathogens and pandemics have played in our evolution, it stands to reason that they’ve probably helped shape our behavior, too. According to psychologists, historians, and anthropologists, they have. The evolutionary psychologists Corey L. Fincher and Randy Thornhill theorize that culture itself—the diﬀerentiation of populations into behaviorally and geographically distinct groups—originated as a behavioral adaptation to an epidemic-ﬁlled past.
The theory starts with the idea of “immune behaviors.” These are social and individual practices that help people elude pathogens, such as avoiding certain landscape features like wetlands or swamps, or practicing certain culinary rituals, like adding spices with antibacterial properties to foods. These behaviors are not necessarily purposely designed to protect people from pathogens; people may not have even been aware that they helped do so. But immune behaviors, once developed, stick around because the people who indulge in them are less vulnerable to infectious diseases. The behaviors, passed down through the generations, become entrenched.
Suggestively, in places where there are more pathogens, there are more ethnic groups (among traditional peoples), and vice versa. Of all the various factors that could potentially predict the level of ethnic diversity in a given region, pathogen diversity is one of the strongest.
The diﬀerentiation of cultural groups by pathogens also dictated the outcome of confrontations between them. Groups of people have been able to vanquish other groups by wielding what McNeill calls an “immunological advantage.” They simply introduce pathogens to which they’ve adapted but against which their rivals have no immunity. It happened in West Africa three thousand years ago, when Bantu-speaking farmers who’d adapted to a deadly form of malaria penetrated the interior of the continent, bringing the pathogen with them. They rapidly defeated the hundreds of other linguistic groups believed to have populated the region in what historians call the “Bantu expansion.” Immunological advantages allowed the people of ancient Rome to repel invading armies from northern Europe, who perished from the Roman fevers to which locals had adapted. The protection aﬀorded by Rome’s immunological advantage rivaled that of a standing army. “When unable to defend herself by the sword,” the poet Godfrey of Viterbo noted in 1167, “Rome could defend herself by means of the fever.” Most famously, Europeans conquered the Americas starting in the ﬁfteenth century by decimating native peoples with the Old World pathogens to which they had no immunity. Smallpox introduced by Spanish explorers killed the Incas in Peru and nearly half the Aztecs in Mexico. The disease spread throughout the New World, destroying native populations ahead of European settlement. Meanwhile, the people of tropical Africa repeatedly repelled the forays of European colonizers, who were felled by the malaria and yellow fever to which locals had adapted. (One unhappy result was the development of the brutal Atlantic triangle trade of the sixteenth to nineteenth centuries. Having failed to establish colonies in sub-Saharan Africa, Europeans carried captives from Africa across the ocean to the Americas to serve as slave labor on their sugar plantations.)
These and other confrontations, decided by the immunological distinctions among us, continue to reverberate through modern society today.
All of which is to say that our vaunted sense of individuality is an illusion. Animals like us, as the evolutionary biologist Nicole King has said, have never been single organisms. For better or worse, we’re “host-microbe ecosystems.” Microbes shape us from without and also from within. That is to say, pathogens and pandemics are not solely the products of modern life. They’re part of our biological heritage. The predicament we ﬁnd ourselves in today, on the threshold of a new pandemic, is hardly exceptional. It’s of a piece with hundreds of millions of years of evolution.
Every pathogen is diﬀerent, and its reception is shaped by its speciﬁc qualities along with the historical context in which it debuts. Most people in North America and Europe knew, however dimly, that Ebola originated in a distant, exotic place (that is, in a village near the Ebola River, in the Democratic Republic of Congo). That alone may have made it seem inherently more dangerous to Westerners, compared to something named after a leafy Connecticut suburb, like Lyme disease, simply because its birthplace is less familiar. It’s also highly virulent. Ebola kills about half its victims, on average. In contrast, Lyme is rarely deadly and dengue hemorrhagic fever kills about 10 percent of its victims.
If virulence were the main determinant of the fear response, the most terrifying disease would have to be rabies, which kills every one of its victims in a matter of days. But culturally speaking it’s more punch line than nightmare. On an episode of the critically acclaimed comedy The Oﬃce, for example, the show’s most absurdly out-of-touch character organizes a “fun run” for rabies awareness (titled “Michael Scott’s Dunder Mifflin Scranton Meredith Palmer Memorial Celebrity Rabies Awareness Pro-Am Fun Run Race for the Cure”). The other characters are indifferent, participating by taking taxis and drinking beer and shopping along the way. The joke is that the race organizer ludicrously thinks rabies is a terrifying disease, but the reasonable characters know that it is not.
So why do some pathogens provoke yawns while others trigger panic? It may have to do with the way they disrupt or conform to the popular conception of pathogens. This conception is clear from the way we talk about disease. The reigning metaphor, in medicine as in the culture at large, is war. We “attack” illness, we “wage war” on disease, we “arm” ourselves with medicines. “Pandemic disease and war,” as The Economist put it, “are so similar.” But the war we’re waging is not against enemies that are elusive or forbidding; on the contrary, they’re cast as easy to conquer. Complex, resilient pathogens such as malaria are seen as easily foiled. All it takes is a few dollars. (As one charitable organization put it, “For just $10, you can … save a life.”) Victory in the war against pathogens is ours, Microsoft cofounder Bill Gates argued in the wake of the 2014 Ebola epidemic. All we have to do is try a little bit harder.
During the months of Ebolanoia, there was no vaccine to prevent Ebola, no treatment to cure it. It seemed that even sophisticated Western palliative care—round-the-clock nursing, ventilators, and the like—would not make much diﬀerence in the course of the infection. Since Ebola’s untamable nature was at the root of the panic, it didn’t matter that Ebola is easily and simply avoided. The very fact of its existence disturbed. Ebola was the red six of spades, a painted clown in a darkened cellar: unexpected, unfathomable, terrifying. This also explains why Lyme, dengue, and rabies, despite being more threatening and burdensome, are considered less scary. Our chemicals can, in theory, vanquish all three. As my students told me, getting a tick bite in Lyme disease country is no big deal: you just pop a course of the antibiotic doxycycline. Both Lyme and dengue are carried by insects, which are vulnerable to a range of lethal and widely available insecticides.