
Institute of Medicine (US) Forum on Microbial Threats. Ethical and Legal Considerations in Mitigating Pandemic Disease: Workshop Summary. Washington (DC): National Academies Press (US); 2007.


1 Learning from Pandemics Past


As David Heymann, Executive Director for Communicable Diseases at the World Health Organization (WHO), notes in the following essay, the past provides a prologue for any discussion of emerging infectious diseases, whether that discussion concerns the biological origins of a potential pandemic or its social repercussions. Thus, like the workshop, these chapters begin with a look backward. Here that look is focused on ethical issues raised in both the influenza pandemic of 1918–1920 and in more recent outbreaks of emerging infectious diseases as well as on the profound influence that these ethical issues exert on pandemic planning and on international public health law.

Reflecting on key outbreaks of emerging infectious disease over the past three decades, Heymann examines what these episodes reveal about the roles and responsibilities of health workers in a pandemic, the consequences of infectious disease to global trade, the challenge of providing equitable access to health-care resources, and the balance of individual rights versus public welfare. He describes how increasing recognition of the threat posed by emerging infectious diseases led to greater international cooperation in reporting and responding to disease outbreaks, as illustrated during the first outbreak of severe acute respiratory syndrome (SARS) and as embodied by recent revisions to the International Health Regulations.

The chapter’s second paper, by medical historian Howard Markel, organizes common elements in the social experience of pandemic disease into narrative frameworks, thereby providing additional insights into legal and ethical issues in pandemic mitigation. He also describes a more specific application of historical data from the influenza pandemic of 1918–1920: evaluating the effectiveness of nonpharmaceutical interventions to reduce the transmission and impact of infectious disease. While Markel’s research indicates that such efforts may have contained influenza in some U.S. communities, he acknowledges that implementing similar strategies in the future would be far from straightforward, given the increased mobility of populations, as well as the influence of civil liberties on public health policy.

Heymann’s and Markel’s workshop presentations were complemented by remarks from D.A. Henderson of the University of Pittsburgh Medical Center, leader of the quarter-century campaign by the World Health Organization to eradicate smallpox (Henderson, 1999). He noted that several factors made smallpox a uniquely favorable target for elimination: the virus infects only humans; it is not infectious until a rash appears; it spreads primarily through face-to-face contact; those who recover from the disease have permanent immunity; and its vaccine, which provides long-lasting protection, does not require refrigeration. Beyond these advantages, Henderson attributed the success of the smallpox eradication campaign—the first and only successful attempt to eliminate a human infectious disease from the planet—to its judicious use of available resources in host countries, its broad goals that could be achieved in multiple ways, and its support of a wide range of clinical, epidemiological, and operational research.

Henderson also explored the ethical implications of the smallpox campaign’s central strategy, the vaccination of 80 percent of the world’s population—which, he reported, proved a far more viable means of disease control than either quarantine or isolation. He noted that advocates of disease eradication consider immunization to be an important element of distributive justice, since the benefits of vaccination extend to all members of a community; however, eradication also raises the possibility that individual rights will be compromised if mandatory vaccination becomes necessary.

Acknowledging that top-down disease eradication programs often compete for resources with bottom-up basic health initiatives, Henderson argued that providing community-wide smallpox vaccination did serve the needs of basic health services—particularly since it provided a model for vaccinating against other important diseases. Indeed, the eradication of smallpox gave birth to an infectious-disease-management paradigm for immunization programs that, by 1990, had achieved its goal of vaccinating 80 percent of the world’s children against six major diseases: tuberculosis, diphtheria, pertussis, tetanus, measles, and polio.


David Heymann, M.D.1

World Health Organization

Certainly the idea that what’s past is prologue applies to any discussion of emerging infectious diseases, whether that discussion focuses on the biological origins of infectious outbreaks, or, as is the case in this workshop, on their social repercussions. In this brief summary, four key ethical issues related to emerging and reemerging infectious diseases are highlighted: the roles and responsibilities of health workers; the consequences of infectious disease to commerce among nations; the challenge of providing equitable access to health-care resources; and the balancing of individual rights versus public welfare. These four issues were very important, for example, during the first outbreak of severe acute respiratory syndrome (SARS) in 2002–2003—an event that ushered in a new era of international public health law. And they can also be expected to have relevance in any future emerging infectious disease outbreak.

Health Workers on the Front Line

The mission hospital in Yambuku, a very small community in the rainforest of the northern Democratic Republic of Congo, came to the public’s attention in September 1976, when four Belgian sisters working there as nurses died of a hemorrhagic fever. Three of them died in Yambuku, while one was evacuated to Kinshasa for treatment and died there. A specimen of this fourth sister’s blood was sent to the Centers for Disease Control and Prevention (CDC) in Atlanta, and it was this specimen that led to the original identification of the Ebola virus as the causative agent of this hemorrhagic fever.

One of the important facts about this outbreak is that it occurred in a hospital. It began in the maternity ward with a patient who had been at the hospital’s outpatient clinic three days earlier. At that same time, another patient with a fever had been treated with an injection for what was thought to be malaria; afterwards the syringe used on that patient was rinsed with water and reused on a pregnant woman who was at the outpatient clinic on an antenatal visit. That syringe was likely the vehicle that transferred the then-unrecognized Ebola virus from one patient to another in the outpatient department and from there on to the maternity ward.

Another important feature of this outbreak is that it predominantly affected health workers and their contacts. In addition to the four Belgian sisters, the Ebola virus infected 13 African health workers plus many of their family members, most of whom died. The same situation occurred in 1995 in an Ebola outbreak in Kikwit in the Democratic Republic of Congo. A patient admitted in early March infected two hospital staff members, a laboratory worker and a nurse, who in turn passed the infection on to family members. Later, insufficient infection control practices during a surgical procedure on one of the initial cases led to other health-care workers becoming infected.

Outbreaks that spread to the community through health workers are not limited to developing countries, however. In 1978, for example, a medical photographer at a research institution in Birmingham, England, became infected with the smallpox virus and, before dying, transmitted it to her parents. Health workers were also disproportionately affected in the 1957 H2N2 influenza pandemic, in which 52 percent of unvaccinated health workers in New York City and 32 percent of unvaccinated health workers in Chicago became infected. The outbreak of SARS in 2003, and the risks it posed to health-care workers, will be discussed in detail later in this article.

The lesson is clear: Health workers and caregivers are inevitably on the front line in a pandemic. While they have an ethical obligation to provide safe care, they do so with the knowledge that they bear a high personal risk of infection.

Issues Between Governments: Infectious Disease and Commerce

Humans have long transmitted diseases over great distances. As shown in Figure 1-1, historians have traced the paths of three ancient diseases that, over the course of decades, spread across several continents. Some of these diseases are thought to have originated in Africa, others in Asia.

FIGURE 1-1. The spread of epidemics. SOURCE: Heymann (2006).

Today, infections emerge, reemerge, and spread around the world with such frequency that it is difficult to keep a list of them up to date. In 2000, for example, athletes participating in an international triathlon held in Malaysia contracted leptospirosis and returned to their home countries during the incubation period. While this disease is not transmitted from person to person, its presence in the athletes did create a diagnostic challenge for health-care workers around the world. More to the point, the case illustrates the potential for transmission in a world where international travel is both rapid and common. Figure 1-2 illustrates how polio spread from northern Nigeria after immunization activities were halted there in 2003. Wild type 1 poliovirus, endemic in that area, spread rapidly to neighboring countries, and thereafter—through Saudi Arabia and Yemen—as far away as Indonesia.

FIGURE 1-2. International spread of polio from Nigeria in 2003. SOURCE: Heymann (2006).

By the fourteenth century, governments had clearly recognized the capacity for the international spread of disease and had legislated preventive measures, such as the establishment of quarantine in Venice. In order to keep plague out, ships arriving in that city-state were not permitted to dock for 40 days. Table 1-1 briefly traces the history of surveillance and response to global disease from this quarantine to the 1969 adoption of the International Health Regulations (IHR).2 WHO developed these regulations, along with guides for ship sanitation and for hygiene and sanitation in aviation, as a way of minimizing the international spread of disease while interfering as little as possible in world trade, transportation, and travel.

TABLE 1-1. From Quarantine to International Health Regulations: A Framework for Global Health Surveillance and Response.

The IHR requires that WHO be notified whenever cholera, plague, or yellow fever occur, but given today’s vast number of global microbial threats, the regulations are clearly outdated. The IHR also provides guidance to ports, airports, and frontier posts about preventing the entry of infected travelers as well as preventing the proliferation or entry of disease vectors, such as mosquitoes and rats. The regulations specify the maximum precautionary measures that countries may adopt in order to protect themselves from the three reportable diseases as well as the measures that they should undertake to deal with infectious diseases in general. Reports of cholera, plague, or yellow fever received by WHO are published in the Weekly Epidemiological Record.

In 2005, a substantial revision and modernization of the IHR was adopted. The revision addresses a long-standing problem: that countries often do not report the presence of infectious diseases within their borders because they fear the economic consequences of doing so. Trade sanctions resulting from infectious disease are often more severe than necessary, as happened, for example, following the discovery that people had contracted variant Creutzfeldt-Jakob disease by eating beef from cattle in the United Kingdom (UK). These cattle had been infected with prions that caused bovine spongiform encephalopathy (BSE). Many countries reacted by banning imports from the UK, even after the UK had taken measures that probably rendered its products more secure from BSE than those of many of these same countries. The result was that the UK lost billions of dollars in trade.

The lesson, then, is that the international spread of disease—or the threat of its spread—reduces commerce with affected areas. Governments must, therefore, attempt to balance two competing goals: to prevent infectious disease from crossing their borders while simultaneously minimizing the economic impacts of disease-related restrictions on travel and trade.

Securing Equitable Access to Health-Care Resources

Some epidemics recur year after year because the affected populations do not have access to the appropriate vaccines and drugs. This was once the case with smallpox, and it is currently true of meningitis. Every dry season in Africa, meningitis causes large epidemics with high fatality rates in a belt of countries stretching from Senegal in the west to Ethiopia in the east. In 1996, during the largest recent outbreak, 250,000 people were infected and 25,000 died. Many of these deaths occurred because vaccine did not reach affected communities fast enough.

In response, a collaboration established in the late 1990s between Doctors Without Borders, the International Federation of Red Cross and Red Crescent Societies, the United Nations Children’s Fund (UNICEF), and WHO attempted to address this problem by pre-purchasing and stockpiling vaccine for distribution to countries that reach a critical threshold of meningitis cases. In addition, the Gates Foundation has provided support to a partnership between the Program for Appropriate Technology in Health (PATH) and WHO to develop an affordable conjugate meningitis vaccine that will be incorporated into routine immunization programs in Africa. Hopes for success are high, as a similar international partnership dealt with smallpox in much the same way, and that disease is now relegated to the history books.

Polio has presented a similar challenge. In 1988, polio was reported in 125 countries that lacked adequate access to the polio vaccine; by 2005, only four countries had not yet interrupted transmission of the virus. (Because the disease has spread internationally, however, seven countries are currently experiencing polio outbreaks.) Thanks to a partnership of Rotary International, the Centers for Disease Control and Prevention (CDC), UNICEF, WHO, and a group of international financial partners, there is now equitable access to polio vaccine for children throughout the world.

In the event of an influenza pandemic, however, access to vaccine will be extremely limited, particularly in the developing world. Global influenza vaccine manufacturing capacity is limited to approximately 300 million doses of seasonal influenza vaccine per year, while the global population is 6.6 billion. These doses are produced and distributed each year, mainly within industrialized countries, in formulations that must track slight changes in this constantly mutating virus. This shortfall, the difference between a capacity of 300 million doses and a population of 6.6 billion, presents a challenge that can only be met through global preparation and action.

Public Health Measures: Balancing Individual Rights and the Common Good

During the smallpox eradication campaign, vaccines were offered to targeted populations using a ring vaccination strategy: vaccinating all households around that of the infected person and vaccinating any contacts that could be traced. In some cases, people were coerced to accept vaccination in the interest of the common good. Today, travelers through Asian airports during the influenza season receive mandatory thermal scans as they move through immigration. Passengers with fevers are taken aside, examined and, at times, prevented from traveling. These are only two of the many instances in which individual rights have been sacrificed in the interest of protecting the public from infectious diseases. Such choices represent the most common, yet most vexing, challenges in addressing microbial threats.

SARS: Revisiting the Past, Ushering in a “New World Order”

At a meeting in 1995 to establish an emerging infections program at the WHO, a panel of expert advisers decided that an updated version of the IHR could provide a valuable global framework for alert and response as well as for global communication and collaboration. WHO had previously collected information pertaining to the IHR solely from national governments, but the decision was made to risk using—and acting upon—information from existing regional and global networks as well. These included the Global Emerging Infections System (GEIS) of the U.S. Department of Defense; the Global Public Health Intelligence Network (GPHIN), which is developed and managed by the Public Health Agency of Canada; the WHO global laboratory network for influenza; and a broad array of region-specific surveillance networks, such as those sponsored by the Asia-Pacific Economic Cooperation (APEC) and the Association of Southeast Asian Nations (ASEAN). All of these were linked to construct a “network of networks” which was named the Global Outbreak Alert and Response Network (GOARN).

It was GOARN (with information derived from GPHIN and GEIS) that on November 16, 2002, reported to WHO that an outbreak of respiratory illness had occurred in Guangdong Province, China. WHO, as is its practice, went to the Chinese government in confidence. The Chinese government, which had been investigating the outbreak, found isolates of influenza B in 31 persons in the affected area. The findings were confirmed by an influenza laboratory in the WHO network, and the Chinese government decided that the outbreak was due to normal, seasonal influenza B activity. On that occasion, the alert system worked well.

Illness Among Health Workers

On February 11, 2003, GPHIN registered rumors about an outbreak of atypical pneumonia in Guangdong Province among health workers. On February 14, the Chinese government reported that 305 such cases had occurred, including five that resulted in death, but it described the outbreak as “under control.” WHO remained very concerned, however, in part because the 1957 and 1968 influenza pandemics are thought to have originated in southern China and in part because the outbreak had included a large number of health workers, which suggested a possible amplification of transmission in the hospital setting. The WHO network of influenza laboratories, which looks for novel influenza viruses that might have pandemic potential, was notified of this outbreak, as were the WHO offices in countries throughout the world.

On February 19, 2003, the WHO Global Influenza Surveillance Network reported that a 33-year-old Hong Kong man and his nine-year-old son had contracted influenza A H5N1—the first time this avian virus had been detected in humans in Hong Kong since its initial appearance in 1997. The father and son had traveled through Guangdong Province to Fujian Province—where the family’s 8-year-old daughter had developed a severe respiratory illness, died, and been buried—and had then returned to Hong Kong. When viewed together, these events created great concern that the Guangdong outbreak might represent the onset of an influenza pandemic.

A pandemic was indeed in its early stages, but not of influenza. Instead, a previously unknown coronavirus began to spread internationally in February 2003, when a doctor who had treated patients in Guangdong Province traveled to a Hong Kong hotel. There, during a single day, he somehow transmitted the virus to other hotel guests who afterwards traveled to Canada, Singapore, and Vietnam, and to one who later entered a hospital in Hong Kong. That index case and the secondary cases resulted in the infection of 219 health workers. When the chain of infection was traced backward, it was discovered that the original outbreak in China had proceeded sporadically until December 2002, when the first known hospital worker was infected (see Figure 1-3). The disease spread within the hospital, and hospital workers began to amplify transmission of the virus by spreading it to their family members.

FIGURE 1-3. SARS epidemic curve, China, 2002–2003. SOURCE: Xu et al. (2004) and Heymann (2006).

Like the Ebola outbreaks described earlier, SARS transmission was amplified and spread through the infection of health workers. And it was not only health workers who treated SARS patients who were at risk: Dr. Carlo Urbani, the WHO staff member who investigated the first SARS case in Vietnam, himself became infected and died from the disease in March 2003. Since the SARS pandemic was contained, several minor outbreaks have occurred among researchers who were exposed to the virus in laboratory accidents.

Global Alert and Containment

On February 26, 2003, the WHO office in Hanoi reported the case of a 48-year-old businessman with high fever, atypical pneumonia, and respiratory failure who had recently traveled to China and Hong Kong. The seriously ill patient was placed on a respirator and transferred back to Hong Kong. By early March, 77 health-care workers in Hong Kong and 7 in Vietnam were reported to have atypical pneumonia, and it was clear from virological studies that the cause was not influenza. Based on this information, WHO issued its first global alert on March 12: a moderate announcement informing governments, ministries of health, and journalists that a new and highly virulent atypical pneumonia of unknown cause was occurring in Vietnam and Hong Kong.

By March 14, WHO had received reports from Canada and Singapore of persons fitting the case definition of the new atypical pneumonia. The next day, Dr. Michael Ryan, the WHO duty officer, was awakened at 2 a.m. by a call from the ministry of health in Singapore. The official reported that a doctor who had treated patients with atypical pneumonia in Singapore and who had gone to New York for a medical meeting had become ill and was on his way home on a Singapore Airlines flight. WHO worked with the government of Germany to have this patient removed from the airplane in Frankfurt and isolated there; his wife, who also was sick by that time, was also hospitalized in Frankfurt.

On March 15, the situation appeared grave: over 200 patients were infected with the new illness, which apparently was caused by an infectious agent unknown to medical science. Health workers appeared to be at greatest risk of infection. Antibiotics and antivirals were not effective against the illness, which was spreading within Asia and to Europe and North America. Clearly, this was an emerging infection, but its course was impossible to predict. It might become endemic in humans like HIV/AIDS; it might become endemic in animals; or it might pass through two or three generations and attenuate, as monkeypox had done.

Facing this uncertainty, WHO embarked upon a program of global alert and containment. It began by giving the disease a name—severe acute respiratory syndrome—that would not stigmatize any region or country and by providing increasingly detailed case definitions as information about the disease evolved. The health organization issued emergency guidance for travelers and urged airlines to watch for and report illness among passengers who had traveled to affected areas. And, at the same time, WHO enlisted support for investigating SARS from institutions represented by GOARN. In all, the effort would grow to involve 115 experts from 26 institutions in 17 countries. Field teams were sent to affected areas, while other experts remained in Geneva to supplement WHO staff.

The electronic networks connecting WHO with countries and regions across the globe made it possible to use real-time information to control the spread of SARS. It soon became clear that, despite the alert issued on March 12, SARS was being spread internationally by air travelers. In some instances, infected travelers were found to have spread the virus to other travelers during the flights themselves. The most famous incident occurred on an Air China flight from Hong Kong to Beijing on which numerous passengers became infected. A number of Asian businessmen who traveled to areas with outbreaks returned home apparently in good health, only to develop SARS later; meticulous epidemiological investigation strongly suggested in-flight transmission of the virus.

In response to these events, on April 2 WHO boldly made additional recommendations that exceeded the existing three-disease framework of the IHR. Airlines serving areas where local transmission of SARS was occurring were advised to actively screen departing passengers using two simple questions: Did the traveler have a history of contact with persons with SARS or with a syndrome similar to SARS? Did the traveler have a fever, cough, or other signs and symptoms? If travelers answered either question in the affirmative, WHO recommended that countries not permit those persons to depart.

The Consequences of Public Health Measures on Individual Rights and National Economies

Each country with local transmission of SARS determined how it would control the further spread of disease. In Hong Kong, information on each person with SARS, including their name and their contacts, was recorded in a police database normally used to identify clusters of crime, with the goal of identifying clusters of SARS. The names entered into the database were also provided to immigration officials in order to prevent those individuals from traveling abroad. Furthermore, Hong Kong used remote screening to detect fever in all airline passengers and required each passenger to have a health declaration. Passengers with fevers were prevented from traveling either within or outside of the country.

On May 2, WHO concluded that environmental transmission of SARS had occurred in one apartment complex in Hong Kong. (It was later determined that this incident was precipitated by several coincident factors.) This discovery, along with other cases of SARS that could not be traced back to contact with an infected person, caused great concern that the virus was now spreading outside a confined setting, such as a hospital, and into the general community. This concern led WHO to advise international travelers to postpone nonessential travel to areas with SARS outbreaks. These recommendations, which were distributed on the World Wide Web, came at a major financial cost to those areas where the infection was located. Airline travel to affected areas all but halted, resulting in more than $30 billion in losses in Asia, according to Asian Development Bank estimates.

Revision of the International Health Regulations

Within four months of beginning containment activities, and without the use of novel drugs or vaccines, all chains of human-to-human transmission were broken, the SARS virus was driven out of its new human host, and the outbreak was declared over. Several factors contributed to this success: vigorous national containment activities, including case identification, case isolation, contact tracing, surveillance, and quarantine of contacts; well-publicized international travel recommendations; and an element of good fortune. WHO, working with its many international partners, was able to provide risk assessment and communication during the SARS pandemic that allowed countries to deal with this emerging infectious disease. Although WHO advisories were not always clearly understood by the general public, governments were alerted to the existence of the pandemic and were guided as to how to manage its risks. And, fortunately, SARS’s relatively slow rate of spread allowed time for epidemiological investigation and containment. It is unlikely that an outbreak of influenza would afford such opportunities.

The global response to SARS illustrates the importance of moving beyond the passive role WHO previously played in addressing infectious threats. Indeed, the current vision for the best way to deal with emerging infectious diseases is of a world on constant alert, prepared to detect and respond to international infectious disease threats within 24 hours, using the most up-to-date means of global communication and collaboration. And the new IHR framework is a major move toward achieving this vision.

That framework can accommodate all emerging infectious diseases of international concern, including pandemic influenza. Possible outbreaks are detected using information from networks, as well as from individual countries, and, in a significant break with the past, reports other than official government notifications can be used by WHO to alert the world to an event of international concern. When an outbreak is suspected, a confidential decision-tree analysis is conducted with the affected country. If such an outbreak proves to be of international importance, WHO will support collaborative risk- and evidence-based development of public-health measures and a national containment plan.

It was the SARS outbreak, more than anything else, which led to the realization that the previous approach to emerging infectious diseases was no longer workable. SARS made it clear, for example, that WHO required a revised and greatly strengthened legal framework—the IHR—to obtain reports of infectious diseases from sources other than countries, even though such an action represents a potential infringement on national sovereignty.

With the revised IHR there is now a formal framework for proactive international surveillance and response to any epidemic that begins to spread internationally. In particular, the revised IHR (which will come into full legal force in June 2007) will guide responses to any future influenza pandemic. At the World Health Assembly in May 2006, WHO was asked to coordinate immediate voluntary implementation of all provisions in the revised IHR relevant to the current avian influenza situation and the related threat of a pandemic.

An emergency Influenza Pandemic Task Force has been established for this purpose, and it held its first meeting in Geneva on September 25, 2006. During that meeting, the experts considered the criteria for declaring the start of an influenza pandemic and asked whether the current pandemic alert should be raised to a higher level. Given the current situation—a novel influenza virus is causing sporadic human cases but remains poorly adapted to humans—the experts decided that there was no need to alter the present level of alert. The group also examined a variety of other issues, including ways to improve the sharing of H5N1 influenza virus isolates and information on genetic sequences, the updating of diagnostic reagents and test kits, the development of a pandemic vaccine, and the monitoring for virus strains resistant to currently available antiviral drugs.

The task force’s conclusions and recommendations are included in a formal report that will be reviewed by the WHO Executive Board in January 2007 and the World Health Assembly in May 2007. These activities form part of the strengthened mechanisms by which WHO and its many partners maintain vigilance for emerging microbial threats and activate defenses that protect the international community from threats to its health and shocks to its societies and economies.


Howard Markel, M.D., Ph.D.3

University of Michigan at Ann Arbor


Although my initial charge for this workshop was to discuss and review the history of the 1918 influenza pandemic, I suspect that most of you have heard or read that story many times, especially over the last few years. Instead, I propose to widen the dialogue so that we may consider the broader—and I think richer—history of epidemics and pandemics in the American experience.

Because an epidemic represents a living, social laboratory it provides a useful window through which to view the resilience and efficiency of a particular society’s administrative structures, its political and social strengths and shortcomings, and its engagement with rumor, suspicion, or outright bad behavior. After all, epidemics are hardly quiet occasions; they are experienced and responded to in real time by the affected community and then later discovered, heralded, and explained by historians like me. As a result, the historical record of these events is especially rich and provocative (Briggs, 1961; Rosenberg, 1987; Rosenberg and Golden, 1992).

In what follows I will link some of the lessons learned from pandemics past to the quandaries that policymakers are grappling with today in response to a potential influenza pandemic and other microbial threats. And, given that we simply do not have that much solid data on the means of mitigating or containing worst-case scenario influenza pandemics in our modern era, I will discuss why exploring the historical record of the 1918–1920 pandemic may help uncover a body of clues and suggestions. What makes that record so compelling to me as a historian of infectious diseases is that the 1918–1920 American influenza experience constitutes one of the largest databases ever assembled in the modern, post-germ-theory era on the use of nonpharmaceutical interventions to mitigate pandemic influenza in urban centers. Policymakers, on the other hand, may find it more compelling that the record allows them to observe how large numbers of people respond when a pandemic appears but vaccines and antivirals are neither effective nor widely available. History suggests that when faced with such a crisis, many Americans—and more formally, American communities—will adopt, in some form or another, what they perceive to be effective social-distancing measures and other nonpharmaceutical interventions (NPI). This is precisely what the nation did in 1918–1920, with a wide spectrum of outcomes. A critical question is, Can we make sense of and exploit this historical data to inform decisions today on how best to employ or discard various NPI strategies? And, if so, can we evaluate their costs and benefits in a manner that includes a polished set of social, legal, and ethical lenses?

No one can claim that history provides some magical oracle of what to expect in the future. Human history simply does not work that way. It may move in distinct and recognizable patterns, but this is quite different from repeating itself in predictive cycles. Yet despite those limitations, historians, since at least the days of Thucydides, have contributed nuanced and contextualized views of how past dilemmas emerged or evolved and have offered useful models of the resolution of those dilemmas. These views and models merit our attention.

In particular, historians have been trying for millennia to make sense of epidemics, and we can learn much from studying their conclusions. What follows are but two of the many useful models that historians have developed for analyzing the structure of epidemics.

The Four Acts Model of an Epidemic

When considering the broad scheme of an epidemic or pandemic as a social phenomenon, perhaps the best study that I know of is not a study at all but is rather the remarkable novel by Albert Camus, The Plague—a text I routinely assign to all my students hoping to learn anything about epidemics. Indeed, the eminent historian Charles Rosenberg uses the novel in his seminal essay “What Is an Epidemic?” to gain insights into the nature of an epidemic, combining the observations from fiction with decades of scholarship documenting three of the most serious public-health crises of human history—the devastating cholera pandemics of 1832, 1849, and 1866 (Rosenberg, 1992). From these considerations Rosenberg characterizes the unfolding of an epidemic as a dramaturgic event, usually in four acts, with a distinct but somewhat predictable narrative plot line:

During the first act, “progressive revelation,” members of a community begin to acknowledge an increasing number of cases and/or deaths resulting from the spread of a particular contagious disease. Camus’s The Plague demonstrates this pattern with one of the most memorably disgusting opening scenes in all of literature:

When leaving his surgery on the morning of April 16, Dr. Bernard Rieux felt something soft under his foot. It was a dead rat lying in the middle of the landing. On the spur of the moment he kicked it to one side and without giving it a further thought, continued on his way downstairs. Only when he was stepping out onto the street did it occur to him that a dead rat had no business to be on his landing. . . .

In the pages that follow Dr. Rieux finds many more dead rats along the streets of Oran, but it takes a great deal of hectoring, cajoling, lecturing, and—perhaps most critical when chasing after an epidemic—precious time to convince his fellow townspeople that there is, in fact, a serious problem threatening the entire community’s health. This lethargic response is not restricted to the pages of fiction. Slow acceptance and delayed courses of action in the face of contagious threats are common features in the history of human epidemics. In some cases this tardiness is ascribed to “failure of the imagination,” a reason that may be au courant but that is decidedly uninformative. More often the delayed acknowledgment of an epidemic can be explained by the fact that acknowledging it would threaten various interests or strongly held beliefs, from the economic and institutional to the personal and emotional.

Act two, “managing randomness,” involves the society creating an intellectual framework within which the epidemic’s “dismaying arbitrariness” can be understood. Readers of The Plague will recall the heated debate over causation of the epidemic that took place between the doctor, who subscribes to a modern, scientific approach to understanding the plague, and the Catholic priest, who preaches that the plague’s visitation was an act of divine retribution for sinful lifestyles which thus demanded repentance. This dichotomy in understanding deadly disease, with religion or morality on one hand and science on the other, was a hallmark of many societies in the past, and we should not discount the role that religious, spiritual and cultural beliefs and practices can play in mitigating, containing, or inflaming an epidemic in our own era.

The third act is “negotiating public response.” Once an epidemic is recognized, the public typically demands that collective action of some kind be taken. The history of epidemics is littered with tales demonstrating the importance of bold, decisive leadership and the costs of ineffective or incompetent crisis management. As many historians observing the tug of war between the public and those charged with protecting their health have noted, the operative word in public health is “public.” It is generally necessary to develop a strong consensus among the multitudes constituting a community, taking into account varying cultural values and attitudes, social and class hierarchies, and economic and political imperatives; when those efforts fail, little is typically accomplished in any attempt to rein in disease.

Act four, “subsidence and retrospection,” is perhaps the most vexing phase of an epidemic, at least to those involved in public health management and epidemic-preparedness planning. Epidemics often end as ambiguously as they appear. Or, to lift a phrase from the poet T.S. Eliot, they end “not with a bang, but a whimper.” Specifically, once an epidemic peters out and susceptible individuals die, recuperate, or escape, life begins to return to its normal patterns, and healthy people begin to place the epidemic in the past. Although this act can conclude with deep retrospection and action in terms of preparedness for subsequent epidemic events, more often in American—and, in fact, the world’s—history, the curtain closes on a note of complacency or even outright amnesia about the event. A critical question, therefore, is how a community or government maintains credibility in its warning systems, maintains public support for costly preparedness planning, and keeps the public alert but not alarmed, panic-stricken, or completely disengaged.

This four-act model of epidemics is an excellent starting point for our contemplation of pandemics, but, of course, not all microbial threats will follow such a straightforward narrative structure. For that reason, many historians of epidemics have taken a different tack and set out to understand epidemics by identifying their major ingredients or features. This leads to a different model of the structure of epidemics and pandemics.

Major Leitmotivs of Pandemics

In my own work over the past 16 years I have attempted to identify and describe critical leitmotivs that have appeared repeatedly in epidemics and pandemics across time. To this end I have analyzed numerous pandemics, including the Black (bubonic) Plague of the Middle Ages; smallpox in the seventeenth and eighteenth centuries; the cholera pandemics of the nineteenth century (in 1832, 1845, 1866, and 1892); the influenza pandemics of 1889, 1918, 1957, and 1968; childhood infectious disease epidemics of the early twentieth century, including diphtheria, polio, and scarlet fever; smallpox epidemics of the nineteenth and twentieth centuries; and also contemporary crises involving HIV/AIDS, tuberculosis, SARS, and other newly emerging infectious diseases (Markel, 1999, 2000, 2001, 2004; Stern and Markel, 2004; Markel and Stern, 1999, 2002).

Not all of the themes that I have identified in this work will appear in each epidemic or pandemic. Instead they should be viewed as major ingredients of an epidemic with the understanding that the precise mix of the themes can change from era to era and disease to disease. These leitmotivs include the following:

Thinking about epidemics is almost always framed and shaped—sometimes in useful ways, sometimes not—by how a given society understands a particular disease to travel and infect its victims.

People living in eras when microbes were not considered to be the cause of epidemic diseases responded to these threats differently from people living in eras when the role of microbes was understood. Well into the nineteenth century, for example, experts and lay people alike believed that many epidemics and contagious diseases were spread through polluted air—or miasma, from the Greek word for defilement of the air or pollution. The miasmatic theory of disease held that toxic emanations emerged from the soil or from rotting organic material or waste products and caused specific epidemic diseases such as cholera, typhus, and malaria. Given the foul odor that pervaded every urban center of this era, the belief that it was an unhealthy force makes a good deal of sense, but when this theory was in vogue it led to public-health approaches that were very different from those taken today. Aside from calls for quarantine, most attempts to manage an epidemic centered on cleaning up and disinfecting streets, sewers, privies, and other dirty parts of the urban environment. This trend changed markedly in the mid-to-late nineteenth century with the advent of the germ theory of disease, and it continues to be revised, refined, and fine-tuned today as we learn more and more about microbial ecology, evolution and genomics. Still, old ideas about contagion are often slow to die and, like fevers of unknown origin, have the power to recrudesce; as a result, many people today have ideas about the cause and spread of particular infectious diseases that are markedly different than the principles we teach in the medical school classroom (Duffy, 1992).

The economic devastation typically associated with epidemics can have a strong influence on the public’s response to a contagious disease crisis.

An order of quarantine, which closes a port or a city to foreign travelers or goods, costs communities a great deal of money and creates great hardships for individuals. It is not surprising, then, that during the international sanitary conferences of the mid-nineteenth century, merchants were often vocal opponents of any efforts to prevent or contain disease that might have had the effect of impeding commercial enterprises and the flow of capital. Such concerns are particularly salient in today’s world, given the existence of a globalized marketplace in which a rapidly growing percentage of the world’s population does business, especially since the emergence of India, China, and the former Communist bloc nations.

There are two sides to this equation, however. While increased global commerce can certainly contribute to the spread of a pandemic, it also sets up conditions that encourage more effective responses to a pandemic. Epidemics cost the business community a lot of money, and, in particular, the cost of a human-to-human avian influenza pandemic would be, according to all reliable projections, simply staggering. The threat of such losses could therefore encourage developing nations faced with a brewing epidemic to communicate more openly with Western nations in the hope that their greater financial resources could help them rapidly contain or mitigate the outbreak (Stern and Markel, 2004).

The movements of people and goods and the speed of travel are major factors in the spread of pandemic disease.

It is no coincidence that the rise of bubonic plague pandemics during the Middle Ages (as well as the invention of the formal concept of quarantine) coincided with the advent of ocean travel and imperial conquest. As humans traveled in wider and wider circles, so too did the germs that inhabited them. During the nineteenth century, four devastating cholera pandemics were aided and abetted by the transoceanic steamship travel of millions of people. By the close of the nineteenth century, journeys from Europe or Asia to North America required a travel time of 7 to 21 days, which gave most infectious diseases ample incubation periods and facilitated their recognition by health officers at the point of debarkation. It is quite different today, when commercial jet planes, the main mode of international travel, allow people to reach anywhere in the world in less than a day. Indeed, a recent study in PLoS Medicine details how seasonal influenza can mirror peaks and valleys in air travel (Brownstein et al., 2006). Yet while the natural response to a pandemic might be to limit air travel, either by an international edict or by the natural response of people to avoid travel by commercial airliner during such a crisis, such a response would pose a new set of troubling and potentially damaging consequences.

Our fascination with the suddenly appearing microbe that kills relatively few in spectacular fashion too often trumps our approach to infectious scourges that patiently kill millions every year.

In 2003, for example, society’s response to SARS—which affected approximately 8,000 people and killed 800—was much more dramatic than its response to tuberculosis, which infected 8,000,000 and killed 3,000,000 that same year. In 2001 there was a similar disproportion in the response to anthrax, which threatened only a few, and to the ongoing global pandemic of HIV/AIDS, which kills 2,000,000 people a year. An even more egregious example is the lack of widespread attention to the common scourges of lower respiratory tract infections and diarrheal diseases, which kill millions on an annual basis (Markel and Doyle, 2003; Achenbach, 2005). Unfortunately, it will be impossible to know until long after the money and resources have been committed—and perhaps only after a flu pandemic has actually occurred—whether influenza was the right microbe to focus upon instead of one of the host of other emerging and re-emerging infectious threats that we face. Perhaps the more salient question for our discussion today is how we can apply the lessons of SARS, influenza, AIDS, bioterrorism, and other microbial threats to develop a comprehensive and global plan against contagion.

Widespread media coverage of epidemics is hardly new and is an essential part of any epidemic.

The media has the power both to inform and to misinform. Because the media powerfully shapes the public’s perception of an epidemic, the details of how popular communication is carried out are of utmost importance. Today’s coverage of pandemic events differs from previous eras in the technology, speed, and variety with which news reports are generated. In the early twentieth century, for instance, American consumers relied heavily on an extensive print media, whereas consumers today can turn to a panoply of newspapers, magazines, television, radio, cable, Internet sites, Web logs, and discussion groups. That does not mean that Americans today are better informed. In the early twentieth century there were multiple daily editions of newspapers in every major city and large town and a great deal of superb reporting on epidemic threats, allowing a majority of Americans to be well-informed on a wide swathe of scientific issues as they were understood at the time. Nor is it a new phenomenon for physicians, public-health officials, and others to simultaneously accommodate, inform, and, at times, correct the press. Nonetheless there is no question that the breadth of media genres—and the demographics of their consumers—is far greater today than in previous eras, and there is no doubt that the media has a far greater ability to provide consumers with both useful information and misinformation.

A dangerous theme of epidemics past is the concealment of the problem from the world at large.

Across time many nations or states have concealed news of an epidemic to protect economic assets and trade. In 1892, for example, the German government initially concealed—and therefore exacerbated—that year’s cholera pandemic because of fears that closing the port of Hamburg, at the time the largest port in the world, would mean economic ruin for many (Markel, 1999; Evans, 2005). At other times concealment efforts have been motivated by nationalistic bias, pride, or politics, as was the case with South Africa and HIV in the 1990s, China during the first months of the SARS epidemic of 2003, and, over the past few years, Indonesia and avian influenza (IOM, 2004, 2005). Regardless of the reasons for concealment of a public-health crisis, from the political to the purely mercenary, secrecy has almost always contributed to the further spread of a pandemic and hindered public health management.

One of the saddest themes of epidemics throughout history has been the tendency to blame or scapegoat particular social groups.

History has demonstrated too often that social groups already deemed to be “undesirable” by the population at large are most at risk for harsh or inappropriate treatment in times of crisis, no matter whether the crisis is a product of infectious disease, natural disasters, or simply social unrest. At many points in American history, especially during the nineteenth and early twentieth centuries, the implicit assumption that social undesirability was somehow correlated with increased risk of contagion has led to the development of harsh policies aimed at the scapegoats rather than the containment of a particular infectious microbe.

There are many examples of scapegoating across time, such as the widespread American assumption during the cholera pandemic of 1892 that any case of cholera discovered in the United States had been brought from Eastern Europe in the bodies of impoverished Jewish immigrants, the demonization of the Chinese in the 1900 bubonic plague outbreak in San Francisco, and, more recently, the stigmatization of gay men and Haitians during the early years of the AIDS epidemic in the United States (Markel, 1999, 2004; Kraut, 1994; Grmek, 1990).4 At many—but certainly not all—points of time, poor people have been disproportionately affected by epidemics and pandemics. Public-health policies that place blame on victims or, worse, on perceived victims can have many negative consequences, including misdiagnosing the healthy and isolating or quarantining them with the sick; social unrest, legal entanglements, and infringements of civil liberties; and extremely counterproductive behaviors by those targeted as diseased. Such negative results have the potential to detract in a major way from efforts to contain or mitigate a contagious disease.

Both historical constructs of pandemics—the four-acts model and the identification of leitmotivs—proved helpful in our center’s analysis of the 1918–1920 influenza pandemic. For example, when examining the second wave of the pandemic, which stretched from September to December 1918, Rosenberg’s four-act play metaphor provides a useful framework for understanding the rise and fall of that phase of the pandemic. Ultimately, however, the Rosenberg model works best for a single-phase epidemic rather than a multiphasic pandemic such as the entire four-wave flu pandemic of 1918–1920.

The leitmotiv model can also be a useful lens through which to view the 1918 pandemic, but with one key exception: the social scapegoating leitmotiv was not all that loud. I suggest that this was because the pandemic spread so rapidly and ubiquitously among all sectors of American society (especially among those 20–45 years of age). That does not mean, however, that we should assume that this unsavory feature of epidemic disease could not rear its head in the present or future. One has only to recall the SARS epidemic of 2003 and the short-lived but well-publicized ban on all Asian exchange students at the University of California at Berkeley, to name one recent example, to realize that it can still happen here.

All of the other leitmotivs described above did feature prominently in the 1918 influenza pandemic. For example, during the 1918 pandemic it was very common for local business owners to oppose nonpharmaceutical interventions that seriously affected their economic health. School and business closings, restrictions on travel, and even the use of face masks often proved to be quite contentious issues. Furthermore, many warnings of an influenza pandemic in the early summer of 1918 went unheeded; indeed, the stacks of medical libraries are filled with rarely read public health reports published in the years before the flu pandemic that urged the creation of more hospital beds and isolation wards as well as the development of better disease surveillance and containment strategies (Markel, 1999). And once the flu crisis was over, little was done to rectify public health administrative problems that were exposed by the 1918–1920 pandemic.

Other leitmotivs that played significant roles in the pandemic include how the media interpreted the contagious spread of influenza and reported on these events; the role public health risk communications played in containing or mitigating the spread; the internecine rivalries between local, state, and federal health agencies and political leaders; suppression of reporting of cases (in 1918, this was often because privately practicing physicians did not want to lose control of—and remuneration from—their paying patients by reporting and referring them over to public health departments); the unclear etiology of influenza; ineffective vaccines against the wrong organism; and, of course, issues of travel, particularly the mass movements of soldiers around the country and then to the European theater of what we now refer to as World War I.

Although historians by nature are hesitant to predict the future, I feel quite comfortable in suggesting that most or all of these themes will again be part of whatever emerging infectious disease crises we face in years to come. And while I cannot tell you what the exact proportion or precise mix of ingredients in this recipe will be, I do think history provides us with many thought-provoking, broad-brush strokes with which to think about pandemics.

The Power and Limits of Historical Inquiry

To investigate how historical inquiry can inform the planning of pandemic mitigation strategies, one must first be aware of the limits of this approach. Let us begin by describing the historian’s laboratory: the archives. A good way to think about archival research is to imagine your life being recorded by a historian. Every day the scholar would file a report and store that document in a bank of file cabinets that, by the end of your life, would presumably hold many reams of paper. Imagine, then, that a fire destroys most of that room, with only occasional file folders from discrete periods of your life surviving. With few exceptions, such spotty records are what historians deal with in their inquiries, and much of our knowledge of the past depends on the supporting archival materials that were actually saved. Furthermore, some archival materials may not be entirely reliable or may simply be unavailable, and sometimes historians may misinterpret the materials, creating yet more problems. Many times, lacunae in the historical record are so great that we can only hypothesize or speculate about what may actually have occurred.

Moreover, when one studies the history of epidemic disease, a whole new set of highly specialized records becomes important. A historian needs to be intimately familiar with the relevant era’s collection of epidemiological data, its medical terminology (the same term can mean different things in different medical eras), its surveillance and containment methods, and its medical and microbiological understandings of the cause and spread of the disease. For the 1918–1920 influenza pandemic there are many cases where critical numerical population and case-incidence data were not recorded or were recorded in a manner less consistent than we would demand of a prospective study conducted today. Such gaps constitute significant challenges and even roadblocks in any historical study.

One also needs to be familiar with the social, cultural, and intellectual history of the region under study and to know its differences from and similarities to our contemporary era. For example, someone studying the 1918 flu epidemic should know that the United States of that time had many similar features to the modern era: rapid transportation in the form of trains and also automobiles, although certainly many fewer automobiles than we have today; rapid means of communication in the form of telegraph and telephone; large, heterogeneous populations with substantial urban concentrations (although many more Americans lived in rural environments in 1918 as compared to the present); a news and information system that was able to circulate information on the pandemic widely; and a broad spectrum of public health agencies at various levels of government.

Conversely, there are also many striking contrasts between that era and our own. For example, the legal understanding of privacy and of civil and constitutional rights as they relate to public health and governmentally directed measures (such as mass vaccination programs or medications) has changed markedly over the past eight decades. Furthermore, public support of and trust in these measures—along with trust in the medical profession in general—has changed significantly over this time, especially with regard to vaccines and medications. This can be seen, for example, in the recent spate of lawsuits filed because of vaccine failures or because of perceptions that vaccines may have significant and dangerous side effects. Other features of the modern world that need to be considered when studying the historical record of the 1918 pandemic in order to inform contemporary policymaking include the speed and mode of travel, particularly the development of high-volume commercial aviation; immediate access to information via the Internet and personal computers; a baseline understanding among the general educated population that the etiological agents of infectious diseases are microbial; and advances in medical technology and therapeutics which have vastly changed the options available for dealing with a pandemic.

Another important aspect of American society circa 1918 that was markedly different from the present is how daily commercial transactions are carried out. In 1918 there were no supermarkets, refrigeration was primitive, and only a limited variety of preserved foods was available for purchase. Consequently, consumers often needed to shop daily at multiple locations, such as grocers, produce vendors, bakeries, and butchers. Moreover, there were no credit cards, and personal checking accounts were typically employed only by the affluent, so frequent visits to banks for cash were not uncommon. Indeed, for ordinary citizens in 1918 the United States was almost entirely a cash economy. So while the closure of a bank during an epidemic in 1918 might be explained as a public health measure, for the many Americans who had lived through the Depression of 1893 as well as other boom and bust cycles, such an action might well be misconstrued as a failure of the bank itself, and, as such, it had the potential to create civil unrest. As a result, the last public spaces to close during the 1918 pandemic—after theaters, schools, churches, restaurants, and saloons—were often banks and other financial institutions.

Today, on the other hand, a number of daily functions of life can be accomplished with little or no human interaction—provided you have the economic and educational resources to carry them out. Banking and credit transactions, the ordering and delivery of food via the Internet, entertainment, and personal and business communication, to name just a few, can all be carried out by large numbers of Americans in a way that can allow them to minimize human contact and thus shield themselves somewhat from the spread of contagious disease (Germain, 1996; Chandler, 1980; Blackford, 2003; Rothbard, 2002). Nevertheless, as recent disasters have shown, many Americans have little in the way of an economic safety net, and their restricted access to financial resources and even basic needs of living could have a deleterious effect on disaster-containment strategies.

The Defense Threat Reduction Agency/Department of Defense Escape Communities Study

The overwhelming majority of histories of the 1918 influenza pandemic focus on its widespread carnage. Consequently, our research group was surprised to uncover the archival remnants of a handful of American towns or institutions that emerged from the virulent second wave of the pandemic—September to December 1918—with relatively few influenza cases and no deaths.

In July of 2005, we were asked by the Defense Threat Reduction Agency of the U.S. Department of Defense to study these “escape communities” of 1918 because the Pentagon was contemplating what to do with personnel essential to the nation’s security in the event of a pandemic. The crucial question we were asked was whether the historical experiences of these escape communities might reveal some strategy to keep a small, but specific, sector of the population—the U.S. Armed Forces—completely free of influenza. The results of this year-long, in-depth archival study proved somewhat vexing.

Some of these so-called escape communities that we studied, such as the village of Fletcher, Vermont (population 737), were too small to suggest that their success resulted from anything more than remote location, the uneven attack rates of the virus, and good fortune. Others—like the Trudeau Tuberculosis Sanatorium in Saranac Lake, New York, and the Western Pennsylvania Institution for the Blind, in Pittsburgh—were already de facto quarantine islands because of the era’s prevailing views toward the confinement of the contagious and the disabled.

Two communities, the U.S. Naval base at Yerba Buena Island, one mile from the busy port of San Francisco, and the mining town of Gunnison, Colorado, not only escaped the pandemic but also carried out a particularly extensive menu of restrictive public health measures (i.e., nonpharmaceutical interventions). Under the bold, decisive direction of astute public health officers, the still-healthy island and mountain towns essentially cut off all contact with the outside world to shield themselves from the incursion of influenza before it arrived in their vicinity, a measure we termed protective sequestration. In a nation besieged by flu, Yerba Buena and Gunnison boasted zero mortality and almost no cases of infection over a lengthy time period.

When planning for pandemics, it is tempting to focus on the apparent success of protective sequestration at Yerba Buena and Gunnison. But lest we be too eager to adopt such measures widely today, we must recall that one of these communities was literally an island directed by the bold, iron hand of a naval commander who could isolate his men from flu-ridden San Francisco. The other was a small, homogeneous, and well-run mining town situated high in the Rockies that could barricade its roads and regulate its railways.

Historical analysis of the few communities around the world that did manage to escape the 1918 influenza pandemic (including Australia and American Samoa) reveals an obvious but admittedly not terribly practical prescription: live in a remote area, preferably an island or mountain community, that can wall itself off from human contact. On the other hand, there are tantalizing suggestions that all these escape communities experienced much milder third waves of the pandemic when compared to neighboring communities.5

The CDC/Michigan Historical Study of Nonpharmaceutical Interventions Taken by 43 U.S. Cities During the Second and Third Waves of the 1918–1920 Pandemic

Beginning in August 2006, the Center for the History of Medicine at the University of Michigan Medical School, collaborating with the CDC’s Division of Global Migration and Quarantine, embarked upon a study of the nonpharmaceutical interventions (NPIs) taken by the 43 most-populated cities in the continental United States (population > 100,000) in the second and third waves of the 1918–1920 influenza pandemic.

During the 1918 pandemic, a broad menu of NPIs was implemented in different American cities, including: making influenza a reportable disease; isolation of the ill; quarantine of suspect cases and families of the ill; closing schools; protective sequestration measures; closing worship services; closing entertainment venues and other public areas; staggered work schedules; face-mask recommendations or laws; reducing or shutting down public transportation services; restrictions on funerals, parties, and weddings; restrictions on door-to-door sales; curfews and business closures; social-distancing strategies for those encountering others during the crisis; public health education measures; and declarations of public health emergencies. The motive, of course, was to help mitigate community transmission of influenza.

Over the next twelve months we will undertake a historical epidemiological analysis of the application of NPIs in these communities during 1918–1919, with the goal of informing the potential use of NPIs in future pandemics. At present, no rigorous, systematic historical and epidemiological study exists on the relationship, positive or negative, between influenza case incidence and death rates during the 1918 pandemic and the NPIs taken at different points in time by the most-populated urban centers in the United States. Our principal aim is to fill this intriguing and pertinent lacuna.

Working with a team of epidemiologists, historians, and statisticians, based both at Michigan and the CDC, we are now engaged in the rather arduous task of digging up every municipal report from the 43 large cities in the continental United States during the 1918–1920 pandemic—many of which reside in dusty, unmarked boxes or storage units of libraries and have rarely (if ever) been consulted in the secondary historical literature on the pandemic. Further, we will analyze a wide body of U.S. census data, including weekly mortality reports from this period, as well as 86 different daily newspapers produced over an 8-month period; records from U.S. military bases, hospitals, and universities; and a huge number of other historical documents and papers from libraries and archives across the nation. When completed, the final report and its supplementary Web-based influenza archive will constitute a widely accessible version of the largest single collection of nonpharmaceutical intervention data taken in the United States during the 1918–1920 influenza pandemic.

Every detail, whether it is the number of the dead in a particular city for a particular week or the political battles being reported in the press, will be compared with at least two other sources for verification. Similarly, in each of the cities studied we will consult at least two newspapers that have been identified in terms of political party affiliation, editorial policy, and circulation figures.

As Alfred Crosby has noted in his classic book, America’s Forgotten Pandemic: The Influenza of 1918, in human terms the pandemic was not one overarching story but instead “thousands of separate stories” with different origins and outcomes for the influenza victims, their families, and their communities (Crosby, 1989). We do not promise any oracular commandments for pandemic preparedness, but we are confident that our fine-grained, rigorous, and scholarly historical epidemiological analysis of these American cities will significantly inform those who are considering the application, utility, policies, and design of nonpharmaceutical interventions today.


When contemplating pandemics, it is clear that the precise shapes and contours of the next influenza pandemic will be strikingly different from those of the past. But there is a positive side to this change over time: this is essentially the first pandemic in human history for which we will have had some semblance of advance warning—and hence, the opportunity to prepare. Similarly, with advances in virology, surveillance, rapid communications, modern computing, and epidemic modeling, there is the exciting hope that we can apply all these methods to a pandemic’s rapid mitigation, if not containment or outright prevention. As such, I am historically optimistic that lessons from both the past and present can help us devise effective and also ethically and socially appropriate strategies to mitigate the microbial threats that inevitably loom on our horizon.


  1. Achenbach J. Can we stop the next killer flu? Washington Post. Dec 7, 2005. p. W10.
  2. Blackford MG. A History of Small Business in America. Chapel Hill: University of North Carolina Press; 2003.
  3. Briggs A. Cholera and society in the nineteenth century. Past and Present. 1961;19(1):76–98.
  4. Brownstein JS, Wolfe CJ, Mandl KD. Empirical evidence for the effect of airline travel on the inter-regional influenza spread in the United States. PLoS Medicine. 2006;3(10):e40. [PMC free article: PMC1564183] [PubMed: 16968115]
  5. Chandler AD. The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: Belknap Press; 1980.
  6. Crosby AW. America’s Forgotten Pandemic: The Influenza of 1918. Cambridge, UK: Cambridge University Press; 1989. p. 66.
  7. Duffy J. The Sanitarians: A History of American Public Health. Urbana: University of Illinois Press; 1992.
  8. Evans RJ. Death in Hamburg: Society and Politics in the Cholera Years, 1830–1910. New York: Penguin Books; 2005.
  9. Germain RN. Dollars Through the Doors: A Pre-1930 History of Bank Marketing in America. Westport, CT: Greenwood Press; 1996.
  10. Grmek MD. History of AIDS: Emergence and Origin of a Modern Pandemic. Princeton, NJ: Princeton University Press; 1990.
  11. Henderson DA. Eradication: Lessons from the past. Morbidity and Mortality Weekly Report. 1999;48(SU01):16–22.
  12. Heymann D. Emerging Infectious Diseases: Past Is Prologue; Keynote address at the IOM’s Forum on Microbial Threats public workshop entitled “Ethical and Legal Considerations in Mitigating Pandemic Disease”; Washington, DC. Sep 19, 2006.
  13. IOM (Institute of Medicine). Learning from SARS: Preparing for the Next Disease Outbreak. Washington, DC: The National Academies Press; 2004. [PubMed: 22553895]
  14. IOM. The Threat of Pandemic Influenza. Washington, DC: The National Academies Press; 2005. [PubMed: 20669448]
  15. Kraut AM. Silent Travelers: Germs, Genes, and the “Immigrant Menace”. New York: Basic Books; 1994.
  16. Markel H. Quarantine!: East European Jewish Immigrants and the New York City Epidemics of 1892. Baltimore, MD: Johns Hopkins University Press; 1999. pp. 85–134.
  17. Markel H. For the welfare of children: The origins of the relationship between U.S. public health workers and pediatricians. American Journal of Public Health. 2000;90(6):893–899. [PMC free article: PMC1446259] [PubMed: 10846506]
  18. Markel H. Journal of the plague years: Documenting the history of the AIDS epidemic in the United States. American Journal of Public Health. 2001;91(7):1025–1028. [PMC free article: PMC1446708] [PubMed: 11441724]
  19. Markel H. When Germs Travel: Six Major Epidemics That Have Invaded America Since 1900 and the Fears They Have Unleashed. New York: Pantheon Books; 2004.
  20. Markel H. Bird flu: Major threat or Chicken Little? Medscape Public Health and Prevention. [Online]. Aug 9, 2006. [accessed December 28, 2006]. Available: http://www.medscape.com/viewarticle/541875.
  21. Markel H, Doyle S. The epidemic scorecard. New York Times. Apr 30, 2003. p. A31.
  22. Markel H, Stern AM. Which face? Whose nation?: Immigration, public health, and the construction of disease at America’s ports and borders, 1891–1928. American Behavioral Scientist. 1999;42(9):1314–1331.
  23. Markel H, Stern AM. The foreignness of germs: The persistent association of immigrants and disease in American society. Milbank Quarterly. 2002;80(4):757–788. [PMC free article: PMC2690128] [PubMed: 12532646]
  24. Rosenberg CE. The Cholera Years: The United States in 1833, 1845, and 1866. Chicago, IL: University of Chicago Press; 1987.
  25. Rosenberg CE. What is an epidemic? AIDS in historical perspective. In: Rosenberg CE. Explaining Epidemics and Other Studies in the History of Medicine. New York: Cambridge University Press; 1992. pp. 278–292.
  26. Rosenberg CE, Golden J, editors. Framing Disease: Studies in Cultural History. New Brunswick, NJ: Rutgers University Press; 1992.
  27. Rothbard MN. A History of Money and Banking in the United States: The Colonial Era to World War II. Auburn, AL: Ludwig von Mises Institute; 2002.
  28. Stern AM, Markel H. International efforts to control infectious diseases, 1851 to the present. Journal of the American Medical Association. 2004;292(12):1474–1479. [PubMed: 15383519]
  29. Xu R-H, He J-F, Evans MR, Peng GW, Field HE, Yu D-W, Lee C-K, Luo H-M, Lin W-S, Lin P, Li L-H, Liang W-J, Lin J-Y, Schnur A. Epidemiologic clues to SARS origin in China. Emerging Infectious Diseases. 2004;10(6):1030–1037. [Online]. Available: http://www.cdc.gov/ncidod/EID/vol10no6/pdfs/03-0852.pdf [accessed March 26, 2007]. [PMC free article: PMC3323155] [PubMed: 15207054]
  30. WHO (World Health Organization). History of WHO and International Cooperation in Public Health. [Online]. 2007. [accessed April 12, 2007]. Available: http://www.who.or.jp/GENERAL/history_wkc.html.


History of World Health Organization (WHO) and International Cooperation in Public Health

1830: Cholera overruns Europe.
1851: First International Sanitary Conference is held in Paris to produce an international sanitary convention, but fails.
1892: International Sanitary Convention, restricted to cholera, is adopted.
1897: Another international convention, dealing with preventive measures against plague, is adopted.
1902: International Sanitary Bureau, later renamed the Pan American Sanitary Bureau and then the Pan American Sanitary Organization, is set up in Washington, DC. This is the forerunner of today’s Pan American Health Organization (PAHO), which also serves as WHO’s Regional Office for the Americas.
1907: L’Office International d’Hygiène Publique (OIHP) is established in Paris, with a permanent secretariat and a permanent committee of senior public health officials of member governments.
1919: League of Nations is created and charged, among other tasks, with taking steps in matters of international concern for the prevention and control of disease. The Health Organization of the League of Nations is set up in Geneva, in parallel with the OIHP.
1926: International Sanitary Convention is revised to include provisions against smallpox and typhus.
1935: International Sanitary Convention for aerial navigation comes into force.
1938: Last International Sanitary Conference is held in Paris. The Conseil Sanitaire, Maritime et Quarantenaire at Alexandria is handed over to Egypt. (The WHO Regional Office for the Eastern Mediterranean is its lineal descendant.)
1945: United Nations Conference on International Organization in San Francisco unanimously approves a proposal by Brazil and China to establish a new, autonomous, international health organization.
1946: International Health Conference in New York approves the Constitution of the WHO.
1947: WHO Interim Commission organizes assistance to Egypt to combat a cholera epidemic.
1948: WHO Constitution comes into force on 7 April (now marked each year as World Health Day), when the 26th of the 61 Member States that had signed it ratifies the document. Later that year, the First World Health Assembly is held in Geneva with delegations from the 53 governments that by then were Members.
1951: Text of the new International Sanitary Regulations is adopted by the Fourth World Health Assembly, replacing the previous International Sanitary Conventions.
1969: These are renamed the International Health Regulations; louse-borne typhus and relapsing fever are excluded, leaving only cholera, plague, smallpox, and yellow fever.
1973: Report from the Executive Board concludes that there is widespread dissatisfaction with health services and that radical changes are needed. The Twenty-sixth World Health Assembly decides that WHO should collaborate with, rather than assist, its Member States in developing practical guidelines for national health-care systems.
1974: WHO launches the Expanded Programme on Immunization to protect children from poliomyelitis, measles, diphtheria, whooping cough, tetanus, and tuberculosis.
1977: Thirtieth World Health Assembly sets as its target that the level of health to be attained by the turn of the century should be that which will permit all people to lead a socially and economically productive life: Health for All by the Year 2000.
1978: Joint WHO/UNICEF (United Nations Children’s Fund) International Conference in Alma-Ata, USSR, adopts a Declaration on Primary Health Care as the key to attaining the goal of Health for All by the Year 2000.
1979: United Nations General Assembly, as well as the Thirty-second World Health Assembly, reaffirms that health is a powerful lever for socioeconomic development and peace.
1979: A Global Commission certifies the worldwide eradication of smallpox, the last known natural case having occurred in 1977.
1981: Global Strategy for Health for All by the Year 2000 is adopted and endorsed by the United Nations General Assembly, which urges other concerned international organizations to collaborate with WHO.
1987: United Nations General Assembly expresses concern over the spread of the AIDS pandemic. The Global Programme on AIDS is launched within WHO.
1988: Fortieth anniversary of WHO is celebrated. The Forty-first World Health Assembly resolves that poliomyelitis will be eradicated by the year 2000.
1993: Children’s Vaccine Initiative is launched with UNICEF, UNDP, the World Bank, and the Rockefeller Foundation.
1996: WHO Centre for Health Development opens in Kobe, Japan.
1998: Fiftieth anniversary of the signing of the WHO Constitution.

SOURCE: WHO (2007).



Acting Assistant Director General, Communicable Diseases.


For more information on the evolution of the International Health Regulations see Annex 1-1, pages 59–60.


George E. Wantz M.D. Distinguished Professor of the History of Medicine; Professor of Pediatrics and Communicable Diseases; Director, Center for the History of Medicine.


For a broader look at the history of quarantine, infectious diseases, and public health, particularly as they pertain to influenza, see: Mullet CF. 1949. A century of English quarantine, 1709–1825. Bulletin of the History of Medicine 23(6):527–545; McDonald JC. 1951. The history of quarantine in Britain during the 19th century. Bulletin of the History of Medicine 25(1):22–44; Hardy A. 1993. Cholera, quarantine and the English preventive system, 1850–1895. Medical History 37(3):250–260; Rosen G. 1958. A History of Public Health. New York: MD Publications; Duffy J. 1992. The Sanitarians: A History of American Public Health. Urbana: University of Illinois Press; Schepin OP, Yermakov WV, eds. 1991. International Quarantine. Madison, CT: International Universities Press: 125–158; Risse G. 1988. Epidemics and history: ecological perspectives and social responses. In Fee E, Fox D, eds. AIDS: The Burdens of History. Berkeley: University of California Press: 33–66; Winslow CEA. 1967. The Conquest of Epidemic Disease: A Chapter in the History of Ideas. New York: Hafner; Crosby AW. 1989. America’s Forgotten Pandemic: The Influenza of 1918. New York: Cambridge University Press; Hoehling AA. 1961. The Great Epidemic. Boston: Little Brown & Co; Kolata G. 1999. Flu: The Story of the Great Influenza Pandemic. New York: Touchstone Books; Barry J. 2003. The Great Influenza. New York: Viking. For more literary versions of the drama of epidemic disease and quarantine, see: Boccaccio G. 1931. The Decameron. Translated by J Payne. New York: Modern Library; Defoe D. 1948. A Journal of the Plague Year. New York: Modern Library; Camus A. 1948. The Plague. New York: Knopf; Ibsen H. 1988. An Enemy of the People. Translated by J McFarlane. Oxford, UK: Oxford University Press; Lewis S. 1925. Arrowsmith. New York: Harcourt Brace; IOM (Institute of Medicine). 2005. The Threat of Pandemic Influenza: Are We Ready? Washington, DC: The National Academies Press, especially the chapters by J Taubenberger, pp. 69–89, and by L Simonsen, et al., pp. 89–114.


For the full report of this study, see: Markel H, Stern AM, Navarro JA, Michalsen J. 2005. A Historical Assessment of Nonpharmaceutical Disease Containment Strategies Employed by Selected U.S. Communities during the Second Wave of the 1918–1920 Influenza Pandemic. Defense Threat Reduction Agency: U.S. Department of Defense. [Online]. Available: http://www.med.umich.edu/medschool/chm/influenza/assets/dtra_final_influenza_report.pdf [accessed December 28, 2006]. To consult all of the primary source materials that comprised this report, see: The University of Michigan Center for the History of Medicine. The 1918–1920 Influenza Pandemic Escape Community Digital Document Archive. [Online]. Available: http://www.med.umich.edu/medschool/chm/influenza/index.htm [accessed December 28, 2006]. For the abbreviated published report of this study, see: Markel H, Stern AM, Navarro JA, Michalsen JR, Monto AS, DiGiovanni Jr C. 2006. Nonpharmaceutical influenza mitigation strategies, U.S. communities, 1918–1920 pandemic. Emerging Infectious Diseases 12(12):1961–1964. [Online]. Available: http://www.cdc.gov/ncidod/EID/vol12no12/pdfs/06-0506.pdf [accessed May 1, 2007].

Copyright © 2007, National Academy of Sciences.