NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

National Research Council (US) Committee on Continuing Assistance to the National Institutes of Health on Preparation of Additional Risk Assessments for the Boston University NEIDL. Continuing Assistance to the National Institutes of Health on Preparation of Additional Risk Assessments for the Boston University NEIDL, Phase 2. Washington (DC): National Academies Press (US); 2010.


Committee Response and Findings

The committee reviewed the material presented by the NIH contractors on September 22 and concluded that, at this point, it cannot endorse the illustrative analyses presented as scientifically and technically sound or as likely to lead to a thorough assessment of the public health concerns previously raised by the NRC. The committee understands that the analytical results discussed were incomplete and that work on additional analyses was still ongoing. For this reason, the committee had to base its review on the presentations alone, without documentation of the scientific rationale for the analyses.

Based on the limited information provided, the committee has a few overarching concerns. First, it appears that the contractor has not yet been responsive to the committee’s recommendation that qualitative analyses addressing the three questions raised in its 2008 letter report be prepared first and that these qualitative analyses then be supplemented by quantitative analysis through modeling using available data on the agents in question. The results of modeling are only as good as the quality of the modeling inputs, and the problem of limited data should be addressed in narrative, with supporting scientific rationale for its interpretation, as part of a comprehensive qualitative analysis for the 13 pathogens. Instead, NIH and its contractors used a modified Delphi process to gather expert opinions that were then used as a substitute for data for modeling. Circumventing the absence of data with a Delphi process is a tactical error. Parameter values acquired in this way may be misleading without validation (Kaplan, 1992). Had the NRC’s previous recommendations been heeded, it would have been clear that such parameter values were unnecessary. When data are not available in the literature, the contractors should turn to relevant case studies and argue by analogy.

In addition, it is important that modeling be used in a context that reflects scientific knowledge and experience. For example, the University of Utah analysts presented an extensive modeling analysis for plume dispersal of aerosolized Rift Valley Fever virus (RVFv). But because RVFv is transmitted primarily by a vector, it is not an appropriate candidate for aerosol dispersal modeling. The committee reiterates the need to include actual data based on published results in the models where possible, for example, for modeling the speed of secondary transmission of SARS. Again, the models must be transparent and couched in the context of the risk assessment and address appropriate uncertainties.

While it is possible that such analyses will be provided in the final assessment, the committee strongly believes that a mid-course correction by Tetra Tech and its subcontractors will be necessary to reach that point. In short, while the committee commends Tetra Tech and its subcontractors for carrying out some extensive illustrative quantitative risk calculations, much work needs to be done before risks are adequately assessed.

In summary, the results presented on September 22 are insufficient for the committee to find that the analyses presented thus far will lead to a scientifically and technically sound risk assessment. The committee had endorsed the approaches presented at the last meeting in its third letter report, but noted that it had not yet seen results. The illustrative results presented to date are not yet sufficiently documented and supported to convince the committee that the contractors are on track to completing a comprehensive assessment of risk for the NEIDL facility.

The committee offers the following more specific comments on: the process used to generate dose-response models and the dose metrics used; the uncertainty analyses used in the modeling; other issues concerning modeling; the need for case studies based on actual data; and the method used for identification of vulnerable and susceptible populations.

Use of a Delphi Process to Generate Dose-Response Relationships

The committee is very concerned about the method by which dose-response assessment—a critical element of risk assessment for prediction of human infection, morbidity, and mortality—is being handled in the contractor analyses. The committee was informed that NIH elected to use a “modified Delphi method” to generate dose-response estimates due to the absence of human data for predicting infections. This process involved soliciting opinions on human infective doses (HIDs) from an expert panel of biodefense specialists and laboratory researchers via questionnaires. Opinions were sought on values for HID10, HID50, and HID90, or the levels of inhalation exposure at which 10 percent, 50 percent, and 90 percent of an exposed human population might become infected with aerosolized pathogenic agents, for 13 pathogens. Although the report on the Delphi process was not presented to the committee at the September 22 meeting with the BRP, the committee subsequently asked for and was given a copy of the draft process report. Although NIH did not ask the committee to comment on the Delphi process report, the committee’s review of this report raised additional concerns, which are presented in an appendix (Attachment C). Only the major concerns are summarized here.
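
The relationship between three elicited quantiles and a fitted dose-response curve can be made concrete with a minimal sketch. The HID values below are hypothetical placeholders, not figures from the Delphi report, and the log-probit form is only one of several functional forms the contractors might adopt:

```python
import math

# Hypothetical elicited quantiles (organisms inhaled); placeholders for
# illustration, NOT values from the Delphi report.
hid10, hid50, hid90 = 50.0, 500.0, 5000.0

# Log-probit model: P(infection | dose d) = Phi((ln d - mu) / sigma).
# mu is set from HID50; sigma from the HID10-HID90 spread.
Z90 = 1.2816  # standard normal 90th percentile
mu = math.log(hid50)
sigma = (math.log(hid90) - math.log(hid10)) / (2 * Z90)

def p_infect(dose):
    """Probability of infection at a given inhaled dose."""
    z = (math.log(dose) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# The fitted curve reproduces the elicited quantiles, but a different
# functional form anchored to the same points can diverge sharply at the
# low doses most relevant to community exposure -- one reason documented
# scientific rationale matters more than bare point values.
```

Note that a two-parameter curve is over-determined by three elicited points; whether the elicited HID10 and HID90 are actually consistent with the chosen functional form is itself a check worth reporting.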

The scientific basis for the opinions of the experts involved in the process is not clearly explained in the report, especially regarding scaling animal data to humans, translating routes and endpoints, addressing low-dose issues, and other uncertainties. There is no detailed documentation of how the experts arrived at their individual opinions and judgments or of how interaction among them might have modified their opinions and judgments. Rather, the appendices to the Delphi report provide only numerical scores from “voting.” The scientific bases for the estimates derived by the experts, including citations to the published literature, case studies of human outbreaks, and knowledge of laboratory-acquired infections (LAIs), should have been integrated into the project team’s reports and presentations. The committee does not find convincing the claim that the experts’ conclusions regarding human infectivity from pathogen particles (models fitted to three median point estimates from 8 experts for 13 pathogens) “tracked the literature,” because the tabulated studies represent an incomplete accounting of the available literature (see Attachment C). To the committee’s knowledge, there has been no outside peer review of the results of the Delphi exercise.

The committee is not in a position to conduct a full peer review of the risk assessment results presented because supporting documentation was not provided. Based on the limited documentation that was made available, however, the committee is unable to endorse the use of the Delphi-derived estimates in the risk assessment. The analysis presented does not appear to reflect sound scientific judgment or robust risk assessment practices.

Uncertainty and Sensitivity Analyses

Good practice in risk assessment requires transparency and the development of a sensitivity analysis that addresses the effects of variation in the model inputs on results. Methods for deriving sensitivity analyses, typically via variations on Monte Carlo simulations, also usually provide a range of results rather than point estimates. Ranges convey the potential variability of the results better than single point estimates do. Good practice also includes the use of qualitative uncertainty analysis (NRC, 1994; Morgan and Henrion, 1990). This is a frank discussion of the variables that are the least well understood and thus contribute most to the overall scientific uncertainty of the results. Inputs that may be highly variable but are based on reliable data with little scientific uncertainty, such as human inhalation volume, figure prominently in the sensitivity analysis. Other input variables, such as pathogen dose-response, may be highly uncertain due to a lack of scientific data as well as variability within and between hosts. An input also may be highly uncertain and have low variability. Input variables of the latter type would have little impact in the sensitivity analyses, but might drive the total uncertainty of the results. The Tetra Tech team discussed the use of Latin Hypercube Sampling (LHS) as their basis for uncertainty analysis, but it was not clear whether these LHS analyses addressed sensitivity analysis rather than uncertainty analysis. The Tetra Tech team should exercise greater care in presenting these aspects of the data used for modeling and ensure that both uncertainty and sensitivity analyses are adequately developed.
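
The distinction drawn here between variability-driven sensitivity and knowledge-driven uncertainty can be illustrated with a minimal Latin Hypercube sketch. All ranges below are hypothetical placeholders, not contractor inputs:

```python
import random
random.seed(1)

def lhs_uniform(n):
    """One-dimensional Latin Hypercube sample of Uniform(0, 1):
    one draw from each of n equal-probability strata, shuffled."""
    u = [(i + random.random()) / n for i in range(n)]
    random.shuffle(u)
    return u

N = 1000
# Two illustrative inputs (hypothetical ranges, not contractor values):
# breathing rate is variable but well characterized; the dose-response
# slope is poorly known, so it is given a wide (uncertainty) range.
breath = [0.8 + 0.8 * u for u in lhs_uniform(N)]      # m^3/h
slope = [10 ** (-4 + 3 * u) for u in lhs_uniform(N)]  # per organism

risk = [b * s for b, s in zip(breath, slope)]

def rank(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def rank_corr(xs, ys):
    """Spearman rank correlation -- a crude sensitivity indicator."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    m = (n - 1) / 2.0
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry)) / n
    var = sum((a - m) ** 2 for a in rx) / n
    return cov / var
```

In this sketch the uncertain slope, spanning three orders of magnitude, dominates the spread in `risk`, while the well-characterized breathing rate contributes little; a qualitative uncertainty narrative would flag the slope, not the breathing rate, as the weak point.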

In addition, the committee is not persuaded that the “uncertainty analysis” provided has much useful content in its current form. As one example already mentioned above, it is inadequate to consider that the uncertainty in the dose-response relationship could be represented as a question of which of the eight Delphi process experts is correct, when another possibility is that none of the curves developed from the three points elicited from each expert provides the true human dose-response curve. Nor is it adequate to state that the results of fitting models to three median point estimates from eight experts for 13 pathogens “tracked the literature” when the referenced studies represent an incomplete accounting of the available literature.

Incompleteness of the Analyses and Lack of Documentation for Assumptions

The Tetra Tech analyses presented to the committee contain undocumented or poorly documented assumptions. For example, the aerosol release from a centrifuge accident was described as a “10 ml leak from container into rotor, but only a small fraction is aerosolized.” How was the judgment made on how much of the 10 ml was aerosolized? Is this fraction an important uncertainty? What was the quantity of pathogen contained in the release? No bounding calculation is given, such as the result if all 10 ml are aerosolized. This uncertainty was apparently not a component of the uncertainty analysis described, which is restricted to only a few parameters. While there are many acceptable ways to conduct a valid risk assessment, and the choices made by the contractor team may be defensible, providing the details of the many unspecified assumptions behind the numerical results and risk matrices presented on September 22 would greatly improve the committee’s understanding of the strengths and limitations of the work conducted.
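
A bounding calculation of the kind the committee has in mind might look like the following sketch. Apart from the 10 ml leak volume quoted in the presentation, every number is an assumed placeholder:

```python
# Bounding calculation for the centrifuge scenario. The 10 ml leak volume
# comes from the contractor description; the titer and the aerosolized
# fractions are assumed placeholders for illustration.
leak_ml = 10.0
titer_per_ml = 1e8  # assumed organisms per ml of suspension

def organisms_released(aerosol_fraction):
    """Organisms aerosolized for a given fraction of the leaked volume."""
    return leak_ml * titer_per_ml * aerosol_fraction

low = organisms_released(1e-4)   # "only a small fraction is aerosolized"
bound = organisms_released(1.0)  # bounding case: the entire 10 ml

# The low-to-bounding spread here is four orders of magnitude -- exactly
# the kind of range an uncertainty analysis should surface rather than
# dispose of with an unstated assumption.
```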

Risk assessment should provide insight on what events and processes give rise to risk, and then allow the acceptability of the risk—and the potential risk reduction from improved equipment and procedures—to be evaluated (NRC, 1996). Given the lack of cohesive knowledge of the dose-response relationships for the various pathogens in humans and animals, and the two disease transmission examples (RVF and SARS CoV), the presentations may have merit as illustrating the types of calculations needed for a risk assessment. But the committee is not persuaded that the project team has yet made progress in exploring and documenting the important issues for the NEIDL.

Consideration of the available case studies (such as the SARS case described below) suggests that transfer of a pathogen outside the laboratory by an infected worker is an important class of risk events. Such transfers can lead to diverse outcomes: for example, no secondary transfer in a recent case of tularemia in Maryland, but secondary infections from laboratories studying more highly contagious pathogens. Scenarios in which an infection acquired in the laboratory leads to secondary transmissions in the community should be considered in addition to the centrifuge example, particularly where there are documented case studies of LAIs. As noted above, the degree of documentation of the details, such as for the centrifuge accident and the dose-response relationships for pathogens, is too sparse to be persuasive.


Other Issues Concerning Modeling

The committee continues to believe that the use of branching process models and compartmental models is appropriate, rational, and straightforward. The committee was pleased with the progress made with the two branching process models described in the presentation. However, based on what was presented, the committee has serious concerns about the modeling context and, in at least one instance, about the manner in which the models were implemented.

First, as noted above, the committee was disappointed with the extent to which qualitative and quantitative approaches were integrated in the September 22 presentations. To be clear, the committee believes that presenting the modeling results without also describing the natural history of a pathogen, the characteristics of the disease outbreaks with which the pathogen has been associated, and experience with the pathogen in the context of known releases from laboratories diminishes the credibility of these results, which cannot be expected to stand alone without an appropriate context and explication. It would boost confidence in the model if the insights it provides generally match the expectations engendered by field experience. If the insights appear to contradict the expectations of experienced researchers, then the specific circumstances that make such results plausible must be explained. For example, the results of the SARS modeling presented to the committee appear to be counterintuitive, yet no credible explanation was provided of why the risks of secondary infection transmission should be the same for urban, suburban, and rural sites with their significant differences in population density.

In fact, the committee notes that this counterintuitive outcome may have much to do with the fact that internal restrictions in the MACCS2 model do not allow modeling of a pathogen release within 100 meters of the building release site. The Tetra Tech team’s results “showed” that the concentrations of aerosol at the rural and suburban sites were “two- to four-times higher” than at the urban site, due to the increased turbulent mixing of the released puff in the rougher urban topography. Modeling a plume within the first 100 meters of release in an urban zone is admittedly difficult. However, if the puff leaving the building is assumed to have the same concentration at all locations, the much higher population density at the urban location could be associated with higher risk than at the suburban and rural sites. Thus, it may not be true, as Tetra Tech concluded, that risks are higher in the suburban and rural locations. This 100-meter gap in the plume modeling must be addressed more satisfactorily; it cannot be ignored, as it appeared to have been in the presentations. Upper-bound risk conditions, such as low wind speed, a stable atmosphere, or an early-morning inversion, in which mixing complexities are minimal and urban canyon channeling is a major factor, could be considered.
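
The committee's point about population density can be illustrated with back-of-the-envelope arithmetic. The concentrations and densities below are assumed placeholders, not the Tetra Tech results; expected exposure is taken to scale roughly with concentration times population density:

```python
# Illustrative arithmetic only; all numbers are assumed placeholders.
# If the same puff exits the building at each site, expected exposures
# scale roughly with (near-field concentration) x (population density).
sites = {
    # (relative near-field concentration, persons per km^2)
    "urban": (1.0, 12000),
    "suburban": (3.0, 1500),  # concentration a few times higher, per the presentation
    "rural": (3.0, 100),
}
exposure = {name: conc * density for name, (conc, density) in sites.items()}
# Despite the lower modeled concentration, the urban site dominates here
# because of its far higher population density.
```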

In recent years, a great deal of research has been completed on modeling the near field (the urban zone less than 100 meters from a release), driven by societal and government concerns over the potential impacts of a chemical or biological agent released by malevolent or other action. Alternatives to MACCS2 exist that Tetra Tech could consider. Granted, parameters for and methods of handling building downwash, building upwash, urban canyon channeling, building wake turbulence, and other factors of the urban topography remain difficult to handle with certainty; these can be an issue in suburban and rural near fields as well. (Publications that further discuss the problems and modeling approaches include Belcher, 2005; Pullen et al., 2005; Olvera and Choudhuri, 2006; Burrows et al., 2007; Neofytou et al., 2008; Singh et al., 2008; and Brixey et al., 2009.)

The committee also has a specific concern with the implementation of the SARS modeling: the way in which mitigation strategies were represented by an “instantaneous” reduction in the value of the reproduction number. This is not supportable. Several sources document gradual decreases in the reproduction number for a range of diseases; in almost all cases, the decline occurs over a period of weeks, not instantaneously. In the case of SARS, the decline in the estimated reproduction number can take between 5 and 25 days even in a hospital environment with an already recognized problem (Cooper et al., 2009).
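
The difference between instantaneous and gradual mitigation can be shown with a minimal branching-process sketch. The parameter values (R0, control period, serial interval) are assumptions for illustration, not the contractor's SARS inputs:

```python
import math
import random
random.seed(0)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def outbreak_size(r0, r_final, ramp_days, serial_interval=8, horizon=120):
    """Branching process in which the reproduction number declines
    linearly from r0 to r_final over ramp_days; ramp_days=0 reproduces
    the 'instantaneous' mitigation assumption."""
    def r_at(t):
        if t >= ramp_days:
            return r_final
        return r0 + (r_final - r0) * t / ramp_days
    cases, total, t = 1, 1, 0
    while cases and t < horizon:
        nxt = sum(poisson(r_at(t)) for _ in range(cases))
        total += nxt
        cases = nxt
        t += serial_interval
    return total

# Hypothetical parameters, not the contractor's SARS inputs.
n = 1000
instant = sum(outbreak_size(3.0, 0.7, 0) for _ in range(n)) / n
gradual = sum(outbreak_size(3.0, 0.7, 25) for _ in range(n)) / n
# A control period of roughly 25 days (cf. Cooper et al., 2009) yields
# materially larger expected outbreaks than an instantaneous drop in R.
```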

Finally, the committee notes that the intensive effort put into plume modeling and the earthquake scenario (stated to be the worst case) greatly overemphasizes risk pathways that are not particularly relevant to Rift Valley Fever (primarily transmitted by mosquitoes) or to SARS (highly transmissible person to person, with an infected laboratory worker being the major concern).

Vulnerable Populations

The September 2010 presentations indicated that the Tetra Tech team had identified the vulnerable or sensitive groups as “those 5 years of age or younger, those 65 years of age or older, those with diabetes mellitus, those with HIV/AIDS, and those who are pregnant.” The report from the modified Delphi process with the expert panel listed the median percentage increases in vulnerability to disease and death among these five groups. These percentage increases were then used to calculate the increased risk of infection at each site (if any) based on the percentage of population falling into these groups. Although the committee believes that the contractors’ approach and presentations at the September meeting contributed to a meaningful discussion of the issues surrounding vulnerability and sensitive subpopulations, to meet the risk assessment goals that this committee set out in its previous reports, the contractors should recast their vulnerability analysis and shift direction, as explained below.

Refining, Re-evaluating and Making Key Assumptions Transparent

The committee recommends that the contractors consider re-evaluating and refining several of their key assumptions regarding vulnerable and sensitive subpopulations. First, it is not clear how these categories of vulnerable groups were determined, but the committee gathers that they were presented to the expert group in the Delphi exercise, not developed as a result of that exercise. Appendix A, Part C of the Delphi process report states that “the groups have been dictated in part by the level of data available for the sites being evaluated.” Because this report was made available after the September 22 presentation, the committee did not have the opportunity to discuss this statement, or the vulnerability group criteria, with the Tetra Tech team. Based solely on its reading of this language, it appears that the published literature and available data were the key criteria for selection of vulnerable groups. As discussed below, the committee presents a different approach for consideration. Second, the contractors used these vulnerability categories only to estimate additional infection rates, though the experts addressed percentage increases in vulnerability to disease and death. The committee believes that the contractors should address both increased risk for mortality and more serious health outcomes (severe morbidity). This issue was not addressed in the presentations even though the Delphi report specifically asked experts to assign probabilities for both increased disease (morbidity) and mortality. Third, the modeling carried out by the Tetra Tech team assumed that no member of groups at risk for primary infection was a member of a vulnerable subpopulation. This assumption does not seem realistic. For modeling purposes, it would seem more prudent to assume that some proportion of individuals at risk for primary infection could also be a member of a vulnerable group (e.g., diabetics).

Refining and Re-evaluating the Concept of Vulnerable and Sensitive Subpopulations

As explained previously, the committee cannot comment in detail about the methodology used to determine the vulnerable categories used by the Tetra Tech team. If this methodology were made transparent, the committee would be in a better position to offer a critique and suggestions for next steps. Nevertheless, based on the September presentations, the committee believes that the Tetra Tech team could make some improvements in determining these categories. While it might be possible to include previously gathered data into this new evaluation, a different methodology will better serve the risk assessment and the decision makers who will use it.

EPA’s National Environmental Justice Advisory Committee (NEJAC) set out a conceptual model that the committee believes is useful and should be considered by the Tetra Tech team (NEJAC, 2004). The model defines several key concepts, such as stressors, and conceives vulnerability broadly. More particularly, EPA’s NEJAC defines four important characteristics of vulnerable populations:

  1. More susceptible or sensitive to disease outcomes;
  2. Differentially exposed to environmental conditions that could render these populations more vulnerable;
  3. Differentially prepared to address deleterious conditions, such as exposures to infectious diseases; and
  4. Likely to respond differently than non-vulnerable populations to the same level of infection or exposure, in ways that may worsen outcomes.

The South Boston and Roxbury neighborhoods that are potentially affected by the NEIDL facility have been identified as environmental justice communities and a vulnerable population analysis should take that into consideration. These communities suffer from higher rates of several chronic diseases, including asthma, which is not one of the higher risk factors presented, and have a much greater population density (see discussion below on dispersal modeling). The rural and suburban communities to which they are being compared are not environmental justice communities, to the best of the committee’s knowledge.

In recasting its vulnerability analysis, the committee urges the Tetra Tech team to consider the following steps:

  • Develop a robust methodology for determining categories of sensitive or vulnerable individuals. In doing so, the committee recommends that Tetra Tech adopt the EPA NEJAC conceptual framework as a starting point; consult with community leaders and with public health professionals familiar with the South Boston and Roxbury areas; and review the published public health literature and health surveillance data, if available.
  • Evaluate not only increased infectivity or morbidity, but also increased disease severity for vulnerable and sensitive subpopulations. This analysis should include endpoints such as mortality and, if possible, those that represent predicted differential morbidity outcomes.
  • Model primary infection assuming that some of the individuals at risk for such infection might also be members of vulnerable groups.

Use of Case Studies

In its April 20, 2010 letter report, the committee made a specific recommendation in the modeling subsection that “modeling should be augmented by case studies based on actual occurrences of laboratory or natural infections.” The committee believes that case studies can be used not only to provide information on how and whether LAIs are transmitted to the general population by infected workers. They can also provide ground-truth examples of how the potential time of exposure to a pathogen compares with the time of recognition of an LAI, of secondary transmission of the disease, and of the effectiveness of treatment.

There are a number of well-documented accounts of recent LAIs that could be used to develop brief case studies illustrating more clearly the effect of infected laboratory workers on community health. Examples include, but are not limited to, infection with Brucella spp., SARS virus (see Box for an example), Francisella tularensis, and Burkholderia mallei. In addition, case studies describing naturally occurring illness can be developed for those agents potentially studied at the NEIDL for which well-documented or recent LAIs have not been reported. A few examples include infection with Yersinia pestis (tourists in New York), Monkeypox virus (contracted from exotic pets), Bacillus anthracis (contaminated drum hides), Marburg virus (a tourist who visited Uganda), and the U.S. experience with the recent H1N1 influenza virus pandemic. The committee is not advocating that these specific examples be developed as case studies, but believes that available information on a variety of the agents to be studied at the NEIDL could be used to provide context and grounding for the qualitative assessment of the risk posed to the local community by an infected laboratory worker. A 2010 NRC report, Evaluation of the Health and Safety Risks of the New USAMRIID High-Containment Facilities at Fort Detrick, Maryland, provides a list of laboratory incidents that have occurred at USAMRIID’s laboratories and might provide useful examples from which to develop case studies. In the box below, the committee provides an example to illustrate what it means.


Sample Case Study: SARS/CoV. In China, SARS/CoV was grown in a BSL-3 laboratory by a worker who apparently had worn inappropriate personal protective equipment (PPE) and then treated the sample to inactivate the virus before removing it to a BSL-1 laboratory (more...)


The committee believes that Tetra Tech should carefully consider what might be the appropriate metric(s) for evaluating the transmission of pathogens in heterogeneous human populations. The metric presented to the committee was the probability that a release would lead to at least one secondary transmission (the probability of > 0 transmissions). This probability arises naturally and easily out of multiple stochastic simulations, and it may be of interest to a policymaker concerned with the health and well-being of the population as a whole, but it will be of no interest to those groups at particular risk of infection. Such at-risk groups will want to know what might happen to them should an introduction in fact lead to secondary transmission.
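
The contrast between the population-level metric and what an at-risk group would want to know can be sketched with a single-generation stochastic simulation; the reproduction number used is a hypothetical placeholder:

```python
import math
import random
random.seed(2)

def poisson(lam):
    """Knuth's Poisson sampler for offspring counts."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

R = 0.8  # hypothetical effective reproduction number for one introduced case
trials = [poisson(R) for _ in range(10000)]

# Population-level metric: probability a release leads to any secondary case.
p_any = sum(1 for t in trials if t > 0) / len(trials)
# What an at-risk group cares about: how many cases, given that any occur.
n_any = sum(1 for t in trials if t > 0)
mean_given_any = sum(t for t in trials if t > 0) / n_any

# For Poisson offspring, p_any is roughly 1 - exp(-R), while the
# conditional mean is roughly R / p_any -- the two metrics answer
# different questions.
```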

Copyright © 2010, National Academy of Sciences.
Bookshelf ID: NBK51858

