NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Institute of Medicine (US) Roundtable on Value & Science-Driven Health Care; Olsen LA, McGinnis JM, editors. Redesigning the Clinical Effectiveness Research Paradigm: Innovation and Practice-Based Approaches: Workshop Summary. Washington (DC): National Academies Press (US); 2010.


6Aligning Policy with Research Opportunities


The reforms in clinical effectiveness research that were the focus of the Redesigning the Clinical Effectiveness Research Paradigm workshop, and that are discussed in this report, are truly broad in scope and will deeply affect long-held practices and tenets. However, bringing such change about will require much more than new and improved methodologies. Many stakeholders will need to engage significantly in reform. Cross-sector collaboration is needed to create a focus and set priorities, to clarify the questions that must be addressed, and to marshal the resources that the reform effort requires. Moreover, the sheer scope of change needed requires stakeholders who are diverse but working together toward common goals. A coordinated public- and private-sector effort historically has been imperative to secure funding for such efforts and to coordinate spending strategically. Such collaborations also are vital to moving forward on the establishment of standards, such as a common language for electronic health records (EHRs). Furthermore, government interventions are widely considered necessary to remove perceived policy impediments to progress. One example, stated earlier in this summary, is to address the chill on clinical research imposed by real and perceived barriers and burdens from the ways privacy rules and Institutional Review Boards (IRBs) are interpreted and structured.1 In addition, broad partnerships are needed to effect wide access to and sharing of data, considered another linchpin of progress. This chapter outlines some policy levers that can drive innovative research and progress in practice-based approaches, as well as the potential roles that various healthcare stakeholders can play to accelerate progress.

Focused on course-of-care data, Greg Pawlson of the National Committee for Quality Assurance describes a major opportunity to use these clinical data for “rapid learning.” By capturing the experience of each patient and clinician in a structured and quantifiable manner, EHR systems have great potential to help transform our capacity to develop information that can be used as important evidence in making clinical decisions. Policy interventions will play a crucial role in improving the development of and access to databases that are suitable for clinical effectiveness research. With product approval increasingly tied to postmarket trial or database commitments to demonstrate the value of treatments, health product developers also are contending with a variety of issues related to the development and use of data for clinical effectiveness analyses. Merck’s Peter K. Honig discusses several key challenges that manufacturers face in responding to these demands. Those challenges include finding a suitable balance between demands for data transparency and maintaining competitive advantage, and improving the methods used to develop clinical effectiveness information.

Recognizing that the scope and scale of existing and future evidence gaps exceed any one entity’s capacity to address all of the needs related to improving evidence availability and application to improve practice, Mark B. McClellan of the Brookings Institution advocates that other approaches also are needed. These approaches should take better advantage of regulatory data, which offer a rich opportunity to improve our knowledge base. McClellan cites the Food and Drug Administration Amendments Act of 2007 (FDAAA) and the Medicare Coverage with Evidence Development policy as models for how regulatory data can be integrated successfully into the ongoing capacity to develop better evidence on what works and, in turn, inform medical practice. Another speaker, J. Sanford Schwartz of the University of Pennsylvania, acknowledges that large amounts of data generated and supported by public investment provide innovative opportunities to inform clinical and comparative effectiveness assessment, but that substantial barriers must be overcome for optimal use of these data. Schwartz offers a series of suggestions to mitigate the following paradox: we have large amounts of data and significant opportunities, but we are prevented from fully accessing the data and taking advantage of the potential opportunities. In view of the reality that evidence-based medicine (EBM) requires integration of clinical expertise and research and depends on an infrastructure that includes human capital and organizational platforms, the head of the recently created Office of Portfolio Analysis and Strategic Initiatives at the National Institutes of Health (NIH), Alan M. Krensky, describes ongoing commitments within the NIH to build a sustainable research infrastructure centered on EBM principles.
Finally, Kathy Hudson of Johns Hopkins University describes work to assess public perspectives on research and efforts to engage the public and the research community in dialogue and consultation designed to weave consumer perspectives into research design, encourage consumer participation in study recruitment and retention, and generally build a relationship of enhanced trust and understanding between healthcare consumers and the research community.


Greg Pawlson, M.D.

National Committee for Quality Assurance

There have been a number of conferences and publications, including an entire Web-based Health Affairs volume, that have articulated the major developing opportunity to use clinical data collected for patient care (course-of-care data) for “rapid learning” (Etheredge, 2007; Pawlson, 2007). Rapid learning using clinical data implies that we should be able to capture the experience of each patient with each clinician in a structured and quantifiable manner similar to what we now do in formal research studies, to extend, but not entirely replace, classic clinical research using randomized controlled trials (RCTs). For the purposes of this paper, we include clinical effectiveness, health services, and other related research using large clinical databases within the scope and definition of rapid learning. However, much of rapid learning is still far from a reality, not only because of spotty use of information technology but also because of policy and related barriers that have created a “chasm” between clinical and health services research (efforts to systematically and scientifically add to our knowledge of patient care) and the actual care of patients in practice. These barriers range from the way we fund, or in many cases do not fund, clinical and health services research, to the structure of data in most electronic records, to the form and content of health professions education.
While solutions are not easy or even all that evident, we would propose that the following be explored: (1) enhanced funding for health services research, linked much more closely to and coordinated with funding for basic and clinical research; (2) a private–public partnership, with strong input from the research community along with others, to set standards for what data are entered into electronic medical records (EMRs) and how those data are entered and retrieved; (3) an active effort to ensure that health plans, the growing number of data consortia (Health Information Exchanges [HIEs] and Regional Health Information Organizations [RHIOs]), and similar efforts provide more open and affordable access to their data for legitimate researchers and educators from academic and other institutions; (4) that Health Insurance Portability and Accountability Act (HIPAA) regulations be reviewed, modified, and delimited to remove the major barriers imposed on research and rapid learning that pose no direct risk to patients; and (5) that health professions education, and especially medical education, recognize and incorporate knowledge and skills related to the use of clinical data for new knowledge.

To begin this overview, imagine a healthcare encounter in the future in which a clinician is seeing a patient with multiple cardiovascular risk factors, including obesity. The clinician records all critical parameters needed to follow the patient in a set of carefully structured data fields in an EMR. Those data are then merged and compared with data on similar patients, both within that physician’s own practice and across patients in other practices. The EMR has a decision support tool that analyzes all the data, including genomic information; helps the clinician delineate and understand the precise level of the patient’s cardiovascular risk (i.e., which are the critical factors to consider, whether blood pressure is more of an issue than cholesterol, etc.); and provides a recommendation for treatment pathways and interventions. In this scenario, the EMR might recommend a relatively newly approved agent for hypertension as well as indicate any additional data needed to track potential treatment effects and side effects. Over the course of treatment, this patient’s data are combined with those of all other patients currently taking the “new” medication in an electronic health records environment. These data (some patient-identified and some de-identified, depending on the need and permissions) are fed back to the individual clinician, regulatory agencies, and researchers with an interest in this medication, to provide data on how the medication, in comparison with other possible medications, is performing in actual use, both for the specific patient and for similar patients. The EHR system also could provide decision support within all attached EMRs to help clinicians determine whether the specific medication is still optimal. All of these linkages and feedback loops can be subsumed under the term “rapid learning” using health information technology (HIT).

The reality of the current situation, in most clinical settings, is far from the efficient, evidence-based practice presented in the scenario, and many barriers impede progress toward this ideal. Although a critical step, implementation of EMRs alone, or even of interoperable EMRs linked in an EHR, will not be sufficient to achieve this standard of care. Indeed, studies have suggested that to achieve the highest quality standard of practice today, EMRs are necessary but not sufficient (Ozcan and Kazley, 2008; Solberg et al., 2005).

Research and development funding and research focus also are major barriers to the use of electronic data for rapid learning. There is widespread acknowledgement that the current levels of funding for health services research (as contrasted with basic biological research) are far from adequate. Beyond insufficient funding, the priorities and compartmentalization of the budgets of major public (the Agency for Healthcare Research and Quality [AHRQ], NIH, Centers for Disease Control and Prevention, Department of Veterans Affairs) and private (foundations and corporations) funders make it difficult for researchers in a new area such as rapid learning to piece together stable funding to even begin to create the data exchange and protocols that may be required prior to initiation and testing of rapid learning. Funding for infrastructure development in the HIT area is even more problematic. While there are some efforts at least tangentially related to rapid learning, such as the Practice Research Network funded by AHRQ, Aligning Forces for Quality funded by the Robert Wood Johnson Foundation, or various RHIOs and HIEs, most are very underfunded, and none that we are aware of directly addresses issues of rapid learning.

Also related to research, there continues to be a large chasm between clinical practice and even health services research. Academics often focus on datasets that are close at hand, such as those in hospitals, faculty practices, or residents’ clinics. It is often challenging to identify, understand, and use data from sources outside of the academic environment; in some instances it is difficult to obtain permission to use the data, or substantial charges are attached to using data from private settings. Indeed, one of the reasons that academics tend to use close-at-hand databases is the difficulty, and often the cost, of using databases from health plans or other sources that might actually hold broad and useful data.

Another barrier is that electronic data standards, including those for EMRs, are still far from complete, especially the critical parameters to guide what data should be included in EMRs and how those data can be entered in fields that lend themselves to retrieval and analysis. Efforts to do even basic clinical performance measurement using EMR data (as contrasted with claims data) are often stymied by missing data (such as left ventricular ejection fraction) or by fields that are nonstandardized across EMRs. While several groups, including the National Quality Forum, the Office of the National Coordinator (for HIT), and a collaborative headed by the American Medical Association with the National Committee for Quality Assurance, the EMR Vendors Association (EMRVA), and others, are working on various aspects of the problem, there are few linkages of any of this work to the research community, and the work is far from complete. Perhaps the most neglected issue is the lack of attention to the completeness of the clinical data recorded on any given patient. While tangential events, such as malpractice claims, audits around submitted insurance claims, or reporting for quality purposes, may have some impact on efforts to obtain more complete data, there is little if any standardization, even within EMRs sold by a given vendor, around defining what data elements are critical for patient care (and therefore should be nearly universally recorded), let alone in what fields or format the data are entered. Few, if any, efforts or programs are in place to enhance the training of clinicians in data entry (beyond how to enhance billing), and there are few direct rewards for enhanced data entry or consequences for poor data entry.

A less apparent but potentially crippling barrier is the increasing conflation of the regulation of direct human subjects research with that of secondary data analysis for general knowledge. Interpretation of HIPAA, and especially of rules governing the use of personal health information (PHI), is central here, though other regulations are at play as well. Since rapid learning requires secondary analysis and use of data gathered for clinical care or quality improvement purposes, how research and PHI issues are handled directly affects rapid learning. All agree that individual patients who are research subjects need careful oversight and protection from undue risk from all forms of research. However, the risks to patients from data that have already been collected to monitor and assist in their own care would seem both quantitatively and qualitatively different from those of primary data collection for research purposes. Finally, there have been several incidents in which projects centered on quality improvement (which is in many ways very analogous to rapid learning) were either stopped or subjected to multiple delays because they were seen as, or treated like, primary clinical research. It is not clear how current approaches to research or PHI would treat the flow and exchange of information in our initial scenario, but there is likely to be little investment in pursuing rapid learning unless these issues are addressed.

Fortunately, there are some policy interventions that could be important in overcoming these barriers. With respect to the inadequacy and compartmentalization of funding, improvements are needed in the way that research and clinical learning involving HIT are funded and coordinated by both the public sector (the U.S. Department of Health and Human Services, including NIH, AHRQ, and the Centers for Disease Control and Prevention; the Department of Defense; the Department of Veterans Affairs; and the Department of Homeland Security) and the private sector, so that our overall expenditures on research and HIT better reflect national priorities. A more dramatic scenario would be to combine the AHRQ and NIH budgets, or to place the planning of all public-sector research and HIT development-related budgets under strong central executive branch oversight, with requirements for coordination of overall healthcare research budgeting. A shorter-term, and more immediately critical, issue is that, to capitalize on the potential of greatly enhanced healthcare data sources, the proportion of funding for secondary database use and other health services research should be markedly increased. Calls for more funding are always viewed as easy to make but difficult to bring off, given entrenched interests even within the research communities, let alone elsewhere. As it has in the past in some areas, a very clear and focused signal from the Institute of Medicine could have a substantial impact in breaking the political and policy logjam in this area.

Policy changes also are important in fostering the development of a more widely effective HIT clinical data program that might support rapid learning. Such policies should incentivize the use of data collected at the point of care in rapid learning and in related research efforts. Additional funding could facilitate the development of research and educational teams that could work with health insurers, EMR vendors, and others in the creation and production of data useful for research. As previously noted, examples of this sort of linkage (e.g., the HMO Research Network, AHRQ’s Practice-Based Research Networks [PBRNs]) are few and far between and painfully underfunded. AHRQ and NIH review panels should include more researchers and data experts with practice and clinical systems HIT backgrounds. More open and affordable access should be provided by insurers and others to large clinical databases that could be the basis of expanding opportunities for the knowledge that is critical to rapid learning. Pediatric cancer care may provide a useful example: virtually all of the treatment provided in pediatric oncology is recorded and applied to registries or active clinical trials, which then inform the optimal future care of children undergoing treatment.

To address the lack of standardization of data elements in EMRs, and to appropriately harness this resource for comparative or clinical effectiveness research or for rapid learning, researchers must be actively involved in the many discussions and organizations that are working to set standards for EMRs. In work to define common data elements, cross-link different systems, and develop approaches to the retrieval and coherent use of datasets, the input of the research community is greatly needed to ensure that critical fields, parameters, and measurements are built into the system. While there might be some hope that, as with data protocols involving ATM cards, the private sector would develop the appropriate conventions, there is a substantial presence of the public sector in health care (whether in financing, such as Medicare and Medicaid, or in delivery of care, as in the Department of Defense and the Department of Veterans Affairs). Thus only a coordinated effort directed across multiple executive branch agencies (the U.S. Department of Health and Human Services, the Department of Defense, the Department of Veterans Affairs, the Department of Homeland Security, and others), with strong and continuing liaisons and input from the private sector, would seem likely to succeed. Requirements for interoperability between EMRs and other data sources; the use of standard protocols for inserting and modifying elements and extracting data related to guidelines, performance measurement, and research-knowledge expansion; and the involvement of researchers from AHRQ, NIH, and elsewhere in decisions being made about data elements in EMRs and connectivity between data sources are all areas in which a cross-departmental effort might be critical. While congressional jurisdictions might be an impediment to such an effort within the executive branch, the effects of HIT on the nearly $3 trillion healthcare sector could actually dwarf those of ATM adoption within the banking community.

To address the conflation of research and quality improvement, policies are needed that protect patients but do not unduly constrain the use of secondary data that can add to our generalizable knowledge. Focused, expedited reviews of quality improvement and/or research protocols that deal with secondary data could be done by groups other than the traditional IRB. To improve clinicians’ ability to use data, all medical and nursing students graduating after 2015 should be required to have the equivalent of an MPH degree, with a focus on population health and the use of individual and aggregated data in the care of patients. State and federal medical education funding (including Graduate Medical Education) could be tied to medical student and residency program participation in quality and resource use improvement training. Finally, a push is needed by the public and the research community to encourage boards and medical organizations to address deficiencies in the performance of practicing physicians (recertification).

Finally, to contend with the current lack of data connectivity, beyond requiring EMRs to have core capability to aggregate data across patients and to provide standardized outputs of data, the further development of HIEs, RHIOs, or other efforts at regional aggregation or exchange of clinical data is key. While supporting patient care at point of care delivery is the most important facet of this work, benchmarking, assessment, public reporting and rapid learning (both research and direct care related) should be incorporated into these efforts.

In conclusion, this appears to be a critical moment in the development of EMRs and EHRs, which have the potential to provide complete, real-world data to inform clinical practices, help to develop needed clinical effectiveness information, improve the systematic quality of care, and produce a rapid, evidence-based method of continuous practice improvement. Unless the substantial barriers to progress are addressed quickly and collectively, the United States may well fall far behind in yet another critical aspect of health care.


Peter K. Honig, M.D., M.P.H.

Merck Research Laboratories

Merck & Co., Inc.

The pharmaceutical industry is challenged with meeting the demands of an increasingly complex and evolving healthcare system. Regulatory, stakeholder, payer, and patient demands for increased data requirements, transparency, access, and value represent formidable issues in the areas of benefit–risk assessment, ongoing safety assessment, and comparative effectiveness. Several important initiatives are under way to address these challenges; however, significant opportunities remain that are amenable to research and policy remediation, including clinical trial and pharmacovigilance methodologies, data standards and access, as well as the perpetual challenge of education focused on translating evidence into behaviors.

The pharmaceutical industry is operating in a changing healthcare ecosystem. Although explicit regulatory registration evidentiary standards have not significantly changed (i.e., evidence of safety and efficacy demonstrated through adequate and well-controlled clinical investigations), regulatory and social acceptance of residual uncertainty around benefit–risk has changed significantly over the past several years. Increasingly, the FDA and other regulators around the world are exercising the precautionary principle and, at times, creating barriers to new drugs reaching the market. While this does not affect drugs with profound benefits in addressing unmet medical needs, some drugs occupy a grayer area of benefit–risk and are becoming harder to bring to market. Moreover, the interest in risk management has led to increased postmarket clinical trial and database commitments included as a prerequisite of approval.

Payers and providers also are increasing their demands for demonstration of value. The downturn in development of “me too” drugs is, in part, an appropriate outcome of the fact that most payers will not pay for these drugs unless there is an explicit demonstration of incremental value. The commercial failure of Exubera, an inhaled insulin product, and the reimbursement challenges experienced by follow-on TNF sequestrants for rheumatoid arthritis resulted from their perceived lack of demonstrated incremental benefit over existing therapies.

Along with these healthcare ecosystem changes, large pharmaceutical companies face continually rising costs of drug development, decreasing output of new therapeutics, and an increased number of companies competing in the fields of drug discovery and development. Basic and translational research is no longer the sole province of large integrated pharmaceutical companies but now occurs increasingly outside the walls of industry, in academic centers and smaller companies. There has been significant progress in drug development, with substantial advances in improved animal models of efficacy and toxicity, systems biology approaches to target identification, efficacy and safety biomarkers, dose–response methodologies, pharmacokinetic and pharmacodynamic modeling (exposure–response), clinical trial simulations, disease progression models, demographic representativeness in clinical trials, and genetic and environmental predictors of pharmacodynamic response (e.g., whole genome screening). In spite of these advances, drug development remains a high-risk, high-cost proposition.

The industry is facing challenges with regard to data transparency and data access expectations. Congress recently passed the Food and Drug Administration Amendments Act of 2007, which included language about data transparency, registration, and access. Many states also are involved in this issue, developing their own laws around disclosure and transparency. Major medical journal editors also are expressing their perspectives and implementing policies around registration requirements and independent validation of results. Internationally, the World Health Organization (WHO) also is weighing in on registration transparency. The balance between transparency and proprietary considerations in a highly competitive environment remains a significant concern to industry.

Of particular interest is public- and private-sector access to utilization and claims outcome data. While a concern to the field generally, this is of particular importance to industry because of the increasing need to access data to support necessary and required epidemiologic, pharmacovigilance, and outcomes research work using increasingly commoditized and proprietary data sources. Moreover, the data exist as decentralized, disaggregated, and nonstandardized clusters. This becomes a challenge, for example, in safety surveillance for rare adverse reactions, which requires analysis of large numbers of data records across databases.

Finally, the industry faces formidable issues in the area of re-establishing trust. Trust between and among healthcare sectors including but not limited to industry is quite low. In particular, much has been done to undermine the authority and the credibility of the provider in the eyes of the patient.

To address some of these challenges, several notable initiatives are under way. Clearly the FDA’s Critical Path Initiative has laid the groundwork for improved science-driven regulatory evolution; likewise, there is the Innovative Medicines Initiative (IMI) in Europe. Both advocate public–private partnerships in the precompetitive space as a means of addressing significant drug discovery and development challenges (e.g., preclinical safety biomarkers). Active comparators are increasingly being incorporated into clinical registration studies and post-approval clinical trials, in part to demonstrate incremental value. It is important to note that it is, and will always remain, a challenge to address every clinical question by means of randomized clinical trials. This has been recognized by the Institute of Medicine (IOM) and other groups and is the subject of a growing professional discipline around demonstration of absolute and relative clinical effectiveness. There also are some efforts under way to develop more structured approaches to benefit–risk assessment. While benefit–risk assessment will likely never be reduced to an algorithmic quantitative science, it is amenable to structured methods that can inform clinical and regulatory judgment. It must be acknowledged that benefit–risk assessment is contextual and, at times, relative to currently available therapies. Clinical science still lacks the ability to quantify comparative benefits even when we believe they exist. For example, there are many selective serotonin reuptake inhibitors and other serotonin reuptake inhibitors on the market for the treatment of major depression, but it has never been demonstrated that one works better than another or that there is variation in patient response to each drug. The lack of truly meaningful and sensitive clinical end-points, such as depression scales, can effectively blur differences. More work is needed in trial methodologies and in the validation of sensitive and relevant end-points to address these problems. The same challenge exists for the assessment of absolute and relative effectiveness. Such assessments are difficult to perform before a drug comes onto the market, and better methods are needed once it does. More insight is needed on the appropriate role for natural-use studies, cluster randomization, and other types of novel trial designs.

Large, simple efficacy and safety trials are often viewed as a panacea, but little work has been done to set standards for these types of trials. Fundamental questions, such as What is large? and What is simple?, remain unanswered. Perceived regulatory monitoring expectations confound efforts to simplify data collection and make these trials less simple than they could be. They are large, but they are not so simple, and they are extremely expensive. There also are important distinctions in the design and content analysis of large simple trials for safety. Issues such as the choice of relevant patient population, the relevant comparator, and the adequate sizing of such studies are important considerations. Nor is there uniform consensus on some other basic principles around large simple trials, such as whether to take an intention-to-treat (ITT) approach or a per-protocol approach. For safety trials, exposure is the important variable, and an ITT approach probably is not, in general, the appropriate one; this is in contradistinction to the established primary approach for evaluation of efficacy in large trials. Finally, who should conduct and pay for these trials? The NIH historically has taken up these large trials, but should others, such as the Centers for Medicare & Medicaid Services (CMS) or industry, also contribute? These sorts of fundamental issues have not been addressed.

It is encouraging that rigor and standards in pharmacoepidemiology and meta-analysis practice are recognized as issues that must continue to be addressed. Prespecification of hypotheses, scientific methods to control bias, and data analysis and statistical analysis plans are now widely accepted as standard practice. Independent replication of results has long been an evidentiary standard for clinical trials and is increasingly being accepted by the nonfrequentist community. Registries and sentinel and population-based pharmacovigilance systems are being developed, and equal attention is needed to improving the methodologies.

The application of Bayesian statistical approaches to pharmacovigilance, through the evaluation of spontaneous reports and population-based data, is an active field of research. An initiative involving collaboration between industry and the FDA is evaluating the potential of electronic medical records for postmarket surveillance efforts.
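
To make the idea concrete, here is a minimal sketch in the spirit of the Bayesian shrinkage methods applied to spontaneous-report databases (the counts and the pseudo-count prior are hypothetical, and this is not any agency's production algorithm). The observed report count for a drug–event pair is compared with the count expected if drug and event were reported independently, with a pseudo-count pulling sparsely reported pairs toward "no signal":

```python
# Illustrative Bayesian-shrinkage disproportionality score on
# spontaneous-report counts (hypothetical numbers throughout).
import math

def shrunken_log2_ratio(n_drug_event, n_drug, n_event, n_total, prior=0.5):
    """Information-component-style score: log2(observed / expected),
    with pseudo-count `prior` added to both numerator and denominator
    so that rare pairs are shrunk toward 0 (no disproportionality)."""
    expected = n_drug * n_event / n_total
    return math.log2((n_drug_event + prior) / (expected + prior))

# Hypothetical database: 1,000,000 reports overall, 2,000 mentioning
# the drug, 5,000 mentioning the event, 60 mentioning both
# (expected under independence: 10).
score = shrunken_log2_ratio(60, 2_000, 5_000, 1_000_000)
print(round(score, 2))  # clearly above 0 -> disproportionate reporting
```

Scores near zero indicate reporting consistent with independence; persistently positive scores flag pairs for epidemiologic follow-up rather than proving causation.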

Finally, the ultimate challenge that faces all of us is to improve the translation of knowledge into behavior. Evidence gaps may persist, but it is still frustrating that best practices and new evidence are not optimally incorporated into patient care. New research, practice guidelines, and medical product labeling all contain information that is important to consider in choosing patient care options. The translation of population-derived information into individual patient care is a challenge that is being addressed through EMR standards, computerized physician order entry (CPOE), and the teaching of evidence-based decision making, but there is much work to be done.

The medical education system may not adequately address needs in basic and clinical pharmacology, let alone clinical effectiveness. Concerns about new trainees’ ability to interpret sophisticated analyses of the medical and pharmacologic literature have been raised but not addressed at a national level, and current curricula do not include training in the incorporation of evidence into clinical practice. Changes in medical education may help advocates of evidence-based practice to achieve more improvements in care.


Mark B. McClellan, M.P.A., M.D., Ph.D.

Engelberg Center for Health Care Reform

The Brookings Institution

The recent public debates in Congress and in other settings on developing the capacity for comparative effectiveness research have generally focused on providing new funding and adding a new entity to the healthcare system. However, even if such an entity is established, perhaps making billions of dollars of new funding available, it is important to recognize that the scope and scale of existing and future evidence gaps exceed any single entity’s capacity to address all of the needs related to improving the availability and application of evidence to improve practice. Other approaches are needed, and, in this respect, taking better advantage of regulatory data offers a rich opportunity to improve our knowledge base.

A major theme of the larger efforts of the IOM Roundtable on Value & Science-Driven Health Care is that the core of a learning healthcare system is not something added through funding or new structures. Rather, it is something built into the system that improves the efficiency, quantity, and quality of the electronic data captured, enables the delivery of more sophisticated information in the actual delivery of health care, and establishes the routine capacity to learn from medical practice. Distributed data networks have been discussed as a way to facilitate this type of learning. Because this information will not be derived from traditional randomized clinical trials, support will be needed for infrastructure, data aggregation, and analysis, and for improving the relevant statistical methods. Given the slow movement in Congress on comparative effectiveness, a short-term priority should be to enhance the healthcare system’s capacity to generate data as a routine part of care and to use these data to learn what works in practice. This paper highlights two areas in which regulatory data and prior Congressional action might help to make this happen, and in which putting this data capacity into place may be feasible without billions of additional dollars.

Two examples are immediately relevant to this discussion. First, the recently passed Food and Drug Administration Amendments Act of 2007 (FDAAA) envisioned a new postmarket surveillance infrastructure. Second, Medicare data, which historically have focused primarily on administrative information for payment, have the potential to be collected and used in a more sophisticated and clinically relevant way. FDAAA does more than reauthorize user fees and expand agency regulatory authority; it envisions nothing short of an additional built-in infrastructure in our healthcare system for developing postmarket evidence. By 2012, an active postmarket surveillance system will be available to provide information about the experience of more than 100 million Americans. This represents a fundamental change in the way we monitor and follow up on suspected safety problems with medical products. It also has the potential to serve as a first step toward introducing a more routine infrastructure into the healthcare system that can be used to address questions about the use of products in different types of patients and populations, and potentially to address effectiveness issues as well.

This kind of system is increasingly feasible as we move toward more electronic data. The pressing need to improve safety surveillance capacity has been underscored by recent shortfalls of the existing passive surveillance system. For example, the current system’s dependence upon spontaneous reporting failed to detect important safety signals, such as the elevated rate of adverse cardiovascular outcomes associated with Vioxx. Simulations carried out by the HMO Research Network have demonstrated that, with an active surveillance system in place, this higher rate could have been detected in a matter of months rather than years. If the vision articulated in FDAAA is taken up and implemented effectively, the result could be more efficient detection, quicker action on drug labeling and use, and the ability to characterize adverse events much more quickly and precisely. These advances would, in turn, lead to more graded and timely regulatory responses from the FDA: not just pulling a drug off the market, but perhaps refining its labeling on the strength of increased confidence about how the drug is actually being used in the population. Finally, such a system opens up the opportunity to support improvements in evidence-based medical practice and to provide alternatives to the current approaches to addressing safety issues.
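
The intuition behind that speed-up can be sketched with a toy calculation (all rates, cohort sizes, and thresholds here are invented for illustration; this is not the HMO Research Network’s actual methodology). With active surveillance, cumulative observed events in an exposed cohort are checked against the expected background count every month, so a doubled event rate crosses a simple signaling threshold within months:

```python
# Toy sequential monitoring: signal when cumulative observed events
# exceed a crude Poisson-based bound on the expected background count.
import math

def poisson_upper_bound(expected, z=3.0):
    # crude normal-approximation bound: mean + z * sqrt(mean)
    return expected + z * math.sqrt(expected)

def months_to_signal(background_rate, true_rate, person_months_per_month):
    observed = expected = 0.0
    for month in range(1, 121):  # monitor for up to 10 years
        expected += background_rate * person_months_per_month
        observed += true_rate * person_months_per_month  # deterministic toy accrual
        if observed > poisson_upper_bound(expected):
            return month
    return None

# Hypothetical numbers: background rate of 1 event per 1,000
# person-months, true rate doubled, 1,000 exposed person-months
# accruing each month.
print(months_to_signal(0.001, 0.002, 1_000))  # signals at month 10
```

A passive system, by contrast, has no denominator and no expected count to compare against, so it must wait for clinicians to volunteer enough reports for the excess to become obvious.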

Several interesting pilot projects are under way to begin building this kind of infrastructure. Progress will require the development of standards and consistent methods for defining adverse events and pooling relevant summary data from large-scale analyses, as well as efforts to overcome issues that impede data sharing. Much can be learned about drug risks and benefits from observational studies of large population datasets; for questions that require randomization or other statistical approaches, these databases also have great potential to help design targeted trials or postmarket clinical studies. Perhaps eventually, with more efficient generation of information on risks and benefits, costly postmarket clinical trials can be used selectively to augment the routine postmarket surveillance system. In sum, the passage of FDAAA provides an immediate and rich opportunity to improve drug safety and postmarket surveillance, as well as to move the nation closer to a learning healthcare system.

The work of Medicare over the past few years provides another example of important efforts to build more evidence development into the existing healthcare system. Several national coverage decisions have used coverage with evidence development (CED) policy to encourage the generation of needed evidence of intervention effectiveness. As a result, some private plans, Aetna among them, have in some cases provided coverage in the context of developing better evidence on how conditionally covered treatments work in clinical practice. This policy has enabled Medicare to provide coverage somewhat more broadly, specifically in areas for which additional clinically relevant information was needed. Pertinent examples of this policy’s use and impact follow.

One type of CED involves the establishment of clinical registries that collect and house clinically sophisticated data augmenting the usual kinds of information that Medicare administrative data systems provide. Since 2005, Medicare coverage of cardioverter defibrillators has been conditional on the provision of clinical information deemed necessary for future coverage decisions. In this instance, the clinical characteristics of the patients receiving the implantable cardioverter defibrillator (ICD) were important in determining whether the treatment should be covered, and coverage requires that such information be systematically placed in a registry in conjunction with other Medicare information, such as complication rates and other aspects of longitudinal care. The resulting large-scale registry is currently being analyzed to answer important questions surrounding ICD use: which kinds of patients are actually receiving ICDs and how they differ from those included in clinical trials, what complication rates are occurring across different settings of care, and what the natural history is for a whole range of patient types. Similar registries have been established as a result of CMS CED decisions in a few other cases as well, including fludeoxyglucose positron emission tomography (PET) scanning.

A second type of coverage with evidence development involves providing needed support for clinical trials. CMS has long paid for the routine costs of care in clinical trials and has recently reiterated its policy to do so, but in certain cases CMS also will pay for the cost of treatment in trials conducted by the NIH and others. Examples include coverage of certain biologics used off-label or outside drug compendium indications for certain kinds of cancer and, more recently, of carotid stents in some patients with moderate blockages. These decisions are being made in lieu of the outright coverage denials that historically resulted when treatments lacked sufficient evidence for broad-based approval. In the context of a clinical study, CMS has more confidence that the benefits outweigh the risks and that, therefore, the treatments are reasonable and necessary for patients.

As a very helpful and inexpensive next step, Congress needs to clarify CMS’s authority to use these kinds of methods to develop better evidence on what the Medicare program is paying for. Doing so would significantly boost efforts that are already under way and help to reinforce similar steps taking place in the private sector. Bariatric surgery, for example, has been covered in many cases by private health plans in ways that promote better evidence development.

The efforts of the FDA and Medicare demonstrate how much can currently be done to build the capacity to develop better evidence into our routine healthcare system. Obviously, better statistical methods and approaches to pooling data will be needed to fully capitalize on these efforts, but dedicated effort is needed now to develop an ongoing data capacity, so future work will not be relegated to one-off studies of particular issues in which each investigator has to pull together databases or find some subsets of data needed to answer each question. These efforts pave the way for a much more systematic approach.

Integrated or distributed data networks will be particularly helpful in addressing specific kinds of questions. Although we seem to be developing at least some relevant evidence in some areas, we do not seem to be developing very good evidence on how to get medical practitioners to follow the best available evidence. It is not enough to develop evidence on which treatments are or are not appropriate for particular kinds of patients. The true impact of evidence-based medicine will come through the development of evidence on what we can do to influence and support the delivery of health care that reflects the best use of resources: applying the best evidence to achieve the best outcomes for patients at the lowest overall cost.

The work of Elliott Fisher, Jack Wennberg, and their colleagues at Dartmouth has provided an in-depth analysis of the geographic variations in the costs of treating Medicare beneficiaries and in the utilization of services. The major source of these variations is not intensive treatments such as bypass operations, or the use of one drug instead of another in a broad population of patients, but rather the many subtle, built-in differences in medical practice for patients with chronic diseases. For example, how often should patients with diabetes be referred from a primary care doctor to a specialist? How often should patients be seen in follow-up? Which lab tests should be ordered, and when? What imaging procedures are needed? What other minor procedures are needed, and how often should they be performed?

Although they are not high-profile, intensive medical technologies, those kinds of treatments account for a surprisingly large share of area-to-area variations and costs. They also are areas for which it has been particularly difficult to develop evidence: no medical textbook answers the questions of which lab test should be ordered, when, and how often one should see patients with diabetes. There ought to be a way to develop better evidence on approaches that can influence how patients are treated, since all of these kinds of treatments, whether lab tests or revisits, are appropriate for some kinds of patients. It would be very interesting and useful to know the answers to these types of questions. Information is needed not just on whether a patient gets a treatment, but on which kinds of interventions, such as payment reforms, formulary reforms, or care management programs, can influence how a population of patients is being treated. These kinds of incremental differences in medical practice are very difficult to analyze through traditional randomized clinical trials; but putting in place better infrastructure for collecting data and developing evidence longitudinally, on the actual treatments that populations of patients are receiving, offers the opportunity to transform how care is delivered and to improve health outcomes.

There are many other examples of how regulatory data can be built into the ongoing capacity to develop better evidence on what works. But these particular examples and opportunities are worth emphasizing because of their tremendous potential for learning more about what is going on in actual medical practice.


J. Sanford Schwartz, M.D.

School of Medicine & The Wharton School

University of Pennsylvania

Large amounts of data generated and supported by public investment provide exciting and innovative opportunities to inform clinical and comparative effectiveness assessment. Despite the potential to increase the clinical value of existing information, substantial barriers exist to the optimal use of these data. Enhanced coordination in the development of publicly generated data, both within and across agencies, can reduce overlap and redundancy while expanding the range of issues addressed and the information available. Integration of existing publicly supported research and clinical datasets should be facilitated, standardized, and routinized. Access to data generated by public investment, including data generated by publicly funded investigators, should be expanded through the development of effective technical and support mechanisms. The increasingly restrictive interpretation and implementation of the Health Insurance Portability and Accountability Act (HIPAA) and related privacy concerns, the growth of Medicare HMOs, and the increasing commercialization of private-sector clinical databases are posing new problems for secondary data analysis, have the potential to undermine comparative effectiveness research, and threaten the generalizability of research findings. Practical, less burdensome policies for secondary data that protect patient confidentiality, expansion of Medicare claims files to incorporate new types and sources of data, and facilitated, lower cost access to private-sector secondary clinical data for publicly funded studies need to be developed and implemented.

Concerning evidence-based comparative effectiveness, if our ultimate objective is to answer clinically relevant questions, most researchers are likely to agree that while randomized controlled trials (RCTs) are necessary, they are not in and of themselves sufficient to answer all of our questions. Comparative effectiveness is context dependent. The key questions can be distilled into very simple language: What is being evaluated? How is it being used? For what purpose? Why? For whom? When? Where? Our focus should not be so much on whether a study is “good” or “bad,” but rather on two central, interrelated questions: What is the question we are asking, and how can we best answer it? Similarly, there are no “good” or “bad” data. Again, our focus ought to be on central questions: How do we use those data, and what do we use them for? More important, how do we interpret them? The most significant problem in clinical effectiveness research is not that we employ poor methodology, but rather that we fail to ask the right questions.

Working backward from a problem, one needs to structure the decision, identify the information needed, and thereby identify the gaps in the data. One way to use quasi- and nonexperimental data is to inform where we should focus our clinical trials. In part we use empiric methods, and in part we use subjective expert opinion; a combination of the two is probably optimal. The challenge for us is to use the whole constellation of available methods. The development of a National Problem List, which has been discussed in other contexts within the IOM, would be another avenue to push us forward.

Cost effectiveness in undertaking comparative effectiveness research is essential: we need to know how much better something is and how much more we are going to have to pay for it. Nonetheless, we do not have enough money to do all of the clinical trials that we need and would like to do, no matter how efficient we become in conducting RCTs.

One of the paradoxes of research today is that while we have large amounts of data and significant opportunities, there are real barriers to research effectiveness. As a key funder of research, however, the federal government can play a pivotal positive role in addressing some of these barriers and helping the research community take full advantage of the opportunities. The government is very good, for example, at enhancing the coordination and development of data within and across agencies, and it could take steps both to reduce overlap and redundancy and to expand the range of issues addressed and the scope of the information available to address them. There is an opportunity here for the government to formally review the type and scope of data it collects, determine where gaps exist and what opportunities exist to link databases, and then take steps to fill those gaps. Similarly, the government could expand the RCT registry to include all comparative effectiveness research; any study that says anything about safety and effectiveness should be listed. Such a registry should include the protocol, so that other researchers would not have to take time to decipher whether they were looking, for example, at a post hoc analysis, preexisting data, or preexisting hypotheses. That kind of information leads to very different implications for how we interpret the information in front of us. Finally, the government, through AHRQ, CMS, NIH, and FDA, for example, also could play a role in defining and prioritizing research problems. We continue to struggle with the design of models; they do not so much provide answers as help us ask better questions, or they bound the estimates and give us confidence intervals rather than precise answers. One role that government could effectively play would be the development of models to inform RCT priorities, needs, and design.

Originally some of us hoped that HIPAA would help make the system more rational, but in fact it is becoming a major barrier to doing research. Part of the problem with HIPAA is its excessively restrictive interpretation and implementation. This exists on both the public-sector and the private-sector sides, but ultimately only the federal government can clarify matters and issue guidance on appropriate use. In studies using CMS data, for example, researchers are finding that it can take 9 months or more to clear an IRB and get through the Research Data Assistance Center. Often there are issues with data at home institutions, too, based on fear of lawsuits. Restrictive implementation is too cumbersome and simply takes too long, in part because the system essentially asks us to start over every time. In addition, it is expensive: some estimates are that HIPAA-related forms, processes, and regulations consume some 5 percent of the budgets of secondary claims analyses. In short, HIPAA has become an impediment to useful research. Moreover, HIPAA creates a very real risk insofar as some use it as a screen to avoid practices that they do not want to follow but that absolutely should be followed. Only the federal government will be able to resolve these issues.

In terms of recommendations for privacy protection, there are viable options for practical, less burdensome policies for secondary data that protect patient confidentiality. We can have institutionally based agreements. We can expedite IRB review for secondary data, remembering that primary and secondary data differ and that the ethical and safety issues involved are very different: there is an order-of-magnitude difference between the potential harm of exploring data containing no information identifying a patient and that of exposing someone to an active treatment, yet most IRBs do not seem to make that distinction. It would be extremely useful to have HIPAA guidance for private data clarified and to extend federal data-use agreements regarding secondary data to institutions. There is now pressure from CMS to return data when a study is done, and we need to recognize that investigators should be able to keep data to answer questions about the study that are raised after the results are published. In addition, when researchers request data, we theoretically request only the minimal data needed for the study, but of course we do not always know what the minimum is; there is, therefore, always the potential for a lost opportunity to explore data in more depth.

Among other threats to access to secondary data is the issue of reduced access to patient-level data. With the growth of Medicare HMOs and the increasing concentration and commercialization of private-sector clinical databases, an overarching question is this: Who owns the data? Many researchers are concerned about the increasing concentration of these data and the narrowness of the funnel through which they can be reached. A junior faculty member, for example, with a simple question about the Medicare drug cap vis-à-vis copayments needs $250,000 to buy the commercial data necessary to address her problem, for research that in time and effort will cost just $100,000; the economics do not add up, and consequently she is unable to do her study. In fact, most investigators cannot get access to these data. The private sector sometimes uses HIPAA as an excuse not to share data. Access threats like these have the very real potential to undermine comparative effectiveness research itself and, moreover, to threaten the generalizability of research. We need to see networks opened to a broad range of investigators, and we need direct access to databases. Again, a role for government would be to work with the holders of private databases to create processes and systems that lead to more open and affordable access and to long-term, viable solutions to these problems.

In terms of recommendations to enhance data availability and access, several come to mind. We need to facilitate lower cost access to private-sector secondary clinical data for publicly funded studies. We need to increase public–private partnerships (e.g., the Interagency Registry for Mechanically Assisted Circulatory Support—INTERMACS). We need to develop standardized data elements and definitions across payers and providers. We need to actively explore future opportunities for data aggregation and sharing. We need to incentivize sharing and access.

Finally, an effectiveness study registry is needed and could be substantially supported by the development of better data files at the NIH. Every NIH grant carries the requirement that its data be made available to colleagues for as long as 2 years after a trial ends, yet this practice is not widely followed. The NIH should develop reasonable guidelines for data sharing and then enforce that requirement, perhaps even with modest financial incentives as a carrot, and perhaps on a per-use basis. In general, we as investigators have to be more willing to share our data. How one protects intellectual property is one thing, but we need to understand that just because we collect data does not necessarily mean that we own it forever.

Medicare data can and should be enhanced. For example, Medicare claims files could be expanded to include lab values, imaging results, and Part D data. Disease cohorts could be created by expanding the creation of integrated public-use data, using the Surveillance, Epidemiology, and End Results (SEER) Program, CMS, and the Department of Veterans Affairs as models. And electronic data transmission should be supported.

In terms of data linkage and integration, support is needed for effectiveness-related data collection and analysis for publicly accessible, federally funded data, especially prospectively collected data. Government should do a much better job of routinely integrating databases. One approach would be to give a government panel the responsibility of identifying the data that can be integrated across surveys and with Medicare data, and of seeing that this is done unless there is a compelling reason not to. Routine linkage of clinical and research data also could be facilitated, for example in NIH-sponsored RCTs and in public-use versions of registries and surveys (NIH, AHRQ, CMS, VA, Centers for Disease Control and Prevention, National Center for Health Statistics, and possibly FDA). Medicare ought to be the model and the routine.

Finally, investment in methods is needed at the federal level, particularly for the development and evaluation of innovative methods to assess comparative effectiveness. The validity of quasi- and nonexperimental methods, including simple and complex models, adjustment procedures, and Bayesian approaches, also needs to be assessed in conjunction with RCTs. For clinical trials, there should be a policy of funding some of these methods concurrently, to see what they would have shown compared with what the trial shows. Broad experimentation with quasi-experimental and practical RCTs is also needed.

Einstein said that in the midst of every challenge lies opportunity. So it is today, with respect to the optimal use of health data. We must find ways to contend with the current stalemate between the great potential of large amounts of existing data and the many barriers that prevent the access needed to explore their utility for comparative effectiveness research.


Alan M. Krensky, M.D.2

Office of Portfolio Analysis and Strategic Initiatives, NIH


Evidence-based medicine requires the integration of clinical expertise and research and depends on an infrastructure that includes human capital and organizational platforms. The National Institutes of Health (NIH) is committed to supporting a stable, sustainable scientific workforce. Discontinuities in the pipeline and the increasing age at which new investigators obtain independent funding are the major threats to a stable workforce. To address these concerns, the NIH is developing new programs that target first-time R01-equivalent awardees, such as the Pathway to Independence and NIH Director’s New Innovator Awards; approximately 1,600 new R01 investigators were funded in 2007. NIH-based organizational platforms are both intra- and inter-institutional. The Clinical and Translational Science Awards (CTSAs) fund academic health centers to create homes for clinical and translational science, from informatics to trial design, regulatory support, education, and community involvement. The NIH is in the midst of building a national consortium of CTSAs that will serve as a platform for transforming how clinical and translational research is conducted. The Immune Tolerance Network (ITN), funded by the National Institute of Allergy and Infectious Diseases (NIAID), the National Institute of Diabetes & Digestive & Kidney Diseases (NIDDK), and the Juvenile Diabetes Research Foundation (JDRF), is an international collaboration focused on critical path research from translation to clinical development. The ITN conducts scientific review, clinical trials planning and implementation, tolerance assays, data analysis, and identification of biomarkers, and provides scientific support in informatics, trial management, and communications. Centralization, standardization, and the development of industry partnerships allow extensive data mining and specimen collection.
Most recently, the nonprofit Immune Tolerance Institute (ITI) was created at the intersection of academia and industry to speed scientific discoveries into marketable therapeutics. Policies aimed at building a sustainable research infrastructure are critical to support evidence-based medicine.

Progress in modern science is increasingly dependent upon robust infrastructure, including human capital, facilities, and organizational structure. Policies aimed at recognizing gaps and redundancies and at improving infrastructure are fundamental to advancing knowledge and translating it to human disease. Evidence-based medicine requires close attention to infrastructure needs. I highlight three areas: (1) the pipeline of investigators, (2) “homes” for clinical and translational medicine, and (3) a model for translational and developmental networking.

The Pipeline

The NIH is committed to supporting a stable and sustainable workforce. Recent analyses raise concerns about the increasing age at which new investigators are able to become independent and the general “aging” of the scientific workforce (Figure 6-1 and Table 6-1). These findings raise the question of whether we will have a sufficient number of new investigators to carry out health-related research in the future. Close attention to this issue and implementation of appropriate interventions are required. The goal is to move new investigators to R01-type support and independence earlier in their careers. Strategies to accomplish this goal include accelerated notification of review outcomes, to permit more rapid turnaround of revised applications, and the specific targeting of 1,500 new R01 investigators for 2007 and of the 5-year rolling average in subsequent years. Award mechanisms aimed at developing new investigators include (1) the Pathway to Independence Award (K99/R00), (2) the NIH Director’s New Innovator Award, and (3) Career Development Awards.

FIGURE 6-1. Changing demographics from 1980 to 2006 in age of medical school faculty and principal investigators (PIs) of NIH research project grants (RPGs).


SOURCE: Derived from IMPAC II Current History and Files and AAMC Faculty Roster System.

TABLE 6-1. Summary of Changes in NIH Principal Investigator (PI) and Medical School Faculty Pools, 1980–2006.

The Pathway to Independence Award recognizes the challenges of transitioning from postdoctoral trainee to independent scientist. Reports from the National Research Council of the National Academies (Bridges to Independence: Fostering the Independence of New Investigators in Biomedical Research and Advancing the Nation’s Health Needs: NIH Research Training Programs) highlighted the need for enhanced efforts to foster the transition of postdoctoral scientists from mentored environments to independence (National Research Council, 2005a, 2005b). The K99/R00 award provides up to 5 years of support in two phases. The initial (K99) phase provides 1 to 2 years of mentored postdoctoral support. The second (R00) phase provides up to 3 years of independent research support and is activated when the awardee accepts a full-time tenure-track (or equivalent) faculty position. Applicants must be in postdoctoral positions and may be at nonprofit, for-profit, or governmental agencies, including intramural NIH laboratories. Both U.S. citizens and non-U.S. citizens are eligible.

The NIH Director’s New Innovator Award is designed to support new investigators who propose bold and highly innovative research approaches with the potential for major impact on broad, important problems in the biological, behavioral, clinical, social, physical, chemical, computational, engineering, and mathematical sciences. The NIH Director’s Pioneer Award was created in 2004 to provide additional means of identifying scientists with ideas that have the potential for high impact but that may be too novel, span too diverse a range of disciplines, or be at too early a stage to fare well in the traditional peer review process. The NIH Director’s New Innovator Award was created in 2007 to support a small number of new investigators of exceptional creativity.

Up to 24 awards of up to $1.5 million for a 5-year period (an average annual budget of up to $300,000 direct costs) plus applicable facilities and administrative costs are planned for Fiscal Year 2008.

In addition to these new initiatives, NIH Institutes and Centers support a variety of mentored career development programs designed to foster the transition of new investigators to research independence. These programs range from the Mentored Patient-Oriented Research Career Development Award (K23), for investigators who have made a commitment to focus on patient-oriented research, to awards for individuals with highly developed quantitative skills seeking to integrate their expertise into research relevant to the mission of the NIH (K25). All NIH Career Development Award programs are described in detail at the K Kiosk Internet site.

Clinical and Translational Research: Creating a New Discipline

The NIH developed the Roadmap for Biomedical Research to speed scientific discovery and its efficient translation to patient care by providing an incubator space for funding innovative programs that address scientific challenges (Zerhouni, 2003, 2007). Roadmap initiatives are expected to (1) have a high potential to transform how biomedical research is conducted, (2) synergistically promote and advance the individual missions of the NIH Institutes and Centers to benefit health, (3) apply to issues beyond the scope of any one, or a small number, of Institutes and Centers, (4) be unlikely to be undertaken by other entities, and (5) demonstrate a public health benefit in the public domain. The Clinical and Translational Science Awards (CTSAs), which arose from the Roadmap process, are designed to eliminate barriers between clinical and basic research, to address the increasing complexities involved in conducting clinical research, and to help institutions nationwide create an academic home for clinical and translational science.

Each applicant academic health center creates an individualized home for clinical and translational science, challenging some traditional approaches by linking clinical trial design, implementation, and regulation with biostatistics, informatics, ethics, training, and the community. These new entities serve as platforms on which healthcare organizations, industry, and government can synergize their efforts to shepherd biomedical discoveries to clinical application. They also offer new philanthropic opportunities for the development of cures for human disease. The NIH is in the midst of building a national consortium of 60 units with an annual budget of $500 million. Priority topics for the consortium include (1) creating open and interoperable information systems, (2) ensuring patient safety and openness to new approaches via Institutional Review Boards, (3) developing a new discipline of researchers with degrees in clinical and translational science, and (4) establishing a network of community engagement research resources and evaluating its impact. This experiment is forging new partnerships, encouraging new methods and approaches, and providing a platform for a coordinated nationwide network aimed at efficiently bringing new treatments to patients.

The Immune Tolerance Network: A Model for Critical Path Research

The ITN, established in 1999 with funding from the National Institute of Allergy and Infectious Diseases (NIAID), the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), and the Juvenile Diabetes Research Foundation (JDRF), is an international collaboration focused on critical path research from translation to clinical development (Bluestone et al., 2000; Rotrosen et al., 2002). The ITN solicits, develops, implements, and assesses clinical strategies and biologic assays to induce, maintain, and monitor immune tolerance in human disease. In May 2007, the ITN received a $220 million, 7-year renewal of its contract from the NIAID, which will be used to continue the ITN research mission worldwide. The network is a model of a team approach to critical path research and development, a key infrastructure for drug development. If “translational research” involves moving basic discoveries from concept to clinical evaluation, the critical path involves drug development via “proof of principle” studies, including clinical trials, assay development, and evaluation tools.

The ITN brings together more than 75 clinicians, investigators, and government officials to review and develop grant proposals, fund clinical trials and assay development, and provide the infrastructure required to test the applicability of basic discoveries to human disease. This includes scientific support via information systems, management and operations, and communications, as well as business development and financial administration. The network provides centralization and standardization of all activities, including data and specimen acquisition, handling, storage, and evaluation. Quality assessments and validation techniques meet industry standards. Industry partners in clinical research, technology, and drug development and supply have aligned to support more than 25 clinical trials. Standardization and reproducibility allow extensive data and specimen analysis both within and across clinical trials.

Challenges addressed by the network include mining of data, team development, development of new biomarkers and therapeutics, and enhancement of the commercial and intellectual potential of mechanism-based clinical research. Academics are working with industry and government to blur the boundaries that often constrain free movement from discovery to drug development. This new approach specifically addresses the growing concern that despite increasing global expenditures in drug research and development, the number of new drugs registered has continued to decline since 1996 (Figure 6-2).

FIGURE 6-2. There has been a decline in new drug registrations in the United States despite a continued, dramatic increase in research and development (R&D) expenditures since 1995. SOURCE: McKinnon et al., 2004.

The Immune Tolerance Institute: Completing the Task

Despite the progress in developing the ITN structure and function over the past 7 years, it became clear that the route from academia to industry was not completely bridged. To address this gap, the Immune Tolerance Institute (ITI) was forged to support academic–industrial collaboration to leverage discoveries into marketable therapeutics. It includes programs and services supporting the continuum from research and development, mechanistic assays, and standardization of data and specimen handling and analysis to intellectual property and product development (Figures 6-3, 6-4, and 6-5).

FIGURE 6-3. ITI strengths and opportunities to transform critical path science.

FIGURE 6-4. Collaborative workflow for ITI/ITN immune biomarker discovery and development.

FIGURE 6-5. ITI: At the intersection of academia and industry.

Together the ITN and ITI couple clinical trials and discovery research with milestone-oriented industry standards for quality control, standard operating procedures, and validated production methodologies. An integrated multidisciplinary organization has evolved to foster the team-building and collaborations required across many disciplines and areas of expertise. A solid platform of clinical service, mechanistic and informatics support, and an array of professional expertise extend the capabilities of the organization beyond either classical academic or pharmaceutical entities. This experiment has built new functionality aimed at improving drug development.

Practical Next Steps

  1. Monitor workforce status and proactively provide for a robust and appropriate pipeline of human capital.
  2. Develop the CTSA consortium as a platform for clinical and translational medicine.
  3. Expand the ITN/ITI model to drug development in general, transcending the divisions between academics, government, and industry.


Kathy Hudson, Ph.D.

Johns Hopkins University

Rick E. Borchelt

Shawna Williams

Genetics and Public Policy Center

The Human Genome Project created a wealth of genetic data, breathtaking in its promise but potentially overwhelming in its scope. Data generated by the Human Genome Project and successor projects already are transforming the practice of medicine, enabling better medical diagnoses and informing treatment options, including drug choices and dosage. Less than a decade ago, the hunt for genes responsible for illness was a painstakingly slow process limited primarily to identifying single genes that caused disease, such as Huntington disease and cystic fibrosis. The cost of DNA sequencing was so astronomical it required vast infusions of federal money. Today genomewide association studies point to whole complexes of genes that interact with each other and with the environment to affect human health, and the cost of sequencing an individual human genome in its entirety is widely anticipated to drop below $1,000 in the near future.

Absent from most discussions around how to harness these technical advances to accelerate discoveries and their translation into treatments has been the evolving relationship between researcher and study participant. Genomewide association studies themselves are large in scope and complex in nature: Conducting meaningful clinical effectiveness research requires collecting, sharing, and analyzing large quantities of health information from many individuals, potentially for long periods of time. To be truly successful, this research needs the support and active involvement of participants. As defined by current practice, however, the relationship between scientists and the public and between researcher and research participant is ill-suited to successfully leverage such active participation.

The roots of this uneasy relationship lie in the historical reliance that the biomedical community—and the science and technology community more generally—traditionally has placed in a “deficit model” of interaction with the public (Ziman, 1991). The basic assumption behind this model is that there is a linear progression from public education to public understanding to public support, and that this model—if followed—would cultivate a public enthusiastically supportive of research with “no questions asked.”

Since the era of World War II, the science community has operated under this information-deficit model, built on a one-way flow of information from the expert to the public with very little information flowing back the other way. The model has driven the communication of science and technology for decades despite its very obvious shortcoming: neither public support for research nor scientific literacy has increased notably in all of that time.

In fact, asymmetric communications practices have cultivated a public wary and mistrustful of the scientific enterprise (Millstone and van Zwanenberg, 2000), in part because they exacerbate the disconnect between scientists’ perceptions of the public and the public’s perceptions of scientists. A quote from a series of scientist interviews we conducted some years ago encapsulates the ingrained thinking of too many scientists: “I don’t think that the general uninformed public should have a say, because I think there’s a danger. There tends to be a huge amount of information you need in order to understand. It sounds really paternalistic, but I think this process should not be influenced too much by just the plain general uninformed public” (Mathews et al., 2005).

The dim view that scientists have of the public’s ability to contribute to science and science policy is reciprocated by public attitudes toward scientists; as Bauer et al. note: “Mistrust on the part of scientific actors is returned in kind by the public. Negative public attitudes, revealed in large-scale surveys, confirm the assumptions of scientists: a deficient public is not to be trusted” (Bauer et al., 2007). More than 40 percent of respondents in a 2004 national survey of some 4,600 U.S. residents, for example, did not trust scientists “to put society’s interests above their personal goals” (Kalfoglou et al., 2004). Specifically in the context of proposed genetic research, more than 40 percent of respondents in a national survey agreed with the statement that “Researchers these days don’t pay enough attention to the morals of society,” and nearly half believed that “Researchers are biased” and do studies to support what they already believe.

This observation frequently is borne out in focus groups on genetics conducted by the Genetics and Public Policy Center (GPPC); one quote, representing what we hope is an extreme point of view, comes from a focus group conducted a couple of years ago in connection with reproductive genetic technologies: “We are all responsible people here but some of them scientists, because of the science and because of their warped minds, will do something stupid.”

Clearly, one-way or highly asymmetric communication with the public is just not working. Writing in Science in 2003, American Association for the Advancement of Science Chief Executive Officer Alan Leshner summarized the problem eloquently: “Simply trying to educate the public about specific science-based issues is not working. . . . We need to move beyond what too often has been seen as a paternalistic stance. We need to engage the public in a more open and honest bidirectional dialogue about science and technology” (Leshner, 2003).

As a consequence, research-performing institutions increasingly are turning to public engagement and public consultation approaches to enlist public support (Bauer et al., 2007), a concept Jasanoff terms “the participatory turn” in science and technology (Jasanoff, 2003). One reason that probably motivates scientists to look to new approaches in communication and engagement is the continued belief that if the public really understood, it would support increased budgets, and grants would have a higher likelihood of being funded. This may well be true. Certainly awareness is a prerequisite to advocacy, although evidence is sorely lacking about how these two variables interact—the only thing that is clear is that the relationship isn’t a direct one (Lynch, 2001). But better public understanding of science can add value to science in many other ways (Mathews et al., 2005), leading to better-informed health decision making and to better recruitment for research studies, not to mention recruitment for the science and technology workforce. A better-informed public could provide meaningful input to help shape better policy and even to help design more meaningful public information efforts. Finally, a better-informed public could become more engaged in research and related policy and claim its rightful role as partner in this effort.

The goal of these two-way, symmetric communications models is mutual satisfaction of both parties, the research enterprise and its public—in this case, the researcher and the study subject—with the relationships that exist between them. This mutual-satisfaction approach emphasizes true bi-directional interaction and requires a commitment to transparency on the part of the organization; negotiation, compromise, and mutual accommodation; and institutionalized mechanisms of hearing from and responding to the public. It places a premium on long-term relationship building with all of the strategic publics: research participants, certainly, but also media, regulators, community leaders, policy makers, and others (Borchelt, 2008). These emerging models offer promise for scientists and the public to engage more fully and productively.

Unlike the unidirectional and hierarchical communication that characterized past efforts, public engagement can produce demonstrable shifts in knowledge and attitudes among participants. This shift may not always be in the direction scientists might expect or prefer, however. The expected outcome is different as well: rather than aspiring solely to, or insisting upon, the public’s deeper understanding of science, a primary goal of public engagement is scientists’ deeper understanding of the public’s preferences and values.

While it has become fashionable for many scientific organizations to say they’re doing “public engagement,” few encourage or engage in true dialogue with the public or publics. Unfortunately, they treat public engagement or public consultation as a box-checking exercise necessary before they get on with their “real” work (Leshner, 2006). Organizations rarely devote significant resources to meaningful symmetric communication (Grunig et al., 2002).

In terms of the translation of human genetics from research to clinical practice, public engagement can be undertaken at a number of points along the discovery pipeline (Figure 6-6). The beginning of this pipeline is happily bloated, as the discovery of genes and variants is currently expanding at a mind-boggling velocity. Using new knowledge of the human genome and these advanced technologies, scientists have developed genetic tests for more than 1,200 genetic conditions, and these tests are available in clinics (or, sometimes, even directly to consumers over the Internet). In genomics today, you can pay to have a million of your genetic variants analyzed and then sit at your computer and read your results. Companies such as deCODE, 23andMe, and Navigenics recently grabbed headlines when they announced their whole-genome scanning services.

FIGURE 6-6. Translational pipeline compared to public participation.

Although we see as yet very little impact of genetics on public health at the end of this pipeline, we remain extremely enthusiastic about new thinking that is emerging in this area. For example, a Centers for Disease Control and Prevention (CDC)-funded effort titled Evaluation of Genomic Applications in Practice and Prevention (EGAPP) is looking very carefully at genetic tests. Its goal is to use a systematic, evidence-based process to assess genetic tests and other applications of genomic technology in transition from research to clinical and public health practice. This past December, for example, EGAPP published its first major set of recommendations regarding the appropriate use of genetic testing to guide treatment of depression and identified gaps in knowledge (Evaluation of Genomic Applications in Practice and Prevention [EGAPP] Working Group, 2007). Importantly, the CDC simultaneously made funding available specifically to fill the identified knowledge gaps (Centers for Disease Control and Prevention, 2008).

The public seldom interfaces with research at the “upstream” end of the process, where knowledge gaps are identified and research is designed to address them. Rather, public engagement, if it exists at all, is clustered almost exclusively around health outcomes, principally comprising such items as information, advertising, and health campaigns. The next level upstream from simply informing is to consult, obtaining meaningful feedback from the public, and then to collaborate, to the point where the public is involved in issue identification, framing, prioritization, and agenda setting for research.

The GPPC has been involved in a pilot public consultation project well upstream in the pipeline. This project seeks to inform the design and implementation of a large, prospective cohort study proposed by the NIH and other federal healthcare agencies to look at the effects of genes, environment, diet, and lifestyle and to dissect how they interact with one another and contribute to health and disease. This study would enroll 500,000 individuals representative of the U.S. population; collect DNA and other specimens from them; conduct age-appropriate physical/developmental exams of each participant; interview them for lifestyle and behavioral information and to discern environmental exposures; and then follow the cohort for at least a decade. The collected data would be coded and entered into a very large database that researchers could mine for the study of complex diseases. Research results would be fed back into the database (Collins, 2004).

Advisory committees have suggested to the NIH that it would be a good idea to talk to the public first about the project (National Human Genome Research Institute, 2004; Secretary’s Advisory Committee on Genetics, 2007). Accordingly, the GPPC entered into a cooperative agreement with the National Human Genome Research Institute at the NIH to learn what the public knows and thinks about large-scale genetic databases and to pilot test engagement strategies; as part of this effort we are conducting interviews, surveys, focus groups, and town hall meetings. Ultimately these efforts will develop and evaluate informational materials for the public, assess public attitudes, engage citizens and community leaders, and test methods for initiating community-based dialogue.

A preliminary glimpse at results from just-completed focus groups for this project is telling. The public is far more science-savvy than we may have given them credit for—about the role of genes in disease, and about the interactions between genes, environment, and lifestyle. Focus group participants were able to appreciate the overall value of the study and the need for a large and representative study. They recognized that scientific research is an iterative process that sometimes gives false leads that draw researchers down the wrong path and that subsequent studies can provide contradictory results. A representative quote comes from a focus group participant in Philadelphia:

[There is] this “news flash” . . . but then they come out a couple of weeks later and they will say well “this is good to eat.” And then a couple of weeks later they will say “this goes as heart disease.” And then they say, “no, now new research has discovered this doesn’t.” You know, they do that all the time. Within a certain amount of time they come up with conflicting reports.

Our work with the focus groups provided some insights into general public attitudes toward participation in scientific research. Altruism is alive and well, albeit not in everyone. Views on participation were tied to general trust of science and government and concerns about loss of confidentiality and misuse of information. Whether the majority of people would participate hinges on the level of burden participation would impose, consideration of incentives or compensation offered for participation, and—the strongest predictor of people’s willingness to participate—what they would receive in terms of return of research results. A universal refrain in the focus groups was “show me the data.” Clearly, we are past the point of no return of results. If one participates in a population-based research study today, however, under the prevailing researcher–participant compact, odds are very good that personal research results will not be disclosed to study participants. This is clearly a challenge, but it also presents an opportunity for reassessing the nature of the communication flow in a research setting.

The ethos of many participants can be summarized in this quote from one of our focus groups: “If you’re in this whole study, I want to know everything that you all find out about me.” Of course, not everyone would want or demand access to their research results. For some, those results would be “too much information.” This view is summarized in another quote: “I don’t want to know every little thing that is wrong with me because I already have so much wrong with me to begin with. If I know more, I am just, people are going to be like wow, how do you live your life.”

We heard over and over again that people want choices in their participation. They want to set their preferences—and that exact phrase was used over and over again—analogous to how we set preferences on our computers. They want to be able to make decisions about how their samples and information would be used, about what kind of information they would get back, and how it would be returned.

The importance of being an informed and active participant was underscored by focus group discussions about the nature of the consent participants would provide. While researchers typically view consent as the process by which participants understand and agree to what they are getting into, focus group members felt that it is (or should be) a reciprocal documentation of the roles and obligations of both the participant and the research team. This speaks to the public’s underlying distrust of science and its practitioners and a desire to reflect on and protect their own interests. Perhaps most important, we heard a desire on the part of the public to be active participants, if not partners, with researchers.

Obviously, these early findings are qualitative data. The next steps in the project are to test the findings quantitatively in a survey of 5,000 Americans.

In addition to the NIH, the GPPC is working with the Department of Veterans Affairs (VA) on engagement around a project to build a research database of genetic samples linked to a medical records system. They asked us to talk first about the project with veterans. This quote from a veteran shows again the value of symmetric communication: “The fact that they have people sitting around talking about this in advance of even starting to build it tells me that they’re paying attention. . . . This right here is oversight, you know, at the get-go. So I think that that’s a really good thing; and I think ultimately it’s going to be one more way that veterans give something from themselves to make this country better.”

The NIH and VA are to be applauded for their commitment to consultation and engagement of potential research participants in the design and implementation of large-cohort genetic studies. But it should be remembered that simply obtaining information from the public is not sufficient either to claim that the public has been “engaged” or to engender public trust in or support of proposed research. Profound ethical issues attend the meaningful practice of public engagement: one cannot promise engagement and then merely make a show of listening. The commitment to symmetric communication falls short if the organization hears, but does not respond to, the concerns or issues of its publics. Mutual satisfaction requires that researchers be open to reasonable changes requested of them; likewise, effective—and ethical—public engagement programs in science should signal a willingness to incorporate public input into science policy, regulatory programs, and research design.


  1. Bauer M, Allum N, Miller S. What can we learn from 25 years of PUS survey research? Liberating and expanding the agenda. Public Understanding of Science. 2007;16(1):79–95.
  2. Bluestone JA, Matthews JB, Krensky AM. The immune tolerance network: The “Holy Grail” comes to the clinic. Journal of the American Society of Nephrology. 2000;11(11):2141–2146. [PubMed: 11053493]
  3. Borchelt R. Public relations in science: Managing the trust portfolio Handbook of Public Communication of Science and Technology. Bucchi M, Trench B, editors. New York: Routledge; 2008.
  4. Centers for Disease Control and Prevention. Genomic Applications in Practice and Prevention (GAPP): Translation Research (U18). 2008. [accessed February 20, 2008]. http://www/pgo/funding/GD08-001.htm.
  5. Collins FS. The case for a US prospective cohort study of genes and environment. Nature. 2004;429(6990):475–477. [PubMed: 15164074]
  6. Etheredge LM. A rapid-learning health system. Health Affairs (Millwood) 2007;26(2):w107–w118. [PubMed: 17259191]
  7. Evaluation of Genomic Applications in Practice and Prevention (EGAPP) Working Group. Recommendations from the EGAPP working group: Testing for cytochrome p450 polymorphisms in adults with nonpsychotic depression treated with selective serotonin re-uptake inhibitors. Genetics in Medicine. 2007;9:819–825. [PMC free article: PMC2743615] [PubMed: 18091431]
  8. Grunig L, Grunig J, Dozier D. Excellent Public Relations and Effective Organizations: A Study of Communication Management in Three Countries. Mahwah, NJ: Lawrence Erlbaum Associates; 2002.
  9. Jasanoff S. Technologies of humility: Citizens participation in governing science. Minerva. 2003;41(3):223–244.
  10. Kalfoglou A, Scott J, Hudson K. Reproductive Genetic Testing: What America Thinks. Washington, DC: Genetics and Public Policy Center; 2004.
  11. Leshner AI. Public engagement with science. Science. 2003;299(5609):977. [PubMed: 12586907]
  12. Leshner A. Science and public engagement. Chronicle of Higher Education; 2006. p. B20.
  13. Lynch M. Managing the trust portfolio. Paper read at the PCST2001 Conference, 2001.
  14. Mathews DJ, Kalfoglou A, Hudson K. Geneticists’ views on science policy formation and public outreach. American Journal of Medical Genetics A. 2005;137(2):161–169. [PubMed: 16082707]
  15. McKinnon R, Worzel K, Rotz G, Williams H. Crisis? What crisis? A Fresh Diagnosis of Big Pharma’s R&D Productivity Crunch. New York: Marakon Associates; 2004.
  16. Millstone E, van Zwanenberg P. A crisis of trust: For science, scientists or for institutions? Nature Medicine. 2000;6(12):1307–1308. [PubMed: 11100103]
  17. National Human Genome Research Institute. Design Considerations for a Potential United States Population-based Cohort to Determine the Relationships Among Genes, Environment, and Health: Recommendations of an Expert Panel. Bethesda, MD: U.S. Department of Health and Human Services; 2004.
  18. National Research Council. Advancing the Nation’s Health Needs: NIH Research Training Programs. Washington, DC: The National Academies Press; 2005a. [PubMed: 20669451]
  19. National Research Council. Bridges to Independence: Fostering the Independence of New Investigators in Biomedical Research. Washington, DC: The National Academies Press; 2005b. [PubMed: 20669450]
  20. Ozcam Y, Kazley A. Do hospitals with electronic medical records (EMRS) provide higher quality care? An examination of three clinical conditions. Medical Care Research and Review. 2008;65:496–517. [PubMed: 18276963]
  21. Pawlson LG. Health information technology: Does it facilitate or hinder rapid learning? Health Affairs (Millwood) 2007;26(2):w178–w180. [PubMed: 17259201]
  22. Rotrosen D, Matthews JB, Bluestone JA. The Immune Tolerance Network: A new paradigm for developing tolerance-inducing therapies. Journal of Allergy and Clinical Immunology. 2002;110(1):17–23. [PubMed: 12110811]
  23. Secretary’s Advisory Committee on Genetics, Health and Society. Policy Issues Associated with Undertaking a New Large US Population Cohort Study of Genes, Environment, and Disease. Bethesda, MD: U.S. Department of Health and Human Services; 2007.
  24. Solberg LI, Scholle SH, Asche SE, Shih SC, Pawlson LG, Thoele MJ, Murphy AL. Practice systems for chronic care: Frequency and dependence on an electronic medical record. Americal Journal of Managed Care. 2005;11(12):789–796. [PubMed: 16336063]
  25. Zerhouni E. Medicine. The NIH roadmap. Science. 2003;302(5642):63–72. [PubMed: 14526066]
  26. Zerhouni EA. Translational research: Moving discovery to practice. Clinical Pharmacology and Therapeutics. 2007;81(1):126–128. [PubMed: 17186011]
  27. Ziman J. Public understanding of science. Science, Technology, & Human Values. 1991;16:99–105.



Since this workshop, the Institute of Medicine (IOM) has released a report, Beyond the HIPAA Privacy Rule: Enhancing Privacy, Improving Health Through Research, that assesses the impact of the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule on the conduct of health research and offers recommendations for ensuring the efficient conduct of research while maintaining or strengthening the privacy protections for personally identifiable health information.


I thank Drs. Barbara Alving, Director, National Center for Research Resources, NIH; Jeffrey Bluestone, University of California, San Francisco, Director, Immune Tolerance Network; and Norka Ruiz-Bravo, Deputy Director for Extramural Research, NIH, for their helpful comments and data.




The Genetics and Public Policy Center (GPPC) thanks its funders, The Pew Charitable Trusts and the National Human Genome Research Institute, for making its public engagement work possible. Gail Geller, David Kaufman, Lisa LeRoy, Juli Murphy, and Joan Scott each played invaluable roles in its focus groups. Most importantly, the GPPC thanks those who have participated in its public engagement activities.


Unpublished data, Genetics and Public Policy Center.


Unpublished data, Genetics and Public Policy Center.

Copyright © 2010, National Academy of Sciences.
Bookshelf ID: NBK51018