4 Current Practices in Moving from Evidence to Decision

Panelists in this session were asked to address four questions: (1) What uses of genetics does your program consider? (2) What evidence do you need? (3) What kind of process is used to make the decision? (4) What infrastructure is needed to support the process?


James Perrin, M.D.

Harvard Medical School and Massachusetts General Hospital

Center for Child and Adolescent Health Policy

The Evidence Review Workgroup provides timely information to the Secretary’s Advisory Committee on Heritable Disorders in Newborns and Children to guide their recommendation decisions for adding conditions to uniform newborn screening panels. The workgroup is directly responsible to the Maternal and Child Health Bureau of the Health Resources and Services Administration, which staffs the Advisory Committee. The task is not to recommend specific screening tests, but rather to help the committee make decisions about whether to screen for a particular condition. The group is an interdisciplinary team of geneticists, state screeners, epidemiologists, consumers, and others, Perrin said.

To suggest a condition for consideration by the Secretary’s Advisory Committee for addition to the uniform screening panel, there is a nomination form on the Committee’s website. Completed forms are sent to the Maternal and Child Health Bureau staff for technical review, then to the Advisory Committee for evidence review. The Advisory Committee may choose to send the nomination to the Evidence Review Workgroup to carry out a more in-depth evidence review for that particular condition. The workgroup then reports back through the Maternal and Child Health Bureau to the Advisory Committee, which then makes its recommendations to the Secretary of Health and Human Services.

The questions on the nomination form address the incidence, timing of clinical onset, and severity of the condition, as well as the modalities available for testing, clinical and laboratory validation of the test, confirmatory testing, and risks of screening and of treatment.

Evidence reviews for most of the conditions considered for newborn testing are complicated by the rarity of those conditions, and therefore by limited evidence, as well as by questions of where that evidence may reside. These conditions often affect one in 10,000 live births, but many affect closer to one in 100,000 or one in 200,000 births. In most cases, no randomized controlled trials (RCTs) are available, and correspondingly, data for review of effective treatments will typically come from comparative case series. The rarity of cases and the severity of most of these conditions make RCTs very unlikely in the future. There is limited information on costs and benefits across all potential outcomes (including true and false positives and negatives). Access to any evidence that does exist can also present a challenge. In the case of relatively rare diseases, there may be a moderate amount of unpublished data, valuable data from Food and Drug Administration- (FDA-) regulated trials, and proprietary data from companies involved in producing treatments for particular childhood conditions.

Evidence Review Questions

When the Advisory Committee sends a condition to the Evidence Review Workgroup, the first step is to consider the rationale and the objective provided on the nomination form. Issues that are most critical are whether there are prospective pilot data regarding population-based assessments; whether the spectrum of disease is well characterized; whether there is a screening test capable of identifying the condition; and whether treatment is well described. The next step is reviewing any recent changes in treatment and/or screening.

To assess the evidence, the workgroup again reviews the condition and the test. The workgroup determines if the condition is well defined, what is known about the prevalence and incidence of the condition, and what is known about the natural history of the disease, including clinically important phenotypic or genotypic variations. The methods and accuracy of the screening test are reviewed, including whether the test can adequately distinguish between early- and late-onset conditions. The workgroup also reviews information about the potential harms or risks of screening, cost of screening, cost effectiveness of screening, and pilot testing and experience that exists in the literature or is provided by investigators. Perrin noted that although the workgroup asks these questions, in many circumstances the data are limited or nonexistent.

The next sets of questions move beyond the condition and the screening method to address confirmation of the diagnosis. The workgroup reviews the methods of diagnosis and the costs, both of diagnosis and of failure to diagnose the condition in a presymptomatic period. At the treatment level, the workgroup asks whether presymptomatic or early treatment improves outcomes, and what information exists about the benefits of treatment, both efficacy and effectiveness. Are the treatment options standardized or highly variable, are they readily available, and are they FDA approved? Again, potential harms or risks of treatment are reviewed, including existing evidence for false-positive screening results, or late-onset conditions. Finally, costs (of screening, diagnosis, treatment, late treatment, or failure to diagnose in the newborn period) are a main area of interest, but one for which in nearly all cases few data exist.

Evidence Review Methodology

As described above, the workgroup developed evidence questions, many of which apply broadly across conditions, although specific questions within a particular condition always arise, Perrin said. To answer the questions, the workgroup uses traditional methods, employing search engines to look for evidence from the past 20 years or so. The searches are supplemented by interviews with experts, including investigators studying the particular condition, and parents raising children with the condition. In some cases, Perrin said, investigators were willing to provide raw data and preliminary analyses. The workgroup, however, does not have the resources to conduct in-depth analyses of raw data. Special issues also arise concerning data formats and constraints on use. Whatever the workgroup produces for the use of the Advisory Committee becomes public record, which can be an issue for investigators who plan to publish the data they are sharing. Therefore, a clear agreement with investigators is needed that spells out what the workgroup is or is not allowed to share. Perrin noted that a number of medical journals seem willing to allow the workgroup to share a moderate amount of data with the Advisory Committee even though they know the data will be made public before publication.

The workgroup has also developed conflict-of-interest policies that apply to the workgroup staff, all consultants involved in the project, and anyone the workgroup talks with regarding a particular condition. Perrin noted that the process is similar to the bias and conflict-of-interest process that the Institute of Medicine uses for its committees, and goes beyond simple financial bias to understand other aspects that might influence a person’s decisions.

The workgroup engages condition-specific consultants. Investigators experienced in a particular condition testify to the workgroup and provide data, but are not involved in the analyses or interpretation of those data. Consultants do review the workgroup’s summary of their own work for accuracy, but do not review the interpretation of the data and do not have the opportunity to disagree with the workgroup’s interpretation. They can, however, do that in a public fashion once the workgroup’s data become publicly available to the Advisory Committee.

Systematic reviews generally focus on peer-reviewed, published literature (in English only). Review of “gray literature” (information not available through standard databases or indexes) is generally limited to unpublished studies and related data from pharmaceutical companies. Single case reports are excluded, but the workgroup has included small case series of four to six children. The workgroup uses traditional methods for data abstraction and quality assessment.

Results are provided in a format following the order and the content of the main questions listed above. Key findings are presented in summary and table form. The workgroup indicates where evidence is absent, and what information would be most critical for decision making. It is important to convey what is not known, and what the level of uncertainty is. Perrin reiterated that all decisions and recommendations are made by the Advisory Committee. The workgroup provides the evidence for them to make those decisions.

From the viewpoint of the Advisory Committee, the questions that tend to be most important are those related to the incidence and prevalence of the condition of interest, and the effectiveness of treatment, especially early treatment based on early identification. Other key questions involve the test itself:

  • How does the newborn screening test work?
  • What are the characteristics of the test?
  • What is known about false negatives and positives?
  • Can it distinguish between early- and late-onset populations?
  • Are there population-based screening data to determine clinical validity?

The Evidence Review Workgroup is in the midst of its third review, which addresses Krabbe disease, Perrin said. The first was Pompe disease, which has now been reviewed by the Advisory Committee. The workgroup recently submitted its review of severe combined immunodeficiency, which is under committee review.


Wylie Burke, M.D., Ph.D.


Burke asked Perrin about the decision to establish the explicit and formal separation of the evidence review from the process of making recommendations, noting that other processes often do not do this. Perrin said the statutory authority rests with the Secretary’s Advisory Committee, which was developed in response to the Children’s Health Act of 2000. A participant added that the workgroup has no authority to make recommendations to the Secretary.

A participant suggested that the availability of treatment would play a major role as evidence for or against newborn screening. He asked what would happen if there was a treatment, but one that was not widely available, noting that the establishment of newborn screening would most likely result in greater availability of the treatment. Perrin responded that the workgroup struggled with how to gather evidence on treatment availability. Ultimately, the workgroup works with investigators studying the particular condition to understand what is known about the availability of treatment. He said that a condition for which there was no treatment would likely not pass the nomination process and would not reach the workgroup for review.

Another participant said a fair number of the screening tests already being done have no standard treatment, and asked when the workgroup would review them. Perrin responded that the workgroup reviews whatever is assigned to it by the Advisory Committee. He noted that there are 29 conditions in the uniform screening panel recommended by the Advisory Committee in 2005, and whether some of those should be reexamined is a good question. The participant suggested that updates may be required by the National Guideline Clearinghouse.


Geoffrey S. Ginsburg, M.D., Ph.D.

Center for Genomic Medicine,

Duke Institute for Genome Sciences and Policy

Moving biomarkers from bench to bedside is a complex process. Although Figure 4-1 depicts the translation continuum as linear, a biomarker could follow myriad pathways, resulting in wide variation in the time it takes from discovery to clinical adoption. The OncotypeDX 21-gene assay, for example, took approximately 8 years to make the journey from discovery to use by clinicians for predicting prognosis in breast cancer patients. Contrast that with C-reactive protein, Ginsburg said, which was discovered in the 1930s and is only now making its way into clinical practice as a result of recent clinical trials.

FIGURE 4-1. The translational continuum for biomarkers. SOURCE: Ginsburg and McCarthy, 2001.

Ginsburg and Califf (2008) recently published recommendations for organizational changes that could enhance modern clinical epidemiology. Ginsburg said many of those recommendations could also apply to the translation of genome-based technologies. Such changes would include the establishment of coordinated, perhaps centralized, biobanks with standards both for sample handling and informatics; the aggregation of genomic technologies into core facilities accessible to investigators; the development of interoperable informatics systems, including electronic health records and molecular, clinical, and imaging data; increasing the cadre of skilled biostatisticians and improving physician training in quantitative skills; and better research and training in clinical decision making to understand the biological, psychological, and social aspects that go into making decisions.

Genome-Guided Clinical Trials

How can the evidence necessary for clinical adoption of genome-based diagnostics be obtained, Ginsburg asked, and how can this be implemented in health systems? Prospective genome-guided clinical trials are one means to develop the evidence required for clinical adoption. A prototype for such clinical utility studies is to consider areas where the current standard of practice is a choice between two or more therapies or combinations of therapies, and where, based on the clinical data, there is clinical equipoise. In these cases, the question is whether genomic or genetic information informs the choice of therapy A versus therapy B, and leads to improved health and economic outcomes over random selection of care. A prospective clinical trial that asks whether a genome-guided approach improves the choice between therapy A and therapy B could more clearly establish the clinical validity and utility of gene- and genome-based tests.

As an example, Ginsburg cited an effort to define a metagene that could predict recurrence in individuals with early-stage lung cancer (Potti et al., 2006). Using retrospective samples, a complex genomic (RNA expression) signature was established that can differentiate between individuals at high versus low risk for recurrence. Validation using several retrospective datasets showed approximately 85 to 90 percent accuracy of the predictive signature. Duke, in collaboration with the Cancer Therapy Evaluation Program and the National Cancer Institute, is now conducting a randomized, prospective Phase III trial of 1,500 patients with early-stage, non-small-cell lung cancer in the United States and Canada. In this trial, patients will have surgical resection of their tumor, and gene expression analysis will be conducted using the predictor. Following surgery, individuals predicted to be at low risk will continue under observation, which is the standard of care. Individuals predicted to be at high risk will be randomized to either observation or adjuvant chemotherapy. This trial design will test whether the genomic assay is accurate in its risk predictions by comparing the low- versus the high-risk groups receiving observation. The trial will also study the clinical utility of the risk information, assessing whether chemotherapy given to individuals identified as high risk actually improves survival.

In addition to this type of predictive prognosis model, Duke has developed a series of gene expression signatures that predict sensitivity and resistance to a series of commonly used cytotoxic chemotherapeutic agents, including docetaxel, Topotecan, Adriamycin, 5-FU, Taxol, and Cytoxan. Publicly available datasets were used to validate these signatures. These tumor-derived signatures could have a significant impact on care, guiding the selection of commonly used, standard-of-care cytotoxic chemotherapeutic agents across a variety of tumors.

A prospective Phase II clinical trial on the treatment of breast cancer in the neoadjuvant setting was initiated in summer 2008. Approximately 270 patients with Stage II/III operable, HER2-negative breast cancer will be enrolled. Following biopsy, participants will be randomized to standard of care or genome-guided therapy. The trial’s endpoint is pathological response at the time of surgical resection of the tumor following chemotherapy. In the genome-guided arm, patients predicted to have a high response rate to the combination of doxorubicin/cyclophosphamide (AC) will receive AC. Those predicted to have a high probability of responding to docetaxel/cyclophosphamide (TC) will receive TC. Patients with predicted low sensitivity to either regimen will be randomized to AC or TC. The study is powered on the presumption that a 40 percent response rate will be achieved in the individuals receiving genome-guided therapy. Again this is a situation where clinicians choose between two available and relatively equal standard-of-care regimens. The question here is whether a genome-guided treatment strategy can improve outcomes—in this case, pathological complete response.
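The powering logic described above can be illustrated with a standard two-proportion sample-size calculation. This is a minimal sketch, not the trial's actual statistical plan: only the 40 percent genome-guided response rate comes from the text, while the 26 percent comparator rate, the 5 percent two-sided significance level, and 80 percent power are illustrative assumptions.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per arm to detect a difference between two response
    proportions with a two-sided z-test (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_b = NormalDist().inv_cdf(power)          # quantile for the desired power
    p_bar = (p1 + p2) / 2                      # pooled proportion under the null
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical comparison: 26% response under random regimen selection
# versus the 40% response rate presumed for genome-guided therapy.
print(n_per_arm(0.26, 0.40))  # 176 patients per arm
```

The smaller the difference between the two response rates, the larger the required enrollment, which is why trials of this kind are typically powered around a presumed effect size rather than around equivalence.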

Pharmaceutical companies developing novel therapeutics can play an important role in translation of genomic information by adopting genomic technologies as part of clinical development. Ginsburg cited an ongoing clinical trial being conducted by Duke and Eli Lilly on advanced-stage, non-small-cell lung cancer, for which the standard of care is a combination of cisplatin and gemcitabine. Individuals predicted to be sensitive to platinum-based therapies (including cisplatin) will receive the standard of care. Individuals predicted to be resistant to platinum-based therapies will be treated with pemetrexed and gemcitabine. This strategy would potentially move a second-line cytotoxic therapy (pemetrexed) to first-line use in platinum-resistant populations.

Another approach a drug development company can take is to enrich the patient population in a trial in a way that allows development of more “targeted therapies.” Ginsburg described an upcoming two-stage trial of advanced-stage, refractory, non-small-cell lung cancer patients being undertaken by Duke and Bristol-Myers Squibb. In the first stage, all participants will receive dasatinib, an experimental therapy for non-small-cell lung cancer that inhibits the Src family of tyrosine kinases, and Src activity will be measured in the tumors of all patients. Gene expression signatures of Src pathway deregulation will then be assessed in these tumors. If Src pathway deregulation is found to correlate with response to the drug, then in the second stage of the trial only individuals whose tumors display the Src deregulation signature will receive dasatinib; the remaining patients will receive the normal standard of care.

These examples show how integrating genomic signatures into Phase II or Phase III clinical trial designs could lead to the inclusion of the genome-based treatment approach and potentially result in the incorporation of genomic information into the label of the therapeutic product, facilitating translation into clinical use.

Enabling Genome-based Research and Decision Making

To assist oncologists in making better treatment decisions, an individual profile could be derived from analysis of a sample of the patient’s tumor. Duke is developing a prototype clinical decision tool to help physicians understand what combinations might provide the best outcomes for cancer patients. The profile could provide the probability of response to many commonly used cytotoxic chemotherapeutic agents, and the probability of deregulation of known oncogenic signaling pathways. Such a profile could be used to rationally select an optimal therapeutic regimen from within standard-of-care combinations.

To assist with execution of genome-guided clinical trials, Duke has developed a specialized Clinical Genomics Studies Unit that houses Clinical Operations and Project Management, as well as Clinical Genomics Clinical Research Coordinators (CRCs) and Clinical Genomics Technology groups. The CRCs are specially trained to develop genomic protocols, draft informed-consent forms, develop patient and physician educational materials, navigate tissue samples through the complex health system, and assist with communicating the risks identified on the basis of genomic profiles. The genomic technology groups address assay standardization, ensure compliance with the Clinical Laboratory Improvement Amendments, develop the bioinformatics and algorithms necessary to deliver genomic information to the clinical trial, and establish longitudinal genomics data and sample repositories.

In addition to the prospective trials discussed above, Ginsburg offered a variety of approaches that could further enable evaluation of genomic markers. It is important to consider the value and impact of patient registries (for both common and rare diseases) for longitudinal follow-up, sample collection, and establishing robust phenotypes. Electronic health records offer the opportunity for population studies. A cooperative group mechanism could be established to consider and develop prospective genetic and genomic clinical trials. Industry participation provides opportunities through public–private partnerships, and through the ability to collect samples during clinical development, especially as part of Phase III and postmarketing (Phase IV) trials. Developing a national virtual sample biorepository that is linked to research and clinical data would also enable genomic marker evaluation.

An emerging concept at Duke is the Genomic Testing Advisory Committee (GTAC). The mission of the GTAC is to promote the appropriate evidence-based use of genetic and genomic tests in day-to-day clinical practice within the Duke University Health System. The GTAC reports to the Executive Committee of the health system, and serves as a resource to the Pharmacy and Therapeutics Committee and the Clinical Laboratories Committee, which are responsible for developing and deploying genome-based tests in the Duke system. The general process the GTAC follows is to provide an overview of the clinical evidence, risk information, use, and cost associated with the technology; develop a briefing document around the ACCE criteria (analytic validity, clinical validity, clinical utility, and ethical, legal, and social implications); make recommendations regarding incorporation into practice; and make recommendations about what types of educational tools and clinical decision support will be necessary to deploy the technology.

A computerized physician order entry tool was recently deployed at Duke for the use of warfarin. It provides relevant clinical and biological data to allow for the medication’s appropriate dosing. Physicians can order a genetic profile including the genes VKORC1 or CYP2C9 by checking the appropriate boxes. When the box is checked, a window pops up that provides further information about the evidence base and rationale for using these genetic tests in guiding dosing decisions. To really drive clinical adoption of potentially valuable genetic tests into the Duke system, this type of clinical decision support tool needs to be integrated into the computerized order entry process. At this time, GTAC is focusing primarily on pharmacogenetic tests, and tests that are included in the label of an FDA-approved drug.

Ginsburg summarized the strategy Duke has adopted to integrate genetic and genomic testing into clinical practice (Figure 4-2). In the discovery phase, Duke is encouraging investigators to focus their research objectives on clinical decisions, particularly in areas where there is clinical equipoise and uncertainty (i.e., where more than one standard of care exists and the choice is generally random), and where the impact on clinical care and economics may be significantly high. The translational phase focuses on developing prospective clinical studies that both validate and establish the clinical utility of genome-based tests. Trials incorporate the use of registries, and the analysis of both health and economic outcomes. The implementation phase involves assay and algorithm standardization, incorporation of educational tools into decision making, policy development, and establishment of public–private partnerships that can help commercialize and deliver a product. The strategy is supported by an enabling infrastructure composed of biorepositories, integrated databases, genomics core facilities, a clinical trials unit, computational and statistical modeling, and the GTAC. Duke’s strategy is a team approach that will help the health system understand which genome-based tests would be most useful to deploy in day-to-day health care practice.

FIGURE 4-2. An integrated strategy for genomic medicine from bench to bedside. SOURCE: Ginsburg, 2009.


Ralph Brindis, M.D., M.P.H., FACC, FSCAI

Northern California Kaiser Permanente

To demonstrate how genomics could be integrated into clinical practice, Brindis adapted a diagram by Califf et al. on the integration of quality into therapeutic development (Figure 4-3, adapted from Califf et al., 2002). Genomics can be incorporated at the concept stage. The concept leads to clinical trials, the results of which can be used to generate guidelines and performance indicators, after which the concept enters clinical use. Outcomes of performance are collected in registries, such as the National Cardiovascular Data Registry (NCDR), which can provide feedback to enhance the quality of performance. In addition to improving quality at a local level, outcomes can lead to generation of new ideas and concepts, leading to new clinical trials, and the cycle continues. In this way, the outcome data can be used to improve both quality and effectiveness of care.

FIGURE 4-3. The cycle of clinical effectiveness. SOURCE: Adapted from Califf et al., 2002.

NCDR is a suite of hospital- and office-based registries and quality improvement programs focused on measuring outcomes and identifying gaps in the delivery of quality cardiovascular patient care. The mission of NCDR is to improve care, provide knowledge and tools, implement quality initiatives, and support research. There are now six components in the NCDR (CathPCI, ACTION-GWTG, ICD, CARE, and IMPACT registries, described below, and the IC3 quality improvement program), and two registry studies (the SPECT MPI study, looking at implementation of appropriate-use criteria for better stewardship of health care dollars, and an ICD [implantable cardioverter defibrillator] longitudinal registry).

The registry portfolio has multiple users and uses. The American College of Cardiology (ACC) uses it to conduct educational needs assessments, develop scientific insights, conduct research, and generate publications, including clinical practice guidelines. Health plans have found it useful for developing participation requirements for preferred provider programs, and as a performance tracking tool. Researchers in academia, industry, and regulatory agencies are now actively using it for clinical research, outcomes research, and postmarketing surveillance. Hospitals and physician practices use it for quality improvement, performance measurement reporting, and utilization review.

The structure of NCDR is such that each program has a steering committee, a quality improvement subcommittee, and a research and publications committee. Each component reports to the NCDR Management Board and the Clinical Quality Council, which in turn are responsible to the ACC board of trustees. To ensure the quality of data entered, NCDR uses online field checks for completeness and consistency, electronic data quality reports, and a national audit program where nurse abstractors perform annual onsite chart audits.

As a national registry, NCDR is striving to be patient focused, interoperable, transparent, and efficient, and to maintain high data quality. To be effective, it is necessary to have coordination of all the key players involved in health care, such as professional societies, hospital organizations, payers (Medicare and private), and federal institutes and agencies.

NCDR Registry Components

CathPCI was the first registry developed. It houses outcomes information on diagnostic catheterizations and percutaneous coronary interventions (PCIs). There are 1,100 hospitals participating, which Brindis said represents a market penetration of 70 percent of the nation’s CathPCI hospitals. The registry incorporates nearly 9 million patient records, and 3 million PCI records. This robust databank has led to 30 published manuscripts, 4 in press, and 16 in preparation.

The ACTION Registry-GWTG is a registry for heart attacks, the result of a merger of the NCDR ACTION (Acute Coronary Treatment and Intervention Outcomes Network) Registry and the American Heart Association Get With The Guidelines program. There are now nearly 400 hospitals participating, contributing more than 100,000 patient records.

The ICD Registry is an ACC partnership with the Heart Rhythm Society and the Centers for Medicare & Medicaid Services (CMS) containing more than 330,000 patient records related to implantable cardioverter defibrillator implantation. An ICD longitudinal study is under way looking at different outcomes over time. The development of an atrial fibrillation ablation registry is under discussion as well.

The CARE Registry collects data on carotid artery revascularization and endarterectomy procedures. The IMPACT Registry (IMproving Pediatric and Adult Congenital Treatment) is the newest registry. Initial efforts are focused on catheterization procedures, but eventually the initiative should expand into a national registry for congenital heart disease.


The data NCDR provides to clinicians are usually in benchmark form, Brindis said. Practices can assess performance and look for opportunities for quality improvement. As an example, Brindis cited improvements in the timely administration of angioplasty following acute myocardial infarction (MI). Evidence-based care guidelines say that angioplasty should be performed within 90 minutes after an MI. In 2004, only 38 percent of the procedures entered in the CathPCI registry met this target. With that data feedback, and the application of quality improvement tools, more than 75 percent of procedures now fall within the guideline.

NCDR has developed a robust risk adjustment model that can be used to develop patient-centered consent forms that offer outcomes risk assessment based on the patient’s clinical scenario to help the physician and patient make decisions. NCDR data are also being used to assess the safety and efficacy of performing angioplasty in facilities without onsite cardiac surgical facilities. As another example of the use of NCDR data, Brindis noted that the FDA approached NCDR regarding the safety of hemostasis devices used in cardiac catheterization, for which the published literature was very limited. Within a month, NCDR had 90 centers committed to submitting data to the registry on use of the devices. The data showed twice the level of complications associated with one type of device, compared to all of the others. The results were published and the device in question was removed from the market (Tavris et al., 2007).

NCDR Research

NCDR is a perfect platform for effectiveness and translational research, Brindis said. He expressed the hope that the federal economic stimulus package, with its support for research on comparative effectiveness, will acknowledge the role national registries can play in the diffusion of new technologies. NCDR can be used to inform public policy development on issues such as evidence-based reimbursement. There is growing interest in assessing patient quality-of-life and functional status. There is also significant interest in assessing efficiency and return on investment, linking registry health data with administrative data from CMS and health plans, for example.

Going forward, a key task for national registries is developing a longitudinal strategy for how to assess outcomes and incorporate genomics. To do that, Brindis said, registries have to achieve data standardization and streamline data collection with electronic health records, decreasing the burden on hospitals. Other key elements include: evidence-based quality and performance measures; risk-adjusted outcomes, process, and structural measures; appropriateness and effectiveness measures; and financial data.

A national system of unique patient identifiers needs to be developed to fully realize the potential of collecting longitudinal data and evaluating outcomes, and relevant registries need to be linked. The goal is to convert procedural or episodic hospital-based registries into disease state patient-centered registries.

One NCDR effort under way with Yale University has merged hospital-based data from CathPCI with CMS claims data, looking at 30-day mortality after discharge for PCI. The central challenge is the lack of a unique patient identifier; the study has therefore used probabilistic matching based on patient admission characteristics.
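The probabilistic matching approach mentioned above can be illustrated with a minimal sketch. This is not NCDR's or Yale's actual method; the field names, weights, and threshold below are hypothetical, chosen only to show how indirect identifiers can be scored for agreement when no unique patient identifier links two databases.

```python
# Hypothetical probabilistic record linkage sketch: score candidate
# record pairs on agreement of indirect identifiers (admission
# characteristics) and accept pairs above a threshold as links.

MATCH_WEIGHTS = {            # illustrative agreement weights per field
    "hospital_id": 2.0,
    "admission_date": 3.0,
    "birth_year": 2.5,
    "sex": 0.7,
}
THRESHOLD = 6.0              # pairs scoring at or above this are linked


def link_score(rec_a, rec_b):
    """Sum the weights of fields that are present and agree."""
    return sum(
        w for f, w in MATCH_WEIGHTS.items()
        if rec_a.get(f) is not None and rec_a.get(f) == rec_b.get(f)
    )


def is_probable_match(rec_a, rec_b, threshold=THRESHOLD):
    return link_score(rec_a, rec_b) >= threshold


# A registry record and a claims record for (possibly) the same patient:
registry_rec = {"hospital_id": "H42", "admission_date": "2008-03-14",
                "birth_year": 1941, "sex": "F"}
claims_rec = {"hospital_id": "H42", "admission_date": "2008-03-14",
              "birth_year": 1941, "sex": "M"}  # one field disagrees

print(is_probable_match(registry_rec, claims_rec))  # True: score 7.5 >= 6.0
```

In practice such methods also weight disagreements, account for the discriminating power of each field, and tune the threshold to trade false links against missed links; the sketch shows only the core scoring idea.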

In another effort, NCDR is merging CathPCI data with the Society of Thoracic Surgery database on coronary artery bypass grafts, looking at clinical outcomes for coronary disease, and perhaps identifying better clinical approaches related to patients with multivessel coronary disease.

NCDR and Genomics

A variety of issues need to be considered regarding use of NCDR to aid translation of genomic technologies. NCDR operates under a quality improvement model that does not require Institutional Review Board (IRB) approval or patient consent, Brindis said. But once NCDR undertakes the longitudinal work necessary for genomic-based research, it must implement IRB approval and patient consent processes and ensure compliance with the Health Insurance Portability and Accountability Act (HIPAA). There needs to be linkage with DNA banks and genomic and biomarker information. Brindis noted that a registry is not time limited like a clinical study, and issues of financial viability need to be considered. One funding model could be public–private partnerships with industry or biobanks.

NCDR is just beginning to think about the use of genetics in decision making, Brindis said. NCDR prioritizes all opportunities by considering the science, the political landscape, potential partners, the available operational resources, and the business case for undertaking the project. One way NCDR could participate in the translation of genomic technologies would be for professional societies and NCDR, in partnership with academic centers, analytical centers, health plans, and clinical research organizations, to work toward merging NCDR data with data from other registries and with payer data (e.g., administrative, pharmacy, and national death data).


Wylie Burke, M.D., Ph.D.



A participant asked Brindis to elaborate on the public–private partnerships that NCDR has developed, specifically noting what the driver is for some organizations that are NCDR supporters.

Brindis said each registry has its own financial driver. CMS has been a partner for the ICD Registry because hospital participation is required for CMS reimbursement. This may not be a sustainable model, however, and NCDR is working to ensure there is a long-term viable strategy for the ICD Registry. Finding partners for postmarketing device surveillance is one approach. If CMS decides not to support the ICD Registry in terms of mandating participation, other payers, clinicians, and the FDA may find value in sustaining it long term. The CathPCI registry has no support from industry, Brindis noted, and each hospital pays about $3,000 to participate. An increasing number of states are mandating participation to oversee quality, particularly related to angioplasty at sites without onsite cardiac surgical facilities. For some other registries, the financial models are weak. Brindis noted that participation fees may be low or nonexistent, but it is very costly for a hospital to enter the data, perhaps $100,000 or more. NCDR itself may spend $20–$22 million to run the registries, but the nation's overall cost of participating is far greater.

Burke noted that a theme throughout the day has been that resources are limited. She reiterated Ginsburg’s point that randomized trials cannot be done for everything. Looking at different ways to maximize data collection is very important for creating the right combination of resources.

Brindis and Ginsburg agreed, and Ginsburg noted that the investment that the ACC and other participating organizations have made to build the NCDR infrastructure is phenomenal, and the data coming out of it are having a significant impact on medical practice. The question, he said, is how to take advantage of those resources for the evaluation of genome-based technologies.

Regulatory Issues

A participant asked whether pharmacogenetic assays were being developed as laboratory tests, rather than under an FDA Investigational Device Exemption (IDE). What would the impact be if the research were conducted under an IDE?

The question of whether these tests might be subject to regulatory oversight is one of the uncertainties that still pervades the field, Ginsburg responded. A key question is whether the test is considered high risk and will require a prospective clinical trial to prove its clinical value or clinical validity, or whether it is low risk and could be subject to a lower bar, such as a 510(k) submission (clearance to market by demonstrating substantial equivalence to a device the FDA has already cleared). A major impact of having to go through the regulatory filing process would be the cost of conducting clinical trials of a greater breadth than is currently being done. Duke is very open to working with commercial firms, which generally have the resources to enable a true regulatory pathway for these tests. This is a major strength of public–private partnerships in these arenas, Ginsburg said. From the academic vantage point, the interest is in proving the value of the science on health outcomes, but getting to the next level and developing a test that is broadly available would require commercialization.

A participant from the FDA clarified that an IDE is an exception allowing for demonstration that the test is reasonably safe without having to demonstrate that it is effective. One usually seeks an IDE when there is a “significant risk.” If test results are being used to select people for a particular treatment they would not otherwise receive, that presents a significant risk and should be done under an IDE, she said. It is a way to monitor the safety of a trial to ensure that people are not being exposed to more risk than they normally would have been.

Another participant commented on the use of algorithms for clinical decision support, pointing out that the FDA has expressed an opinion that some of these algorithms may be treated as devices and be subject to regulation. It will be important to understand the emerging regulatory environment related to genomic medicine because this could impact translation into practice, especially for complex genetic disorders where multiple polymorphisms impact expression of a phenotype or a response, and such algorithms could be used to aid decision making.

Data Quality and Use

A participant asked how, with multiple sources of input, NCDR protects its registries from the “Wikipedia phenomenon” or from the simple aggregate of error that may pollute the data. The sources that are inputting into the registry may, not deliberately but by error or ineptitude, submit data that are not good. The data are then part of the repository and become “chart lore,” where something becomes true because it is there. Is there uniform screening for entry of data?

Brindis responded that this is a real weakness of registry data versus data from RCTs. NCDR has completeness and quality checks, and an auditing strategy, but they are not perfect, he said. Some states are very robust in their auditing, such as Massachusetts, which uses NCDR as its platform, but then conducts extra audits with panels of clinicians reviewing coding. Data integrity is a valid concern. Registry data or observational data should not be overused to make decisions, he said. Registry data are just one part of the overall decision-making process.

Burke recalled Davis’s point (see Chapter 3) that the lack of specificity in ICD-9 codes for genetic tests significantly limits the usefulness of administrative data for research. She asked the panel to comment on the limitations created by how data are recorded, and on what registry data enable us to understand.

Brindis said the quality of registry data is much higher than the quality of administrative data, which, he said, pose greater challenges in terms of accuracy, particularly related to co-morbid conditions and other clinical descriptors that would be important in the genomics field. He expressed hope that there will be good longitudinal registries, and reiterated the need for a unique patient identifier. He also noted the differences between data from RCTs and from patient data registries. The average patient age in the NCDR registries, for example, is 8 to 10 years older than that of patients generally enrolled in RCTs. In addition, patients with co-morbidities are generally excluded from RCTs. This impacts the ability to develop evidence-based medicine for older patients or those with co-morbid conditions. Registries help add this information to the picture.

A question was asked about data on outcomes that are directly tagged to health, such as knowledge or satisfaction. Brindis said NCDR is just beginning to look at these areas. The first task is to look at quality of life and symptoms. In terms of patient satisfaction, large health plan organizations such as Kaiser have studied this, but NCDR has not addressed it.

Ginsburg said that in addition to clinical and economic outcomes, Duke is also looking at quality-of-life metrics in all of the studies being done. Separately, Duke has an employee-based program called Prospective Health, which uses traditional health risk assessment tools that relegate patients into higher or lower risk groups. The program is beginning to deploy some genetic testing for chronic disease conditions into that assessment. Also included are specific questions about workplace satisfaction, absenteeism, and overall satisfaction with the health program.

Genome-Guided Trials and Treatment

A participant asked about the Duke GTAC’s decision process regarding implementation of the genomic testing associated with physician orders for warfarin, noting that reports by the Evaluation of Genomic Applications in Practice and Prevention working group and Blue Cross Blue Shield concluded it was too soon to implement such testing for warfarin.

Ginsburg responded that Duke is aware of the reports cited but considers them inconclusive, and that the GTAC was continuing to work on its methodology and would use the warfarin case to develop additional data. For every thousand warfarin prescriptions written since the FDA approved the inclusion of the test in the warfarin label, only a handful of tests have been ordered by clinicians at Duke. He noted that there are several ongoing prospective clinical trials that will hopefully establish more definitive evidence as to whether these tests should be done. The warfarin example should be viewed as a test case for how to develop a system that integrates genetic testing information and decision making into physician ordering; it is not necessarily focused on whether this particular test would have an impact. The goal is to begin to understand the practice environment better so that issues can be addressed more directly when tests and other technologies are ready for implementation that could have a more definitive impact on outcomes.

A participant asked how difficult it would be to export the Duke model to less academically focused institutions, particularly those overseas where many clinical trials are conducted.

Ginsburg responded that conducting genome-guided trials at just one site, Duke, has been a significant challenge. Yet Duke has overcome many of the hurdles and is developing the standards that would facilitate expansion of genome-guided clinical trials to other sites. The goal is certainly to establish an exportable model to a variety of settings (both academic and private practices) and to establish a network of private practices across the southeast and then nationally that would accelerate completion of the studies.



ACCE is discussed by Teutsch in Chapter 2.