NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Institute of Medicine (US) Roundtable on Translating Genomic-Based Research for Health. Diffusion and Use of Genomic Innovations in Health and Medicine: Workshop Summary. Washington (DC): National Academies Press (US); 2008.


5 Opportunities and Constraints for Translation of Genomic Innovations


Stuart Hogarth

University of Nottingham

Finding something valuable can be difficult, Hogarth said. Innovations in genomics have been much more difficult and taken much longer to develop than many initially hoped.

Innovation is important, but most innovations fail, in many cases simply because they are not very good. Despite this, it is important to support innovation while acknowledging that many innovations fail and some never could have succeeded.

Some innovations are radical, but most are incremental. In thinking about innovation policy, one must think about the importance not just of major breakthroughs, such as finding a new biomarker and discovering its association with a disease or response to a drug, but also of incremental innovation. In the case of cystic fibrosis, for example, the development of robust, reliable test kits was just as important as the initial identification of the mutations in the cystic fibrosis transmembrane conductance regulator gene.

One must also think about the importance of the diffusion and use of genomic innovations. Indeed, diffusion and use may in many ways be more important than innovation itself. Science and technology innovation policy generally focuses too much on innovation and not enough on the diffusion and use of existing technologies. It might be more important, for instance, to ensure that everyone is using two or three really good new tests than to wonder how to encourage the use of another 100 that offer no significant advantage over existing technology.

Hogarth has been involved in a project to examine policy issues surrounding the evaluation and regulation of genetic tests. As part of the project, interviews and workshops were conducted with over 80 individuals from key stakeholder groups (industry, clinicians, patient groups, regulators, and policy makers) in Europe, Canada, the United States, and Australia.

The project classified policy issues into three areas: incentives and infrastructure for generating a robust evidence base for new innovations; regulatory mechanisms for the independent evaluation of evidence; and systems for ensuring that doctors, patients, health care policy makers, and payers have access to accurate and comprehensive information presented in a way that can be easily understood.

Other work in the area includes a project on information policy for pharmacogenetics and two reports for the Canadian government, one on regulating pharmacogenomics and another on the clinical application of molecular diagnostic technologies.

Genomic innovation transcends national boundaries. Multinational companies are involved, and there are global markets for the products. International research is being done by such organizations as the Human Genome Organisation (HUGO) and the Human Proteome Organisation (HUPO). There is transnational regulation and standard setting being carried out by such groups as the International Conference on Harmonization (ICH) and the International Organization for Standardization (ISO).

Innovations in genomics are affected by nongovernmental organizations, such as the Organisation for Economic Co-operation and Development and the World Health Organization, as well as by research funders with a global reach, such as the Bill and Melinda Gates Foundation. Innovation is also affected by transnational agreements such as the General Agreement on Tariffs and Trade and the Agreement on Trade-Related Aspects of Intellectual Property Rights. The European Union crosses national borders and heavily influences innovation within Europe.

The context for genomics varies around the world in terms of the organization of health care delivery systems, the regulatory frameworks for innovation, and the economic incentives and infrastructure. On the other hand, policy reports from across the world express, in different ways, a set of shared policy concerns.

The first such shared concern is that, in some cases, genomic innovations such as genetic tests have been moving into routine clinical practice too quickly and without enough independent evaluation.

The second concern relates to capacity building. Health care systems need capacity building through education and expansion of the workforce. There is a need both to enhance capacity in the specialty of clinical genetics and to diffuse capacity more broadly across the health care system.

The third concern is the opposite of the first one: Some observers worry that, rather than moving too quickly, innovation is moving too slowly because of regulation and gate-keeping. The activities of regulators need to be understood in the context of changing policy priorities. Limiting the inappropriate use of new technology and controlling health care expenditures continue to be major concerns surrounding the health care system; but in the last decade or so there has been a marked shift in emphasis, and now an imperative to support the health care innovation process is emerging as a significant policy concern. Licensing agencies, such as the Food and Drug Administration (FDA) and the European Medicines Evaluation Agency (EMEA), and technology assessment bodies, such as the United Kingdom’s National Institute for Clinical Excellence (NICE), are beginning to reconceptualize their roles in the innovation process. In particular, they are beginning to move from a strictly gate-keeping role, in which they evaluate evidence for the safety and effectiveness of new technologies, to a more collaborative or facilitative role.

This new policy orientation is taking concrete shape in programs such as the FDA’s Critical Path Initiative and the Innovative Medicines Initiative in Europe, which is linked to EMEA’s Road Map strategy. In the United Kingdom there is also the Clinical Research Collaboration, which is attempting to bring together key groups such as NICE, the National Health Service regulatory bodies, medical researchers, industry, and patients in order to create a new system of health care innovation.

Some of these initiatives involve new models of evaluation, while others involve new strategies for assisting the development of the evidence base for a new technology by providing either incentives (for instance, through conditional reimbursement) or the infrastructure for data collection. The new initiatives are often focused primarily on therapeutics, but they also have implications (and potential) for diagnostics innovation (not least because many are designed to support pharmacogenetic testing with new drugs).

The translation of pharmacogenomics into clinical practice has generally been slow. One factor that may be delaying the development of new pharmacogenomic products is a lack of clarity in the regulatory response to pharmacogenomic data. Other factors are the complexity of the science and various structural issues in the pharmaceutical industry. The result of these issues is what Hogarth referred to as a pipeline problem.

One pipeline problem can be found in drug discovery and development. Biomarkers are frequently seen as the solution to this problem, but there are also problems in the discovery and development of biomarkers themselves.

Regulatory agencies are uniquely positioned, given their responsibility for the development and enforcement of standards for drugs and devices, to shift the focus of the pharmaceutical industry from its preferred blockbuster drug model, which is aimed at broad populations, to a model that is more targeted. The regulatory agencies are also well positioned to encourage the participation of diagnostics companies in working toward this goal.

Pharmacogenomics, although providing an example of a novel approach to drug development, is but one aspect of a more general trend. The FDA’s Critical Path Initiative and EMEA’s Road Map both see pharmacogenomics at the heart of a broader agenda for the enhanced use of novel biomarkers in drug development, diagnosis, and screening and the review of existing clinical trial design and statistical tools for drug evaluation. This agenda represents a shift in the role of regulatory agencies from guardians of public safety to a wider public health mission as supporters of translational medicine.

In general, regulatory authorities are moving cautiously, seeking to ensure that they do not act prematurely in a fast-developing area of science. Still, a number of general trends can be identified. One of these trends is the establishment of new mechanisms for voluntary sharing of genomic data, which is being done outside the formal approval process at the FDA and is also being carried out in EMEA’s pharmacogenomics briefing meetings and within a similar process in Japan. A second trend is the development of guidance on regulatory processes and types of data needed. A third is organizational restructuring in regulatory agencies. A fourth is the approval of new products and the relabeling of existing ones. And a fifth is a broad-based move toward international cooperation and harmonization.

There can be no doubt that the FDA is leading the way, in part because it has prominent champions of pharmacogenomics among its leadership and in part because it has far greater resources to bring to bear on this field than any other organization. A comparison of FDA and EMEA, for instance, shows that the FDA has 20 full-time staff in its interdisciplinary pharmacogenomics review group, while EMEA has none in its equivalent pharmacogenetics working group. However, the EMEA is also very active, albeit at a slower speed and smaller scale, reflecting both the resources available and the complex political relationship between EMEA and European member states. Regulatory agencies in individual European member states have little or no interest in pharmacogenomics.

While there are shared concerns, there are also some major differences between the United States and Europe. For example, the FDA has devoted considerable resources toward and places great importance on the relabeling of existing drugs as a strategic plan for promoting the use of pharmacogenomics. Thus far, labeling updates have been advisory or cautionary rather than mandatory.

EMEA has been far more reluctant than the FDA to relabel. Its authority in this area is limited: where drug approval was granted on a state-by-state basis, updating the drug label appears to be the responsibility of the individual member states. Relabeling to include pharmacogenomic data does not seem to be a priority for the member states’ regulatory agencies.

Just as is the case with the FDA, the EMEA has approved drugs co-developed with tests (e.g., Herceptin). Unlike the FDA, however, the EMEA does not have a diagnostics division and has no legal authority over the regulation of diagnostic tests. Authority for the regulation of medical devices under the European In Vitro Diagnostic (IVD) Directive resides at the member state level. Therefore, while the EMEA can evaluate the performance of a test co-developed with a drug and can include strong recommendations for the use of testing as part of the drug label, it cannot mandate the use of a particular test kit. Furthermore, this regulatory gap means that the EMEA does not feel empowered to issue guidance on co-development.

No action has been taken at the European level by the expert groups that guide device regulation, and while the IVD Directive permits individual member states to take action when they deem it necessary, none has done so in relation to pharmacogenomics. EMEA officials, who are committed to the ideal of harmonization through the ICH process, would prefer to avoid a situation where individual member states take action.

This raises the issue of the need for a coherent and consistent regulatory framework for genetic tests. This has not happened on an international basis because of a series of regulatory gaps—different regulatory gaps in different countries. In the United States, for example, the primary regulatory gap is that, historically, the FDA has not regulated laboratory-developed tests as medical devices. By contrast, in Europe and Australia laboratory-developed tests are regulated as medical devices.

There have been some interesting developments over the past few years. Perhaps the most important one in Europe is that the IVD Directive will be revised and the risk classification system is probably going to change. It is likely that genetic tests will be classified as moderate risk rather than receiving the low-risk classification that they have in today’s system. In Australia there has been a complete revision of the IVD regulations, primarily to address the issue of laboratory-developed tests and genetic tests. Australia has issued some guidance concerning nutrigenetic tests. Elsewhere, Canada has provided some guidance on pharmacogenetic tests.

Industry has emphasized the importance of clarity in regulatory guidance and the need to strike a balance between enhancing regulations and the creation of a clear pathway to market. One problem in the European system now is that no standards or guidance for genomic tests are being generated.

Another issue of importance that crosses national boundaries is the issue of sustainable business models. The traditional IVD innovation model is an incremental process involving multiple parties. One starts with laboratory-developed tests and gradually works toward test kits at higher levels of automation. In keeping with this innovation model, the traditional IVD business model is based on intellectual property (IP) in test platforms rather than in biomarkers. Essentially, this business model leads to intense competition between companies, which offer different ways of testing for the same biomarkers. But with little protection on investment, relatively low margins, and little experience or infrastructure for clinical evaluation, the traditional sector is ill-equipped to undertake large-scale clinical studies. Furthermore, there is no economic incentive to invest in the kind of clinical studies discussed in this workshop. The use of a model with weak intellectual property rights in biomarkers has led to a situation where no one party is responsible for developing the data on the clinical validity of a new test. Academic studies and professional advocates have filled the gap, often promoting tests on the back of ad hoc clinical experience.

A lack of biomarker IP has created a disincentive for generating clinical data. Any one manufacturer who undertakes such clinical studies will be developing the market not simply for itself but also for all the other manufacturers, who will bear none of the risks but will share in the benefits. Indeed, the structure of the market is deliberately exploited by some IVD companies that specialize in being “fast followers,” the first on the market with a “me-too” test. The problem is summed up by the industry maxim, “It’s hard to be first.”

There are a number of disruptive new business models appearing among companies that develop and market medical tests, and there is some evidence that the emerging field of molecular diagnostics has disrupted the traditional model in several ways. Companies have appeared that develop genetic tests based on patent protection of the gene and its association with disease. The emerging market for gene expression and proteomic tests is based on similarly strong intellectual property rights claimed by companies such as Genomic Health, Agendia, AviaraDx, Correlogic, and Exact Sciences.

Strong intellectual property rights for biomarkers allow companies to charge higher prices for their tests for a longer period of time before the arrival on the market of competing products. Higher reimbursement rates are being seen for some new tests, including Genomic Health’s Oncotype Dx test, which costs $3,460, and Agendia’s MammaPrint test, which costs $3,000. When companies have greater certainty of a return on their investment, they are more likely to invest in substantial clinical studies to generate a proper evidence base for their tests. This anticipated return also gives small companies the leverage to access the money needed for clinical studies; they can raise money from venture capitalists or find a bigger partner, either a major diagnostics manufacturer, or a major reference laboratory. So IP has become an important incentive for funding clinical studies for new molecular diagnostics, and this new model can help to address oversight concerns about the lack of clinical data to support novel tests by offering clear incentives to generate that data.

There are concerns about this business model, however. The issue of pricing leads one to consider the particular regulatory challenges presented by monopolies. Market failures are a major justification for regulatory action, and it is a well-established tenet of regulatory practice that a monopoly is in itself a market failure providing strong justification for such action. In particular, regulators will try to protect against abuses of a monopoly by making sure that consumers have access to goods and services of decent quality at a reasonable price.

IP in biomarkers can lead to monopolistic provision of tests, and the homebrew loophole has made it even more attractive for companies to develop their tests as in-house tests, carried out on a monopolistic basis by the test developer or by two or three exclusive licensees. Many clinicians and laboratory directors have opposed this, arguing that monopolistic provision circumvents the traditional (informal) method of test evaluation, in which in-house tests offered by many laboratories are subject to peer review in the field. They are concerned that it creates a situation in which the only people who can perform a new test are those with a vested interest in its promotion, which in turn could lead companies, seeking to recoup their research and development investment, to make strong clinical claims for their tests while the evidence base is still developing. In recent years there has been repeated controversy over emergent IP-protected tests, with little agreement about when tests are ready for routine clinical use. The novelty and complexity of many of the tests involved only heightens these concerns.

Another new business model is the rise of consumer genetics. In this model companies offer their tests directly to consumers. Some have suggested that this business model is a way to overcome some of the hurdles of translation. By taking the test directly to consumers, for example, one does not have to address the issue of physician reluctance to adopt. Consumer genetics is a disruptive business model, Hogarth said, because it marks the first time that new tests go directly from research to a consumer offer. There is significant national and regional variation in regulatory attitudes to direct-to-consumer testing which may affect this business model.

Business issues faced by IVD companies have regional variations. For example, venture-capital funding is far more available in the United States than it is in Europe. Market size is also important; this can be seen, for instance, in the way that Canadian biotech companies that develop new tests will launch them first in the United States, next in Europe, and then, finally, in Canada. Of the 13 companies engaged in the gene-expression market, only 4 are located outside the United States, which illustrates the degree to which innovation is heavily focused on the United States.

In terms of the IVD industry and business models, then, there are a number of policy options to consider. One option is to support a radical restructuring of the traditional industry so as to move toward the new model of biomarker intellectual property and monopolistic provision of tests. Another option is to focus on developing mechanisms for addressing the market failures of the traditional model. Neither of these options will work on its own, however.

New business models are largely unproven and therefore cannot be relied upon. Intellectual property may turn out to be a poorly structured incentive, or it may be unavailable in many cases. What is needed is to take a case-by-case approach, supporting multiple innovation pathways. Such an approach is a much greater challenge for policy makers.

Another major issue is third-party reimbursement for genomic innovations. Companies are greatly concerned about this issue, not just in the United States but also in Europe. Reimbursement is a very powerful gatekeeper and has been the de facto regulator of genetic tests, since payers frequently set stricter evidence standards than those established by licensing authorities. The Roche AmpliChip is a good example. In 2004 it became the first pharmacogenetic microarray to gain FDA approval, but since that time the test has received negative assessments in a number of health technology assessment reports in the United States, Canada, and Europe.

Clearly reimbursement decisions can have a profound effect on clinical uptake of new tests. Yet if payers are informal regulators, then they face the same challenges as licensing authorities: how to wield that power responsibly and how to balance thorough evaluation with the encouragement of innovation. One option is conditional reimbursement—that is, paying for new tests but only on the basis that there is systematic data collection post market. Conditional reimbursement is one way of dealing with decision making under uncertainty and is also a way in which health care systems and payers can facilitate the process of evidence development. This model has been adopted by the Centers for Medicare & Medicaid Services (CMS) in its Coverage with Evidence Development program and it is being used in the Netherlands, Germany, and Australia.

As can be seen, there are shared problems and policy concerns that cross national borders. There are also some interesting examples of international cooperation and harmonization. Inevitably, however, there is international competition. Each country, even within Europe, wants to promote its own biotechnology, pharmaceutical, and diagnostic sectors. There is also variation in the capacity for action, based on many different kinds of structural issues.

The best innovators, Hogarth concluded, may ultimately not benefit the most from their innovations because they may not be the ones that are best at diffusion.


Deborah Marshall, Ph.D.

McMaster University

Value in pharmacogenomics has recently taken on new importance, Marshall said. There are a number of reasons for this. For example, there is broader availability of pharmacogenomic testing for some commonly used drugs. The FDA has issued guidance about maximizing translation of pharmacogenomics from the bench to the bedside, including requirements to submit pharmacogenomic data alone and in combination with tests and treatments. The Critical Path Initiative, which is intended to address the pipeline problem of getting pharmacogenomics to the bedside, is playing a role as well, and there are continuing concerns about adverse drug reactions. Finally, there is concern about increasing prescription drug costs.

The new buzzword is value. Dr. Harold Varmus, former director of the National Institutes of Health, has asked, “How much will the expanded use of genetic information further escalate the cost of healthcare, and who will pay for it?” (Varmus, 2002). These questions are not surprising given that there has been an 80 percent growth in the number of new drugs being prescribed, a 100 percent growth in new device patents, and a 1,500 percent growth in the number of diseases for which genetic tests have been identified (Ferrusi, 2007).

What is “value” in genomic-based translational research? The Secretary’s Advisory Committee on Genetics, Health, and Society (SACGHS) has suggested that, for successful adoption into clinical practice, a pharmacogenomic test must have analytic validity (it accurately detects the genotype), clinical validity (it accurately predicts the clinical outcome), and clinical utility (it can inform clinical decision making, prevent adverse outcomes, or predict outcomes).
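These validity criteria have a quantitative side: a test's clinical usefulness depends not only on its sensitivity and specificity but also on how common the genotype is in the tested population. A minimal sketch, using entirely hypothetical test characteristics, of how positive predictive value falls with prevalence:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive value of a test, given the
    prevalence of the genotype in the tested population."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical test: 95% sensitive, 95% specific.
ppv_common, _ = predictive_values(0.95, 0.95, 0.25)  # genotype in 25% of patients
ppv_rare, _ = predictive_values(0.95, 0.95, 0.01)    # genotype in 1% of patients
print(round(ppv_common, 2), round(ppv_rare, 2))
```

With identical accuracy, the positive predictive value falls from roughly 86 percent at 25 percent prevalence to about 16 percent at 1 percent prevalence, which is one reason mutation prevalence matters so much for both clinical validity and cost-effectiveness.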

There must also be economic value. Measuring economic value in pharmacogenomics involves three different elements: an evaluation of the cost of illness, criteria for cost-effectiveness, and criteria for economic viability. In examining the cost of illness, one examines the size of the problem in monetary terms: What is the relevant population, and what is the cost of disease burden? To determine cost-effectiveness one examines efficiency measured as marginal cost per unit of effectiveness of the new innovation versus the standard care. Finally, in considering economic viability, one takes the perspective of societal net benefit. To what extent is value-based pricing possible, as opposed to cost-based pricing? What is a fully informed patient willing to pay for the innovation?

HER-2/neu and trastuzumab provide good examples to illustrate each of these elements. The cost-of-illness framework (see Table 5-1) has five components: prevalence of the condition for drug treatment, mutation prevalence, utilization, drug expenditures, and condition expenditures. For HER-2 the population would be those patients with metastatic breast cancer or, in its new indication, early breast cancer. One also needs to know the mutation prevalence, that is, the size of the population in which testing could affect outcome. In the case of HER-2, about 20 to 30 percent of breast cancer patients over-express the HER-2 protein.

TABLE 5-1 Data for Cost-of-Illness of Pharmacogenomics.



Other data requirements focus on utilization, drug expenditures, and condition expenditures. In terms of drug expenditures, testing will affect how the drug is used, so one needs to consider the annual cost of treatment. In the case of HER-2 and trastuzumab, the cost might fall somewhere between $40,000 and $80,000 per year per patient. Finally, data related to clinical outcomes are necessary. For the example in Table 5-1, there is a 25 percent increase in median survival.
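The five cost-of-illness components combine by simple arithmetic. The sketch below uses the figures given in the text for HER-2 (a roughly 25 percent over-expression rate and a $40,000–$80,000 annual drug cost); the patient count and per-test cost are hypothetical placeholders:

```python
# Cost-of-illness arithmetic following the five components of Table 5-1.
# The ~25% over-expression rate and the $40,000-$80,000 annual drug cost
# come from the text; the patient count and test cost are hypothetical.
patients_with_condition = 40_000  # hypothetical annual patient population
mutation_prevalence = 0.25        # ~20-30% of patients over-express HER-2
annual_drug_cost = 60_000         # midpoint of the $40,000-$80,000 range
test_cost = 150                   # hypothetical per-test cost

eligible_patients = patients_with_condition * mutation_prevalence
testing_expenditure = patients_with_condition * test_cost  # everyone is tested
drug_expenditure = eligible_patients * annual_drug_cost    # only carriers treated
print(int(eligible_patients), int(testing_expenditure), int(drug_expenditure))
```

Even with these toy numbers, the asymmetry is clear: total testing expenditure is small relative to the drug expenditure that the test gates, which is why utilization and mutation prevalence dominate the cost-of-illness picture.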

Moving to cost-effectiveness, one examines the difference in costs divided by the difference in effects between the two paradigms. The mathematical expression for this incremental cost-effectiveness ratio (ICER) is

ICER = (Cost_new − Cost_old) / (Effect_new − Effect_old)

The new paradigm uses pharmacogenomics; the old paradigm is the standard care delivered. Some of the key factors and test characteristics for which pharmacogenomic testing would likely be cost-effective are shown in Table 5-2. The higher the prevalence of the mutation—that is, the more frequently it appears in the population—the more likely it is that there will be a favorable cost-effectiveness ratio. A favorable cost-effectiveness ratio is also more likely if there is a very strong association between the gene variant and clinically relevant outcomes. Finally, when one is looking at pharmacogenomic testing and treatment combinations, the best cost-effectiveness ratios arise from rapid, accurate, and relatively inexpensive tests (Veenstra et al., 2000; Phillips et al., 2004).
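The incremental cost-effectiveness ratio described above reduces to one line of arithmetic. A minimal sketch, with hypothetical per-patient costs and effects:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio:
    (difference in costs) / (difference in effects)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical per-patient figures: a pharmacogenomic test-and-treat
# strategy vs. standard care, with effects in quality-adjusted life-years.
ratio = icer(cost_new=75_000, effect_new=3.5,
             cost_old=30_000, effect_old=2.6)
print(round(ratio))  # dollars per QALY gained
```

Because the effect difference sits in the denominator, halving it doubles the ratio, which is why the strength of the gene–outcome association and the accuracy of the test (Table 5-2) weigh so heavily on cost-effectiveness.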

TABLE 5-2 Criteria for Cost-Effectiveness of Pharmacogenomics.



The third approach for examining economic value is concerned with market economics and value-based pricing. To support pharmacogenomic innovation, at least initially, the health system marketplace must provide attractive economics and a sustainable franchise to both the diagnostics and the treatment manufacturers. There needs to be a place where the product can be introduced in a viable way.

In examining market economics, one must determine to what extent value-based pricing is possible. The first criterion is that the test must be able to identify an appropriate patient population or subpopulation and to demonstrate the improved response. Second, value-based, flexible pricing for both the test and drug will provide stronger incentives for innovation. Third, there needs to be some kind of intellectual property protection—which is not as common in the diagnostic industry as in pharmaceuticals—in order to encourage and facilitate the innovation. Finally, there must be some kind of additional regulatory market protection aimed at facilitating innovation in this context (Garrison and Austin, 2007; Trusheim et al., 2007).

The era of blockbuster drugs is past, but there are opportunities for sufficient financial return through charging a premium price for the higher efficacy of a pharmacogenomic innovation, even in a smaller target population. An extreme example is provided by the situation with orphan drugs, but a more pertinent example is Gleevec. In this case the company was able to generate revenue of $2.5 billion even though only about 55,000 patients were eligible for this treatment. The drug generated an average revenue per patient per year of about $44,000 (Trusheim et al., 2007). The question is, how sustainable will this be in the long run, given the likelihood of disruptive competition that could improve performance and decrease costs?

There are many challenges in assessing value and these have implications for the translation of pharmacogenomic technologies to benefit patient outcomes. In order to be of value, pharmacogenomics must fill a knowledge gap that is clinically important to the diagnosis, prognosis, and treatment of patients. However, as discussed earlier, data and evidence of effectiveness are lacking. There is an ongoing debate about whether observational data can provide sufficient evidence of clinical utility, but not all genetic tests can be put through randomized controlled trials. When direct evidence is not available, one must consider methods for obtaining indirect evidence, including modeling approaches.

In the HER-2 example, no secondary data set was available to determine real-world utilization of the test, so a chart review was conducted. This review found wide variation in the types of testing performed. Most people received immunohistochemistry, fewer received the fluorescence in situ hybridization (FISH) test, and some received both. There was variation in trastuzumab use by HER-2/neu status. Importantly, only 56 percent of the patients who obtained treatment had documented evidence of a clearly positive test. This raises questions about whether the testing is being done appropriately and whether testing is a requirement for treatment.

A second challenge to assessing value is that there are very few economic models for pharmacogenomics. It is important to conduct economic modeling in order to understand the downstream consequences of the pharmacogenomic testing-treatment paradigm. In the long run, one must demonstrate value for adoption and reimbursement purposes. While the hurdles traditionally have been lower for diagnostics, the situation is changing. It may well be that future requirements for diagnostics will be relatively similar to those for pharmaceuticals.

Historically, diagnostics have been less studied than drugs. Up-front testing costs are perceived to be higher than downstream savings. Most products are not evaluated early enough; analyses are usually conducted after the intervention has been adopted, when they are less useful. Again, HER-2 is a good example. A systematic review by Phillips and Van Bebber found only 11 cost-effectiveness studies, only 1 of which looked at HER-2/neu, even though trastuzumab had been approved in 1998 (Phillips and Van Bebber, 2004). An update in 2007 found that there are now 15 cost-effectiveness studies for HER-2, 7 of them for early-stage breast cancer (Ferrusi et al., 2007). Few cost-effectiveness analyses may be conducted simply because most payers in the United States do not require them.

There is a need to model very complex clinical pathways, particularly for test-treatment combinations. Yet most modeling efforts have not adequately considered testing variability, that is, sensitivity, specificity, sequencing, and timing of the tests. For HER-2, most of the models have assumed perfect testing conditions. Those that examined testing accuracy did not include any consideration of the sequence in which tests were administered or of the fact that there were alternative tests available with very different performance characteristics. Nor did the models look at utilization of the test in terms of how often it was actually applied in a particular population.
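The effect of testing variability described above can be made concrete with a small sketch. All performance characteristics and the prevalence below are hypothetical round numbers chosen for illustration, not figures from the workshop or from any published HER-2 model; the point is only how a serial testing sequence (screen with one test, confirm positives with another) changes the overall sensitivity and specificity of the strategy.

```python
# Illustrative sketch only: hypothetical numbers, not data from the workshop.
# Shows why the sequence of tests matters when modeling a testing strategy.

def serial_strategy(sens1, spec1, sens2, spec2):
    """Test 1 screens everyone; only screen-positives get test 2.
    A patient counts as positive only if both tests are positive."""
    sens = sens1 * sens2                      # must be caught by both tests
    spec = 1 - (1 - spec1) * (1 - spec2)      # both must err to misclassify
    return sens, spec

# Hypothetical (sensitivity, specificity) pairs for two tests
IHC = (0.90, 0.80)
FISH = (0.95, 0.98)

fish_alone = FISH
ihc_then_fish = serial_strategy(*IHC, *FISH)

prevalence = 0.20  # hypothetical rate of true HER-2 positivity
for name, (sens, spec) in [("FISH alone", fish_alone),
                           ("IHC screen, FISH confirm", ihc_then_fish)]:
    tp = prevalence * sens               # correctly identified for treatment
    fp = (1 - prevalence) * (1 - spec)   # flagged for treatment in error
    print(f"{name}: sens={sens:.3f}, spec={spec:.3f}, "
          f"per 1,000 patients: {1000 * tp:.0f} true +, {1000 * fp:.0f} false +")
```

Under these assumed numbers, the serial sequence trades a little sensitivity for a large gain in specificity relative to FISH alone, which is exactly the kind of difference a model that assumes perfect testing would miss.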

The one model that did examine testing as an issue found a huge difference in the incremental cost-effectiveness ratio depending on which test was used and in what sequence: with one sequence the ratio was a few thousand dollars per quality-adjusted life year, while with another it exceeded $150,000 per quality-adjusted life year (Elkin et al., 2004). This demonstrates that the testing sequence, and how it is modeled, makes a large difference in the estimated cost-effectiveness of the test-treatment process.
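The arithmetic behind such comparisons is simple even when the underlying models are not. The sketch below uses entirely hypothetical costs and quality-adjusted life year (QALY) values, not figures from Elkin et al., to show how a sequence whose misclassification dilutes the health gain can push the incremental cost-effectiveness ratio from a few thousand dollars per QALY to well over $150,000 per QALY.

```python
# Illustrative sketch only: all dollar and QALY figures are hypothetical.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY
    of the new strategy relative to the comparator."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Comparator: no testing, no targeted treatment (hypothetical)
base_cost, base_qaly = 30_000, 2.00

# Sequence A: accurate testing targets treatment well -> large QALY gain
seq_a = icer(cost_new=34_000, qaly_new=2.90,
             cost_old=base_cost, qaly_old=base_qaly)

# Sequence B: misclassification dilutes benefit -> small QALY gain,
# higher cost from unnecessary treatment
seq_b = icer(cost_new=78_000, qaly_new=2.30,
             cost_old=base_cost, qaly_old=base_qaly)

print(f"Sequence A: ${seq_a:,.0f} per QALY")  # a few thousand dollars
print(f"Sequence B: ${seq_b:,.0f} per QALY")  # well over $150,000
```

Because the denominator is an incremental QALY gain, even modest misclassification in the testing step can shrink it sharply and multiply the ratio, which is why models that assume perfect testing can be badly misleading.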

Another issue in developing an evidence base is the lack of information about performance of the test in a real-world context. None of the models, for instance, looked at real-world utilization, such as examining claims data to understand how frequently the test is applied and what treatment decisions clinicians then make based on the test results. Another issue is the need to consider multiple populations. For example, people with Lynch syndrome have up to an 80 percent lifetime risk of colorectal cancer. Certainly these patients should be tested, but their relatives should also be tested.

A final challenge is the need to build an evidence base for pharmacogenomics that can be used in cost-effectiveness models. There is a lack of evidence about applications. There are numerous studies about genetic associations, but less information is available about what one should do with that information.

It is important to reiterate that evidence is needed concerning many things—analytic validity, clinical validity, clinical utility, availability and utilization, and the effect on economic outcomes and on the entire population health burden. One approach to building the evidence base has been provided by the Evaluation of Genomic Applications in Practice and Prevention Working Group, which was described earlier. Another project that has been proposed to the National Institutes of Health concerns cancer and personalized medicine. That project, which is called the Cancer and Personalized Medicine Research Study, is aimed at building an evidence base from an economic perspective.

One element necessary in any effort to build an evidence base is an examination of utilization. Utilization research needs to explore who has access to the available technologies and who uses them. Real-world data will be needed for this, perhaps claims data or chart review. It is also very important to understand patient and provider preferences, since these preferences will influence the adoption of new technologies. One approach is to use stated-preference methods, which not only provide quantitative estimates of individuals’ preferences but also allow one to calculate willingness to pay for the technologies. Finally, there is the economic element, the “What is value?” question. One needs to understand the downstream consequences of these test-treatment interventions with respect to their cost-effectiveness.

Pharmacogenomics is an inevitable trend for the future, Marshall said. There are many promising new technologies, but a key aspect of success in the long run will be the ability to demonstrate value to payers, providers, and patients. There are multiple challenges, but building the evidence base that captures the health burden, utilization, clinical utility, and cost-effectiveness of pharmacogenomics will be critical, Marshall concluded.


Wylie Burke, M.D., Ph.D.


One audience member noted that Hogarth said in his presentation that innovation sometimes happens too quickly and that there are those who believe this has been the case with genomics. What, the questioner asked, do the trends for genomics look like from the two presenters’ perspectives? Hogarth responded that there is no single answer to that question, but that it depends on the technologies. For consumer genetics (or direct-to-consumer genetics), which has increased significantly, clinical geneticists and research scientists in the United Kingdom think that translation into practice is premature. Some of the most significant and potentially fruitful innovation appears to be occurring in the gene-expression market, particularly in oncology.

Marshall responded that she believes the translation process is working at about the correct pace. There are rapid adopters and slow adopters, and one needs both when there is something new. One also needs regulation, but not too much because fast adopters and fast innovators must be allowed to get ahead of the curve, thereby enabling the remainder to catch up. The bottom line, however, is that it takes a great deal of time and energy to collect all the data needed—10 to 15 years for randomized controlled trials. On the one hand, one wants to be sure the new technologies do not cause harm, but on the other hand, too much restriction could inhibit innovation.

Another audience member said that the discussion appeared to be primarily from the perspective of those who are involved with diagnostics and those involved with reimbursement. There are a number of other genomic innovations that have proven uses. For example, one presenter said that a drug company might not want to go into genomics because it will decrease market share. But decreasing market share should not be a barrier since a tremendous amount of money can be made on a small market, as shown by the example of Gleevec. One never gets the whole market.

It seems reasonable for a company to look to pharmacogenomics as a way to get to “proof of concept,” a critical stage in drug development. Gleevec is a great example of how a drug with a presumed niche indication, and one tied to a biomarker, can become a blockbuster. Pharmacogenetics—biomarkers in the context of drug development—is an important issue that needs greater exploration.



This presentation was developed collaboratively by Deborah Marshall, Ph.D., of McMaster University and Kathryn Phillips, Ph.D., of the University of California at San Francisco.


Three guidances are relevant. They are: Pharmacogenomic Data, March 2005 Procedural. http://www/guidance/6400fnl.pdf (accessed June 2, 2008); Guidance for Industry. Pharmacogenomic Data Submissions—Companion Guidance, August 2007 Procedural. http://www/guidance/7735dft.pdf (accessed June 2, 2008); and Realizing the Promise of Pharmacogenomics: Opportunities and Challenges, Draft Report of the Secretary’s Advisory Committee on Genetics, Health, and Society. http://www4/oba/sacghs/SACGHS_PGx_PCdraft.pdf (accessed June 2, 2008).

Copyright © 2008, National Academy of Sciences.
Bookshelf ID: NBK3959

