NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Institute of Medicine (US) and National Research Council (US) Committee on New Approaches to Early Detection and Diagnosis of Breast Cancer; Joy JE, Penhoet EE, Petitti DB, editors. Saving Women's Lives: Strategies for Improving Breast Cancer Detection and Diagnosis. Washington (DC): National Academies Press (US); 2005.


6 The Necessary Environment for Research and Development

While the public impatiently awaits new technologies and headlines, medical researchers bemoan the “national crisis.” The crisis is not in discovery and invention, but rather in getting those discoveries to the public.

RN Rosenberg, JAMA 2003

Basic research lays the foundation for the discovery and invention of new medical technologies, but the path from discovery to adoption is long and often full of unexpected turns. The value of any new technology must be demonstrated through a series of increasingly stringent steps, each of which can take years.a Figure 6-1 illustrates the pathway of medical technology development from discovery to adoption in clinical practice.

FIGURE 6-1. Pathway of medical technology development.



Once a technology reaches the prototype, or investigational, stage, it is typically tested in small clinical studies, usually involving fewer than 50 subjects. In most cases, a technology must pass Food and Drug Administration (FDA) review for safety and effectiveness before it can be marketed. Because most technologies are affordable only if they are covered by health care insurance, most will not be adopted in clinical practice unless their use is deemed “reasonable and necessary” by either the Centers for Medicare & Medicaid Services (CMS) or private insurance companies. Practically speaking, that means that the technology must be shown to improve outcomes. The time from discovery and invention to clinical use is a source of great concern and frustration to technology developers, as well as members of the public who eagerly await these advances, none more impatiently than those whose mission is to reduce the toll of breast cancer.

This chapter describes the stages of technology development, considers the obstacles that cause unreasonable delays, and reviews proposals for reducing those obstacles. Avoidable pitfalls are also covered, such as clinical studies designed so poorly that they fail to provide clear answers, or technologies developed with little understanding of what physicians and patients really need. The development of medical technologies is a complex enterprise that requires the integrated expertise of engineers, biologists, physicians, statisticians, and health care administrators. This chapter thus highlights a variety of initiatives that illustrate different approaches to integrating the necessary expertise for innovations that save lives.


Fostering the invention and early stage development of medical technology is essential and depends on the nurturing of basic medical research. Due in no small part to the long-standing and tireless efforts of breast cancer activists, breast cancer research has been generously supported over the past few decades. With the possible exception of AIDS, breast cancer research receives more funding than research on any other disease. The National Cancer Institute (NCI) currently supports more research projects and clinical trials for breast cancer than for any other type of cancer.51 According to its website, NCI supports 2,932 breast cancer projects and 112 clinical trials. By comparison, the average for the 56 types of cancer (or aspects of cancer) listed by NCI is only 276 projects and 8 clinical trials. In addition to the National Institutes of Health (NIH), breast cancer research is supported by private health charities and the Department of Defense (DoD) Congressionally Directed Medical Research Program, which together provide more than $300 million per year, for a total of roughly $800 million per year (Figure 6-2). By comparison, NCI spent $311 million on prostate cancer and DoD's Medical Research Program spent $85 million, for a total of just under $400 million (Figure 6-3). Table 6-1 lists the major funders of breast cancer research.

FIGURE 6-2. Distribution of public and charitable funding of breast cancer.



FIGURE 6-3. Percentage of NCI budget allocated to selected cancer types.



TABLE 6-1. Major Funders of Breast Cancer Research.



The Committee believes that current priorities for basic research are appropriate. The investment in basic research over the past few decades has yielded a wealth of knowledge that fuels the invention of a rich array of powerful new technologies, from imaging devices that can display the activity of individual cell types to assays that can simultaneously measure the activity of thousands of genes or proteins.

A broad consensus among experts in breast cancer over the last few years supports this view. In 1998, the NCI convened the Breast Cancer Research Progress Group, a panel of 30 prominent members of the scientific, medical, and advocacy communities, to identify the most important research needs in breast cancer. The panel's recommendations included research to identify biomarkers, molecular analysis of the transition from pre-invasive to invasive disease, recognition of tissue banks as a critical research resource, the need for biologically based imaging, and the need to develop databases and bioinformatics so that the wealth of data can be assimilated and exploited for maximum benefit. Three years later, these same areas were recommended for support in the 2001 Mammography and Beyond report.33 The NCI and DoD breast cancer research portfolios reflect these priorities, as do the research portfolios of key private funders. Further, these same themes have been equally emphasized for all types of cancer. The individual technologies in development for detecting breast cancer are proceeding as well as or better than those in other disease research areas.

Many new technologies hold great promise to improve breast cancer detection. Over the years “breakthroughs” have been announced with great regularity. But there is a long passage between the development of a promising technology and determining whether its promise can be realized. Few of the breakthroughs heralded in past decades have proved their worth in reducing breast cancer mortality. Although the research engine that drives technology advances is well fueled, the validation and implementation of those advances is another matter.

Technology Assessment

The term “technology assessment” is used in different ways by different people. In the narrowest, but also the most widely used, sense, health technology assessment refers to the synthesis of evidence collected from clinical studies and the application of that synthesis to decisions about whether a particular technology should be adopted by a health care provider or reimbursed by a health care payer, such as a private health insurance company or Medicare. Technology assessment of this sort is conducted by federal and private organizations (Table 6-2). In practice, the initial phase of technology assessment done by health care payers does not usually consider cost, feasibility, or social and ethical issues.

TABLE 6-2. Federal and Private Technology Assessors.



The Institute of Medicine (IOM) Committee for Evaluating Medical Technologies in Clinical Use defined medical technology assessment more broadly as:

any process of examining and reporting properties of a medical technology used in health care, such as safety, efficacy, feasibility, and indications for use, cost, and cost-effectiveness, as well as social, economic, and ethical consequences, whether intended or unintended.32

Assessing Medical Technologies, IOM, 1985, p. 2

This definition includes clinical studies of efficacy, effectiveness, diagnostic accuracy, and the impact of a technology on quality of life, as well as FDA review, assessment for health insurance coverage, and post-market surveillance.

Assessments of how well a technology is implemented in clinical practice or how it is most effectively integrated with existing technologies are rarely conducted. (Post-market surveillance studies assess product failures as opposed to optimizing performance.) In other words, how effectively a new technology improves overall health outcomes is rarely studied.

Medical technology assessment in the United States has been described as “a battle that's been fought and lost many times before”29 (Box 6-1). Although national advisory panels have called for a nationally coordinated system of health technology assessment for decades,32 no federal agency in the United States has both the mandate and the power to support a comprehensive approach to technology assessment.


BOX 6-1

Brief History of Medical Technology Assessment in the U.S. Federal Government.

The mission statement of the Agency for Healthcare Research and Quality (AHRQ) includes technology assessment, but that agency has never been allocated enough funds to support comprehensive technology assessment. The NIH budget is more than 100 times greater than AHRQ's, but its mandate for technology assessment is limited to clinical trials and NIH has historically resisted further expansion in that direction. In coming years, the gap between technology innovation and assessment might begin to narrow. In May 2002, the NIH director, Elias Zerhouni, laid out the “NIH Roadmap” describing a strategic vision for a more integrated approach to basic research that enables technological innovation and technology development. The Roadmap is discussed later in this chapter.

The Role of Cost-Effectiveness Analysis

As noted above, cost-effectiveness is rarely assessed in the initial phase of technology assessment done by health care payers, nor is it part of FDA's approval criteria. The Committee agrees that this is appropriate, because it makes little sense to analyze cost-effectiveness before effectiveness is determined. Likewise, it is premature to be overly concerned about cost-effectiveness during research and development of new technologies: not only is information about effectiveness lacking for technologies that have not been clinically tested, but later generations of a technology are almost always less expensive and often more effective.60

Consideration of cost-effectiveness is important during the technology adoption process, but at this stage formal cost-effectiveness analysis is seldom undertaken and generally does not play a role in the decision to adopt a new technology. As a technology diffuses, or is poised for diffusion, cost-effectiveness, or perceptions of it, influences policymakers' views and the decisions of insurers and health care systems about whether to recommend or use the technology.

Cost-effectiveness analysis has the potential to contribute to rational decision making by providing estimates of the magnitude of costs and health outcomes. When conducted in an unbiased way, it can help with decisions about whether or not to recommend a technology in different subgroups (such as screening of men for breast cancer) and with choices between alternative interventions for the same group (for example, screening women for breast cancer versus recommending the use of a drug that has been shown to prevent breast cancer). Cost-effectiveness analysis also can be used to choose between alternative strategies to achieve some overall societal or population goal; for example, in choosing whether to implement a screening program for breast cancer versus a screening program for ovarian cancer to reduce the burden of cancer in women.
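The comparisons described above turn on a single quantity, the incremental cost-effectiveness ratio (ICER): the extra cost of one intervention over another, divided by the extra health benefit. A minimal sketch in Python; the program names and all numbers are purely illustrative and are not drawn from the report:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra dollars spent
    per extra unit of health effect (e.g., life-years gained)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical comparison of two screening programs for the same
# population; effects are in life-years gained.
ratio = icer(cost_new=1_200_000, cost_old=800_000,
             effect_new=120.0, effect_old=100.0)
print(ratio)  # dollars per additional life-year gained
```

Whether the resulting dollars-per-life-year figure is "worth it" is exactly the value judgment the next paragraph argues lies outside the analysis itself.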

Cost-effectiveness analysis is not and should not be the only consideration in decisions about technology use. Cost-effectiveness analysis does not address value judgments that are key to individuals making decisions about their health. Cost-effectiveness analysis is influenced by perspective—that is, whose benefits, costs, and burdens are “counted” and are thus included in the analysis, and whether to count all benefits, burdens, and costs that accrue to certain individuals or groups.27 For example, patients, physicians, health plans, and insurers have different perspectives and will likely weigh costs and benefits differently. A decision to adopt a new technology because it is “worth the cost” is an ethical and moral judgment—not an economic one. Opinions about whether something is “worth” a certain amount of money are subject to differences in the perspective and values of those making the judgment.55


Clinical studies are one of the first steps in assessing medical technologies. Unfortunately, far too many clinical studies yield uninformative data and fail to answer the basic question as to whether a new technology improves health outcomes. Too often, the appearance of a positive result is an illusion based on overlooked assumptions and failures to appreciate the many ways that hidden biases can skew results (Box 6-2).


BOX 6-2

Common Failures in Clinical Trial Designs Submitted for Review (see Appendix D for detailed descriptions): poorly described patient populations; too narrow a patient population.

Poor Study Designs Impede Progress

The consequences are disheartening. The developer of a new technology has typically invested millions of dollars in a clinical study—not to mention the time and effort of participating physicians, nurses, and patients. The ability to fund a clinical study is often a limiting factor for a small company hoping to develop a promising medical technology.

From a company's perspective, failure to obtain FDA approval spells disaster, and often signals the end of the project. Small companies whose fortunes are tied to a single technology and who rely on venture capital will find it considerably more difficult—if not impossible—to raise further capital, which often leads to the demise of the company. Ultimately, it is the patients who suffer most from these lost opportunities.

Poorly designed studies have impeded the development of more refined models of risk stratification. In an attempt to develop a model for breast cancer risk, in 2001 AHRQ reviewed 500 studies involving more than 30,000 women. Unfortunately, poorly collected data and insufficient evidence prevented the inclusion of any risk factor other than age, which was the only risk factor that definitively showed clinical significance. Problems with the meta-analysis included a lack of standardization of risk factor reporting, lack of standard reporting formats, and failure to link risk factors to an eventual diagnosis of breast cancer.6 Because improving the early detection of breast cancer requires the development of better models to assess risk, critical attention must be given to improving the quality of clinical trials.

Population Measure of Cancer Status

There are three major measures of cancer status in a population: incidence, survival, and mortality. Cancer incidence represents the occurrence of cancer in the population and is often reported as a rate. Most cancer registries report cancer incidence in units of number of cases per 100,000 population per year. Calculations of short-term cancer incidence rates can be distorted by the extent to which a population is subjected to tests that might lead to cancer detection. Because studies of cancer screening are designed to do just that, these studies inevitably lead to major perturbations in the “reported” incidence, rendering cancer incidence an invalid endpoint for evaluating the real impact of the screening intervention.

Survival is the term used for the time interval from diagnosis to death from cancer, in patients who contract the disease. Since many patients will not die of their cancer, the survival experience must be calculated actuarially, using methods such as the life table, or the Kaplan-Meier method (Box 6-3). Although such calculations are definitive and unambiguous, the duration of survival is heavily dependent on the time of incidence of the cancer, and, as indicated, this can be strongly influenced in an artifactual way by the intervention under study (for example, screening). Although survival of cancer patients is the critical endpoint for studies of cancer therapies, it has little utility in studies of cancer prevention.


BOX 6-3

Measuring Breast Cancer Survival: Kaplan-Meier Curves. Kaplan-Meier curves are used to illustrate the effects of different factors on survival. These curves are used to show the results of screening studies, because they can depict survival data even (more...)
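The actuarial calculation behind a Kaplan-Meier curve can be sketched directly: at each observed death time, the survival estimate is multiplied by the fraction of subjects still at risk who survived that time, while censored subjects simply leave the risk set. A minimal Python implementation; the follow-up times and event indicators below are illustrative, not real trial data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up time for each subject (e.g., years from diagnosis)
    events: 1 if the subject died of the disease, 0 if censored
    Returns (event_times, survival_probabilities)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    out_t, out_s = [], []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = removed = 0
        # group all subjects tied at the same time
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            surv *= (at_risk - deaths) / at_risk  # fraction surviving t
            out_t.append(t)
            out_s.append(surv)
        at_risk -= removed  # deaths and censored both leave the risk set
    return out_t, out_s

# Five hypothetical subjects; survival steps down at each death time
# (to 0.8, then 0.6, then 0.3) while censored subjects only shrink
# the denominator.
steps = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
print(steps)
```

The censored subjects are what make the calculation "actuarial": they contribute person-time to the risk set without being counted as treatment failures.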

Mortality (or cancer-specific mortality) is the term used to describe the rate at which subjects die of the disease in the population targeted for the cancer prevention intervention; that is, it is the cancer death rate in the population under study. Mortality is the fundamental endpoint for cancer prevention studies, and to the extent that other endpoints—such as detection of cancer—are employed, they are used in lieu of mortality. Mortality is the only endpoint among these three that is valid for studies of cancer screening.

Screening is a form of secondary prevention, which is the control of cancer by reducing population mortality through early detection and effective treatment. (Primary prevention is the control of cancer through reduction in the incidence of the disease.) Screening tests are not intended or expected to affect the underlying cancer incidence rates, but rather to save lives by detecting cancer earlier than in the absence of screening. It is important to recognize that the early diagnosis conferred by screening can only be useful to the patient if there is an effective treatment for the cancer. More specifically, there must be a treatment whose efficacy is enhanced by early diagnosis.

Definitive Evaluation of a Cancer Screening Modality

The evaluation of any screening test can be affected by two profound sampling biases, length-biased sampling and lead-time bias, and these can only be circumvented by a randomized trial of women at risk of breast cancer, with breast cancer mortality as the endpoint.37 Length-biased sampling occurs when the survival experience of a group of screen-detected cases is compared with a complete sample of incidence cases or with symptomatically detected cases. Because the growth rates of tumors are generally heterogeneous, patients with slow-growing tumors will enjoy a longer period during which the cancer is potentially screen-detectable but not yet symptomatic than patients with fast-growing tumors. This means that patients with slow-growing tumors have a selective advantage in being screen-detected. Consequently, any series of screen-detected cases will have a preponderance of slow-growing tumors, and so will enjoy a longer average survival regardless of whether the early detection confers a therapeutic advantage. Length-biased sampling is only a problem if the purported benefits of screening are derived from a series of screen-detected cases. The experimental group should be a population of subjects who are screened, and the cases derived from such a population will include both screen-detected cases and cases detected symptomatically. Any population-based series of incident cases will include a random selection of slow-growing and fast-growing tumors, and thus represents a valid series for evaluating the impact of screening.
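A small simulation makes the selection effect concrete. Under a toy model in which a tumor's chance of being caught by a single screen is proportional to its preclinical sojourn time, the screen-detected cases are dominated by the slow-growing type even though the two types are equally common in the population (all numbers are hypothetical):

```python
import random

random.seed(1)

# Hypothetical tumor population: equal numbers of fast-growing
# (0.5-year) and slow-growing (4.0-year) preclinical sojourn times.
sojourns = [random.choice([0.5, 4.0]) for _ in range(100_000)]

# A single screen at a random calendar time catches a tumor with
# probability proportional to how long it stays screen-detectable.
max_sojourn = 4.0
detected = [s for s in sojourns if random.random() < s / max_sojourn]

pop_mean = sum(sojourns) / len(sojourns)
det_mean = sum(detected) / len(detected)
# The screen-detected mean is pulled toward the slow-growing tumors,
# with no assumption at all that early detection helps anyone.
print(pop_mean, det_mean)
```

The bias arises purely from the sampling mechanism, which is why comparing screen-detected cases with symptomatic cases flatters the screening test.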

Lead-time bias, however, affects even a population-based series of incident cases. When an asymptomatic population is screened, the time of diagnosis of every screen-detected case is earlier than it would have been without screening. This advancement of the time of diagnosis is known as the lead time. Lead-time biases are introduced even if the screening test is extremely inaccurate, although an accurate test will tend to produce more, and longer, lead times, and will therefore offer a greater opportunity for more patients to be effectively treated earlier in the course of their disease. Because cases in a screened population are diagnosed earlier than those in a comparable unscreened population, the apparent survival times of the screened cases will be longer than those of the unscreened cases. These increased survival times are observed regardless of whether the early treatment of the screened cases actually affects their survival. For this reason, case survival is an invalid endpoint for evaluating screening programs.
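The arithmetic of lead-time bias fits in a few lines. In this hypothetical timeline, screening advances the diagnosis by 1.5 years but leaves the date of death completely unchanged, yet measured survival from diagnosis grows by exactly the lead time:

```python
# Hypothetical patient timeline (ages in years); illustrative only.
age_symptomatic_dx = 62.0   # age at diagnosis without screening
age_death = 67.0            # age at death, unchanged by screening
lead_time = 1.5             # screening advances diagnosis by 1.5 years

survival_unscreened = age_death - age_symptomatic_dx
survival_screened = age_death - (age_symptomatic_dx - lead_time)

# Survival "improves" from 5.0 to 6.5 years even though the patient
# dies at exactly the same age either way.
print(survival_unscreened, survival_screened)
```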

As a result of these issues, the only definitive study design for evaluating a new screening test is a randomized trial of individuals at risk of cancer in which the endpoint is cancer mortality. Participants must be followed to ascertain and compare cancer-specific mortality rates, or total numbers of cancer deaths (if equal numbers of subjects are randomized to the comparison groups).

These trials are necessarily large and expensive, and require many years of follow-up. The sample sizes for the breast screening trials have ranged from approximately 25,000 to more than 100,000 women, and the trials generally require in excess of 10 years of follow-up.1 To date, only about a dozen definitive cancer prevention trials have been completed, several of them trials of mammography and breast cancer. These trials have validated the strategy that radiologic screening can reduce breast cancer mortality. The prevailing view among experts in the field of cancer prevention is that a definitive randomized trial of this nature (with cancer mortality as the endpoint) is necessary to validate any novel screening strategy.

Studies to Improve Screening and Diagnostic Accuracy

Many techniques designed to enhance the accuracy of mammography screening, or to complement it, are under active development. These include digital mammography, computer-assisted detection (CAD), magnetic resonance imaging (MRI), and others. Demonstrating in a randomized trial of cancer mortality that any of these methods improves screening would be prohibitively expensive, so investigations focus on trials designed to demonstrate improved screening accuracy, rather than improved mortality, relative to mammography. Because mammography is known to save lives, more accurate technologies can be presumed to save as many or more. Evaluating new diagnostic modalities with respect to accuracy is methodologically challenging and can be affected by numerous biases. A good deal of recent research has addressed the appropriate methodological design of these trials; a comprehensive summary of current thinking on the issue is contained in the recent Standards for Reporting Diagnostic Accuracy (STARD) guidelines for published articles.3 A related project by a team of experts to develop a quality assessment tool (QUADAS: Quality Assessment of Diagnostic Accuracy Studies) provides a concise tabulation of the key issues that challenge the validity of studies of diagnostic accuracy.71

The key issues from the STARD and QUADAS checklists that pertain to the design of studies to evaluate breast cancer screening technologies can be grouped broadly into four general categories:

  • Construction of the reference standard diagnosis
  • Manner and circumstances in which the various tests are “read”
  • Representativeness of study subjects
  • Statistical analysis and reporting of the results

In general, studies of diagnostic accuracy should be conducted on samples from the population in which the test will be used. For example, the accuracy of mammography in a group of women with symptoms of breast cancer will differ from its accuracy in an asymptomatic screening population; the former group will include a preponderance of cancer patients, as well as patients with larger tumors. Thus, ideally, studies of new screening technologies are conducted in a population of asymptomatic women. However, determination of accuracy involves evaluation of both sensitivity (the proportion of true cases of breast cancer detected) and specificity (the proportion of normal women who test negative), and to achieve adequate statistical power the study must include substantial numbers of both cases and controls. What makes this challenging is that in a general population only a tiny fraction of those screened will have cancer, so very large sample sizes are required. This issue is exemplified by the design of the American College of Radiology Imaging Network (ACRIN) Digital Mammography Imaging Screening Trial (DMIST), a comparison of digital mammography with film mammography. The trial recruited approximately 49,500 asymptomatic women in order to identify 150 to 500 women with cancer. The sensitivity of a screening tool cannot be estimated with sufficient precision from a smaller number of detected cancers, because the number of cancer patients directly serves as the denominator for quantifying sensitivity. Thus, to satisfy the methodological principle of conducting the study in the appropriate target population, a sample size of nearly 50,000 women is required. (See the later section, ACRIN: Network for Cooperative Development of Imaging Technology.)
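The sample-size arithmetic behind a DMIST-scale trial can be sketched as follows. The screening prevalence assumed here (roughly 5 cancers per 1,000 women screened) and the example sensitivity are assumptions for illustration only; the binomial confidence-interval half-width shows why a few hundred detected cancers are needed to pin down sensitivity:

```python
import math

def sensitivity_ci_halfwidth(sens, n_cases, z=1.96):
    """Approximate 95% CI half-width for an estimated sensitivity.
    Only the cancer cases contribute: sensitivity is a proportion
    whose denominator is the number of women who truly have cancer."""
    return z * math.sqrt(sens * (1 - sens) / n_cases)

# Illustrative DMIST-scale arithmetic.
n_screened = 49_500
prevalence = 0.005                      # assumed: ~5 cancers per 1,000
expected_cases = n_screened * prevalence  # roughly 250 cancers

# With ~250 cases, a sensitivity near 0.8 is estimated to within
# about +/- 0.05; with only 25 cases the interval is ~3x wider.
print(expected_cases)
print(sensitivity_ci_halfwidth(0.8, 250))
print(sensitivity_ci_halfwidth(0.8, 25))
```

Screening ~50,000 women is thus not extravagance: it is the only way to accumulate enough true cancers for the sensitivity estimate to be informative.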

Another general methodological issue is the construction of the reference standard diagnosis. For breast cancer, the ideal reference standard is biopsy. However, in a screening study such as DMIST, only those patients suspected of cancer on the basis of mammography (or digital mammography) will receive a biopsy. That is, the decision to obtain a biopsy is heavily dependent on the results of the tests under evaluation, and it is well known that this can lead to serious bias in estimates of accuracy (i.e., sensitivity and specificity): without further information, false-negative tests could not be identified. To circumvent this problem, one must conduct follow-up exams of trial participants to discover individuals who are diagnosed with breast cancer subsequent to the original screen. The DMIST design includes follow-up testing at 10 to 15 months after the initial screen.

Finally, aspects of the statistical analysis and reporting of the results are important for the valid assessment of new technologies, and for their comparison with the current standard, which for breast cancer screening is mammography. Measures such as sensitivity and specificity are arbitrary in the sense that they depend on an arbitrary classification of a test as either positive or negative, when in fact many tests have equivocal findings. To avoid this problem diagnostic or screening tests are compared using a statistical method known as receiver operating characteristic (ROC) analysis, which is described in Appendix C. A large body of research to refine this and related statistical techniques has been conducted in recent years, including refinements of ROC analysis that allow for the measurement of the degree to which patient covariates affect mammographic accuracy, and the use of repeated screening tests on the same individual. An important principle for the evaluation of all medical trials is the commitment to report the results of all patients, and not limit the analysis to a selected subset. Thus it is important, for example, to report the frequency with which the test produces uninterpretable test results, especially if this differs in systematic ways between the different test situations or technologies.
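In its simplest form, ROC analysis reduces to the area under the curve (AUC), which equals the probability that a randomly chosen case receives a higher test score than a randomly chosen control, with ties counting half. A short sketch with hypothetical reader suspicion scores (the data are invented for illustration):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve, computed directly as the probability
    that a randomly chosen case scores higher than a randomly chosen
    control (ties count as half a win). Avoids any arbitrary cutoff
    for calling a test 'positive' or 'negative'."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical 1-5 suspicion ratings assigned by a reader.
cancers = [4, 5, 3, 5]   # scores for women with cancer
normals = [1, 2, 2, 3, 1]  # scores for women without cancer
print(auc(cancers, normals))  # 0.975
```

Because the AUC uses the full ordering of scores rather than a single cutoff, it sidesteps the arbitrariness of a fixed positive/negative threshold described above.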

Studies of Biomarkers

A screening tool based on a blood test offers a potentially much cheaper option than radiologic approaches. Efforts to identify individual overexpressed proteins, such as riboflavin carrier protein,59 or patterns of proteomic over- or underexpression, such as in the study of ovarian cancer by Petricoin and colleagues,56 are likely to expand in the foreseeable future. The preliminary evaluation of a serum marker is simpler than that of a radiologic test, because the serum marker study can be applied retrospectively to stored blood samples. All that is needed are stored blood samples from cases of breast cancer and from controls. However, for valid results, it is critical that the cases are representative of incident cases of breast cancer. That is, the serum samples should have been obtained during the workup to diagnose consecutive incident cancers, prior to any treatment. The controls should also be representative of the population at risk of breast cancer. In practice these studies are usually performed on “convenience” samples—samples that are most readily available as opposed to samples that are most relevant. For example, in the study by Rao and his colleagues,59 the control samples were obtained from clinic patients with fibrocystic breast disease, leukemia, and volunteers. In the study by Petricoin's group,56 the cases and the preponderance of the controls were obtained from a high-risk clinic, and the remainder had other gynecological conditions.

Even if the study involves valid case and control selection, care must be taken in extrapolating results to the context of screening. Even if the specificity appears to be high, the vast preponderance of screenees who test positive may still be negative for disease when the test is applied to a screening population. That is, the positive predictive value cannot be estimated directly from the case-control approach, and it will appear to be much higher in the case-control sample than it will be in the screening population. When a test rule (the conditions required for indicating the potential presence of cancer) is derived from a battery of markers, as in a microarray or proteomic study, the statistical analysis of the results becomes more challenging, because there are certain to be markers that appear to be associated with disease by chance alone. In these circumstances one must estimate the sensitivity and specificity of the rule through a two-stage process, in which only a portion of the data is used to derive the rule (the “training” data set) and the remainder is used to evaluate the accuracy of the rule (the “test” data set), as in the analysis by Petricoin and his colleagues.
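The extrapolation problem can be quantified with Bayes' rule. In this sketch, a marker that looks excellent in a balanced case-control sample yields mostly false positives at screening-population prevalence; the sensitivity, specificity, and prevalence values are illustrative, not taken from any cited study:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: probability of disease given a
    positive test, in a population with the given prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# In a 50/50 case-control sample the test looks superb...
print(ppv(0.95, 0.95, 0.5))    # 0.95

# ...but at a screening prevalence of ~0.5%, more than 9 in 10
# positive results are false alarms.
print(ppv(0.95, 0.95, 0.005))  # ~0.087
```

This is why positive predictive value must be computed at the prevalence of the intended screening population, never read off a case-control sample.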


Inevitably, more exciting new technologies are announced than are proven useful in clinical practice. While basic research enables the development of early stage technologies, different strategies are needed to identify which technologies are truly feasible and add clinical value by improving people's health or the delivery of health care services. This involves large-scale, well-designed multicenter clinical trials. However, clinical trials have historically received substantially less support from NIH than basic research. In 2000, Congress passed the Clinical Research Enhancement Act, which directed NIH to expand the resources for clinical research. Approximately 10 percent of the total NIH budget goes toward clinical trials, although NCI invests relatively more. Sixteen percent of the 2003 NCI budget went toward clinical trials.46 Clinical trials account for approximately 30 percent of the spending on clinical research overall.

In clinical practice, physicians usually have several choices and must choose among different technologies or procedures. Unfortunately, they rarely have access to comparative information on which to base those choices, and the lack of such information reflects a common weakness in our ability to identify optimal strategies in medical care. The Antihypertensive and Lipid Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) illustrates the rare clinical trial that generates evidence necessary to choose among options. The DMIST trial comparing digital with screen-film mammography is another groundbreaking comparative clinical trial.

ALLHAT: A Watershed Trial

Most clinical trials are designed to establish the efficacy and safety of a single treatment compared with an alternative, often a placebo. Clinical trials done to meet FDA requirements for approval to market a drug must include a placebo comparison group except in rare circumstances. Few large clinical trials directly compare the effects of different treatments, and fewer still compare active, standard interventions. ALLHAT was a watershed because it was a large-scale trial that directly compared different FDA-approved drugs already in widespread use—in this case, treatments for hypertension and high cholesterol.

ALLHAT enrolled more than 40,000 participants. The hypertension treatment component was a randomized, double-blind study in which hypertensive patients at high risk for heart attacks were randomly assigned to one of four treatments routinely used for hypertension: doxazosin, lisinopril, amlodipine, and chlorthalidone (Box 6-4). The doxazosin arm of the trial was terminated early because of a higher rate of combined cardiovascular events.42 Final results from the trial showed that, for preventing major coronary events or increasing survival, neither of the newer, more expensive treatments (lisinopril or amlodipine) was superior to the diuretic.64 The ALLHAT data demonstrated that lowering blood pressure is the most important aspect of hypertension management and that the three classes of drugs tested were similarly effective.70 Furthermore, the diuretic had other advantages over both newer drugs, such as better tolerance and fewer cases of heart failure.

BOX 6-4

Treatments for Hypertension Tested in the ALLHAT Comparative Trial. Doxazosin is an alpha-blocker, also used to treat hypertension. Lisinopril is an angiotensin-converting enzyme (ACE) inhibitor that is marketed under two brand names: Zestril® (more...)

Although expensive, the trial cost a fraction of the billions of dollars spent each year on antihypertensive medications. Each year, about $15 billion is spent to treat the 50 to 60 million people in the United States with hypertension.9 Diuretics can cost as little as 10 cents per pill, whereas generic ACE inhibitors cost 63 cents per pill and calcium channel blockers cost $1.93 per pill.72 The American Heart Association estimates that $3.1 billion could have been saved if diuretics had been used instead of the more expensive ACE inhibitors and calcium channel blockers from 1982 to 1992.64

The trial was a cooperative effort among clinical centers, the NIH, and the pharmaceutical companies that produce the leading antihypertensive drugs. The study was funded by the National Heart, Lung, and Blood Institute and Pfizer; the drugs for the hypertension component were provided by Pfizer (amlodipine and doxazosin) and AstraZeneca (atenolol and lisinopril), and Bristol-Myers Squibb provided the drug (pravastatin) for the lipid-lowering treatment arm. The trial cost $125 million and was conducted over 8 years in more than 600 “real-life” clinical settings throughout North America. It met with many challenges, but was ultimately successful.

The success of ALLHAT serves as a model for future large-scale trials, such as those required for screening.58 The trial illustrates the willingness of community practitioners to participate in research with long-term follow-up, the willingness of for-profit industry to co-fund well-conceptualized research overseen by an independent group of scientists, and the willingness of subjects to enroll in head-to-head comparisons of standard interventions. All of these are often cited as barriers to large-scale clinical trials.

This trial is also a reminder of the need for definitive clinical data. Prior to the publication of the ALLHAT data, the use of diuretics as initial therapy for hypertension had been reduced by nearly 50 percent in favor of the newer, more expensive calcium channel blockers and ACE inhibitors—despite the absence of definitive evidence for their superiority.41 Organization of trials along the ALLHAT model has the potential to accelerate the development of the evidence base for making informed choices among the current and emerging options for the early detection of breast cancer.

Engaging the Public in Clinical Studies

Large-scale, well-designed clinical trials are the linchpins for converting the raw potential of new technologies into interventions that improve health and prolong lives. High-quality trials generate high-quality information, but that information accumulates slowly, one person at a time. Indeed, it often takes 3 to 5 years to enroll enough subjects for a scientifically meaningful and statistically valid clinical trial. Subject enrollment is a major roadblock and is the most frequent source of delay in clinical trials.15

The problem of adequate accrual is of broad concern in the medical research community and a series of reports points to certain trends:19,63

  • Fewer than one in six cancer patients is aware of the opportunity to enroll in a clinical trial, and only 2 to 3 percent of cancer patients participate in one.62
  • The most significant positive influences in participation are a physician's recommendation and a relationship of trust between the physician and the patient or volunteer. However, many physicians are reluctant to encourage their patients to participate.
  • There are many reasons why people choose not to participate in clinical trials, including the demands on their time (including traveling to the study site), cumbersome processes for obtaining coverage of their medical expenses associated with participation, and a mistrust of the clinical trials process.5,19,38
  • Compared with whites, African Americans are more reluctant to participate in clinical trials, although the representation of racial and ethnic minorities in NCI clinical trials is comparable to their representation in the general population.44,62
  • Many participants are motivated by the desire to help others and take pride in their involvement.

However, there are different classes of clinical trials, and they pose very different challenges for accrual. Trials that evaluate cancer risk or screening strategies in healthy, symptom-free people are fundamentally different from those that evaluate treatment interventions for cancer patients. The commonly perceived advantage of participating in a clinical trial—receiving the most “advanced” treatment for a life-threatening disease—does not apply to screening or detection trials. Cancer detection and screening trials generally require vast numbers of participants—as many as 20,000 to 50,000—because the endpoint (cancer incidence or death) is infrequent. For example, because roughly 5 cases of breast cancer occur per year for every 1,000 women over age 40, a study would require about 10,000 women to accrue 50 breast cancers per year.
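The sample-size arithmetic behind this example can be sketched as a simple expected-count calculation (not a formal power analysis; the incidence figure is the one quoted above):

```python
def women_needed(target_cases_per_year, annual_incidence_per_1000):
    """Number of women to enroll so the expected yearly case count hits the target."""
    return target_cases_per_year / (annual_incidence_per_1000 / 1000)

# Roughly 5 breast cancers per 1,000 women over age 40 per year:
print(women_needed(50, 5))  # -> 10000.0
```

A formal trial design would inflate this figure further to account for statistical power, dropout, and noncompliance, which is why screening trials reach the 20,000 to 50,000 range.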

Cancer detection studies, such as the ongoing DMIST, which is comparing digital with screen-film mammography, require thousands of subjects. But they have an advantage in that they can often be integrated into routine practice: both recruitment of the participants and the study procedures can be conducted within existing organizations (for example, during regular breast screening in one's usual health care facility). Women in the DMIST trial also receive a direct benefit from participating, namely “extra careful” screening, because they are screened with two systems. From this perspective, it is not surprising that enrollment in DMIST has been spectacularly successful.

In contrast, epidemiological studies offer no direct benefit to volunteers; instead, they involve the nuisance of filling out long questionnaires and the risks and discomfort of donating DNA samples. Furthermore, the methodology of these studies requires the investigators to recruit representative members of the public who have specific risk factors for breast cancer, rather than calling for “volunteers.” These subjects are then compared with cases of breast cancer and analyzed with respect to the risk variables under investigation. For these reasons, enrollment in epidemiological studies is particularly challenging. As an example, investigators for the recently completed Long Island Breast Cancer Research Project set out in great detail the steps that were necessary to recruit controls.25 These involved randomly dialing thousands of telephone numbers to identify suitable control subjects under 65 years of age, and using CMS rosters to identify older women. The recruitment drive was bolstered by community service announcements and various other strategies to encourage participation. In the end, 63 percent of those identified as eligible agreed to participate and completed a questionnaire, and 46 percent provided a blood sample for genetic analysis. Even with a major, well-orchestrated effort such as this one, it is difficult to persuade the majority of candidates to donate DNA samples and fill out questionnaires.

Many people decline to participate in genetic testing or research because they fear the results of tests could be used by health and life insurance companies and employers to discriminate against them.16 One study investigated the reasons that relatives of people with hereditary colon cancer would decline an offer of genetic testing, and found nearly 40 percent rated the potential negative effect on their health insurance as the most important reason to not undergo testing.28 Without protections in place, individuals who do agree to participate will represent a self-selected group that could skew research results and interfere with efforts to find better ways of improving breast cancer screening.17

Various strategies for improving enrollment in clinical trials have been tested.18,36,39,54 Passively distributed information, such as brochures, has little effect, whereas personal discussions are more successful. When ALLHAT ran into difficulty meeting its recruitment goal of more than 20,000 African Americans, the study investigators adopted several strategies to accelerate the lagging accrual.58 One of the most effective was a field personnel program to assist selected clinics; as a result, those sites achieved more than 90 percent of their goals. Another strategy was a nationwide advertising campaign, which recruited about 1,500 additional participants at an added cost of $1,100 per participant. Other strategies were based on increasing the number of participating sites. Finally, the investigators increased the reimbursement for participants' health care at some of the clinics. (Other aspects of ALLHAT are discussed above.)

Private breast cancer organizations have had a significant impact on accrual in several critical breast cancer trials. In the mid-1990s, the National Breast Cancer Coalition was instrumental in rescuing the Herceptin® trials (Box 6-5), partly by advising the study investigators on how to redesign the study to make it more acceptable to participants, and partly by campaigning to encourage women to enroll. In contrast, breast cancer advocates initially deterred enrollment in the trials of high-dose chemotherapy with bone marrow transplantation (HDC/BMT). The completion of those trials was delayed for several years because of a widespread but mistaken belief that the HDC/BMT treatment had already been shown to be effective. When well-designed trials were eventually completed, the treatment was shown to be largely ineffective. Over time, breast cancer advocacy groups rallied to support these trials, and they are clearly an important ally in the success of clinical trials in breast cancer (Table 6-3).

BOX 6-5

Herceptin®. Also known as trastuzumab, Herceptin® is a monoclonal antibody that was engineered to target a specific cancer cell protein, HER2 (also called HER2/neu or c-erbB2), and to inhibit tumor growth. Herceptin® is the first (more...)

TABLE 6-3. Participation of Breast Cancer Organizations in Clinical Trial Accrual.

The public has shown tremendous support for breast cancer research. Last year alone, tens of thousands of women ran 26-mile marathons. Thousands more walked 3-day marathons in heroic efforts to reduce the suffering of others from breast cancer. Many more added their support by donating money—millions of dollars altogether.

Major corporations also support breast cancer research. Pink ribbons are everywhere, from stamps to yogurt lids to T-shirts. The Breast Cancer Research Foundation website notes that given two equally matched products, consumers are more likely to choose the one associated with a pink ribbon.

Many of the thousands of women who participate in or donate their support for marathons might also embrace the idea of contributing in other ways, such as participating in clinical research studies. The need for public support in the fight against breast cancer goes beyond dollars, yet much of the public is unaware of the opportunity to contribute through participation in clinical studies.

It could be relatively simple to integrate information about “Other Ways to Help” with publicity about fundraisers. Such campaigns could inform people about the need for tissue samples and for participants in clinical studies. In fact, it is conceivable that organizers of clinical studies could collaborate with race marketers to promote either specific studies, or to conduct a more general campaign to educate the public about the merits of research and the need to donate specimens or time if they are invited to participate in a research study.

Epidemiologic studies needed to identify breast cancer risk factors require carefully selected study populations; self-selected volunteers would not be eligible. Unfortunately, the type of trial for which enrollment is particularly difficult is also the most restrictive in terms of eligible study populations. Nonetheless, there are certain studies for which volunteers could be helpful, such as preliminary trials of novel screening technologies.

Encouraging enrollment in well-designed clinical studies could facilitate the development of more effective approaches to the early detection of breast cancer. Breast cancer advocacy groups, the American Cancer Society, and funders of clinical research studies each bring different areas of expertise and constituencies that could complement each other effectively if they were to collaborate in improving enrollment in clinical studies. Breast cancer advocates are expert in mobilizing support for breast cancer research. They are also attuned to how potential study participants might react to enrollment requirements and could provide time-saving advice on ways that the design of clinical studies might be refined to promote more efficient enrollment, or to identify aspects of a study design that might needlessly deter enrollment. Finally, breast cancer advocacy groups are in an ideal position to promote enrollment through their established outreach programs. Clearly, such collaborations should apply only to studies that are not for financial gain on the part of the researchers or their institutions and that are clearly aligned with the shared goals of researchers and advocates—specifically for reducing mortality from breast cancer.

Will HIPAA Hamper Research?

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a complex federal regulatory effort that has many parts and purposes. It was created to streamline industry inefficiencies in data transfer, improve access to health insurance, better detect fraud and abuse, and ensure the privacy and confidentiality of health information.

The purpose of the HIPAA Privacy Rule, a component of HIPAA, is to establish minimum federal standards for safeguarding the privacy of individually identifiable health information. The privacy and confidentiality of health information available in electronic form was, and still is, a public concern. The use of medical information to target people for marketing, and some well-publicized breaches of individual privacy based on unauthorized use of medical information, fuel that concern.

The HIPAA Privacy Rule went into effect on April 14, 2003. Although the Privacy Rule applies only to “covered entities” (health plans, health care providers, and health care clearinghousesb), it changes the way hospitals, doctors, and health plans must handle personal health information, and it affects how such information can be shared with and among health researchers.2 The intent of HIPAA was not to impede research. Indeed, before the Rule became final, many changes were made from a draft rule issued in August 2002 in an attempt to minimize the effect of the Rule on the conduct of research. The implications and effects on research are still unfolding.

How Researchers Can Obtain Protected Health Information

Central to understanding the Privacy Rule is its definition of “protected health information” (PHI): information about the health of an identifiable individual. PHI is protected by HIPAA; information that is not PHI is not. The Rule also describes how health information can be rendered unprotected (i.e., not PHI), namely through deidentification. Health information is considered deidentified if all of 18 specified identifiers (Box 6-6) have been removed. Alternatively, statistical methods can be used to establish deidentification instead of removing all 18 identifiers, and HIPAA describes that process in detail.52

BOX 6-6

Personal Health Information Identifiers Under HIPAA. Names All geographic subdivisions smaller than a state, such as street address, city, county, precinct, or ZIP code
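As an illustration of how identifier-removal deidentification might work in practice, the sketch below strips identifier fields from a record before sharing; the field names here are hypothetical, and a real implementation would have to cover all 18 identifier types as well as the Rule's special handling of dates, ages over 89, and ZIP codes:

```python
# Hypothetical field names; the real rule enumerates 18 identifier types (Box 6-6).
IDENTIFIER_FIELDS = {"name", "street_address", "city", "zip_code",
                     "birth_date", "phone", "ssn", "medical_record_number"}

def deidentify(record):
    """Return a copy of the record with identifier fields removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

record = {"name": "Jane Doe", "zip_code": "11794", "diagnosis": "DCIS", "age": 58}
print(deidentify(record))  # {'diagnosis': 'DCIS', 'age': 58}
```

Note that this is exactly why deidentified repositories cannot support duplicate detection or longitudinal follow-up: once the identifiers are gone, two records from the same woman cannot be linked.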

HIPAA describes several procedures for obtaining access to PHI (Table 6-4). In general, a researcher will be required either to obtain consent from the person whose information is needed or obtain a waiver of authorization from an Institutional Review Board (IRB) or Privacy Board.

TABLE 6-4. Options for Obtaining Protected Health Information for Research Under HIPAA Privacy Rule.

Impact of HIPAA on Medical Research

The potential effects of the HIPAA Privacy Rule on research are far-reaching.13 Researchers in medical and health-related disciplines rely on access to many sources of health information, from medical records and epidemiological databases to disease registries, hospital discharge records, and government documents reporting vital and health statistics. For this reason, the Privacy Rule is likely to affect numerous areas of research, including clinical research, repositories and databases, and health services research. Population-based research that requires broad and unbiased access to the medical records of community health providers is of special concern. This includes epidemiological, health services, and environmental and occupational health research, as well as post-marketing studies of drugs and medical devices.

Research that involves the establishment of information repositories, including tissue and data repositories, is also of concern. Several of the data resources that are described in this report (for example, large databanks of breast images aggregated across institutions) would be more difficult to establish under HIPAA rules and might not be able to take full advantage of the potential to link data and do longitudinal follow-up. If data or tissue provided to a repository are completely deidentified, it is impossible to identify duplicates or to conduct follow-up of individuals.

The debate over the content and effect of the HIPAA regulations has been fierce over the past four years…. Whatever one's view of the HIPAA regulations, they will form the starting point for future national regulation of medical privacy. In this sense, they are akin to movie contracts, about which one Hollywood executive is reported to have said, “we have to have a contract so we have a basis for renegotiation.”10

George Annas, 2003

New England Journal of Medicine

Variations in interpreting the HIPAA Privacy Rule are contributing to high levels of uncertainty and confusion that have already resulted in delays in research. The variations stem partly from the extreme complexity of the Rule, the details of which run to more than 350 pages.14 The parts of the Rule that relate to research are not easy to identify or to understand. For example, although the Rule's definition of “covered entity” clearly encompasses most, if not all, insurance companies and all hospitals and health plans, researchers working in seemingly similar settings do not apply the definition consistently. In a multisite study of diabetes in youth, for example, the Department of Preventive Medicine at the University of Colorado School of Medicine did not define itself as a covered entity, whereas the Department of Public Health Services at Wake Forest School of Medicine did.

Review of grants and contracts may also be affected. NIH has indicated that it may require applicants to provide plans for acquiring or accessing data under the Privacy Rule in Program Announcements and Requests for Applications. Membership on review committees would then need to be augmented to include the expertise to evaluate those plans.

For radiology in general and clinical imaging research, HIPAA will be a hurdle to web-based access to images. Despite the advantages of having web-based images that physicians can view from any place at any time, many institutions might not allow image distribution beyond their controlled premises before they can address the security and privacy issues raised by HIPAA.

The Privacy Rule Has Far-Reaching Tentacles

Although the bioscience industry might at first seem to be beyond the reach of HIPAA, compliance is “an electronic nightmare expected to surpass many firms' Y2K preparations in both the scope and cost of the required systems changes.”12 Many bioscience companies, such as those doing protein or gene diagnostics, will end up being classified as business associates of, or vendors to, a covered entity. The bioscience industry has developed much of its software in-house, in an environment where a high level of documented security has not been a concern. Indeed, software engineers made it their goal to develop systems open enough for scientists to collaborate on projects, encourage open communication, and extend the scope of research developments.

AAMC Initiative to Gather Data on HIPAA and Research

The Association of American Medical Colleges (AAMC) has been deeply concerned about the effect of HIPAA on biomedical and health research and lobbied vigorously for modifications to earlier versions of the Privacy Rule. After that intense lobbying by the AAMC and numerous other groups, the association concluded that the most effective approach to further mitigation, whether by regulatory change or legislation, will depend on credible evidence of adverse effects of the HIPAA Privacy Rule on ongoing or future research. Thus the AAMC has begun a project to monitor and document the effects of HIPAA on research. The association has developed a network across the various disciplines of medical and health research to build a database and provide an effective mechanism for receiving and recording credible data on HIPAA's impact.10 The AAMC will serve as the lead organization in this network and has asked its members to forward specific cases illustrating the detrimental effects of HIPAA. The AAMC will thus ensure that “credible data are obtained to provide an accurate picture of the effects of HIPAA on medical and health research and inform further advocacy efforts.”

This database should provide an important benchmark to determine whether the new approach to protecting patient privacy does, in fact, have a chilling effect on the “pace and volume of research.” If it does, then it will be important to develop other approaches to protecting patient privacy.


The Role of the FDA

Over the years, many new cancer detection technologies have been proposed and even developed. Unfortunately, many of them proved to be of no value to patients. The role of the FDA is to evaluate manufacturers' claims, so that the public has some assurance that products marketed as FDA-approved at least meet the claims of their manufacturers. In particular, FDA review is designed to safeguard the public against false and exaggerated medical claims—although, unfortunately, some of those claims are beyond the reach of the FDA. The basic requirement for FDA approval is that a product be both “safe and effective” for a specified use. Products that clear the hurdles of FDA review are thus cleared for entry into the medical marketplace, although, as discussed below, some detection and diagnostic tools can be used even without FDA approval.

Although FDA approval grants permission to enter the marketplace, it is no guarantee of success. For example, the T-Scan™ device that measures electrical impedance in breast tissue was approved as an adjunct to mammography by the FDA in 1999, but 4 years later the manufacturer had not sold a single machine in the United States.

The following section provides an overview of the FDA approval process for medical devices, how medical devices can be utilized without FDA approval, FDA efforts at collaborating and fostering communication with industry, and the unique regulatory problems posed by novel in vitro diagnostics, such as genetic tests that might be used in breast cancer diagnosis or risk prediction.

Classification of Devices Determines Their Regulatory Pathwayc

Potential Safety Risk

Medical devices are as varied in type and purpose as Band-Aids® and pacemakers, so claims that the FDA is inconsistent in how it regulates medical devices should not be surprising. The degree of regulatory scrutiny a device receives from the FDA depends on three factors:

  1. How much risk it poses to users;
  2. How different it is from other devices currently on the market; and
  3. The intended use of the device.

How a device “scores” on these three factors determines how much evidence of safety and effectiveness the FDA will require for the device to enter the market or be used for a new medical purpose.

The first step in the FDA approval process for medical devices is to classify the device into one of three categories, which determines how much regulatory control is needed to ensure its safety and effectiveness (Table 6-5). Class I devices pose the least risk of harm to the user and thus require the least FDA oversight. Putting a Class I device on the market is relatively simple. Class II devices pose greater safety risks. Prior to marketing, manufacturers of these devices must meet all the requirements for Class I devices, as well as any existing standards for their product. Those standards can be physical (if a physically similar device already exists) or written (descriptions of the physical attributes of the device). In addition to analytical data demonstrating that the device measures what is claimed—for example, that a genetic test actually measures the gene it claims to measure—the FDA may also require clinical safety and efficacy studies of some Class II devices before considering approval for the market.

TABLE 6-5. Device Classification and Application Requirements for FDA Review.

Class III devices pose the greatest degree of safety risk and thus require the most regulatory scrutiny by the FDA. Manufacturers of Class III devices must submit a “premarket approval application” (PMA) that requires them to provide clinical data showing their devices are safe and effective for the intended uses.

Intended Use

The FDA also considers the intended use of a medical device. A Class II device can be boosted to Class III status if a manufacturer wants to advertise a new claim for how the device can be used, and the FDA decides there is insufficient data on the safety and effectiveness of the device when used for this purpose.

The scope of the claim that the manufacturer intends to make influences the level of evidence of safety and effectiveness that the FDA will require. For example, manufacturers of the endoscopes that physicians commonly use to detect abnormal masses in the gastrointestinal tract never had to provide clinical data on the safety and effectiveness of these devices in detecting tumors, because they do not advertise that claim. Instead, they claim that these devices are tools for providing images of features within the colon or stomach.

But if a device is likely to be used for a specific clinical purpose as opposed to a general indication covering a variety of purposes, then the FDA is likely to require clinical studies to prove the safety and effectiveness of the medical indication for the device. When digital mammography came under FDA scrutiny, “We were not willing, and we have not been willing with breast cancer detection to say, these are just tools [that provide images],” noted David Feigal, Director of the FDA's Center for Devices and Radiological Health.22

Only about 10 percent of devices are approved on the basis of clinical evidence of safety and effectiveness. The rest are approved on the basis of engineering and other performance specifications used to show that the devices are substantially similar to those already on the market, per the 510(k) requirement. Feigal also noted that every business day about 50 new medical devices are brought to market, but that about half of them are not reviewed for safety and efficacy by the FDA.

Table 6-6 lists the devices that have been approved by the FDA for breast cancer detection since 1995.

TABLE 6-6. FDA Device Approvals for Breast Cancer Detection, 1995-2004.

FDA Expands Interactions with Industry

To avoid “surprises” to manufacturers during FDA review of medical devices, the FDA offers many avenues through which industry can communicate or collaborate with the agency in a nonadversarial way. Companies can meet with FDA officials to get advice and feedback about clinical studies they are planning to conduct on their new devices before submitting an official “investigational device exemption” (IDE) application, 510(k), or PMA. An approved IDE application is required to conduct clinical studies of experimental devices prior to seeking marketing approval. Pre-IDE and pre-510(k) or PMA submission meetings can help manufacturers assess whether their studies will meet FDA criteria for safety and effectiveness.

One frustration cited by device manufacturers is that on occasion the FDA has suggested a specific protocol in these meetings, only to require changes at a later date.57 To prevent such developments from occurring, the FDA Modernization Act of 1997 requires the agency to make a written record of meetings with manufacturers. Agreements made during those meetings are binding and not subject to change unless there is a written agreement with the manufacturer or unless the FDA discovers, after the meeting, a new scientific issue that might compromise the safety or effectiveness of the device. In this case, the FDA must give a device sponsor a chance to meet with the agency staff to discuss the new science affecting the sponsor's study protocols.66

Manufacturers of in vitro diagnostic tests also have the opportunity to give the FDA a mock 510(k) application for the agency's comments prior to submitting an official one. Companies can also provide the FDA with basic information about devices they have in the development stage to further discussion with the agency about what they need to do to garner FDA approval of the devices and/or to educate the agency about the new technology they are developing.

To support innovation in medical technology, the FDA also invites companies to offer suggestions on how to develop the appropriate standards, guidance documents, or policies for devices under the agency's purview. In 1995 the agency began offering roundtables on topics such as pharmacogenomics and in vitro diagnostics. Representatives from both industry and the FDA attend these roundtables, which are designed to foster communication and collaboration between these two entities.

Finally, on its website, the FDA offers numerous guidance documents, device advice, and other information to clarify what manufacturers need to do to legally put their devices on the market.

Some Medical Devices Do Not Require FDA Approval

There are a surprisingly large number of ways that medical devices used for cancer screening purposes can enter the market without FDA approval for those indications.

Many devices used for screening were actually approved for other indications. The prostate-specific antigen (PSA) test, for example, was initially approved only as an indicator of prostate cancer progression, but it was widely used “off-label” for many years to screen healthy men for the cancer. Eventually a manufacturer did provide the FDA with a submission for this claim, and since then it has become a commonly reviewed claim and a widely used device. Such “off-label” use of a medical device is legal as long as its manufacturer does not advertise that the device can be used for such a purpose. The manufacturer of the PSA test, for example, cannot advertise that it is a good screening tool for prostate cancer, although clinics and doctors using the test can make such claims.

Many in industry believe that the FDA requires them to study off-label uses of their new medical products before it will approve those products for marketing, but this is not the case. The FDA Modernization Act of 1997 stipulated that the agency cannot impose such requirements.22

Many genetic and other diagnostic tests come on the market without undergoing FDA review for safety and efficacy because they are considered “analyte-specific reagents” or “home-brew” in vitro diagnostics. Analyte-specific reagents are monoclonal antibodies, receptor proteins, and other compounds used for diagnostic purposes to detect and quantify individual substances (such as a specific genetic sequence) in biological specimens. Home-brew in vitro diagnostics are diagnostic tests that are custom-made in individual laboratories by combining several devices or reagents. They are common in university settings; the university provides a test result rather than a diagnostic kit for sale, and the makers of the test are not permitted to market it.

Analyte-specific reagent tests and home-brew in vitro diagnostics used clinically must be performed by a laboratory that meets the highest quality standards set by the Clinical Laboratory Improvement Amendments (CLIA) of 1988, but the tests themselves do not have to be shown to be safe and effective prior to clinical use. GeneTests, an NIH-contracted resource for genetic tests, lists more than 1,000 genetic tests, and as of 2003, only 6 of them had been brought to the FDA for approval. None of the tests for mutations in the BRCA1 and BRCA2 genes has been approved by the FDA, and none is required by law to be.

One of the first proteomics-based diagnostic tests, OvaCheck, which tests for ovarian cancer in blood samples, is scheduled to be released early in 2004. As required, the test will be CLIA-certified, but under existing regulations it will not require FDA approval. However, the FDA has begun to increase its scrutiny of such tests and in February 2004 asked to meet with the company that makes the test to discuss the appropriate regulatory status of the technology.69 Another early application of genomics, OncotypeDX, which claims to predict breast cancer recurrence, appeared on the market in early 2004. The test is CLIA-certified but not FDA-approved.

Devices or the medical procedures using them may bypass a great deal of FDA regulatory scrutiny if they are customized by the doctors or clinics that use them. An example of this is Laser-Assisted In Situ Keratomileusis (LASIK) surgery to improve vision. This surgery is done with a multipurpose FDA-approved laser that is then modified by ophthalmologists to perform the specific surgery needed to correct for nearsightedness or other visual flaws. The LASIK procedure, however, was never shown to be safe and effective prior to its use by ophthalmologists.

The FDA also grants “humanitarian device exemptions” to devices designed to aid the diagnosis or treatment of rare conditions. Manufacturers of these devices must show that they are safe, but are not required to conduct tests of their effectiveness. The costs of such tests would not be balanced by the revenues from a small market, and requiring them would inhibit the development of devices for rare conditions.

Accelerating Medical Technology Development at the FDA

Medical technology developers have long expressed frustration at the rising costs of product development and the uncertainty of the FDA review process. In January 2003, the FDA launched an initiative designed to accelerate the development of new technologies. The initiative has been enthusiastically welcomed by the medical technology community, which predicts that this effort to make FDA reviews more efficient will help to get lifesaving and life-improving technologies to patients faster and reduce the costs associated with bringing innovations to market.3,4

Three primary areas of improvement have been targeted: reducing review delays, improving the quality and efficiency of the review process, and facilitating new product development. These FDA goals are being sought through improving biomedical science, risk-management science, and economic science within the FDA.40 The major changes, some of which the FDA has already begun implementing, are outlined below.67

  • Reducing time delays and overall costs of development
    • Avoiding cycling of application process
    • Increasing communication between the FDA and industry
  • Quality systems approach to the review process
    • Education and training of FDA review staff in latest developments in science and technology
    • Development of review templates to improve consistency
    • Common Technical Document to harmonize application processes of the United States, European Union, and Japan
  • Collaborative clinical guidance development (input from workshops, advisory committee meetings, developers, and scientific community)
    • Guidance development priority areas include oncology, diabetes, and obesity
  • Priority areas of emerging technologies identified
    • Cell and gene therapy
    • Pharmacogenomics
    • Novel drug delivery systems

Many of the companies that are generating genomic or proteomic technology are small start-up firms that lack experience in interacting with the FDA, are unfamiliar with the manufacturing quality controls the agency requires, and lack expertise in running clinical trials. There are 14,000 device companies in the United States, but only 10 percent of them would meet the definition of a large business; in fact, 5,000 are very small, with revenues of $1 million or less and five or fewer employees.

David Feigal has commented that one of the challenges for the FDA is, “How do you reach all of those different firms and entities?” He noted that few device companies take advantage of the meetings they can have with the FDA to discuss research protocols or their data prior to making official submissions. Many, however, do utilize the FDA's Device Advice group, which answers 45,000 telephone inquiries a year and posts information on the agency's website.22

FDA's detailed guidance documents on what is needed for approval of various types of devices expedite the approval process. When a guidance document exists for a Class II product, the manufacturer has about an 85 percent chance of getting the device approved after the first cycle of FDA review of the company's submission (as opposed to having to gather more data and undergo additional review cycles before approval). When there is no guidance, the review process takes, on average, 5 months longer, with only a 45 percent chance of approval on the first cycle.


Overcoming the regulatory hurdles required to get a new cancer detection technology onto the market is no guarantee that the new technique will be readily used. Widespread implementation of new breast cancer detection procedures will depend, in part, on whether federal (Medicare) and private health insurers will pay for these procedures. Reimbursement depends, in turn, on whether the new procedure or device improves clinical outcomes, whether such improved outcomes are relevant to the covered population, and whether insurers are legally mandated to cover the new technique.

Coverage Depends on Proven Clinical Utility

FDA approval of a new technology is not enough to ensure that insurers will pay for it. Health insurers also require proof that use of the new technology will improve the net clinical outcomes of patients, including reductions in morbidity and mortality, changes in management decisions, and improvements in quality of life (Box 6-7, Box 6-8).

Box Icon

BOX 6-7

Blue Cross Blue Shield Association Technology Evaluation Committee Requirements for National Coverage. FDA approval Data must permit conclusions about effectiveness

Box Icon

BOX 6-8

Medicare Requirements for Coverage: Steps to Obtaining Medicare Coverage. Regulatory approval Benefit determination

The use of positron emission tomography (PET) in evaluating palpable breast masses illustrates the importance of changes in patient management. Once such masses are discovered, a biopsy is inevitable, and therefore PET adds nothing to the management approach. CMS has therefore decided not to reimburse for this application of PET. On the other hand, CMS does reimburse when PET is used to monitor response to breast cancer treatment, because the results of such scans will alter how these women are treated. Currently, no other imaging modality serves this purpose. “The magnitude of an improvement has to be clinically meaningful as opposed to quantitatively described,” according to Sean Tunis, director of the Office of Clinical Standards and Quality at CMS.65

Medicare, as well as many private payers, also requires that a new technology be shown to be effective outside the research setting in which it was originally tested. Medicare is particularly interested in knowing whether the new technique will be useful to its older beneficiaries. Because most clinical trials exclude participants older than 65 years, most trials do not have adequate numbers of elderly patients. In some cases, it is reasonable to assume that older patients will benefit as much as younger patients; in other cases, if there is reason to think a technique's performance will differ in the elderly, Medicare will not cover it. According to Alan Rosenberg of WellPoint Health Networks, the effectiveness of a technology “has to be reproduced in a variety of clinical settings,” and WellPoint will not normally pay for it unless it has been shown “effective outside investigational settings.”61

What insurers will pay for also depends on legal statutes. When Medicare was first created, it did not cover screening and preventive services. Since then, Congress has added screening mammography, PSA screening for prostate cancer, and colorectal cancer screening to Medicare's benefit package. Although the Blue Cross Blue Shield Association Technology Evaluation Center could not determine whether digital mammography detects breast cancer better than, or even as accurately as, film mammography, Congress has mandated higher reimbursements for digital mammography.11 The higher reimbursement levels have encouraged increased adoption of the technology before the results of the definitive digital mammography trial, DMIST, are released. Various states have also passed laws that require private insurers such as WellPoint to cover specific procedures or treatments.35

The Catch in Determining Clinical Value

Although insurers are reluctant to pay for a new medical procedure until enough clinical experience shows that it improves net clinical outcomes, acquiring such clinical information can be difficult. Companies developing and marketing the new technology often do not have the resources to conduct the well-designed, definitive studies needed to document a technique's clinical effectiveness.

Research on preventive services often cannot determine outcomes within a technology producer's desired timeline for bringing a product to market. As Sean Tunis of CMS noted, there is a clinical research “Catch-22”: insurance coverage of a new technology would increase its use, providing both some of the resources its developers need to study its clinical value and more clinical experience with the technology. Yet once coverage is granted, there is little incentive (and more likely a disincentive) for companies to gather data and formally evaluate the clinical effectiveness of their new technology.

According to WellPoint's Rosenberg, gathering such information is critical, because research indicates that, nationwide, our health care resources are not spent wisely. A 2003 study that examined Medicare spending found that even though there was as much as a 30 percent difference in spending by state, such regional differences in spending were not associated with significant differences in health outcomes.23,24 But as Tunis pointed out, “There really isn't a place right now in the public or private sector that makes evaluative clinical research a high priority.” Such research would include conducting head-to-head trials of two or more comparable technologies or treatments to see which is more effective. It is not within the NIH mission to conduct a large number of such studies, and they are not a priority for industry, he said. “So there is a big hole in the funding streams of the evaluations of the appropriate clinical uses of new and emerging technologies, particularly as they relate to existing technological alternatives,” Tunis concluded. Rosenberg added, “How do we prioritize spending large sums of money in terms of these new technologies? There is very little opportunity to systematically, as a country, go forward and analyze this.”

Tunis referred to a recent paper on the findings of the IOM Roundtable on Clinical Research that suggested collaborative efforts between public and private organizations involved in the clinical research enterprise should focus on streamlining the overall process.63 CMS is currently participating in a committee composed of private and government health insurers that is trying to prioritize clinical research from the perspective of those who foot most of the health care bills in this country. The committee plans to publish findings with the hope that others will pursue conducting the studies they deem necessary.

Conditional Coverage

Another way around the clinical research Catch-22 is “conditional coverage” of promising new technologies prior to firm evidence that they improve clinical outcomes. Insurance reimbursement would be conditional on the requirement that coverage of the new technology be reevaluated in a few years, during which time studies of the technology's effectiveness would be done. If those studies indicated that the technology did not improve clinical outcomes, insurers would stop reimbursing its costs.

However, once coverage has been granted for a medical procedure or treatment, it may be very difficult to rescind it. Historically, Medicare has had problems withdrawing or limiting coverage for any medical procedure or treatment in the absence of definitive evidence that it is truly useless or harmful.

Another problem with conditional coverage is that companies may not do the studies needed to document clinical utility of their new medical product. The proposed process for conditional coverage of new procedures is akin to that already in operation for the FDA's “accelerated approval” process of new drugs. Accelerated approval, which was initiated in 1987, is based on surrogate endpoint data on the condition that the sponsor confirms actual clinical benefit through well-controlled studies. The effectiveness of this process has varied.

During the post-approval period, nearly all AIDS drugs that received accelerated approval underwent the expanded clinical tests needed to confirm preliminary clinical findings. Those tests revealed clinical benefits, so no drugs had to be withdrawn from the market. This experience stands in stark contrast to the agency's accelerated approval of oncology treatments, for which almost no confirmatory clinical studies have been completed.49 For accelerated approval, the FDA does not specifically require that confirmatory studies be under way at the time of approval; such a requirement might give the agency the added muscle it needs to make accelerated approval work the way it was designed.22

An important prerequisite for conditional coverage is that the decision to cover a new entity must be linked to high-quality studies whose funding is assured. The Committee does not recommend conditional coverage without careful analysis of feasible mechanisms for implementation. Such an analysis would require a separate study, ideally one that focused specifically on the issue of conditional coverage, as opposed to consideration in the context of a study focused on a specific health issue, such as the current study.

Evidence-Based Evaluations Done by Insurers

In the absence of definitive studies of a new technology's clinical utility, WellPoint considers other information when evaluating the technology. This information includes input from clinical experts throughout the country, as well as the degree of acceptance of the product or service in the national organized medical community. For example, WellPoint puts considerable weight on recommendations by well-respected organizations such as the USPSTF, the American Heart Association, and the American Cancer Society. If any of these organizations recommends a screening procedure, WellPoint will likely reimburse its costs.

Rosenberg noted that a WellPoint committee meets annually to evaluate new medical technology. The committee relies on a number of inputs to determine which medical products should be evaluated, such as reviews of recent FDA-approved medical products, requests from WellPoint's claims and medical review units, and information supplied by device manufacturers. Medicare's evidence-based reviews are done in an ad hoc fashion, rather than at regularly scheduled intervals. The agency is currently trying to rectify this ad hoc approach by establishing a medical technology council to determine which products or procedures should be evaluated.

Neither WellPoint nor the BCBSA-TEC considers cost when deciding whether to cover a new medical product or procedure. “We go by our legal contract which does not include cost effectiveness,” said Rosenberg. “Dollars and cents are never presented [during our evaluations].” Cost-effectiveness also does not enter into Medicare's evaluations of new medical technology, although the agency tends to focus its internal evidence-based reviews on technologies that are particularly expensive. Medicare will also conduct evaluations at the request of people or organizations outside the agency who want a new medical product or procedure approved for coverage. But Medicare has only recently started to do extensive evidence-based reviews of medical products; consequently, many widely used techniques, such as MRI, never underwent Medicare scrutiny for clinical utility.

Other technologies bypass such scrutiny by falling under existing coding categories that Medicare has already determined are reimbursable. Digital mammography, for example, falls under the same payment code as standard mammography. “The whole process of developing new codes for new technologies is actually incredibly more important and influential in what technologies become available in the Medicare program than one would think,” said Tunis. He added that “incremental improvements in technology are fairly seamlessly handled in the Medicare program,” because they don't require a new payment code.

Most New Technologies Cannot Be Reimbursed Without New Payment Codes

Current Procedural Terminology (CPT) codes are used by Medicare and Medicaid to reimburse doctors. The development of new CPT codes is critical to new technologies, because without a code, health care providers cannot bill for reimbursement. The CPT code is thus a key step toward facilitating market penetration and broad clinical use of a new technology. The assigning of Medicare payment codes is under the control of the American Medical Association (AMA) and various partner organizations, such as the American College of Radiology and the American Society of Clinical Oncology.

Delays in assigning CPT codes to new medical technologies have long been a source of frustration to technology developers. CPT codes were historically updated once each year, and CMS often took 1 to 2 years to issue the codes, creating barriers to patient access. In the past few years, Medicare reform laws called for changes to streamline Medicare coding for new technologies and procedures. Under the Balanced Budget Refinement Act of 1999, Congress called on CMS to reduce coding delays and respond more promptly to advances in medical technology. As a result, codes for the outpatient prospective payment system are now updated quarterly. (Codes for other payment systems are still updated annually.)

In 2001, the AMA's CPT Editorial Panel established a new category of CPT codes called Category III codes, which are a set of temporary codes intended for tracking emerging technologies. For laboratory tests, these codes represent emerging technologies that may not be performed by many laboratories and may not yet have been approved by the FDA. Review of emerging technology codes is done by the CPT Editorial Panel as part of its procedures to annually update CPT codes. The CPT Editorial Panel determines if a temporary emerging technology code should be converted to a permanent existing technology Category I CPT code or if a new emerging technology code should be established.

Reimbursement Can Be Out of Sync with Real Costs

Once a medical procedure or technology has been approved for coverage, the next step is a determination of the appropriate payment amount for reimbursement. Although mammography has long been covered by Medicare and private payers, there has been much discussion about the fact that the reimbursement that health care providers receive for mammography services is less than the cost of providing those services. Indeed, mammography is widely considered a money-losing service that is in effect subsidized by other radiology services.

Mammography rates were raised in 2002, but they are still estimated by most radiology services to be below real costs (see Chapter 2).


The need to develop stronger links between basic and clinical research has become increasingly clear. In addition, the cost and complexity of clinical research have grown over the years, making it increasingly important to capture the economies of scale that come from establishing multi-institutional collaborative networks. Initiatives to achieve these goals have been established at all levels of the research enterprise, from interagency projects such as the Interagency Council on Biomedical Imaging in Oncology (ICBIO) to specific projects such as the National Digital Mammography Archive (NDMA). Six of these initiatives are described below. They are not a comprehensive summary of all such initiatives, but rather a set of examples that are particularly relevant to breast cancer detection.

AHRQ Initiative for Research Networks

Since 1999, the AHRQ has issued a series of research funding announcements that support projects on the translation of research findings into “sustainable improvement in clinical practice and patient outcomes.” In 2002, the NCI articulated that a key part of its mission was the rapid movement of research discoveries through program development into service delivery, which included projects designed to “identify and overcome infrastructure barriers to the adoption of evidence-based interventions in clinical and public health systems that serve the American public, with a particular emphasis on reaching those who bear the greatest burden of cancer.” In 2003, AHRQ and NCI issued a joint request for applications for research projects that assess the use of interventions to translate research into practice in the primary care setting and measure the impact of those interventions.

Although there is broad agreement about the urgent need to accelerate the rate of uptake of evidence-based findings and tools into practice, considerable uncertainty persists about the best strategies for doing this and the setting(s) in which each strategy is most effective. The majority of strategies that have been studied focus on changing clinical behavior. From these trials, it is known that passive diffusion of information (such as distribution of educational materials or lectures) is generally ineffective as a method of promoting behavioral change. Studies of the multidimensional challenges of translating research into everyday practice are hampered by the current concentration of clinical research in academic settings.

Cancer Biomedical Informatics Grid (caBIG)

The pilot project for the Cancer Biomedical Informatics Grid (caBIG)—launched in July 2003 by the National Cancer Institute Center for Bioinformatics (NCICB)—is an attempt to create an open-source, open-access cancer information network. With the rapid evolution of biomedical research technology, the various disciplines of cancer research have been generating enormous amounts of data. However, discrete fields of cancer research, such as radiology, molecular biology, and epidemiology, have no direct means of communicating and sharing information. Thus the main goal of the caBIG project is to enable researchers around the world to share tools, data, applications, and technologies according to agreed-upon standards.

In its pilot phase, which is scheduled to be completed in 2006, the NCICB will work with selected cancer centers to join their expertise and infrastructure into a common web of communications, data, and applications.50 Currently, there is no common mechanism for individuals, institutions, or private companies to easily share data, and there is no common standard that researchers use. caBIG will attempt to overcome these obstacles to collaboration by implementing several streamlining initiatives. Most importantly, to facilitate data sharing, the network will attempt to unify terminology, data sets, and deployment among all the participating cancer centers and the NCI.50 Another major goal of caBIG, a standardized data repository, may facilitate additional insight from previously published datasets. The infrastructure is also intended to facilitate sharing of data among consortium groups prior to publication or public release. Finally, caBIG will attempt to integrate several isolated disciplines, potentially increasing the efficiency and cost-effectiveness of most aspects of cancer research.

Ultimately, the development of this unique data-sharing platform is intended to allow research groups to tap into the rich collection of emerging cancer research data while supporting their individual investigations in an attempt to accelerate the pace of cancer research. For example, a comprehensive and standardized infrastructure could facilitate collaborations among centers and may result in quicker, less expensive, and more easily coordinated multi-institutional trials. If successful, this project may have a significant positive impact on translating basic research into better patient outcomes.

ACRIN: Network for Cooperative Development of Imaging Technology

ACRIN is an organization of institutions, funded by the NCI, that manages clinical trials of cancer-related imaging technologies. The first large-scale cooperative imaging trials group was the Radiology Diagnostic Oncology Group, established by the NCI with Harvard Medical School, the American College of Radiology, and 45 institutions throughout the country. During its existence from 1987 to 1997, it evaluated nine cancers in terms of staging and follow-up and produced approximately 100 articles and abstracts. It was succeeded by ACRIN, which has been in operation since March 1999 and is funded by the NCI at least through 2007.

ACRIN offers a unique opportunity to assess emerging technologies and determine their optimal use by providing both funds and an infrastructure for multi-institutional clinical trials. This arrangement allows trials, both large and small, to recruit outstanding researchers, gain access to new technologies, and produce high-quality results. More specifically, ACRIN facilitates the standardization, development, and implementation of trials, including data acquisition and management, protocol design and biostatistical analysis, monitoring and quality assurance, financial management, and reporting of trial results. In addition, all ACRIN trials include measures of cost-effectiveness and quality of life, except for individual trials in which investigators present compelling reasons why such measures would not be useful.

As of March 2004, ACRIN is conducting seven trials, with two others conditionally approved for development. Several of the trials are dedicated to breast cancer imaging. One approved trial that will soon begin enrollment will study ultrasound as a screening tool for breast cancer, and two other trials are analyzing the role of MRI in breast cancer: one for monitoring breast cancer treatment results and another for screening of the contralateral breast. One of the largest ACRIN studies is DMIST, which is comparing digital with screen-film mammography. The trial reached its accrual goal of 49,520 participants in November 2003, but the 1-year follow-up results and data analysis will not be complete until 2005.8

As new technologies emerge, networks like ACRIN will be at the forefront. Although the trials are designed to answer specific questions regarding screening, the data collected will also be useful in developing mathematical models that evaluate the incorporation of new techniques, such as risk stratification and nonimaging screening methods.30 (Several of the ACRIN trials involve the collection of both biological and imaging data.) As technology evolves over the next 20 years, from gross anatomic and pathologic imaging to molecular imaging of physiology and metabolism, ACRIN is poised to be involved in the clinical validation of these future technologies.

Overall, ACRIN has the potential to improve clinical practice and patient outcomes by identifying the appropriate use of imaging technologies through rigorous, large-scale clinical studies that otherwise would not be possible for small-scale organizations to conduct. ACRIN also provides a unique opportunity for imaging professionals to participate in rigorous, multicenter clinical trials and learn about how high-quality research is conducted.7

NIH Roadmap for Medical Research

In September 2003, NIH director Elias Zerhouni announced a 5-year plan, known as the NIH Roadmap for Medical Research. The goal of the Roadmap is to reduce the time it takes to turn basic knowledge into tangible benefits—for example, better technologies for breast cancer detection. It is based on a collection of NIH-wide initiatives designed to transform the way research is done at the agency, and is organized around three broad themes:

  1. New pathways to discovery,
  2. Research teams of the future, and
  3. Reengineering the clinical research enterprise.

The strategic initiatives to be funded under the NIH Roadmap will address critical roadblocks and knowledge gaps that currently constrain rapid progress in biomedical research.

Radiology and the emerging field of molecular imaging play prominent roles in the Roadmap, factoring into each of the three broad themes listed above. The theme of reengineering the clinical research enterprise is particularly relevant to what the Committee believes is most needed to promote the development of more effective approaches to the early detection of breast cancer; it is described on the NIH website as “undoubtedly the most difficult but most important challenge identified by the NIH Roadmap process.”53 This theme is further subdivided into three initiatives (translational research, clinical workforce training, and enhancement of clinical research networks), all of which address the Committee's conclusion that basic research should be integrated with technology development and assessment (see Box 6-9).

Box Icon

BOX 6-9

Reengineering the Clinical Research Enterprise. Over the years, clinical research has become more difficult to conduct. However, the exciting basic science discoveries currently being made demand that clinical research continue and even expand. This is (more...)

At present, the Roadmap does not specifically address the need for research to optimize the value of new technologies in clinical practice, which the Committee believes is also important.

Interagency Council Counsels Technology Developers

The Interagency Council on Biomedical Imaging in Oncology (ICBIO) was established in 1999 to bring together technology developers and representatives of the federal government to expedite the process of bringing new products to market. The multiagency group includes NCI, FDA, and CMS. It is another example of federal agencies working proactively with early-stage technology developers, many of whom have little experience with regulatory processes and often founder as a result, to help them avoid wasting time and money in what is normally a long and expensive process.

The Council provides advice to medical technology developers on the spectrum of scientific, regulatory, and reimbursement issues related to developing an imaging device or technology. Any business or academic investigator developing a device or technology relevant to biomedical imaging in cancer may submit a request to make a presentation, and small businesses are particularly encouraged to apply. A presenter typically meets with the Council for an informal, confidential discussion with emphasis on helping the presenter develop an effective approach for FDA approval and streamlining the process of coverage and reimbursement decisions from CMS.

The Council hosts an annual conference on biomedical imaging in oncology, designed to identify areas of new biomedical opportunity and address challenges in the cancer imaging community, focusing on the regulatory, coverage, and reimbursement issues associated with more developed and established technologies.

Another value of ICBIO is that it coordinates FDA and CMS discussions of new technologies, with the potential of easing an oft-cited bottleneck in technology development.

Developers of early-stage medical technology have long commented that the processes of FDA and CMS review are so unpredictable and burdensome that they unduly impede the development of innovative technologies.33 ICBIO is one of a series of proactive strategies that federal agencies have adopted in recent years to address these problems.

National Digital Mammography Archive

As digital imaging technology becomes increasingly cost-effective, mammography is expected to move away from a film-based format. This transition will also increase opportunities for electronic sharing of images, data, and other information among a wide network of clinicians and researchers. To this end, researchers at the University of Pennsylvania, along with collaborators at the Universities of Chicago, North Carolina, and Toronto and contractors at Oak Ridge National Laboratory in Tennessee, have assembled and tested a prototype for a national database, the National Digital Mammography Archive (NDMA).1

Mammography services could be greatly streamlined if breast imagers were able to examine, from their own facility, mammographic images stored at multiple sites. This would eliminate the need to physically transfer mammograms from site to site, and would go a long way toward ending the all too common frustration and delays caused by lost mammograms. The project tests the system's ability to store and instantly retrieve vast numbers of high-quality digital mammograms from distant sites. Medical image data differ from other types of data because the file sizes are large (hundreds of megabytes per exam) and the required turnaround time is short. The NDMA system exploits the speedy content-delivery capabilities of Internet2, which make it feasible to transfer large quantities of medical image data over low-cost, high-speed wide-area networks. Cumbersome files will no longer have to be mailed in hard-copy format. The NDMA can also facilitate consultation and collaboration among physicians on difficult cases, particularly those that occur in underserved areas. For example, researchers at the University of Toronto are using a mobile van to download mammograms in remote locations.45 These functions may be further enhanced by the planned development of the NDMA as a central resource for computer-aided diagnosis. InfoWorld, a media group that specializes in information technologies, recognized the NDMA in 2002 as the #1 project that best exemplifies the implementation of innovative technology.
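
The bandwidth argument above can be illustrated with a back-of-envelope calculation. The exam size and link speeds below are illustrative assumptions for the sketch, not measurements from the NDMA project:

```python
# Why mammography archives need high-bandwidth networks: time to move one
# exam over links of different speeds (protocol overhead ignored).

def transfer_seconds(file_mb: float, link_mbps: float) -> float:
    """Seconds to move a file of `file_mb` megabytes over a link of
    `link_mbps` megabits per second (1 byte = 8 bits)."""
    return (file_mb * 8) / link_mbps

exam_mb = 200  # an exam of "hundreds of megabytes"; 200 MB assumed here

for label, mbps in [("T1 line (1.5 Mbps)", 1.5),
                    ("100 Mbps LAN", 100),
                    ("1 Gbps Internet2 link", 1000)]:
    print(f"{label}: {transfer_seconds(exam_mb, mbps):,.1f} s")
```

At these assumed rates, a single exam that takes roughly 18 minutes over a T1 line moves in under 2 seconds over a gigabit link, which is why the short turnaround times the text describes depend on Internet2-class networks.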

Initiated in 2000 with a 3-year grant from the National Library of Medicine's Next Generation Internet initiative, the NDMA project went live in 2002 at the four participating institutions (Figure 6-4). The pilot archive, comprising digital images and associated information, can be accessed through web portals at each of the four institutions. With continued funding, the network is expected to expand gradually to connect approximately 2,000 mammography facilities.45 This will be accomplished by constructing a few large regional archives distributed across the country, linked to smaller, more local archives that store the data collected within the previous 2 to 3 years, which in turn serve individual hospitals, universities, and other health care institutions through secure portals that can both send and receive information. Currently, a single area archive connects all of the participating institutions.

FIGURE 6-4. Architecture of the National Digital Mammography Archive (NDMA).


Architecture of the National Digital Mammography Archive (NDMA). Courtesy of Dr. Mitchell Schnall and Pat Payne.

During the first 3 years of the project, researchers enrolled about 10 patients per day, uploading their mammography data to the NDMA.68 Archived images are primarily derived from digital mammograms; films have also been digitally scanned for inclusion in the archive, but these produced lower-quality images.45 Mammography reports, conforming to BI-RADS® guidelines, are also posted to the archive. In addition to digital mammography, the NDMA can store MRI, ultrasound, and other imaging formats that conform to the binary standard known as Digital Imaging and Communications in Medicine (DICOM).

The expanded capabilities of Internet2, also known as the Next Generation Internet, are essential to the efficient storage, retrieval, and security of mammography information. Indeed, devising a means of storing enormous quantities of data was one of the most significant challenges. Unlike the standard Internet, the bandwidth and technology of Internet2 can accommodate the storage of very large digital image files (which are predicted to exceed the management and storage capacity of individual breast center sites) and enable their rapid transfer across the network.1 Grid computing addresses what Robert Hollebeek, the NDMA's chief architect, calls "the trick of making use of digital images, indexing them, and delivering them to hospital locations on demand." The Internet2 "grid" framework is also key to ensuring patient privacy and confidentiality, as required by the HIPAA Privacy Rule. Multiple levels of system security include access control, encryption, and the use of virtual private networks, as well as confidentiality safeguards for research purposes that strip out personal information that could be used to identify individual patients.
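
The de-identification step mentioned above can be sketched in miniature. Real systems operate on DICOM headers with a DICOM toolkit; here a plain dictionary stands in for the tag set, and the tag names, record values, and pseudonym scheme are illustrative assumptions, not the NDMA's actual procedure:

```python
import hashlib

# Illustrative set of identifying attributes to remove before images are
# released for research (tag names modeled loosely on DICOM attributes).
IDENTIFYING_TAGS = {"PatientName", "PatientID", "PatientBirthDate",
                    "PatientAddress", "ReferringPhysicianName"}

def deidentify(header: dict) -> dict:
    """Return a copy of the header with identifying tags removed and a
    stable pseudonym substituted for the patient ID, so that multiple
    exams from the same patient can still be linked for research."""
    cleaned = {k: v for k, v in header.items() if k not in IDENTIFYING_TAGS}
    digest = hashlib.sha256(str(header.get("PatientID", "")).encode()).hexdigest()[:8]
    cleaned["PatientID"] = f"ANON-{digest}"
    return cleaned

record = {"PatientName": "Doe^Jane", "PatientID": "12345",
          "PatientBirthDate": "19520301", "Modality": "MG",
          "StudyDate": "20031107"}
print(deidentify(record))
```

Note that hashing the original ID, as done here for simplicity, still allows linkage by anyone who knows the ID; a production archive would use a protected lookup table or salted scheme as part of the layered safeguards the text describes.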

With these safeguards in place, the NDMA constitutes a rich reserve of data that can be "mined" for research and education. Epidemiologists could, for example, use the database to compare breast cancer incidence and prevalence among women of various ages or ethnicities, or in different areas of the country. A national teaching file is being developed as part of the NDMA project to provide teaching and testing material for mammography training programs.1,68 Currently, teaching cases for the training of radiologists and mammographers tend to be developed separately at individual institutions, which limits students' exposure to the cases that occur at their own medical center. Some day, radiologists may be able to annotate mammogram images with specific location data and upload them to the NDMA for inclusion in case files for teaching, testing, and advanced training.48

In the future, the NDMA could link to similar databases under development in the United Kingdom, France, Germany, and Japan to create an international mammography archive.31 The U.K. project, which is jointly funded by that nation's government and IBM, resembles the NDMA in size, scope, and design. These expanded, global networks offer the potential of even greater opportunities for research, education, and the efficient exchange of patient information.

Technology Assessment Centers for Breast Cancer Detection

In contrast to the relative wealth of resources for discovery research, very limited resources are devoted to the clinical testing of new technologies for breast cancer detection. Companies developing new technologies often hire academic investigators to run their FDA trials. These trials tend to have limited aims: to prove the safety and efficacy of the new products for the purpose of marketing and selling them. FDA approval does not require assessment of a new technology's utility in clinical practice. The real clinical utility of a new technology depends on how it will be used, or combined with other tests, and on which population of women at risk for breast cancer it will be applied to. Sensitivity, specificity, and diagnostic accuracy all vary with the clinical question being addressed and the population being tested. So, although a device may meet FDA's requirements for marketing, deciding whether it adds value to the decision-making process or merely adds cost is a more complicated question for physicians and their patients. Unfortunately, because little clinical testing of devices is done after FDA approval, the adoption of new technologies by users such as radiologists too often depends more on marketing hype and the need to be perceived as having the latest and greatest products than on clear evidence that those products are really useful to patients.
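
The claim that accuracy varies with the population tested follows directly from Bayes' rule: the same sensitivity and specificity yield very different positive predictive values at screening versus diagnostic prevalences. The figures below are illustrative, not the performance of any specific device:

```python
# Positive predictive value (PPV): the probability that a woman with a
# positive test actually has disease, for a given disease prevalence.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_pos = sensitivity * prevalence               # P(test+ and disease)
    false_pos = (1 - specificity) * (1 - prevalence)  # P(test+ and no disease)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.90, 0.90  # assumed test characteristics

# General screening population (~0.5% prevalence, assumed) vs. a
# high-risk diagnostic population (~20% prevalence, assumed):
print(f"screening:  PPV = {ppv(sens, spec, 0.005):.1%}")
print(f"diagnostic: PPV = {ppv(sens, spec, 0.20):.1%}")
```

Under these assumptions, the identical device yields a PPV of roughly 4 percent in a screening setting but nearly 70 percent in a diagnostic one, which is why FDA clearance in one context says little about clinical value in another.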

Currently, very few clinical trials are funded annually to determine whether new technologies might improve the detection and/or diagnosis of breast cancer. Such trials require access to patient populations willing to undergo extra experimental tests, as well as a cadre of investigators who are skilled in trial design and execution. As a rule, academic medical centers do not consider the clinical testing of new technologies to be part of their mission. Multicenter, collaborative studies offer an effective way to meet the need for timely and generalizable clinical evaluations of imaging technologies.26

The NIH uses many criteria in determining the need to establish specific research center programs, but certain criteria are applied across the board, each of which applies to centers for research on developing, assessing, and implementing new technologies for breast cancer detection (Box 6-10).34

Box Icon

BOX 6-10

NIH Criteria for Establishing Center Programs. The scientific opportunities and/or public health needs that the program would address have high priority. The center would provide an organizational environment that would facilitate activities that are (more...)

The model of Comprehensive Cancer Centers, and their utility in testing new drug therapies, could be applied to the testing of new technologies. Centralized resources that bring together imaging experts (including scientists who can adapt a technology to a new clinical problem), patients willing to participate in clinical trials, and institutions whose missions include the application and testing of new technologies would provide an infrastructure allowing new devices to be tested and made available to those who need them far more systematically (and quickly) than is possible under the current system. In addition, endpoints other than diagnostic accuracy, such as cost-effectiveness and quality of life, could be studied centrally and more uniformly in such centers.


National Digital Mammography Archive. May 1, 2001. Web Page. Available at:
AcademyHealth. Playing by New Rules: Privacy and Health Services Research. 2003. Background paper for the April 29, 2003 workshop.
AdvaMed. AdvaMed Commends FDA Commissioner's Plan to Speed Review Times; Effort to Complement MDUFMA Performance Goals. Aug 5, 2003. Web Page. Available at:
AdvaMed. AdvaMed Welcomes President Bush's Announcement to Nominate Mark McClellan as New CMS Administrator. Feb 20, 2004. Web Page. Available at:
Advani AS, Atkeson B, Brown CL, Peterson BL, Fish L, Johnson JL, Gockerman JP, Gautier M. Barriers to the participation of African-American patients with cancer in clinical trials: a pilot study. Cancer. 2003;97(6):1499–1506. [PubMed: 12627515]
Agency for Healthcare Research and Quality. Summary, Evidence Report/Technology Assessment. Rockville, MD: Agency for Healthcare Research and Quality; 2001. Diagnosis and Management of Specific Breast Abnormalities. AHRQ Publication No. 01-E045.
American College of Radiology Imaging Network (ACRIN) ACRIN—Frequently Asked Questions. 2004. [Accessed March 4, 2004]. Web Page. Available at:
American College of Radiology Imaging Network (ACRIN) Digital Mammographic Imaging Screening Trial (DMIST) 2004. [Accessed March 4, 2004]. Web Page. Available at:
American Heart Association. Heart Disease and Stroke Statistics—2002 Update. Dallas, TX: American Heart Association; 2001.
Association of American Medical Colleges. Group on Institutional Advancement. 2003. [Accessed April 4, 2003]. Web Page. Available at:
Blue Cross Blue Shield Association Technology Evaluation Center. Full Field Digital Mammography. Tec Assessment Program. 2003;17(7):1–22.
Bole K. Bio-IT World. Decoding HIPAA: Are You Ready? 2003. [Accessed February 2003]. Web Page. Available at:
Bonetta L, Dove A, Watanabe M. The road to research is paved with restrictions. Nat Med. 2003;9(6):630. [PubMed: 12778143]
CenterWatch. Projecting HIPAA's Impact. CenterWatch Newsletter. 2003;10(5)
CenterWatch. Breaking the development speed barrier. CenterWatch Newsletter. 2002;9(6)
Cho MK, Sankar P, Wolpe PR, Godmilow L. Commercialization of BRCA1/2 testing: practitioner awareness and use of a new genetic test. Am J Med Genet. 1999;83(3):157–163. [PMC free article: PMC2225442] [PubMed: 10096590]
Collins FS, Watson JD. Genetic discrimination: time to act. Science. 2003;302(5646):745. [PubMed: 14593134]
Corbie-Smith G, Ammerman AS, Katz ML, St Georg DM, Blumenthal C, Washington C, Weathers B, Keyserling TC, Switzer B. Trust, benefit, satisfaction, and burden: a randomized controlled trial to reduce cancer risk through African-American churches. J Gen Intern Med. 2003;18(7):531–541. [PMC free article: PMC1494890] [PubMed: 12848836]
Cox K, McGarry J. Why patients don't take part in cancer clinical trials: an overview of the literature. Eur J Cancer Care (Engl) 2003;12(2):114–122. [PubMed: 12787008]
Division of Adult and Community Health, National Center for Chronic Disease Prevention and Health Promotion, Centers for Disease Control and Prevention. Behavioral Risk Factor Surveillance System Online Prevalence Data. 1995. [Accessed June 1, 2004]. Web Page. Available at:
Eisenberg J, Zarin D. Health technology assessment in the United States: Past, present, and future. Int J Technol Assess Health Care. 2002;18(2):192–198. [PubMed: 12053419]
Feigal D. Institute of Medicine Workshop: From Development to Adoption of New Approaches to Breast Cancer Detection and Diagnosis. Washington, DC: The Institute of Medicine of the National Academies; 2003. Challenges in Assessing the Safety and Efficacy of Cancer Detection Devices.
Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending. Part 2: health outcomes and satisfaction with care. Ann Intern Med. 2003;138(4):288–298. [PubMed: 12585826]
Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138(4):273–287. [PubMed: 12585825]
Gammon MD, Neugut AI, Santella RM, Teitelbaum SL, Britton JA, Terry MB, Eng SM, Wolff MS, Stellman SD, Kabat GC, Levin B, Bradlow HL, Hatch M, Beyea J, Camann D, Trent M, Senie RT, Garbowski GC, Maffeo C, Montalvan P, Berkowitz GS, Kemeny M, Citron M, Schnabe F, Schuss A, Hajdu S, Vincguerra V, Collman GW, Obrams GI. The Long Island Breast Cancer Study Project: description of a multi-institutional collaboration to identify environmental risk factors for breast cancer. Breast Cancer Res Treat. 2002;74(3):235–254. [PubMed: 12206514]
Gatsonis C, McNeil BJ. Collaborative evaluations of diagnostic tests: experience of the Radiology Diagnostic Oncology Group. Radiology. 1990;175(2):571–575. [PubMed: 2183290]
Goold SD, Vijan S. Normative issues in cost effectiveness analysis. J Lab Clin Med. 1998;132(5):376–382. [PubMed: 9823931]
Hadley DW, Jenkins J, Dimond E, Nakahara K, Grogan L, Liewehr DJ, Steinberg SM, Kirsch I. Genetic counseling and testing in families with hereditary nonpolyposis colorectal cancer. Arch Intern Med. 2003;163(5):573–582. [PubMed: 12622604]
Health Care Information Center. Medicine and Health. 2002;58(38)
Hillman BJ, Schnall MD. American College of Radiology Imaging Network: future clinical trials. Radiology. 2003;227(3):631–632. [PubMed: 12773670]
IBM News-Australia. Oxford University, IBM and UK Government to build massive computing grid for breast cancer screening and diagnosis. [Accessed August 21, 2003]. Web Page. Available at:
Institute of Medicine. Assessing Medical Technologies. Washington, DC: National Academy Press; 1985.
Institute of Medicine. Mammography and Beyond: Developing Technologies for the Early Detection of Breast Cancer. Washington, DC: National Academy Press; 2001.
Institute of Medicine. NIH Extramural Center Programs: Criteria for Initiation and Evaluation. Washington, DC: The National Academies Press; 2004. [PubMed: 20669404]
Kaiser Family Foundation. State Mandated Benefits: Contraceptives. 2002. [Accessed July 27, 2003]. Web Page. Available at:
Klabunde C, Kaluzny A, Ford L. Community Clinical Oncology Program participation in the Breast Cancer Prevention Trial: factors affecting accrual. Cancer Epidemiol Biomarkers Prev. 1995;4(7):783–799. [PubMed: 8672997]
Kramer BS, Gohagan JK, Prorok PC. Cancer Screening: Theory and Practice. New York: Marcel Dekker; 1990.
Lara PN Jr, Higdon R, Lim N, Kwan K, Tanaka M, Lau DH, Wun T, Welborn J, Meyers FJ, Christensen S, O'Donnell R, Richman C, Scudder SA, Tuscano J, Gandara DR, Lam KS. Prospective evaluation of cancer clinical trial accrual patterns: identifying potential barriers to enrollment. J Clin Oncol. 2001;19(6):1728–1733. [PubMed: 11251003]
Marsden J, Bradburn J. Patient and clinician collaboration in the design of a national randomized breast cancer trial. Health Expect. 2004;7(1):6–17. [PubMed: 14982495]
McClellan MB. Commissioner of Food and Drugs. Joint Economic Committee; 2003. Technology and innovation: their effects on cost growth of healthcare.
McCormack J. ALLHAT—so what? J Inform Pharmacother. 2003;(12)
Messerli FH. Doxazosin and congestive heart failure. J Am Coll Cardiol. 2001;38(5):1295–1296. [PubMed: 11691497]
Michaelson JS, Silverstein M, Wyatt J, Weber G, Moore R, Halpern E, Kopans DB, Hughes K. Predicting the survival of patients with breast carcinoma using tumor size. Cancer. 2002;9(4):713–723. [PubMed: 12209713]
Mouton CP, Harris S, Rovi S, Solorzano P, Johnson MS. Barriers to black women's participation in cancer clinical trials. J Natl Med Assoc. 1997;89(11):721–727. [PMC free article: PMC2608280] [PubMed: 9375475]
Murray W. Cancer's new enemy. New Architect. 2002;7:10–12.
National Cancer Institute. Plans & Priorities for Cancer Research: The Nation's Investment in Cancer Research for Fiscal Year 2003. Bethesda, MD: National Cancer Institute; 2001. FY2003 Bypass Budget.
National Cancer Institute. Summary, Fourth National Forum on Biomedical Imaging in Oncology. 2003. [Accessed August 21, 2003]. Web Page. Available at:
National Cancer Institute. Fourth National Forum on Biomedical Imaging in Oncology Meeting Summary. Bethesda, MD: National Cancer Institute; 2003.
National Cancer Institute. Understanding the Approval Process of New Cancer Treatments: A Short History. 2003. [Accessed July 27, 2003]. Web Page. Available at:
National Cancer Institute. caBIG at a Glance: Overview of Activities and Accomplishments to Date. 2003. [Accessed August 27, 2003]. Web Page. Available at:
National Cancer Institute. Cancer Research Portfolio. 2004. [Accessed February 23, 2004]. Web Page. Available at:
National Institutes of Health. Protecting Personal Health Information in Research: Understanding the HIPAA Privacy Rule. Bethesda, MD: National Institutes of Health, Department of Health and Human Services; 2003.
National Institutes of Health. Re-engineering the Clinical Research Enterprise. 2004. [Accessed February 20, 2004]. Web Page. Available at:
Paskett ED, Cooper MR, Stark N, Ricketts TC, Tropman S, Hatzell T, Aldrich T, Atkins J. Clinical trial enrollment of rural patients with cancer. Cancer Practice. 2002;10(1):28–35. [PubMed: 11866706]
Petitti DB. Meta-Analysis, Decision Analysis, and Cost-Effectiveness Analysis. 2nd ed. New York: Oxford University Press; 2000.
Petricoin EF, Ardekani AM, Hitt BA, Levine PJ, Fusaro VA, Steinberg SM, Mills GB, Simone C, Fishman DA, Kohn EC, Liotta LA. Use of proteomic patterns in serum to identify ovarian cancer. Lancet. 2002;359(9306):572–577. [PubMed: 11867112]
Pisano ED. Current status of full-field digital mammography. Radiology. 2000;214(1):26–28. [PubMed: 10644097]
Pressel S, Davis BR, Louis GT, Whelton P, Adrogue H, Egan D, Farber M, Payne G, Probstfield J, Ward H. Participant recruitment in the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) Control Clin Trials. 2001;22(6):674–686. [PubMed: 11738123]
Rao PN, Levine E, Myers MO, Prakash V, Watson J, Stolier A, Kopicko JJ, Kissinger P, Raj SG, Raj MH. Elevation of serum riboflavin carrier protein in breast cancer. Cancer Epidemiol Biomarkers Prev. 1999;8(11):985–990. [PubMed: 10566553]
Rogers EM. Diffusion of Innovations. 4th ed. New York: Free Press; 1995.
Rosenberg A. Private Payers' Perspectives on Adoption of New Breast Cancer Detection Technologies. Institute of Medicine Workshop: From Development to Adoption of New Approaches to Breast Cancer Detection and Diagnosis. 2003
Sateren WB, Trimble EL, Abrams J, Brawley O, Breen N, Ford L, McCabe M, Kaplan R, Smith M, Ungerleider R, Christian MC. How sociodemographics, presence of oncology specialists, and hospital cancer programs affect accrual to cancer treatment trials. J Clin Oncol. 2002;20(8):2109–2117. [PubMed: 11956272]
Sung NS, Crowley WF Jr, Genel M, Salber P, Sandy L, Sherwood LM, Johnson SB, Catanese V, Tilson H, Getz K, Larson EL, Scheinberg D, Reece EA, Slavkin H, Dobs A, Grebb J, Martinez RA, Korn A, Rimoin D. Central challenges facing the national clinical research enterprise. JAMA. 2003;289(10):1278–1287. [PubMed: 12633190]
The ALLHAT Officers and Coordinators for the ALLHAT Collaborative Research Group. Major outcomes in high-risk hypertensive patients randomized to angiotensin-converting enzyme inhibitor or calcium channel blocker vs diuretic: the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) JAMA. 2002;288(23):2981–2997. [PubMed: 12479763]
Tunis S. Institute of Medicine Workshop: From Development to Adoption of New Approaches to Breast Cancer Detection and Diagnosis. Washington DC: Institute of Medicine; 2003. CMS Perspectives on Adoption of New Breast Cancer Detection Technologies.
U.S. Food and Drug Administration. Overview—FDA Modernization Act of 1997. 1998. p. 4. Web Page. Available at:
U.S. Food and Drug Administration. Improving Innovation in Medical Technology: Beyond 2002. Rockville, MD: U.S. Food and Drug Administration; 2003.
UNC Lineberger Comprehensive Cancer Center. Research Resources: Etta D. Pisano, MD. 2003. Web Page. Available at:
Wagner L. A test before its time? FDA stalls distribution process of proteomic test. J Natl Cancer Inst. 2004;96(7):500–501. [PubMed: 15069105]
Wang JG, Staessen JA, Heagerty AM. Ongoing trials: what should we expect after ALLHAT? Curr Hypertens Rep. 2003;5(4):340–345. [PubMed: 12844470]
Whiting P, Rutjes AW, Reitsma JB, Bossuyt PM, Kleijnen J. The development of QUADAS: a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Med Res Methodol. 2003;3(1):25. [PMC free article: PMC305345] [PubMed: 14606960]
Winslow R, Hensley S. Dose of reality: study questions high cost of drugs for hypertension. The Wall Street Journal. 2002 December 18



“Technology” is used here in the broadest sense and includes biology, drugs, software, devices, and procedures.


Health care clearinghouses include public or private billing services, health management information systems, and networks or switches that process health information.


This section is based on presentations at the March 25, 2003 workshop by David Feigal and Joseph Hackett of the FDA.


Everyone age 65 and older and those with certain disabilities are eligible for Medicare. Approximately 94 percent of women over 65 are covered through Medicare; Medicaid covers low-income people. Some people are eligible for health insurance coverage under both programs.

Copyright © 2005, National Academy of Sciences.
Bookshelf ID: NBK22317

