NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

National Academies (US) Committee on Measuring Economic and Other Returns on Federal Research Investments. Measuring the Impacts of Federal Investments in Research: A Workshop Summary. Washington (DC): National Academies Press (US); 2011.


APPENDIX D. THE IMPACT OF PUBLICLY FUNDED BIOMEDICAL AND HEALTH RESEARCH: A REVIEW

Bhaven N. Sampat

Department of Health Policy and Management

Columbia University

I. INTRODUCTION AND BACKGROUND

New biomedical technologies trigger a number of major challenges and opportunities in health policy. Among economists, there is widespread consensus that new technologies are the major drivers of increased healthcare costs but at the same time a major source of health and welfare improvements (Murphy and Topel 2003). This has led to discussion about whether technological change in medicine is “worth it” (Cutler and McClellan 2001). The impact of new technologies on the health care system has also been the subject of much debate among health policy scholars more generally (Callahan 2009).

Public sector research agencies play an important role in the U.S. biomedical innovation system. In 2004, federal agencies funded roughly one-third of all U.S. biomedical R and D (Moses et al. 2005), and the National Institutes of Health (NIH) accounted for three-quarters of this amount. Private sector drug, biotechnology, and medical device companies provided the majority of U.S. biomedical R and D funding (about 58 percent). This private sector research is, in general, focused more downstream and tends to be closer to commercial application than NIH-funded research.

Donald Stokes (1997) observes that the public values science “not for what it is but what it is for.” A perennial question in U.S. science and technology policy is what benefits taxpayers obtain from publicly funded biomedical research. Recent concerns about the clinical and economic returns to NIH funding in the post-doubling era reflect this emphasis.

In this paper, we review the evidence on the effects of publicly funded biomedical research. Reflecting Stokes’s observation above, the review focuses on the health and economic effects of public research, rather than on measures of scientific outcomes. Given the prominence of the NIH in funding this research, much of the published work focuses on this agency. The evidence examined includes quantitative analyses and qualitative case studies published by scholars from a range of fields. While we have made efforts to be broad, the references discussed should be viewed as representative rather than exhaustive. This review takes stock of the empirical methodologies employed and the types of data used; it also highlights common research and evaluation challenges and emphasizes where existing evidence is more, or less, robust.

We proceed as follows. In Section II, below, we discuss a stylized model of how public research funding affects health, economic, and intermediate outcomes. As Kline and Rosenberg (1986), Gelijns and Rosenberg (1994), and others have emphasized, the research process cannot be reduced to a neat, linear model. While we recognize this fact (and highlight it in our literature review) the simple model is still useful in helping to organize our discussion of theory and data on the effects of publicly funded research. In Section III, we discuss the empirical evidence. In Section IV, we discuss common evaluation difficulties. In Section V, we conclude. The empirical approaches, data sources, and findings of many of the studies reviewed are also summarized in Tables D-1 through D-3.

Table D-1. Public Funding and Health Outcomes: Summary of Selected Studies.

Table D-2. Public Funding and New Drugs, Devices: Summary of Selected Studies.

Table D-3. Public Funding and Private R and D, Patenting: Summary of Selected Studies.

II. PUBLIC SECTOR RESEARCH AND OUTCOMES: AN OVERVIEW

Figure D-1 is a simple model illustrating how the literature has conceptualized the health and economic effects of publicly funded biomedical research (and publicly funded research more generally):

FIGURE D-1. Publicly Funded R and D and Outcomes, Logic Model. (Flow chart: publicly funded R and D yields knowledge, which feeds private sector R and D, producing new technologies that improve health outcomes.) SOURCE: Sampat, 2011

The top arm of the model illustrates one important relationship: publicly funded R and D yields fundamental knowledge, which then improves the R and D efficiency of private sector firms, yielding new technologies (drugs and devices) that improve health outcomes.2 This conceptualization has been the essential raison d’être for the public funding of science since Vannevar Bush’s celebrated postwar report, Science, The Endless Frontier. For example, Bush asserted in 1945 that “discovery of new therapeutic agents and methods usually results from basic studies in medicine and the underlying sciences” (Bush 1945). It is also the essential mechanism in several important economic models of R and D (e.g. Nelson 1984). Importantly, this conceptualization generally views publicly funded research as “basic” research that is not oriented toward particular goals, and thus yields benefits across fields. The influential “market failure” argument for public funding of basic research is that profit-maximizing, private-sector firms will tend to underinvest in this type of fundamental, curiosity-driven research, since they cannot appropriate its benefits fully (Nelson 1959; Arrow 1962).

The channels through which publicly funded basic research might influence private sector innovation are diverse, including dissemination via publications, presentations and conferences, as well as through informal networks (Cohen et al. 2002). Labor markets are another channel, since public agencies may also be important in training doctoral and post-doctoral students who move on to work for private sector firms (Scherer 2000).

The second arrow illustrates another relationship. New instruments and techniques that are by-products of “basic” research can also improve private sector R and D (Rosenberg 2000). Prominent examples of instruments and research tools emanating from academic research include the scanning electron microscope, the computer, and the Cohen-Boyer recombinant DNA technique.

Third, publicly funded researchers sometimes develop prototypes for new products and processes. Some of these are indistinguishable from the informational outputs of basic research discussed above. For example, when academic researchers learned that specific prostaglandins can help reduce intraocular pressure, this discovery immediately suggested a drug candidate based on those prostaglandins, though the candidate required significant additional testing and development. (This academic discovery later became the blockbuster glaucoma drug, Xalatan.) The public sector has also been important in developing prototypes (Gelijns and Rosenberg 1995). Since the passage of the Bayh-Dole Act in 1980, publicly funded researchers have become more active in taking out patents on these inventions and prototypes for new products and processes, and licensing them to private firms (Mowery et al. 2004; Azoulay et al. 2007).

While much of the discussion of publicly funded biomedical research focuses on this more “basic” or fundamental research, the public sector also funds more “applied” research and development.3 For example, about one-third of the NIH budget is for clinical research, including patient-oriented research, clinical trials, epidemiological and behavioral studies, and outcomes and health services research. Such research can be a useful input into the development of prototypes, and may also directly inform private sector R and D. Clinical research may also directly affect health behaviors. For example, knowledge from epidemiological research about cardiovascular risk factors contributed to reductions in smoking and better diets (Cutler and Kadiyala 2003). New applied knowledge can also influence physicians: for example, by changing their prescribing habits (e.g. “beta-blockers after heart attacks improve outcomes”) or routines (e.g. “this type of device works best in this type of patient”). Importantly, as various studies we review below will emphasize, negative results from clinical trials (showing that particular interventions do not work) can also be important for clinical practice and in shaping health behaviors.

While the discussion above assumes that new biomedical knowledge and technologies improve health outcomes, this is a topic of debate. The conventional wisdom is that while other factors (e.g. better diet, nutrition, and economic factors) were more important for health outcomes historically (McKeown 1976), improvements in American health in the post-World War II era have been driven largely by new medical knowledge and technologies (Cutler, Deaton, and Lleras-Muney 2006). The contribution of publicly funded research to these developments is an open empirical question, discussed below.

At the same time, some scholars suggest that we may have entered an era of diminishing returns, in which new technologies yield increasingly less value (Callahan 2009; Deyo and Patrick 2004). The effect of new biomedical technologies on healthcare costs is a related concern. There is general agreement among health economists that new medical technologies are the single biggest contributor to the increase in long-run health costs, accounting for roughly half of cost growth (Newhouse 1992). Rising health costs strain the budgets of public and private insurers as well as employers, and may also contribute to health inequalities. The dynamic between new medical technologies and health costs in the U.S. may reflect a “technological imperative,” which creates strong incentives for the healthcare system to adopt new technologies once they exist (Fuchs 1995; Cutler 1995). It may also reflect positive feedbacks between demand for insurance and incentives for innovation (Weisbrod 1991).

Concern about the effects of technology on health costs has fueled empirical work on whether technological change in medicine is “worth it.” Long ago, Mushkin (1979) noted (though did not share) “widespread doubt about the worth of biomedical research given the cost impacts.”

A large literature in health economics suggests that new biomedical technologies are indeed, in the aggregate, worth it. Cutler (1995) and others suggest that, given the high value of improved health (current estimates suggest the value of one additional life year is $100,000 or more), even very costly medical technologies pass the cost-benefit test.4 Nordhaus (2003) estimates that the value of improvements in health over the past half century is equal in magnitude to measured improvements in all non-health sectors combined. Others (Callahan 2009) view these health cost increases as unaffordable, even if they deliver significant value, and therefore ultimately unsustainable.

At the same time, not all medical technologies necessarily increase costs. As Cutler (1995) and Weisbrod (1991) indicate, technologies that make a disease treatable but do not cure it (moving from non-treatment to “halfway” technology, in Lewis Thomas’s characterization) are likely to increase costs. The iron lung used to treat polio is an example. However, technologies that make possible prevention or cure (“high technology”) can be cost-reducing, especially relative to halfway technologies. Thus the polio vaccine was much cheaper than the iron lung. Consistent with this, Lichtenberg (2001) shows that while new drugs are more expensive than old drugs, they reduce other health expenditures (e.g. hospitalizations). Overall, he argues, they result in net decreases in health costs (and improve health outcomes).5

As Weisbrod (1991) notes, “The aggregate effect of technological change on health care costs will depend on the relative degree to which halfway technologies are replacing lower, less costly technologies, or are being replaced by new, higher technologies.”6 One way to think about the effects of public sector spending on costs would be to assess the propensity of publicly funded research to generate (or facilitate the creation of) these different types of technologies. However, since the effects of these new technologies are mediated by various facets of the health care and delivery system, it may be difficult conceptually (and empirically) to isolate and measure the effects of public sector spending on overall health costs (Cutler 1995).7

III. THE EFFECT OF PUBLICLY FUNDED RESEARCH: A REVIEW OF THE EVIDENCE

Health

Measuring the health returns to publicly funded medical research has been a topic of interest to policymakers for decades. In an early influential study, Comroe and Dripps (1976) consider which types of research (basic or clinical) are more important to the advance of clinical practice and health. The authors rely on interviews and expert opinion to identify the top ten clinical advances in the cardiovascular and pulmonary arena, and 529 key articles associated with these advances. They coded each of the key articles into six categories: (1) basic research unrelated to clinical problems; (2) basic research related to clinical problems (what Stokes later termed “use-oriented” basic research); (3) research not aimed at understanding basic biological mechanisms; (4) reviews or syntheses; (5) development of techniques or apparatuses for research; and (6) development of techniques or apparatuses for clinical use. The authors find that 40 percent of the articles fell in category 1, and 62 percent in categories 1 or 2. Based on this, the authors assert that “a generous portion of the nation’s biomedical research dollars should be used to identify and then to provide long-term support for creative scientists whose main goal is to learn how living organisms function, without regard to the immediate relation of their research to specific human diseases.” Comroe and Dripps also note “that basic research, as we have defined it, pays off in terms of key discoveries almost twice as handsomely as other types of research and development combined” (1976).

A more recent set of studies examines the effects of publicly funded research on health outcomes. Operationalizing the concept of “health” is notoriously difficult. Common measures employed to account for both the morbidity and mortality effects of disease include quality-adjusted life years (QALYs) and disability-adjusted life years (DALYs) (Gold et al. 2002). However, it is difficult to obtain longitudinal information on these measures by disease. As a result, most analyses of the effects of public funding on health examine blunter outcomes, such as the number of deaths and mortality rates for particular diseases.
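To make the QALY idea concrete, the following minimal sketch shows how the measure combines morbidity and mortality; the quality weights and survival times are hypothetical illustrations, not values from any study cited in this review.

```python
# Illustrative QALY arithmetic; all weights and life spans below are
# hypothetical, not taken from any study cited in this review.

def qalys(quality_weights):
    """Quality-adjusted life years: each year of life is weighted by a
    health-quality score in [0, 1] (1 = perfect health)."""
    return sum(quality_weights)

# Hypothetical patient: a treatment extends survival from 4 to 6 years
# and raises quality of life in each year.
untreated = qalys([0.7, 0.6, 0.5, 0.4])
treated = qalys([0.9, 0.9, 0.8, 0.8, 0.7, 0.6])
gain = treated - untreated  # QALYs attributable to the treatment
print(round(untreated, 2), round(treated, 2), round(gain, 2))  # 2.2 4.7 2.5
```

Because longitudinal data of this kind are scarce by disease, the studies reviewed here generally fall back on death counts and mortality rates instead.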

Numerous prominent academic studies (Weisbrod 1983; Mushkin 1979) aim to examine the health effects of biomedical research, and the economic value of this impact, in a cost-benefit framework. One important recent study in this tradition, Cutler and Kadiyala (2003), focuses on cardiovascular disease, the area with the strongest improvement in health outcomes over the past sixty years. Since 1950, mortality from cardiovascular disease has decreased by two-thirds, as Figure D-2 (reprinted from their paper) shows:

FIGURE D-2. Mortality by cause of death, 1950–1994. (Chart of deaths by cause; cardiovascular disease is highest, falling from about 400,000 deaths in 1950 to 175,000 by 1993.) SOURCE: Cutler and Kadiyala 2003

Cutler and Kadiyala, through a detailed review of the causes of this advance (relying on epidemiological and clinical data, medical textbooks, and other sources), estimate that roughly one-third of this cardiovascular improvement is due to high-tech treatments, one-third to low-tech treatments, and one-third to behavioral changes. Assuming one additional life year gained is valued at $100,000, the authors compute a rate of return of 4-to-1 for investments in treatments and 30-to-1 for investments in behavioral changes. These investments include costs borne by consumers and insurers, and estimates of public sector R and D for cardiovascular disease.
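The rate-of-return arithmetic here is straightforward to sketch. The toy calculation below (all dollar and life-year figures are illustrative placeholders, not Cutler and Kadiyala's actual data) shows how a benefit-cost ratio of this kind is formed from the $100,000-per-life-year assumption:

```python
# Back-of-envelope benefit-cost arithmetic in the spirit of the calculation
# described above; every input here is an illustrative placeholder, not
# Cutler and Kadiyala's actual data.

VALUE_PER_LIFE_YEAR = 100_000  # the $100,000-per-life-year assumption in the text

def benefit_cost_ratio(life_years_gained, total_investment):
    """Dollar-valued health gain divided by the investment that produced it."""
    return (life_years_gained * VALUE_PER_LIFE_YEAR) / total_investment

# Hypothetical: treatments gain 2 million life years at a total cost
# (consumer, insurer, and public R and D spending) of $50 billion.
print(benefit_cost_ratio(2_000_000, 50e9))  # 4.0, i.e. a 4-to-1 return
```

The ratio is only as credible as its inputs: the valuation per life year, the attribution of life years to the investment, and the accounting of costs all drive the result.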

Based on these figures, the authors argue that the rate of return to public funding is high, though they do not directly trace public funding to changes in outcomes in their quantitative analyses. Interestingly, in their qualitative account, the major public sector research activities highlighted have an “applied” orientation, including the NIH’s role in sponsoring large epidemiological trials and holding consensus conferences. This may reflect a traceability and attribution problem, which is common to the evaluation of fundamental research: it is difficult to directly link improvements in outcome indicators to public sector investments in basic research, even in a study as detailed as this one.

A paper by Heidenreich and McClellan (2003) is similarly ambitious, examining sources of advance in the treatment of heart attacks (acute myocardial infarction, or AMI). The authors focus on this treatment area not only because of the large improvements, but also because it is a “best case” for attributing health outcomes to particular biomedical investments. Specifically, these authors go further than Cutler and Kadiyala by attempting to link changes in clinical practice to changes in specific R and D inputs. The authors focus on clinical trials, not basic research. This is not because they believe that basic research is unimportant, “but because it is much easier to identify connections between these applied studies and changes in medical care and health.”

Based on detailed analyses of MEDLINE-listed trials and health outcomes, the authors argue that medical treatments studied in these trials account for the bulk of improvement in AMI outcomes. The authors link changes in clinical practice and outcomes to research results reported in trials through analysis of the timing of events, and through detailed clinical knowledge of how the trial results, clinical practices, and health outcomes relate.

One interesting result from this paper is that clinical practice often “leads” formal trials, challenging the “linear” model embodied in Figure D-1 (above). The authors also emphasize that an important role for trials is negative: telling clinicians what does not work, and stopping the diffusion of ineffective technologies. While the sample they examine represents a mix of publicly funded and privately funded trials, the authors do emphasize a particularly important role for the public sector in funding trials of off-patent drugs, where private firms have fewer incentives to invest.

Philipson and Jena’s (2005) study of HIV-AIDS drugs is another paper that examines the value of increases in health from new medical technologies. Though this study does not explicitly focus on the role of the public sector, it estimates that HIV-AIDS drugs introduced in the 1990s generated a social value of $1.4 trillion, based on the value of the increments to life expectancy created by these drugs (here again, using the estimate of $100,000 per life year). This study is relevant because of the important role of public sector research in the development of HIV-AIDS drugs, which is observed in several of the empirical studies discussed below.

A recent paper by Lakdawalla et al. (2011) employs a similar approach to assess the benefits from cancer research. The authors find these benefits to be large, estimating the social value of improvements in life expectancy during the 1988–2000 period at nearly $2 trillion. The authors note that this compares to about $80 billion in total funding for the National Cancer Institute between 1971 and 2000. As with the HIV studies discussed above, the authors do not explicitly calculate a rate of return on publicly funded research, but do argue that the social benefits from cancer research far exceed research investments and treatment costs.

A large share of the benefits in the cancer arena, according to this work, results from better treatments. Lichtenberg (2004) also suggests that new drug development has been extremely important in progress against cancer.8 Public sector research may have contributed to the development of these drugs: various studies (Stevens et al. 2011; Chabner and Shoemaker 1989) suggest an important role for the public sector in cancer drug development.9

Each of the studies discussed so far focuses on particular disease areas. In a more “macro” approach, Manton and colleagues (2009) relate mortality rates in four disease areas to lagged NIH funding by the relevant Institute over the period 1950–2004. They find a strong negative correlation for two of the four diseases (heart disease, stroke), but weaker evidence for cancer and diabetes. Several issues arise here that will re-emerge in other quantitative analyses discussed below. First, linking funds to disease areas is difficult. As with other studies we consider below, the authors rely on the disease foci of Institutes within the NIH. More importantly, the counterfactual is hard to establish: it is difficult to make the case that the relationships estimated are causal, since Institute-specific funding is not exogenous. In particular, diseases where there is the highest expectation of progress (even absent funding) may be more likely to get funds. Finally, competing risks also complicate interpretation of health outcomes. For example, part of the reason cancer mortality has increased rather than decreased over the period studied is that people no longer die of heart attacks, due to advances in the cardiovascular arena.

Private Sector R and D

Another set of studies relates publicly funded research to private sector R and D and productivity. These include econometric analyses relating public sector and private sector funding, surveys of firm R and D managers, and studies examining the geographic dimension of spillovers from public sector researchers.

Several papers relate NIH funding by disease area to later private sector funding. One motivation in these studies is to assess whether public and private sector R and D are substitutes or complements, an issue of perennial interest in science and technology policy (David, Hall, and Toole 2000). The econometric analyses generally find a positive association between public sector and private sector funding. Toole (2007) uses data from the NIH’s Computerized Retrieval of Information on Scientific Projects (CRISP) database, covering NIH basic and clinical research funding across seven therapeutic classes (between 1972 and 1996), and data from the Pharmaceutical Research and Manufacturers of America (PhRMA) on private sector R and D in these same areas (between 1980 and 1999) to examine the relationships between the two. This study finds that a 1 percent increase in basic research funding is associated with a 1.7 percent increase in private sector funding, though the elasticity for clinical research is much smaller (0.4). In a similar analysis, Ward and Dranove (1995), using PhRMA data on R and D spending and NIH data on funding by Institute (similar to that used in the Manton et al. 2009 study discussed above), find that a 1 percent increase in NIH research support in a disease area is associated with a 0.76 percent increase in private sector R and D in that same disease area over the next seven years.
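Elasticities like those reported above are typically obtained as the slope of a log-log regression. A minimal illustration on synthetic data (not the CRISP/PhRMA data; the funding range, noise level, and true elasticity of 1.7 are assumptions chosen to mirror the reported estimate):

```python
# The elasticities above are slopes of log-log regressions: a slope b means a
# 1 percent increase in public funding is associated with a b percent increase
# in private funding. Synthetic data only; the funding range, noise level, and
# true elasticity of 1.7 are assumed for illustration.

import numpy as np

rng = np.random.default_rng(0)
public = rng.uniform(1e8, 1e9, size=200)  # hypothetical public funding levels
true_elasticity = 1.7
private = 5.0 * public**true_elasticity * rng.lognormal(0.0, 0.1, size=200)

# OLS of log(private) on log(public); the fitted slope estimates the elasticity.
slope, intercept = np.polyfit(np.log(public), np.log(private), 1)
print(round(slope, 2))  # approximately 1.7
```

The causal caveats noted for the Manton study apply equally here: a positive slope shows association, not that public funding caused the private response.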

Surveys of firm R and D managers have also been used to gauge how public sector research affects private sector R and D. Cohen, Nelson, and Walsh (2002) report on the 1994 Carnegie Mellon Survey of industrial R and D managers, which examined (among other issues) the roles of the public sector in industrial R and D, and the channels through which public research affects it. This survey is particularly interesting because it has data on both the drug and device sectors, and allows for comparison of these sectors with others. The authors find that the pharmaceutical industry is an outlier in its reliance on public sector R and D. In the pharmaceutical industry, according to respondents, public research was the most important source of new project ideas and contributor to project completion. By contrast, R and D projects in the medical instruments industry rely on public research less frequently than those in other industries. There are also some differences in the fields of science relied upon across these industries. The top three fields of science important to R and D projects in the pharmaceutical industry are medicine, biology, and chemistry; in the medical instruments sector, they are medicine, materials science, and biology. Although much of the literature on the effects of public sector funding tends to focus on the NIH, the bulk of funding for materials science R and D comes from other agencies (including the National Science Foundation, the Department of Energy, and the Department of Defense).

Another set of studies, examining how interactions between public and private sector scientists affect the productivity of private sector R and D, generally finds a strong relationship between the two. Cockburn and Henderson (1996) examine how private sector co-authorship with public sector scientists affects firm-level R and D productivity. The authors bring together several novel datasets, including MEDLINE data on firm publication activity and USPTO data on firm patenting activity. Using panel regression models (with firm fixed effects to control for time-invariant firm characteristics), they find a positive and statistically significant association between their productivity measure (based on important patents per R and D dollar) and collaboration with public sector scientists.

Research by Zucker, Darby, and Brewer (1998) examines the importance of academic science in the creation of new biotechnology firms in the 1980s. In this work, the authors relate new biotechnology firm formations by area to the number of academic “star scientists” (as measured by publications and other measures of scientific productivity) working in that area. The authors find that the presence of academic stars and their collaborators (“intellectual capital”) within a geographic area has a statistically significant and positive relationship with the number of new biotechnology enterprises later formed in that area. This research suggests that public sector science has an important, though geographically mediated, effect on private sector research.

The question of whether spillovers from public research to firms are geographically mediated has also been examined through studies using patent citation data (Jaffe et al. 1993). When patents are granted they include citations to prior art: earlier publications and patents that were deemed (by either the applicant or the patent examiner) relevant to an invention. Economists and others have interpreted patent citations as evidence of knowledge flows or spillovers: thus if a firm patent cites a public sector publication or patent, this is considered evidence that the firm benefited from public funding. While there is some skepticism about this measure, given the prominence of patent examiners in generating citations (Alcacer et al. 2009; Cohen and Roach 2010), it remains commonly employed. Moreover, as it turns out, examiner-added citations are less common within the biomedical arena (Sampat 2010) and for citations to scientific publications (Lemley and Sampat 2011), suggesting that citations in biomedical patents to scientific publications may be less subject to the concerns cited above.

Azoulay, Graff Zivin, and Sampat (2011) collected data on 10,450 elite life science researchers (most of them publicly funded), including historical information on their productivity and employment locations, MEDLINE data on their publications, ISI data on citations to their publications, and USPTO data on their patents and on citations to their patents and publications. The authors assess the effects of geography on spillovers by examining how citation patterns change after scientists move. Overall, they find some evidence that geography matters for spillovers, though it is weaker than in previous analyses. They also find that the results on geography are sensitive to whether spillovers are measured through paper-to-paper citations, patent-to-patent citations, or patent-to-paper citations.

Private Sector Innovation

Numerous studies also consider the public sector role in the development of marketed innovations. Survey work by Mansfield (1998) examines the importance of academic research for industrial innovation across a range of fields. In this work, as in the Carnegie Mellon Survey discussed above, the biomedical industries are outliers. The share of products developed over the late 1980s and early 1990s that could not have been developed (without substantial delay) absent recent academic research is nearly twice as high in drugs and medical products as in other industries.

Various recent studies examine the roles of the public sector in drug development using patent and “bibliometric” data. In addition to providing an indicator of returns to public R and D, this work may also be relevant to current policy proposals that aim to exploit public sector ownership of drugs to help reduce downstream drug prices and expand access (Sampat and Lichtenberg 2011).

Sampat (2007) uses data on all drugs approved by the Food and Drug Administration (FDA) between 1988 and 2005 (and listed in the FDA’s Orange Book), together with USPTO data on patents associated with these drugs, to examine the share of drugs on which academic institutions (including public sector laboratories) own patents. Overall, a small share of new molecular entities (NMEs), about 10 percent, have academic patents. However, this share is larger for NMEs that received priority review (arguably the most innovative new drugs), where about 1 in 5 drugs have academic ownership. He also finds that public sector ownership of drugs is more pronounced for HIV-AIDS drugs than for other drug classes.

Stevens et al. (2011) expand on this research to include vaccines and biologics (not always listed in the Orange Book), and construct measures based not only on publicly available patent data but also on proprietary data on drug licenses. They find that 153 FDA-approved drugs were discovered by the public sector over the past 40 years (102 NMEs, 36 biologics, and 15 vaccines). The authors show that about 13 percent of NMEs (and 21 percent of priority NMEs) were licensed from public sector institutions, consistent with the numbers reported in Sampat (2007). Strikingly, the authors also show that virtually all the important vaccines introduced over the past quarter century came from the public sector. The authors also show broad correlations between NIH Institute budgets and the therapeutic classes with numerous public sector-based drugs, similar in spirit to econometric analyses we will review below.

Kneller (2010) takes a different approach, relying not on patent assignment records but instead on information related to the inventors’ places of employment, and applies his analysis to 252 drugs approved by the FDA between 1998 and 2007. Using these measures, Kneller finds a larger public sector influence than the previous studies. Overall, about a quarter of drugs are from university inventors, and a third of priority review drugs are from academic inventors.

The Sampat, Stevens et al., and Kneller studies measure direct academic involvement in developing the molecules (resulting in academic ownership of the key patents, or academic inventors listed on those patents). However, as discussed in Section II, in addition to developing prototypes, the public sector can facilitate or enhance industrial innovation in other ways as well. Thus Keyhani et al. (2005), using data from the Federal Register, government clinical trials databases, and documents from the FDA, find that the government was active in supporting clinical trials for nearly 7 percent of a sample of drugs approved between 1992 and 2002. Here again, the government role was more pronounced for HIV/AIDS drugs than for others.

Sampat and Lichtenberg (2011) distinguish between the direct effects of public sector research on drug development, where academic institutions are involved in discovering the molecule, and the indirect effects, where other knowledge spillovers from academic work increase private sector productivity. The authors measure the direct effect using information on “government interest” statements in Orange Book listed patents, and they use citations from Orange Book listed patents to academic patents and publications as a measure of the indirect effect. Consistent with the studies cited above, this study suggests the direct effect is small overall: about 9 percent of drugs, and about 17 percent of priority review drugs, have public sector owned patents. However, the indirect effect is much larger: about 48 percent of drugs have patents that cite public sector patents or publications. Among priority drugs, this indirect influence rises to nearly two-thirds. This finding is broadly consistent with the qualitative results of Cockburn and Henderson’s (1996) study of fifteen drugs, which shows that the public sector made key enabling discoveries for the majority (11 of the 15) but was involved in synthesizing the compound for only 2 of the 15.

The studies discussed above are accounting exercises. Others have attempted to relate variation in funding by disease area to drug development patterns econometrically. Dorsey et al. (2009) relate NIH funding by therapeutic area to later drug approvals across nine disease areas between 1995 and 2000, allocating funding to specific diseases by funding Institute, using information in Congressional budget requests for those Institutes. They find that despite a sharp rise in NIH funding over this period, drug approvals remained flat overall, and their cross-therapeutic-area analyses show little correlation between NIH funding and subsequent drug approvals.

Blume-Kohout (2009) also explores these issues, using panel regression models. She constructs data on NIH funding by disease area between 1975 and 2004 from the agency’s CRISP and RePORTER databases, parsing grant abstracts and keywords for disease terms, and examines data on drugs in development by class from a private vendor, PharmaProjects. Her results show little evidence that the number of drugs in late-stage Phase III trials responds to NIH funding, but a positive relationship for the number of drugs in earlier-stage Phase I trials. The author notes these results may suggest that factors other than NIH funding (or the state of knowledge), including commercial considerations such as the size of the market, are important for Phase III trials. In a similar approach, using a different outcome measure, Ward and Dranove (1995) relate MEDLINE publications tagged as “drug” articles to NIH R and D funding by disease area, here again categorized by funding Institute. They find a strong relationship between the two.

Most of the studies we have discussed thus far, examining public sector research and product development, focus on drugs and involve quantitative analysis. By contrast, Morlacchi and Nelson (2011) examine the sources of innovation in the development of the left ventricular assist device (LVAD), a medical device used for patients with end-stage heart failure. While the device originally was developed as a “bridge” solution until a heart became available for transplant, it is increasingly used as destination therapy, a substitute for a heart transplant. Morlacchi and Nelson draw on interviews, primary and secondary articles, and patents to develop a longitudinal history of the LVAD and consider, among other questions, the importance of public sector funding in its development. Echoing some of the themes in Heidenreich and McClellan’s study of heart attack treatment, they find that in this field application led scientific understanding: the device was developed even as basic understanding of heart failure remained weak, once again challenging the linear model of innovation portrayed in Figure D-1. They also find that the applied and diffusion-oriented activities of public sector funders were important in the development of this device, including the NIH’s sponsorship of conferences and centers to spread best practices, funding of trials and development of important component technologies, and contracts to spur firm formation.

Health Costs

Despite longstanding concerns about the effects of new biomedical technologies on healthcare costs, and speculation that public sector research may be implicated in spurring this cost spiral, there has been surprisingly little empirical research on this topic. For example, there is a paucity of academic work relating funding patterns by disease area to subsequent cost growth, analogous to the work relating funding to private sector R and D, drug development, and health outcomes discussed above.

In 1993, the NIH prepared studies of the cost savings from a non-random sample of 34 health technologies resulting from NIH support, and found substantial savings (NIH 1993). The study examined NIH funding for the new technologies, as well as the cost savings that accrued to patients, based on conservative assumptions about reductions in disease attributable to those technologies. An NIH summary (NIH 2005) of this work notes that “[t]aken together, the 34 technologies were estimated to reduce health care costs by about $8.3 billion to $12.0 billion annually.” As with several studies discussed earlier, difficulty in tracing the effects of “basic” research to particular technologies may complicate such calculations. Moreover, as the agency’s summary emphasizes, “because the 34 new health care technologies studies were not chosen to be representative of all health advances resulting from NIH support, the results of these case studies cannot be generalized.”

While there has been little work beyond this NIH study on the effects of public sector funding on the direct costs of disease (i.e., health expenditures), the various studies discussed above that address the value of new biomedical technologies can be interpreted as evidence that public sector funding reduces the total cost of disease, to the extent that the estimated improvements in health are viewed as reductions in the social costs associated with disease.

III. MEASUREMENT AND EVALUATION ISSUES

The diverse set of studies reviewed here illustrates a number of common measurement and evaluation issues that complicate efforts to estimate the health and economic effects of publicly funded biomedical research. Here, we will highlight several that stand out.

Several of the studies reviewed relate public sector funding by disease area to outputs. All focus on the NIH, since publicly available data on funding by disease area are not readily available for other agencies. Even for the NIH-focused studies, however, there are measurement issues. Many studies construct funding measures based on which Institutes fund the research, but individual Institutes fund research on numerous diseases, introducing considerable noise into these measures.

The NIH’s CRISP database includes disease keywords, which can also be used to construct disease-specific funding measures, but these are not collected in a standard way across the NIH (Sampat 2011). In 2008, the NIH launched the “Research, Condition, and Disease Categorization” (RCDC) database, which uses standard methodologies to classify funds by area. Whereas previously each NIH Institute had linked its grants to diseases in an ad hoc and non-standard way, the RCDC employs standard category definitions, developed with input from disease groups, the scientific community, and outside consultants. Before the RCDC, the NIH had provided disease-specific funding figures tentatively and with many caveats; with the RCDC, the agency has shown a firmer commitment to its own data sources and tracking. The NIH website thus affirms: “RCDC provides consistent and transparent information to the public about NIH-funded research. For the first time, a complete list of all NIH-funded projects related to each category is available.” This database may prove a boon for future researchers. However, its time frame and scope (covering only diseases and conditions “of historical interest to Congress”) may limit the types of analyses that can be conducted with these data.

A more fundamental issue is the difficulty of categorizing “basic” research in these studies. In the CRISP funding database, 49 percent of grants awarded in 1996 (accounting for 46 percent of NIH allocations) listed no disease terms, and only about 45 percent of grants map to a disease category in the RCDC (Sampat 2011). It is difficult to incorporate these grants into disease-level associations of funding and outputs. Basic research is also difficult to trace to outcomes even in a case study context, given lags and diffuse channels of impact. It is thus not surprising that several of the evaluation studies discussed above (including the study of heart attack treatment and the studies of NIH research and costs) focus on the effects of applied research.

The bibliometric approaches discussed above, linking grants to publications to citations to patents to drugs, may overcome these traceability challenges, relying on paper trails between research and outcomes and avoiding the need to associate public sector funding with particular diseases. However, the validity of these analyses rests on a number of assumptions, e.g., the extent to which patent-paper citations reflect real knowledge flows from public sector research.
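The linkage exercise these bibliometric studies perform can be illustrated with a small sketch. The data below are entirely hypothetical (invented drug, patent, publication, and grant identifiers), but the traversal mirrors the paper trail described above: from a drug to its Orange Book listed patents, to the publications those patents cite, to the grants acknowledged in those publications.

```python
# Illustrative sketch with hypothetical data: tracing a drug back to public
# funding through the patent-publication-grant "paper trail" described above.

# Hypothetical linkage tables of the kind assembled from Orange Book,
# USPTO, MEDLINE, and NIH grant records (all identifiers are invented).
drug_to_patents = {"drug_A": ["US1111", "US2222"]}
patent_citations = {          # scientific publications cited by each patent
    "US1111": ["pmid_901", "pmid_902"],
    "US2222": [],
}
publication_grants = {        # grant acknowledgments on each publication
    "pmid_901": ["NIH_R01_00001"],
    "pmid_902": [],
}

def trace_public_support(drug):
    """Return the set of grants reachable from a drug via its patents'
    citations to the literature (a measure of the 'indirect effect')."""
    grants = set()
    for patent in drug_to_patents.get(drug, []):
        for pub in patent_citations.get(patent, []):
            grants.update(publication_grants.get(pub, []))
    return grants

print(trace_public_support("drug_A"))  # {'NIH_R01_00001'}
```

The assumptions flagged in the text map directly onto this sketch: the result is only as meaningful as the premise that each citation link reflects a genuine knowledge flow rather than, say, a citation added by a patent examiner.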

Thus, measurement of inputs and intermediate steps is difficult. Measuring outcomes is conceptually easier, at least relative to evaluation of research outputs in non-biomedical contexts. Though the right output measures (e.g. morbidity or mortality, direct or indirect costs) or desiderata (should the NIH be mainly focused on advancing health? science? competitiveness? something else?) are the subject of debate, there is a wealth of data available to examine changes in health-related outcomes. Similarly, the research community has exploited numerous useful measures of relevant economic outcomes (e.g. patents, drug development, publications), again more readily available in the biomedical context than other arenas.

Causal evaluation of the effects of publicly funded research on these outcomes is difficult, however, in this context and in S and T policy more generally. Simply put, funding choices are not random, making it difficult to attribute observed changes in outcomes to specific policies. As just one example, if public sector funding targets disease areas with high scientific opportunity, it is difficult to untangle whether subsequent improvements in health (or changes in private sector R and D, or drug development) reflect the effects of the funding or of the scientific opportunity. Several of the studies discussed attempt to address this problem econometrically, including through panel regression models with disease fixed effects, which absorb the effects of disease-specific characteristics that do not change over time. Going forward, quasi-experimental techniques may also prove useful. For example, it may be possible to exploit shocks to funding in particular areas that are unrelated to scientific opportunity and disease burden (e.g., those introduced through political influence on the allocation process, or changes in agencies’ funding rules) to assess the effects of public research.
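The fixed-effects strategy mentioned above can be sketched in a few lines. The panel below is synthetic (invented funding and outcome numbers for four stylized disease areas, with no noise, so the estimator recovers the assumed coefficient exactly); the point is only to show how demeaning within each disease absorbs time-invariant disease characteristics before estimating the funding-outcome slope.

```python
import numpy as np

# Synthetic panel: 4 hypothetical disease areas observed over 5 years each.
diseases = np.repeat(np.arange(4), 5)          # disease identifier per observation
funding  = np.arange(20, dtype=float)          # stand-in for NIH funding by area-year
alpha    = np.array([10.0, -3.0, 5.0, 0.0])    # time-invariant disease effects
beta     = 0.5                                 # assumed "true" effect of funding
outcomes = alpha[diseases] + beta * funding    # e.g., drug approvals (noiseless)

def demean(values, groups):
    """Within transformation: subtract each group's mean, absorbing
    group-level (here, disease-level) fixed effects."""
    means = np.array([values[groups == g].mean() for g in np.unique(groups)])
    return values - means[groups]

y_tilde = demean(outcomes, diseases)
x_tilde = demean(funding, diseases)
beta_hat = (x_tilde @ y_tilde) / (x_tilde @ x_tilde)  # within (fixed-effects) OLS
print(round(beta_hat, 3))  # 0.5
```

Because the disease effects are constant over time, they drop out after demeaning and the slope is recovered despite the large level differences across diseases. The identification worry discussed in the text is what this sketch cannot fix: if scientific opportunity varies over time within a disease and drives both funding and outcomes, the within estimator is still biased, which is why the quasi-experimental approaches mentioned above are attractive.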

There is also a need for more qualitative work. A number of the case studies surveyed above relied on detailed knowledge of the institutions involved, in-depth clinical knowledge, and information on the timing of relevant events to make credible arguments that the relationships they observed were causal. These too represent promising research approaches going forward.

IV. CONCLUSIONS

The measurement and evaluation challenges highlighted above are endemic to science and technology policy in general (Jaffe 1998). A main output of science and technology policy is knowledge, which is difficult to measure and link to downstream outcomes. This exacerbates traditional difficulties with attributing causal effects to policy interventions, common to evaluation in most public policy domains.

Notwithstanding these challenges, on several issues the various studies point in the same direction. First, there is consistent evidence of the importance of public sector biomedical R and D for the efficiency of private sector R and D. The evidence is compelling because it is based on a range of studies using different techniques and samples, including surveys, case studies, and econometric analyses.

Second, the accounting studies on sources of innovation in drugs suggest that the public sector was directly involved in the development of a small share of drugs overall, but that the public sector role is more pronounced for more “important” drugs, and that the indirect effect of public sector research on drug development is larger than the direct effect. On the other hand, the studies that relate patterns of funding by disease area to drug development show less consistent results.

Third, a number of the studies suggest that applied and clinical public research activities are important for product development, patient behaviors, and health outcomes. This is striking, since much of the discussion about publicly funded biomedical research focuses on (and most of the funding goes to) “basic” research. Whether the importance of applied activities reflects that their effects are easier to measure and trace, or that they really are very important, is an open empirical question.10

Overall, there is strong evidence that new biomedical technologies have created significant value, as measured through the economic value of health improvements. Some scholars believe that even if public sector research were responsible for only a small share of this gain, it delivers high returns on investment (Murphy and Topel 2003).11

More work is needed directly examining the role of the public sector per se, and especially public sector basic research, in affecting these health outcomes. Similarly, very little is known about the effects of public sector research on health expenditures. Detailed longitudinal case studies of trends in public and private sector research activity, technology utilization, health outcomes, and health expenditures across a number of disease areas would be useful for promoting understanding on each of these issues. To the extent possible, it would be useful for these studies to employ common methods and measures, and to examine both disease areas where there has been considerable advance, and those where there has been less progress.

Finally, the bulk of the academic work in this area focuses on the NIH and pharmaceuticals. Much more research is needed on the effects of other funding agencies, and on the effects of public funding on the device sector.

REFERENCES

  1. Alcacer J, Gittleman M, Sampat BN. Applicant and Examiner Citations in U.S. Patents: An Overview and Analysis. Research Policy. 2009;38(2):415–427.
  2. Arrow K. Economic Welfare and the Allocation of Resources for Invention. In: Nelson Richard., editor. The Rate and Direction of Inventive Activity. Princeton, NJ: Princeton University Press; 1962.
  3. Azoulay P, Michigan R, Sampat BN. The Anatomy of Medical School Patenting. The New England Journal of Medicine. 2007;20(357):2049–2056. [PubMed: 18003961]
  4. Azoulay P, Zivin JG, Sampat BN. The Diffusion of Scientific Knowledge across Time and Space: Evidence from Professional Transitions for the Superstars of Medicine. NBER Working Paper 16683. 2011
  5. Bailar J, Gornik H. Cancer Undefeated. New England Journal of Medicine. 1997;336:1569–1574. [PubMed: 9164814]
  6. Blume-Kohout ME. Drug Development and Public Research Funding: Evidence of Lagged Effects. Waterloo, ON, Canada: University of Waterloo; 2009. pp. 1–35.
  7. Bush V. Science, The Endless Frontier. Washington, DC: United States Government Printing Office; 1945.
  8. Callahan D. Taming the Beloved Beast: How Medical Technology Costs Are Destroying Our Health Care System. Princeton, NJ: Princeton University Press; 2009.
  9. Chabner BA, Shoemaker D. Drug Development for Cancer: Implications for Chemical Modifiers. International Journal of Radiation Oncology Biology Physics. 1988;16:907–909. [PubMed: 2703396]
  10. Chandra A, Skinner J. Technology Growth and Expenditure Growth in Health Care. NBER Working Paper 16953. 2011 [PubMed: 21957511]
  11. Cockburn I, Henderson R. Public-Private Interaction in Pharmaceutical Research. Proceedings National Academy of Science USA. 1996;93(23):12725–12730. [PMC free article: PMC34128] [PubMed: 8917485]
  12. Cohen WM, Nelson R, Walsh J. Links and Impacts: The Influence of Public Research on Industrial R and D. Management Science. 2002;48(1):1–23.
  13. Cohen W, Roach M. Patent Citations As Indicators of Knowledge Flows From Public Research. Working Paper. 2010 [PMC free article: PMC3901515] [PubMed: 24470690]
  14. Comroe J, Dripps RD. Scientific Basis for the Support of Biomedical Science. Science. 1976;192(4235):105–111. [PubMed: 769161]
  15. Cutler D. Technology, Health Costs, and the NIH. Paper prepared for the National Institutes of Health Economic Roundtable on Biomedical Research. 1995.
  16. Cutler DM. Are We Finally Winning the War on Cancer? Journal of Economic Perspectives. 2008;22(4):3–26. [PubMed: 19768842]
  17. Cutler D, McClellan M. Is Technological Change in Medicine Worth It? Heath Affairs. 2001;20(5):11–29. [PubMed: 11558696]
  18. Cutler D, Kadiyala S. The Returns to Biomedical Research: Treatment and Behavioral Effects. In: Murphy Kevin, Topel Robert., editors. Measuring the Gains from Medical Research: An Economic Approach. Chicago: University of Chicago Press; 2003. pp. 110–162.
  19. Cutler D, Deaton A, Lleras-Muney A. The Determinants of Mortality. The Journal of Economic Perspectives. 2006;20(3)
  20. David PA, Hall BA, Toole AA. Is Public R and D a Complement or Substitute for Private R and D? A Review of the Econometric Evidence. Research Policy. 2000;29:497–529.
  21. Dorsey ER, Thompson JP, Carrasco M, de Roulet J, Vitticore P, Nicholson S, Johnston SC, Holloway RG, Moses H III. Financing of U.S. Biomedical Research and New Drug Approvals across Therapeutic Areas. PloS One. 2009;4(9):e7015. [PMC free article: PMC2735780] [PubMed: 19750225]
  22. Dorsey ER, Vitticore P, De Roulet J, Thompson JP, Carrasco M, Johnston SC, Holloway RG, Moses H III. Financial Anatomy of Neuroscience Research. Annals of Neurology. 2006;60(6):652–659. [PubMed: 17192926]
  23. Fuchs V. The Health Economy. Cambridge, MA: Harvard University Press; 1986.
  24. Gelijns A, Rosenberg N. The Dynamics of Technological Change in Medicine. Health Affairs. 1994;13(3):28–46. [PubMed: 7927160]
  25. Gelijns A, Rosenberg N. The Changing Nature of Medical Technology Development. In: Rosenberg N, Gelijns A, Dawkins H, editors. Sources of Medical Technology: Universities and Industry. Washington, DC: National Academies Press; 1995. [PubMed: 25121227]
  26. Gold M, Stevenson D, Fryback D. HALYS and QALYS and DALYS, OH MY: Similarities and Differences in Summary Measures of Population Health. Annual Review of Public Health. 2002;23:115–134. [PubMed: 11910057]
  27. Heidenreich P, McClellan M. Biomedical Research and Then Some: The Causes of Technological Change in Heart Attack Treatment. In: Murphy Kevin, Topel Robert., editors. Measuring the Gains from Medical Research: An Economic Approach. Chicago: University of Chicago Press; 2003. pp. 163–205.
  28. Jaffe A, Trajtenberg M, Henderson R. Geographic Localization of Spillovers as Evidenced By Patent Citations. The Quarterly Journal of Economics. 1993;108(3):577–598.
  29. Keyhani S, Diener-West M, Powe N. Do Drug Prices Reflect Development Time and Government Investment? Medical Care. 2005;43(8):753–762. [PubMed: 16034288]
  30. Kline S, Rosenberg N. An Overview of Innovation. In: Landau Ralph, Rosenberg Nathan., editors. The Positive Sum Strategy: Harnessing Technology for Economic Growth. 1986.
  31. Kneller R. The Importance of New Companies for Drug Discovery: Origins of a Decade of New Drugs. Nature Reviews Drug Discovery. 2010;9(11):867–882. [PubMed: 21031002]
  32. Lemley M, Sampat BN. Examiner Characteristics and Patent Office Outcomes. Forthcoming. Review of Economics and Statistics. 2011
  33. Lichtenberg F. Are the Benefits of New Drugs Worth Their Cost? Health Affairs. 2001;(20):241–251. [PubMed: 11558710]
  34. Mansfield E. Academic Research and Industrial Innovation: An Update of Empirical Findings. Research Policy. 1998;7–8(26):773–776.
  35. Manton K, Gu X, Lowrimore G, Ullian A, Tolley H. NIH Funding Trajectories and Their Correlations with Us Health Dynamics from 1950 to 2004. Proceedings National Academy of Science USA. 2009;106(27):10981–10986. [PMC free article: PMC2700155] [PubMed: 19549852]
  36. McKeown T. The Role of Medicine: Dream, Mirage, or Nemesis? London: Nuffield Provincial Hospitals Trust; 1976.
  37. Morlacchi P, Nelson RR. How Medical Practice Evolves: The Case of the Left Ventricular Assist Device. Research Policy. 2011;40(4):511–525.
  38. Moses H III, Dorsey ER, Matheson DHM, Thier SO. Financial Anatomy of Biomedical Research. Journal of the American Medical Association. 2005;294(11):1333–1342. [PubMed: 16174691]
  39. Mowery D, Nelson RR, Sampat BN, Ziedonis AA. Ivory Tower and Industrial Innovation: University–Industry Technology Transfer Before and After Bayh-Dole. Stanford, CA: Stanford University Press; 2004.
  40. Murphy KM. Measuring the Gains from Medical Research: An Economic Approach. 1. University of Chicago Press; 2003.
  41. Murphy K, Topel R, editors. Measuring the Gains from Medical Research: An Economic Approach. Chicago, IL: University of Chicago Press; 2003.
  42. Mushkin S. Biomedical Research: Costs and Benefits. Cambridge, MA: Ballinger Publishing; 1979.
  43. National Institutes of Health. Cost savings resulting from NIH research support: a periodic evaluation of the cost-benefits of biomedical research. 1993.
  44. Nelson RR. The Simple Economics of Basic Scientific Research. Journal of Political Economy. 1959;(67):297–306.
  45. Nelson RR. The Role of Knowledge in R and D Efficiency. The Quarterly Journal of Economics. 1982;97(3):453–470.
  46. Nordhaus W. The Health of Nations: The Contribution of Improved Health to Living Standards. In: Murphy Kevin, Topel Robert., editors. Measuring the Gains from Medical Research: An Economic Approach. Chicago: University of Chicago Press; 2003.
  47. Philipson T, Jena AB. Who Benefits from New Medical Technologies? Estimates of Consumer and Producer Surpluses for HIV/AIDS Drugs. Forum for Health Economics and Policy. 2006;9(2) Biomedical Research and the Economy), Article 3.
  48. Rosenberg N. Schumpeter and the Endogeneity of Technology: Some American Perspectives. London: Psychology Press; 2000.
  49. Sampat BN. Academic Patents and Access to Medicines in Developing Countries. American Journal of Public Health. 2009;99(1):9–17. [PMC free article: PMC2636619] [PubMed: 19008514]
  50. Sampat BN. When Do Patent Applicants Search for Prior Art? Journal of Law and Economics. 2010;53:399–416.
  51. Sampat BN. The Allocation of NIH Funds Across Diseases and the Political Economy of Mission-Oriented Biomedical Research, Working paper. 2011.
  52. Sampat BN, Lichtenberg F. What Are The Respective Roles Of The Public And Private Sectors In Pharmaceutical Innovation? Health Affairs. 2011;30(2):332–339. [PubMed: 21289355]
  53. Scherer FM. The pharmaceutical industry, Chapter 25. In: Culyer Anthony, Newhouse Joseph., editors. Handbook of Health Economics. Amsterdam: North Holland; 2000.
  54. Sporn MB. The War on Cancer: A Review. Journal of the New York Academy of Sciences. 1997;833(1)
  55. Stevens AJ, Jensen JJ, Wyller K, Kilgore PC, Chatterjee S, Rohrbaugh ML. The Role of Public-Sector Research in the Discovery of Drugs and Vaccines. The New England Journal of Medicine. 2011;364(6):535–541. [PubMed: 21306239]
  56. Stokes D. Pasteur’s Quadrant: Basic Science and Technological Innovation. Washington, DC: Brookings Institution Press; 1997.
  57. Toole AA. Does Public Scientific Research Complement Private Investment in Research and Development in the Pharmaceutical Industry? The Journal of Law and Economics. 2007;50(1):81–104.
  58. Ward MR, Dranove D. The Vertical Chain of Research and Development in the Pharmaceutical Industry. Economic Inquiry. 1995;33:70–87.
  59. Weisbrod BA. Economics and Medical Research. AEI Press; 1983.
  60. Weisbrod BA. The health care quadrilemma: an essay on technological, change, insurance, quality, and cost containment. Journal of Economic Literature. 1991:523–552.
  61. Zhang Y, Soumerai S. Do Newer Prescription Drugs Pay for Themselves? A Reassessment of the Evidence. Health Affairs. 2007;26(3):880–886. [PubMed: 17485770]
  62. Zucker L, Brewer M, Darby M. Intellectual Human Capital and The Birth of U.S. Biotechnology Enterprises. American Economic Review. 1998;88(1):290–306.

Footnotes

1. I thank Pierre Azoulay, and participants in the National Academies’ 2011 Workshop on Measuring the Impacts of Federal Investments in Research, for useful comments and suggestions.

2. Stokes (1997) and others have challenged this definition of “basic” research.

3. Stokes (1997) provides a thoughtful critique of conventional distinctions between “basic” and “applied” research. Since much of the literature before and since Stokes uses this terminology, we employ it in our review of this literature, even while recognizing the importance of his argument.

4. Cutler (1998) observes “Common wisdom suggests that rapid cost increases are necessarily bad. This view, however, is incorrect. Cost increases are justified if things that they buy (increases in health) are worth the price paid.” (2)

5. See, however, Zhang and Soumerai (2007) for a critique of this finding.

6. The cost-effectiveness of these technologies also depends on the populations on which they are used, as Chandra and Skinner (2011) emphasize.

7. There is also some discussion about whether the public sector should be paying attention to the cost-side consequences of its investment decisions. Weisbrod (1991) notes: “With respect to the NIH, it would be useful to learn more about the way the size and allocation of the scientific research budget are influenced, perhaps quite indirectly, by the health insurance system, through its impact on the eventual market for new technologies of various types” (535).

8. Cutler (2008) also emphasizes progress in the “war on cancer,” though he highlights the role of screening and personal behavior changes, and notes the high costs of treatment. Sporn (2006) and Bailar and Gornik (1997) offer less sanguine assessments, emphasizing that progress against cancer has been highly uneven. Long-standing debates in assessments of the War on Cancer include disagreements on the relative importance of treatment versus prevention, and of basic versus applied research. The literature also suggests it is difficult to evaluate the extent of progress against cancer, for two main reasons. First, advances in screening increase incidence. The second is competing risks: for example, the reduction in mortality from cardiovascular disease, discussed above, increased cancer cases. See Cutler (2008) for a review.

9. A National Cancer Institute (NCI) “Fact Sheet” asserts that “approximately one half of the chemotherapeutic drugs currently used by oncologists for cancer treatment were discovered and/or developed at NCI.” http://www.cancer.gov/cancertopics/factsheet/NCI/drugdiscovery

10. However, recall that the Toole (2007) study shows that basic research funding by the public sector has a stronger effect on private R and D than clinical research funding.

11. Heidenreich and McClellan (2003) summarize this point of view in the introduction to their study (discussed above), noting that while previous analyses “have generally not provided direct evidence of the impact on health of specific research studies, or on the likely value of additional research funding,” these previous studies tend to conclude “recent gains in health are extraordinarily valuable in comparison with the relatively modest past funding.”

Copyright © 2011, National Academy of Sciences.
Bookshelf ID: NBK83123
