
National Academies (US) Committee on Measuring Economic and Other Returns on Federal Research Investments. Measuring the Impacts of Federal Investments in Research: A Workshop Summary. Washington (DC): National Academies Press (US); 2011.



Like the benefits of research to health, many other research benefits may not be reflected, or may be only partly reflected, in market transactions, yet they have enduring national importance. Examples include contributions to national defense, agricultural innovation, environmental protection, and the sustainability of natural resources. Economists have tools to measure the economic effects of non-market benefits, but these tools may not always capture the full extent of those benefits.

Five speakers at the workshop examined these non-market impacts from very different perspectives, yet their observations had some intriguing commonalities. Foresight, leadership, and risk are all involved in pursuing research with difficult-to-measure but very real benefits.


The Bill and Melinda Gates Foundation is a private foundation focused in part on improving health, reducing poverty, and improving food security in some of the world’s poorest countries. It engages in what Prabhu Pingali, Deputy Director of Agricultural Development at the foundation, termed strategic philanthropy. The foundation establishes a set of clear goals; identifies the pathways, partners, and grants necessary to make progress toward those goals; and then measures progress toward those goals. In its Agricultural Development Program it focuses on doubling the productivity of farming by small landholders (less than two hectares) in sub-Saharan Africa and South Asia.

There is a rich history of metrics in agricultural development over the past several decades, Pingali observed. Since the Green Revolution, agricultural development specialists have been tracking the adoption and diffusion of modern varieties of the major staple crops, so they know the extent to which modern wheat and rice varieties have been adopted in the developing world and the connection of that diffusion to productivity growth. This work also has shown that the rates of return to crop R and D in the developing world have been consistently high—on the order of 50 percent or more. Furthermore, these investments have also had high pay-offs for U.S. agriculture. For example, according to a study by Philip Pardey and colleagues for the International Food Policy Research Institute (IFPRI, 1996), from an overall investment of $71 million since 1960 in wheat improvement research at the International Maize and Wheat Improvement Center of the Consultative Group on International Agricultural Research (CGIAR), the U.S. economy realized a return of at least $3.4 billion and up to $13.4 billion for the period 1970 to 1993. From a total investment of about $63 million since 1960 in rice research at the CGIAR’s International Rice Research Institute, the United States gained at least $37 million and up to $1 billion in economic benefits from 1970 to 1993, according to the same study. “The bottom line,” Pingali concluded, “is that international crop improvement research has had high pay-offs, not just for the countries where the work was targeted but also high pay-offs back to U.S. agriculture.”
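As a rough check on the magnitudes in the Pardey study, the implied benefit-cost ratios can be computed directly from the quoted figures. This is only a sketch: the dollar amounts are the nominal totals cited above, and the ratios ignore discounting and the timing of costs and benefits.

```python
# Benefit-cost ratios implied by the IFPRI (1996) figures quoted above.
# Nominal dollars, no discounting, so these are order-of-magnitude
# illustrations rather than the study's own rate-of-return estimates.

def benefit_cost_ratio(benefit_usd, investment_usd):
    """Dollars of estimated U.S. benefit per dollar invested."""
    return benefit_usd / investment_usd

# CGIAR wheat improvement research: $71 million invested since 1960,
# U.S. benefits of $3.4-13.4 billion realized 1970-1993.
wheat_low = benefit_cost_ratio(3.4e9, 71e6)
wheat_high = benefit_cost_ratio(13.4e9, 71e6)

# IRRI rice research: $63 million invested, U.S. benefits of
# $37 million to $1 billion over the same period.
rice_low = benefit_cost_ratio(37e6, 63e6)
rice_high = benefit_cost_ratio(1e9, 63e6)

print(f"wheat: {wheat_low:.0f}x to {wheat_high:.0f}x")  # roughly 48x to 189x
print(f"rice:  {rice_low:.1f}x to {rice_high:.0f}x")    # roughly 0.6x to 16x
```

Even the low wheat estimate implies tens of dollars of U.S. benefit per dollar invested, consistent with the “consistently high” returns Pingali describes; the rice range is wider and includes the possibility of only a modest domestic pay-off.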

For small landholders in the developing world the chief crops are rice, wheat, maize, sorghum, millet, and cassava. For each crop the foundation has set clearly defined output targets that it expects grantees to achieve. For example, an output could be the release of a particular variety of maize that is tolerant to drought, or it could be the number of farmers in a given area who adopt a variety over a period of time. For grants that span the entire food chain, from seed to the consumer’s plate, defining outputs becomes increasingly complex. Outputs for the use of fertilizer are straightforward, but what are the outputs for fertilizer policies? Nevertheless, once outputs are specified by the foundation, grantees are expected to apply a set of indicators to track progress toward achieving them.

The foundation has also sought to measure the extent to which its $1.7 billion agriculture investment over four years has reduced hunger and poverty. “Just adding up the outcomes from ways to monitor grant making does not necessarily get us to the answer,” said Pingali. To address this problem, it has set up a randomly sampled household survey across sub-Saharan Africa that is nationally representative and stratified by the agro-ecologies present in each country. It is now in the process of collecting detailed household data on production practices, technologies used, income, nutrition, and health and education status for about 25,000 households in seven countries in Africa, and it hopes to extend the survey to other countries. Visits to each household are occurring from one to two years apart over a 15- to 20-year period. “We can track changes that are taking place in African households over a long period of time and then track the contribution of productivity improvement to household welfare and the relationship between those two over this long period of time,” said Pingali. “Of course we won’t be able to attribute those changes specifically to our efforts, but I don’t think that matters as long as we can show that there’s progress toward achieving our ultimate goals of hunger and poverty reduction.”


As it enters its third century, the DuPont Company is undergoing a transformation that is bringing biology into a product mix based on traditional chemistry, said Richard Broglie, Director of Research Strategy at DuPont Agricultural Biotechnology. Its investment decisions are informed by four global megatrends: increasing food production; decreasing dependence on fossil fuels; protecting people, assets, and the environment; and growth in emerging markets. These trends derive in part from population projections. Global population is expected to exceed 9 billion by 2050. Feeding that number of people will require an increase in food productivity of 70 percent, Broglie observed. To meet this need, the majority of DuPont’s R and D investments are aimed at adding new traits into crops to increase and protect yields, improving farm input efficiencies, and increasing the end use value of either the grains or the non-harvested crops.

DuPont measures the results of its investments in several ways, said Broglie. It tracks the number of new products introduced (with 1,786 new products produced in 2010), the revenue generated from those products, and the number of patents filed. The first two measures are more important than the third, said Broglie, since patents increase the probability of developing a product but do not necessarily give rise to products.

In the agricultural biotechnology area, a stage-gated approach for R and D decisions is used that progresses from discovery to proof of concept to early and advanced development to pre-launch to launch. This framework allows the company to balance its research investments across a diverse portfolio and over an extended period, since the development of a new crop trait can take 15 years or longer. It also helps balance investments against regulatory costs, which can be anywhere from $100 million to $150 million. At each stage, decisions involve people from the technical organization, the legal organization, the regulatory group, and the marketing group.


An economic cost-benefit analysis is an interesting problem but can be very difficult to implement, according to Michael Roberts, Assistant Professor of Agricultural and Resource Economics at North Carolina State University. Economic analysis has shown that research is the main source of productivity growth. Research is also a public good, which means that one person’s use of research findings does not diminish their value to others, and it is difficult for someone who has the findings to keep others from using them. Because of these features, the private sector tends to do too little research, and there is a clear public role in funding it. However, to know how much to invest and how to set research priorities, the costs and benefits of different kinds of research must be weighed.

“This is a challenging conceptual problem,” said Roberts. Research has many possible outcomes that economists might model as random. The range of potential outcomes is large, sometimes unintended, and probably unquantifiable. “We probably can’t even imagine what the potential outcomes are of any individual research project.” Many drugs used today are by-products of efforts to do something else, which reflects the uncertainty of research.

A Pest Forecast System as a Model

A recent research project in which Roberts was involved highlights some of these difficulties. In late 2004, spores of the fungus that causes soybean rust, which was then prevalent in South America and much of the rest of the world but not in the United States, arrived on the Gulf Coast. The disease did not reduce yields much, but it greatly increased costs because of the need to apply fungicides. The USDA coordinated its experiment stations to set up sentinel plots throughout the United States to monitor for soybean rust and track its spread. Also, an aerobiologist modeled how the spores move on the winds, with a website reporting the overall results. Farmers could use this information to decide whether or not to spray fungicide on their soybeans.

The USDA’s Economic Research Service sought to determine the value of this research. It took into account three key components: (1) prior beliefs about the amount of risk, (2) the amount of preventable losses, and (3) how well the information system resolves uncertainty. With no information, farmers will sometimes spray when unnecessary or not spray when needed. With perfect information, farmers will always make the right decisions. In the real world, partial information is available. For example, farmers had the option of carefully monitoring their fields, spraying the preventive fungicide, or monitoring their fields and spraying a less effective and less costly fungicide.

This range of scenarios made it possible to model the value of information, in terms of dollars per acre, against the range of prior beliefs about the possibility of infection. The model exhibited peaks of value that represented particular probabilities of beliefs about infection where a rational farmer would switch from doing nothing to monitoring and then to applying the curative fungicide. “You get these peaks right at the decision points because that’s where you’re most unsure about what the right decision is to make, and a little bit of information goes a long way at those points.”
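The shape Roberts describes can be reproduced with a minimal decision model. This is a sketch with hypothetical per-acre costs, not the ERS figures: the farmer either sprays preventively or does nothing, and perfect information would let the farmer spray only when rust actually arrives. The value of information then peaks exactly at the prior belief where the two actions break even.

```python
# Minimal value-of-perfect-information model for the spray/don't-spray
# decision. Dollar figures are hypothetical, chosen only to show the
# shape of the curve described in the text.

PREVENTABLE_LOSS = 50.0  # $/acre lost to rust if the field is not sprayed
SPRAY_COST = 15.0        # $/acre to apply the preventive fungicide

def expected_cost_no_info(p):
    """Best action given only the prior belief p that rust will strike:
    do nothing (expected loss p * 50) or always spray (certain cost 15)."""
    return min(p * PREVENTABLE_LOSS, SPRAY_COST)

def expected_cost_perfect_info(p):
    """With perfect information, spray only in the fraction p of cases
    where rust actually arrives."""
    return p * SPRAY_COST

def value_of_information(p):
    return expected_cost_no_info(p) - expected_cost_perfect_info(p)

# Scan priors from 0 to 1; the value of information peaks at the prior
# where the farmer is indifferent between the two actions
# (p = SPRAY_COST / PREVENTABLE_LOSS = 0.3).
grid = [i / 1000 for i in range(1001)]
peak_p = max(grid, key=value_of_information)
print(peak_p, value_of_information(peak_p))
```

The peak at the indifference point is the model's version of Roberts's observation: a little information is worth the most precisely where the farmer is most unsure which action is right, and it is worth nothing at priors of 0 or 1, where the best action is already obvious.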

The USDA researchers concluded that the model had value. However, it was still crude. The model depended on an extraordinary simplification of reality and key simplifying assumptions. It had the potential to resolve subjective uncertainties, yet the quantifiable benefits were still difficult to determine and sensitive to the assumptions made.

In light of these limitations, Roberts was pessimistic about valuing individual research projects. However, other strategies may be more productive. For example, it may be possible to value research programs rather than projects. It also may be possible to value canonical examples, such as the development of hybrid corn, which depended on the work of a few key researchers. Finally, it may be possible to value projects and programs in retrospect and adjust research priorities accordingly.

Climate Change Projections

Roberts has been doing research on the effects of climate change on the global crop system. A key finding has been that extreme heat is by far the single most predictive variable for crop yields. This finding could be used to build an early warning indicator that would allow societies to avoid some of the adverse effects of climate change, he said.

However, immense uncertainty continues to make the value of this research difficult to quantify. Research can produce low-probability outcomes with extremely high payoffs; economists would say that the value distribution has a fat tail. In a totally different context, climate change could have a fat tail if there is a small probability of its producing truly catastrophic events. Cost-benefit analyses for research need to be pursued, but in cases like these they may not be feasible, Roberts concluded.


The three major questions raised by Irwin Feller at the beginning of the workshop are somewhat different in the context of a private foundation’s decisions, said Kai Lee, Program Officer with the Conservation and Science Program at the David and Lucile Packard Foundation. The first question becomes how much a foundation should spend on science, which is a question that is ultimately answered by the trustees within the constraints of a foundation’s mission and resources. The second question becomes how to allocate funding given the mission of the foundation. And the third question becomes which research performers should receive the funds from a foundation. In the case of the Packard Foundation, said Lee, program officers are looking for a very specific population of research performers— people willing to work with the foundation to contribute to informing the near-term decision making of entities, including public agencies, that will support the foundation’s conservation mission. “That turns out to be a lot harder than you might think,” he said.

The Packard Foundation made $236 million in grants in 2010 in four areas: population and reproductive health; children, families, and communities; local programs; and conservation and science, with the last of these accounting for $154 million. For example, it supports the Monterey Bay Aquarium Research Institute, a major oceanographic institution created by David Packard where scientists and engineers work together. It has a fellowship program in science and engineering for early-career scientists. And it has other programs focused on ocean science, which is a major emphasis for the foundation. Although the amounts of research support it provides are small compared with federal funding for research, the foundation is a significant funder in the field of marine conservation.

In general, knowledge of ocean conservation is held by government agency staff members, academic scientists, and a growing cadre of scientists who work for non-governmental organizations that have varying degrees of advocacy as part of their mission. This knowledge has come to be a countervailing source of information for decision makers in the face of advocacy by resource users and developers, who also depend heavily on publicly funded knowledge.

The foundation seeks to link knowledge with action. While advancing conservation strategies, it also works to improve the use of knowledge in decision making. “In effect, what I’m trying to do is to foster a kind of ‘learning by doing’ by making grants and working with users and researchers,” said Lee. Using this approach, real-time evaluation of outcomes is an essential component.

In the conservation field, the use of knowledge to inform action can be done in two possible ways. One is to bring knowledge to bear to support advocacy to achieve specific conservation ends. The problem with this approach is that knowledge becomes entangled in polarization. “There is a grave risk of damage to the credibility and legitimacy of science when it becomes entangled in that polarization,” said Lee. “Nonetheless, science in support of advocacy has sometimes proved to be necessary and successful.”

The second approach is not to support advocacy but rather to support decision making and learning. This tends to work best in a collaborative setting. In such a setting, science is part of a governance process to solve problems rather than part of a polarized process to try to change the rules. This use of science tends to reinforce existing institutions, but it also requires some conflict so that problems can be recognized and the information brought to bear by science can affect decisions.

Lee discussed the concept of adaptive management, which he described as the idea that the implementation of a policy should be understood as an experimental test of the hypothesis embodied in that policy. Such an experiment requires systematic monitoring of outcomes to determine the consequences, including unanticipated consequences, of a policy. “You want to do integrative assessment of that knowledge to build knowledge of the system that you’re innovating in, the ecosystem if you like, to inform model building, to structure a debate, and from that to enable strong inference.”

The science Lee seeks to support links communities of scientists with decision makers, stakeholders, residents, and citizens of an area who are used to making decisions without any information from science. It can be difficult to make this connection work, Lee observed, so often the foundation has tried to foster the emergence of boundary-spanning organizations. The foundation does this by emphasizing output-oriented grant making, in which it focuses on decision makers at the outset. “We put a lot of effort into aligning users and researchers, and this is where the art of the grant maker gets called upon.” The foundation presents prospective grantees with a set of questions to think about as they prepare their proposals. “We want them to understand and explain to us whether the situation is the right one. That is, is there an opening for new knowledge to actually cause changes in action?” The process can be burdensome, with the foundation identifying specific indicators and closely monitoring their progress. “The objective is to allow us to learn about the types of short- and medium-term interventions in which the foundation can have the greatest impact.”


Richard Van Atta, Senior Research Analyst at the Science and Technology Policy Institute, pointed out that national security is also a societal value with a very fat tail. The value of national security can be viewed as infinite, or at least as binary, in that the United States has it or it does not.

Similarly, defense research can have immense payoffs that are difficult or impossible to predict. For example, a relatively modest investment in gallium arsenide monolithic microwave integrated circuits for signal processing led to the development of a technology that is now used in every cell phone around the world.

Despite these uncertainties, the Department of Defense still has to assess the effects of research investments on national security as a way of making decisions. Research in the Department of Defense is purpose-driven, Van Atta said. The nation relies on the technological superiority of its armed forces to maintain its position of world leadership. The question then becomes: How can the value of technological superiority be assessed in terms of desired outcomes? “You can’t defend everything against everybody, so you have to make choices.”

The Department of Defense conducts this assessment by establishing a national security strategy and then relating technologies to the strategy. In doing so, it differentiates technologies according to different objectives. Core technologies refer to longstanding traditional capabilities, such as explosives and propulsion. Critical technologies refer to revolutionary or transformational technological changes. Emerging technologies occupy the forefront of knowledge and have the potential to be critically important but have not yet been fully developed. Process and manufacturing production technologies, such as process controls for nanotechnology, underlie other developing technologies. Enabling or cross-cutting technologies are capabilities that everyone wants but no one wants to pay for. In this case, different organizations may be devoting insufficient effort to the technologies, and these efforts need to be scaled up to produce a technology that will have a substantial impact.

In all of these cases, technologies need to be managed in increasingly difficult and complex technology environments. This management requires the establishment of goals and purposes. For example, NASA has an approach called GOTChA, for Goals, Objectives, Technology Challenges, and Actions or Activities. Under this approach, activities are organized toward goals by focusing on the questions “Are we getting there?” “Are we there yet?” “How far have we gotten?” “Do we put more in or don’t we?”

The DARPA Approach

DARPA is the best-known organization within the Department of Defense for developing high-risk, high-payoff technologies, observed Van Atta. When George Heilmeier became Director of DARPA, he imposed what came to be known as the Heilmeier Criteria. These were basically a set of management questions: What is the purpose of doing this research? What difference will it make if it succeeds? How would you know if you are succeeding? What are your midterm criteria for assessing it? And what are your milestones? When researchers responded to these questions by saying, “We’re scientists; we can’t tell you those answers in advance,” Heilmeier responded, “You will if you want my money.”

This approach to assessment is oriented toward research designed to meet specific identified needs, said Van Atta. That raises the question of how to define these needs and how to link them to requirements that have not yet been specified. “We know what the requirements are for today,” said Van Atta. “What are the requirements for five or ten years from now in the security world?” During the Cold War, the requirements changed slowly. “Today the security environment changes faster than we can develop our S and T plans. It’s more like the business environment,” which requires that technology development be managed in a different way than in the past.


Public Agenda is an organization devoted to bridging the gaps between leaders and the public, and between experts and the public, said the organization’s president, Will Friedman. By measuring and then working to reduce these gaps, Public Agenda and similar organizations engage stakeholders and help people come to terms with issues.

Public Agenda does considerable public opinion research to find out how people are looking at problems. It also conducts public and stakeholder engagement and communications to set in motion collaborative processes. It has worked on many issues, including energy, the environment, and health care.

The organization tends to become involved in complex societal issues that involve both science and politics. In these cases, people need to make value judgments and adapt to change. Public participation may not be needed to enact a policy, but the lack of participation can lead to backlashes that undermine a policy. Consequently, the challenge for Public Agenda is usually how to create the conditions that allow the public to come to terms with complex, science-intensive issues.

The way the public wrestles with issues and comes to hold certain positions differs from the way experts do, Friedman said. The public learning curve involves three stages, beginning with a consciousness-raising period. For the public to come to terms with an issue, people need to develop a sense of awareness and urgency about that issue. The public then engages in a process of working through an issue. Many barriers can impede this process, including a lack of urgency, wishful thinking, misperceptions and knowledge gaps, and mistrust. Overcoming these barriers requires strategic facts, appropriate choices, and time. “The real art and science here is to be much more precise, not in terms of your desire to manipulate the public to have the opinion you want them to have, but rather to help them figure out where they actually stand and what’s important to them.”

In the case of climate change, for example, surveys have shown that the public has become less likely over time to view climate change as serious. Further work showed that people were not getting the message that scientists thought they were delivering. The public tends to frame the issue in bread-and-butter terms, seeing, for example, gas prices and reliance on imported oil as more serious threats than climate change. People engage in a great deal of wishful thinking, and the issue has become polarized by politics.

Science has a role in helping the public grapple with such issues, but it may not be the role many scientists assume. Their most common mistake is to demand that the public become junior scientists. As a result, they overload people with technical detail without considering what information the public is ready to receive at a given time. “Science literacy is well-intended and education is a good thing, but it does not necessarily help people grapple effectively with specific issues at specific points in time,” said Friedman.

Science’s most important contributions are to lead the charge on the technical side of problem solving while informing public deliberation in critical ways. Science can help clarify the choices the nation needs to make. It can help people understand the implications of different solutions and the tradeoffs involved. Public Agenda uses a tool it calls a choice framework that presents people with a few strategic bits of background information: “not too much, but just based on research about what it is that people need to begin to get into the issue.” It also studies the framing of issues in different ways to help people deliberate more effectively. The choice framework “can help people learn quickly and shift from a non-productive, circular reasoning and non-exploratory dialogue to one where they are working off each other, thinking about solutions, and generating really interesting questions.”


During the discussion period, Van Atta was asked how to build institutional support for entities such as DARPA that are institutionally disruptive. The best approach, he said, is through top-down leadership. For example, the impetus for stealth technologies came from the Secretary of Defense and depended on his vision and strategy in pursuing a new technology. “If you’re going to do something different, you’ve got to do something different.”

Broglie was asked whether DuPont has a strategy for releasing research results into the public sphere when they do not lead to marketable products but could nevertheless lead to important advances. The question is difficult to answer, he said, because there are many reasons why something might not progress through the commercialization pipeline. However, DuPont has worked with the Gates Foundation to improve the nutritional quality of grains in crops for which it does not sell seed. In other cases, results from technical dead ends are publicly released when the information has public value.

Roberts was asked about liability considerations if a model leads farmers to make a decision that turns out to be mistaken or harmful. He agreed that for a model to be useful as a decision tool, it would need substantial supporting data. The model would also be refined through use.

Copyright © 2011, National Academy of Sciences.
Bookshelf ID: NBK83122

