NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

National Research Council (US) Committee for Capitalizing on Science, Technology, and Innovation: An Assessment of the Small Business Innovation Research Program Policy and Global Affairs; Wessner CW, editor. An Assessment of the Small Business Innovation Research Program at the National Aeronautics and Space Administration. Washington (DC): National Academies Press (US); 2009.




Created in 1982 by the Small Business Innovation Development Act, the Small Business Innovation Research (SBIR) program was designed to stimulate technological innovation among small private-sector businesses while providing the government cost-effective new technical and scientific solutions to challenging mission problems. SBIR was also designed to help to stimulate the U.S. economy by encouraging small businesses to market innovative technologies in the private sector.1

As the SBIR program approached its twentieth year of existence, the U.S. Congress requested that the National Research Council (NRC) of the National Academies conduct a “comprehensive study of how the SBIR program has stimulated technological innovation and used small businesses to meet Federal research and development needs,” and make recommendations on improvements to the program.2 Mandated as a part of SBIR’s renewal in late 2000, the NRC study has assessed the SBIR program as administered at the five federal agencies that together make up 96 percent of SBIR program expenditures. The agencies are, in decreasing order of program size: the Department of Defense (DoD), the National Institutes of Health (NIH), the National Aeronautics and Space Administration (NASA), the Department of Energy (DoE), and the National Science Foundation (NSF).

The NRC Committee assessing the SBIR program was not asked to consider whether SBIR should exist—Congress has affirmatively decided this question on three occasions.3 Rather, the Committee was charged with providing assessment-based findings to improve public understanding of the operations, achievements, and challenges of the program, as well as recommendations to improve the program’s effectiveness.


Eleven federal agencies are currently required to set aside 2.5 percent of their extramural research and development budget exclusively for SBIR contracts. Each year these agencies identify various R&D topics, representing scientific and technical problems requiring innovative solutions, for pursuit by small businesses under the SBIR program. These topics are bundled together into individual agency “solicitations”—publicly announced requests for SBIR proposals from interested small businesses. A small business can identify an appropriate topic it wants to pursue from these solicitations and, in response, propose a project for an SBIR grant. The required format for submitting a proposal is different for each agency. Proposal selection also varies, though peer review of proposals on a competitive basis by experts in the field is typical. Each agency then selects the proposals that best meet program selection criteria, and awards contracts or grants to the proposing small businesses.
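The set-aside described above is a straightforward percentage calculation. The sketch below illustrates it with a hypothetical $2 billion extramural R&D budget (the budget figure is an assumption chosen for illustration, not a number from this report):

```python
# SBIR set-aside arithmetic: 2.5 percent of an agency's extramural
# R&D budget is reserved for SBIR awards. The budget figure below is
# hypothetical, chosen only to illustrate the calculation.
extramural_rd_budget = 2_000_000_000   # hypothetical: $2 billion
set_aside_rate = 0.025                 # 2.5 percent, per the statute

sbir_budget = extramural_rd_budget * set_aside_rate
print(f"SBIR set-aside: ${sbir_budget:,.0f}")  # SBIR set-aside: $50,000,000
```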

As conceived in the 1982 Act, SBIR’s grant-making process is structured in three phases at all agencies:

  • Phase I grants essentially fund feasibility studies in which award winners undertake a limited amount of research aimed at establishing an idea’s scientific and commercial promise. Today, the legislative guidance anticipates normal Phase I grants of around $100,000.4
  • Phase II grants are larger—typically about $750,000—and fund more extensive R&D to develop the scientific and commercial promise of research ideas.
  • Phase III. During this phase, companies do not receive additional funding from the SBIR program. Instead, grant recipients should be obtaining additional funds from a procurement program at the agency that made the award, from private investors, or from the capital markets. The objective of this phase is to move the technology from the prototype stage to acquisition or the marketplace.

Obtaining Phase III support is often the most difficult challenge for new firms to overcome. In practice, agencies have developed different approaches to facilitate SBIR grantees’ transition to commercial viability, not least by encouraging applications for additional competitively awarded SBIR grants.

Previous NRC research has shown that firms have different objectives in applying to the program. Some want to demonstrate the potential of promising research but may not seek to commercialize it themselves. Others think they can fulfill agency research requirements more cost-effectively through the SBIR program than through the traditional procurement process. Still others seek a certification of quality (and the private investments that can come from such recognition) as they push science-based products towards commercialization.5


The SBIR program approached reauthorization in 1992 amidst continued concerns about the U.S. economy’s capacity to commercialize inventions. Finding that “U.S. technological performance is challenged less in the creation of new technologies than in their commercialization and adoption,” the National Academy of Sciences at the time recommended an increase in SBIR funding as a means to improve the economy’s ability to adopt and commercialize new technologies.6

Following this report, the Small Business Research and Development Enhancement Act (P.L. 102–564), which reauthorized the SBIR program until September 30, 2000, doubled the set-aside rate to 2.5 percent.7 This increase in the percentage of R&D funds allocated to the program was accompanied by a stronger emphasis on encouraging the commercialization of SBIR-funded technologies.8 Legislative language explicitly highlighted commercial potential as a criterion for awarding SBIR grants. For Phase I awards, Congress directed program administrators to assess whether projects have “commercial potential,” in addition to scientific and technical merit, when evaluating SBIR applications.

The 1992 legislation mandated that program administrators consider the existence of second-phase funding commitments from the private sector or other non-SBIR sources when judging Phase II applications. Evidence of third-phase follow-on commitments, along with other indicators of commercial potential, was also to be sought. Moreover, the 1992 reauthorization directed that a small business’ record of commercialization be taken into account when evaluating its Phase II application.9

The Small Business Reauthorization Act of 2000 (P.L. 106–554) extended SBIR until September 30, 2008. It called for this assessment by the National Research Council of the broader impacts of the program, including those on employment, health, national security, and national competitiveness.10


This NRC assessment of SBIR has been conducted in two phases. In the first phase, at the request of the agencies, a formal report on research methodology was to be developed by the NRC. Once developed, this methodology was then reviewed and approved by an independent National Academies panel of experts.11 Information about the program was also gathered through interviews with SBIR program administrators and during four major conferences where SBIR officials were invited to describe program operations, challenges, and accomplishments.12 These conferences highlighted the important differences in each agency’s SBIR program’s goals, practices, and evaluations. The conferences also explored the challenges of assessing such a diverse range of program objectives and practices using common metrics.

The second phase of the NRC study implemented the approved research methodology. The Committee deployed multiple survey instruments and its researchers conducted case studies of a wide profile of SBIR firms. The Committee then evaluated the results and developed both agency-specific and overall findings and recommendations for improving the effectiveness of the SBIR program. The final report includes complete assessments for each of the five agencies and an overview of the program as a whole.


At its outset, the NRC’s SBIR study identified a series of assessment challenges that must be addressed. As discussed at the October 2002 conference that launched the study, the administrative flexibility found in the SBIR program makes it difficult to make cross-agency comparisons. Although each agency’s SBIR program shares the common three-phase structure, the SBIR concept is interpreted uniquely at each agency. This flexibility is a positive attribute in that it permits each agency to adapt its SBIR program to the agency’s particular mission, scale, and working culture. For example, NSF operates its SBIR program differently than DoD because “research” is often coupled with procurement of goods and services at DoD but rarely at NSF. Programmatic diversity means that each agency’s SBIR activities must be understood in terms of its separate missions and operating procedures. This diversity is commendable but, operationally, makes the task of assessing the program more challenging.

A second challenge concerns the linear process of commercialization implied by the design of SBIR’s three-phase structure.13 In the linear model, illustrated in Figure 1-1, innovation begins with basic research supplying a steady stream of fresh and new ideas. Among these ideas, those that show technical feasibility become innovations. Such innovations, when further developed by firms, become marketable products driving economic growth.

FIGURE 1-1. The linear model of innovation.


The linear model of innovation.

As NSF’s Joseph Bordogna observed at the launch conference, innovation almost never takes place through a protracted linear progression from research to development to market.14 Research and development drives technological innovation, which, in turn, opens up new frontiers in R&D.15 True innovation, Bordogna noted, can spur the search for new knowledge and create the context in which the next generation of research identifies new frontiers. This nonlinearity, illustrated in Figure 1-2, makes it difficult to rate the efficiency of the SBIR program: inputs do not match up with outputs according to a simple function. Figure 1-2, while more complex than Figure 1-1, is itself a highly simplified model. For example, feedback loops can stretch backwards or forwards by more than one level.

FIGURE 1-2. A feedback model of innovation.


A feedback model of innovation.

A third assessment challenge relates to the measurement of outputs and outcomes. Program realities can and often do complicate the task of data gathering. In some cases, for example, SBIR recipients receive a Phase I award from one agency and a Phase II award from another. In other cases, multiple SBIR awards may have been used to help a particular technology become sufficiently mature to reach the market. Also complicating matters is the possibility that for any particular grantee, an SBIR award may be only one among other federal and non-federal sources of funding. Causality can thus be difficult, if not impossible, to establish.

The task of measuring outcomes is made harder because companies that have garnered SBIR awards can also merge, fail, or change their names before a product reaches the market. In addition, principal investigators or other key individuals can change firms, carrying their knowledge of an SBIR project with them. A technology developed using SBIR funds may eventually achieve commercial success at an entirely different company than that which received the initial SBIR award.

Complications plague even the apparently straightforward task of assessing commercial success. For example, research enabled by a particular SBIR award may take on commercial relevance in new unanticipated contexts. At the launch conference, Duncan Moore, former Associate Director of Technology at the White House Office of Science and Technology Policy (OSTP), cited the case of SBIR-funded research in gradient index optics that was initially considered a commercial failure when an anticipated market for its application did not emerge. Years later, however, products derived from the research turned out to be a major commercial success.16 Today’s apparent dead end can be a lead to a major achievement tomorrow. Lacking clairvoyance, analysts cannot anticipate or measure such potential SBIR benefits.

BOX 1-1 SBIR and the Luna Innovation Model

Developed by Kent Murphy, who founded Luna in rural southern Virginia, the Luna model uses multiple flexible funding instruments, both public and private (including SBIR, the Advanced Technology Program (ATP), venture capital, corporate partners, and internal funding), to develop and commercialize ideas originally generated at universities or with commercial partners.

Securing venture capital funding can be difficult even in the best of times; Luna received only two small investments during the late 1990s bubble. Venture capital firms tend to be highly specialized geographically, and Luna’s southern Virginia location has minimal local venture funding.a The path to technical and financial success is often complex for new technologies, especially those located in more rural areas distant from high-tech clusters.

In one example, Luna Energies built its basic technology with funding from prime contractors and then used SBIR funding to develop applications for NASA and the Air Force. Eventually, it developed civilian applications for the energy industry, leading to its purchase by an energy company. According to Murphy, innovation awards from both SBIR and ATP were “critical” to Luna’s success.b


See John Freear, Jeffrey Sohl, and William Wetzel, “Angles on Angels: Financing Technology-based Ventures—A Historical Perspective,” Venture Capital, 4(4):275–287, 2002.


Luna Innovations is now a public company, following an IPO on June 9, 2006.

Gauging commercialization is also difficult when the product in question is destined for public procurement. The challenge is to develop a satisfactory measure of how useful an SBIR-funded innovation has been to an agency mission. A related challenge is determining how central (or even useful) SBIR awards have proved in developing a particular technology or product. In some cases, the Phase I award can meet the agency’s need—completing the research with no further action required. In other cases, surrogate measures are often required. For example, one way of measuring commercialization success is to count the products developed using SBIR funds that are procured by an agency such as DoD. In practice, however, large procurements from major suppliers are typically easier to track than products from small suppliers such as SBIR firms. Moreover, successful development of a technology or product does not always translate into successful “uptake” by the procuring agency. Often, the absence of procurement may have little to do with the product’s quality or the potential contribution of SBIR.

Understanding failure is equally challenging. By its very nature, an early-stage program such as SBIR should anticipate a high failure rate. The causes of failure are many. The most straightforward, of course, is technical failure, where the research objectives of the award are not achieved. In some cases, the project can be technically successful but a commercial failure. This can occur when a procuring agency changes its mission objectives and hence its procurement priorities. NASA’s new Mars Mission is one example of a mission shift that may result in the cancellation of programs involving SBIR awards to make room for new agency priorities. Cancelled weapons system programs at the Department of Defense can have similar effects. Technologies procured through SBIR may also fail in the transition to acquisition. Some technology developments by small businesses do not survive the long lead times created by complex testing and certification procedures required by the Department of Defense. Indeed, small firms encounter considerable difficulty in penetrating the “procurement thicket” that characterizes defense acquisition.17 In addition to complex federal acquisition procedures, there are strong disincentives for high-profile projects to adopt untried technologies. Technology transfer in commercial markets can be equally difficult. A failure to transfer to commercial markets can occur even when a technology is technically successful: if the market is smaller than anticipated, if competing technologies emerge or prove more competitive than expected, if the technology is not cost-competitive, or if the product is not adequately marketed. Understanding and accepting the varied sources of project failure in the high-risk, high-reward environment of cutting-edge R&D is a challenge for analysts and policymakers alike.

This raises the issue of the standard against which SBIR programs should be evaluated. An assessment of SBIR must take into account the expected distribution of successes and failures in early-stage finance. As a point of comparison, Gail Cassell, Vice President for Scientific Affairs at Eli Lilly, has noted that only one in ten innovative products in the biotechnology industry will turn out to be a commercial success.18 Similarly, venture capital funds often achieve considerable commercial success on only two or three out of twenty or more investments.19

In setting metrics for SBIR projects, therefore, it is important to have a realistic expectation of the success rate for competitive awards to small firms investing in promising but unproven technologies. Similarly, it is important to have some understanding of what can be reasonably expected—that is, what constitutes “success” for an SBIR award, and some understanding of the constraints and opportunities successful SBIR awardees face in bringing new products to market. From the management perspective, the rate of success also raises the question of appropriate expectations and desired levels of risk taking. A portfolio that always succeeds would not be investing in high risk, high pay-off projects that push the technology envelope. A very high rate of “success” would, thus, paradoxically suggest an inappropriate use of the program. Understanding the nature of success and the appropriate benchmarks for a program with this focus is therefore important to understanding the SBIR program and the approach of this study.


With $103 million in annual awards in 2005, NASA operates the fourth largest SBIR program in the federal government. Currently, the NASA SBIR Phase I award is set at a maximum of $100,000 and lasts for six months. A Phase II award is set at a maximum of $600,000 and lasts for up to two years. NASA has not yet adopted the Phase IIB, Phase II Plus, or Fast Track options that exist at some other agencies.

The NASA SBIR program has varied over the years in terms of how centralized it is. Until recently, program operations were run at each of the ten NASA field centers with NASA Headquarters, supported by a national office located at Goddard, focusing on the overall administration of the program. Following NASA’s recent reorganization, the program will be less decentralized. It will run through only four field centers (Ames, JPL, Glenn, and Langley) with Ames replacing Goddard as the national office.

Each NASA center has an SBIR Field Center Program Manager who administers the program at the respective center. Contracts are managed by NASA’s Contracting Officer at each center with support from the Contract Officer Technical Representative (COTR). The COTR serves as the primary contact within NASA on a contract’s technology focus and objectives. Overall program policy, effectiveness, and assessment are the responsibility of the Headquarters Program Executive.

The impetus of the 2006 reorganization is to refocus SBIR on NASA’s core mission objectives, deemphasizing commercialization outside of NASA. Interviews with Mission Directorate liaisons indicate that, because of the reorganization, the balance of power between the Centers and Headquarters changed substantially in FY2005–2006.

The reorganization is intended to address dissatisfaction with the outcomes of the previous approach. For example, the 2002 Commercial Metrics report (covering 1983–1996) found that only about six percent of NASA’s 1,739 SBIR Phase II awards during this period supported technologies that were eventually infused into NASA or other federal programs via Phase III funding.20
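The infusion figure cited above follows from simple arithmetic on the reported numbers (both figures are taken from the text; the calculation below is only an illustration):

```python
# Infusion arithmetic from the 2002 Commercial Metrics report
# (both figures are taken from the surrounding text).
phase_ii_awards = 1739     # NASA Phase II awards, 1983-1996
infusion_rate = 0.06       # about six percent reached Phase III infusion

infused = round(phase_ii_awards * infusion_rate)
print(f"Roughly {infused} of {phase_ii_awards} Phase II awards were infused")
# Roughly 104 of 1739 Phase II awards were infused
```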

The reorganization also reflects changing needs and priorities within NASA. The addition of new missions and the expansion of existing ones have placed additional demands on Mission Directorates, squeezing funding for basic research.

As a result of the reorganization, Mission Directorates are now focusing on aligning research funded through SBIR with specific technologies that can be taken up (or, in NASA-speak, “infused”) into their own technology development programs. Whereas commercialization was previously the primary priority of NASA’s SBIR program (or, at least, a priority equal to support for the agency’s mission), the focus of the program since the 2006 reorganization is squarely on support for the NASA mission. For SBIR, this involves finding and developing technologies that can help NASA meet its very specific needs and requirements.

Overall, this new clarity of focus appears to be a positive development. As described in some detail in Chapter 4 (Outcomes), the low volume and high degree of specificity (e.g., space-hardiness) required to meet NASA’s needs make it less likely that SBIR-funded technologies can spin off into commercial sales.21


In gathering and analyzing the data to assess the SBIR program at NASA, the Committee drew on the following set of research questions:

  • How successful has the NASA SBIR program been in commercializing technologies supported by Phase I and Phase II awards (and what factors have contributed to or inhibited this level of commercialization)?
  • To what extent has the NASA SBIR program supported NASA’s mission (and what factors have contributed to or inhibited this level of support)?
  • To what extent has the NASA SBIR program stimulated innovation?
  • How well has the NASA SBIR program encouraged small firms and supported the growth and development of woman- and minority-owned businesses?
  • How effective has NASA’s management of the SBIR program been (and how might this management be improved)?

1.7.1. Surveys of NASA SBIR Award-recipient Companies

Original data gathered by the research team in support of the NRC study of the NASA SBIR program included a survey of NASA Phase II award-recipient firms; a survey of NASA Phase I award-recipient firms that did not also receive a Phase II award; a survey of NASA technical staff involved in the SBIR program; numerous interviews with NASA personnel directly involved in administering the SBIR program; the assessment and analysis of data provided by NASA’s SBIR staff; and company case studies.

BOX 1-2 A Moving Target: The Challenge of Assessing SBIR in a Restructuring NASA

As with other parts of NASA, the NASA SBIR program has experienced sequential waves of reorientation and restructuring. Mission objectives have changed very substantially, far more than at other SBIR agencies.

During NASA’s reorganization of 2003–2004, the agency’s SBIR program became a component of the Advanced Space Technology Program within the Exploration Systems Mission Directorate (ESMD), which is charged with implementing NASA’s planned exploration of Mars and other space exploration projects. In 2006, further reorganization led to a change in the balance of management power between the Mission Directorates and the centers, with the former assuming much more direct authority over SBIR topic and award selection.

Because of this churn, any assessment of program management at NASA must deal with a moving target. Extensive changes in management structures mean that data regarding past activities is of limited relevance in guiding current management.

The NRC Phase II Survey (Appendix B)

In Spring 2005, the National Research Council administered a survey of Phase II SBIR projects across agencies as part of its congressionally mandated evaluation of the SBIR program. The survey targeted a sample of Phase II awards made through 2001. A large majority of Phase II awards would have been completed by the 2005 survey date, and at least some commercialization efforts could have been initiated.

There may be some biases in these data. Projects from firms with multiple awards were underrepresented in the sample, because they could not be expected to complete a questionnaire for each of possibly numerous awards received; but they may have been overrepresented in the responses because they might be more committed to the SBIR program. Nearly 40 percent of respondents began Phase I efforts after 1998, partly because the number of Phase I awards increased, starting in the late 1990s, and partly because winners from more distant years are harder to reach, as small businesses regularly cease operations, staff with knowledge of SBIR awards leave, and institutional knowledge erodes.

The NRC Phase I Survey (Appendix C)

The Committee conducted a second recipient survey to determine the impact of Phase I awards that did not go on to Phase II. The original sample for this Phase I study was the 3,363 NASA Phase I awards from 1992–2001 inclusive. Valid responses were received from 303 NASA Phase I projects that did not advance to Phase II.

Survey of NASA Project Managers (Appendix D)

The technical project managers of individual SBIR projects can provide unique perspectives on the SBIR program. The project managers were surveyed electronically in three agencies—DoD, DoE, and NASA.

The NRC Project Manager Survey was based on Phase II projects awarded during the study period (1992–2001 inclusive). Project managers for these projects were identified with the help of the agencies. As expected, there was significant attrition (due to the absence of email addresses, the inability to identify the project manager, the project manager having left the agency or died, etc.). The three agencies were able to locate the names and email addresses of project managers for 2,584 projects. Of these, responses were received for 513 projects (a 20 percent response rate), of which 82 were for NASA projects (a 30 percent response rate). The number of individuals responding was smaller than the number of projects because some project managers had oversight of multiple projects. The NASA sample was based on projects since 1997 only.
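The response rates reported above can be reproduced from the counts given in the text (a minimal sketch; the implied NASA sample size is a back-calculation, not a figure stated in the report):

```python
# Response-rate arithmetic for the NRC Project Manager Survey
# (counts taken from the surrounding text).
projects_located = 2584    # projects with identified project managers
responses_total = 513      # project responses received overall
responses_nasa = 82        # responses covering NASA projects
nasa_rate = 0.30           # NASA response rate as reported

overall_rate = responses_total / projects_located
print(f"Overall response rate: {overall_rate:.0%}")  # Overall response rate: 20%

# Back-calculating the implied NASA sample (an inference from the
# reported rate, not a figure stated in the report):
implied_nasa_sample = round(responses_nasa / nasa_rate)
print(f"Implied NASA sample: ~{implied_nasa_sample} projects")
```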

1.7.2. Case Studies (Appendix E)

Case studies can provide valuable insights concerning the viewpoints and concerns of the small businesses that participate in SBIR, insights that cannot be derived from statistical analysis. While all of the companies selected for case study won SBIR awards from NASA, most won awards from other agencies as well. The interviews concerned their SBIR experience as a whole and were not limited to the NASA program.

Candidate case study firms were selected from four lists: top recipients of SBIR awards from NASA; NASA SBIR awardees who received R&D 100 awards; NASA-identified “success stories”; and firms with large commercial sales as reported to the NASA SBIR program. The selected case studies include firms from a variety of locations, with a range of founding dates, different numbers of SBIR awards received, and a variety of technological domains.

The case studies highlight the ways companies use the SBIR program: the extent to which SBIR is important to their company’s survival and growth; whether and how they intend to commercialize SBIR technology; whether and how the receipt of multiple awards influences their ability to commercialize; what challenges they have faced in the commercialization process; in what ways they see the SBIR program serving the needs of technology entrepreneurs; and how they believe the program can be improved. In addition, the cases provide insight into how NASA’s administration of the SBIR program affects its outcomes.

BOX 1-3 Three Company Profiles from the Case Studies

The case studies reported in Appendix E highlight the variety of technologies, businesses, and uses for SBIR awards. As the cases highlighted in this box show, they improve our understanding of how firms view the SBIR program in practice and what role it plays in meeting the diverse missions of the federal government.

Creare, Inc. This privately held engineering services company, located in Hanover, NH, was founded with a focus on engineering problem-solving. To date, Creare has spawned a dozen spin-offs that employ over 1,500 people in the Hanover region and generate revenues in excess of $250 million.

Creare specializes in solving agency-initiated problems. For example, when the Hubble Space Telescope failed due to an unexpectedly rapid depletion of the solid nitrogen used to cool it, Creare was able to solve this problem for NASA by drawing on its knowledge of cryogenic refrigeration technologies developed through SBIR-funded research.

Technology Management, Inc. The case study of this Cleveland firm illustrates the significance of SBIR as a source of early-stage funding. TMI used SBIR to support the basic and applied research necessary to prove its Solid Oxide Fuel Cell (SOFC) technology. The case also draws attention to the potential impact on the SBIR program of NASA’s new emphasis on spinning-in technologies from outside. By focusing on harvesting technologies with a higher readiness level for NASA’s near-term use, TMI’s CEO argues, spin-in erodes support for seeding technology development with a focus on long-term private-sector commercialization.

ARACOR. ARACOR’s mobile x-ray inspection system (Eagle) is now being used to inspect containers and trucks at the nation’s ports and borders for contraband. In less than 30 seconds, the Eagle can scan a densely loaded 20-foot container using full penetration and resolution. ARACOR has over $25 million in sales.

According to the firm’s founder, SBIR awards (78 Phase I and 42 Phase II awards from NSF, DoD, and NASA) played a very important role in developing the Eagle’s computed tomography (CT) technology. He pointed out that “SBIR is a brick, not a building.” A combination of SBIR awards was used to build the CT industrial inspection technology. ARACOR was purchased in 2004 by OSI Systems, Inc. (a NASDAQ company) and is now known as Rapiscan Systems High Energy Inspection Corporation.


This report sets out the Committee’s assessment of the SBIR program at the National Aeronautics and Space Administration. The Committee’s detailed findings and recommendations are presented in the next chapter. The Committee finds that the NASA SBIR program largely meets its legislative objectives and makes recommendations to improve program outcomes. Chapter 3 reviews awards made by NASA. Chapter 4 looks at the outcomes of the NASA SBIR program, including commercial sales and employment effects. Chapter 5 examines how the SBIR program at NASA is managed. Appendix A presents program data collected by NASA. Appendixes B and C provide the templates and results of the NRC Firm Survey and the NRC surveys of SBIR Phase I and Phase II projects. Appendix D provides the results of the survey of agency project managers. Appendix E presents illustrative case studies of firms participating in the NASA SBIR program. Finally, Appendix F provides a reference bibliography.



The SBIR legislation drew from a growing body of evidence, starting in the late 1970s and accelerating in the 1980s, which indicated that small businesses were assuming an increasingly important role in both innovation and job creation. This evidence gained new credibility with empirical analysis by Zoltan Acs and David Audretsch of the U.S. Small Business Innovation Data Base, which confirmed the increased importance of small firms in generating technological innovations and their growing contribution to the U.S. economy. See Zoltan Acs and David Audretsch, Innovation and Small Firms, Cambridge MA: MIT Press, 1990.


See Public Law 106–554, Appendix I—H.R. 5667—Section 108.


These are the 1982 Small Business Innovation Development Act, and the subsequent multiyear reauthorizations of the SBIR program in 1992 and 2000.


With the agreement of the Small Business Administration, which plays an oversight role for the program, this amount can be substantially higher in certain circumstances, e.g., drug development at NIH, and is often lower with smaller SBIR programs, e.g., EPA or the Department of Agriculture.


See Reid Cramer, “Patterns of Firm Participation in the Small Business Innovation Research Program in Southwestern and Mountain States,” in National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, Charles W. Wessner, ed., Washington, DC: National Academy Press, 2000.


See National Research Council, The Government Role in Civilian Technology: Building a New Alliance, Washington, DC: National Academy Press, 1992, p. 29.


For fiscal year 2005, this has resulted in a program budget of approximately $1.85 billion across all federal agencies, with the Department of Defense (DoD) having the largest SBIR program at $943 million, followed by the National Institutes of Health (NIH) at $562 million. The DoD SBIR program is made up of ten participating components: Army, Navy, Air Force, Missile Defense Agency (MDA), Defense Advanced Research Projects Agency (DARPA), Chemical Biological Defense (CBD), Special Operations Command (SOCOM), Defense Threat Reduction Agency (DTRA), National Imagery and Mapping Agency (NIMA), and the Office of the Secretary of Defense (OSD). NIH counts 23 separate institutes and agencies making SBIR awards, many with multiple programs.


See Robert Archibald and David Finifter, “Evaluation of the Department of Defense Small Business Innovation Research Program and the Fast Track Initiative: A Balanced Approach,” in National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, op. cit. pp. 211–250.


A GAO report had found that agencies had not adopted a uniform method for weighing commercial potential in SBIR applications. See U.S. General Accounting Office, Federal Research: Evaluations of Small Business Innovation Research Can Be Strengthened, GAO/RCED-99-114, Washington, DC: U.S. General Accounting Office, 1999.


The current assessment is congruent with the Government Performance and Results Act (GPRA) of 1993: <http://govinfo.library>. As characterized by the GAO, GPRA seeks to shift the focus of government decision making and accountability away from a preoccupation with the activities that are undertaken—such as grants dispensed or inspections made—to a focus on the results of those activities. See <http://www.items/gpra/gpra.htm>.


The SBIR methodology report is available on the Web. Access at <http://www7.nationalacademies.org/sbir/SBIR_Methodology_Report.pdf>.


The opening conference on October 24, 2002, examined the program’s diversity and assessment challenges. For a published report of this conference, see National Research Council, SBIR: Program Diversity and Assessment Challenges, Charles W. Wessner, ed., Washington, DC: The National Academies Press, 2004. A second conference, held on March 28, 2003, was titled “Identifying Best Practice.” The conference provided a forum for the SBIR Program Managers from each of the five agencies in the study’s purview to describe their administrative innovations and best practices. A conference on June 14, 2005, focused on the commercialization of SBIR-funded innovations at DoD and NASA. See National Research Council, SBIR and the Phase III Challenge of Commercialization, Charles W. Wessner, ed., Washington, DC: The National Academies Press, 2007. A final conference, held on April 7, 2006, examined the role of state programs in leveraging SBIR to advance local and regional economic growth.


This view was echoed by Duncan Moore: “Innovation does not follow a linear model. It stops and starts.” National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit.


While few hold this process of linear innovation to be literally true, the concept nonetheless survives—for example, in retrospective accounts of the path taken by a particular innovation.


See Donald E. Stokes, Pasteur’s Quadrant, Basic Science and Technological Innovation, Washington, DC: The Brookings Institution, 1997. Stokes’ analysis challenges the artificial separation between basic and applied research underpinning the myth of linear innovation.


Duncan Moore, “Turning Failure into Success,” in National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit., p. 94.


For a description of the challenges small businesses face in defense procurement, the subject of a June 14, 2005, NRC conference and one element of the congressionally requested assessment of SBIR, see National Research Council, SBIR and the Phase III Challenge of Commercialization, op. cit. Relatedly, see remarks by Kenneth Flamm on procurement barriers, including contracting overhead and small firm disadvantages in lobbying in National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit., pp. 63–67.


Gail Cassell, “Setting Realistic Expectations for Success,” Ibid., p. 86.


See John H. Cochrane, “The Risk and Return of Venture Capital,” Journal of Financial Economics, 75(1):3–52, 2005. Drawing on the VentureOne database, Cochrane plots a histogram of net venture capital returns on investments that “shows an extraordinary skewness of returns. Most returns are modest, but there is a long right tail of extraordinary good returns. Fifteen percent of the firms that go public or are acquired give a return greater than 1,000 percent! It is also interesting how many modest returns there are. About 15 percent of returns are less than 0, and 35 percent are less than 100 percent. An IPO or acquisition is not a guarantee of a huge return. In fact, the modal or “most probable” outcome is about a 25 percent return.” See also Paul A. Gompers and Josh Lerner, “Risk and Reward in Private Equity Investments: The Challenge of Performance Assessment,” Journal of Private Equity, 1 (Winter 1997):5–12. Steven D. Carden and Olive Darragh, “A Halo for Angel Investors,” The McKinsey Quarterly, 1, 2004, also show a similar skew in the distribution of returns for venture capital portfolios.


“Phase III funding” comprises contractual or other monies awarded to an SBIR project for federal agency use of the subject technology after expiration of an SBIR Phase II award.


Of course, some companies have made this transition successfully, but overall there are significant structural impediments to successful commercialization from NASA SBIR projects, as opposed, for example, to DoD, where there may be a huge potential market within the agency, or NIH, where the private-sector market for SBIR-funded technologies is also potentially enormous.

Copyright © 2009, National Academy of Sciences.
Bookshelf ID: NBK32518