
National Academy of Sciences (US), National Academy of Engineering (US), and Institute of Medicine (US) Committee on Science, Engineering, and Public Policy. Implementing the Government Performance and Results Act for Research: A Status Report. Washington (DC): National Academies Press (US); 2001.



In February 1999, the Committee on Science, Engineering, and Public Policy (COSEPUP) released a report titled Evaluating Federal Research Programs: Research and the Government Performance and Results Act (see Appendix E). The report recommended a set of criteria by which federal agencies might evaluate their programs of research in science and engineering. The criteria were intended to help agencies to respond to the Government Performance and Results Act (GPRA), enacted in 1993 (see Appendix F).

The National Academies were later asked by Congress to undertake another study, as part of the 1999 VA-HUD Independent Agencies Authorization Act, titled “Accountability of Federally Funded Research.” Because many of the issues raised by Congress were addressed by COSEPUP in the original study, the Academies worked with the White House Office of Science and Technology Policy (OSTP) as indicated in the legislation to craft a study that would be most useful to all involved.

In a letter dated April 6, 1999, Dr. Neal Lane, director of OSTP, asked the Academies to undertake a more in-depth study of the actual application of GPRA to research programs as the agencies were shortly to release their first performance reports under GPRA. The study plan was endorsed by the House Committee on Science and by Senators William Frist, John Rockefeller, Jeff Bingaman, and Joseph Lieberman who were cosponsors of the original legislation. The specific charge to the panel was as follows:

As requested by Congress and the White House Office of Science and Technology Policy, this study would assist federal agencies in crafting plans and reports that are responsive to the Government Performance and Results Act (GPRA), OMB Guidance, and agency missions. The study would undertake independent assessments via case studies of the strategic and performance plans federal agencies have developed and of the responsiveness of their performance reports (which are due in March 2000) to the Government Performance and Results Act.

The assessment would take into account the agencies' missions and how science and technology programs and human resource needs are factored into agency GPRA plans. In addition, the study would suggest specific applications of recommendations from COSEPUP's earlier report entitled “Evaluating Federal Research Programs: Research and the Government Performance and Results Act.” Finally, workshops would be conducted where the agencies could share best practices regarding their performance reports and stakeholders' views could be heard.

The Senators also requested that the Academies evaluate the extent to which independent merit-based evaluation achieves the goal of eliminating unsuccessful or unproductive programs and projects and to investigate and report on the validity of using quantitative performance goals for administrative management of these activities. COSEPUP decided not to pursue these analyses for the time being and to instead focus on the task above.

The National Academies formed the Panel on Research and the Government Performance and Results Act 2000 under the auspices of COSEPUP to respond to the request. This panel, which we chair, began its work by examining the GPRA performance reports each federal agency released in March of 2000. These performance reports provided the public with the first opportunity to see the implementation of GPRA.

In May, project staff, at the behest of panel members, met with staff at 11 federal agencies to gain a better understanding of the methodology each used for evaluating its research programs. At this stage, problems with the charge to the panel emerged from these discussions with agency staff and consultants and from the panel's review of the agency performance plans.

Specifically, at its initial meeting in June, the panel members determined it was not appropriate to indicate the degree to which a given agency's work was acceptable, nor was it possible to conduct an in-depth review of each agency's program activities as would have been required to conduct an independent assessment of strategic and performance plans.

In the first case, agencies were still in the experimental stage of evaluating research programs in response to GPRA. In the second, no single committee could mobilize the expertise necessary to conduct an in-depth review across the selected agencies, given the tremendous diversity of the research programs each supported.

In sum, the panel determined it was not possible to provide the “independent assessment” of each agency's strategic and performance plan anticipated by Dr. Lane. In the spirit of the OSTP request, the panel instead decided to focus on the general methods and approaches each agency undertook. It also intentionally decided not to make agency-specific analyses beyond the summary of each agency's approach presented in Appendix C.

Therefore, instead of attempting an investigation for which it was not equipped, the panel chose to take a “snapshot” of the current state of affairs of agencies' response to GPRA. After reviewing the process used by the 11 federal agencies, the panel in the end decided to select for review the five agencies that provide the most financial support for federal research programs. The five agencies selected were the National Science Foundation (NSF), National Institutes of Health (NIH), Department of Defense (DOD), Department of Energy (DOE), and National Aeronautics and Space Administration (NASA).

The panel then convened five focus groups—one on the process used by each agency—and a workshop to discuss overarching issues that affected all the agencies. Participants in the focus groups and the workshop included several panel members, members of agency scientific advisory groups, and staff from the agencies, Office of Management and Budget (OMB), General Accounting Office (GAO), and Congressional Research Service (CRS). Congressional committee staff were invited, but none attended. During each focus group, agencies were asked to respond to the following questions:

  • What methodology is used for evaluating research programs under GPRA?
  • What level of unit is the focus of the evaluation?
  • Who does the evaluation of the research program under GPRA?
  • What criteria are used for the evaluation?
  • How are the selection and evaluation of projects related to the evaluation of the research program?
  • How is the result communicated to different audiences (such as the S&T community, advisory committees, agency leadership, the administration, Congress)?
  • How is the result used in internal and external decision-making?

Their responses are summarized in Appendix C.

During the workshop, a number of overarching issues were discussed, including these:

  • Criteria for evaluation.
  • Aggregation of research programs for purposes of evaluation.
  • Usefulness of GPRA.
  • GPRA and the workload of agencies.
  • Issues of timing.
  • Verification and validation.

The results of the workshop are summarized in Appendix D.

The report itself should be considered a cross section or “snapshot” of agency responses to GPRA based on the agencies' own descriptions. We hope that the observations and recommendations presented here will be useful to other agencies in their efforts to implement GPRA and to oversight bodies in their efforts to supervise and facilitate the implementation. We believe, on the basis of first-hand observation, that the interactions during the focus groups and workshop were useful to all participants.

In the end, this panel does not attempt to recommend a single strategy to be used by all federal agencies in developing their plans to respond to GPRA. Instead, the panel, as requested by OSTP, has worked with individual agencies to focus on observations that could facilitate their responses to GPRA. Ideally, these lessons can be discussed and extended by all agencies and their oversight bodies to begin assembling agency-appropriate, broadly helpful strategies for GPRA compliance beyond those in COSEPUP's original report.

Enriqueta Bond

Alan Schriesheim

Panel Cochairs

Copyright © 2001, National Academy of Sciences.
Bookshelf ID: NBK44110

