NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

National Research Council (US) Committee on Assessing Behavioral and Social Science Research on Aging; Feller I, Stern PC, editors. A Strategy for Assessing Science: Behavioral and Social Research on Aging. Washington (DC): National Academies Press (US); 2007.


4 Progress in Science

This chapter examines theories and empirical findings on the overlapping topics of progress in science and the factors that contribute to scientific discoveries. It also considers the implications of these findings for behavioral and social science research on aging. The chapter first draws on contributions from the history and sociology of science to consider the nature of scientific progress and the paths that lead to realizing the potential scientific and societal outcomes of scientific activity. It considers indicators that might be used to assess progress toward these outcomes. The chapter then examines factors that contribute to scientific discovery, drawing eclectically on the history and sociology of science as well as on theories and findings from organizational behavior, policy analysis, and economics.


The history and sociology of science have produced extensive bodies of scholarship on some of these themes, generating in the process significant ongoing disagreements among scholars (see, e.g., Krige, 1980; Cole, 1992; Rule, 1997; Bowler and Morus, 2005). Most of this work focuses on processes and historical events in the physical and life sciences; relatively little of it addresses the social and behavioral sciences (or engineering, for that matter), except possibly subfields of psychology (e.g., Stigler, 1999). It is legitimate to ask whether this research even applies to the behavioral and social sciences (Smelser, 2005).1

We do not attempt an encyclopedic coverage nor a resolution of the debates, past and continuing, on such questions. Rather, we draw on this research to make more explicit the main issues underlying the tasks of prospective assessment of scientific fields for the purpose of setting priorities in federal research agencies, given the uncertain outcomes of research.

The history of science has produced several general theories about how science develops and evolves over long periods of time. A 19th century view is that of Auguste Comte, who argued that there is a hierarchy of the sciences, from the most general (astronomy), followed historically and in other ways by physics, chemistry, biology, and sociology. Sciences atop the hierarchy are characterized as having more highly developed theories; greater use of mathematical language to express ideas; higher levels of consensus on theory, methods, and the significance of problems and contributions to the field; more use of theory to make verifiable predictions; faster obsolescence of research, to which citations drop off rapidly over time; and relatively fast progress. Sciences at the bottom of the hierarchy are said to exhibit the opposite characteristics (Cole, 1983).

Many adherents to this hierarchical view place the natural sciences toward the top of the hierarchy and the social sciences toward the bottom.2 In this view, advances in the “higher” sciences, conceived in terms of findings, concepts, methodologies, or technologies that are thought to be fundamental, are held to flow down to the “lower” sciences, while the reverse flow rarely occurs. Although evidence of such a unidirectional flow from donor to borrower disciplines does exist (Losee, 1995), there are counterexamples. Historians and sociologists of science have offered evidence against several of these propositions, and particularly dispute the claimed association of natural science with the top of the hierarchy and social science with the bottom (e.g., Bourdieu, 1988; Cetina, 1999; Steinmetz, 2005). The picture is more complex, as noted below.

By far the best known modern theory of scientific progress is that of Thomas Kuhn (1962), which focuses on the major innovations that have punctuated the history of science in the past 350 years, associated with such investigators as Copernicus, Galileo, Lavoisier, Darwin, and Einstein. Science, in Kuhn’s view, is usually a problem-solving activity within clear and accepted frameworks of theory and practice, or “paradigms.” Revolutions occur when disparities or anomalies arise between theoretical expectation and research findings that can be resolved only by changing fundamental rules of practice. These changes occur suddenly, Kuhn claims, in a process akin to Gestalt shifts: in a relative instant, the perceived relationships among the parts of a picture shift, and the whole takes on a new meaning. Canonical examples include the Copernican idea that the Earth revolves around the Sun, Darwin’s evolutionary theory, relativity in physics, and the helical model of DNA.

A quite different account is that of John Desmond Bernal (1939). Inspired by Marxist social science and ideals of planned social progress, Bernal saw basic science progressing most vigorously when it was harnessed to practical efforts to serve humanity’s social and economic needs (material well-being, public health, social justice). Whereas in Kuhn’s view science progressed according to its inner logic, Bernal asserted that intellectual and practical advances could be engineered and managed.

Another tradition of thought, stemming from Derek Price’s (1963) vision of a quantitative “science of science,” has focused less on how innovations arise than on how they spread and how their full potential is exploited by small armies of scientists. Mainly pursued by sociologists of science, this line of analysis has focused on the social structure of research communities (e.g., Hagstrom, 1965), competition and cooperation in institutional systems (Merton, 1965; Ben-David, 1971), and structured communication in schools of research or “invisible colleges” (e.g., Crane, 1972). These efforts, while focused mainly on how science works, may imply principles for stimulating scientific progress and innovation.

There are also evolutionary models of scientific development, such as that of the philosopher David Hull (1988). Extending Darwin’s account of evolution by variation and selection, Hull argues that scientific concepts evolve in the same way, by social or communal selection of the diverse work of individual scientists. In evolutionary views, science continually produces new ideas, which, like genetic mutations, are essentially unpredictable. Their ability to survive and expand their niches depends on environmental factors.

Bruno Latour and Steve Woolgar (1979) also offer an account of a selective struggle for viability among scientific producers. The vast majority of scientific papers quickly disappear into the maw of the scientific literature. The few that are used by other scientists in their work are the ones that determine the general direction of science progress. In evolutionary and competitive models, a possible function of science managers is to shape the environment that selects for ideas so as to propagate research that is judged to promote the agency’s scientific and societal goals.

Stephen Cole (1992) emphasized a distinction between the frontier and the core of science that seems consistent with an evolutionary view. Work at the frontiers of sciences is characterized by considerable disagreement; as science progresses over time, disagreement decreases as processes such as empirical confirmation and paradigm shift select out certain ideas, while others become part of the received wisdom.

Although the view that different sciences have similar features at their respective frontiers is not unchallenged (Hicks, 2004), we have found the idea of frontier and core science to be useful in examining the extent to which insights from the history and sociology of science, fields that have concentrated their attention predominantly on the natural sciences, also apply to the social and behavioral sciences.

Cole (1983, 1992) reports considerable evidence to suggest that different fields of science have similar features at the frontier, even if they are very different at the core. Reviewing the evaluation of research proposals and journal submissions, an activity at the frontier of knowledge, he concludes that consensus about the quality of research is not systematically higher in the natural sciences than in the social sciences, citing the standard deviations of reviewers’ ratings of proposals to the National Science Foundation, which were twice as large in meteorology as in economics.

In the core, represented by undergraduate textbooks, the situation appears to be quite different. Cole (1983) found that in textbooks published in the 1970s, the median publication date of the references cited in both physics and chemistry was before 1900, while the median publication date in sociology was post-1960. Sociology texts cited an average of about 800 references, while chemistry and physics texts generally cited only about 100. Moreover, a comparison of texts from the 1950s and the 1970s indicated that the material covered, as well as the sources cited, were much the same in both periods in physics and chemistry, whereas in sociology, the newer texts cited only a small proportion of the sources cited in the earlier texts.

Cole interpreted these findings as indicating that core knowledge in physics and chemistry was both more consensual and more stable over time than core knowledge in sociology. Such findings suggest that even though sciences may differ greatly at the core, for the purpose of assessing the progress of science at the frontiers of research fields, insights from the study of the natural sciences are likely to apply to the social sciences as well. They also point to the need to differentiate between “vitality,” as indicated by ferment at the frontier, and scientific progress, as indicated by movement of knowledge from the frontier to the core.3 These findings suggest that the policy challenges for research managers making prospective judgments at the frontiers of research fields are quite similar across the sciences.


Scientific progress can be of various types—discoveries of phenomena, theoretical explanations or syntheses, tests of theories or hypotheses, acceptance or rejection of hypotheses or theories by the relevant scientific communities, development of new measurement or analytic techniques, application of general theory to specific theoretical or practical problems, development of technologies or useful interventions to improve human health and well-being from scientific efforts, and so forth. Consequently, many different developments might be taken as indicators, or measures, of progress in science.

Science policy decision makers need to consider the progress and potential of scientific fields in multiple dimensions, accepting that the absence of detectable advance on a particular dimension is not necessarily evidence of failure or poor performance. Drawing on Weinberg’s (1963) classification of internal and external criteria for formulating scientific choices, we make the practical distinction between internally defined types of scientific progress, that is, elements of progress defined by intellectual criteria, and externally defined types of progress, defined in terms of the contributions of science to society. Managers of public investments in science need to be concerned with both.

Scientific Progress Internally Defined

The literatures in the history of science and in science studies include various analyses and typologies of scientific and theoretical progress (e.g., Rule, 1997; Camic and Gross, 1998; Lamont, 2004). This section presents a distillation of insights from this research into a short checklist of major types of scientific progress. The list is intended to remind participants in science policy decisions who assess the progress of scientific fields of the variety of kinds of progress science can make. These broad categories overlap and are interdependent, with each kind of progress having the potential to influence the others, directly or indirectly; the list is intended to simplify a very complex phenomenon to a manageable level.

Types of Scientific Progress

Discovery. Science makes progress when it demonstrates the existence of previously unknown phenomena or relationships among phenomena, or when it discovers that widely shared understandings of phenomena are wrong or incomplete.

Analysis. Science makes progress when it develops concepts, typologies, frameworks of understanding, methods, techniques, or data that make it possible to uncover phenomena or test explanations of them. Thus, knowing where and how to look for discoveries and explanations is an important type of scientific progress. Improved theory, rigorous and replicable methods, measurement techniques, and databases all contribute to analysis.

Explanation. Science makes progress when it discovers regularities in the ways phenomena change over time or finds evidence that supports, rules out, or leads to qualifications of possible explanations of these regularities.

Integration. Science makes progress when it links theories or explanations across different domains or levels of organization. Thus, science progresses when it produces and provides support for theories and explanations that cover broader classes of phenomena or that link understandings emerging from different fields of research or levels of analysis.

Development. Science makes progress when it stimulates additional research in a field or discipline, including research critical of past conclusions, and when it stimulates research outside the original field, including interdisciplinary research and research on previously underresearched questions. It also develops when it attracts new people to work on an important research problem.

Recent scientific activities supported by the Behavioral and Social Research (BSR) Program of the National Institute on Aging (NIA) have yielded progress in the form of scientific advances of most of the above types. We cite only a few examples.

  • Discovery: The improving health of elderly populations. An example is analyses of data from Sweden, which has the longest-running national data set on longevity, that have shown that the maximum human life span has been increasing since the 1860s, that the rate of increase has accelerated since 1969, and that most of the change is due to improved probabilities of survival of individuals past age 70 (Wilmoth et al., 2000). Parallel declines have been discovered among the elderly in physical disability, which fell in the United States from 26 percent of the elderly population in 1982 to 20 percent in 1999 (e.g., Manton and Gu, 2001), and in cognitive impairment (e.g., Freedman et al., 2001, 2002). Such findings together suggest overall improvements in the health of elderly populations in high-income countries.
  • Analysis: Longitudinal datasets for understanding processes of aging. The Health and Retirement Study (Juster and Suzman, 1995), a major ongoing longitudinal study that assesses the health and socioeconomic condition of aging Americans in which BSR played a central entrepreneurial role, has provided data that made possible, among other things, some of the discoveries about declining disability already noted. International comparative data sets on health risk factors and health outcomes, such as the Global Burden of Disease dataset (Ezzati et al., 2002), have also made significant scientific progress possible.
  • Explanation: Questioning and refining understandings. Several BSR-funded research programs have yielded findings that called into question widely held views about aging processes. Examples include findings that question the beliefs that more health care spending leads to better health outcomes (Fisher et al., 2003a, 2003b), that increasing life expectancy implies increased health care expenditures (Lubitz et al., 2003), that unequal access to health care is the main explanation for higher mortality rates among older people of lower socioeconomic status (e.g., Adda et al., 2003; Adams et al., 2003), and that aging is a purely biological process unaffected by personal or cultural beliefs (Levy, 2003). Other BSR-sponsored research has provided evidence that a previously noted association of depression with heart disease may be explained in part by a process in which negative affect suppresses immune responses (Rosenkranz et al., 2003).
  • Integration and development: Creating a biodemography of aging. BSR supported and brought together “demographers, evolutionary theorists, genetic epidemiologists, anthropologists, and biologists from many different scientific taxa” (National Research Council, 1997:v) to seek coherent understandings of human longevity that are consistent with knowledge at levels from genes to populations and data from human and nonhuman species. This effort has helped to attract researchers from other fields into longevity studies, add vigor to this research field, and put the field on a broader and firmer interdisciplinary base of knowledge.

Paths to Scientific Progress

Scientific progress is widely recognized as nonlinear. Some new ideas have led to rapid revolutions, while other productive ideas have had lengthy gestation periods or met protracted resistance. Still other new ideas have achieved overly rapid, faddish acceptance followed by quick dismissal. An earlier generation of research in the history and sociology of science documented variety and surprise as characteristics of scientific progress, but it was not followed by broad transdisciplinary studies that developed and tested general theories of scientific progress.

No theory of scientific progress exists, or is on the horizon, that allows prediction of the future development of new scientific ideas or specifies how the different types of scientific progress influence each other—although they clearly are interdependent. Rather, recent studies by historians of science and practicing scientists typically emphasize the uncertainty surrounding which of a series of findings emerging at any point in time will be determinative of the most productive path for future scientific inquiries and indeed of the ways in which these findings will be used. Only in hindsight does the development of various experimental claims and theoretical generalizations appear to have the coherence that creates a sense of a linear, inexorable path.

Science policy seems to be in particular need of improved basic understanding of the apparently uncertain paths of scientific progress as a basis for making wiser, more efficient investments. Without this improved understanding, extensive investments into collecting and analyzing data on scientific outputs are unlikely to provide valid predictors of some of the most important kinds of scientific progress. Political and bureaucratic pressures to plan for steady progress and to assess it with reliable and valid performance indicators will not eliminate the gaps in basic knowledge that must be filled in order to develop such indicators.

Despite the incompleteness of knowledge, the findings of earlier research remain a suggestive and potentially useful resource for practical research managers. They suggest a variety of state-of-knowledge propositions that are consistent with our collective experience on multiple advisory and review panels across several federal science agencies. We consider the following propositions worthy of consideration in discussions of how science managers can best promote scientific progress:

  • Scientific discoveries are initially the achievements of individuals or small groups and arise in varied and largely unpredictable ways: the larger and more important the discoveries, the less predictable they would have been.
  • The great majority of scientific products have limited impact on their fields; there are only a few major or seminal outputs. Whether or not new scientific ideas or methods become productive research traditions depends on an uncertain process that may extend over considerable time. Sometimes the impacts of research are quite different from those anticipated by the initial research sponsors, the researchers, or the individuals or organizations that first make use of it. For example, the Internet, which was developed as a means of fostering scientific communication among geographically dispersed researchers, has now become a leading channel for entertainment and retail business, among other things.
  • Existing procedures for allocating federal research funds are most effective at the mid-level of scientific innovation, where there is consensus among established fields about the importance of questions and the direction and content of emerging questions in those fields.
  • The uncertainties of scientific discovery and the difficulties of accurately identifying turning points and sharp departures in scientific inquiry suggest that research managers will do best with a varied portfolio of projects, including both mainstream and discontinuous or exploratory research projects. These uncertainties also suggest that assessment of a program’s investments in research is most appropriately made at the portfolio rather than the project level.
  • The portfolio concept also applies to a program’s investments in analysis: in advancing the state of theoretical understanding, tools, and databases. Scientific progress in both the natural and social sciences may either follow or precede the development of new tools (instruments, models, algorithms, databases) that apply to many problems. Contrary to simple models of scientific progress that have theory building as the grounding for empirical research or data collection as the foundation for theory building, the process is not linear or unidirectional.4 Program investments in theory building, tool development, and data collection can all contribute to scientific progress, but it is very difficult to predict which kinds of investments will be most productive at any given time (see National Research Council, 1986, 1988; Smelser, 1986).
  • Scientific progress sometimes arises from efforts to solve technological or social problems in environments that combine concerns with basic research and with application. It can also arise in environments insulated from practical concerns. And progress can involve first one kind of setting and then the other (see Stokes, 1997).

Interdisciplinarity and Scientific Progress

The claim that the frontiers of science are generally located at the interstices between and intersections among disciplines deserves explicit attention because it is increasingly found in the conclusions and recommendations of national commissions and NRC committees (e.g., National Research Council, 2000b; Committee on Science, Engineering, and Public Policy, 2004) and in statements by national science leaders.5 Scholarship in the history and sociology of science is consistent with competing views on this claim. A considerable body of recent scholarship has noted that exciting developments often come at the edges of established research fields and at the boundaries between fields (Dogan and Pahre, 1990; Galison, 1999; Boix-Mansilla and Gardner, 2003; National Research Council, 2005b). Moreover, interdisciplinary thinking has become more integral to many areas of research because of the need to understand “the inherent complexity of nature and society” and “to solve societal problems” (National Research Council, 2005b:2).

The idea is that scientific advances are most likely to arise, or are most easily promoted, when scientists from different disciplines are brought together and encouraged to free themselves from disciplinary constraints. A good example to support this idea is the rapid expansion and provocative results of research on the biodemography of aging that followed the 1996 NRC workshop on this topic (National Research Council, 1997). The workshop occasioned serious efforts to develop and integrate related research fields.

To the extent that interdisciplinarity is important to scientific progress and for gaining the potential societal benefits of science, it is important for research managers to create favorable conditions for interdisciplinary contact and collaboration. In fact, for some time BSR has been seeking explicitly to promote both multidisciplinarity and interdisciplinarity (Suzman, 2004). For example, when the Health and Retirement Study was started in 1990, it was explicitly designed to be useful to economists, demographers, epidemiologists, and psychologists, and explicit efforts were made to convince those research communities that the study was not for economists only. BSR has reorganized itself and redefined its areas of interest on issue-oriented, interdisciplinary lines; sought out leading researchers and funded them to do what was expected to be ground-breaking and highly visible research in interdisciplinary fields; supported workshops and studies to define new interdisciplinary fields (e.g., National Research Council, 1997, 2000a, 2001c); created broadly based multidisciplinary panels to review proposals in emerging interdisciplinary areas; and funded databases designed to be useful to researchers in multiple disciplines for addressing the same problems, thus creating pressure for communication across disciplines. Some of the results, such as those already mentioned, have been notably productive and potentially useful.

The available studies seem to support the following conclusions about the favorable conditions for interdisciplinary science (Klein, 1996; Rhoten, 2003; National Research Council, 2005b):

  • Successful interdisciplinary research requires both disciplinary depth and breadth of interests, visions, and skills, integrated within research groups.
  • The success of interdisciplinary research groups depends on institutional commitment and research leadership with clear vision and teambuilding skills.
  • Interdisciplinary research requires communication among people from different backgrounds. This may take extra time and require special efforts by researchers to learn the languages of other fields and by team leaders to make sure that all participants both contribute and benefit.
  • New modes of organization, new methods of recruitment, and modified reward structures may be necessary in universities and other research organizations to facilitate interdisciplinary interactions.
  • Both problem-oriented organization of research organizations and the ability to reorganize as problems change facilitate interdisciplinary research.
  • Funding organizations may need to design their proposal and review criteria to encourage interdisciplinary activities.

Several conditions favorable to interdisciplinary collaboration can be affected by the actions of funders of research. For example, science agencies can encourage or require interdisciplinary collaboration in the research they support, support activities that specifically bring researchers together from different disciplines to address a problem of common interest, provide additional funds or time to allow for the development of effective interdisciplinary communication in research groups or communities, and organize their programs internally and externally around interdisciplinary themes. They can ask review panels to consider how well groups and organizations that propose interdisciplinary research provide conditions, such as those above, that are commonly associated with successful interdisciplinary research. And they might also ensure that groups reviewing interdisciplinary proposals include individuals who have successfully led or participated in interdisciplinary projects.

Encouraging interdisciplinary research may have pitfalls, though. It is possible for funds to be offered but for researchers to fail to propose the kinds of interdisciplinary projects that were hoped for. Sometimes interdisciplinary efforts take hold, but they fail to produce important scientific advances or societal benefits. Interdisciplinarity can also become a mantra. If disciplines are at times presented as silos—independent units with no connections among them—interdisciplinary fields may also become silos that happen to straddle two fields. At any point in time, an observer can identify numerous new research trajectories, several involving novel combinations of existing disciplines. Thus, alongside recently institutionalized fields, such as biotechnology, materials science, information sciences, and cognitive (neuro)sciences, are claimants for scientific attention and programmatic support, such as vulnerability sciences, prevention science, and neuroeconomics.

Little is known about how to predict whether a new interdisciplinary field will take off in a productive way. Floral metaphors about budding fields are not always carried to the desired conclusion: many budding fields lack the intellectual or methodological germplasm to do more than pop up and quickly wither. It is at least as difficult to assess the prospects of interdisciplinary fields as of disciplinary ones, and probably more so (Boix-Mansilla and Gardner, 2003; National Research Council, 2005b).6

Federal agency science managers can act as entrepreneurs of interdisciplinary fields, so that their expansion from an interest of a small number of researchers into a recognizable cluster of activity may reflect the level of external support from federal agencies and foundations. As a field develops, though, a good indicator of vitality may be the exchange of ideas with other fields and particularly the export of ideas from the new field to other scientific fields or to practical use. But progress in interdisciplinary fields may be hard to determine from recourse to such indicators alone. Fields can be vital without exporting ideas to other fields. Policy analysis, now a well-established academic field of instruction and research, engages researchers from several social science disciplines, but it is a net importer of ideas (MacRae and Feller, 1998; Reuter and Smith-Ready, 2002).

It is worth noting that support for interdisciplinary research, although it has unique benefits, may be a relatively high-risk proposition because it requires high-level leadership skills and innovative organizational structures. These characteristics of interdisciplinary research may pose special challenges for research managers in times of tightening budgets, when pressures for risk aversion may conflict with the need to develop innovative approaches to scientific questions and societal needs.

Contributions of Science to Society

In government agencies with practical missions, investments in science are appropriately judged both on internal scientific grounds and on the basis of their contributions to societal objectives. In the case of NIA, these objectives largely concern the improved longevity, health, and well-being of older people (National Institute on Aging, 2001). There are many ways research can contribute to these objectives. For simplicity, we group the societal objectives of science into four broad categories.

Identifying issues. Science can contribute to society by identifying problems relating to the health and well-being of older people that require societal action or sometimes showing that a problem is less serious than previously believed.

Finding solutions. Science can contribute to society by developing ways to address issues or solve problems, for example, by improving prevention or treatment of diseases, improving health care delivery systems, improving access to health care, or developing new products or services that contribute to the longevity, health, or quality of life for older people in America.

Informing choices. Science can contribute to society by providing accurate and compelling information to public officials, health care professionals, and the public and thus promoting better informed choices about life and health by older people and better informed policy decisions affecting them.

Educating the society. Science can contribute to society by producing fundamental knowledge and developing frameworks of understanding that are useful for people facing their own aging and the aging of family members, making decisions in the private sector, and participating as citizens in public policy decisions. Science can also contribute by educating the next generation of scientists.

Research on science utilization, a field that was most vital in the 1970s and has seen some revival recently, has examined the ways in which scientific results, particularly social science results, may be used, especially in government decisions (for recent reviews, see Landry et al., 2003, and Romsdahl, 2005; for some classic treatments, see Caplan, 1976; Weiss, 1977, 1979; Lindblom and Cohen, 1979). In terms of the above typology, this research mainly examines the use or nonuse of research results for informing choices by public policy actors. It does not much address the use of results by ordinary citizens, medical practitioners, the mass media, or other users involved in identifying issues and finding solutions, other than policy solutions. The most general classification in this research tradition distinguishes the use of social science for enlightenment (i.e., providing a broad conceptual base for decisions) from its use as instrumental input (e.g., providing specific policy-relevant data). In addition, researchers note that social science results may be used to provide justification or legitimation for decisions already reached or as a justification for postponing decisions (Weiss, 1979; Oh, 1996; Romsdahl, 2005).

Federal science program managers face the challenges of establishing causal linkages between past research program activities and societal impacts and of projecting societal impacts from current and planned research activities. The challenges are substantial. Even when findings from social and behavioral science research influence policies and practices in the public and private sectors and may therefore be presumed to contribute to human well-being, they are seldom determinative. Indicators exist or could be created for many societal impacts of research (Cozzens et al., 2002; Bozeman and Sarewitz, 2005). In addition, evidence that the results of research are used, for example, in government decisions, may be considered an interim indicator of ultimate societal benefit, presuming that the decisions promote people’s well-being.

Limits exist, however, to the ability of a mission agency to translate findings from the research it funds into practice. For the research findings of the National Institutes of Health (NIH) in general and NIA-BSR in particular, contributions to societal or individual well-being require the complementary actions of myriad other actors and organizations in government and the private sector, including state and local governments, insurance companies, nursing homes, physicians’ practices, and individuals. According to Balas and Boren (2000:66), “studies suggest that it takes an average of 17 years for research evidence to reach clinical practice.” Similarly lengthy processes and circuitous connections link research findings to more enlightened or informed policy making (Lynn, 1978).

A scientific development also may contribute to society in the above ways even if working scientists do not judge it to be a significant contribution on scientific grounds. For example, surveys sponsored by BSR produce data, such as on declining rates of disability among older people, that may be very useful for health care planning without, by themselves, contributing anything more to science than a phenomenon to be explained. Thus, it is appropriate for assessments of research progress to consider scientific and societal criteria separately. Scientific activities and outputs may contribute to either of these two kinds of desirable outcomes or to both.

Interpreting Scientific Progress

The extent to which particular scientific results constitute progress in knowledge or contribute to societal well-being is often contested. This is especially the case when scientific findings are uncertain or controversial and when they can be interpreted to support controversial policy choices. Many results in applied behavioral and social science have these characteristics. Disagreements arise over which research questions are important enough to deserve support (that is, over which issues constitute significant social problems), over whether a finding resolves a scientific dispute or has unambiguous policy implications, and over many other aspects of the significance of scientific outputs. The more controversial the underlying social issues, the further such disagreements are likely to penetrate into the details of scientific method. Interested parties may use their best rhetorical tools to “frame” science policy issues and may even attempt to exercise power by influencing legislative or administrative decision makers to support or curtail particular lines of research.

These aspects of the social context of science are relevant for the measurement and assessment of scientific progress and its societal impact. They underline the recognition that the meaning of assessments of scientific progress may not follow in any straightforward way from the evidence the assessments produce. Assessing science, no matter how rigorous the methods that may be used, is ultimately a matter of interpretation. The possibility of competing interpretations of evidence is ever-present when using science indicators or applying any other analytic method for measuring the progress and impact of science. In Chapter 5, we discuss a strategy for assessing science that recognizes this social context while also seeking an appropriate role for indicators and other analytic approaches.


Research managers understandably want early indicators of scientific progress to inform decisions that must be made before the above types of substantive progress can be definitively shown. Although scientific progress is sometimes demonstrable very quickly, recent histories of science, as noted above, tend to emphasize not only the length of time required for research findings to generate a new consensus but also the uncertainties at the time of discovery regarding what precisely constitutes the nature of the discovery. Time lag and impact may depend on various factors, including the type of research and publication and citation practices in the field. A longitudinal research project can be expected to take longer to yield demonstrable progress than a more conceptual project.

Research Vitality and Scientific Progress

Expressions of scientific interest and intellectual excitement, sometimes referred to as the vitality of a research field, have been suggested as a useful source of early indicators of scientific progress as defined from an internal perspective. Such indications of the development of science are of particular interest to science managers because many of them might potentially be converted into numerical indicators. They include the following:

  • Established scientists begin to work in a new field.
  • Students are increasingly attracted to a field, as indicated by enrollments in new courses and programs in the field.
  • Highly promising junior scientists choose to pursue new concepts, methods, or lines of inquiry.
  • The rate of publications in a field increases.
  • Citations to publications in the field increase both in number and range across other scientific fields.
  • Publications in the new field appear in prominent journals.
  • New journals or societies appear.
  • Ideas from a field are adopted in other fields.
  • Researchers from different preexisting fields collaborate to work on a common set of problems.

Research on the nanoscale illustrates vitality by such indicators and is beginning to have an impact on society and the economy. Zucker and Darby (2005:9) point to the rate of increase in publishing and patenting in nanotechnology since 1986 as being of approximately the same order of magnitude as the “remarkable increase in publishing and patenting that occurred during the first twenty years of the biotechnology revolution…. Since 1990 the growth in nano S&T articles has been remarkable, and now exceeds 2.5 percent of all science and engineering articles.” Major scientific advances are often marked by flurries of research activity, and many observers expect that such indications of research vitality presage major progress in science and applications.
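Several of the vitality indicators listed above, such as the rate of growth in publications and a field’s share of all science and engineering articles (the measure behind the 2.5 percent figure cited for nanotechnology), can be computed from simple publication counts. The following is a minimal sketch; the counts, field names, and function names are hypothetical, not data or methods from this report:

```python
# Hypothetical annual publication counts for an emerging field and for all
# science & engineering articles (illustrative numbers, not real data).
field_pubs = {2000: 120, 2001: 180, 2002: 290, 2003: 450}
all_pubs = {2000: 90_000, 2001: 92_000, 2002: 95_000, 2003: 97_000}

def growth_rates(counts):
    """Year-over-year growth rate of publication counts."""
    years = sorted(counts)
    return {y: (counts[y] - counts[p]) / counts[p]
            for p, y in zip(years, years[1:])}

def field_share(field, total):
    """The field's share of all articles, per year (a rough vitality indicator)."""
    return {y: field[y] / total[y] for y in field}

rates = growth_rates(field_pubs)     # e.g., 50% growth from 2000 to 2001
shares = field_share(field_pubs, all_pubs)
```

As the chapter cautions, such counts measure activity, not progress; they require interpretation before being used prospectively.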

However, research vitality does not necessarily imply future scientific progress. For example, research on cold fusion was vital for a time, yet most scientists now believe it led to no progress at all. In the social sciences, many fields have shown great vitality for a period of time, as indicated by numbers of research papers and citations to the central works, only to decline rapidly in subsequent periods. Rule (1997), in his study of progress in social science, discusses several examples from sociology, including the grand social theory of Talcott Parsons (1937, 1964), ethnomethodology (e.g., Garfinkel, 1967), and interaction process analysis (e.g., Bales, 1950). Although these fields were vital for a time, in longer retrospect many observers considered them to have been far less important to scientific progress than they had earlier appeared to be. Rule suggests several possible interpretations of this kind of historical trajectory: the fields that looked vital were in fact intellectual dead ends; research in the fields did make important contributions, but these were so thoroughly integrated into thinking in the field that they became common knowledge and were no longer commonly cited; or the fields represented short-term intellectual tastes that lost currency with a shift in theoretical concerns. With enough hindsight, it may be possible to decide which interpretation is most correct, although disagreements remain in many specific cases. But the resource allocation challenge for a research manager, given multiple alternative fields whose aggregate claims for support exceed his or her program budget, is to interpret research vitality prospectively: that is, to project whether a field will be judged in hindsight to have produced valuable contributions or to have been no more than a fad or an intellectual dead end.

Another trajectory of research is problematic for research managers who would use vitality as an indicator of future potential. Some research findings or fields lie dormant for considerable periods without showing signs of vitality, before the seminal contributions gain recognition as major scientific advances. Such findings have been labeled “premature discoveries” (Hook, 2002) and “sleeping beauties” (van Raan, 2004b). These are not findings that are resisted or rejected; rather, they are unappreciated, or their uses or implications are not initially recognized (Stent, 2002). In effect, the contribution of such discoveries to scientific progress or societal needs or both lies dormant until some combination of independent discoveries reveals the potency of the initial discovery. In such cases, vitality indicators focused predominantly on the discovery and its related line of research would have been misleading as predictors of long-term scientific importance.

An instructive example of the limitations of vitality measures as early indicators in the social sciences is the intellectual history of John Nash’s approach to game theory—an approach that was recognized, applied, and then dismissed as having limited utility, only to reemerge as a major construct (the Nash equilibrium), not only in the social and behavioral sciences but also in the natural sciences. As recounted by Nasar (1998), the years following Nash’s seminal work at RAND in the early 1950s were a period of flagging interest in game theory. Luce and Raiffa’s authoritative overview of the field in 1957 observed: “We have the historical fact that many social scientists have become disillusioned with game theory. Initially there was a naïve band-wagon feeling that game theory solved innumerable problems of sociology and economics, or that, at least it made their solution a practical matter of a few years’ work. This has not turned out to be the case” (quoted in Nasar, 1998:122). In later retrospect, game theory became widely influential in the social and natural sciences, and Nash was awarded the Nobel Memorial Prize in Economics in 1994.

The complexity of the relationship between the quantity of scientific activity being undertaken during a specific period and the pace of scientific progress (or the rate at which significant discoveries are made) can perhaps be illustrated by analogy to a bicycle race: a group of researchers, analogous to the peloton or pack in a bicycle race, proceeds along together over an extended period until a single individual or a small group attempts a breakaway to win the race. Some breakaways succeed and some fail, but because of the difficulties of making progress by working alone (wind resistance, in the bicycle race analogy), individuals need the cooperation of a group to make progress over the long run and to create the conditions for racing breakaways or scientific breakthroughs. When scientific progress follows this model, fairly intense activity is a necessary but not sufficient condition for progress. Alternatively, the pack may remain closely clustered together for extended periods of time, advancing apace yet with a sense that little progress toward victory, however specified, is being made (Horan, 1996).

In our judgment, these various trajectories of scientific progress imply that quantitative indicators, such as citation counts, require interpretation if they are to be used as part of the prospective assessment of fields. Moreover, the implications of intensified activity in a research area may be quite different depending on the mission goals and the perspective of the agency funding the work. Significant research investments can create activity in a field by encouraging research and supporting communication among communities of researchers. But activity need not imply progress, at least not in terms of some of the indicators listed above, such as the export of ideas to other fields. If research managers conflate the concepts of scientific activity and progress, they can create self-fulfilling prophecies by simply creating scientific activity. These warnings become increasingly important as technical advances in data retrieval and mining make it easier to create and access quantitative indicators of research vitality and as precepts of performance assessment increase pressures on research managers to use quantitative indicators to assess the progress and value of the research they support.

Indicators of Societal Impact

A variety of events may indicate that scientific activities have generated results that are likely to have practical value, even though such value may not (yet) have been realized. Such events might function as leading indicators of the societal value of research. These events typically occur outside research communities. For example:

  • Research is cited as the basis for patents that lead to licenses.
  • Research is used to justify policies or laws or cited in court opinions.
  • Research is prominently discussed in trade publications of groups that might apply it.
  • Research is used as a basis for practice or training in medicine or other relevant fields of application.
  • Research is cited and discussed in the popular press as having implications for personal decisions or for policy.
  • Research attracts investments from other sources, such as philanthropic foundations.

Some of these potential indicators are readily quantifiable, so, like bibliometric indicators, they are attractive means by which science managers can document the value of their programs. But as with quantitative indicators of research vitality, the meaning of quantitative indicators of societal impact is subject to differing interpretations. For example, as studies of science utilization have emphasized, the use of research to justify policy changes may mean that the research has changed policy makers’ thinking or only that it provides legitimation for previously determined positions. Moreover, policy makers have been known to use research to justify a policy when the relevant scientific community is in fact sharply divided about the importance or even the validity of the cited research. Such research nevertheless has societal impact, even if not of the type the scientists may have expected.


Historically, the factors that contribute to scientific discoveries have been analyzed at three different levels. Macro-level studies have considered the effects of the structures of societies—their philosophical, social, political, religious, cultural, and economic systems (Hart, 1999; Jones, 1988; Shapin, 1996). Meso-level analyses have examined the effects of functional and structural features of “national research and innovation systems”—for example, the relative apportionment of responsibility and public funding for scientific inquiry among government entities, quasi-independent research institutes, and universities (Nelson, 1993). Micro-level studies have examined the associations between indicators of progress and such factors as the organization of research units and the age of the researcher (Deutsch et al., 1971).

The programmatic latitude of any single federal science unit to adjust its actions to promote scientific discovery relates almost exclusively to micro-level factors. Even then, agency policies, legislation, and higher level executive branch policies may limit an agency’s options. For this reason, we look most closely at micro-level factors. It is nevertheless worth examining the larger structural factors affecting conditions for scientific discovery, if only to understand the implicit assumptions likely to be accepted by BSR’s advisers and staff.

A convenient means of documenting contemporary thinking on the factors that contribute to scientific advances is to examine the series of “benchmarking” studies of the international standing of U.S. science in the fields of materials science, mathematics, and immunology made by panels of scientists under the auspices of the National Academies’ Committee on Science, Engineering, and Public Policy (COSEPUP). The benchmarking was conducted as a methodological experiment in response to a series of studies that had sought to establish national goals for U.S. science policy and to mesh these goals with the performance reporting requirements of the Government Performance and Results Act (Committee on Science, Engineering, and Public Policy, 1993, 1999a; National Research Council, 1995a).

The benchmarking reports covered the fields of mathematics (Committee on Science, Engineering, and Public Policy, 1997), materials science (Committee on Science, Engineering, and Public Policy, 1998), and immunology (Committee on Science, Engineering, and Public Policy, 1999b); they represented attempts to assess whether U.S. science was achieving the stated goals of the National Goals report (Committee on Science, Engineering, and Public Policy, 1993) that the United States should be among the world leaders in all major areas of science and should maintain clear leadership in some major areas of science. These reports can be used to infer the collective beliefs across a broad range of the U.S. scientific community about the factors that contribute to U.S. scientific leadership, and implicitly to the factors that foster major scientific discoveries. The reports are also of interest because several of the factors they cite—for example, initiation of proposals by individual investigators, reliance on peer-based merit review—are the cynosures of proposals to modify the U.S. science system.

Across the three benchmarking reports, the core factors repeatedly cited as necessary for scientific progress were adequate facilities, the quality and quantity of graduate students attracted to a field (and their subsequent early career successes in the field), diversity in funding sources, and adequate funding. In addition, with regard to the comparative international strength and the leadership position of U.S. science in these fields, the reports placed special emphasis on the “structure and financial-support mechanisms of the major research institutions in the United States” and on its organization of higher education research (Committee on Science, Engineering and Public Policy, 1999b:35). Also highlighted as a contributing factor in “fostering innovation, creativity and rapid development of new technologies” was the “National Institutes of Health (NIH) model of research-grant allocation and funding: almost all research (except small projects funded by contracts) is initiated by individual investigators, and the decision as to merit is made by a dual review system of detailed peer review by experts in each subfield of biomedical science” (p. 36).7

We accept the proposition that adequate funding represents a necessary condition for sustained progress in a scientific field. Research progress also depends on the supply of researchers (including the number, age, and creativity of current and prospective researchers) and on the organization of research, including the number and disciplinary mix of researchers engaged in a project or program and the structure of the research team.

Supply of Researchers

The number, creativity, and age distribution of researchers in a field together affect the pace of scientific progress in the field. Numbers are important to the extent that the ability to generate scientific advances is randomly distributed through a population of comparably trained researchers. Fields with a larger number of active researchers can be expected to generate more scientific advances than fields with fewer. The pace of scientific advance across fields presumably also varies with their ability to attract the most able, creative, and productive scientists. The attractiveness of a field at any point in time is likely to depend on its intellectual excitement (the challenges of the puzzles that it poses), its societal significance, the resources flowing to it to support research, and the prospects for longer term productive and gainful careers. Fields that exhibit these characteristics are likely to attract relatively larger cohorts of younger scientists; if scientific creativity is inversely correlated with age, such fields may be expected to exhibit greater vitality than those with aging cohorts of scientists.

This view is supported by much expert judgment and a number of empirical studies. For example, a study by the National Research Council (1998:1) noted that “The continued success of the life-science research enterprise depends on the uninterrupted entry into the field of well-trained, skilled, and motivated young people. For this critical flow to be guaranteed, young aspirants must see that there are exciting challenges in life science research and they need to believe that they have a reasonable likelihood of becoming practicing independent scientists after their long years of training to prepare for their careers.”

Career opportunities for scientists affect the flow of young researchers into fields. Recent studies of career opportunities in the life sciences have noted that a “crisis of expectations” arises when career prospects fall short of scientific promise (Freeman et al., 2001). Similar observations have been made at other times for the situations in physics, mathematics, computer science, and some fields of engineering. Studies also point, in general, to a decline in research productivity around midcareer. As detailed by Stephan and Levin (1992), the decline reflects influences on both the willingness and ability of researchers to do scientific research. Older scientists are also seen to be slower to accept new ideas and techniques than are younger scientists.8

Organization of Research

Since World War II, the social contract by which the federal government supports basic research has involved channeling large amounts of this support through awards to universities, much of that through grants to individual investigators. It is appropriate to consider whether such choices continue to be optimal and to consider related questions concerning the determinants of the research performance of individual faculty and of specific institutions or sets of institutions (Guston and Keniston, 1994; Feller, 1996).

As detailed above, U.S. support of academic research across many fields, including aging research, is predicated on the proposition that “little science is the backbone of the scientific enterprise…. For those who believe that scientific discoveries are unpredictable, supporting many creative researchers who contribute to S&T, or the science base is prudent science policy” (U.S. Congress Office of Technology Assessment, 1991:146). Against this principle, trends toward “big science” and the requirements of interdisciplinary research have opened up the question of the optimal portfolio of funding mechanisms and award criteria to be employed by federal science agencies. Of special interest here as an alternative to the traditional model of single investigator–initiated research are what have been termed “industrial” models of research (Ziman, 1984) or Mode II research; that is, research undertakings characterized by collaboration or teamwork among members of research groups participating in formally structured centers or institutes. Requests for proposals directed toward specific scientific, technological, and societal objectives; initiatives supporting collaborative, interdisciplinary modes of inquiry organized as centers rather than as single principal investigator projects; and use of selection criteria in addition to scientific merit are by now well-established parts of the research programs of federal science agencies, including NIH and the National Science Foundation.9

A recurrent issue for federal science managers and for scientific communities is the relative rate of return to alternative arrangements, such as funding mechanisms. Making such comparisons is challenging. First, different research modes (e.g., single investigator–initiated proposals and multidisciplinary, center-based proposals submitted in response to a Request for Application) may produce different kinds of outputs. Single-investigator awards, typically described as the backbone of science, are intended cumulatively to build a knowledge base that affects clinical practice or public policy, to support the training of graduate students, to promote the development of networks of researchers and practitioners, and more—but no single awardee is expected to do all these things. Center awards also are expected to contribute to scientific progress—indeed to yield “value added” above the progress that can come from multiple single-investigator awards—but unlike single-investigator awards, they are typically expected to devote explicit attention to the other outcomes, such as translating the results of basic research into clinical practice. Because different modes of research support are expected to support different mixes of program objectives, direct comparisons of “performance” or “productivity” between or among them involves a complex set of weightings and assessments, both in terms of defining and measuring scientific progress and in assigning weights to the different kinds of scientific, programmatic, and societal objectives against which research is evaluated.

Little empirical evidence exists to inform comparisons among modes of research support. Empirical studies, most frequently in the form of bibliometric analyses, exist to compare the productivity of interdisciplinary research units, but these studies are not designed to answer the question of how much scientific progress would have been achieved had the funds allocated to such units been apportioned instead among a larger and more diverse number of single investigator awards (Feller, 1992). Detailed criteria, for example, have been advanced to evaluate the performance of NIH’s center programs (Institute of Medicine, 2004), and a number of center programs have been evaluated. However, these evaluations have not added up to a systematic assessment.10

Expert judgment, historical assessment, and analysis of trends in science provide some support for core propositions about the sources of the vitality of U.S. science: adequate and sustainable funding; multiple, decentralized, funding streams; strong reliance on investigator-initiated proposals selected through competitive, merit-based review; coupling basic research with graduate education; and supplementary funding for capital-intensive modes of inquiry, interdisciplinary collaboration, targeted research objectives, and translation of basic research findings into clinical practice or technological innovations. Still, these principles may not provide wise guidance for the support of behavioral and social science research on aging, for three reasons. First, these observations come from experience with the life sciences, engineering sciences, and physical sciences, and it is not known whether the dynamics of scientific inquiry and progress are the same in the social and behavioral sciences. Second, it is not known whether recent trends in scientific inquiry, such as in the direction of interdisciplinarity, will continue, stop, or soon lead to a fundamental transformation in the way in which cutting-edge science (including in research on aging) is done. Third and perhaps most important, applying these principles presumes an environment of increasing total funds for research. In the more austere budget environment now projected for NIH and its subunits, it will not be possible to increase funding for all modes of support. Turning to existing research for guidance may prove of limited value for making trade-offs among competing funding paradigms.


  1. No theory exists that can reliably predict which research activities are most likely to lead to scientific advances or to societal benefit. The gulf between the decision-making environment of the research manager and the historian or other researcher retrospectively examining the emergence and subsequent development of a line of research is reflected in Weinberg’s (2001:196) observation, “In judging the nature of scientific progress, we have to look at mature scientific theories, not theories at the moments when they are coming into being.” The history of science shows that evidence of past performance and current vitality, that is, of interest among scientists in a topic or line of research, are imperfect predictors of future progress. Thus, although it seems reasonable to expect that a newly developing field that generates excitement among scientists from other fields is a good bet to make progress in the near future, this expectation rests more on anecdote than on systematic empirical research. Notwithstanding the continuing search for improved quantitative measures and indicators for prospective assessment of scientific fields, practical choices about research investments will continue to depend on judgment. We address the prospects and potential roles of quantitative and other methods of science assessment in Chapter 5.
  2. Science produces diverse kinds of benefits; consequently, assessing the potential of lines of research is a challenging task. Assessments should carefully apply multiple criteria of benefit. Science proceeds toward improving understanding and benefiting society on several fronts, but often at an uneven pace, so that a line of research may show rapid progress on one dimension or by one indicator while showing little or no progress on another. In setting research priorities among lines of research, it is important to consider evidence of past accomplishments on the several dimensions of scientific advances (discovery, analysis, explanation, integration, and development) and of contributions to society (e.g., identifying issues, finding solutions, informing choices).
    The policy implications of a finding that a line of research is not currently making much progress on one or more dimensions are not self-evident. Such an assessment might be used as a rationale for decreasing support (because the funds may be expected to be poorly spent), for increasing support (for example, if the poor performance is attributed to past underfunding), or for making investments to redirect the field so as to reinvigorate it. A field that appears unproductive may be stagnant, fallow, or pregnant. Telling which is not easy. Judgment can be aided by the assessments of people close to the field, although not just those so close as to have a vested interest in its survival or growth. The same kind of advice is useful for judging the proper timing for efforts to invest in fields in order to keep them alive or to reinvigorate them.
  3. Portfolio diversification strategies that involve investment in multiple fields and multiple kinds of research are appropriate for decision making, considering the inherent uncertainties of scientific progress. Through such strategies, research managers can minimize the consequences of overreliance on any single indicator of research quality or progress or any single presumption about what kinds of research are likely to be most productive. It is appropriate to diversify along several dimensions, including disciplines, modes of support, emphasis on theoretical or applied objectives, and so forth. Diversification is also advisable in terms of the kinds of evidence relied on to make decisions about what to support. For example, when quantitative indicators and informed peer judgment suggest supporting different lines of research, it is worth considering supporting some of each.
  4. Research managers should seek to emphasize investing where their investments are most likely to add value. This consideration may affect emphasis on types of scientific progress, research organizations and modes of support, and areas of support.
    • Types of scientific progress. Even as they continue to pursue support of major scientific and programmatic advances, research managers may also find it productive to support improvements in databases and analytic techniques, efforts to integrate knowledge across fields and levels of analysis, efforts to examine underresearched questions, and the entry of new people to work on research problems.
    • Research organizations and modes of support. Research managers should consider favoring support to research organizations, or in modes, that have been shown to have characteristics likely to promote progress, either generally or for specific fields or lines of scientific inquiry. NIH has multiple funding mechanisms available that would allow support for particular types of organizations (Institute of Medicine, 2004). An ongoing study by Hollingsworth (2003:8) identifies six organizational characteristics as “most important in facilitating the making of major discoveries” (see Box 4-1). Research managers might consider the findings of such studies in making choices about what kinds of organizations to support, especially in efforts to promote scientific innovation.
    • Areas of support. Some fields may have sufficient other sources of funds that they do not need NIA support, or need only small investments from NIA to leverage funds from other sources. In other fields, however, BSR may be the only viable sponsor for the research. BSR managers may reasonably choose to emphasize supporting research in such fields precisely because funds cannot be leveraged elsewhere. The value-added issue also affects decisions on modes of support and types of research to support.
  5. Interdisciplinary research. BSR should continue to support issue-focused interdisciplinary research to promote scientific activities and collaborations related to its mission that might not emerge from existing scientific communities and organizations structured around disciplines. Interdisciplinary research has significant potential to advance scientific objectives that research management can promote, such as scientific integration and development and scientists’ attention to societal objectives of science consistent with BSR’s mission. Moreover, BSR has a good track record of promoting these objectives through its support of selected areas of interdisciplinary, issue-focused research.
    BSR should continue to solicit research in areas that require interdisciplinary collaboration, to support data sets that can be used readily across disciplines, to fund interdisciplinary workshops and conferences, and to support cross-institution, issue-focused interdisciplinary research networks. Supporting such research requires special efforts and skills of research managers but holds the promise of yielding major advances that would not come from business-as-usual science.
BOX 4-1

Characteristics of Organizations That Produced Major Biomedical Discoveries: The Hollingsworth Study. Rogers Hollingsworth and colleagues (Hollingsworth and Hollingsworth, 2000; Hollingsworth, 2003) have been examining the characteristics of biomedical …

It is often argued that progress in the behavioral and social sciences is qualitatively different from progress in the natural sciences. As noted in a National Research Council review of progress in the behavioral and social sciences (Gerstein, 1986:17), “Because they are embedded in social and technological change, subject to the unpredictable incidence of scientific ingenuity and driven by the competition of differing theoretical ideas, the achievements of behavioral and social science research are not rigidly predictable as to when they will occur, how they will appear, or what they might lead to.” The unstated (and untested) implication is that this unpredictability is more characteristic of the social sciences than the natural sciences. Another view states: “In the natural sciences, a sharp division of labor between the information-gathering and the theory-making functions is facilitated by an approximate consensus on the definition of research purposes and on the conceptual economizers guiding the systematic selection and organization of information. In the social sciences, where the subject matter of research and the comparatively lower level of theoretical agreement generally do not permit comparable consensus on the value and utility of information extracted from phenomena, sharp division of labor between empirical and theoretical tasks is less warranted” (Ezrahi, 1978:288). Even the same techniques are thought to have quite different roles in the social and natural sciences: “The role of statistics in social science is thus fundamentally different from its role in much of the physical science, in that it creates and defines the objects of study much more directly. Those objects are no less real than those of the physical science. They are even more often much better understood. But despite the unity of statistics—the same methods are useful in all areas—there are fundamental differences, and these have played a role in the historical development of all these fields” (Stigler, 1999:199).


Some observers even question the claims of the behavioral and social sciences to standing as sciences. As observed in a recent text on the history of science, “In the end, perhaps the most interesting question is: Did the drive to create a scientific approach to the study of human nature achieve its goal? For all the money and effort poured into creating a body of practical information on the topic, many scientists in better established areas remain suspicious, pointing to a lack of theoretical coherence that undermines the analogy with the ‘hard’ sciences” (Bowler and Morus, 2005:314–315).


According to Cole (2001:37), “The problem with fields like sociology is that they have virtually no core knowledge. Sociology has a booming frontier but none of the activity at that frontier seems to enter the core.”


As noted by Galison (1999:143), “Experimentalists … do not march in lockstep with theory…. Each subculture has its own rhythms of change, each has its own standards of demonstration, and each is embedded differently in the wider culture of institutions, practices, inventions and ideas.”


Rita Colwell, former director of the National Science Foundation, has stated that “Interdisciplinary connections are absolutely fundamental. They are synapses in this new capability to look over and beyond the horizon. Interfaces of the sciences are where the excitement will be the most intense” (Colwell, 1998).


As stated in a recent National Research Council (2005b:150) report, “A remaining challenge is to determine what additional measures, if any, are needed to assess interdisciplinary research and teaching beyond those shown to be effective in disciplinary activities. Successful outcomes of an interdisciplinary research (IDR) program differ in several ways from those of a disciplinary program. First, a successful IDR program will have an impact on multiple fields or disciplines and produce results that feed back into and enhance disciplinary research. It will also create researchers and students with an expanded research vocabulary and abilities in more than one discipline and with an enhanced understanding of the interconnectedness inherent in complex problems.”


Consistent with the belief that competitive, merit-based review is key to creating the best possible conditions for scientific advance is the articulation of how “quality” is to be achieved and gauged under the Research and Development Investment Criteria established by the Office of Science and Technology Policy and the Office of Management and Budget on June 5, 2003: “A customary method for promoting quality is the use of a competitive, merit-based process” (http://www​.whitehouse​.gov/omb/memoranda/m03-15.pdf, p. 7).


As Max Planck famously remarked, “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” Stephan and Levin (1992:83) write: “empirical studies of Planck’s principle for the most part confirm the hypothesis that older scientists are slower than their younger colleagues are to accept new ideas and that eminent older scientists are the most likely to resist. The operative factor in resistance, however, is not age per se but, rather, the various indices of professional experience and prestige correlated with age …. [Y]oung scientists … may also be less likely to embrace new ideas, particularly if they assess such a course as being particularly risky.” Thus, a graying scientific community may affect the rate of scientific innovation directly, by being less productive, and indirectly, by being slow to accept new ideas as they emerge.


Interdisciplinary research and the industrial model of research are often found together, but they are not identical. One may organize centers based primarily on researchers from a single discipline, and researchers from several disciplines may collaborate, as co-principal investigators or as loosely coupled teams, on one-time awards. At NIH, research center grants “are awarded to extramural research institutions to provide support for long-term multidisciplinary programs of medical research. They also support the development of research resources, aim to integrate basic research with applied research and transfer activities, and promote research in areas of clinical applications with an emphasis on intervention, including prototype development and refinement of products, techniques, processes, methods, and practices” (Institute of Medicine, 2004).


“NIH does not have formal regular procedures or criteria for evaluating center programs. From time to time, institutes conduct internal program reviews or appoint external review panels, but these ad hoc assessments are usually done in response to a perception that the program is no longer effective or appropriate rather than as part of a regular evaluation process. Most of these reviews rely on the judgment of experts rather than systematically collected objective data, although some formal program evaluations have been performed by outside firms using such data” (Institute of Medicine, 2004:121).

Copyright © 2007, National Academy of Sciences.
Bookshelf ID: NBK26378

