
TABLE 4-1 Summary of Differences Between 1995 and 2006 Studies

University Participation
  1995 Study: 274 universities (including schools of professional psychology)
  2006 Study: 221 universities and combinations of universities

Field Coverage
  1995 Study: 41 fields, all of which were ranked
  2006 Study: 59 ranked fields, 3 fields not ranked but with full data collection, 14 emerging fields

Program Inclusion
  1995 Study: Nominated by institutional coordinators
  2006 Study: Based on NSF Ph.D. production data and the nominations of institutional coordinators

Number of Programs
  1995 Study: 3,634 ranked
  2006 Study: 4,838 ranked, 166 unrated
Faculty Definition
  1995 Study: 78,000 total; 16,738 nominated as raters (faculty could be counted in more than one program)
  2006 Study: Of the 104,600 total, 7,932 faculty were chosen through a stratified sample for each field to participate in the rating study. Faculty could be counted in more than one program but were usually counted as "core" in only one. Each faculty member was allocated fractionally among programs according to dissertation service so that, over all programs, no one was counted more than once.
Ratings and Rankings
  1995 Study: Raters nominated by the institutional coordinators were sent the National Survey of Graduate Faculty, which contained a faculty list for up to 50 programs in the field. Raters were asked to indicate familiarity with program faculty, scholarly quality of program faculty (scale 1–6), familiarity with graduates of the program (scale 1–3), effectiveness of the program in educating research scholars (scale 1–4), and change in program quality in the last five years (scale 1–3). Rankings were determined for each program by calculating the average rating for the program and arranging all programs from lowest to highest based on that average rating. (A rough illustration appears in the first sketch following the table.)
  2006 Study:
    1. All faculty were given a questionnaire and asked to identify the program characteristics in three categories that they felt were most important, and then to identify which categories were most important. This technique provided the survey-based (S) weights for each field.
    2. A stratified sample of faculty in each field was given a stratified sample of 15 or fewer programs to rate on a scale of 1 to 6. The data sent to raters included a faculty list and program characteristics. These ratings were regressed on the program characteristics to determine the regression-based (R) weights. These weights were then assumed to hold for all programs in the field so that all programs could receive a rating based on them.
    3. The S weights and the R weights, calculated as just described, were used to calculate S rankings and R rankings.
    4. Uncertainty was taken into account by introducing variation into the values of the measures and by repeatedly estimating the ratings from random halves of the raters chosen at random. Ratings were calculated 500 times.
    5. The ratings from step 4 for all programs in a field were pooled and arranged in rank order from lowest to highest; the range covering 90 percent of the resulting rankings was then calculated for each program.a (A rough illustration of steps 2–5 appears in the second sketch following the table.)
a. This is a simplified description; the exact process is more complex and is described in detail in Appendix J.

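To make the contrast concrete, here is a minimal Python sketch of the 1995 approach, in which a program's rank is determined simply by the average of the quality ratings it received. The program names and ratings below are invented for illustration and are not data from either study.

```python
# Hypothetical illustration of the 1995 approach: rank programs by the mean
# of the 1-6 scholarly-quality ratings they received. All data are invented.
from statistics import mean

ratings_1995 = {
    "Program A": [5, 6, 5, 4],
    "Program B": [3, 4, 4, 3],
    "Program C": [5, 5, 6, 6],
}

# Average rating per program, then sort to obtain a single ordinal ranking.
averages = {prog: mean(r) for prog, r in ratings_1995.items()}
ranking = sorted(averages, key=averages.get, reverse=True)

for rank, prog in enumerate(ranking, start=1):
    print(rank, prog, round(averages[prog], 2))
```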

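The 2006 procedure is more involved. The sketch below, again with entirely invented data, illustrates the spirit of steps 2 through 5 of the regression-based (R) ranking: ratings from a sample of raters are regressed on program characteristics (ordinary least squares is used here as a stand-in; the study's actual estimation procedure is described in Appendix J), the resulting weights are applied to every program in the field, and the computation is repeated 500 times on random halves of the raters to obtain a 90 percent range of rankings. The perturbation of the measure values mentioned in step 4 is omitted for brevity, and numpy is assumed.

```python
# A minimal sketch, not the committee's actual code, of steps 2-5 of the
# 2006 regression-based (R) ranking. All data below are invented.
import numpy as np

rng = np.random.default_rng(0)

n_programs, n_characteristics, n_raters = 40, 3, 50
X = rng.normal(size=(n_programs, n_characteristics))   # invented program characteristics
true_w = np.array([1.0, 0.5, -0.3])                    # invented weights used only to simulate ratings

# Each invented rater rates 15 programs on a 1-6 scale derived from the
# characteristics plus noise (the sampled ratings of step 2).
ratings = []                                           # (rater, program, rating) triples
for rater in range(n_raters):
    for p in rng.choice(n_programs, size=15, replace=False):
        score = X[p] @ true_w + rng.normal(scale=0.5)
        ratings.append((rater, int(p), float(np.clip(np.round(score + 3.5), 1, 6))))

def r_weights(rater_ids):
    """Regress the chosen raters' ratings on program characteristics (step 2)."""
    rows = [(p, y) for rater, p, y in ratings if rater in rater_ids]
    A = np.array([X[p] for p, _ in rows])
    y = np.array([y for _, y in rows])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)          # ordinary least squares stand-in
    return w

def rank_programs(weights):
    """Score every program with the weights and convert scores to ranks (step 3)."""
    scores = X @ weights
    order = np.argsort(-scores)                        # highest score gets rank 1
    ranks = np.empty(n_programs, dtype=int)
    ranks[order] = np.arange(1, n_programs + 1)
    return ranks

# Steps 4-5: recompute the ranking 500 times from random halves of the raters,
# then report the 5th-95th percentile range of each program's rank.
all_ranks = []
for _ in range(500):
    half = set(rng.choice(n_raters, size=n_raters // 2, replace=False))
    all_ranks.append(rank_programs(r_weights(half)))
all_ranks = np.array(all_ranks)                        # shape (500, n_programs)

low = np.percentile(all_ranks, 5, axis=0)
high = np.percentile(all_ranks, 95, axis=0)
for p in range(5):                                     # show the first few programs
    print(f"program {p}: 90% rank range {int(low[p])}-{int(high[p])}")
```

The point of the repetition is that each program is reported with a range of plausible ranks rather than a single number, which is how the 2006 study expresses the uncertainty in its rankings.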
From: Chapter 4, The Methodologies Used to Derive Two Illustrative Rankings

National Research Council (US) Committee on an Assessment of Research Doctorate Programs; Ostriker JP, Kuh CV, Voytuk JA, editors. A Data-Based Assessment of Research-Doctorate Programs in the United States. Washington (DC): National Academies Press (US); 2011. Copyright © 2011, National Academy of Sciences.
