5 Student Education and Outcomes

INTRODUCTION

The 1995 Study contained a reputational measure that assessed the “Effectiveness of [the] Program in Educating Research Scholars/Scientists”1 but was quite straightforward in stating, “Reputational ratings do not tell us how well a program is structured, whether it offers a nurturing environment for students, or if the job placement experiences of its graduates are satisfactory.”2 Yet all of these attributes are important to the quality of a doctoral education. Despite the known shortcomings of “E,” the reputational measure of educational effectiveness, the 1995 Study proceeded to measure it in order to maintain continuity with the previous 1982 Study.

The Committee observed the high degree of correlation between “Q,” the reputational measure of the scholarly quality of program faculty, and “E.” It therefore asked the Panel on Student Processes and Outcomes to focus on defining direct measures, obtainable from programs or through surveys, that would provide information about the education of doctoral students that is comparable across programs and at the same time useful to potential students and to program administrators. The Panel constructed surveys for first-year students, for mid-course students, and for graduates who had completed their programs 5–7 years earlier. These questionnaires are shown in Appendix D.

When it came to administering the questionnaires, however, the approach of questioning students and graduates directly about their programs ran into a number of obstacles. Prior to the pilot trials, the pilot coordinators warned the Committee about the low response rates often encountered when questionnaires are administered to graduate students.3 Raising these response rates would be costly and time-consuming. The Committee viewed this as a very serious problem, since the comparisons of programs, which lie at the heart of the assessment, require valid data from each program. If response rates were low, it would be difficult to determine whether responses came from a biased sample. Finding addresses for graduates is also time-consuming. Thus, after considerable debate, the Committee decided to pilot only a questionnaire for mid-course students in selected fields.

This chapter reports on the measures the Committee agreed would be valuable in assessing the effectiveness of doctoral programs and on the results of its pilot trials in five fields. It recommends that a survey of admitted-to-candidacy students be conducted in a limited number of fields as part of the full study. It also discusses two other questionnaires, one for first-year students and one for graduates who are 5–7 years beyond completion, which may be helpful to programs that want to survey their own students and graduates.

GUIDING PRINCIPLES

Student input is important in improving doctoral education. Direct assessment of student experiences provides information about program effectiveness that cannot be obtained by other means.

Although faculty are key in the education of research-doctoral students, the effectiveness of that education may hinge on student perceptions and reactions. The findings of interviews with doctoral students conducted by Jody Nyquist and her colleagues4 were confirmed by a survey of doctoral students in 11 selected fields at 27 universities conducted by Golde and Dore (2001), who found that graduate students did not believe the training they received was preparing them for the jobs they were likely to take and that “Many students do not clearly understand what doctoral study entails, how the process works and how to navigate it effectively.”5 The Committee agreed that effective doctoral programs should have formal means of obtaining student input in order to improve their effectiveness.

A student survey can provide a statistical description of the students in a program and information about that program's practices, and it can assist future students in selecting graduate programs.

Data about graduate programs can be collected from program administrators (see Table 4–1) and also from students. Students can report reliably on what they have experienced in their doctoral programs. Programs can report on what they offer, the overall characteristics of their students, and what information they collect about student outcomes. Such data should permit prospective students to distinguish among programs. If there are differences among programs in the extent to which students have received training in particular areas, a report on these differences will permit potential students to match what a program offers to what they desire in a program.

Information on educational outcomes is essential in assessing the quality and effectiveness of doctoral programs.

It should be no surprise that doctoral education is preparatory to employment. Traditionally, the Ph.D. is certification that a degree holder can conduct original research. Fifty years ago, most doctorate holders became academic researchers. This is no longer the case. In fact, in almost all fields, fewer than half of new Ph.D. recipients are employed as tenure-track faculty in research universities.6 In recognition of this change, the 1995 Committee on Science, Engineering, and Public Policy (COSEPUP) study, Reshaping the Graduate Education of Scientists and Engineers, recommended that

Academic departments should provide employment information and career advice to prospective and current students in a timely manner and should help students see career choices as a series of branching decisions. Students should be encouraged to consider discrete alternative pathways when they have met their qualifying requirements.7

A corollary of this recommendation is that prospective students should know what kind of employment recent program graduates have undertaken. The Association of American Universities (AAU) reiterated this recommendation in 1998.8 Yet these data are still not routinely available from doctoral programs. This is in contrast to other components of graduate education, for example professional schools, in which employment outcomes are routinely publicized and used as a recruitment tool.

The collection and presentation of data on the employment outcomes of graduates neither requires nor implies a hierarchy of employment outcomes for Ph.D.s. Rather, a prospective doctoral student who wants to become a teacher at a liberal arts college or a researcher in an industrial laboratory should be able to learn whether the programs under consideration have produced graduates who work in such settings and whether they provide appropriate training in those directions. Moreover, data suggest that many graduates are employed in sectors other than those they sought at the outset of their graduate studies.9 Programs that provide opportunities for students to explore career options, and that encourage such exploration through formal and informal means, are more likely to create an environment that supports student choices and prepares students for opportunities in varying labor markets.

Given these guiding principles, the Panel developed three questionnaires that would collect the information recommended in a number of recent reports designed to improve the quality of doctoral education. Only one of these was actually pilot tested, but all three questionnaires are discussed below and provided in the appendices in the hope that they may be adopted and implemented by interested institutions and professional societies.

INFORMATION FROM STUDENTS

The questions for students were designed with the intention of collecting data that are comparable across programs. Thus, they are limited primarily to factual questions about what the student was informed of, was exposed to, or experienced with respect to teaching, research, and professional development. Other questionnaires have asked students how they felt about or evaluated aspects of their experience.10 The Committee rejected this approach as being beyond the scope of the present study, although such an approach would be informative about student attitudes.

Questions for First-Year Students

The proposed questionnaire is shown in Appendix D. These questions focus on information that the program provided to the student either during the application process or following admission. They ask whether, prior to attending, students were provided information about:

  • Program costs and financial aid,
  • Career prospects in the field,
  • Program time to degree,
  • Program requirements and expectations,
  • Rates of program completion,
  • Employment of recent graduates.

Questions are also asked about whether the program provided a formal orientation as the student began his or her studies, about the student's career goals and previous participation in post-baccalaureate education, and about the status of financial support.

The Committee decided not to field this questionnaire, in part because the pilot trials were carried out toward the end of the academic year, when it was impractical to obtain satisfactory response rates in a short period of time. For the full study, the Committee believes that the benefit from information gathered through this questionnaire would probably not justify the cost of administration. However, the questionnaire is included in Appendix D in the event that individual universities or consortia of universities wish to use it to compare practices across programs.

Questions for Mid-Program Students

This questionnaire is shown in Appendix D. It is intended for registered students who have finished their course work and preliminary examinations and are in the process of completing their dissertations, a status frequently referred to as “advanced-to-candidacy.” This status is reached at varying times in different disciplines, but it generally means that a student who entered a program without a relevant master's degree will have been in the program for at least 2 years and, thus, is definitely a doctoral student. The decision as to which students are “mid-course” and should receive the questionnaire will vary by field and program.

Questions are grouped into three categories:

1. Educational Program

These questions deal with whether the student is expected to earn a master's degree as part of his or her training and whether the doctoral program is part of a joint-degree program. Also addressed are the student's career goals upon entry into the program and whether these have changed, as well as the student's three largest sources of support.

2. Program Characteristics

Questions in this category address:

  • the kinds of professional skills in which the student received instruction (e.g., presentation skills, proposal writing, preparing articles for publication, working in teams, independent research, project management, professional ethics, and speaking to nonacademic audiences);
  • the kinds of teaching experiences to which the student was exposed, and whether there was formal instruction and evaluation in teaching and an opportunity to teach in a variety of academic environments;
  • the student's perception of the program environment (e.g., feedback, assessment of progress, career advice, mentoring, and liveliness of the intellectual climate);
  • the availability of infrastructure (e.g., personal workspace and computing facilities) and the adequacy of library resources; and
  • the student's research productivity, research presentations, and any publications while enrolled in the doctoral program.

3. Student Demographic Information

These questions deal with the student's age, gender, citizenship, race/ethnicity, dependent care responsibilities, and level of parents' education.

Questions for Graduates 5–7 Years Out

This questionnaire was not pilot tested because the pilot site coordinators told us that they would not be able to provide us with mail or e-mail addresses for their graduates within the short time frame of 2–3 months. Cerny and Nerad11 have been able to track down graduates 10–13 years after their degrees at 61 research universities and achieved a response rate of over 60 percent. Programs that support students with institutional training grants from federal sources routinely track the employment of their graduates. Thus, we know that tracking graduates is possible, although not necessarily routine, in all fields and institutions.

A more conceptual objection to using these data in an assessment of the quality of current programs is that the faculty and curricula that were in place 10 years earlier may not be the same as those currently associated with the program. However, data on the career outcomes of graduates 5–7 years out can and should be collected by effective programs. NSF data indicate that the 5–7 year period allows Ph.D. recipients, including those who hold postdoctoral appointments, to settle into more stable employment than the positions they entered immediately after graduation. Collecting such data permits programs to understand what types of positions their graduates are taking and to consider whether their curricular offerings provide adequate preparation for those positions. The Committee agreed that programs should track the career outcomes of their students until at least 5 years out and make this information available to prospective students. Such efforts would indicate a positive sense of responsibility on the part of a program, demonstrating a desire to monitor program quality and effectiveness.

PILOT TRIAL FINDINGS

The pilot questionnaire for students admitted to candidacy was administered to 466 students from five fields at five institutions.12 This was done in mid-April, which is late in the school year. A response rate of 25 percent was achieved with one follow-up mailing. When we inquired why the response rate was so low, we were told that it was late in the year and that some students may have left campus. We were also told that many students do not check their university mailbox often. In earlier discussions, we had been told that the typical response rate during the middle of the school year is, at best, 40 percent. On the other hand, higher response rates (up to 80 percent) have been achieved when students and staff have been alerted in advance to the importance of the impending survey. The good news was that, for those who did answer the questionnaire, the items worked: all items were answered, and we received no complaints about them.

A 40 percent response rate would not be adequate for program-to-program comparisons. The Committee, however, decided that it should recommend a further trial, for five fields, as part of the full study. Questionnaires should be sent out early in the school year and programs should be asked to collect alternative e-mail addresses (in addition to the university mailbox) for their students.

CONCLUSIONS AND RECOMMENDATIONS

Having fielded a questionnaire for mid-course students at the pilot institutions, the Committee concluded that it would be feasible to conduct such a survey in a limited number of fields as part of an assessment of research-doctorate programs. The Committee also found, however, that institutions will need considerable lead time to be able to provide information on recent graduates. Whether a program collects such data is itself an indicator of good practice. Thus, the Committee recommends:

Recommendation 5.1: The proposed NRC study of research-doctorate programs should conduct a survey of enrolled students in selected fields who have advanced to candidacy for the doctoral degree regarding their assessment of their educational experience, their research productivity, program practices, and institutional and program environment.

With respect to career outcomes of graduates, the Committee recommends:

Recommendation 5.2: Universities should track the career outcomes of Ph.D. recipients both directly upon program completion and at least 5–7 years following degree completion in preparation for a future NRC doctoral assessment. A measure of whether a program carries out and publishes outcomes information for the benefit of prospective students and as a means of monitoring program effectiveness should be included in the next NRC assessment of research-doctorate programs.

Footnotes

1.
2. Op. cit., p. 23.
3. For example, Golde and Dore (2001) had a 42 percent response rate.
4.
5.
6.
7.
8.
9.
10. Examples include Golde and Dore, op. cit., and the National Doctoral Program Survey fielded by the National Association of Graduate and Professional Students in 1999.
11. See, for example, Nerad and Cerny (1999).
12. One institution sent the NRC e-mail addresses for an additional 411 students but did not indicate the field of their program. Questionnaires were not sent to these addresses.

Copyright © 2003, National Academy of Sciences.