BMJ. 2000 Mar 4; 320(7235): 636–639.
PMCID: PMC1117659

The research assessment exercise and medical research

In 1999-2000 the Higher Education Funding Council for England will distribute over £855m for research, virtually all of it according to the quality and amount of research done. Quality is assessed through a periodic research assessment exercise. Research is funded selectively so that universities and colleges with high quality research departments get a larger share of the money. The first research assessment exercise to cover the entire higher education sector was undertaken in 1992, the most recent was in 1996, and the next will take place in 2001. The research community now has an opportunity to influence how the quality of research is assessed after the 2001 exercise because, from now until autumn 2000, the funding councils for England, Scotland, Wales, and Northern Ireland are undertaking a fundamental review of research policy and funding.1

Each higher education institution is allocated a block grant that includes quality related research funding. This quality related funding provides money for the infrastructure of research—helping to cover the costs of the salaries of permanent academic staff, premises, and central computing—while research charities and funding councils provide for direct project costs and contribute to indirect project costs. The quality related research funding is thus the core funding for the university research base and is allocated by the funding councils according to the quality rating of each unit of assessment.

Summary points

  • The research assessment exercise has resulted in substantial reductions in funding to some medical schools and has led to a loss of status for teaching compared with research
  • The exercise has undervalued clinical and health services research and disadvantaged highly specialised and multidisciplinary research
  • It also promotes a short term approach to research and is expensive to operate
  • Proposed changes for the exercise in 2001 should help address these problems, but the fundamental review of research policy and funding by the funding councils provides an opportunity for innovative change

The research assessment exercise

For each clinical unit of assessment, the parent university makes a submission which is given a quality rating after being judged against standards of national and international research excellence. The quality of research submitted for each unit is assessed by a panel of practising researchers in the subject and is based on the quality of publications, numbers of research assistants and research studentships, and income from research grants, particularly peer reviewed funding (such as that from the Medical Research Council and the Association of Medical Research Charities). The panel also seeks evidence of the vitality of the department and its prospects for continuing development. Quality is rated on a seven point scale, from 1 at the bottom through 2, 3a, 3b, 4, and 5 to 5* at the top. A rating of 1 is defined as research quality that equates to attainable levels of national excellence in none or virtually none of the sub-areas of activity, whereas 5* is defined as research quality that equates to attainable levels of international excellence in most sub-areas of activity and attainable levels of national excellence in all others.

Within the funding system as a whole weightings are given to each broad area of research so that, for example, laboratory and clinical subjects receive more funding than the humanities. The total funding for a given subject is calculated by multiplying the funding associated with the quality rating by the volume of research in that subject. Volume of research in each unit of assessment is measured according to five separate components: the numbers of academic staff active in research, research assistants, research fellows, and postgraduate students, plus research income from charities. The number of academic staff active in research is the most important measure of volume and accounts for up to two thirds of the total. Quality ratings 1 and 2 attract no funding, whereas a rating of 5* attracts about four times as much funding as a rating of 3b for the same volume of research activity. Thus the funding of research is highly selective. In 1998-9, 75% of research funds from the Higher Education Funding Council for England went to just 26 of the more than 100 higher education institutions.
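The arithmetic of this allocation can be sketched as follows. The rating weights, subject weighting, and unit cost below are illustrative assumptions only (the funding councils' actual figures are not given in this article); the sketch respects only the constraints stated above: ratings 1 and 2 attract no funding, and a 5* rating attracts about four times as much as a 3b rating for the same volume of research.

```python
# Illustrative sketch of quality related research funding.
# The weights are hypothetical, chosen only so that ratings 1 and 2
# attract no funding and 5* attracts four times as much as 3b.
RATING_WEIGHTS = {
    "1": 0.0, "2": 0.0, "3b": 1.0, "3a": 1.5,
    "4": 2.25, "5": 3.0, "5*": 4.0,
}

def quality_related_funding(rating, volume, subject_weight, unit_cost):
    """Funding = rating weight x research volume x subject weighting x unit cost."""
    return RATING_WEIGHTS[rating] * volume * subject_weight * unit_cost

# Two units of equal volume in the same subject, rated 5* and 3b:
high = quality_related_funding("5*", volume=40, subject_weight=1.6, unit_cost=1000)
low = quality_related_funding("3b", volume=40, subject_weight=1.6, unit_cost=1000)
assert high == 4 * low          # the stated four-to-one ratio
assert quality_related_funding("2", 40, 1.6, 1000) == 0.0  # unfunded rating
```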

Effect of the research assessment exercise on medical research

The three clinical units of assessment (clinical laboratory sciences, community based clinical subjects, and hospital based clinical subjects) are the most important units for medical schools, but the quality of research in other units from the 69 subject areas (including clinical dentistry, nursing, subjects allied to medicine, preclinical subjects, biological sciences, and biochemistry) will also affect the allocation of quality related research funding to universities that have associated medical schools.

In the report of an independent task force on clinical academic careers,2 only six medical schools (out of more than 20) had at least one unit of assessment with a score of 5 or 5*. The report divided medical schools and postgraduate institutions into four bands and commented that the ratings of the 19 medical schools in the lower two bands were very disappointing, yet it also pointed out that Sir Robert May, the government's chief scientific adviser, had written to the task force emphasising that he regarded clinical medicine as one of Britain's research strengths, both absolutely and relatively. However, it is impossible to be certain about Britain's comparative performance in medical research because the measures used by May were not the same as those used in the research assessment exercise.

Therefore, this seems a good time for the funding councils to undertake a fundamental review of research policy and funding, to consider the continuing fitness of the research assessment exercise for its purpose, and to respond to criticism that the exercise is “misleading, unscientific and unjust.”3

Some problems with the research assessment exercise

Major shifts in funding

The research assessment exercise can result in major shifts in funding to or from individual medical schools (over £2 000 000 from two schools in the 1996 exercise). In theory increased selectivity could result in the emergence of a small number of centres of excellence where internationally competitive research is undertaken while the remaining schools follow a spiral of decline to become “teaching only” medical schools. Since research informs teaching and clinical practice, such a development would be undesirable both for those medical schools and the NHS.

Whatever methodology is used for assessing research quality it is likely that there will be only five or six medical schools in the United Kingdom that could aspire to more than one unit of assessment rated at 5 or 5* with its associated funding. Medical schools will need to develop policies to streamline their research portfolio, focusing on two or three existing areas of research strength (such as neurosciences, cardiovascular sciences, cancer studies, infection, inflammation and immunity, etc). In all but a few medical schools it will be impossible to sustain internationally competitive research in a full range of clinical disciplines. It is not viable for there to be, say, a research group of international distinction in anaesthesia in every medical school. But it should be possible for a research group in anaesthesia to be part of an internationally distinguished research group in clinical pharmacology and therapeutics, biophysics, or bioengineering and computing, etc. However, teaching and service functions must remain clearly identifiable. Historical preconceptions about departmental structures will need to be re-examined.

As research activity is reconfigured with increasing emphasis on selectivity, it is important that no major gaps in Britain's clinical research portfolio emerge, not least because research underpins teaching, and medical schools must provide teaching across the clinical spectrum even in these days of core curriculums. Another problem to be addressed is that large and small areas of research may need different treatment in the research assessment exercise. The use of sub-panels in the 2001 exercise should help to address this problem and might also reduce financial turbulence by “smoothing out” changes in ratings from one research assessment exercise to another.

Relative status of teaching

Although there is a perception that teaching is less valued as an academic activity than research, the criteria for academic promotion in most institutions include an element that scores teaching. However, institutions generally do have procedures for promotion of academic staff on the basis of research excellence alone (to readership level), whereas few have procedures for promotion on the basis of teaching alone.

There is certainly a perception that clinical teaching has been adversely affected by the research assessment exercise and in some institutions by loss of clinical academic staff following a reduction of the funding council's grant as a consequence of the exercise. Nevertheless, some medical schools that did not perform well in the research assessment exercise in 1996 have subsequently received an excellent rating for teaching in the teaching quality assessment exercise. When the current round of subject review in the teaching quality assessment has been completed it will be interesting to see how many medical schools that achieved international excellence in medical research also achieved the highest ratings in teaching quality.

Molecular science is rated higher than clinical and health services research

In the 1996 research assessment exercise, panels were perceived to have rated molecular science higher than clinical research and health services research. This has led to transfer of funding within medical schools to non-clinical sciences (such as conversion of clinical lectureships to non-clinical lectureships) and has adversely affected clinical and health services research. Some clinical disciplines have been more affected by the research assessment exercise than others (for example, surgical disciplines, pathology, public health and epidemiology, and primary care). In addition, participation by the NHS in the 1996 exercise was considered inadequate. Furthermore, there is no link between the assessment cycles of the funding councils and those of NHS research and development. There have also been inconsistencies between the policies of the funding councils and NHS research and development: for example, the funding councils' strategy is to be highly selective, whereas the NHS strategy has been to widen the research base.

Some of these problems have been addressed for the research assessment exercise of 2001, and the use of sub-panels that cover broad clinical disciplines, together with cross representation on sub-panels and increased NHS representation, may allay some of the anxiety about the value placed on clinical and health services research.

New initiatives from the research charities, the MRC clinical research initiative, and the strategic review of NHS research and development will raise the profile of clinical research. There does, however, remain a fundamental problem for clinical academics, which is their commitment to clinical work. Consideration should be given to whether judgments about the quality and the output of clinical academics should be related to the time available for them to undertake research. In funding terms, an additional question is whether proportionately more quality related research funding should be made available to provide overheads for major charity awards that support clinical research (from the Association of Medical Research Charities). Indeed, it has been suggested that all quality related research funding should follow grants from the research councils and Association of Medical Research Charities; this would have the advantage of simplicity but would not be acceptable to the wider research community and would not necessarily help clinical research.

Highly specialised and interdisciplinary research are disadvantaged

In a system that uses the impact factors of the journals in which research is published as part of the judgment of quality, highly specialised research is likely to be disadvantaged. However, the use of sub-panels, the flagging of specialised subjects, increased representation from the NHS, and the use of international nominees to moderate the ratings should help to address this specific problem. The focus on quantitative parameters also presents problems and should be supplemented by qualitative assessment of research output by experts in the assessment panels and sub-panels.

As research becomes more interdisciplinary it becomes more difficult to decide which is the most appropriate panel to assess research that covers several units of assessment. Wider representation on panels and sub-panels as well as cross representation will help to address this problem. Assessment of an interdisciplinary submission by more than one of the three major clinical units of assessment may be required to resolve differences. The final responsibility for the rating remains with the main panel.

Emphasis on short term goals in research

The frequency of the current research assessment exercises has produced a short term approach to research in order to have results ready for the next exercise. This has encouraged a potentially disruptive and financially expensive “transfer market” of successful research leaders (and often their groups) between exercises. One way to address such problems would be to extend the intervals between exercises to six or seven years. However, this would penalise those institutions that had been able to implement improvements within a shorter period. An alternative approach possibly worth exploring would be for institutions to conduct periodic (possibly triennial) self assessments that would be policed by random site visits. Any self assessment not accepted by the funding council would trigger a site visit, and an inflated self assessment would incur financial penalties. Another model would be for externally moderated internal review (say every three years) within a six or seven yearly cycle of formal external assessment.

Expensive to operate

A key question is whether the cost of research assessment justifies the benefits. With the current system it is difficult to make a judgment about this. Only the direct costs to the Higher Education Funding Council for England (£27m) are currently available, and neither the direct costs nor the opportunity costs that universities bear in preparing for the research assessment exercise are known. These costs must be considerable and are not only financial. They should be evaluated and attempts made to reduce them as far as possible. Furthermore, any new model of assessment must be tested against refinement and development of the current system both in terms of effectiveness and total cost.

Problems with current methods for assessing research quality

The current methods for assessing research quality include quality of publications, the extent of research activity (number of research assistants, research students and research studentships and evidence of esteem by external funding as indicated by income from research grants), and evidence of the department's vitality and prospects for continuing development.

Williams has argued that research grant income should be removed from consideration because it does not “guarantee that the money will be well spent or the project will be successfully conducted or reach worthwhile conclusions.”3 Furthermore, he claims that there is no evidence that a project's measurable outcome is related to the size or source of the grant. He believes that publications are the only universally accepted currency of success, and he also believes (as do many others) that assessing the quality of publications using journal impact factors is flawed. There is also a view that departments should not be restricted to submitting only four publications per member of staff who is undertaking research; instead, a group's total output or a selection of that output should be available to the panel.

In addition, numbers of graduate research students or assistants are not necessarily an index of the quality of research activity; indeed, they could be the opposite if supervisors (especially clinical academics) are overstretched. However, completion rates for higher degrees within a given time can be used to support an assessment of quality. It is encouraging that in the section on criteria and working methods for the research assessment exercise in 2001, panels “will pay particular attention to research output” in assessing the quality of research.4 However, research students, research studentships, and external income will remain as part of the assessment process.

Bias on the part of panel members is also an important issue: Roberts has shown statistically that in the 1996 exercise the outcome was biased in favour of departments with members on assessment panels.5 Whether this association was causal is another matter. However, for the exercise in 2001, this will undoubtedly be borne in mind by panel members. Furthermore, the establishment of sub-panels, the opportunity for universities and panels themselves to seek cross referral to other panels, and the presence of international nominees, will reduce the risk of bias.


The research assessment exercise has been progressively refined over several exercises, and it will be interesting to see the results of the refinements for 2001 in the clinical medicine units of assessment. Of particular importance to the future will be the way that the sub-panels work and how they affect the final rating for a unit of assessment. Of even more importance will be the detail of the funding methodology which follows assessment of research quality.

Overall, the effects of the assessment exercises on the research base have been positive. Universities and their medical schools are developing proactive policies to streamline their research portfolios and to concentrate on existing areas of strength. Successive assessment exercises show an increasing proportion of research that is highly rated. The exercises have affected the research base in most if not all institutions and within units of assessment. The criticism that medical research “shot itself in the foot” in the 1996 exercise is ill founded—funding has increased overall for the clinical units of assessment. Abandoning assessment of research quality would be unfair to those institutions that are internationally competitive and to those that aspire to international excellence.

Concerns about the methodology for assessing quality and the negative consequences of the operation for medical schools must be addressed. Any new method for assessing research quality must be at least as effective as, and cheaper than, the current system. With all its shortcomings, assessment of research quality linked to funding is here to stay. Those who call for abandoning the whole exercise will have to come up with a credible alternative for the accountable allocation of almost £1bn of public money.


ST was formerly dean of medicine at the University of Manchester and executive secretary of the Council of Heads of Medical Schools. This article is based on a background paper prepared for a symposium on careers in academic medicine sponsored by the Joint Consultants Committee and the Department of Health.


Editorial by Goldbeck-Wood. Education and debate pp 630, 633


Competing interests: None declared.


1. Higher Education Funding Council for England (HEFCE). www.hefce.ac.uk (accessed 5 Jan 2000).
2. Clinical academic careers—report of an independent task force chaired by Sir Rex Richards. London: Committee of Vice Chancellors and Principals; 1997.
3. Williams G. Misleading, unscientific and unjust: the United Kingdom's research assessment exercise. BMJ. 1998;316:1079–1082.
4. RAE 2001. www.rae.ac.uk (accessed 5 Jan 2000).
5. Roberts C. Possible bias due to panel membership in the 1996 research assessment exercise. Res Fortnight. 1999;6(4):1.
