Br J Gen Pract. Sep 1, 2010; 60(578): e335–e344.
Published online Aug 31, 2010. doi: 10.3399/bjgp10X515340
PMCID: PMC2930243

Performance of small general practices under the UK's Quality and Outcomes Framework

Tim Doran, MD, clinical research fellow, Stephen Campbell, PhD, senior research fellow, Catherine Fullwood, PhD, and Evangelos Kontopantelis, PhD, research associate
National Primary Care Research and Development Centre, University of Manchester, Manchester
Martin Roland, DM, professor of health services research

Abstract

Background

Small general practices are often perceived to provide worse care than larger practices.

Aim

To describe the comparative performance of small practices on the UK's pay-for-performance scheme, the Quality and Outcomes Framework.

Design of study

Longitudinal analysis (2004–2005 to 2006–2007) of quality scores for 48 clinical activities.

Setting

Family practices in England (n = 7502).

Method

Comparison of performance of practices by list size, in terms of points scored in the pay-for-performance scheme, reported achievement rates, and population achievement rates (which allow for patients excluded from the scheme).

Results

In the first year of the pay-for-performance scheme, the smallest practices (those with fewer than 2000 patients) had the lowest median reported achievement rates, achieving the clinical targets for 83.8% of eligible patients. Performance generally improved for practices of all sizes over time, but the smallest practices improved at the fastest rate, and by year 3 had the highest median reported achievement rates (91.5%). This improvement was not achieved by additional exception reporting. There was more variation in performance among small practices than larger ones: practices with fewer than 3000 patients (20.1% of all practices in year 3) represented 46.7% of the highest-achieving 5% of practices and 45.1% of the lowest-achieving 5% of practices.

Conclusion

Small practices were represented among both the best and the worst practices in terms of achievement of clinical quality targets. The effect of the pay-for-performance scheme appears to have been to reduce variation in performance, and to reduce the difference between large and small practices.

Keywords: incentives, quality, primary care

INTRODUCTION

Small general practices in the UK, particularly those that are single handed, are often accused of providing poor-quality care. The 2000 NHS Plan cited a need to ‘confirm that single-handed (solo) practices are offering high standards’.1 The Shipman Inquiry identified advantages and disadvantages associated with single-handed practice, and described an implicit unwritten policy to reduce the number of solo practices in the UK, noting that concern about single-handed practitioners was not recent.2 In 2002, the Audit Commission concluded that there were good arguments for preserving a diversity of practice sizes and types: ‘One challenge is to ensure that the trend towards larger practices does not mean that patients lose out on some of the advantages that smaller practices currently offer’.3 However, in 2008, the NHS Next Stage Review continued the pressure on small practices by suggesting that they be congregated as franchised practices, and also advocated groupings of larger numbers of doctors in new GP-led health centres.4

A new general medical services contract was introduced in the UK in 2004 incorporating a pay-for-performance scheme for family doctors: the Quality and Outcomes Framework (QOF). The scheme awards points, which are converted into financial rewards, for meeting quality targets on clinical, organisational, and patient-experience indicators.5 The framework document stated that the contract would ‘allow GPs in small practices to continue as before, but with the opportunity and the incentive to demonstrate that they can provide high quality care and be rewarded appropriately’.6 In the first year of the scheme, quality scores appeared to increase with the size of the clinical team.7 However, high points scores do not always correlate with high rates of achievement. Maximum achievement thresholds of between 50% and 90% exist for each clinical indicator; hence, it is possible to attain maximum points while missing the targets for between 10% and 50% of patients.

How this fits in

Small practices are often suspected of providing poor quality care and face pressures to congregate into larger practice groups. Under the Quality and Outcomes Framework, the quality of care provided by smaller practices is more variable than that provided by larger practices, but on average is comparable. The payment system, however, disadvantages smaller practices as a group.

This article describes the performance, in terms of both points scored and actual levels of achievement, of small practices on 48 clinical activity indicators in the QOF compared to practices of other sizes over the first 3 years of the scheme.

METHOD

The QOF awards points to family practices on the basis of the proportion of eligible patients for whom they achieve clinical targets between a minimum threshold of 25% (that is, the target must be achieved for at least 25% of patients for the practice to earn any points) and a maximum threshold that varies according to the indicator (Table 1). There is no additional reward for achievement above the maximum threshold. Practices may exclude (‘exception report’) patients they deem to be inappropriate for an indicator, and these patients are removed from the achievement calculation. The maximum number of points awardable varies by indicator. In year 1 (2004–2005), each point earned the practice £76, adjusted for the relative prevalence of the disease and the size of the practice population. This was increased to £126 for years 2 and 3 (2005–2006 and 2006–2007). For year 3, most minimum achievement thresholds were raised to 40%, maximum thresholds were raised for some indicators, 17 new indicators were introduced, 32 existing indicators were combined or revised, and three were dropped. The analyses in this article relate to the 48 clinical activity indicators, covering measurement and treatment activities as well as intermediate outcomes, that remained substantially unchanged or underwent only minor revisions (Table 1).

Table 1
Quality and Outcomes Framework clinical activity indicators included in study.

Data on practice performance and points scored were derived from the Quality Management and Analysis System operated by the NHS Information Centre.8 This system automatically extracts data from practices’ clinical computing systems, including:

  • the number of patients deemed appropriate for each indicator, that is, those who were in the subgroup specified by the indicator and were not excluded by the practice (Di);
  • the number of patients for whom the indicator was met (Ni);
  • the number of points scored (Pi); and
  • for year 2 onwards, the number of patients who were excluded by the practice (Ei).9

Data on practice and patient characteristics were taken from the 2006 general medical statistics database, maintained by the Department of Health. Practices were grouped on the basis of the number of registered patients in each year, from group 1 (1000–1999 patients) to group 8 (≥12 000 patients; Table 2). Most group 1 practices were single handed (92.5% in year 1).

Table 2
Number of patients and physicians in study practices, 2004–2005 to 2006–2007.

For each indicator in every practice, points scored (Pi) and reported achievement (the proportion of patients deemed appropriate by the practice for whom the targets were achieved — Ni/Di), were measured. For years 2 and 3, when exclusion data were available, the rate of exclusions (Ei/[Di+ Ei]) and population achievement (the proportion of all eligible patients, including those exception reported by the practice, for whom the targets were achieved — Ni/[Di + Ei]), were also measured. An example of how achievement rates are calculated and points are scored for the indicators is given in Box 1. Summary outcome scores were constructed as unweighted means of the scores for every indicator, following the method of Doran et al.10 All statistical analyses were performed with Stata software (version 9).
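These summary measures can be restated in code. The study's analyses were performed in Stata; the following Python sketch, with function and variable names of our own choosing, simply spells out the formulas above:

```python
def practice_summaries(indicators):
    """Unweighted mean of per-indicator rates for one practice.

    indicators -- list of (Ni, Di, Ei) tuples, one per clinical
    indicator: patients meeting the target (Ni), appropriate
    non-excluded patients (Di), and exception-reported patients (Ei).
    Returns mean reported achievement, exclusion rate, and
    population achievement, each as a percentage.
    """
    reported = [100 * n / d for n, d, e in indicators]          # Ni/Di
    exclusion = [100 * e / (d + e) for n, d, e in indicators]   # Ei/(Di + Ei)
    population = [100 * n / (d + e) for n, d, e in indicators]  # Ni/(Di + Ei)
    mean = lambda xs: sum(xs) / len(xs)                         # unweighted
    return mean(reported), mean(exclusion), mean(population)

# Two hypothetical indicators for one practice: (Ni, Di, Ei)
rep, excl, pop = practice_summaries([(60, 90, 10), (80, 100, 0)])
```

Note that ‘unweighted’ means each indicator contributes equally to the summary score, regardless of how many patients it covers.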

Box 1

Example of calculating achievement and allocating points under the Quality and Outcomes Framework

Under the Quality and Outcomes Framework, indicator DM5 rewards practices for measuring the glycated haemoglobin (HbA1c) levels of patients with diabetes. The minimum achievement threshold for this indicator was 40% in year 3, and the maximum achievement threshold was 90%. A maximum of 3 points was available for the indicator. After excluding exception-reported patients, practices achieving the target for fewer than 40% of patients earned 0 points, and practices achieving the target for 90% or more earned the maximum 3 points. Practices achieving the target for 40% to 90% of patients earned between 0 and 3 points according to the proportion of patients for whom the target was achieved.

For example: if a practice had 100 eligible patients on its diabetes register, of whom 10 were exception reported and 60 had their HbA1c levels measured, then:

number of exclusions (Ei) = 10

number of appropriate patients (Di) = 100 – 10 = 90

number of patients for whom the target was met (Ni) = 60

and:

reported achievement = Ni/Di = 60/90 = 66.7%

exclusion rate = Ei/(Di + Ei) = 10/(90 + 10) = 10.0%

population achievement = Ni/(Di + Ei) = 60/(90 + 10) = 60.0%

As reported achievement is between the minimum and maximum thresholds, the number of points scored (Pi) is calculated as:

Pi = ([reported achievement − minimum threshold] × maximum points) / (maximum threshold − minimum threshold)

= ([66.7 − 40] × 3) / (90 − 40) = 1.6
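The whole of the Box 1 calculation can be sketched as a short function (illustrative only; the function and its signature are ours, not part of the QOF specification):

```python
def qof_points(n_met, n_appropriate, n_excluded,
               min_threshold, max_threshold, max_points):
    """Illustrative QOF calculation for one indicator (per Box 1).

    n_met         -- patients for whom the target was met (Ni)
    n_appropriate -- eligible, non-excluded patients (Di)
    n_excluded    -- exception-reported patients (Ei)
    Thresholds are percentages (e.g. 40 and 90 for DM5 in year 3).
    """
    reported = 100 * n_met / n_appropriate                   # Ni/Di
    exclusion_rate = 100 * n_excluded / (n_appropriate + n_excluded)
    population = 100 * n_met / (n_appropriate + n_excluded)  # Ni/(Di + Ei)

    if reported < min_threshold:       # below minimum threshold: no points
        points = 0.0
    elif reported >= max_threshold:    # at or above maximum: full points
        points = float(max_points)
    else:                              # linear interpolation between thresholds
        points = ((reported - min_threshold) * max_points
                  / (max_threshold - min_threshold))
    return reported, exclusion_rate, population, points

# Box 1 example: indicator DM5, year 3 (min 40%, max 90%, 3 points)
rep, excl, pop, pts = qof_points(60, 90, 10, 40, 90, 3)
# rep ≈ 66.7%, excl = 10.0%, pop = 60.0%, pts = 1.6
```

The cap in the second branch is what allows a practice to earn full points while missing the target for up to 10–50% of patients, as noted in the Introduction.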

Achievement and patient-population data for 2004–2005, 2005–2006, and 2006–2007 were available for 8277 general practices in England. Practices were excluded from the study if they had fewer than 1000 patients in any year (49 practices), one or more disease registers were missing (47 practices), complete exclusion data were not available (172 practices), complete practice characteristic data were not available (210 practices), the practice relocated to a more or less affluent area during the period (164 practices), or the practice population changed in size by 25% or more (258 practices). Some practices met two or more exclusion criteria. The main analyses therefore relate to 7502 practices, providing care for more than 49 million patients. Subanalyses were undertaken for excluded practices (Appendices 1–3).

RESULTS

The 7502 general practices in the main analysis provided primary-care services for 46.7 million patients in 2004–2005, and had a mean practice population of 6226 (standard deviation [SD] 3869) patients. The total number of full-time equivalent family practitioners increased from 27 946 to 30 503 over the period (Table 2), and the mean number in each practice increased from 3.72 (SD 2.56) in year 1 to 3.83 (SD 2.63) in year 2, and to 4.07 (SD 2.84) in year 3.

QOF points scored

The maximum number of points available for the 48 analysed indicators was 392 in years 1 and 2, and 396 in year 3. The median proportion of available points scored was 96.6% in year 1, 99.2% in year 2, and 99.4% in year 3. In year 1 there was a clear progression in the median percentage of points scored, from 92.6% for group 1 (practices with 1000–1999 patients) to 97.6% for group 8 (practices with ≥12 000 patients; Figure 1). The gap between groups 1 and 8 decreased over time, from 5.1% in year 1 to 2.5% in years 2 and 3.

Figure 1
Distribution of practice scores for percentage of total QOF points scored by number of patients, year 1 (2004–2005) to year 3 (2006–2007).

Variation in the percentage of points scored decreased with increasing number of patients; hence, the interquartile range was greatest for group 1 (13.7%) and smallest for group 8 (4.9%). Variation decreased for all groups in year 2, but there was little further change in year 3.

In year 1, the proportion of practices scoring maximum points varied according to the number of patients, from 7.0% for group 2 (2000–2999 patients) to 10.5% for group 5 (6000–7999 patients; Table 3). The proportion of practices scoring maximum points generally increased over time, from 8.6% overall in year 1 to 23.7% in year 3. However, the rate of increase was slowest for group 1, and by year 3 the proportion of practices attaining maximum scores was more than twice as high for group 8 as for group 1. In year 1, the proportion of practices among the worst-performing 5% ranged from 13.4% for group 1 to 0.7% for group 8. There was little absolute change in these proportions over time.

Table 3
Proportion of practices from each group among the best and worst performing 5% of practices for QOF points scored, reported achievement, and exception reporting, from year 1 (2004–2005) to year 3 (2006–2007).

Reported achievement

The median overall reported achievement, the proportion of patients deemed appropriate by the practice for whom the targets were achieved, was 85.2% in year 1, 89.3% in year 2, and 90.9% in year 3. Increases in achievement between years were statistically significant (P<0.005 in all cases). Median reported achievement in year 1 varied with number of patients, from 83.8% for group 1 to 85.9% for group 5 (Figure 2). As was the case for points scored, variation in achievement decreased with increasing number of patients; hence, the interquartile range was greatest for group 1 (1000–1999 patients, 16.2%) and smallest for group 8 (≥12 000 patients, 6.6%). However, in contrast to the findings for points scored, both the highest and the lowest achievement rates were attained by smaller practices: 13.2% of practices from group 1 were among the highest achieving 5% in year 1, and 12.1% were among the lowest achieving 5%. In contrast, the corresponding proportions for group 8 were 1.5% and 0.9% (Table 3).

Figure 2
Distribution of practice scores for overall reported achievement by number of patients, year 1 (2004–2005) to year 3 (2006–2007).

By year 3 there was little difference in average achievement rates between practices of different sizes; a spread of 1.1% covered the median achievement of all groups. However, by this time group 1, which had the lowest median achievement in year 1, had the highest median achievement (91.5%), and group 8 the lowest (90.4%). Variation in achievement between practices decreased for all groups in year 2 and again in year 3, with the greatest reduction for group 1 (to 9.3% in year 2 and 7.2% in year 3). These patterns were consistent across all 48 individual indicators. Despite the changes over time, smaller practices remained more likely to be either very high or very low achievers in year 3. Practices with fewer than 3000 patients (groups 1 and 2) represented 20.1% of all practices, but 46.7% of the highest-achieving and 45.1% of the lowest-achieving practices.

Exception reporting and population achievement

Practices’ reported achievement rates depend, in part, on the number of patients they exclude. Practices excluded a median of 6.6% of patients in year 2 and 7.4% in year 3. In both years there was a trend for higher exception reporting rates in practices with more patients, ranging from 6.3% for group 1 (1000–1999 patients) to 6.8% for group 8 (≥12 000 patients) in year 2, rising to 6.5% and 7.7% respectively in year 3 (Figure 3). As with achievement rates, there was greater variation in exception reporting rates in smaller practices, with a higher proportion of both the highest and lowest exception reporters in group 1 (Table 3).

Figure 3
Distribution of practice scores for overall exception reporting by number of patients, year 2 (2005–2006) to year 3 (2006–2007).

The median overall population achievement, the proportion of all patients for whom the targets were achieved, including those exception reported by the practice, was 83.0% in year 2 and 83.8% in year 3 (Figure 4). The distributions were similar to those for reported achievement, with group 1 having the highest median population achievement in year 3 (84.6%) but also the greatest variation (interquartile range 7.7%). The finding that group 1 practices were more likely to have both very high and very low levels of reported achievement was not, therefore, explained by their rates of exception reporting.

Figure 4
Distribution of practice scores for population achievement by number of patients, year 2 (2005–2006) to year 3 (2006–2007).

DISCUSSION

Summary of main findings

Despite a lack of clear evidence, doubts over the quality of care delivered by small, and particularly solo, practices have frequently been raised in the past. Under the QOF, this study shows that differences in performance between practices of different size depend on the measure used. When measuring quality in terms of points scored, practices with fewer patients performed worse, on average, than those with more. However, when measuring quality in terms of achievement rates, the average performance of the smallest practices was only marginally worse than that in larger practices in year 1 (2004–2005), and the gap had closed by year 3. This discrepancy is due to the maximum achievement thresholds, which make it impossible to discriminate between higher-performing practices, whose actual rates of achievement may vary by up to 50%, when using points scored as the measure of quality.

Smaller practices face several disadvantages under the pay-for-performance scheme. They tend to have relatively more patients with chronic disease, and until 2009 received less remuneration per patient under the scheme because of the formula used to adjust payments for disease prevalence.11 Given that practices are remunerated on the basis of points scored, with achievement beyond the maximum thresholds not rewarded, the payment system does not adequately recognise the achievements of high-performing practices, many of which are small. Smaller practices therefore had to work harder for relatively less financial reward under the scheme, and yet collectively their performance improved at the fastest rate.

A particular concern with physicians working alone is that they have greater opportunity to defraud the system, for example by inappropriately exception reporting patients or falsely claiming a target has been achieved, as they do not have colleagues to directly monitor their behaviour. Fraud is difficult to monitor, but this study found that the smallest practices (predominantly single handed) had the lowest average rates of exception reporting. Patterns of population achievement, which includes exception-reported patients, were also similar to those for reported achievement, which suggests that single-handed practitioners were no more likely to have inflated their achievement scores through inappropriate exception reporting than their peers in larger practices. In addition, patterns of achievement for activities measured externally, such as control of glycated haemoglobin (HbA1c) levels in patients with diabetes, were similar to those for indicators measured and reported internally.

Despite the generally high performance of small practices, many had very low achievement rates. Practices with fewer than 3000 patients represented one-fifth of all practices but nearly half of the worst-performing 5% in terms of reported achievement. Although these poorly performing practices improved at the fastest rate, in year 3 it remained the case that a significant minority of small practices were apparently providing substantially poorer care than the national average. Small practices are more likely to be located in deprived areas and to be poorly organised,12 but these factors are only weakly associated with performance under the scheme, and practices with these characteristics are capable of high levels of achievement.13,14 Other factors must, therefore, be involved.

In 2004, one solution to the problem of poor quality of primary care would have been to close small practices. However, in addition to removing many of the worst-performing practices it would have removed many of the best: over 45% of the practices in the top 5% for reported achievement had fewer than 3000 patients. A more logical approach would be to reduce variation, by bringing the worst-performing practices towards the level of the best. This appears to have occurred under the pay-for-performance scheme, through incentivising a systematic approach to a limited range of chronic diseases and publicly reporting performance. However, variation in achievement, and in exception reporting rates, remains strongly associated with practice size. This is partly a mathematical relationship: variation in all patient and practice characteristics, and in patient outcomes, will be inversely related to list size regardless of the actual quality of care provided. This has consequences for how pay-for-performance schemes measure performance, as each additional patient for whom a target is achieved or missed, whether due to the quality of clinical care provided or due to factors beyond practices’ control, has greater consequences for practices with fewer patients.
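The purely mathematical component of this size–variation relationship can be illustrated with a quick simulation (hypothetical list sizes, achievement rate, and practice counts; every simulated practice has identical underlying quality):

```python
import random

random.seed(1)

def achievement_spread(list_size, true_rate=0.85, n_practices=2000):
    """Interquartile range of observed achievement when every practice
    has the same true quality: each patient meets the target with
    probability true_rate, independently of the others."""
    rates = []
    for _ in range(n_practices):
        met = sum(random.random() < true_rate for _ in range(list_size))
        rates.append(100 * met / list_size)
    rates.sort()
    return rates[int(0.75 * n_practices)] - rates[int(0.25 * n_practices)]

small = achievement_spread(list_size=50)   # e.g. ~50 eligible patients
large = achievement_spread(list_size=500)
```

With these arbitrary numbers the interquartile range for the 50-patient practices comes out several times wider than for the 500-patient practices, even though the underlying quality of care is identical everywhere.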

Strengths and limitations of the study

The introduction of the QOF in 2004 provided the first opportunity to measure aspects of quality systematically. This study was able to measure achievement on 48 indicators across 7502 practices. However, limitations of the data restricted it to practice-level analyses, and it was not therefore possible to control for differences in age, sex, and comorbidity of patients.

Comparison with existing literature

Small practices in general, and single-handed practitioners in particular, come under regular pressure to join their colleagues in larger practices, and that pressure has intensified with the developments in general practice over the last decade, culminating in the 2008 Next Stage Review.4 Between 2004–2005 and 2006–2007, the number of single-handed practices in England decreased, particularly in more deprived areas,15 as over 2500 additional full-time equivalent physicians entered general practice. Despite this, previous research suggests that there is little relationship between the size of a practice and its ability to provide high quality care.16 Overall, some aspects of quality are associated with smaller practices, such as patient ratings of access or continuity of care, and others with larger practices, such as data recording or organisation of services.16–18 There is also no consistent association between practice size and differences in outcome, for example number of hospital admissions for asthma, epilepsy, or diabetes; avoidable admissions;19 or quality of care for patients with ischaemic heart disease.20

Implications for future research

The particular problems associated with single-handed practice — lack of peer review, risk of clinical isolation and of abuse of trust2 — need to be addressed. The principal question is whether they are soluble without abolishing single-handed status. Single-handed practitioners are subject to the same clinical governance and appraisal arrangements as those in group practices, receive the same monitoring from primary care trusts, and, since 2004, have been measured against the same clinical quality targets under the QOF. The present results suggest that small practices, most of which are single handed, achieve, on average, similar levels of performance to larger group practices, despite an arrangement that systematically disadvantages them, but a significant minority do have low rates of achievement and the reasons for this require more attention. However, if we ask questions about why the smallest practices often appear to be the worst we should also be asking why they often appear to be the best.

Appendix 1. 

Practices excluded from the main study

Results for the 775 family practices excluded from the main analyses are presented below. Results are also presented for a further 507 practices that were missing data on numbers of patients for one or more years. The 775 practices for which patient numbers were available provided primary care services for a mean population of 4356 (SD 3464) patients in 2004–2005. The mean number of full-time equivalent physicians increased from 2.49 (SD 2.42) in year 1, to 2.64 (SD 2.48) in year 2 and to 3.17 (SD 2.89) in year 3 (Appendix 2). The total number of full-time equivalent physicians decreased from 2563 to 2477 over the period (Appendix 3). The number of excluded practices with 1000 to 1999 patients decreased from 154 (12.0%) to 111 (8.7%).

Excluded practices generally performed worse than included practices on the pay-for-performance scheme, although the gap narrowed over time. The patterns of performance, in terms of points scored and reported achievement, were similar to those found in the included practices. The patterns for exception reporting were different, in that smaller excluded practices exception reported more patients than larger excluded practices.

Points scored

Excluded practices scored fewer Quality and Outcomes Framework points on average than included practices (mean 81.7% and 92.7% of available points respectively in year 1; Appendix 2). There was also greater variation in scores for excluded practices. As with included practices, there was a clear progression in the median percentage of points scored by excluded practices, ranging from 82.1% for group 1 (practices with 1000 to 1999 patients) to 97.7% for group 8 (practices with 12 000 or more patients). All groups improved over time, but the smallest practices improved at the fastest rate, and the gap between groups 1 and 8 decreased from 15.6% in year 1 to 5.6% in year 3. Despite this, small practices remained more likely to perform badly in year 3, with 12.2% of the worst-performing 5% of practices coming from group 1, compared with 0% from group 8.

Achievement

The median overall reported achievement for excluded practices was 80.9% in year 1, 87.8% in year 2, and 89.9% in year 3. Median reported achievement in year 1 varied with practice size, from 76.5% for group 1 to 85.2% for group 8. As was the case for included practices, variation in achievement decreased with increasing number of patients (interquartile range for group 1 of 21.7% compared with 5.8% for group 8), and the smallest practices were more likely to have either very high or very low achievement rates: 7.9% of practices from group 1 were among the highest achieving 5% in year 1, and 4.6% were among the lowest-achieving 5% (the latter figure is relatively low because 30% of the lowest-performing practices had missing list-size data). In contrast, the corresponding proportions for group 8 were 2.3% and 0%. By year 3 there was little difference in average achievement rates between groups: the difference in median achievement between groups 1 and 8 fell from 8.7% to 1.9%. Variation in achievement also decreased for all groups over time.

Exception reporting and population achievement

Excluded practices exception reported a median of 7.0% of patients in year 2 and 8.0% in year 3. In contrast to included practices, there was a trend for lower exception reporting rates in practices with more patients, ranging from 7.5% for group 1 to 6.5% for group 8 in year 2, rising to 8.4% and 7.9% respectively in year 3. As with achievement rates, there was greater variation in exception reporting rates in smaller practices, with a higher proportion of both the highest and lowest exception reporters in group 1.

The median overall population achievement, the proportion of all patients for whom the targets were achieved including those exception reported by the practice, was 80.3% in year 2 and 82.1% in year 3. There was a clear progression in the median population achievement of excluded practices in year 2, ranging from 76.9% for group 1 to 82.3% for group 8. By year 3 the gap between these groups had reduced from 5.4% to 3.0%.

Appendix 2

Summary data for included and excluded practices

Parameter                         Year   Included practices   Excluded practices
Mean (SD) FTE GPs                   1    3.72 (2.56)          2.49 (2.42)
                                    2    3.83 (2.63)          2.64 (2.48)
                                    3    4.07 (2.84)          3.17 (2.89)
Mean (SD) QOF points scored         1    92.68 (8.73)         81.71 (20.25)
                                    2    96.76 (5.20)         91.88 (11.91)
                                    3    95.93 (6.38)         91.33 (12.33)
Mean (SD) reported achievement      1    83.00 (8.90)         76.94 (13.84)
                                    2    88.01 (5.74)         84.83 (9.34)
                                    3    90.01 (4.55)         87.86 (7.67)
Mean (SD) exception reporting       2    6.97 (2.82)          7.94 (3.94)
                                    3    7.72 (2.84)          8.52 (3.61)
Mean (SD) population achievement    2    82.06 (5.53)         77.96 (9.13)
                                    3    83.19 (4.68)         80.50 (7.27)
FTE = full-time equivalent. QOF = Quality and Outcomes Framework. SD = standard deviation.

Appendix 3

Number of full-time equivalent physicians in excluded practices, 2004–2005 to 2006–2007

                           Year 1 (2004–2005)        Year 2 (2005–2006)        Year 3 (2006–2007)
Number of patients  Group  Practices    %   FTE GPs  Practices    %   FTE GPs  Practices    %   FTE GPs
Missing               0         359   28.0      397       424   33.1      268       457   35.7      124
1000–1999             1         154   12.0      132       125    9.8      111       107    8.4       89
2000–2999             2         257   20.1      281       202   15.8      226       171   13.3      167
3000–3999             3         137   10.7      205       120    9.4      175       106    8.3      151
4000–5999             4         148   11.5      347       165   12.9      378       173   13.5      387
6000–7999             5          89    6.9      341        92    7.2      341        97    7.6      369
8000–9999             6          58    4.5      261        70    5.5      322        70    5.5      351
10 000–11 999         7          37    2.9      230        31    2.4      207        41    3.2      259
≥12 000               8          43    3.4      369        53    4.1      480        60    4.7      580
Total                          1282  100.0     2563      1282  100.0     2508      1282  100.0     2477

FTE = full-time equivalent.

Notes

Funding body

There was no direct source of funding for this study. However, the National Primary Care Research and Development Centre receives core funding from the UK Government Department of Health. The views expressed are those of the authors and not necessarily those of the Department of Health.

Ethical approval

Ethics committee approval was not required for this study since it was based on publicly available data.

Competing interests

Martin Roland served as an academic advisor to the government and British Medical Association negotiating teams during the development of the Quality and Outcomes Framework scheme during 2001 and 2002. Tim Doran and Stephen Campbell have served as academic advisors for the Quality and Outcomes Framework review process conducted by the National Institute for Health and Clinical Excellence since 2009. The other authors have no competing interests.

Discuss this article

Contribute and read comments about this article on the Discussion Forum: http://www.rcgp.org.uk/bjgp-discuss

REFERENCES

1. Secretary of State for Health. The NHS Plan: a plan for investment. A plan for reform. London: The Stationery Office; 2000. Cm 4818–1.
2. Smith J. Safeguarding patients: lessons from the past — proposals for the future. London: HMSO; 2004. Shipman Inquiry Fifth Report. Cm 6394.
3. Audit Commission. A focus on general practice in England. London: Audit Commission; 2002.
4. Department of Health. High quality care for all: NHS next stage review. London: Department of Health; 2008.
5. Roland M. Linking physicians’ pay to the quality of care — a major experiment in the United Kingdom. N Engl J Med. 2004;351(14):1448–1454. [PubMed]
6. General Practitioners Committee. Your contract your future. London: British Medical Association; 2002.
7. Sutton M, McLean G. Determinants of primary care medical quality measured under the new UK contract: cross sectional study. BMJ. 2006;332(7538):389–390. [PMC free article] [PubMed]
8. The NHS Information Centre. Quality and Outcomes Framework Online GP Database. http://www.qof.ic.nhs.uk/ (accessed 19 Apr 2010)
9. The NHS Information Centre. The Quality and Outcomes Framework 2006/07 exception report. http://www.ic.nhs.uk/statistics-and-data-collections/primary-care/general-practice (accessed 4 Aug 2010)
10. Doran T, Fullwood C, Gravelle H, et al. Pay for performance programs in family practices in the United Kingdom. N Engl J Med. 2006;355(4):375–384. [PubMed]
11. Guthrie B, McLean G, Sutton M. Workload and reward in the Quality and Outcomes Framework of the 2004 general practice contract. Br J Gen Pract. 2007;56(532):836–841. [PMC free article] [PubMed]
12. Wang Y, O’Donnell C, Mackay D, Watt G. Practice size and quality attainment under the new GMS contract: a cross-sectional analysis. Br J Gen Pract. 2006;56(532):830–835. [PMC free article] [PubMed]
13. Gulliford M, Ashworth M, Robotham D, Mohiddin A. Achievement of metabolic targets for diabetes by English primary care practices under a new system of incentives. Diabet Med. 2007;24(5):505–511. [PubMed]
14. Doran T, Fullwood K, Kontopantelis E, Reeves D. The effect of financial incentives on inequalities in the delivery of primary care in England. Lancet. 2008;372(9640):728–736. [PubMed]
15. Ashworth M, Seed P, Armstrong D, et al. The relationship between social deprivation and the quality of care: a national survey using indicators from the Quality and Outcomes Framework. Br J Gen Pract. 2007;57(539):441–448. [PMC free article] [PubMed]
16. Campbell S, Hann M, Hacker J, et al. Identifying predictors of high quality care in English general practice: an observational study. BMJ. 2001;323(7316):784–787. [PMC free article] [PubMed]
17. Campbell J, Ramsay J, Green J. Practice size: impact on consultation length, workload, and patient assessment of care. Br J Gen Pract. 2001;51(469):644–650. [PMC free article] [PubMed]
18. Van den Hombergh P, Engels Y, et al. Saying ‘goodbye’ to single-handed practices; What do patients and staff lose or gain? Fam Pract. 2005;22(1):20–27. [PubMed]
19. Hippisley-Cox J, Pringle M, Coupland C, et al. Do single handed practices offer poorer care?: Cross sectional survey of processes and outcomes. BMJ. 2001;323(7308):320–323. [PMC free article] [PubMed]
20. Majeed A, Gray J, Ambler G, et al. Association between practice size and quality of care of patients with ischemic heart disease; cross sectional study. BMJ. 2003;326(7385):371–372. [PMC free article] [PubMed]

Articles from The British Journal of General Practice are provided here courtesy of Royal College of General Practitioners