BMJ. May 17, 2003; 326(7398): 1070.
PMCID: PMC155692

Systematic review of scope and quality of electronic patient record data in primary care

Krish Thiru, research fellow,1 Alan Hassey, general practitioner,1 and Frank Sullivan, general practitioner2


Objective To systematically review measures of data quality in electronic patient records (EPRs) in primary care.

Design Systematic review of English language publications, 1980-2001.

Data sources Bibliographic searches of medical databases, specialist medical informatics databases, conference proceedings, and institutional contacts.

Study selection Studies selected according to a predefined framework for categorising review papers.

Data extraction Reference standards and measurements used to judge quality.

Results Bibliographic searches identified 4589 publications. After primary exclusions 174 articles were classified, 52 of which met the inclusion criteria for review. Selected studies were primarily descriptive surveys. Variability in methods prevented meta-analysis of results. Forty eight publications were concerned with diagnostic data, 37 studies measured data quality, and 15 scoped EPR quality. Reliability of data was assessed with rate comparison. Measures of sensitivity were highly dependent on the element of EPR data being investigated, while the positive predictive value was consistently high, indicating good validity. Prescribing data were generally of better quality than diagnostic or lifestyle data.

Conclusion The lack of standardised methods for assessment of quality of data in electronic patient records makes it difficult to compare results between studies. Studies should present data quality measures with clear numerators, denominators, and confidence intervals. Ambiguous terms such as “accuracy” should be avoided unless precisely defined.


The NHS is becoming increasingly accountable for the services it provides. One element of that accountability is clinical governance, which, in turn, depends crucially on the availability of high quality clinical information. This relies on the data collected. 1 A clear message emerging from government policy initiatives is the need for high quality data on health accessible through electronic patient record (EPR) systems. In this context, such systems will inevitably replace their paper based predecessors. They represent a fundamental change in how health professionals approach the management of clinical information. As the service acclimatises to new technology, the need for assessment of quality and improvement of primary care datasets has been repeatedly emphasised. 2 However, the criteria against which quality should be judged remain unclear.

We identified one review of mainly secondary care studies that described system and organisational factors that affect quality of the data in EPR. 3 We carried out a similar review but in primary care.


We searched all major bibliographic databases and several specialist datasets during the last quarter of 2001 (see bmj.com for databases and sources and web table A for search criteria). Under our primary exclusion criteria we excluded duplicate publications, editorials, letters, poster presentations, and coding studies, as well as publications based on EPRs set in health maintenance organisations, administration, and single variable databases (such as prescribing databases or disease registers). We searched for citations of papers that used a reference standard for assessment of quality. When relevance was ambiguous (for example, if we were unable to deduce whether the study involved EPRs or paper records) we checked the abstract and MeSH headings through PubMed. When ambiguity remained we obtained the full paper and made a collective decision.

We established a framework for categorising and selecting review papers; defined the reference standards and measurements used to judge quality; and examined the quality of EPRs in primary care (box). We extracted data on study design, countries involved, number of sites, measurement criteria, description of reference standard, research topic, main results, name of EPR, and data structure. Eligible papers had to satisfy at least one aspect (numbered) of each category (A-C) within the box.


We identified 4589 abstracts and categorised 174 documents after primary exclusions. Of these, we included 47 journal publications, four reports, 4-7 and one thesis 8 from 1980-2001. Thirty seven studies measured data quality, and 15 used electronic patient records and commented on quality in the presence of a reference standard (scoping). These were analysed separately (table 1). Forty eight studies assessed diagnostic data, 20 assessed management information, and 13 examined wider aspects of routine data.

Table 1
Proportions of data type being investigated, reference standards used to assess quality, and commonest measures of quality. Figures are numbers (percentage) of studies

Measuring data quality

Thirty one publications were from the United Kingdom. A similar proportion had been published since 1995. Table 2 shows characteristics of the 37 studies that measured quality of data. Table B on bmj.com gives full details of categorisation (according to that shown in the box) and characteristics. Eight studies were prospective, in which a network of practices was established from which to extract data. Although these studies were prospective, the data extraction was primarily cross sectional. The remaining articles were cross sectional or retrospective surveys. Two studies were interventional: one a case-control study involving onsite training and the other a before and after software update study. Both showed substantial improvements in recording levels after the intervention. 9,10 A retrospective cohort study of data conscious practices that took advantage of generic national services also showed an increase in completeness and accuracy of EPRs over five years. 11

Table 2
Characteristics of 37 studies that measured data quality

Framework for assessing eligibility of publications for review

All three categories (A-C) needed to be satisfied for a paper to be selected

A Reference standard

A modification of the “distance from patient” concept, which classified the reference standard used to judge quality 3

  1. Studies that used objective “close to patient” standards by using techniques such as video recording or direct examination
  2. Studies that used interviews or questionnaire surveys of patient, next of kin, or their immediate carers as reference standard
  3. Studies that used routine consultation data (databases, EPRs, paper records, discharge letter, etc) as standard reference
  4. Studies that used national statistics or equivalent survey results as their reference standard

B Study objectives

  1. Studies that measured change in EPR data quality or those that measured EPR data quality were classified as measuring data quality
  2. Studies that used EPRs and commented on their quality were classified as scoping data quality

C Data types

Publications that investigate:

  1. Diagnostic or symptom state of the patient
  2. Patient management data—for example, health promotion, drug treatment, referrals, tests
  3. Wider aspects of patient and practice management—for example, family history, ethnicity, socioeconomic status, immunisation, hospital episodes, consultation rates

Structured data (codes, classifications, and nomenclatures) were most commonly investigated. Although textual data were mentioned, they rarely received detailed attention. 12 Only one study considered textual data in any detail. 13 Twelve documents did not present their data structure (that is, the name of the coding system), while most did not present the precise codes being investigated (table 2). UK publications generally used Read and OXMIS (Oxford medical information systems) codes. In other countries the ICPC (international classification of primary care) codes were more widely used. ICD (international classification of diseases) codes act as a referencing standard for these primary care coding systems. When there were deficits in the descriptive ability of a coding strategy, subsidiary codes (for example, chapter headings from the British National Formulary; Prescription Pricing Authority) were used to enhance the data. 6,15-18

Quality of data (reliability) was usually measured with rate comparisons. Data validity was expressed under a range of terms (completeness, correctness, accuracy, consistency, and appropriateness), which were rarely defined. Sensitivity (completeness) was the commonest such index (table 1, web extra table B on bmj.com). One study used video recording of the consultation to evaluate the EPR content compared with the use of notes and UK national statistics (fourth national study of morbidity in general practice, MSGP4) for comparative measures. 18 Seven studies carried out questionnaire and telephone surveying for a reference standard with data gathered from the patient, carer, or both. 13,14,19,20,22-24 These studies involved the sampling of a study population from the database for subsequent validation through questionnaires. The reference standard varied from “life time experience of morbidity” to more structured investigation of diagnostic status through validated questionnaires. 20,22 Triangulation with multiple sources (prescription data, clinician diagnosis in EPR, or notes) was used for further validation. 23

Twenty four studies used clinical information gathered during the consultation as a reference standard (table 1). Seventeen publications used triangulation within the EPR to test internal consistency of data. Fifteen studies were conducted after 1994. Twelve relied on medication data as the internal reference standard. Sixteen used paper based information as the reference standard. Often EPR diagnostic status was appraised through electronic prescribing information and subsequently validated against the paper notes. Hospital discharge details have also been used to evaluate EPR diagnostic status through practitioner responses, discharge summaries, and consultants' letters. 12,14,17,19 Time of diagnosis and referral data were also evaluated under this reference standard. Dissonance between data from secondary and primary care has been documented, though the presence of hospital diagnosis and procedural data has been found to improve the quality of data in primary care. 12,17,19,20,25 Eighteen studies used national statistics or survey data as a reference standard for data reliability. 4-7,11,13,21,23,25-34 A third of UK studies used MSGP4 as a reference standard for rate comparisons.

Scoping data

We identified 15 studies that used EPR data for research or practice management. Although the intention of these studies was not to measure data quality, they gave insight into issues of data validation. These studies relied more on measures of positive predictive value than on measures of sensitivity (table 1) to meet their needs. Fourteen studies considered the diagnostic status of the patient, with 10 publications dealing primarily with information on patient identification and case validation. 35-44 Three used survey techniques to establish diagnostic status. 40,41,45 Of the 12 retrospective investigations, seven used centralised datasets. 36-38,40,41,44,46 These “scoping” studies were more than twice as likely as studies that measured data quality to present confidence intervals (10/15 (67%) v 11/37 (30%)).
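The 10/15 and 11/37 figures above are binomial proportions, and the review itself argues that such proportions should carry confidence intervals. A minimal sketch (Python, using Wilson score intervals, which behave well at these small sample sizes) shows what that reporting would look like; the choice of the Wilson method is ours, not the review's:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for the proportion k/n."""
    p = k / n
    adj = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / adj
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / adj
    return centre - half, centre + half

# Counts taken from the comparison in the text: scoping studies
# v studies that measured data quality.
for label, k, n in [("scoping studies", 10, 15),
                    ("measurement studies", 11, 37)]:
    lo, hi = wilson_ci(k, n)
    print(f"{label}: {k}/{n} = {k/n:.0%} (95% CI {lo:.0%} to {hi:.0%})")
```

On these counts the two point estimates differ by a factor of about two, yet the 95% intervals still overlap slightly; making that uncertainty visible is precisely what the review recommends.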

Levels of data recording

Table 3 shows that prescribing data are generally the most sensitive. The ability to link prescriptions with diagnosis was the favoured means of identifying patients and establishing the predictive validity of diagnostic codes. The sensitivity of other EPR elements was wide ranging, while positive predictive value was consistently high. Those diseases with clear diagnostic criteria were generally better recorded, as were data on specific procedures. 20 Lifestyle and socioeconomic data were rarely studied and then only in terms of sensitivity. Results indicated lower recording levels than for diagnosis and medication. 11,18,47

Table 3
Selected sensitivities and positive predictive values of data from electronic patient records


We believe this is the first systematic review to investigate the measurement of quality of data in primary care. Most of the research has been published since 1995, reflecting the increasing importance and use of EPRs. The categorisation provided a framework for selecting and describing the most important publications. This showed that patient identification and diagnostic data were the focus of most studies of data quality. Most studies were descriptive surveys, which would seem ideal for an environment where external forces (for example, the pace of technological development) set the direction of change. The scarcity of interventional studies reflects researchers' passive role and their limited ability to control the study environment. Appraisal of data quality has favoured the selection of practices that embrace technology, which is the likely reason for the purposive sampling used in many studies. Consequently, the EPR quality reported in the literature is likely to be an overestimate of the general picture.

The dominance of UK publications is unsurprising given the scope of this review. However, it also suggests that UK researchers understand the importance of the quality of EPR data in terms of health policy and validated research databases such as the general practice research database (GPRD), the doctors independent network (DIN) database, and the medicines monitoring unit (MEMO). Centrally maintained and quality assured, these databases act as a rich source of data for epidemiological research. Their size and success with pharmacological data are the prime attraction to overseas collaborators.

Publications from non-English speaking countries were disadvantaged under our selection criteria. Those that were identified used similar techniques to measure data quality. Like early UK publications, non-UK studies emphasise where, how, and by whom data are collected and the reliability of the process. This focus is no longer present in UK primary care, where clinicians directly collect data during the consultation.

Measuring quality

The element of the EPR being investigated (numerator) and the components of the reference standard used to appraise its quality (denominator) were often not clearly defined within the literature (for instance, diagnostic code/diagnostic criteria). When they were defined there was inconsistency between studies. This makes comparisons risky and meta-analytical interpretation of results impossible. It may be a reflection of the immaturity of the discipline that more standardised approaches have not yet evolved.

Measurement theory requires that both the concepts of validity and reliability be addressed. Reliability (a precursor to validity) is a measure of stability and is appraised through the subjective comparison of rates and prevalence. Many studies used old statistics (for example, MSGP4) or variations between practices to make judgments on the reliability of “live” data. Such methods cannot measure validity of the EPR in reflecting the “truth.” Sensitivity and positive predictive value, the most widespread measures of data validity, presuppose that the selected denominator is an adequate representation of this truth. Surveys and questionnaires can be of dubious accuracy. Reference standards that emanate from the patient and carers present different but important perspectives on morbidity or concordance with treatment. What is the real health status of the patient? The answer exists in subjective (perceived), objective, and diagnostic dimensions. Each needs to be measured by different techniques and its appropriateness for EPR validation considered. To aid interpretation of the resulting proportions and to facilitate comparisons between populations, confidence intervals should be provided.
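The reporting standard argued for above (an explicit numerator, an explicit denominator, and a confidence interval for each validity measure) can be made concrete with a short sketch. The counts below are hypothetical, not drawn from any study in this review, and the Wilson score interval is one reasonable choice among several:

```python
from math import sqrt

def wilson(k, n, z=1.96):
    """Wilson score 95% confidence interval for the proportion k/n."""
    p = k / n
    adj = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / adj
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / adj
    return centre - half, centre + half

# Hypothetical counts for one diagnostic code, EPR v reference standard:
both = 180       # in the EPR and confirmed by the reference standard
epr_only = 20    # in the EPR but not confirmed (false positives)
ref_only = 60    # in the reference standard but missing from the EPR

# Sensitivity (completeness): confirmed EPR entries / all true cases
sens_num, sens_den = both, both + ref_only
# Positive predictive value (correctness): confirmed entries / all EPR entries
ppv_num, ppv_den = both, both + epr_only

for name, k, n in [("sensitivity", sens_num, sens_den),
                   ("positive predictive value", ppv_num, ppv_den)]:
    lo, hi = wilson(k, n)
    print(f"{name}: {k}/{n} = {k/n:.0%} (95% CI {lo:.0%} to {hi:.0%})")
```

Printing the numerator and denominator alongside each percentage, as here, is what lets a reader see that “completeness” and “correctness” are calculated against different denominators, and so cannot be compared directly.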

What is already known on this topic

The demonstration of quality is central to the NHS strategic agenda

Data from electronic records are expected to have a central role within healthcare commissioning, quality control, clinical governance, and the new GP contract

No standard methods of measuring data quality have been described

What this study adds

A framework for categorising and selecting papers that report data quality in primary care

Reliability of data was measured through rate comparison in 73% of studies, while validity was calculated mostly through measures of sensitivity

Markers of quality should comprise internal reference standards based on objective and diagnostic EPR elements that have high positive predictive value

When the opportunity to record clinical data in different forms (paper and computer) exists, this inevitably reduces the validity of any single form as a true reference standard. The use of paper notes to assess EPR validity will become increasingly inaccurate as clinicians migrate to electronic systems. In the medium term it is best to consider several independent markers of quality, and those studies that used several explicit reference standards (triangulation) were more likely to reflect the true quality of electronic data (see table B on bmj.com).

To facilitate comparisons of data quality across sites and systems, it is essential to have a reference standard. User friendly “point of service” technologies have ensured that electronic prescription data have rapidly become accepted as sensitive and highly predictive when used appropriately for diagnosis validation. Similarly, record linkage and automated population of EPRs with investigations and test results will offer alternative objective markers against which to test the internal consistency of EPRs. The sensitivity of these markers as a reference standard may vary, but their predictive abilities are likely to be high. In the longer term we recommend the establishment of internal reference standards based on those objective and diagnostic EPR elements recognised as having high positive predictive value (that is, diagnostic codes, prescriptions, test results, referral outcomes, procedural codes). Such reference standards can then be used to explore measures of sensitivity.

Supplementary Material

[extra: Data sources, search terms and tables]


We thank E Mitchell (Tayside Centre for General Practice) and N Booth (Sowerby Centre for Health Informatics at Newcastle) for access to their specialist medical informatics databases.

Contributors: KT wrote the plan, collected and analysed the data, and wrote the paper. AH directed the work, commented on design, helped to decide which papers to include, and helped to write the paper. FS supervised the review, guided on methods, helped to write the paper, commented on drafts, and is guarantor.

Funding: Fisher Medical Centre Research Unit is funded by the NHS Executive Northern and Yorkshire Region. The guarantor accepts full responsibility for the conduct of the study, had access to the data, and controlled the decision to publish.

Competing interests: None declared.

Details of data sources, search terms, and two extra tables can be found on bmj.com


1. Moss F. Spreading the word: information for quality. Qual Health Care 1994; 46-50.
2. House of Lords. Select committee on science and technology: fourth report. 2001. http://www.parliament.the-stationery-office.co.uk/pa/ld200001/ldselect/ldsctech/57/5701.htm (accessed 1 Jan 2003).
3. Hogan WR, Wagner MM. Accuracy of data in computer-based patient records. J Am Med Inform Assoc 1997;4: 342-55.
4. Eames M, ed. The general practice research database: data quality in measuring morbidity and health: what information can general practice deliver. Hatfield: University of Hertfordshire, 1996.
5. Department of General Practice. GPASS data validation report. Aberdeen: University of Aberdeen, 1995.
6. Oxfordshire MAAG. Case study: a review of the Oxfordshire scheme. Collection of health data from general practice. Oxford: Primary Care Information Services (PRIMIS), 2000.
7. PRIMIS. Nottingham primary care health data project (NPCHDP). Oxford: Primary Care Information Services (PRIMIS), 2000.
8. Williams J. A study of the factors that affect the quantity and quality of data held on computer based medical information systems in a group of English general practices. [MSc thesis.] Guildford: University of Surrey, 2001.
9. Teasdale S, Bainbridge M. Interventions for improving information management in family practice. Inform Healthcare Austr 1998;7: 38-45.
10. Hiddema-van de Wal A, Smith RJA, van der Werf GT, Meyboom-de Jong B. Towards improvement of the accuracy and completeness of medication registration with the use of an electronic medical record (EMR). Fam Pract 2001;18: 288-91.
11. Thiru K, de Lusignan S, Hague N. Have the completeness and accuracy of computer medical records in general practice improved in the last five years? The report of a two-practice pilot study. Health Inform J 1999;5: 224-32.
12. Jick H, Jick SS, Derby LE. Validation of information recorded on general practitioner based computerised data resource in the United Kingdom. BMJ 1991;302: 766-8.
13. Vlug AE, van der Lei J, Mosseveld BM, van Wijk MA, van der Linden PD, Sturkenboom MC, et al. Postmarketing surveillance based on electronic patient records: the IPCI project. Methods Inform Med 1999;38: 339-44.
14. Van Staa TP, Abenhaim L, Cooper C, Zhang B, Leufkens HGM. The use of a large pharmacoepidemiological database to study exposure to oral corticosteroids and risk of fractures: validation of study population and results. Pharmacoepidemiol Drug Saf 2000;9: 359-66.
15. Nazareth I, King M, Haines A, Rangel L, Myers S. Accuracy of diagnosis of psychosis on general practice computer system. BMJ 1993;307: 32-4.
16. Johnson N, Mant D, Jones L, Randall T. Use of computerised general practice data for population surveillance: comparative study of influenza data. BMJ 1991;302: 763-5.
17. Jick H, Terris BZ, Derby LE, Jick SS. Further validation of information recorded on a general practitioner based computerised data resource in the UK. Pharmacoepidemiol Drug Saf 1992;1: 347-9.
18. Pringle M, Ward P, Chilvers C. Assessment of the completeness and accuracy of computer medical records in four practices committed to recording data on computer. Br J Gen Pract 1995;45: 537-41.
19. Van Staa TP, Abenhaim L. The quality of information recorded on a UK database of primary care records: a study of hospitalizations due to hypoglycemia and other conditions. Pharmacoepidemiol Drug Saf 1994;3: 15-21.
20. Whitelaw FG, Nevin SL, Milne RM, Taylor RJ, Taylor MW, Watt AH. Completeness and accuracy of morbidity and repeat prescribing records held on general practice computers in Scotland. Br J Gen Pract 1996;46: 181-6.
21. Pearson N, O'Brien J, Thomas H, Ewings P, Gallier L, Bussey A. Collecting morbidity data in general practice: the Somerset morbidity project. BMJ 1996;312: 1517-20.
22. Van Weel C. Validating long term morbidity recording. J Epidemiol Community Health 1995;49(suppl 1): 29-32.
23. Whitelaw FG, Nevin SL, Taylor RJ, Watt AH. Morbidity and prescribing patterns for the middle-aged population of Scotland. Br J Gen Pract 1996;46: 707-14.
24. Suarez AP, Staffa JA, Fletcher P, Jones JK. Reason for discontinuation of newly prescribed antihypertensive medications: methods of a pilot study using computerized patient records. Pharmacoepidemiol Drug Saf 2000;9: 405-16.
25. Kaye JA, Derby LE, Mar Melero-Montes M, Quinn M, Jick H. The incidence of breast cancer in the General Practice Research Database compared with national cancer registration data. Br J Cancer 2000;83: 1556-8.
26. Hansell A, Hollowell J, Nichols T, McNiece R, Strachan D. Use of the general practice research database (GPRD) for respiratory epidemiology: a comparison with the 4th morbidity survey in general practice (MSGP4). Thorax 1999;54: 413-9.
27. Hollowell J. The general practice research database: quality of morbidity data. Popul Trends 1997;37: 36-40.
28. Boydell L, Grandidier H, Rafferty C, McAteer C, Reilly P. General practice data retrieval: the Northern Ireland project. J Epidemiol Community Health 1995;49(suppl 1): 22-5.
29. Hassey A, Gerrett D, Wilson A. A survey of validity and utility of electronic patient records in a general practice. BMJ 2001;322: 1401-5.
30. Meal AG, Pringle M, Hammersley V. Time changes in new cases of ischaemic heart disease in general practice. Fam Pract 2000;17: 394-400.
31. Martin R. The doctors independent network database: background and methodology. Pharmaceutical Med 1995;9: 165-76.
32. Njalsson T, McAuley RG. On content of practice. An Icelandic multi-centre study, population, practices and contacts. Scand J Prim Health Care 1992;10: 243-9.
33. Simpson DS, Nicholas J, Cooper KD. The use of information technology in managing patients with coronary heart disease. Worcester: British Computer Society Primary Health Care Specialist Group, 2001.
34. Grimsmo A, Hagman E, Faiko E, Matthiessen L, Njalsson T. Patients, diagnoses and processes in general practice in the Nordic countries. An attempt to make data from computerised medical records available for comparable statistics. Scand J Prim Health Care Suppl 2001;19: 76-82.
35. Gray J, Majeed A, Kerry S, Rowlands G. Identifying patients with ischaemic heart disease in general practice: cross sectional study of paper and computerised medical records. BMJ 2000;321: 548-50.
36. Lawrenson R, Todd JC, Leydon GM, Williams TJ, Farmer RD. Validation of the diagnosis of venous thromboembolism in general practice database studies. Br J Clin Pharmacol 2000;49: 591-6.
37. Derby L, Maier WC. Risk of cataract among users of intranasal corticosteroids. J Allergy Clin Immunol 2000;105: 912-6.
38. Eland IA, Alvarez CH, Stricker BH, Rodriguez LA. The risk of acute pancreatitis associated with acid-suppressing drugs. Br J Clin Pharmacol 2000;49: 473-8.
39. Hung J, Posey J, Freedman R, Thorton T. Electronic surveillance of disease states: a preliminary study in electronic detection of respiratory diseases in a primary care setting. Proceedings of the 1998 AMIA Annual Symposium. Baltimore, MD: University of Maryland School of Medicine, 1998: 688-92.
40. Derby LE, Jick H. Appendectomy protects against ulcerative colitis. Epidemiology 1998;9: 205-7.
41. Turnbull S, Ward A, Treasure J, Jick H. The demand for eating disorder care. An epidemiological study using the general practice research database. Br J Psychiatry 1996;169: 705-12.
42. Gill D, Mayou R, Dawes M, Mant D. Presentation, management and course of angina and suspected angina in primary care. J Psychosom Res 1999;46: 349-58.
43. Meara J, Bhowmick BK, Hobson P. Accuracy of diagnosis in patients with presumed Parkinson's disease. Age Ageing 1999;28: 99-102.
44. Garcia Rodriguez LA, Duque A, Castellsague J, Perez-Gutthann S, Stricker BH. A cohort study on the risk of acute liver injury among users of ketoconazole and other antifungal drugs. Br J Clin Pharmacol 1999;48: 847-52.
45. Cass AR, Volk RJ, Nease DE Jr. Health-related quality of life in primary care patients with recognized and unrecognized mood and anxiety disorders. Int J Psychiatry Med 1999;29: 293-309.
46. Frischer M, Norwood J, Heatlie H, Millson D, Lowdell J, Hickman M, et al. A comparison of trends in problematic drug misuse from two reporting systems. J Public Health Med 2000;22: 362-7.
47. Scobie S, Basnett I, McCartney P. Can general practice data be used for needs assessment and health care planning in an inner-London district? J Public Health Med 1995;17: 475-83.
48. Basden A, Clark EM. Data integrity in a general practice computer system (CLINICS). Int J Biomed Comput 1980;11: 511-9.
49. Brown PJ, Harwood J, Brantigan P. Data quality probes—a synergistic method for quality monitoring of electronic medical record data accuracy and healthcare provision. Medinfo 2001;10: 1116-9.
50. Neal RD, Heywood PL, Morley S. Real world data—retrieval and validation of consultation data from four general practices. Fam Pract 1996;13: 455-61.
51. Njalsson T, Sigurdsson JA. Doctors, computers and quality of registration. An audit of prescription items and x-rays requests. Eur J Gen Pract 1995;1: 59-62.
52. Vernon M. Ensuring the accuracy and completeness of clinical data on general practice systems. J Inform Prim Care 1998. www.phcsg.org.uk/informatics/nov98/nov6.htm (accessed 11 Mar 2003).
53. Hippisley-Cox J, Pringle M, Crown N, Meal A, Wynn A. Sex inequalities in ischaemic heart disease in general practice: cross sectional survey. BMJ 2001;322: 832.
54. McColl A, Roderick P, Smith H, Wilkinson E, Moore M, Exworthy M, et al. Clinical governance in primary care groups: the feasibility of deriving evidence-based performance indicators. Qual Health Care 2000;9: 90-7.
55. Petursson P. What determines a family doctor's prescribing habits for antibiotics? A comparative study on a doctor's own behaviour in two different settings. Scand J Prim Health Care 1996;14: 196-202.

Articles from BMJ : British Medical Journal are provided here courtesy of BMJ Group