Ann Fam Med. Nov 2006; 4(6): 548–555.
PMCID: PMC1687166

Measuring Continuity of Care in Diabetes Mellitus: An Experience-Based Measure

Abstract

PURPOSE Continuity is an important attribute of health care, but appropriate measures are not currently available. We developed an experience-based measure of continuity of care in type 2 diabetes.

METHODS A 19-item measure of experienced continuity of care for diabetes mellitus (ECC-DM) was developed from qualitative patient interview data with 4 continuity subdomains: longitudinal, flexible, relational, and team and cross-boundary continuity. The measure was implemented in a survey of 193 patients with type 2 diabetes from 19 family practices. Associations of ECC-DM scores with clinician organizational characteristics were estimated.

RESULTS Potential ECC-DM scores ranged from 0 to 100 with an observed mean of 62.1 (SD 16.0). The average inter-item correlation was 0.343 and Cronbach’s α was 0.908. Factor analysis found 4 factors that were generally consistent with the proposed subdomains. Patients’ mean scores varied significantly between practices (P = .001), ranging from 46 to 78 at different family practices. Experienced continuity was lower for patients receiving only hospital clinic care than for those receiving some diabetes care from their family practice (difference 13.7; 95% confidence interval [CI], 8.2–19.2; P <.001). Patients had higher ECC-DM scores if their family practice had a designated lead doctor for diabetes (difference 8.2; 95% CI, 2.7–13.6; P = .003).

CONCLUSIONS The results provide evidence for the reliability, construct validity, and criterion validity of the experienced continuity-of-care measure. The measure may be used in research and monitoring to evaluate patient-centered outcomes of diabetes care. Patients’ experiences of continuity of care vary between health care organizations and are influenced by the organizational arrangements for care.

Keywords: Diabetes mellitus, continuity of patient care, family practice, patient care management, patient experience, patient views

INTRODUCTION

Preventive medical care has the potential to improve health in chronic conditions such as diabetes mellitus,1–3 but many patients do not receive optimal-quality health care,4 and satisfactory outcomes are rarely achieved.5 Achieving treatment objectives for type 2 diabetes mellitus requires close cooperation among the patient, the physician, and other members of the diabetes care team during the long course of the diabetic illness. This process corresponds closely with the notion of continuity of care that is defined by the American Academy of Family Physicians (AAFP) as “the process by which the patient and the physician are cooperatively involved in ongoing health care management toward the goal of high quality, cost-effective medical care.”6 That a considerable minority of diabetic patients do not receive care on a regular basis and are at increased risk of developing complications of diabetes7 suggests continuity of care should be an important element in the management of diabetes patients.

Interventions to improve the delivery of diabetes care8,9 may enhance continuity, but evaluations have not usually emphasized patient-centered outcomes, such as experienced continuity of care. Some research studies have found that greater continuity of care is associated with earlier diagnosis of diabetes,10 better management of the condition,11,12 and more favorable intermediate outcomes,13 but other studies give contradictory results.14–16 Research is hampered by a lack of suitable instruments for measuring continuity of care,17 and few instruments are available to measure interpersonal aspects of the patient-clinician interaction, which are considered to be key components of continuity of care.18 This study therefore aimed to develop and test an experience-based questionnaire measure of continuity of care in type 2 diabetes mellitus.

METHODS

The study was approved by the Research Ethics Committee of Guy’s Hospital, London, and patients gave written informed consent to participate. This report comprises 3 elements: (1) a development phase, which included development and cognitive testing of the new measure, as well as a pilot evaluation of the measure in 40 patients; (2) a formal cross-sectional survey to test the measure in a sample of 193 diabetic patients registered with 19 family practices; and (3) an evaluation of test-retest reliability and self-completion in 2 separate convenience subsamples of the main study sample.

Development of the Measure

We held in-depth interviews with 25 type 2 diabetic patients to explore their experiences and values with respect to continuity of care.19 We found that patients’ experiences of continuity of diabetes care can be characterized in terms of 4 dimensions. (1) Experienced longitudinal continuity refers to the regular monitoring of the patient and his or her illness over time, with advice on self-management, and care from as few clinicians as possible.20 This process provides the context in which a relationship may develop between the patient and a usual doctor or nurse based on familiarity, closeness, and trust. This is characteristic of (2) experienced relational continuity.20 When patients experience problems with their diabetes, they may need an urgent consultation, or they may want to speak to their usual doctor or nurse to obtain advice. (3) Experienced flexible continuity characterizes the extent to which clinicians respond flexibly to patients’ changing needs over time. Finally, (4) experienced team and cross-boundary continuity concerns the degree of consistency and coordination of care between different care settings and between different individual clinicians. In our data, patients were unable to comment consistently on the availability of clinical information, so the dimension of informational continuity was omitted.

The 4 dimensions of experienced continuity of care, together with key experiences and values drawn from patients’ accounts, were used to develop questionnaire items. The precise wording of items was based on qualitative data and modified through a process of discussion and consensus among the members of the study team. Item development was also informed by considering items with similar content from established measures of patient satisfaction.21,22 Prototype items were subject to cognitive testing on several occasions, with small samples of 3 or 4 diabetic patients who were attending a local diabetes clinic, to test that the wording was appropriate and items were understood as intended. A nearly final version of the instrument was formally tested in a pilot study that included a convenience sample of 40 patients attending local family practices. Minor changes to the wording of the instrument were made after the analysis of these pilot data, but these pilot data are not presented here.

Questionnaire Items, Coding, and Scoring

The questionnaire items are shown in Table 1. There are 19 items with 4 items each for the subdomains of longitudinal continuity (LC) and flexible continuity (FC), 6 for relational continuity (RC), and 5 for team and cross-boundary continuity (TCB). Eight of the items asked about care received from the “usual doctor or nurse” in a given setting. The usual doctor or nurse was defined as the “doctor or nurse who knows you and your diabetes best.” The response sets for each item were coded using standard Likert-type scales, each of which had 6 response options. Differently worded response sets were used for different sections of the questionnaire. Six response options were preferred to avoid a bias toward the central option. When only 1 item was missing for a given subdomain, then the value was imputed by taking the mean of the items with complete data. The maximum number of cases with imputed data was 6 for item RC2. Each subdomain was scored by summing the items and then rescaling to give a score out of 25. The total score was obtained by summing the scores for the 4 subdomains, giving a score out of 100. Full details of the questionnaire and coding procedures are available in our full report at http://www.sdo.lshtm.ac.uk/files/project/14-final-report.pdf.
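The scoring rules above can be sketched in code. This is an illustrative sketch, not the authors’ implementation: the item names simply follow the subdomain labels in Table 1, and it assumes each 6-option Likert-type item is coded 0–5 (the actual coding is detailed in the full report linked above).

```python
# Sketch of the ECC-DM scoring rules: mean imputation for a single
# missing item, each subdomain rescaled to a score out of 25, and the
# total taken as the sum of the 4 subdomain scores (out of 100).
SUBDOMAINS = {
    "LC": ["LC1", "LC2", "LC3", "LC4"],                # longitudinal
    "FC": ["FC1", "FC2", "FC3", "FC4"],                # flexible
    "RC": ["RC1", "RC2", "RC3", "RC4", "RC5", "RC6"],  # relational
    "TCB": ["TCB1", "TCB2", "TCB3", "TCB4", "TCB5"],   # team/cross-boundary
}
ITEM_MAX = 5  # assumed coding: 6 response options scored 0..5

def subdomain_score(responses, items):
    """Score one subdomain out of 25, imputing at most one missing item
    with the mean of the completed items."""
    values = [responses.get(i) for i in items]
    missing = values.count(None)
    if missing > 1:
        return None  # score cannot be computed
    complete = [v for v in values if v is not None]
    if missing == 1:
        complete.append(sum(complete) / len(complete))  # mean imputation
    raw, raw_max = sum(complete), ITEM_MAX * len(items)
    return 25 * raw / raw_max  # rescale to a score out of 25

def total_score(responses):
    """Total ECC-DM score out of 100: the sum of the 4 subdomain scores."""
    parts = [subdomain_score(responses, it) for it in SUBDOMAINS.values()]
    return None if any(p is None for p in parts) else sum(parts)
```

For a patient answering every item at the maximum, each subdomain scores 25 and the total is 100; a single missing item is imputed before rescaling, matching the rule described above.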

Table 1.
Item Responses Combined Across Settings For 193 Patients With Complete Data

Diabetic patients sometimes receive medical care in more than 1 health care setting. When such was the case, patients completed the items for longitudinal, flexible, and relational continuity with reference first to the family practice setting and then with reference to hospital clinic care, whereas the items for team and cross-boundary continuity referred to patients’ overall experience of care. When patients received care in both family practice and hospital clinic settings, then the higher of the subdomain scores was included in the total score, because the patient was considered to experience the continuity in diabetes care that was offered by the more favorable setting.
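As a minimal sketch of this combination rule (assuming the LC, FC, and RC subdomain scores have already been computed out of 25 for each setting; the function and argument names are illustrative):

```python
# The LC, FC, and RC items are answered once per care setting, and the
# higher (more favorable) subdomain score enters the total; the TCB
# items are answered only once, about the patient's overall experience.
def combined_total(fp_scores, hosp_scores, tcb_score):
    """fp_scores / hosp_scores: dicts of LC, FC, and RC subdomain scores
    (each out of 25) for family practice and hospital clinic care;
    hosp_scores is None for patients with family practice care only."""
    total = tcb_score
    for sd in ("LC", "FC", "RC"):
        if hosp_scores is None:
            total += fp_scores[sd]
        else:
            total += max(fp_scores[sd], hosp_scores[sd])
    return total
```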

Field Testing the Measure

The measure was tested in a cross-sectional survey of patients with type 2 diabetes who were registered with 19 family practices in London, England. The patients were older than 30 years when diabetes was diagnosed and did not require insulin within 6 months of diagnosis. In the United Kingdom, patients with type 1 diabetes generally receive diabetes care from hospital-based clinics. The survey questionnaire included the measure of experienced continuity of care and questions concerning the type of care received (family practice only, hospital only, or shared care), age, sex, ethnicity, duration of diabetes, and general health. The questionnaire was administered during an interview in the patients’ homes. All the main study interviews were conducted either by one author (SN) or an assistant, and both were trained to a common standard. Patient questionnaire data were also linked to data concerning the organization of diabetes care at the family practice; this information included an item concerning whether the family practice had a designated lead doctor for diabetes management.

All patients were interviewed 10 months later to evaluate experienced continuity of care as part of a cohort study to determine whether continuity of care is associated with health outcomes. The results of the cohort study will be reported elsewhere.

Two nested substudies were completed at the 10-month follow-up interview. In the first substudy, the test-retest reliability of the questionnaire was evaluated by repeating interviews over the telephone in a convenience sample of 30 patients, with 26 giving complete total scores. The average interval between interviews was approximately 11 weeks. In a separate substudy, a self-administered version of the questionnaire was completed by a convenience sample of 56 patients, with 48 giving complete total scores, and these responses were compared with those obtained by interview. The interval between interviews and self-administered questionnaires ranged from 1 day to 1 month.

Analysis

Correlations between each item and the total score (item-score correlations) and Cronbach’s α were estimated. The factorial composition of the measure was evaluated with factor analysis using the principal factor method in Stata version 9.23 Factor loadings were obtained after varimax rotation by default, but oblique rotation led to a similar interpretation. The number of factors was selected after inspecting the eigenvalues and a scree plot, and by using maximum likelihood estimation to compare the goodness-of-fit of models with different numbers of factors.24,25 Test-retest reliability was evaluated by estimating the mean difference between scale scores from successive administrations.26 To evaluate criterion validity, linear regression models were fitted to evaluate whether mean experienced continuity scores varied in different groups of patients or different organizational settings. To allow for clustering by practice, the family practice was fitted as a random effect with maximum likelihood estimation.23 Intraclass correlation coefficients were estimated from a random effects model with maximum likelihood estimation.
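The intraclass correlation coefficients in this study were estimated from a random-effects model with maximum likelihood in Stata. As an illustrative stand-in, the classical one-way ANOVA estimator shows what the ICC measures here: the share of score variance lying between practices rather than between patients within a practice.

```python
# One-way ANOVA estimator of the intraclass correlation coefficient,
# allowing unequal numbers of patients per practice. This is a simpler
# stand-in for the maximum-likelihood random-effects estimate reported
# in the paper, not the authors' exact method.
import statistics

def icc_oneway(groups):
    """groups: list of lists of patient scores, one list per practice."""
    k = len(groups)                       # number of practices
    n = sum(len(g) for g in groups)       # total number of patients
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2
                     for g in groups)
    ss_within = sum((x - statistics.mean(g)) ** 2
                    for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    # n0: adjusted average group size for unbalanced data
    n0 = (n - sum(len(g) ** 2 for g in groups) / n) / (k - 1)
    return (ms_between - ms_within) / (ms_between + (n0 - 1) * ms_within)
```

With perfectly clustered scores the estimator returns 1; scores that vary only within practices give a value near 0.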

We compared continuity of care according to type of diabetes care received. In the United Kingdom, diabetes care has traditionally been provided by the diabetes outpatient clinic at the local hospital (referred to as hospital-based care). For the last 20 years, family practices have increasingly taken on primary responsibility for the care of patients with type 2 diabetes (family-practice–based care). In some instances, care is coordinated between the family practice and the hospital clinic with the latter providing annual reviews and specialist advice (shared care). We also compared continuity scores according to whether the practice had a designated clinician for diabetes care. In the United Kingdom, practices may assign a doctor or nurse the responsibility for organizing, coordinating, and delivering care for the practice’s diabetic patients.

RESULTS

At the 19 family practices there were 553 registered patients with type 2 diabetes. Interviews were obtained with 209 (38%) patients and, after excluding cases with missing values, data were analyzed for 193 (92%) patients with total continuity-of-care scores. There were 96 men and 97 women, with a mean age of 65 years (range 32 to 90 years). Of the 193 patients, 44 were receiving only family practice care, 35 were receiving only hospital clinic care, and 114 were receiving shared care from both the family practice and specialist clinic.

The number of eligible patients ranged from 13 to 44 at different practices. Practice-specific response rates ranged from 19% to 71%. The practice-specific response rate to the survey was associated with the practice-specific mean continuity score. A 10-unit increase in the practice-specific mean continuity score was associated with a practice-specific response rate that was 6.4% higher (95% confidence interval [CI], 0.3%–2.4%; P = .040). This finding suggests that patients with less favorable experiences of continuity of care might be less likely to respond to the survey.

Reliability

Potential ECC-DM scores ranged from 0 to 100 with an observed mean of 62.1 (SD 16.0). Table 1 gives the wording for each item. The items were easy to read, with a Flesch-Kincaid grade level of 3.5. The average inter-item correlation based on 19 items was 0.343, and Cronbach’s α was 0.908. In the substudy of test-retest reliability, the mean difference in total score between repeat interviews in 26 patients was −0.37 (95% CI, −4.17 to 3.44). In the substudy to test the self-completion questionnaire format, the mean difference in total score for self-completion as compared with interview format in 48 patients with complete scores was 4.0 (95% CI, 1.2–6.9), indicating slightly higher scores with self-completion. Most respondents to the self-completion questionnaire (45 of 56; 80%) said that the questionnaire was “very easy” or “easy” to understand.
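The reported α is internally consistent with the reported average inter-item correlation: for k items with mean inter-item correlation r̄, the standardized Cronbach’s α is k·r̄ / (1 + (k − 1)·r̄), and with k = 19 and r̄ = 0.343 this reproduces the reported value.

```python
# Standardized Cronbach's alpha from the average inter-item correlation
# (Spearman-Brown form); with the values reported above it gives ~0.908.
def standardized_alpha(k, mean_r):
    return k * mean_r / (1 + (k - 1) * mean_r)

print(round(standardized_alpha(19, 0.343), 3))  # 0.908
```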

Factorial Composition

Table 1 shows rotated factor loadings from a factor analysis with 4 factors. Items associated with relational continuity load strongly on the first factor. Items LC4 and FC4, which include the concept of a usual doctor or nurse, also load strongly on this factor. The second factor is associated with items representing team and cross-boundary continuity, the third with longitudinal continuity, and the fourth with flexible continuity. Item LC2 did not load strongly on any factor.

Table 2 gives the overall mean (and standard deviation) score for the continuity-of-care scale and its subdomains, as well as values for average inter-item correlations, Cronbach’s α, and correlations with total scores. The high inter-item correlation for relational continuity was explained by each item having a 0 score if there was no usual clinician. The subdomains were strongly associated with each other. The highest correlation coefficient (0.656) was between relational and flexible continuity.

Table 2.
Properties of the Total Continuity of Care Scale and the Subdomains of Longitudinal, Flexible, Relational, and Team and Cross-Boundary Continuity

Criterion Validity

Table 3 shows the variation in mean continuity scores among 19 different family practices. The mean number of patients per practice was 10 (SD 4.2, range 3–17). The mean total experienced continuity score varied from 46 to 78 at different practices. The intraclass correlation coefficient for the total score was 0.14 (95% CI, 0.04–0.32; P = .001). This finding provides evidence that patients from the same practice tend to rate their experience of continuity of care more similarly than patients registered with different practices. The same was true for the subscale scores for longitudinal, flexible, and relational continuity, but not for team and cross-boundary continuity, which may depend on experiences outside the practice.

Table 3.
Variation in Mean Continuity of Care Scores Among 19 Different Family Practices Based on 193 Patients

Table 4 shows the mean continuity-of-care scores by type of care setting. Patients who received diabetes care only from hospital-based specialty clinics gave lower mean scores for the total experienced continuity-of-care score and for each of the subscales except team and cross-boundary continuity. Patients from practices with a designated doctor for diabetes care gave higher continuity-of-care scores than patients registered with practices with no designated doctor for diabetes care. Adjusting for the type of diabetes care received by registered patients and whether the practice had a designated doctor for diabetes care explained most of the observed variation in continuity-of-care scores between practices. The intraclass correlation coefficient for the total score adjusted for the variables shown in Table 4 was 0.04 (95% CI, 0.00–0.25; P = .169). The continuity-of-care score was not consistently associated with age, sex, or duration of diabetes, and the associations in Table 4 were robust to additional adjustment for these case-mix variables (estimated difference for type of care, −12.9; 95% CI, −18.4 to −7.3; estimated difference for designated diabetes doctor, 7.4; 95% CI, 2.0–12.7).

Table 4.
Distribution of Continuity-of-Care Scores by Type of Care Setting and Whether Practice Has Designated Lead Physician for Diabetes Care

DISCUSSION

We have described a 19-item measure of experienced continuity of care in type 2 diabetes mellitus (ECC-DM). The measure is grounded in qualitative data from diabetic patients. It provides an overall measure of experienced continuity of care, as well as subscales of longitudinal, flexible, relational, and team and cross-boundary continuity. The measure may be used in self-completion or interview formats. Evidence for the reliability of the overall scale is provided by the item-score correlations, the satisfactory value for Cronbach’s α, and the results for test-retest reliability. Further evidence of the reliability of the measure is provided by the observation that patients from the same family practices give more similar scores for experienced continuity of care than patients from different practices. The intraclass correlation coefficient of 0.14 is higher than 95% of those from a survey of 1,039 intraclass correlation coefficients from 31 studies in primary care.27 Evidence for the construct validity of the continuity-of-care measure is provided by the results of the factor analysis, which generally support the proposed factorial structure of the measure. The criterion validity of the measure is supported by the policy-relevant findings that measured experienced continuity of care was substantially higher for patients who received some diabetes care from their family practice, especially if there was a designated doctor for diabetes care.

Limitations of Study

The experienced continuity-of-care measure has generally satisfactory psychometric properties, but some issues require clarification. Although all of the items were supported by our qualitative data, in a short scale not all relevant concepts can be included. Some of our selections may be justified with reference to other measures, because items with similar content may be found in current patient satisfaction surveys.21,22 Unlike these earlier measures, the ECC-DM is grounded in a conceptual model that identifies experienced continuity of care as a construct of patient satisfaction,28 and marks a shift in thinking away from the earlier view of continuity of care as a process-of-care measure.29 Eight of the items referred to the concept of a “usual doctor or nurse.” This concept was further defined as “the doctor or nurse who knows you and your diabetes best.” Whereas the dependence between these items might be considered an undesirable statistical property of the measure, the importance attached to the concept of a usual doctor or nurse both in the literature on continuity of care30,31 and in our qualitative data makes this feature an important part of the measure. Item LC2, concerning appointment letters, did not load strongly on any of the domains and might be a candidate for omission, but this item was justified by systematic review evidence that active processes of recall facilitate better care.32 Because items LC4 and FC4, which concern the concept of the “usual doctor or nurse,” load on the factor associated with relational continuity, it may not be entirely justified to use the subscales separately, and the overall scale should generally be preferred. The team and cross-boundary continuity items appeared to have satisfactory psychometric properties, but assessment of team and cross-boundary continuity did not vary between practices or between types of care. This finding may suggest that team and cross-boundary continuity does not vary according to these variables, but equally it is possible that the questions relating to team and cross-boundary continuity lack discrimination because patients are not well able to judge these aspects of their care. These issues require further consideration in the future development of the measure.

A limitation of this study is the low response rate achieved in the interview survey. This low response rate is a consistent finding in inner-city areas in many countries. Because our aim was to develop a reliable and valid tool, however, representativeness was less important to our study than diversity of experience. There was evidence that patients with less-favorable experiences of continuity were less likely to respond to the survey, which might lead to our estimates being too conservative for effects of different organizational arrangements. It may be noted that in the main study, patients completed the questionnaire by interview, and the response rate was not an indicator of the ease of completion of the measure.

Comparison With Other Work

Dolovich et al recently described a different diabetes-specific questionnaire measure of continuity of care.33 Their approach differed from ours because the measure did not refer to an underlying conceptual model of continuity of care.34 Our results using the new measure show that diabetic patients’ experiences of continuity of care vary systematically among health care organizations. This observation requires further study, but it was clear that some organizational characteristics are associated with less-favorable experiences of continuity of care. Hospital-based diabetes services, where there are more complex systems of care and higher staff turnover, are generally associated with lower continuity of care. In primary care, the identification of a designated physician for diabetes care was associated with higher experienced continuity of care. This finding is consistent with systematic review evidence that case manager roles may enhance the delivery of care in chronic illness.9 As a construct of patient satisfaction, continuity of care may be valued in itself,28 and interventions that promote continuity are to be encouraged. Further research is required to find out whether more-favorable experiences of continuity of care are associated with better treatment or improved treatment outcomes.

We have provided evidence for the reliability, construct validity, and criterion validity of a new measure of experienced continuity of care. The measure is brief, quick to complete, acceptable to patients, applicable for use in different care settings, and therefore suitable for use in the routine monitoring of quality of care. The measure may be used in ambulatory care in both specialist and family practice settings to provide information concerning factors that enhance or impede patients’ experiences of continuity of care and the relationship between continuity, processes of care, and health outcomes.

The measure incorporates experiences that are widely applicable, and it may be used without modification. We acknowledge that further development of the measure is desirable to address some of the questions raised in this study; in particular, further work to evaluate the self-completion version is needed. In addition, modifications may be required for different health systems, for example, where barriers to access impede continuity of care. The measure also has potential for adaptation and use in other chronic illnesses.

Acknowledgments

We thank the journal’s reviewers for their contributions to the revision of the paper.

Notes

Conflicts of interest: none reported

Funding support: This work was supported by the UK Research and Development Programme in Service Delivery and Organisation.

REFERENCES

1. Intensive blood-glucose control with sulphonylureas or insulin compared with conventional treatment and risk of complications in patients with type 2 diabetes (UKPDS 33). UK Prospective Diabetes Study (UKPDS) Group. Lancet. 1998;352:837–853.
2. Collins R, Armitage J, Parish S, Sleigh P, Peto R. MRC/BHF Heart Protection Study of cholesterol-lowering with simvastatin in 5963 people with diabetes: a randomised placebo-controlled trial. Lancet. 2003;361:2005–2016.
3. Tight blood pressure control and risk of macrovascular and microvascular complications in type 2 diabetes: UKPDS 38. UK Prospective Diabetes Study Group. BMJ. 1998;317:703–713.
4. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2005.
5. Saydah SH, Fradkin J, Cowie CC. Poor control of risk factors for vascular disease among adults with previously diagnosed diabetes. JAMA. 2004;291:335–342.
6. American Academy of Family Physicians. Continuity of care, definition of. Available at: http://www.aafp.org/online/en/home/policy/policies/c/continuityofcaredefinition.html. Accessed: 15 February, 2005.
7. Institute of Medicine. Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care. Washington, DC: National Academies Press; 2003.
8. Standards of medical care in diabetes. Diabetes Care. 2005;28(Suppl 1):S4–S36.
9. Ouwens M, Wollersheim H, Hermens R, Hulscher M, Grol R. Integrated care programmes for chronically ill patients: a review of systematic reviews. Int J Qual Health Care. 2005;17:141–146.
10. Koopman RJ, Mainous AG 3rd, Baker R, Gill JM, Gilbert GE. Continuity of care and recognition of diabetes, hypertension, and hypercholesterolemia. Arch Intern Med. 2003;163:1357–1361.
11. Parchman ML, Burge SK. Continuity and quality of care in type 2 diabetes: a Residency Research Network of South Texas study. J Fam Pract. 2002;51:619–624.
12. Saultz JW, Lochner J. Interpersonal continuity of care and care outcomes: a critical review. Ann Fam Med. 2005;3:159–166.
13. Mainous AG 3rd, Koopman RJ, Gill JM, Baker R, Pearson WS. Relationship between continuity of care and diabetes control: evidence from the Third National Health and Nutrition Examination Survey. Am J Public Health. 2004;94:66–70.
14. Broom DH. Familiarity breeds neglect? Unanticipated benefits of discontinuous primary care. Fam Pract. 2003;20:503–507.
15. Gill JM, Mainous AG 3rd, Diamond JJ, Lenhard MJ. Impact of provider continuity on quality of care for persons with diabetes mellitus. Ann Fam Med. 2003;1:162–170.
16. Hanninen J, Takala J, Keinanen-Kiukaanniemi S. Good continuity of care may improve quality of life in Type 2 diabetes. Diabetes Res Clin Pract. 2001;51:21–27.
17. Haggerty JL, Reid RJ, Freeman GK, et al. Continuity of care: a multidisciplinary review. BMJ. 2003;327:1219–1221.
18. Saultz JW. Defining and measuring interpersonal continuity of care. Ann Fam Med. 2003;1:134–143.
19. Naithani S, Gulliford M, Morgan M. Patients’ perceptions and experiences of ‘continuity of care’ in diabetes. Health Expect. 2006;9:118–129.
20. Freeman GK, Olesen F, Hjortdahl P. Continuity of care: an essential element of modern general practice? Fam Pract. 2003;20:623–627.
21. National Primary Care Research and Development Centre. General practice assessment questionnaire. Available at: http://www.gpaq.info/. Accessed: 18 May, 2005.
22. Rand Health. Long form patient satisfaction questionnaire (PSQ III). Available at: http://www.rand.org/health/surveys/PSQIII.pdf. Accessed: 23 May, 2005.
23. Stata Reference Manual. Release 9. College Station, Tex: Stata Corporation; 2005.
24. Fayers PM, Machin D. Factor analysis. In: Staquet MJ, Hays RD, Fayers PM, eds. Quality of Life Assessment in Clinical Trials. Oxford: Oxford University Press; 1998:191–223.
25. Streiner DL. Figuring out factors: the use and misuse of factor analysis. Can J Psychiatry. 1994;39:135–140.
26. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;1:307–310.
27. Adams G, Gulliford MC, Ukoumunne OC, et al. Patterns of intracluster correlation from primary care research to inform study design and analysis. J Clin Epidemiol. 2004;57:785–794.
28. Christakis DA. Continuity of care: process or outcome? Ann Fam Med. 2003;1:131–133.
29. Bice TW, Boxerman SB. A quantitative measure of continuity of care. Med Care. 1977;15:347–349.
30. Kearley KE, Freeman GK, Heath A. An exploration of the value of the personal doctor-patient relationship in general practice. Br J Gen Pract. 2001;51:712–718.
31. O’Connor PJ, Desai J, Rush WA, et al. Is having a regular provider of diabetes care related to intensity of care and glycemic control? J Fam Pract. 1998;47:290–297.
32. Griffin S. Diabetes care in general practice: meta-analysis of randomised control trials. BMJ. 1998;317:390–396.
33. Dolovich LR, Nair KM, Ciliska DK, et al. The Diabetes Continuity of Care Scale: the development and initial evaluation of a questionnaire that measures continuity of care from the patient perspective. Health Soc Care Community. 2004;12:475–487.
34. Gulliford M, Naithani S, Morgan M. Continuity of care. Fam Med. 2005;37:687–688; author reply 688.

Articles from Annals of Family Medicine are provided here courtesy of American Academy of Family Physicians