J Clin Epidemiol. Author manuscript; available in PMC Jun 1, 2008.
PMCID: PMC1991281
NIHMSID: NIHMS23950

Agreement of self-reported comorbid conditions with medical and physician reports varied by disease among End-Stage Renal Disease patients

Sharon Merkin, MHS, PhD,1 Kerri Cavanaugh, MD, MHS,2,3 J. Craig Longenecker, MD, PhD,3,4,5 Nancy E. Fink, MPH,2,3,4 Andrew S. Levey, MD,6 and Neil R. Powe, MD, MPH, MBA2,3,4

Abstract

Objective:

To compare self-report of eight diseases with review of medical records and physician reports.

Study Design and Setting:

In a cohort of 965 incident end-stage renal disease (ESRD) patients (CHOICE study), data on existing medical conditions were obtained from medical record abstraction, physician report (CMS Form 2728), and self-report in a baseline questionnaire. We evaluated agreement with kappa statistics and sensitivity of self-report. Regression models were used to examine characteristics associated with agreement.

Results:

Agreement between self-report and the medical record was excellent for diabetes (kappa statistic (k)=0.93), substantial for coronary artery intervention (k=0.79), and poor for chronic obstructive pulmonary disease (k=0.20). Physician-reported prevalence was equal to or lower than self-reported prevalence for all diseases. Male patients were more likely to inaccurately report hypertension. Compared to White patients, African American patients were more likely to inaccurately report cardiovascular diseases.

Conclusion:

In ESRD patients, self-report agreement with the medical record varies with the specific disease. Awareness of diseases of the cardiovascular system appears to be low. African American and male ESRD patients are at risk of low awareness of disease, and educational interventions are needed in this high-risk population.

Keywords: Validity, Agreement, Self-report, Kidney, End-Stage renal disease, Medical records, Comorbid

1. Introduction

Self-reported medical history of chronic diseases, such as hypertension and diabetes mellitus, is used in research studies to determine an individual participant's risk status as well as the prevalence of specific diseases in a study population. The reported accuracy of self-reported health information is inconsistent in the literature. Most reliability or validation studies of self-reported chronic diseases compare participant self-report with the medical record; however, comparisons with a clinical examination (1-7) and with administrative data (8-10) have also been performed. When comparing the medical record with patient self-report, some studies found good agreement for diabetes, hypertension, myocardial infarction and cerebrovascular disease (1,11-16), while others found lower agreement for some of these same conditions.(5,7,9,10,14,17-19)

Agreement between self-report and the medical record varies depending upon the specific disease being evaluated. Chronic diseases that are diagnostically complex and require clinical judgment in addition to medical testing may be more vulnerable to misclassification from self-reports.(1,13,17) Agreement also likely varies across different populations, given their respective demographic, educational, and clinical characteristics. Previous studies have been performed in cohorts of study volunteers (11,20), men in the Veterans Administration system in the United States (15), participants in the Women's Health Initiative (21), elderly populations (12,14), and other cohorts that are more representative of the general population.(1,2,7,12,17,22,23)

Patients with end-stage renal disease (ESRD) often have a very complex medical history with many comorbid diseases. The validity of self-report for disease classification in research studies of this population is unknown. Evaluating the agreement between self-report of medical history of ESRD patients and the medical record is necessary to interpret the use of self-report for risk factor classification in research and also potentially for clinical use. Identification of patients who are not in agreement with the medical record may represent patients who are unaware of their clinical diagnoses and may be targets for educational interventions.

In the United States, the Centers for Medicare & Medicaid Services (CMS) requires the completion of the Form 2728 for all incident ESRD patients.(24) This form is required to be completed by the attending nephrologist and includes 20 questions about prevalent comorbid conditions. The Form 2728 represents an individual physician report for each patient. The agreement between physician report and the medical record in ESRD has been described (25), but the agreement between self-reported comorbid chronic diseases and physician report (Form 2728) has not been described.

The objective of this study was to examine the agreement between patient self-report and the medical record in a cohort of ESRD patients with regard to eight chronic comorbid conditions: congestive heart failure (CHF), myocardial infarction (MI), cerebrovascular disease, angioplasty or coronary artery bypass graft surgery (CABG), hypertension, diabetes, chronic obstructive pulmonary disease (COPD), and cancer. In addition, we compared the agreement between patient self-report and physician report, represented by the Form 2728, for each of these diseases. We further examined the effects of patient characteristics, including race, age, gender, and education, on the positive agreement of patient self-reports with the medical record.

2. Methods

2.1 Study Design and Population

We conducted a cross-sectional analysis of baseline data from the Choices for Healthy Outcomes in Caring for End-stage renal disease (CHOICE) Study, a national prospective cohort study of ESRD patients undergoing dialysis therapy.(26) At the time of enrollment, participants were at least 18 years old, English or Spanish speaking, and had been diagnosed with kidney failure requiring initiation of outpatient dialysis. From October 1995 to June 1998, 1,041 patients were recruited from 81 dialysis clinics in 19 US states associated with Dialysis Clinic, Incorporated (DCI; Nashville, Tennessee), New Haven CAPD (New Haven, Connecticut), and the Hospital of St. Raphael (New Haven, Connecticut). Patients were enrolled a median of 45 days after initiation of dialysis, and 98% of participants were enrolled within 4 months of starting dialysis.

All patients provided informed consent and the Johns Hopkins University School of Medicine Institutional Review Board and the review boards of each clinical unit approved the CHOICE protocol.

2.2 Patient Reports of Comorbid Diseases

The baseline questionnaire for the CHOICE study contained questions about the patient's health status, including the presence or absence of comorbid diseases. Participants reported their status for eight different medical conditions, including congestive heart failure, myocardial infarction, cerebrovascular disease or stroke, angioplasty or CABG, hypertension, diabetes, COPD, and cancer. In addition, the baseline questionnaire provided self-reported information about educational level.

The baseline questionnaire asked the following question regarding the eight diseases: “Have you ever been told by a doctor that you have: (disease)?” A check mark next to the specific disease indicated a positive response, while items left blank were considered negative answers. Thus, in order to correctly assume that non-responses indicated negative answers, we excluded participants who had completed < 85% of the baseline questionnaire.
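As a purely illustrative sketch of this coding rule (not the CHOICE study's actual processing code), the Python fragment below codes a checked box as a positive self-report, treats blanks as negative, and retains only participants who completed at least 85% of the questionnaire; the column names are hypothetical.

import pandas as pd

def code_self_reports(df: pd.DataFrame, disease_cols: list,
                      completion_col: str = "pct_items_completed") -> pd.DataFrame:
    # Keep only participants who completed at least 85% of the baseline questionnaire.
    eligible = df[df[completion_col] >= 0.85].copy()
    # For eligible respondents, a missing (unchecked) box is coded as a negative answer.
    eligible[disease_cols] = eligible[disease_cols].fillna(0).astype(int)
    return eligible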

2.3 Medical Records of Comorbid Diseases

The comorbid disease assessment was performed at enrollment using the Index of Coexistent Disease (ICED).(27) The ICED is composed of the Index of Disease Severity (IDS), a chart-based review of 19 medical conditions, and the Index of Physical Impairment (IPI), an assessment of 11 physical impairment categories completed by the local nurse or technician. Clinical evidence relevant to each condition, or a reported condition (past or present), was sufficient for positive coding. In the CHOICE study, all medical records were collected, sent to the CHOICE Comorbidity Assessment Center at the New England Medical Center (Boston, MA), and abstracted by two expert research nurses. The interobserver reliability of the ICED scoring was assessed by comparing the scoring of the CHOICE research nurses to that of an experienced nephrologist in the Mortality and Morbidity in Hemodialysis Patients (MMHD) Study, yielding a kappa statistic of 0.77.(27) Table 1 shows the comorbid diseases as listed on the self-report baseline questionnaire and their corresponding definitions on the medical record comorbidity assessment form.

Table 1
Comorbidity definitions for patient self-report, medical record and physician reports

2.4 Physician's Reports of Co-morbid Conditions

Physician reports of patient comorbid diseases were assessed using the Centers for Medicare & Medicaid Services (CMS) Form 2728. The form documents a patient's medical need for renal replacement therapy and is completed by an attending nephrologist upon patient entry into the national ESRD program. The form requires that the physician check off any of 20 comorbid diseases that apply currently or during the past 10 years. Table 1 also shows the eight diseases from Form 2728 included in this analysis. The “ischemic heart disease, CAD” category from Form 2728 includes angioplasty and CABG surgery status. This category was excluded from this part of the analysis because a corresponding category with the same definition was not included on the self-report baseline questionnaire.

Patient characteristics, including age, sex, race and date of initial dialysis, were also obtained from Form 2728.

2.5 Statistical Analysis

Descriptive statistics were used to examine the study population demographics. Information about the eight comorbid diseases reported by patients in the baseline questionnaire was compared to the medical record review, and seven of those conditions were also compared to physician reports. The prevalence of each disease was calculated for each source of comorbid disease data. The prevalence of each disease from patient reports was compared to the medical record and to the physician reports; the differences were evaluated using McNemar's chi-square test for paired observations. For each comparison, agreement between the two sources was calculated using the kappa statistic. As suggested by Landis and Koch (28), a kappa statistic of <=0.40 represents poor-to-fair agreement, 0.41 to 0.60 reflects moderate agreement, 0.61 to 0.80 substantial agreement, and 0.81 to 1.00 excellent agreement. Using the medical record review as the gold standard, the sensitivity and specificity of the patient-reported information were calculated.
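As a minimal sketch of these calculations (in Python rather than the SAS used by the authors), the function below computes the kappa statistic, McNemar's chi-square test for paired observations, and the sensitivity and specificity of self-report from a 2x2 table of self-report versus medical record; the counts in the example call are illustrative, not the study's data.

from scipy.stats import chi2

def agreement_stats(a, b, c, d):
    # a: self-report+/record+, b: self-report+/record-,
    # c: self-report-/record+, d: self-report-/record-
    n = a + b + c + d
    po = (a + d) / n                                        # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance-expected agreement
    kappa = (po - pe) / (1 - pe)
    mcnemar_chi2 = (b - c) ** 2 / (b + c)                   # McNemar's test uses only the discordant cells
    p_value = chi2.sf(mcnemar_chi2, df=1)
    sensitivity = a / (a + c)                               # medical record as the gold standard
    specificity = d / (b + d)
    return {"kappa": kappa, "mcnemar_p": p_value,
            "sensitivity": sensitivity, "specificity": specificity}

print(agreement_stats(a=480, b=20, c=40, d=425))            # illustrative counts only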

In patients with disease as determined by the medical record, generalized estimating equation (GEE) models with a logit link were used to examine characteristics associated with positive agreement between patient reports and the medical record, while accounting for potential clustering effects of clinic site. These models included the following covariates: age, gender, race, dialysis modality at the start of dialysis (hemodialysis or peritoneal dialysis), number of comorbid conditions (one for each of the eight diseases considered in this study, excluding the disease used as the outcome for that particular model), and education (< high school, completed high school, and > high school). The outcome for each model was patient agreement with the medical record for each of the eight comorbid diseases, compared to no agreement (where existing disease is recorded in the medical record but not reported by the patient). Analyses were performed using SAS software, Version 6 (SAS Institute Inc., Cary, NC, 1995).
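A hedged sketch of this type of model in Python/statsmodels (the authors used SAS Version 6) is shown below; variable and column names are hypothetical. The outcome "agree" is 1 when a patient with the disease in the medical record also self-reported it and 0 otherwise, with observations clustered by dialysis clinic.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_agreement_model(df: pd.DataFrame):
    # Logistic GEE: binomial family with its default logit link, and an
    # exchangeable working correlation to account for clustering by clinic site.
    model = smf.gee(
        "agree ~ age + C(sex) + C(race) + C(modality) + n_comorbid + C(education)",
        groups="clinic_id",
        data=df,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    result = model.fit()
    odds_ratios = np.exp(result.params)   # exponentiated coefficients are odds ratios
    return result, odds_ratios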

3. Results

3.1 Characteristics of the Study Population

Of the 1,041 patients enrolled in the CHOICE cohort study, 965 (93%) completed >=85% of the baseline questionnaire, including the self-report of comorbid diseases, and were included in this study. The average age of the study population was 58 years and 54% were male; 67% were White and 28% African American. At baseline, 75% of the patients received hemodialysis and 25% received peritoneal dialysis. The average number of existing comorbid conditions was 3, and the average time to enrollment was 45 days. Slightly more participants had more than a high school education (36%) compared to the other education levels.

Dialysis modality, number of comorbid conditions, and education differed significantly between the study participants included in this analysis and those excluded (Table 2). Whereas most of the included study participants received hemodialysis, a majority of those excluded received peritoneal dialysis (p<0.0001). The average number of comorbid conditions was significantly higher among those excluded (p<0.0001), and those excluded had missing data regarding education.

Table 2
Characteristics of CHOICE Participants and included study population

3.2 Prevalence of Comorbid Conditions by data source

Prevalence for many comorbid diseases was lower in the self-reported data than in the medical record (Table 3). The prevalences of congestive heart failure and hypertension were considerably lower when measured by self-report than by the medical record (29% vs. 46% for congestive heart failure and 78% vs. 96% for hypertension). The differences in prevalence of cerebrovascular disease (13% vs. 16%) and COPD (6% vs. 18%) were also statistically significant. There was a trend toward significance for a lower prevalence of self-reported myocardial infarction (18%) compared with the medical record (20%) (p=0.071) and also for self-reported diabetes (53% vs. 54%) (p=0.078). There was no difference in the prevalence of angioplasty or CABG surgery or of cancer by self-report compared with the medical record.

Table 3
Comparison of self-reported comorbid diseases with the medical record, and physician report.

Physician report of the seven compared diseases, as described on Form 2728, showed a lower prevalence for all conditions except COPD (Table 3). The greatest difference in prevalence between self-report and physician report was for myocardial infarction, where prevalence by self-report was 18% and physician-reported prevalence was only 9%.

3.3 Agreement Between Patient Self-Report and the Medical Record

Hypertension and COPD revealed the lowest kappa statistics (0.19 and 0.20, respectively), indicating poor agreement between self-reported and medical record data. Excellent agreement was found for diabetes (kappa statistic=0.93). Self-report of angioplasty or CABG and of cancer showed substantial agreement with the medical record (kappa statistics: 0.79 and 0.67, respectively). Moderate agreement was demonstrated for congestive heart failure (kappa=0.47), myocardial infarction (kappa=0.55) and cerebrovascular disease (kappa=0.59).

3.4 Sensitivity and Specificity of Patient Self-report

Using the medical record as the gold standard, the sensitivity of self-reported information regarding the eight comorbid diseases varied substantially. Sensitivity was lowest for COPD (18%), followed by somewhat higher sensitivities ranging from 54% to 60% for cardiovascular diseases (congestive heart failure, myocardial infarction and cerebrovascular disease). Sensitivity was slightly higher for self-reports of cancer (69%). Sensitivity of self-reported angioplasty or heart bypass procedures, as well as hypertension, was 81%. Reports of diabetes were the most accurate, with a sensitivity of 96%.

The specificity of self-reported comorbid diseases in this population was considerably higher than the sensitivity and ranged from 91% to 97% for all conditions except hypertension, which had a specificity of 76%.

3.5 Agreement Between Patient Self-reports and Physician Reports

As with the comparison of self-report with the medical record, hypertension and COPD revealed the lowest kappa statistics (0.19 and 0.20, respectively) (Table 3). However, the prevalence difference between self-report and physician report of COPD was not statistically significant. Similarly, excellent agreement was found for diabetes (kappa statistic=0.81). Poor agreement was also found for myocardial infarction, with a kappa statistic of 0.33. Self-report compared with physician report showed moderate agreement for congestive heart failure (kappa statistic=0.43), cerebrovascular disease (kappa statistic=0.45), and cancer (kappa statistic=0.43).

3.6 Characteristics Associated with Agreement of self-report and the medical record

In patients with disease as determined by the medical record, no association was observed between age and agreement of patient self-reports for any of the eight comorbid diseases (Table 4). When age was included as a dichotomous variable (<mean age vs. >=mean age; data not shown), however, it was significantly associated only with self-report of hypertension: younger patients were 70% more likely to have accurate self-report than older patients (OR: 1.7, 95% CI: 1.1-2.5).

Table 4
The association of patient characteristics with agreement of self-report and the medical record by disease.

In this study, White participants' self-reports were over twice as likely to agree with the medical record when reporting congestive heart failure (Odds Ratio (OR): 2.2; 95% Confidence Interval (CI): 1.5-3.2) and over four times as likely to agree when reporting angioplasty/CABG surgery (OR: 4.2; 95% CI: 1.9-8.9).

In patients with hypertension, male sex was associated with lower agreement between self-report and the medical record (OR: 0.6; 95% CI: 0.4-0.8) compared to female patients. Patients receiving peritoneal dialysis compared to those receiving hemodialysis demonstrated lower agreement of self-report of angioplasty or CABG (OR: 0.6; 95% CI:0.4-0.9).

We also found that the number of comorbid conditions was associated with agreement of self-reporting congestive heart failure; patient reports were 60% more likely to agree with the medical record with each additional comorbidity (OR: 1.6; 95% CI: 1.4-1.9). However, with regard to reporting diabetes, the reverse trend appeared, where lower agreement was associated with an increasing number of comorbid conditions (OR:0.7; 95% CI:0.6-0.9).

4. Discussion

This cross-sectional study indicates that in a population of incident ESRD patients, self-reported prevalence of comorbid diseases agreed variably with the medical record depending upon the specific disease. Self-report of diabetes had excellent agreement with the medical record. Self-report of congestive heart failure, myocardial infarction, cerebrovascular disease/stroke, and cancer had only moderate agreement with the medical record, while hypertension and COPD showed poor agreement. Compared with the medical record, physician report demonstrated equivalent or lower agreement with self-report for each of the comorbid diseases.

Using the medical record as the gold standard, the sensitivity of self-report was lowest for COPD and ranged from 54% to 69% for congestive heart failure, myocardial infarction, cerebrovascular disease, and cancer. These data suggest that self-reported prevalence of these conditions in ESRD patients is not uniformly accurate and may result in misclassification in research studies without supplemental medical record review or confirmatory biomedical measures.

The lack of agreement between self-report and the medical record or physician report may have a number of potential causes. First, the defining criteria for each disease in the patient questionnaire, the medical record, and the physician report are broad categories (Table 1). However, broad categories to define self-reported chronic diseases have been used in many other studies.(1,2,11,13,14,20,29) The criteria for diseases such as congestive heart failure or chronic obstructive pulmonary disease are usually composed of a combination of clinical examination and testing.(30,31) The clinical decision to apply these diagnoses to a specific patient may itself vary by medical provider, and the complexity of the diagnosis and the intermittent nature of clinical symptoms may also affect the awareness and accuracy of patient self-report. For the diagnosis of congestive heart failure, other studies have found similar agreement, with kappa statistics ranging from 0.3 to 0.48.(10,13,14,16,21) Only one study, comparing self-report with a medical examination in a Finnish cohort, found substantial agreement with a kappa statistic of 0.66 for congestive heart failure.(2) Similarly, previously reported kappa statistics for COPD ranged from 0.28 to 0.66.(2,10,12,14,15)

In our study, the agreement between self-report and medical record of diabetes was excellent. This level of agreement has been found in previous studies in diverse study populations.(10-12,14-16) In the United States, diabetes is a very common etiology leading to ESRD. In our population, and others, the agreement between self-report and medical record may be explained by high awareness of diabetes because of its high prevalence(32) and educational efforts to improve awareness.(33)

Coronary artery intervention, including coronary angioplasty or CABG surgery, is a high-risk procedure that is not likely to be misinterpreted by the patient. We found that the agreement between self-report and the medical record was substantial for coronary artery intervention, with a kappa statistic of 0.79. Similar results were found by other investigators.(16,21) These findings suggest that for diseases with clear definitions, or for those that require invasive testing for diagnosis, self-report is highly accurate.

Previous studies have reported wide variations in agreement of self-report of myocardial infarction with the medical record, with kappa statistics ranging from 0.12 to 0.8.(1,7,10,12,13,16,18,34) Other researchers have attributed poor agreement to underreporting of the disease by the study population.(9) In our study, 59% of the patients whose self-report did not agree with the medical record were false negative self-reports. Lack of awareness or knowledge by the patient may explain this underreporting, either because patients were never informed of the diagnosis or because they did not understand the information given to them about their health condition. Further study is needed to differentiate why inaccurate reporting occurs and to develop appropriate interventions to improve awareness and knowledge in dialysis patients, especially for ischemic heart disease, given that this is the most common cause of death in ESRD patients.(35)

Despite the relatively high awareness of hypertension in our study, where 81% of those with hypertension indeed had a positive self-report, the kappa statistic for comparison to the medical record was 0.19, representing poor-to-fair agreement. However, if the overall percent agreement is calculated for this comparison, the value is high at 81%. The prevalence of hypertension by medical record was 96% and lower by self-report at 78%. This difference in prevalence by the two classification methods may account for the low kappa statistic. When the sensitivity and specificity levels for each reporting method are fixed, kappa tends toward zero as the prevalence approaches either 0 or 100%.(36) This may explain the low kappa statistic in our population despite the high percentage agreement.
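To make this paradox concrete, the short calculation below reconstructs an approximate 2x2 table for hypertension from the reported prevalence (96% by medical record, 78% by self-report), sensitivity (81%), and specificity (76%); the cell counts are rough reconstructions for illustration, not the study's raw data.

# Cells: self+/record+, self+/record-, self-/record+, self-/record- (approximate reconstruction)
a, b, c, d = 750, 9, 176, 30
n = a + b + c + d                                          # roughly the 965 participants
po = (a + d) / n                                           # observed agreement, ~0.81
pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2      # chance agreement, ~0.76, driven by the 96% prevalence
kappa = (po - pe) / (1 - pe)                               # ~0.19 despite ~81% raw agreement
print(round(po, 2), round(pe, 2), round(kappa, 2))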

Physician report by the CMS Form 2728 showed overall poor agreement with self-report for all diseases except diabetes. The prevalence of disease by physician report was frequently lower than that by self-report. The limitations of the CMS Form 2728 in comparison to the medical record have been previously described.(25) The CMS Form 2728 often underreports comorbid diseases when compared to clinical data and demonstrated an average sensitivity across 17 conditions of 59%, with good specificity (>90% for all conditions).(25) Generally, the sensitivity of the physician report varied across diseases, ranging from 43% to 75%, with consistently high specificity (>90%). Possible explanations for this lack of sensitivity in the physician's report include inadequate time spent with the patient and incomplete information gathered about the patient's past medical history. Our findings suggest that self-report in this population may be more sensitive than physician report for identifying comorbid disease in ESRD patients.

It is important to note that the agreement of self-reports with the medical record may depend on a variety of factors. Although this was not true for all conditions, we found that for congestive heart failure and coronary artery intervention, race and the number of comorbid diseases must be considered as factors potentially affecting the sensitivity of patient reports. In this study, compared with White patients, African Americans were significantly more likely to inaccurately report their diagnosis of heart failure or coronary artery intervention. The reason for this disparity is unclear. However, of great concern is the possibility that these patients are not receiving or understanding health information specific to their high-risk condition of heart disease.

Inaccurate reporters of hypertension were also more likely to be male. The reason for potentially low awareness of hypertension in males is unclear, although this has been reported in Hispanic males (37) and in a general population study.(38) While this may reflect awareness of disease, it may also represent differences in trust between patients and providers for a highly prevalent disease such as hypertension. Male patients with hypertension may also have different patterns of denial or acceptance of disease than female patients. Further study is needed to understand differential reporting of disease by gender.

We did not find any association of educational level or age with agreement between self-report and the medical record. These characteristics have been shown in some previous studies to be associated with accuracy (1,7,13,29,38,39), while others have shown no relationship.(12,15,40)

A limitation of this study is that the medical record itself may contain substantial errors.(41,42) Thus, although we used the medical record as the “gold standard” against which we compared self-report, it is certainly possible that this assumption has limitations. It is also possible that the positive and negative classification of disease is influenced by the criteria for the disease itself and its ease of interpretation by the medical abstractor. The interpretation may also be limited by the knowledge and awareness of the abstractor for each specific diagnosis. Because the assumption is that a patient does not have a positive diagnosis unless evidence is found in the medical record, it may be easier to negatively code a subject than to confirm positive coding. However, in our study, we found prevalence to be higher in the medical records than in the self-reports. In the CHOICE study, the medical abstraction techniques were thorough, and multiple sources of medical records were utilized to make the final determination.

Another potential limitation of this study is with regard to the selected study sample. As shown in Table 2, those who were excluded from the study had a higher number of comorbid conditions. It is possible that those included represent a relatively healthier population of individuals who might also be more aware of their condition, since they frequent dialysis centers and were able to complete the baseline questionnaire.

Research and Clinical Implications

The implications of validation studies such as this one are considerable. Reliance on patient self-report without validation can lead to errors in determination of disease status and related inferences. This may affect enrollment eligibility in research studies and may affect outcome analyses that depend upon self-report for disease classification. Our study shows that among ESRD patients, self-report accuracy varies by specific disease. This information may be used to determine the allocation of resources in studies to adjudicate specific diagnoses or to evaluate studies that rely upon self-reported information.

This validation study of self-report in ESRD patients also has importance in clinical practice, revealing a low level of awareness among some patients regarding their own health condition. Moreover, it suggests a greater need for education of patients by their health care providers and further study to determine the utility of these interventions.


References

1. Haapanen N, Miilunpalo S, Pasanen M, Oja P, Vuori I. Agreement between questionnaire data and medical records of chronic diseases in middle-aged and elderly Finnish men and women. Am J Epidemiol. 1997;145:762–769. [PubMed]
2. Heliovaara M, Aromaa A, Klaukka T, Knekt P, Joukamaa M, Impivaara O. Reliability and validity of interview data on chronic diseases. The Mini-Finland Health Survey. J Clin Epidemiol. 1993;46:181–191. [PubMed]
3. Johansson J, Hellenius ML, Elofsson S, Krakau I. Self-report as a selection instrument in screening for cardiovascular disease risk. Am J Prev Med. 1999;16:322–324. [PubMed]
4. Natarajan S, Lipsitz SR, Nietert PJ. Self-report of high cholesterol: determinants of validity in U.S. adults. Am J Prev Med. 2002;23:13–21. [PubMed]
5. Tretli S, Lund-Larsen PG, Foss OP. Reliability of questionnaire information on cardiovascular disease and diabetes: cardiovascular disease study in Finnmark county. J Epidemiol Community Health. 1982;36:269–273. [PMC free article] [PubMed]
6. Weaver MF, Cropsey KL, Fox SA. HCV prevalence in methadone maintenance: self-report versus serum test. Am J Health Behav. 2005;29:387–394. [PubMed]
7. Wu SC, Li CY, Ke DS. The agreement between self-reporting and clinical diagnosis for selected medical conditions among the elderly in Taiwan. Public Health. 2000;114:137–142. [PubMed]
8. Coleman EA, Wagner EH, Grothaus LC, Hecht J, Savarino J, Buchner DM. Predicting hospitalization and functional decline in older health plan enrollees: are administrative data as accurate as self-report? J Am Geriatr Soc. 1998;46:419–425. [PubMed]
9. O'Donnell CJ, Glynn RJ, Field TS, Averback R, Satterfield S, Friesenger GC, 2nd, Taylor JO, Hennekens CH. Misclassification and under-reporting of acute myocardial infarction by elderly persons: implications for community-based observational studies and clinical trials. J Clin Epidemiol. 1999;52:745–751. [PubMed]
10. Chaudhry S, Jin L, Meltzer D. Use of a self-report-generated Charlson Comorbidity Index for predicting mortality. Med Care. 2005;43:607–615. [PubMed]
11. Bush TL, Miller SR, Golden AL, Hale WE. Self-report and medical record report agreement of selected medical conditions in the elderly. Am J Public Health. 1989;79:1554–1556. [PMC free article] [PubMed]
12. Kriegsman DM, Penninx BW, van Eijk JT, Boeke AJ, Deeg DJ. Self-reports and general practitioner information on the presence of chronic diseases in community dwelling elderly. A study on the accuracy of patients' self-reports and on determinants of inaccuracy. J Clin Epidemiol. 1996;49:1407–1417. [PubMed]
13. Okura Y, Urban LH, Mahoney DW, Jacobsen SJ, Rodeheffer RJ. Agreement between self-report questionnaires and medical record data was substantial for diabetes, hypertension, myocardial infarction and stroke but not for heart failure. J Clin Epidemiol. 2004;57:1096–1103. [PubMed]
14. Simpson CF, Boyd CM, Carlson MC, Griswold ME, Guralnik JM, Fried LP. Agreement between self-report of disease diagnoses and medical record validation in disabled older women: factors that modify agreement. J Am Geriatr Soc. 2004;52:123–127. [PubMed]
15. Skinner KM, Miller DR, Lincoln E, Lee A, Kazis LE. Concordance between respondent self-reports and medical records for chronic conditions: experience from the Veterans Health Study. J Ambul Care Manage. 2005;28:102–110. [PubMed]
16. Tisnado DM, Adams JL, Liu H, Damberg CL, Chen WP, Hu FA, Carlisle DM, Mangione CM, Kahn KL. What is the concordance between the medical record and patient self-report as data sources for ambulatory care? Med Care. 2006;44:132–140. [PubMed]
17. Colditz GA, Martin P, Stampfer MJ, Willett WC, Sampson L, Rosner B, Hennekens CH, Speizer FE. Validation of questionnaire information on risk factors and disease outcomes in a prospective cohort study of women. Am J Epidemiol. 1986;123:894–900. [PubMed]
18. Robinson JR, Young TK, Roos LL, Gelskey DE. Estimating the burden of disease. Comparing administrative data and self-reports. Med Care. 1997;35:932–947. [PubMed]
19. Scheitel SM, Boland BJ, Wollan PC, Silverstein MD. Patient-physician agreement about medical diagnoses and cardiovascular risk factors in the ambulatory general medical examination. Mayo Clin Proc. 1996;71:1131–1137. [PubMed]
20. St Sauver JL, Hagen PT, Cha SS, Bagniewski SM, Mandrekar JN, Curoe AM, Rodeheffer RJ, Roger VL, Jacobsen SJ. Agreement between patient reports of cardiovascular disease and patient medical records. Mayo Clin Proc. 2005;80:203–210. [PubMed]
21. Heckbert SR, Kooperberg C, Safford MM, Psaty BM, Hsia J, McTiernan A, Gaziano JM, Frishman WH, Curb JD. Comparison of self-report, hospital discharge codes, and adjudication of cardiovascular events in the Women's Health Initiative. Am J Epidemiol. 2004;160:1152–1158. [PubMed]
22. Klungel OH, de Boer A, Paes AH, Seidell JC, Bakker A. Cardiovascular diseases and risk factors in a population-based study in The Netherlands: agreement between questionnaire information and medical records. Neth J Med. 1999;55:177–183. [PubMed]
23. Midthjell K, Holmen J, Bjorndal A, Lund-Larsen G. Is questionnaire information valid in the study of a chronic disease such as diabetes? The Nord-Trondelag diabetes study. J Epidemiol Community Health. 1992;46:537–542. [PMC free article] [PubMed]
24. ESRD Medical Evidence Report Medicare Entitlement and/or Patient Registration. Centers for Medicare & Medicaid Services. 2004
25. Longenecker JC, Coresh J, Klag MJ, Levey AS, Martin AA, Fink NE, Powe NR. Validation of comorbid conditions on the end-stage renal disease medical evidence report: the CHOICE study. Choices for Healthy Outcomes in Caring for ESRD. J Am Soc Nephrol. 2000;11:520–529. [PubMed]
26. Powe NR, Klag MJ, Sadler JH, Anderson GF, Bass EB, Briggs WA, Fink NE, Levey AS, Levin NW, Meyer BK. Choices for healthy outcomes in caring for end-stage renal disease. Seminars in Dialysis. 1996;9:9–11.
27. Miskulin DC, Meyer KB, Athienites NV, Martin AA, Terrin N, Marsh JV, Fink NE, Coresh J, Powe NR, Klag MJ, Levey AS. Comorbidity and other factors associated with modality selection in incident dialysis patients: the CHOICE Study. Choices for Healthy Outcomes in Caring for End-Stage Renal Disease. Am J Kidney Dis. 2002;39:324–336. [PubMed]
28. Landis J, Koch G. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–174. [PubMed]
29. Bergmann MM, Byers T, Freedman DS, Mokdad A. Validity of self-reported diagnoses leading to hospitalization: a comparison of self-reports with hospital records in a prospective study of American adults. Am J Epidemiol. 1998;147:969–977. [PubMed]
30. Celli BR, MacNee W. Standards for the diagnosis and treatment of patients with COPD: a summary of the ATS/ERS position paper. Eur Respir J. 2004;23:932–946. [PubMed]
31. Hunt SA. ACC/AHA 2005 guideline update for the diagnosis and management of chronic heart failure in the adult: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Writing Committee to Update the 2001 Guidelines for the Evaluation and Management of Heart Failure) J Am Coll Cardiol. 2005;46:e1–82. [PubMed]
32. Prevalence of diabetes and impaired fasting glucose in adults--United States, 1999-2000. MMWR Morb Mortal Wkly Rep. 2003;52:833–837. [PubMed]
33. Norris SL, Nichols PJ, Caspersen CJ, Glasgow RE, Engelgau MM, Jack L, Snyder SR, Carande-Kulis VG, Isham G, Garfield S, Briss P, McCulloch D. Increasing diabetes self-management education in community settings. A systematic review. Am J Prev Med. 2002;22:39–66. [PubMed]
34. Lampe FC, Walker M, Lennon LT, Whincup PH, Ebrahim S. Validity of a self-reported history of doctor-diagnosed angina. J Clin Epidemiol. 1999;52:73–81. [PubMed]
35. U.S. Renal Data System. USRDS 2004 Annual Data Report: Atlas of End-Stage Renal Disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases; Bethesda: 2004.
36. Feinstein AR, Cicchetti DV. High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol. 1990;43:543–549. [PubMed]
37. Ford ES, Harel Y, Heath G, Cooper RS, Caspersen CJ. Test characteristics of self-reported hypertension among the Hispanic population: findings from the Hispanic Health and Nutrition Examination Survey. J Clin Epidemiol. 1990;43:159–165. [PubMed]
38. Giles WH, Croft JB, Keenan NL, Lane MJ, Wheeler FC. The validity of self-reported hypertension and correlates of hypertension awareness among blacks and whites within the stroke belt. Am J Prev Med. 1995;11:163–169. [PubMed]
39. Harlow SD, Linet MS. Agreement between questionnaire data and medical records. The evidence for accuracy of recall. Am J Epidemiol. 1989;129:233–248. [PubMed]
40. Tsubono Y, Fukao A, Hisamichi S, Hosokawa T, Sugawara N. Accuracy of self-report for stomach cancer screening. J Clin Epidemiol. 1994;47:977–981. [PubMed]
41. Luck J, Peabody JW, Dresselhaus TR, Lee M, Glassman P. How well does chart abstraction measure quality? A prospective comparison of standardized patients with the medical record. Am J Med. 2000;108:642–649. [PubMed]
42. Peabody JW, Luck J, Glassman P, Dresselhaus TR, Lee M. Comparison of vignettes, standardized patients, and chart abstraction: a prospective validation study of 3 methods for measuring quality. JAMA. 2000;283:1715–1722. [PubMed]