
Henriksen K, Battles JB, Marks ES, et al., editors. Advances in Patient Safety: From Research to Implementation (Volume 2: Concepts and Methodology). Rockville (MD): Agency for Healthcare Research and Quality (US); 2005 Feb.


Applying Patient Safety Indicators (PSIs) Across Health Care Systems: Achieving Data Comparability


Author Information


Veterans Administration Center for Health Quality, Outcomes, and Economics Research (PR, ARE, SL, SZ, DT, AKR). Carroll School of Management, Boston College (PR). Agency for Healthcare Research and Quality (AE). University of California, Davis (PSR). Boston University School of Public Health (AKR)
*Address correspondence to: Peter Rivard, MHSA; Veterans Administration Center for Health Quality, Outcomes, and Economics Research, 200 Springs Road (152), Bedford, MA 01730. Phone: 781-687-3573; e-mail: rivardp@bu.edu.

Abstract

Patient Safety Indicators (PSIs), developed by the Agency for Healthcare Research and Quality (AHRQ), are administrative data-based indicators that identify potential in-hospital patient safety events. This study developed and tested methods for (1) applying PSIs to Department of Veterans Affairs (VA) discharge data, and (2) comparing VA with non-VA PSI rates. VA inpatient data file structure and elements were modified in order to apply PSIs to VA data; further modifications were required to compare VA and non-VA PSI rates. We found that key measures, including demographics, clinical elements, and length of stay, as well as the PSI rates themselves, are sensitive even to minor data modifications. This paper demonstrates both the adaptation of a database for use with the PSIs, and the sensitivity of PSI rates to small differences in database characteristics. The paper shows how differences in data sources might affect comparisons of event rates across health care systems.

Introduction

Patient safety has become a national priority. However, due to the lack of standardized terminology or methodology for identifying patient safety problems, the rates of reported patient safety events vary widely in the literature. 1–27 The lack of a standard method is problematic for a number of reasons, including the fact that comparing quality of care, of which patient safety is an integral component, requires meaningful, reliable, and valid performance measures that can be used across health care systems and settings. Thus, development of standardized generic tools that can capture potentially preventable patient safety events is a necessary, though challenging, step in promoting a better understanding of the magnitude of the problem and in furthering the development of interventions aimed at reducing patient safety events.

Patient Safety Indicators and their development

The Patient Safety Indicators (PSIs), developed by the Agency for Healthcare Research and Quality (AHRQ) and revised by the University of California at San Francisco-Stanford University Evidence-based Practice Center (UCSF-Stanford EPC), are a set of administrative data-based indicators used to identify potential in-hospital patient safety events. 28 The AHRQ PSIs have their roots in the Institute of Medicine's definition of patient safety: “freedom from accidental injury caused by medical care.” 14 This definition has since been expanded to include “the failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim. Errors can include problems in practice, products, procedures, and systems.” 29 The PSIs are measured as rates defined as outcome of interest/population at risk. For example, the rate of the hospital-level PSI Complications of Anesthesia is the number of discharges with this complication, divided by the total number of surgical discharges. 30 PSIs track surgical complications and other iatrogenic events, screening for “potential problems that patients experience resulting from exposure to the health care system, and that are likely amenable to prevention by changes at the system level.” 28
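
As a simple illustration of this rate definition, the following sketch uses hypothetical counts, not data from this study:

```python
# Minimal numeric sketch of a PSI rate; the counts below are hypothetical.
numerator = 42        # e.g., discharges flagged with a complication of anesthesia
denominator = 22_000  # all surgical discharges in the risk pool
rate_per_1000 = 1000 * numerator / denominator
print(f"{rate_per_1000:.2f} per 1,000 surgical discharges")  # prints 1.91
```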

PSIs represent a significant advance in the development of a methodology for identifying patient safety events. The PSIs, unlike previous measures evaluating complications or adverse events related to hospitalization, were specifically developed to capture those instances representing potentially preventable adverse events that compromise patient safety in the inpatient setting, such as surgical complications, death in cases with low-mortality diagnoses, and decubitus ulcers. 3, 28 In this paper, we focus on AHRQ's accepted hospital-level PSIs, which were developed through a four-step process that included literature review, evaluation of candidate PSIs by clinical panels, expert review of ICD-9-CM codes in candidate PSIs, and empirical analyses of candidate PSIs. 28, 31, 32 These indicators show good face and construct validity and specificity. 28, 32–34 Since the purpose of the hospital-level PSIs is to identify instances where a complication of care occurs during a given hospital stay, PSI cases include only those in which a secondary diagnosis code—rather than the principal diagnosis—flags a potential patient safety event. Of the 20 accepted hospital-level PSIs, 8 are for surgical discharges, 8 are for either medical or surgical discharges, and 4 are for obstetric discharges. Because this study compares PSIs from Department of Veterans Affairs (VA) discharge data to non-VA discharge data, we exclude the four obstetric PSIs here as they are not relevant to the VA. In this study, we use Version 2.1 of the AHRQ PSI software, released March 2003. Table 1 contains the definitions of the numerators, denominators, and exclusion criteria for the 16 hospital-level PSIs used in this study.

Table 1. Definitions of accepted hospital-level AHRQ Patient Safety Indicators (excludes obstetric and birth trauma indicators).

Use of administrative data

Compared to other methods of detecting patient safety events (e.g., error reporting systems and medical records), 5, 17 the PSIs offer several advantages. They capitalize on the unique attributes of hospital discharge administrative data, which are relatively inexpensive to use, readily available, computer readable, and typically encompass large populations, thereby facilitating population-level assessments based on calculation of event rates. 7, 32–34 Despite extensive empirical evaluation and clinical review, 28, 31, 32, 34 concerns similar to those raised about the use of administrative-data-based algorithms for identifying substandard care have surfaced in response to the development of the PSIs. 35 A recent publication linking PSIs with increased mortality, length of stay, and charges 33 generated considerable debate about the usefulness of the PSIs as a measure of hospital-acquired injuries. 36–38 Notwithstanding such controversy, the development of the PSIs has opened up new opportunities for screening potential patient safety events and paved the way for implementing patient safety initiatives and benchmarking hospital performance. 31, 32, 34

Research objectives

The purpose of this study is to develop and test methods for applying the PSIs to hospital discharge data from the Department of Veterans Affairs (VA) and for comparing VA with non-VA PSI rates. Because the PSIs were developed and tested using computerized hospital discharge abstracts from AHRQ's Healthcare Cost and Utilization Project (HCUP), PSI definitions are based on a core set of variables available from standardized hospital discharge abstracts. The abstracts are formatted using clinical and nonclinical data elements from the 1992 Uniform Bill (UB-92) hospital claims, considered the institutional claim standard. 7 However, unlike most State-level hospital administrative databases, which contain standardized discharge abstracts, VA databases have evolved using distinctive formatting structure and data element definitions. Furthermore, VA hospital discharge data contain both acute and nonacute care, whereas HCUP data contain information only from acute care hospitals. Therefore, it is necessary to modify some VA data elements to provide the appropriate inputs required by the PSI algorithms. Such differences in data elements and structure between the VA and non-VA setting (as well as across other health care systems) could affect comparisons of PSI event rates.

In this paper, we describe the modifications we made to VA file structure and data elements to (1) generate valid indicator rates using PSI software on VA data, and (2) compare PSI rates between the VA and HCUP datasets. Our goal is to present what we have learned, thereby facilitating the work of other researchers and practitioners who wish to use the PSIs, particularly for comparison across systems where there are differences in the nature and structure of the data.

Methods

VA patient treatment file

VA administrative databases contain diagnostic, demographic, and utilization information on all veterans who receive health care services in the VA. The unit of analysis is the hospitalization, but because each patient has a unique identifier, hospitalizations can be linked by patient across datasets and fiscal years and aggregated at the patient level. Both acute and nonacute hospitalization data on veterans discharged from VA inpatient facilities are contained in the Patient Treatment File (PTF). Currently, 140 VA hospitals nationwide provide information to the PTF. 39 The PTF comprises four subfiles, referred to as the Main, Bedsection, Procedure, and Surgery files. 39, 40

The PTF Main file contains demographic (e.g., age, sex, date of birth), diagnostic (one principal, one primary, and up to nine secondary ICD-9-CM diagnosis codes), and summary information related to each episode of care (e.g., facility identifier, dates of admission and discharge, setting of care within facility, discharge status). The PTF Bedsection file (i.e., a patient's stay under a specified treating physician's specialty service) contains one primary and up to four secondary bedsection diagnoses, as well as length of stay information for each stay in a physician's specialty service. A patient may have several bedsection records for one hospitalization. The PTF Procedure file includes the ICD-9-CM procedure code, date, time, and place of all procedures during the inpatient stay; similarly, the PTF Surgery file contains data on each hospitalization's ICD-9-CM surgery codes and surgical specialty.
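
For readers who work with these files programmatically, the following Python sketch illustrates how the four subfiles relate to a single hospitalization record. The class and field names are placeholders chosen for illustration; they are not the actual PTF variable names, and the structures are simplified (e.g., the Main file's separate primary diagnosis field is omitted).

```python
# Illustrative sketch of the PTF structure; names are placeholders, not PTF variables.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class BedsectionStay:
    specialty: str            # treating physician's specialty service
    is_acute: bool            # acute vs. nonacute per the HERC definitions (see Methods)
    primary_dx: str           # primary bedsection diagnosis (ICD-9-CM)
    secondary_dx: List[str]   # up to four secondary bedsection diagnoses
    drg: int                  # DRG assigned from this bedsection's diagnoses and procedures
    los_days: int

@dataclass
class ProcedureRecord:
    icd9_code: str
    performed_at: datetime    # procedure date and time
    is_surgery: bool          # True if the record comes from the Surgery subfile

@dataclass
class Hospitalization:        # one PTF Main record plus its linked subfile records
    patient_id: str           # unique identifier; links records across files and years
    facility_id: str
    admit: datetime
    discharge: datetime
    drg: int
    principal_dx: str
    secondary_dx: List[str]   # up to nine ICD-9-CM secondary diagnosis codes
    bedsections: List[BedsectionStay] = field(default_factory=list)
    procedures: List[ProcedureRecord] = field(default_factory=list)
```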

AHRQ HCUP State inpatient databases

HCUP is a Federal-State-private sector collaboration sponsored by AHRQ that collects hospital discharge abstract data for research purposes. Currently, 36 States participate in HCUP, with the data comprising approximately 90 percent of all hospital discharges in the United States. 41, 42 Each HCUP inpatient record summarizes one hospital discharge. Statewide hospital data collection programs voluntarily submit data to HCUP, where the data are converted into a uniform format to facilitate multistate analyses. The State databases differ in how many diagnoses and procedures can be recorded for a hospital stay: some States provide up to 30 diagnosis and procedure fields, while others have as few as 10. The uniformly formatted HCUP discharge data are available as State Inpatient Databases (SID), which include all data from hospitals in participating States. The Nationwide Inpatient Sample (NIS) is drawn from the SID and approximates a 20 percent sample of all U.S. hospitals. For this analysis, we draw on a previously published study that applied the PSIs to the NIS; a subsequent study will compare PSI rates in the VA with data from the SID.

Modifying VA data for PSI software

Data required for PSI software

PSI algorithms link diagnosis and procedure codes with other information contained in standardized hospital discharge data, such as diagnosis related group (DRG) and admission type, to generate PSI event rates. The required data elements are age, sex, race, hospital identification number, disposition of patient, admission type, admission source, length of stay (LOS), DRG, major diagnostic category (MDC), ICD-9-CM principal and secondary diagnosis codes, ICD-9-CM principal and secondary procedure codes, number of diagnoses, number of procedures, and days from admission to procedure. 30 Input data files must be in Statistical Analysis System (SAS) or Statistical Package for the Social Sciences (SPSS) format to run the PSI software.
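
A minimal sketch of the flat, per-discharge input record implied by this list follows; the key names and example values are illustrative placeholders, since the actual SAS/SPSS variable names are defined in the AHRQ PSI documentation.

```python
# Illustrative per-discharge input record for the PSI software; key names and
# values are placeholders, not the AHRQ-defined SAS/SPSS variable names.
psi_input_record = {
    "age": 67,
    "sex": "M",
    "race": "white",
    "hospital_id": "523",
    "disposition": "home",
    "admission_type": "elective",      # derived for VA data (see "Admission type" below)
    "admission_source": None,          # used by the software when admission type is missing
    "length_of_stay": 5,
    "drg": 209,
    "mdc": 8,
    "diagnoses": ["71536", "4019"],    # principal diagnosis first, then secondaries
    "procedures": ["8151"],            # principal procedure first, then secondaries
    "n_diagnoses": 2,
    "n_procedures": 1,
    "days_to_procedure": [1],          # days from admission to each procedure
}
```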

Modifications to VA data elements

To create a database for fiscal year (FY) 2001, we used all PTF records for veterans-only hospitalizations with discharge dates between October 1, 2000, and September 30, 2001. Admission dates could precede October 1, 2000; therefore, some lengths of stay exceed 365 days. Some required PSI data elements were present in the VA files and needed minimal or no recoding: age, sex, race, LOS, hospital identifier, disposition of patient, DRG, MDC, and principal and secondary diagnoses. Other data were available but needed to be calculated or modified: principal procedure, admission type, and days from admission to procedure. Finally, one data element—admission source—was missing entirely and had to be constructed from other existing data elements. The following is a discussion of the data elements that required modification.

Principal procedure. The definitions of three PSIs (postoperative hemorrhage or hematoma, postoperative pulmonary embolism or deep vein thrombosis, and postoperative wound dehiscence) include the principal procedure, and several PSI definitions include secondary (i.e., any but the principal) procedures. Because VA files do not designate a principal procedure, we developed an algorithm to do so. The chronologically first procedure in the Surgery file—or the first in the Procedure file if there is no Surgery file—would seem a logical candidate. However, because the Surgery and Procedure files record procedures in considerable detail, relatively minor procedures (e.g., administering oxygen) as well as major but nonsurgical procedures (e.g., cardiac catheterization) may precede major surgeries (e.g., bypass surgery). Designating these first minor or nonsurgical procedures “principal” would contradict the logic of these PSIs: the principal procedure—defined as the procedure performed for definitive treatment or the procedure most related to the principal diagnosis—should be a clinically plausible cause of the potential safety event.

Because output from the DRG grouping software indicates whether a given procedure is typically performed in the operating room (OR), we elected to use a list of all ICD-9-CM procedure codes designated as “valid OR procedures” by the DRG grouper to identify “true” surgeries in the Surgery and Procedure files. Using this list as the criterion, nearly 3 percent of FY2001 VA hospitalizations with surgical DRGs did not include valid OR procedures. These represented surgical DRGs with particular combinations of non-OR procedures (e.g., certain pacemaker insertions or temporary tracheostomies). Our clinician team elected to eliminate cases without valid OR procedures from consideration for surgical PSIs, but to include them in the risk pools for PSIs that are not limited to surgical discharges. We then modified our algorithm to identify the principal procedure as the chronologically first valid OR procedure from either the Surgery or the Procedure file.
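
A sketch of this selection rule, reusing the illustrative record structures above, is shown below. The Surgery-file precedence reflects our reading of the rule, the valid-OR-procedure list is assumed to be available as a lookup set, and the function name is ours, not the authors'.

```python
def principal_procedure(hosp, valid_or_codes):
    """Sketch of the principal-procedure rule (illustrative, not the authors' code).

    valid_or_codes: set of ICD-9-CM codes the DRG grouper designates as valid
    OR procedures. Surgery-file procedures are considered first; if none
    qualifies, the Procedure file is searched. A return value of None means
    the discharge has no valid OR procedure: it is dropped from the surgical
    PSIs but kept in the medical/surgical risk pools.
    """
    surgical = [p for p in hosp.procedures
                if p.is_surgery and p.icd9_code in valid_or_codes]
    nonsurgical = [p for p in hosp.procedures
                   if not p.is_surgery and p.icd9_code in valid_or_codes]
    candidates = surgical or nonsurgical
    return min(candidates, key=lambda p: p.performed_at) if candidates else None
```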

Admission type. The PSIs detect potential events primarily related to elective surgery. Three PSIs—postoperative physiological and metabolic derangements, postoperative pulmonary compromise, and postoperative sepsis—exclude nonelective hospitalizations from the PSI denominator. This reflects the logic that any complications occurring in patients admitted for nonelective surgeries, or urgent/emergent conditions, are less likely to be preventable, given the need for immediate care. To exclude nonelective admissions, the PSI software searches for admission type (emergent/urgent, elective); if admission type is missing, the PSI software uses the admission source (e.g., emergency department, transfer from hospital, long-term care) to determine admission type. The VA PTF lacks an admission type field and, although there is a field for admission source, there is no code for one important source, the emergency department.

We developed an algorithm that uses DRG, admission date and time, and principal procedure date and time, to distinguish between elective and nonelective cases. The algorithm first screens out urgent/emergent admissions using a list of DRG codes. This list of nonelective DRGs, originally developed for use with California hospital data that lacked admission type, included mostly trauma DRG codes; our clinicians added codes for other surgeries also considered urgent/emergent (e.g., appendectomy). The algorithm then determines the time elapsed between admission and principal procedure. To be consistent with the Complications Screening Protocol, we designated surgery performed on the day of or the day after admission as elective, with surgery performed on or after the third day as nonelective. 18 Urgent/emergent surgeries, such as appendectomy upon admission, are excluded by the trauma DRG screen. Other urgent/emergent cases could involve surgery after the second day of hospitalization, allowing time to stabilize the patient and perform diagnostic testing. Because VA data include admission and procedure times, as well as dates, we refined the algorithm to exclude (i.e., designate urgent/emergent) weekend and evening (5 p.m. to 5 a.m.) admissions and hospitalizations where the principal procedure occurs on a weekend or in the evening. The remaining hospitalizations are considered elective.
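
The screens just described can be sketched as follows, again reusing the illustrative structures above. The nonelective DRG list is assumed to be available as a set, the function name is ours, and the use_times flag anticipates the HCUP-comparison variant described later in this paper.

```python
def is_elective(hosp, principal_proc, nonelective_drgs, use_times=True):
    """Sketch of the elective-admission screens (illustrative, not the authors' code).

    nonelective_drgs: the trauma/urgent DRG list described above, as a set.
    use_times=False drops the weekend/evening screens, which require the
    admission and procedure times that HCUP data lack.
    """
    if hosp.drg in nonelective_drgs:
        return False                            # urgent/emergent DRG screen
    if principal_proc is None:
        return False                            # no valid OR procedure; outside the elective surgical pool
    days_to_surgery = (principal_proc.performed_at.date() - hosp.admit.date()).days
    if days_to_surgery >= 2:
        return False                            # surgery on or after the third day
    if use_times:
        for ts in (hosp.admit, principal_proc.performed_at):
            if ts.weekday() >= 5:               # weekend admission or procedure
                return False
            if ts.hour >= 17 or ts.hour < 5:    # evening, 5 p.m. to 5 a.m.
                return False
    return True
```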

Modifying VA data for comparison with non-VA data

Modifications to VA file structure

Compared to HCUP data, the four VA inpatient data files appear to be a richer source of patient-level and hospitalization-level information. Examples of this include the ability to link multiple hospital stays associated with one patient; the availability of bedsection data; more diagnosis and procedure codes; and admission, discharge, and procedure times. However, to assure a meaningful comparison of PSI rates between the VA and HCUP, we modified the PTF files to “equalize” information between the two data sources, as described below.

Acute-only hospitalizations. Because VA hospitalization records include nonacute and acute care, while HCUP records contain acute care only, we eliminated nonacute care from the VA records using the VA Health Economics Resource Center's (HERC) definitions of acute and nonacute bedsections. 43, 44 HERC defines the following bedsections as nonacute: rehabilitation, spinal cord, substance abuse, domiciliary, long-term care, blind rehabilitation, psychiatry, intermediate medicine, and psychosocial residential rehabilitation treatment program (PRRTP). We separated the PTF Main file hospitalizations into three groups by examining the bedsections associated with each hospitalization: (1) “pure” acute hospitalizations (all bedsections in the hospitalization are acute), (2) “pure” nonacute hospitalizations (all bedsections in the hospitalization are nonacute), and (3) “mixed” hospitalizations (one hospitalization includes both acute and nonacute bedsections).

Pure acute hospitalizations were left unmodified. Pure nonacute hospitalizations were excluded from the analytical database. We eliminated nonacute bedsections from mixed hospitalizations, and created entirely new acute hospitalizations from the remaining sets of contiguous acute bedsections. More than one new hospitalization may be created from a mixed hospitalization (e.g., a mixed hospitalization with one nonacute bedsection between acute bedsections becomes two acute hospitalizations). Elimination of nonacute bedsections from mixed hospitalizations resulted in some acute hospitalizations with new discharge dates prior to FY2001; those hospitalizations were eliminated.
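
The splitting step can be sketched as follows, assuming bedsection stays are stored in chronological order and that consecutive stays abut one another; recomputing segment dates from bedsection lengths of stay is a simplification for illustration, and the function name is ours.

```python
from dataclasses import replace
from datetime import date, timedelta
from itertools import groupby

def split_into_acute_hospitalizations(hosp, fy_start=date(2000, 10, 1)):
    """Sketch of the acute-only splitting step (illustrative, not the authors' code).

    Pure acute stays pass through as a single hospitalization, pure nonacute
    stays yield nothing, and mixed stays are broken at each nonacute
    bedsection so that every run of contiguous acute bedsections becomes a
    new hospitalization. Runs whose recomputed discharge date falls before
    FY2001 are dropped. Principal diagnosis, DRG, and principal procedure for
    the new records are reassigned by the rules in the next subsection.
    """
    new_hosps = []
    cursor = hosp.admit  # running start of the next bedsection segment
    for is_acute, run in groupby(hosp.bedsections, key=lambda b: b.is_acute):
        run = list(run)
        segment_start = cursor
        cursor = cursor + timedelta(days=sum(b.los_days for b in run))
        if not is_acute:
            continue
        new_hosp = replace(hosp, admit=segment_start, discharge=cursor, bedsections=run)
        if new_hosp.discharge.date() >= fy_start:
            new_hosps.append(new_hosp)
    return new_hosps
```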

We thus created a new database from the four existing PTF files, with the hospital discharge as the unit of analysis. The resulting hospital discharge summary file, constructed of data aggregated from the PTF Main, Bedsection, Procedure, and Surgery files, contained only acute care, and was similar in structure to the standardized hospital discharge abstract found in HCUP data. Although we risked losing some potentially useful information from the VA by making these changes, this risk was minimized because, as we describe in the next section, we selected the unique data elements from each of the files for the summary discharge file.

Creation of new summary file. Principal procedure: The principal procedure algorithm is essentially the same for the new aggregated hospitalizations as for the pure acute hospitalizations: the chronologically first valid OR procedure from the Surgery or Procedure file is selected, as long as its date falls between the new admission and new discharge dates.

Diagnosis: The distinction between principal and secondary diagnoses is central to several PSI definitions. The diagnosis related to the reason for admission is principal. In VA PTF hospitalization records, the Main file principal diagnosis is consistent with this definition and can be used as principal diagnosis for the PSIs. However, because our method of aggregating acute hospitalizations eliminated nonacute bedsections, and some of those nonacute bedsections preceded the acute portion of the hospitalization, the principal diagnosis from the Main file may have originated from the nonacute bedsection and may not have applied to the acute hospitalization. Therefore, for the newly aggregated hospitalizations, we created the following rules for determining principal diagnosis: (1) If the new and original admission dates are the same, use the Main file principal diagnosis; (2) if the dates are different, choose the primary diagnosis (diagnosis responsible for the bedsection stay) from the first bedsection as a proxy for principal diagnosis (bedsection files lack a principal diagnosis field).
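
These two rules reduce to a short sketch (again using the illustrative structures above; the function name is ours):

```python
def principal_diagnosis(new_hosp, original_hosp):
    """Sketch of the principal-diagnosis rules for aggregated hospitalizations.

    Rule (1): if the new admission date equals the original admission date,
    keep the Main-file principal diagnosis. Rule (2): otherwise use the
    primary diagnosis of the first remaining bedsection as a proxy, because
    bedsection records carry no principal-diagnosis field.
    """
    if new_hosp.admit.date() == original_hosp.admit.date():
        return original_hosp.principal_dx
    return new_hosp.bedsections[0].primary_dx
```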

DRG: The DRG grouper designates each DRG as either medical or surgical; PSI software uses DRGs to classify a discharge as surgical or medical. The problem and solution for identifying the correct DRG for the new summary file parallel those for principal diagnosis. While the DRG from the PTF Main file is appropriate for pure acute hospitalizations when no nonacute bedsections were discarded, in the newly aggregated acute hospitalizations, a DRG must be selected from among the DRGs associated with each of the one or more remaining acute bedsections. The DRG for each bedsection is assigned based on the diagnoses and procedures associated with that bedsection. Since the DRG grouper assigns a weight to each DRG for costing purposes, our solution was to select the bedsection DRG with the highest weight for our new hospitalization summary file. Our rationale was that in the private sector, this DRG would be selected to maximize reimbursement. This also follows the HERC method, which selects the highest weighted DRG as the discharge DRG among the different bedsection DRGs.
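
A sketch of this selection, assuming the grouper's DRG weights are available as a lookup table (the function name is ours):

```python
def discharge_drg(new_hosp, drg_weights):
    """Sketch of the discharge-DRG rule for aggregated hospitalizations.

    Each remaining acute bedsection carries its own DRG; following the HERC
    convention described above, the bedsection DRG with the highest grouper
    weight becomes the discharge DRG. drg_weights maps DRG -> relative weight.
    """
    return max((b.drg for b in new_hosp.bedsections), key=lambda d: drg_weights[d])
```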

Admission type: The admission-type algorithm described earlier, developed for use with the original VA PTF Main file data, could not be used in its entirety for the HCUP comparison; HCUP data do not include admission times or procedure times. Therefore, we eliminated admission time and principal procedure times from the screens used by the algorithm.
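
In terms of the earlier is_elective sketch, this variant simply disables the time-based screens:

```python
# HCUP-comparison variant of the earlier sketch: the weekend/evening screens
# are dropped because HCUP data carry neither admission nor procedure times.
elective_flag = is_elective(hosp, principal_procedure(hosp, valid_or_codes),
                            nonelective_drgs, use_times=False)
```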

Results

Effects of modifying file structure

Table 2 compares the modified VA data to the original VA data files and structure. Compared to the complete PTF database (left-hand column), the modified acute-only file (right-hand column) contains substantially fewer hospitalizations, as expected, because we eliminated nonacute bedsections. These modifications reduced the total number of hospitalizations by 22 percent, from 561,229 to 439,537. The reduction in total records resulting from the elimination of pure nonacute hospitalizations was offset somewhat by the creation of new acute hospitalizations from the mixed acute/nonacute records. Reconfiguring the mixed acute/nonacute hospitalizations (about 8 percent of the total) generated from one to eight new acute hospitalizations from each mixed hospitalization. In addition, median and mean lengths of stay decreased by 1 and approximately 4 days, respectively, while median and mean age increased by about 2 and 3 years, respectively. There were minor differences in sociodemographic characteristics between the two sets of hospitalizations.

Table 2. VA hospitalizations: FY2001 discharges, veterans only.

Table 3 depicts the numerators, denominators, and unadjusted rates for each of the PSIs, using both the original VA database and the aggregated acute-only database. Both the numerators and denominators decreased for all PSIs; however, the rates increased for some PSIs and decreased for others. For example, the rate of infection due to medical care increased from 1.91 to 2.37 per 1,000 discharges, while the rate of postoperative hip fracture decreased from 1.90 to 1.14 per 1,000 discharges.

Table 3. Unadjusted FY2001 rates for accepted hospital-level Patient Safety Indicators: comparison of complete VA files with acute-only VA files.

Effects of modifying admission type definition

The VA-only database has a smaller number of elective admissions because hospitalizations with evening and nighttime admissions (after 5 p.m.) or with principal procedure occurring after hours (after 5 p.m.) are also designated nonelective; whereas, in the VA-HCUP comparison database, those “after-hours” cases remain in the pool of hospitalizations designated elective. In the VA-only database, elective admissions make up 43 percent of all surgical hospitalizations with one or more valid OR procedures; in the VA-HCUP comparison database, this rate of elective admissions rises to 61 percent. Table 4 compares PSI rates for the three indicators for which the risk pool is elective admissions only. Removing admission and procedure times from the algorithm (i.e., more admissions designated “elective”) increased the rates of all three PSIs, suggesting that the definition that includes admission and procedure times may have better specificity (i.e., capturing the cases that are “true” PSIs).

Table 4. Unadjusted FY2001 rates for accepted hospital-level Patient Safety Indicators (VA, acute-only).

PSI rates

Table 5 compares the PSI rates that were calculated using the VA aggregated-acute database with rates presented in a previously published nationwide study. As noted in the table, some PSI rates are not fully comparable between the two studies due to changes in PSI inclusion/exclusion criteria between the earlier version of the PSI software used by Romano et al. 32 and Version 2.1 used in this study. The nationwide rates in Table 5 are based on NIS data from AHRQ, as reported by Romano et al. Therefore, the original admission-type field, and not the admission-type algorithm described above, was used to determine elective admissions in the Romano et al. study. Overall, VA and non-VA rates are similar. However, for a few indicators, rates differ substantially. For example, for death in low-mortality DRGs among males age 65–74 years, the VA rate is 4.28 per thousand, while the non-VA rate is 2.02; for postoperative sepsis in this age group, the VA rate is 6.48 per thousand while the non-VA rate is 12.32.

Table 5. Unadjusted rates for accepted hospital-level Patient Safety Indicators: comparison of VA FY2001 (acute-only) and HCUP 2000 Nationwide Inpatient Sample.

Discussion

The purpose of this study was to develop and test methods for applying PSIs to VA hospital discharge data and for comparing VA with non-VA PSI rates. The VA inpatient administrative database was modified to bridge apparent differences between the VA database and the HCUP databases. First, modifications were needed because certain elements required by the PSI software were missing or defined differently in the VA data. While some modifications had little or no impact on database characteristics or PSI rates, others, such as designating principal procedure and admission type, had substantial impact. Differences among the rules that we considered for designating principal procedure, and thereby for designating all other procedures as secondary, affected both the numerators and denominators for several PSIs. Similarly, the choice of algorithm to designate elective admissions substantially affected both the rates of elective admissions and those of three PSIs.

Second, the greatest difference between VA and non-VA databases that might affect our results was the inclusion of nonacute care in the VA data. For comparison of VA and non-VA PSI rates, it was necessary to eliminate nonacute hospitalizations as well as the nonacute portions of mixed acute/nonacute hospitalizations, and then reaggregate the remaining acute portions of the mixed hospitalizations. The altered file structures gave new characteristics to the reaggregated hospitalizations, requiring us to make further changes to certain data elements in the affected files, including length of stay, principal procedure, principal diagnosis, and DRG. This “equalization” of the VA and HCUP databases achieved comparability of PSI rates between the two sources; however, it also produced a less rich and less faithful representation of veterans' inpatient care.

Finally, there were substantial differences in some PSI rates between the complete and the acute-only VA databases. This demonstrates the sensitivity of PSI rates to changes in data aggregation and to inclusion of nonacute care. It also indicates that PSIs can identify potential safety events in the nonacute setting as well as the acute care setting, and suggests that certain PSI events (e.g., postoperative hip fractures) are relatively more likely than others to occur in the course of nonacute care. In addition, while a portion of the difference in the rate between the original VA data and the acute-only data is attributable directly to the elimination of nonacute hospital stays and portions of hospital stays, a portion is also an artifact of changes in principal procedure, principal diagnosis, DRG, and number of procedure codes for mixed acute and nonacute hospitalizations. A change in any of these elements of the hospitalization record can result in a record being excluded or included in a PSI numerator or denominator.


Our results agree with the literature that raises concerns over the use of administrative data-based algorithms for detecting patient safety events. 15, 35 While this approach to measuring patient safety has distinct advantages over other methods, it carries the risk of capturing false events. Our results also suggest that other health care systems may need to make similar modifications to their data in order to apply the PSI software. While this study demonstrated that it is possible to modify data elements to achieve a high degree of comparability across health care systems with different databases, such differences in data elements and structure between systems could still affect comparisons of PSI event rates. Moreover, because we did not examine risk-adjusted PSI rates in this study, further research is needed to determine how meaningful these differences really are.

Conclusion

We have demonstrated the sensitivity of PSI rates to differences in data file structure and to definitions and sources of data elements. The consequences of this sensitivity are amplified by the fact that PSI rates are inherently low: most PSI rates are in the range of one to five per thousand hospitalizations. Therefore, differences in data structures and algorithms that add or subtract just one or a few cases from the numerator of a PSI for a given population and time period could make a meaningful difference in the overall PSI rate.
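
A hypothetical arithmetic illustration (the counts are invented solely to show the scale of the effect):

```python
# With 20,000 eligible discharges and 40 flagged cases (2.00 per 1,000),
# reclassifying just two cases moves the rate by 5 percent.
denominator = 20_000
for numerator in (40, 42):
    print(f"{1000 * numerator / denominator:.2f} per 1,000")
# 2.00 per 1,000
# 2.10 per 1,000
```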

Acknowledgments

The research reported here was supported by the Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service, Project IIR 02-144-1, Amy K. Rosen, Principal Investigator. The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs. The authors are very grateful to Cindy L. Christiansen, Ph.D., for statistical consultation; Jeff Geppert, J.D., for assistance on modification of VA data; Denise Remus, Ph.D., for assistance with AHRQ PSIs; and Priti Trivedi for assistance with manuscript preparation.

References

1.
Roos LL, Brazauskas R. Outcomes and quality assurance: facilitating the use of administrative data. Qual Assur Health Care. 1990;(2):77–88. [PubMed: 2103874]
2.
Miller MR, Elixhauser A, Zhan C. Patient safety events during pediatric hospitalizations. Pediatrics. 2003;111:1358–66. [PubMed: 12777553]
3.
Miller M, Elixhauser A, Zhan C. et al. Patient Safety Indicators: using administrative data to identify potential patient safety concerns. Health Serv Res. 2001;36:110–32. [PMC free article: PMC1383610] [PubMed: 16148964]
4.
Iezzoni LI. Assessing quality using administrative data. Ann Intern Med. 1997;127:666–74. [PubMed: 9382378]
5.
Iezzoni LI, Mackiernan YD, Cahalane MJ. et al. Screening inpatient quality using post-discharge events. Med Care. 1999 Apr;37(4):384–98. [PubMed: 10213019]
6.
Iezzoni LI, Davis RB, Palmer RH. et al. Does the complications screening program flag cases with process of care problems? Int J Qual Health Care. 1999 Apr;11(2):107–18. [PubMed: 10442841]
7.
Iezzoni, LI. Risk adjustment for measuring health care outcomes. Chicago: Health Administration Press; 2003.
8.
Hofer TP, Kerr EA, Hayward RA. et al. What is an error? Eff Clin Pract. 2000 Nov–Dec;3(6):261–9. [PubMed: 11151522]
9.
Hayward RA, Hofer TP. Estimating hospital deaths due to medical errors: preventability is in the eye of the reviewer. JAMA. 2001;286:415–20. [PubMed: 11466119]
10.
Brennan TA. The Institute of Medicine report of medical errors—could it do harm? N Engl J Med. 2000 Apr 13;342(15):1123–5. [PubMed: 10760315]
11.
Leape LL. Institute of Medicine medical error figures are not exaggerated. JAMA. 2000;284:95–7. [PubMed: 10872022]
12.
McDonald CJ, Weiner M, Hui SL. Deaths due to medical errors are exaggerated. JAMA. 2000 Jul 5;284(1):93–5. [PubMed: 10872021]
13.
Thomas EJ, Studdert DM, Newhouse JP. et al. Costs of medical injuries in Utah and Colorado. Inquiry. 1999 Fall;36(3):255–64. [PubMed: 10570659]
14.
Kohn LT, Corrigan JM, Donaldson MS, editors. To err is human: building a safer health system. A report of the Committee on Quality of Health Care in America, Institute of Medicine. Washington, DC: National Academy Press; 2000.
15.
Weingart SN, Iezzoni LI, Davis RB. et al. Use of administrative data to find substandard care: validation of the complications screening program. Med Care. 2000;38(8):796–806. [PubMed: 10929992]
16.
Kalish RL, Daley J, Duncan CC. et al. Costs of potential complications of care for major surgery patients. Am J Med Qual. 1995;10(1):48–54. [PubMed: 7727988]
17.
Iezzoni LI, Daley J, Heeren T. et al. Using administrative data to screen hospitals for high complication rates. Inquiry. 1994;31(1):40–55. [PubMed: 8168908]
18.
Lawthers AG, McCarthy EP, Davis RB. et al. Identification of in-hospital complications from claims data. Is it valid? Med Care. 2000;38(8):785–95. [PubMed: 10929991]
19.
Silber JH, Rosenbaum PR, Schwartz JS. et al. Evaluation of the complication rate as a measure of quality of care in coronary artery bypass graft surgery. JAMA. 1995;274(4):317–23. [PubMed: 7609261]
20.
Rosen AK, Geraci JM, Ash AS. et al. Postoperative adverse events of common surgical procedures in the Medicare population. Med Care. 1992;30(9):753–65. [PubMed: 1518309]
21.
Rosen AK, Ash AS, Geraci JM. et al. Postoperative adverse events of cholecystectomy in the Medicare population. Am J Med Qual. 1995;10(1):29–37. [PubMed: 7727985]
22.
McGuire HH Jr., Horsley JS 3rd, Salter DR. et al. Measuring and managing quality of surgery. Statistical vs. incidental approaches. Arch Surg. 1992;127(6):733–7. [PubMed: 1596176]
23.
Dubois RW, Brook RH. Preventable deaths: who, how often, and why? Ann Intern Med. 1988;109(7):582–9. [PubMed: 3421565]
24.
Bedell SE, Deitz DC, Leeman D. et al. Incidence and characteristics of preventable iatrogenic cardiac arrests. JAMA. 1991;265(21):2815–20. [PubMed: 2033737]
25.
Andrews LB, Stocking C, Krizek T. et al. An alternative strategy for studying adverse events in medical care. Lancet. 1997;349(9048):309–13. [PubMed: 9024373]
26.
Silber JH, Williams SV, Krakauer H. et al. Hospital and patient characteristics associated with death after surgery. A study of adverse occurrence and failure to rescue. Med Care. 1992;30(7):615–29. [PubMed: 1614231]
27.
Brailer DJ, Kroch E, Pauly MV. et al. Comorbidity-adjusted complication risk: a new outcome quality measure. Med Care. 1996;34(5):490–505. [PubMed: 8614170]
28.
McDonald K, Romano P, Geppert J, et al. Measures of patient safety based on hospital administrative data—the Patient Safety Indicators. Technical Review 5 (prepared by the University of California San Francisco-Stanford Evidence-based Practice Center under Contract No. 290-97-0013). AHRQ Publication No. 02-0038. Rockville, MD: Agency for Healthcare Research and Quality; August 2002.
29.
Quality Interagency Coordination (QUIC) Task Force. Doing what counts for patient safety: Federal actions to reduce medical errors and their impact. A report to the President. Washington, DC; 2000. Available at: http://www.quic.gov/report/fullreport/htm. Accessed May 2004.
30.
Agency for Healthcare Research and Quality. Patient Safety Indicators SAS Software Documentation. Version 2.1, Rev. 1. Rockville, MD: Agency for Healthcare Research and Quality; 2003.
31.
Romano PS, Chan BK, Schembri ME. et al. Can administrative data be used to compare postoperative complication rates across hospitals? Med Care. 2002;40(10):856–67. [PubMed: 12395020]
32.
Romano PS, Geppert JJ, Davies S. et al. A national profile of patient safety in U.S. hospitals. Health Aff (Millwood) 2003;22(2):154–66. [PubMed: 12674418]
33.
Zhan C, Miller MR. Excess length of stay, charges, and mortality attributable to medical injuries during hospitalization. JAMA. 2003;290(14):1868–74. [PubMed: 14532315]
34.
Zhan C, Miller MR. Administrative data based patient safety research: a critical review. Qual Saf Health Care. 2003;12(Suppl 2):ii58–63. [PMC free article: PMC1765777] [PubMed: 14645897]
35.
Weingart SN, Iezzoni LI. Looking for medical injuries where the light is bright. JAMA. 2003;290(14):1917–9. [PubMed: 14532322]
36.
Schiowitz MF. Definitions of medical injuries. [See Comment] JAMA. 2004;291(3):303–4. [PubMed: 14734589]
37.
Blank A. Definitions of medical injuries. [See Comment] JAMA. 2004;291(3):304. [PubMed: 14734591]
38.
Carr JB. Definitions of medical injuries. [See Comment] JAMA. 2004;291(3):304. [PubMed: 14734590]
39.
Office of the Secretary, Department of Veterans Affairs. Department of Veterans Affairs strategic plan 2003–2008. Washington, DC: Department of Veterans Affairs; 2003. p. 5. Available at: http://www.va.gov/opp/sps/2003plan/Plan_2003.pdf. Accessed February 2004.
40.
Rosen AK, Loveland SA, Rakovski CC. et al. Do different case-mix measures affect assessments of provider efficiency? Lessons from the Department of Veterans Affairs. J Ambul Care Manage. 2003;26(3):229–42. [PubMed: 12856502]
41.
Steiner C, Elixhauser A, Schnaier J. The Healthcare Cost and Utilization Project: an overview. Eff Clin Pract. 2002;5(3):143–51. [PubMed: 12088294]
42.
Agency for Healthcare Research and Quality (AHRQ) Healthcare Cost and Utilization Project (HCUP). Overview of the State Inpatient Databases (SID). Available at: http://www.hcup-us.ahrq.gov/sidoverview.jsp. Accessed March 2004.
43.
Wagner TH, Chen S, Barnett PG. Using average cost methods to estimate encounter-level costs for medical-surgical stays in the VA. Med Care Res Rev. 2003;60(3 Suppl):15S–36S. [PubMed: 15095543]
44.
Barnett PG, Wagner TH. Preface. Med Care Res Rev. 2003;60(Suppl):7S–15S.