
Henriksen K, Battles JB, Marks ES, et al., editors. Advances in Patient Safety: From Research to Implementation (Volume 2: Concepts and Methodology). Rockville (MD): Agency for Healthcare Research and Quality (US); 2005 Feb.


Mixed Methods Analysis of Medical Error Event Reports: A Report from the ASIPS Collaborative


Abstract

Objective: The Applied Strategies for Improving Patient Safety (ASIPS) collaborative developed an ambulatory primary care patient safety reporting system through an Agency for Healthcare Research and Quality (AHRQ)-funded demonstration grant. Such systems can potentially inform the development of interventions to improve patient safety, but only if the data contained in incident reports can be transformed into usable information. This paper presents our mixed methods approach to analyzing such data.

Methods: We describe our approach in terms of its rationale, techniques, prioritization of quantitative and qualitative methods, implementation, and integration of mixed methods. We also describe the nature of the data reported to ASIPS.

Results: We illustrate our approach using an analysis of diagnostic testing errors. We describe why this error type is significant, how we selected reports for analysis, the results of both our quantitative and qualitative analyses, and what we learned from them. Based on our experience, we present a protocol for applying a mixed methods approach to the study of patient safety reporting data to inform the development of interventions.

Conclusions: Using mixed methods to study patient safety is an effective and efficient approach to data analysis that provides both information and motivation for developing and implementing patient safety improvements.

Introduction

Since the publication of To Err Is Human, 1 medical errors have received considerable national attention, leading to numerous efforts to reduce such errors and improve patient safety. One component of these efforts is the use of incident reporting systems that collect information about medical errors so that practitioners can learn from them and apply lessons learned to promote patient safety. Applied Strategies for Improving Patient Safety (ASIPS) is a 3-year demonstration project, funded by the Agency for Healthcare Research and Quality (AHRQ), that examined the ability of an ambulatory primary care patient safety reporting system (PSRS) to collect meaningful reports of medical errors. 2 ASIPS developed interventions to improve safety based on an analysis of submitted reports. The project was a joint effort among the University of Colorado Health Sciences Center, two practice-based research networks (PBRNs) 3 with primary care practices throughout Colorado, and The CNA Corporation of Alexandria, VA. The PSRS has collected more than 700 reports from clinicians and other staff at 34 participating practices within the PBRNs. 4

The error event reports submitted to ASIPS are largely free text narratives, augmented by fixed-choice responses to questions providing contextual detail. Such reports can be a rich source of data from which to develop interventions for improving patient safety, but only if usable information can be extracted to guide intervention development. The purpose of this paper is to describe our mixed methods approach to learning from medical errors, and to illustrate it with an analysis of diagnostic testing errors.

Methods

Mixed methods studies “integrate one or more qualitative and quantitative techniques for data collection and/or analysis.” 5 Creswell et al. 6 identified five attributes to consider in designing such studies: (1) the rationale for mixing methods, (2) the mixed data collection and analysis techniques to use, (3) the priority to give to quantitative versus qualitative aspects of the research, (4) whether to use a sequential or concurrent implementation plan for these techniques, and (5) the phase of the research process at which the integration or mixing of methods occurs. We summarize our methodology in terms of our approach to each of these attributes.

Rationale

We believe that “identifying the correlates associated with variation” in safety report data and “gaining insight into the processes and events that lead up to the observed variation” 5 require a mixed methods approach to fully extract the information contained in submitted reports. Further, we believe that this approach provides the most complete and usable information for understanding medical errors and developing patient safety interventions.

Techniques

Because the ASIPS reports are our only data source, our mixed methods technique stems from how we coded and analyzed these reports. We applied two strategies for coding data for subsequent analyses. The first approach used a multiaxial taxonomy, adapted from one developed for a medical malpractice carrier, 7 with 421 codes distributed among 10 dimensions. A coding team read each report and, by consensus, selected one or more codes from each dimension to represent the nature of the reported event. This process resulted in a set of 10 or more taxonomy codes (14–15 codes on average) assigned to each report.

We used these codes primarily as input into quantitative analyses. First, we transformed the data from a list of codes associated with a reported event to a dataset of dichotomous variables. For each event (case) we scored each of the 421 codes (variables) as either a “1” (assigned to that event) or a “0” (not assigned to that event). We also created second-order dichotomies from the combination of selected dichotomous variables, and formed numeric variables by counting the number of selected codes associated with an event (e.g., the number of participants involved). We then analyzed these dichotomous and numeric variables quantitatively through frequency distribution, cross-tabulation, variance, discriminant, correlation, and logistic regression analyses. A report on how we used the taxonomy to code and analyze error reports appears elsewhere in this Advances in Patient Safety compendium. 8
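
To make this transformation concrete, the following Python sketch restructures a handful of hypothetical coded reports into the kind of case-by-variable matrix described above. The event identifiers and most code values are invented for illustration (only 3.4.4.2 and 3.4.4.2.3 echo taxonomy codes mentioned later in this paper), and the pandas-based approach is our assumption, not the actual ASIPS implementation.

    import pandas as pd

    # Hypothetical coded reports: event ID -> taxonomy codes assigned by the coding team
    coded_reports = {
        "evt001": ["3.4.4.2", "4.1.1", "7.2"],
        "evt002": ["3.4.4.2.3", "4.1.1"],
        "evt003": ["7.2", "9.1.3"],
    }

    # One dichotomous (0/1) variable per taxonomy code, one row per event
    all_codes = sorted({c for codes in coded_reports.values() for c in codes})
    matrix = pd.DataFrame(
        [[1 if c in codes else 0 for c in all_codes] for codes in coded_reports.values()],
        index=list(coded_reports),
        columns=all_codes,
    )

    # Second-order dichotomy: any code in a (hypothetical) communication-error branch
    comm_branch = [c for c in all_codes if c.startswith("3.4.4.2")]
    matrix["any_communication"] = matrix[comm_branch].max(axis=1)

    # Numeric variable: number of codes assigned to each event
    matrix["n_codes"] = matrix[all_codes].sum(axis=1)
    print(matrix)

The resulting matrix of 0/1 and count variables is the form that frequency, cross-tabulation, and regression analyses can consume directly.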

Our second coding strategy resulted in qualitative codes. We adopted an inductive, grounded approach that permitted the coding scheme to emerge from our reading and rereading of report narratives. We used both in vivo or indigenous codes (codes drawn from the common terms or phrases used by reporters) and analyst-supplied descriptive, interpretive, pattern, and inferential codes. 9 We followed an iterative constant comparison method of reading a small number of event narratives, extracting preliminary codes, revising them based on reading more narratives, and repeating the cycle until a stable set of codes developed. 10 This approach avoided imposing a predetermined set of codes onto the narratives, allowing the coding scheme to grow and change in response to the addition of new reports over time. (Atlas.ti software facilitated this process.) This approach also allowed us to develop codes particular to the analysis of specific kinds of reported events and to transform commonly occurring codes into dichotomous variables for use in quantitative analyses.

In addition to qualitatively coding event narratives, we developed flow charts of event activities for various kinds of events to aid our understanding of their course and how they could go wrong. The qualitative analyses that proved most informative for developing interventions were those that focused on identifying (1) weak points in the flow of events at which errors occur; (2) contributing, mitigating, and contextual factors underlying the flow of events; and (3) reporter attributions of causality in the flow of events.

Priority

We gave equal priority to quantitative and qualitative techniques during our analysis of error events reported to ASIPS, but gave greater priority to qualitative results in developing interventions. Both approaches were essential to learn from errors, but our qualitative results proved more useful when working with participating practices to develop interventions.

Implementation

Because we were working with a single dataset, the issue of sequential versus concurrent data collection was not relevant; however, implementation sequence was relevant to our data analysis. This sequence was iterative rather than strictly sequential or concurrent. We used early qualitative analysis to guide our initial adaptation of the taxonomy, and then used the results of early quantitative analyses in selecting topics for in-depth investigation through qualitative analysis. We used results from these qualitative analyses to significantly revise the taxonomy midway through the project and incorporated selected qualitative codes in quantitative analyses.

Integration

Flowing from priority and implementation considerations, we integrated our quantitative and qualitative approaches iteratively throughout the analytic process. We collected a single set of reports that we coded both quantitatively and qualitatively. Analysis consisted of iteratively using the results from one mode to guide further analysis in the other. Finally, we used results from both modes to identify errors and design interventions.

Event report data

ASIPS participants voluntarily submit reports using the Web, paper, or telephone; details of the reporting system are described in previous ASIPS publications. 2, 4 Table 1 presents the data elements and their formats for reporting events to ASIPS. Reporters were asked to describe any event they “don't wish to have happen again that might represent a threat to patient safety,” to provide additional event-related information, and to respond to contextual questions about the event. Reports are based on reporters' perspectives and perceptions as well as their understanding of what and how much to report. Thus, the quality of the narrative data varies considerably in breadth and depth, completeness, and accuracy.

Table 1. Data elements in ASIPS error reports*.

Reported events are only a subset of all medical errors that occur. We have no way of judging whether the sample of reports ASIPS receives is representative, and we cannot calculate incidence rates or true relative frequencies of different event types. We can only examine patterns and relationships within the data received, and must be cautious about estimating parameters or generalizing beyond the cases reported.

Ongoing safety reporting systems expand and evolve as new event reports are received. Our hierarchical taxonomy allowed successive quantitative analyses at increasingly deep levels of detail and specificity as we amassed enough cases to support the additional detail. We also conducted successive qualitative analyses until we approached conceptual saturation, the point at which results stabilized and new cases added little new information.

Case example: diagnostic testing errors

We selected diagnostic testing errors for analysis because of their frequent occurrence in the ASIPS database and their relation to patient harm and risk of harm. We defined diagnostic testing errors as those involving incorrect ordering (or failure to order), performing, reporting, documenting, or acting on the results of laboratory tests, imaging and electronic tracings, or physical function tests. We identified diagnostic testing errors through a two-step process. First, we screened all reports for the presence of taxonomy codes associated with such errors (e.g., codes for diagnostic intent or procedural errors related to a test), keeping the screens broad to increase the likelihood of capturing all diagnostic testing errors (i.e., we emphasized sensitivity over specificity). Next, we read all cases that screened positive and eliminated any false positives that did not have diagnostic testing as their primary error activity.
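
As an illustration of the sensitivity-first screen, this Python sketch flags any report whose assigned codes overlap a deliberately broad set of screening codes. The code values are hypothetical placeholders; the actual ASIPS screen drew on the taxonomy codes described above.

    # Hypothetical screening codes standing in for the broad taxonomy-based screen
    DIAGNOSTIC_SCREEN_CODES = {"2.1", "2.1.3", "4.5.1"}

    coded_reports = {
        "evt001": {"2.1", "3.4.4.2"},  # screens positive
        "evt002": {"9.1.3"},           # screens negative
    }

    def screens_positive(assigned_codes):
        """Broad, sensitivity-first screen: any overlap flags the report.
        False positives are removed later by reading the narrative."""
        return bool(DIAGNOSTIC_SCREEN_CODES & assigned_codes)

    candidates = [eid for eid, codes in coded_reports.items() if screens_positive(codes)]
    print(candidates)  # ['evt001'] proceeds to manual false-positive review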

Results

Of the 608 error reports received and coded by the ASIPS team between November 2001 and August 2003, we classified 325 (53 percent) as diagnostic testing errors. Of those, 44.3 percent involved blood tests, 24.6 percent involved the testing of other bodily specimens, 21.2 percent involved nonspecimen tests (imaging, electronic tracing, or physical function tests), and 12.6 percent involved unspecified labs or tests. (These figures add up to more than 100 percent because some reported errors involved more than one type of test.) Because diagnostic testing errors were reported so frequently, nearly half of all reports resulting in any patient harm were of this type, even though they resulted in harm about as frequently as reported events that did not include this kind of error (23.4 percent versus 28.3 percent, respectively; difference not statistically significant). Interestingly, reported diagnostic testing errors were more likely to be coded as unstable (i.e., it was too early to determine whether harm did or would likely result) than were other types of errors (10.8 percent versus 4.2 percent, P < 0.01). Further, when we included the unstable code in our measure of harm, diagnostic testing errors were as frequently associated with harm as were other reported types of errors (34.2 percent versus 32.5 percent, P not significant). Thus, reported diagnostic testing errors are both common and associated with harm or the possibility of harm.
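
The two-group comparisons above are the kind that a chi-square test of a 2×2 cross-tabulation supports. As a sketch, the cell counts below are reconstructed from the reported percentages (10.8 percent of 325 diagnostic testing reports versus 4.2 percent of 283 other reports coded unstable); they approximate, but are not, the actual ASIPS counts.

    from scipy.stats import chi2_contingency

    # Rows: diagnostic testing errors, other errors
    # Columns: coded unstable, not coded unstable
    table = [
        [35, 325 - 35],  # ~10.8% of 325
        [12, 283 - 12],  # ~4.2% of 283
    ]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")  # p < 0.01, as reported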

Diagnostic testing error reports had more event activity codes than other types of error reports (mean 4.4 versus 3.8, P = 0.001 by F test), suggesting that they may contain more qualitative detail or may be more complex than other reports, making them good candidates for qualitative analysis. Reported errors included activities such as delay in performing the procedure (20 percent of reports), a performance issue during the procedure (15 percent), not performing the procedure (12 percent), and delay in acting on information about the procedure (30 percent). Thus, these errors can occur at various points along the chain of activities associated with diagnostic testing procedures. This finding suggested that we focus our qualitative analysis on determining where and how each error occurred.

Quantitative analysis

We characterized diagnostic testing events through a detailed quantitative analysis of taxonomy codes. Table 2 identifies:

Table 2. Significant attributes of diagnostic testing errors (based on taxonomy codes).

1. Common attributes that we applied to at least 20 percent of all diagnostic testing error reports.

2. Distinguishing attributes that (a) differentiated between diagnostic testing and other types of errors, (b) applied to at least 10 percent of diagnostic testing reports, and (c) had a statistically significant positive association with this type of report.

3. Discriminating attributes that distinguished between event types in a stepwise discriminant function analysis.

Based on attributes that are both common and distinguishing, diagnostic testing errors were characterized by communication errors, especially to the clinician of record; missing information; procedural errors, especially involving delay; and systems issues, especially involving malfunctions.

We used the distinguishing attributes in a discriminant analysis to assess their multivariate ability to discriminate between diagnostic testing error reports and other error reports. Multivariate analysis with codes from a hierarchical taxonomy must be conducted carefully. For example, communication error (3.4.4.2) should not be used in the same analysis as communication from a nonphysician provider (3.4.4.2.3) or any codes formed through a combination of subordinate communication error codes. The distinguishing attributes identified in Table 2 are a mixture of superordinate and subordinate codes, which cannot all be used together. We used the superordinate code when only one of its subordinate codes qualified for the discriminant analysis, and the subordinate codes when more than one qualified.
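
The superordinate/subordinate rule can be expressed compactly. This sketch assumes dot-delimited code strings as shown above; the function name and the second subordinate code (3.4.4.2.5) are our hypothetical examples.

    def codes_to_enter(superordinate, qualifying_codes):
        """Return the codes to enter in the analysis for one superordinate code:
        its subordinates if more than one qualified, else the superordinate itself."""
        subs = [c for c in qualifying_codes if c.startswith(superordinate + ".")]
        return subs if len(subs) > 1 else [superordinate]

    print(codes_to_enter("3.4.4.2", ["3.4.4.2.3"]))               # ['3.4.4.2']
    print(codes_to_enter("3.4.4.2", ["3.4.4.2.3", "3.4.4.2.5"]))  # both subordinates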

We initially included 11 attributes in a stepwise analysis. Seven attributes entered the discriminant function before the limitation criterion was reached. The final canonical function accounts for 21.3 percent of the variance in the diagnostic testing error binary variable (canonical correlation = 0.462, P < 0.001). The standardized function coefficients are highest for the four procedure-related attributes (ranging from 0.559 to 0.419), while the remaining three attribute coefficients range from 0.253 to 0.151. The discriminant function correctly classifies 63.1 percent of diagnostic testing error reports (compared with a prior probability of 53.5 percent) and 80.2 percent of other error reports (prior probability = 46.5 percent), for an overall correct classification rate of 71.1 percent.
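
For readers who want to reproduce this style of analysis, the sketch below approximates a stepwise discriminant analysis in Python. Scikit-learn's SequentialFeatureSelector stands in for the classical stepwise entry procedure, and the data are simulated placeholders rather than the ASIPS attribute matrix, so the printed statistics will not match those reported above.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.feature_selection import SequentialFeatureSelector

    # Simulated stand-ins: 608 reports, 11 candidate 0/1 attributes,
    # and a binary diagnostic-testing-error indicator
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(608, 11)).astype(float)
    y = rng.integers(0, 2, size=608)

    # Forward selection of 7 attributes, mirroring the 7 that entered the
    # stepwise function described above
    lda = LinearDiscriminantAnalysis()
    selector = SequentialFeatureSelector(
        lda, n_features_to_select=7, direction="forward", cv=5
    ).fit(X, y)

    X_sel = selector.transform(X)
    lda.fit(X_sel, y)
    print("selected attribute indices:", np.flatnonzero(selector.get_support()))
    print("overall correct classification rate:", lda.score(X_sel, y))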

In summary, the quantitative analysis sensitized us to seek a deeper understanding of how communication issues influenced diagnostic testing errors, at what point in the procedure event chain each error occurred, how and why it occurred, and why harm did or did not result.

Qualitative analysis

Having read through each of the selected cases to check for false positives during the selection process, we were somewhat familiar with the data when we began qualitatively coding them. This familiarity allowed us to adopt an initial coding approach that was “partway between the a priori and inductive approaches ... creating a general accounting scheme for codes that is not content-specific, but points to the general domains in which codes can be developed inductively.” 9 We drew on existing coding schemes 11–14 for an initial set of coding categories and guidelines that included actor/agent, acts/activities, setting/context, relationships, processes, and products/outcomes/consequences. We iteratively refined these codes and identified specific diagnostic testing event referents for them. We added new categories when indicated, e.g., “transitions,” the juncture points in the diagnostic testing process at which control over the test, or information about the test, transfers—or fails to properly transfer—from one person or setting to another. We also added codes to represent factors that appeared to drive the error event process, whether as contributors to or mitigators of the error or its consequences.

We eventually derived a model of the diagnostic testing process (Figure 1) and a framework for analyzing it (Figure 2) to summarize our work. Figure 1 classifies the stages in the diagnostic testing process, and the transition points within and between stages, at which errors can occur, and presents representative occurrences for each. This inductively derived model is very similar to an existing one 15–17 that identifies preanalytic, analytic, and postanalytic phases for laboratory tests, and we found that our model works as well for imaging and other diagnostic tests. We also found that specimen collection and handling, which the existing model classifies as preanalytic (“before”), fit better in our “during” phase: most primary care practices send specimens to an external laboratory for processing (or refer imaging procedures to external imaging centers), so there is little to no analytic processing internal to the practice. The practice's internal analog to the analytic phase, which we differentiated from the “before” activities of selecting and ordering a test, is specimen collection and handling; we therefore classified collection and handling as “during” activities. Finally, we found that the transition juncture points are fertile breeding grounds for error; the transfer between persons and/or settings frequently goes awry.

Figure 1. Qualitatively derived model of diagnostic testing errors.

Figure 2. Analytic framework of cascade of events.

The framework in Figure 2 emerged as we classified how various factors affected the cascade of events leading to and flowing from the main event error. We classified these factors as contributing to the occurrence of the main event or its outcome, mitigating the severity of the occurrence or outcome, or contextual to the event's flow. Contributing and mitigating factors are necessary for the event to have occurred the way it did, whereas contextual factors have less influence on the event chain but in some way complicate it or make the error's mitigation more problematic. We found that laying out the analysis in this fashion suggested potential intervention strategies for reducing the likelihood that an error occurs or that an error, once it occurs, cascades into patient harm.

We identified three major, parallel categories for contributing and mitigating factors: actions, system problems, and circumstances for contributing factors; and actions, system successes, and serendipity for mitigating factors (Table 3). Actions are deliberately taken and are a direct element of the course of events leading to an error or its outcome. Systemic factors involve an underlying system designed to control or manage the flow of a test, information regarding the test, or some other aspect of the testing process. In cases where we could not attribute an error or outcome to an intentional act or to a person, we classified it as a contributing circumstance or mitigating serendipity.

Table 3. Framework for categorizing contributing and mitigating factors.
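
One convenient way to operationalize this framework during coding is a small typed record per factor. The structure below is our illustration rather than an ASIPS artifact; the class and field names, and the example event, are hypothetical.

    from dataclasses import dataclass, field
    from enum import Enum

    class Role(Enum):
        CONTRIBUTING = "contributing"
        MITIGATING = "mitigating"
        CONTEXTUAL = "contextual"

    class Kind(Enum):
        ACTION = "action"  # deliberate act in the event chain
        SYSTEM = "system"  # system problem (contributing) or success (mitigating)
        CHANCE = "chance"  # circumstance (contributing) or serendipity (mitigating)

    @dataclass
    class Factor:
        role: Role
        kind: Kind
        description: str

    @dataclass
    class CodedEvent:
        event_id: str
        main_event: str
        factors: list = field(default_factory=list)

    evt = CodedEvent("evt001", "abnormal lab result not forwarded to clinician")
    evt.factors.append(Factor(Role.CONTRIBUTING, Kind.SYSTEM,
                              "no tracking system for pending results"))
    evt.factors.append(Factor(Role.MITIGATING, Kind.CHANCE,
                              "patient called to ask about the result"))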

We identified specific examples of each of these types of factors, associated them with types of error chains, and then developed suggestions for intervening in these chains by addressing the controllable patient safety issues the factors revealed. Lastly, we selected representative quotations from the narrative reports to illustrate how these factors influenced the course of events and to link abstract principles to specific examples from actual primary care practices.

The overall project design required that we share the results of our analyses with groups of participating practice representatives who could then use them to develop applied strategies for improving patient safety in their practices. We found that our mixed methods approach produced results that practice representatives found useful and practical. In particular, a group designing methods to reduce diagnostic testing errors used our results to identify test-tracking systems as an intervention target. A discussion of this group's work, how it used our analysis, and the interventions it developed appears elsewhere in this Advances in Patient Safety compendium. 18

Discussion

Incident reporting systems collect a continually expanding number and range of error reports. As the report database grows, it becomes increasingly difficult to systematically learn from these reports without some guiding analysis protocol and a wide range of analytic tools to draw on.

This situation led Aviation Safety Reporting System (ASRS) investigators to develop a four-step analysis protocol to identify and analyze relevant reports of aviation safety incidents contained in the ASRS. As McGreevy explains, “[T]he large numbers of incident reports and the many details they contain can overwhelm analysts ... As a result, critically important patterns of incidents can be overlooked, or not recognized in a timely manner.” 19 The ASRS solution was to (a) identify topics for analysis based on a continual review of submitted reports, (b) select relevant reports from the database that met an investigator's selection criteria, (c) analyze the selected report narratives for keywords and keyword relationships and model the results, and (d) identify key representative incident reports for further, more detailed review.

Using the ASRS protocol as a model and our 2½ years of experience with ASIPS, we propose the following multistage mixed methods procedure for selecting and analyzing reports from a patient safety reporting system to inform the development of patient safety interventions.

1. Develop a selection mechanism that will support the identification of relevant reports from the database for analysis. Rather than use a keyword approach as the ASRS analysts did, we relied on the taxonomy codes to classify, categorize, and search for relevant reports. We recommend our approach over a keyword approach because the taxonomy is useful not only as a selection mechanism, but in analyses as well.

2. Continually monitor submitted reports for significant patient safety topics that are amenable to investigation. We recommend conducting quantitative analyses to aid this process. Significant sentinel events may also be used to initiate an analysis.

3. Once a study topic is identified, develop selection criteria for including reports in the analysis. The criteria should be phrased in terms usable by the selection mechanism (e.g., the taxonomy codes). To ensure inclusion of all relevant reports, the selection process should emphasize sensitivity over specificity in screening cases. Positively screened cases should then be read and reviewed to eliminate false positives. We recommend using a main event approach during this review. Although reported medical errors, like errors in general, typically involve a series of actions gone awry, 20 there is most often a reported main event that is the crux of a chain of events and the primary event that a reporter emphasizes. The main event should involve the screening criteria; otherwise, the case should be excluded as a false positive.

4. Perform quantitative analysis on the selected cases to gain an initial understanding of them. We recommend using dichotomous variables (and secondary composite variables) formed from taxonomy codes as source data for this analysis.

5. Perform qualitative analysis of each selected case. Identify the main event that relates to the selection criteria. Then divide and code the report narrative into aspects that—

  • Lead to or affect the occurrence of the main event.
  • Describe the event.
  • Lead to or affect the downstream outcome of the event.
  • Describe the outcome.
6. Identify and categorize the contributing, mitigating, and contextual factors affecting the occurrence of the main event and the event's outcome. Refine the analysis by identifying cross-case patterns related to event flows and outcomes. Identify possible intervention mechanisms related to reducing the occurrence of various types of main events and to mitigating the consequences or outcomes of those events.

7. Transform selected significant qualitative codes into dichotomous variables and use them in quantitative analyses to further identify and specify relationships between these key variables and other variables based on the taxonomy. In particular, examine multivariate relationships between contributing, mitigating, and contextual factors and the taxonomy's harm variables (a minimal sketch of this step follows the list).

8. Based on quantitative and qualitative analyses, identify representative reports that capture the essence of significant findings and are likely to provide the information needed for developing interventions (event prevention as well as outcome mitigation). Provide illustrative, succinct quotations along with analysis findings (quantitative relationships, flow diagrams, and catalogs of factors affecting these events) to those who are developing interventions. The ability to draw on real examples of errors that occurred in the practices participating in ASIPS, and to use representative quotations about what led to the errors and their outcomes, was powerful in informing and motivating practice participants in their efforts to design interventions.
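
As the minimal sketch promised in step 7, the fragment below treats two hypothetical transformed qualitative codes and one taxonomy-based dichotomy as 0/1 predictors of a harm indicator. The column names and data are invented for illustration, and the regularized scikit-learn model is one reasonable stand-in for the logistic regression analyses mentioned earlier.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Invented 0/1 variables: two transformed qualitative codes plus one
    # taxonomy-based dichotomy, predicting a harm indicator
    df = pd.DataFrame({
        "harm":              [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],
        "failed_transition": [1, 0, 0, 1, 1, 1, 1, 0, 1, 0],  # qualitative code
        "system_problem":    [1, 1, 0, 1, 0, 0, 1, 0, 1, 1],  # qualitative code
        "communication_err": [1, 0, 1, 1, 0, 1, 0, 0, 0, 1],  # taxonomy code
    })

    predictors = ["failed_transition", "system_problem", "communication_err"]
    model = LogisticRegression().fit(df[predictors], df["harm"])
    print(dict(zip(predictors, model.coef_[0].round(2))))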

This protocol uses a mixed methods approach to learning from errors reported to a patient safety reporting system and to informing intervention development. This approach allows a given report to be selected for and used in multiple analyses, rather than forcing an event into a single category to be analyzed from that perspective alone. A diagnostic testing error, for example, may involve a failed communication between a clinician and a medical assistant, allowing this case to be selected for analyses of diagnostic testing errors, communication errors, and errors involving clinicians and nonclinicians. The report does not belong to any one analysis, nor is it excluded from any analysis on the basis of being included in another.

This protocol and the method we used for collecting and coding reports should be equally applicable to voluntary and mandatory reporting systems that allow for narrative reporting. It is best suited to large practices, groups of practices (e.g., PBRNs), or university- or hospital-affiliated practices that have the analytic and research expertise and staff to support it. It is also suited for use with federally designated patient safety organizations, or with statewide or national coalitions or professional associations, which could collect and analyze reports centrally.

Conclusions

Our mixed methods approach allowed us to efficiently and effectively extract patient safety lessons from our error reporting system. We used this approach to quantitatively examine relationships between aspects of error events and qualitatively identify intervention opportunities in the cascade of events leading to and flowing from an error. The quantitative and qualitative analyses complement each other, with the former providing breadth and the latter providing depth. Combined, they provide more information than either analysis provides individually.

In their introduction to a series on qualitative research published by the British Medical Journal in 1995, Pope and Mays 21 advise that “we need a range of methods at our fingertips if we are to understand the complexities of modern health care.” A mixed methods approach, as we advocate here, provides that range of analytic techniques to help expose and explain the complexities of medical errors. Such an approach allows investigators to use multiple tools to study varied aspects of error events without being forced to choose one approach to the exclusion of another.

We found that the results of our analyses were informative and useful to groups developing interventions to improve patient safety in primary care practice settings. These interventions typically require change in office practice, and change is often difficult and resisted by those involved. While quantitative analysis can reveal patterns in error events and help identify significant characteristics of errors to inform intervention development, qualitative analysis can provide needed insight into error processes and supply “stories” that engage and motivate both those who develop and those who implement patient safety interventions. Adopting or adapting the protocol we describe in this paper to incident reporting systems in ambulatory (as well as institutional) settings can contribute to making health care safer.

Acknowledgments

We would like to acknowledge all the clinicians and staff of the High Plains Research Network and the Colorado Research Network for entrusting us with their stories. Funding for this study was provided by the Agency for Healthcare Research and Quality, Grant #U18-HS011878, Wilson D. Pace, principal investigator.

References

1.
Kohn LT, Corrigan JM, Donaldson MS, editors. To err is human: building a safer health system. A report of the Committee on Quality of Health Care in America, Institute of Medicine. Washington, DC: National Academy Press; 2000. [PubMed: 25077248]
2.
Pace WD, Staton EW, Higgins GS, et al. Database design to ensure anonymous study of medical errors: a report from the ASIPS Collaborative. J Am Med Inform Assoc. 2003 Nov–Dec;10(6):531–40. Epub 2003 Aug 04. [PMC free article: PMC264430] [PubMed: 12925548]
3.
Agency for Healthcare Research and Quality. Fact sheet: primary care practice-based research networks. AHRQ Publication No. 01-P020. Rockville, MD: AHRQ; June 2001.
4.
Fernald DH, Pace WD, Harris DM, et al. Event reporting to a primary care patient safety reporting system: a report from the ASIPS Collaborative. Ann Fam Med. 2004 Jul–Aug;2(4):327–32. [PMC free article: PMC1466702] [PubMed: 15335131]
5.
Borkan JM. Mixed methods studies: a foundation for primary care research. Ann Fam Med. 2004;2(1):4–6. [PMC free article: PMC1466623] [PubMed: 15053276]
6.
Creswell JW, Fetters MD, Ivankova NV. Designing a mixed methods study in primary care. Ann Fam Med. 2004;2(1):7–12. [PMC free article: PMC1466635] [PubMed: 15053277]
7.
The ASIPS Collaborative. Dimensions of medical outcome. The ASIPS-Victoroff taxonomy; 2003. Available at: http://fammed.uchsc.edu/carenet/asips/taxonomy/. Accessed June 30, 2004.
8.
Pace WD, Fernald DH, Harris DM, et al. Development and analysis of a taxonomy for coding ambulatory medical errors: an ASIPS Collaborative report. In: Henriksen K, Battles JB, Marks ES, Lewin DI, editors. Advances in patient safety: from research to implementation. Vol. 2. Concepts and methodology. Rockville, MD: Agency for Healthcare Research and Quality; 2005.
9.
Miles MB, Huberman AM. Qualitative data analysis. 2nd ed. Thousand Oaks, CA: Sage; 1994.
10.
Glaser BG, Strauss AL. The discovery of grounded theory: strategies for qualitative research. Chicago: Aldine; 1967.
11.
Lofland J. Analyzing social settings: a guide to qualitative observation and analysis. Belmont, CA: Wadsworth Publishing Company; 1971.
12.
Bogdan R, Biklen SK. Qualitative research for education: an introduction to theory and methods. 2nd ed. Boston: Allyn & Bacon; 1992.
13.
Heise DR. Specifying event content in narratives. Available at: http://www.indiana.edu/~socpsy/papers/EventContent.html; 1995.
14.
Heise DR, Durig A. A frame for organizational actions and macroactions. J Math Sociol. 1997;22(2):95–123.
15.
Nutting PA, Main DS, Fischer PM, et al. Toward optimal laboratory use. Problems in laboratory testing in primary care. JAMA. 1996 Feb;275(8):635–9. [PubMed: 8594246]
16.
Bonini P, Plebani M, Ceriotti F, et al. Errors in laboratory medicine. Clin Chem. 2002 May;48(5):691–8. [PubMed: 11978595]
17.
Gandhi TK. Urine a tough position (Commentary). Web M&M Current Cases & Commentaries: Family Medicine. Available at: http://webmm.ahrq.gov/cases.aspx?ic=35; 2003.
18.
West DR, Westfall JM, Araya-Guerra R, et al. Using reported primary care errors to develop and implement patient safety interventions: a report from the ASIPS Collaborative. In: Henriksen K, Battles JB, Marks ES, Lewin DI, editors. Advances in patient safety: from research to implementation. Vol. 3. Implementation issues. Rockville, MD: Agency for Healthcare Research and Quality; 2005. [PubMed: 21249992]
19.
McGreevy MW. A practical guide to interpretation of large collections of incident narratives using the QUORUM method. NASA Technical Memorandum 112190. Moffett Field, CA: NASA Ames Research Center; 1997.
20.
Reason J. Human error. New York: Cambridge University Press; 1990.
21.
Pope C, Mays N. Reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ. 1995 Jul;311(6996):42–5. [PMC free article: PMC2550091] [PubMed: 7613329]
