

Database of Abstracts of Reviews of Effects (DARE): Quality-assessed Reviews [Internet]. York (UK): Centre for Reviews and Dissemination (UK); 1995-.


Systematic review of medication safety assessment methods

C Meyer-Massetti, CM Cheng, DL Schwappach, L Paulsen, B Ide, CR Meier, and BJ Guglielmo.

Review published: 2011.

CRD summary

The review found that incident report review, chart review, direct observation and trigger tools had differing strengths and weaknesses for assessment of medication safety, with minimal overlap in the drug-related problems they identified. Limitations in the review, in particular marked heterogeneity between studies and a failure to assess study quality, mean that these conclusions should be interpreted with caution.

Authors' objectives

To compare the accuracy, efficiency and efficacy of commonly used medication safety assessment methods.

Searching

In order to identify commonly used medication safety assessment methods, a preliminary search was undertaken of University of California library books and advice issued by relevant United States institutions. PubMed, EMBASE and Scopus were searched from 2000 to October 2009; search terms were reported. A related-articles search was run in PubMed for relevant studies. Reference lists of retrieved articles were checked. The search was not restricted by language.

Study selection

Studies that compared two or more of the selected medication safety assessment methods were eligible for inclusion. Studies were required to report numerical data on the number of drug-related problems identified by each tool and/or the accuracy or efficiency (time, effort and cost) of the tool. Drug-related problems included any medication-related event, whether or not harm was caused. Assessment tools eligible for inclusion were incident report review (voluntary reporting by health care staff, patients and parents), direct observation, chart review (such as medical records and pharmacy databases) and trigger tool (manual or automatic record review). Studies of interventions in different time periods were excluded unless analyses were adjusted for this difference.

The included studies were conducted in a wide range of settings that included paediatric units, primary care clinics, long-stay geriatric units and anaesthetic departments. Reporting methods varied for each assessment tool (for example, incident reports could be anonymous or not). Types and definitions of medication problems varied, ranging in severity from omission of documentation to patient injury, and were reported either as the number of events or as the number of opportunities for events. Different units of analysis were used, including beds, patients and patient days. Review outcomes for accuracy included positive predictive values, sensitivity, specificity, false negatives, false positives and inter-rater agreement. The observation period ranged from four months to 34 months (where reported).

Two reviewers independently selected the studies. Disagreements were resolved by consensus.

Assessment of study quality

The authors did not state that they assessed validity.

Data extraction

Two authors independently categorised the studies according to type of assessment tool used in accordance with review definitions. Descriptive data were extracted from the individual studies and recorded in a table of studies. P values for differences between the groups were calculated for some comparisons.

The authors did not state how many reviewers extracted data.

Methods of synthesis

The studies were combined in a narrative synthesis organised by outcomes.

Results of the review

Twenty-eight studies were included. They apparently included one randomised study, two comparative studies, nine prospective cohort studies, four prospective studies, five retrospective reviews and seven observational studies. The total number of participants included was not reported.

Direct observation identified a larger number of problems than the other methods (six studies). Incident report review identified the fewest problems in most comparisons; chart review consistently identified more events than incident report review (13 studies). Findings comparing chart reviews with trigger tools were inconsistent (five studies). Overlap in the identification of events between different methods was rare. Direct observation was the method most likely to identify problems that were also found by the other tools; all events detected by incident report review or chart review were also detected by direct observation.

Sensitivity, specificity and positive predictive values differed widely between studies reporting these data. Incident report review was more specific than other methods in identifying problems (three studies), but was less sensitive than trigger tools (four studies). The positive predictive value for trigger tools ranged from zero to 100% (six studies) depending on the design of the tool.
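For context, a brief sketch of how these diagnostic accuracy measures are conventionally defined is given below. This uses the standard epidemiological definitions; it is an assumption that the primary studies applied them in this form, since the review did not describe a common reference standard or calculation method.

$$
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}, \qquad
\text{PPV} = \frac{TP}{TP + FP}
$$

Here TP, FP, TN and FN denote drug-related problems counted as true positives, false positives, true negatives and false negatives against whatever reference standard each study used. Under these definitions, a tool that flags many events not confirmed as genuine problems would show a low positive predictive value, which is consistent with the zero to 100% range reported for trigger tools.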

Trigger tool was the most time-efficient (two studies) and least labour-intensive assessment method, followed by incident report review and then direct observation (one study).

Other findings were reported in the review.

Cost information

Trigger tools (once fully established) were less expensive than chart review (two studies): US$42.40 per adverse drug reaction identified using a trigger tool versus $68.70 using chart review (one study). Chart review cost $0.63 per drug dose versus $4.82 for direct observation (one study). One study that compared incident report review with trigger tools attributed an annual cost saving of $56,000 to the avoidance of detectable drug-related problems, after taking the cost of detection into account.

Authors' conclusions

Incident report review, chart review, direct observation and trigger tools had differing strengths and weaknesses for assessment of medication safety with minimal overlap in the drug-related problems they identified.

CRD commentary

The objectives and inclusion criteria of the review were stated clearly. The restriction to the four most commonly recommended methods of medication safety assessment did not appear to be clearly based on evidence of their effectiveness (compared with the other eight methods identified). Relevant sources were searched for studies without language restriction. It appeared that no specific efforts were made to retrieve unpublished studies. The potential for publication bias was unclear. Steps were taken to minimise the risk of reviewer bias and error by having more than one reviewer independently select studies. The process used to extract data was not described. It appeared that study validity was not assessed.

The authors noted that the studies varied very markedly in their settings, interventions and definition of outcomes. The studies also used widely differing methods, although few or no details were provided in the review about study design, sample numbers and statistical techniques. The authors noted that the studies used differing denominators and units of analysis. It was unclear how measures of diagnostic accuracy were calculated in the primary studies. No reference standard was described. There were few studies for most comparisons and few statistical data or measures of statistical significance were reported. All these factors made it difficult to interpret the reliability or clinical applicability of the review findings.

Due to limitations in the review, in particular marked heterogeneity between studies and a failure to assess study quality, the authors' conclusions should be interpreted with caution.

Implications of the review for practice and research

Practice: The authors stated that trigger tools appeared to be the most effective and labour-efficient method for identifying drug-related problems, but incident report reviews were better at identifying high-severity problems. They suggested that accuracy, effectiveness and cost should be considered when comparing methods of medication safety assessment.

Research: The authors did not state any implications for research.

Funding

Not stated.

Bibliographic details

Meyer-Massetti C, Cheng CM, Schwappach DL, Paulsen L, Ide B, Meier CR, Guglielmo BJ. Systematic review of medication safety assessment methods. American Journal of Health-System Pharmacy 2011; 68(3): 227-240. [PubMed: 21258028]

Indexing Status

Subject indexing assigned by NLM

MeSH

Drug-Related Side Effects and Adverse Reactions /prevention & control; Humans; Medication Errors /prevention & control; Quality Assurance, Health Care /methods

Accession Number

12011001774

Database entry date

28/09/2011

Record Status

This is a critical abstract of a systematic review that meets the criteria for inclusion on DARE. Each critical abstract contains a brief summary of the review methods, results and conclusions followed by a detailed critical assessment on the reliability of the review and the conclusions drawn.

CRD has determined that this article meets the DARE scientific quality criteria for a systematic review.

Copyright © 2014 University of York.

PMID: 21258028
