J Am Med Inform Assoc. 2010 Sep-Oct; 17(5): 575–583.
PMCID: PMC2995669

Errors and electronic prescribing: a controlled laboratory study to examine task complexity and interruption effects

Abstract

Objective

To examine the effect of interruptions and task complexity on error rates when prescribing with computerized provider order entry (CPOE) systems, and to categorize the types of prescribing errors.

Design

Two within-subject factors: task complexity (complex vs simple) and interruption (interruption vs no interruption). Thirty-two hospital doctors used a CPOE system in a computer laboratory to complete four prescribing tasks, half of which were interrupted using a counterbalanced design.

Measurements

Types of prescribing errors, error rate, resumption lag, and task completion time.

Results

Errors in creating and updating electronic medication charts that were measured included failure to enter allergy information; selection of incorrect medication, dose, route, formulation, or frequency of administration from lists and drop-down menus presented by the CPOE system; incorrect entry or omission in entering administration times, start date, and free-text qualifiers; and omissions in prescribing and ceasing medications. When errors occurred, error rates across the four prescribing tasks ranged from 0.5% (1 incorrect medication selected out of 192 opportunities to select a medication, or error opportunities) to 16% (5 failures to enter allergy information out of 32 error opportunities). We did not detect any impact of interruptions on prescribing error rates or task completion times. However, complex tasks took significantly longer to complete (F(1, 27)=137.9; p<0.001) and, when interrupted, required almost three times longer to resume than simple tasks (resumption lag complex=9.6 seconds, SD=5.6; resumption lag simple=3.4 seconds, SD=1.7; t(28)=6.186; p<0.001).

Conclusion

Most electronic prescribing errors found in this study could be described as slips in using the CPOE system to create and update electronic medication charts. Cues available within the user interface may have aided resumption of interrupted tasks, making CPOE systems robust to some interruption effects. Further experiments are required to rule out any effect interruption might have on CPOE error rates.

Introduction

New errors are an unintended consequence of using information and communication technologies in complex socio-technical environments such as healthcare.1–3 Errors arising from the use of computerized provider order entry (CPOE) systems have not previously been examined in a controlled setting. A number of uncontrolled studies have identified new mechanisms for making prescribing errors directly related to the use of CPOE systems for electronic prescribing.4–6 These prescribing errors include selection of the wrong patient; missed drug allergies7; incorrect entry or selection of medication, dose, route, formulation, administration time, or frequency4 8; and incorrect entry or omission of the start date4 and of free-text qualifiers needed to administer drugs.8 9 These errors have been attributed to incorrect selection of information from drop-down menus and inappropriate use of free-text fields, but the primary causal mechanisms behind those actions remain unclear.

New ways of making errors have also been linked to the interruption-prone environment in which clinical tasks are undertaken.10–12 A comprehensive examination of 7029 CPOE-related medication incidents reported to the US Pharmacopeia Medmarx database found that distractions were a significant contributing factor, reported in eight out of ten errors.13 Few studies have quantified the effects of interruption on clinical tasks.14 15 Using observations of nurses administering medications on hospital wards, a recent study found that each interruption was associated with a 12% increase in procedural failures and a 13% increase in clinical errors.16 However, little is known about the effects of interruption on prescribing tasks using a CPOE system. A US observational study found that 31% of distractions interrupted clinicians' use of an IT system.17 The potential for interruptions to disrupt doctors' use of a CPOE system for electronic prescribing and to generate errors has not previously been examined.

We sought to study the effects of interruption and task complexity on electronic prescribing with CPOE. Studies from other domains, such as aviation and clerical office work, suggest that interruption leads to more errors in flight-deck procedures,18 is associated with delays in resuming, and increases the time taken to complete computer-based document editing tasks.19 Furthermore, the disruptive effects of interruption appear to be more pronounced in complex tasks than simple tasks.20 21 Based on these previous findings, we sought to test the following hypotheses:

  • H1: Interruptions increase the number of prescribing errors when prescribing is the primary task.
  • H2: Interruptions increase the time taken to complete primary tasks (task completion time).
  • H3: Interruptions are associated with a delay in resuming primary tasks (resumption lag).
  • H4: Complex primary tasks are more susceptible to the effects of interruption resulting in higher error rates and longer task resumption times.

Methods

Participants

Thirty-four doctors participated in the study. Junior doctors and resident medical officers who used a CPOE system for electronic prescribing at a 300-bed teaching hospital attached to the University of New South Wales were invited via a call for volunteers advertised on the hospital's notice boards and via a clinician list-server. All junior doctors and resident medical officers are required to complete a formal CPOE training session conducted by the hospital and all participants in this study had complied with this requirement.

Primary prescribing tasks

Four primary prescribing tasks (two simple and two complex) were designed with advice from an expert panel comprising four hospital doctors, a clinical pharmacologist, and a pharmacist. The design of the prescribing tasks was based upon our observations of doctors' use of CPOE for electronic prescribing within a medical ward of the same hospital.22 Primary tasks were based on hypothetical clinical scenarios representing typical prescribing tasks undertaken by doctors working on a medical ward. The two simple tasks involved creation of an electronic medication chart: for each new chart, doctors were required to locate the patient's record and enter the current allergy status before prescribing a required list of medications (table 1). The two complex tasks involved updating existing electronic medication charts: in response to new therapeutic decisions, the doctor was required to locate an existing medication chart and replace medications by first ceasing old medications and then adding new ones (table 2).

Table 1
Simple primary prescribing tasks
Table 2
Complex primary prescribing tasks

Complexity of the primary prescribing tasks was measured by the number of information cues a clinician needs to complete the task.23 We used GOMS (Goals, Operators, Methods, and Selection rules), a well-recognized cognitive engineering method, to differentiate the relative complexity of prescribing tasks by working memory load requirements.22 A GOMS model of prescribing was used to compute the total, average, and peak number of working memory changes during task execution (simple tasks: total=214 items, average=9.4, peak=12 vs complex tasks: total=333, average=10.9, peak=16; see Appendix A available as an online data supplement at http://jamia.bmj.com). The correct response to each of the primary prescribing tasks was pre-determined by consensus agreement among the doctors on the expert panel and validated by the pharmacist. Although each clinical scenario was different, the prescribing tasks within each level of complexity were designed to be operationally identical in terms of the number of medications to be prescribed or replaced. For instance, in the simple prescribing tasks, a new medication chart required prescription of six new medications with scheduled doses in addition to entry of the patient's allergy status. The complex prescribing tasks involved updates to an existing electronic medication chart with six medications: two medications were replaced and two new medications were prescribed, resulting in a total of eight medications on the chart.

Interrupting tasks

Two interrupting tasks were designed with input from the expert panel. In observations on the hospital wards we found that interruptions to doctors prescribing electronically often involved use of a different clinical application from the same computer terminal. Thus, interrupting tasks that involved use of an electronic medications reference manual (the Australian Medicines Handbook) were designed. Participants were asked to look up the appropriate dosage for the medication described in a hypothetical clinical scenario relating to a different patient (ie, not the same patient as in the primary task) (see Appendix B available as an online data supplement at http://jamia.bmj.com). The tasks required a categorical response for the correct route, dose, interval/duration, and maximum dose. Answers were pre-determined by consensus agreement among the expert panel doctors and validated by the pharmacist. Based on previous experiments, the duration of interrupting tasks was designed to be at least 75 seconds to ensure that participants disengaged from the primary task.24

Experimental design

This study had two within-subject factors: task complexity (simple vs complex) and interruption (interruption vs no interruption) giving four experimental conditions: (a) simple with interruption, (b) simple with no interruption, (c) complex with interruption, and (d) complex with no interruption (figure 1). Each participant was asked to complete four primary prescribing tasks, half of which were interrupted with interruptions evenly distributed between simple and complex tasks (ie, one simple and one complex task was interrupted for each participant). To avoid order effects, the sequence of task presentation was counterbalanced using a balanced Latin square design, which also takes into account the distribution of interruptions, ensuring all four primary prescribing tasks were interrupted equally across all participants.
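The balanced Latin square ordering described above can be illustrated with a short sketch. This is a generic construction for an even number of conditions, not the authors' actual assignment procedure, and the condition numbering (0–3) is assumed for illustration:

```python
def balanced_latin_square_row(n, participant):
    """Presentation order for one participant in a balanced Latin
    square of n conditions (n even). Each condition appears once per
    row, once per serial position across participants, and immediately
    follows every other condition equally often, controlling order
    and carryover effects."""
    row, j, h = [], 0, 0
    for i in range(n):
        if i < 2 or i % 2 != 0:
            val = j
            j += 1
        else:
            val = n - h - 1
            h += 1
        row.append((val + participant) % n)
    return row

# Hypothetical numbering of the four conditions:
# 0=simple, 1=simple+interruption, 2=complex, 3=complex+interruption.
orders = [balanced_latin_square_row(4, p) for p in range(4)]
```

With four conditions, every block of four participants sees each condition in each serial position exactly once, which is why all four primary tasks end up interrupted equally often across the sample.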

Figure 1
Experimental design.

Procedure

Volunteers were asked to attend a 1-hour session at a computer laboratory within the hospital, which provided access to a training version of the CPOE set up with four hypothetical patient records. For the complex primary tasks, electronic medication charts were created in advance so that participants could update existing charts (see Appendix C available as an online data supplement at http://jamia.bmj.com). Patient records and medication charts were reset for each participant.

Instructional phase

Following written informed consent, doctors were asked to complete a short survey about their CPOE use and clinical experience, and to provide demographic information (see Appendix D available as an online data supplement at http://jamia.bmj.com). As all participants were experienced in using the CPOE and the online medications reference manual, no training was provided specifically for the experiment. Participants were informed that the study was not a test of their clinical knowledge or their computer skills, and the true objective of the study was not disclosed until the end of the session. However, participants were informed that they may sometimes be required to look up medications-related information using the online medications reference manual, which was available on their computer task bar (Appendix C available as an online data supplement at http://jamia.bmj.com). An investigator showed the participants how to switch back and forth between the primary prescribing task and the medications reference task by clicking on the computer task bar. Participants were instructed to attend immediately to any requests from the investigator, rather than delaying the interruption by first completing their current activity on the primary task.

Primary task assignment

Participants were sequentially presented the primary prescribing tasks in random order according to the Latin square design. Following the procedure used at the time in the hospital's medical wards, participants were provided with paper medication charts to complete the simple prescribing tasks to create new electronic medication charts. No paper charts were provided for the complex tasks as participants were required to make changes to existing electronic charts.

Interrupting task assignment

In order to maintain a uniform interruption position across all primary prescribing tasks, the interrupting task was introduced when a participant began entering the first medication. An investigator (SYWL) initiated the interruption by asking the participant to carry out the interruption task on a paper form. Interruptions were controlled by monitoring participants' computer activity, which was observed at a separate computer workstation using the TechSmith Morae screen capture software that allowed live observation. Participants were instructed to return the interrupting task to the investigator once it was completed and continue with the primary prescribing task.

Data collection

Use of the CPOE and online medications reference manual to complete prescribing and interrupting tasks was recorded using Morae, which also captured a video of all screen activity and an audio recording of all verbalizations. Clinical or technical assistance was not provided. We did not intervene to encourage speed of performance. All participants completed the study within the hour allocated. Ethical approval for the protocol was received from the University of New South Wales and hospital research ethics committees. All participants received two movie vouchers (AUD 20) for an hour of their time to participate in the study.

Outcome measures

Participants' responses to the prescribing tasks with and without interruptions were compared using data extracted from computer printouts of all medication charts from the CPOE system. Prescribing errors were determined by checking participants' responses against correct answers determined by the expert panel and validated by the pharmacist. This process was carried out by one investigator (SYWL) and then double-checked by a second for accuracy (FM). When coders disagreed, the medication chart was re-examined and a consensus code was assigned. The impact of interruptions was determined by examining Morae screen capture videos and audio recordings to calculate the various timing measures. The following measures were examined:

1. Types of prescribing errors

We identified specific errors associated with creating and updating electronic medication charts, such as failure to enter allergy information; selection of incorrect medication, dose, route, formulation, or frequency from lists and drop-down menus presented by the CPOE; incorrect entry or omission in entering administration times, start date, and free-text qualifiers; and omissions in prescribing and ceasing medications (table 3).

Table 3
Mean error rate (%) by prescribing error type

2. Error rate

A normalized error rate (equation 1), which takes into account the number of opportunities for making an error (or error opportunities) in executing a particular prescribing task was calculated:24 25

Error_rate = Ej / En
(1)

where Ej = no. of errors and En = no. of error opportunities.

An error opportunity is a ‘chance to make a mistake’.26 The error opportunities we examined were based upon prescribing error types (table 3; wrong patient, missed allergy, wrong medication, etc). For each error type we identified critical subtasks or interactions with the CPOE that could result in an incorrect prescription if incorrectly executed (see Appendix E available as an online data supplement at http://jamia.bmj.com). For example, when prescribing a medication, there is one opportunity to make a mistake in selecting the correct name (item) from the list of medications presented by the CPOE system. In scenario A, which was completed by all 32 participants, six medications were prescribed, giving 6 × 32 = 192 opportunities for error in selecting a medication name. Thus, error rates were calculated for each prescribing error type, and an overall error rate was calculated by summing all errors across all opportunities to make a prescribing error (table 3).
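The normalized error rate and the worked example above can be expressed as a minimal sketch of equation 1 (an illustration, not the authors' analysis code):

```python
def error_rate(n_errors, n_opportunities):
    """Equation 1: normalized error rate, ie the number of errors
    divided by the number of opportunities to make that error."""
    return n_errors / n_opportunities

# Worked example from the text: selecting a medication name in
# scenario A offers 6 medications x 32 participants = 192 opportunities.
medication_name_rate = error_rate(1, 6 * 32)  # 1/192, ie 0.5%
missed_allergy_rate = error_rate(5, 32)       # 5/32, ie ~16%
```

Normalizing by opportunities is what makes rates comparable across error types that occur with very different frequencies (eg one allergy field per chart versus six medication selections per chart).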

3. Resumption lag (trlag)

This is the time taken to re-orient and then restart the primary task after an interruption is over (figure 2). To compute the resumption lag we measured the time elapsed between when a participant clicked back to the CPOE system on the computer task bar (after completing the interruption task using the online medications reference manual) and their first on-screen action, that is, a key stroke or mouse click in the video log.

Figure 2
Resumption lag: the time taken to re-orient and resume a primary task at the end of an interruption (adapted from31).

4. Task completion time (Tc)

The time taken to complete primary tasks was also calculated using the video log (equation 2):

Tc = Ttotal - Ti - trlag
(2)

where Ttotal = time taken to complete the primary task including any interrupting tasks;

Ti = time taken to complete the interrupting task;

trlag = resumption lag.
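Equation 2 simply nets the interruption-related time out of the total elapsed time. As a sketch, with illustrative values rather than data from the study:

```python
def task_completion_time(t_total, t_interrupt, t_resumption_lag):
    """Equation 2: time spent on the primary task alone, excluding
    the interrupting task and the lag spent re-orienting afterwards."""
    return t_total - t_interrupt - t_resumption_lag

# Illustrative values only (seconds): 300 s elapsed overall, 80 s on
# the interrupting task, 9.6 s resumption lag (the mean lag the study
# reported for complex tasks).
tc = task_completion_time(300.0, 80.0, 9.6)  # time on the primary task itself
```

This decomposition is why an interrupted and a non-interrupted trial can be compared on the same footing: Tc measures only the primary-task work.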

Statistical analyses

We calculated that 32 primary tasks were required in each arm to detect an increase in errors when interrupted, from 10% to 30% in complex tasks and from 4% to 10% in simple tasks, with 80% power and p<0.05.27 Baseline measures for the power calculation were a conservative estimate derived from the literature: for simple computer-based tasks, which included reading, counting, comprehension, and selecting tasks, errors were observed in 20% of tasks undertaken at baseline.28 A two-way repeated-measures analysis of variance (ANOVA) was used to assess the main effects of interruption and task complexity and the interaction effect of interruption × task complexity. Effect sizes in the ANOVA were examined using partial eta squared (ηp2), the proportion of variance in the error rate and task completion time (dependent variables) attributable to each effect (independent variables) (ηp2: 0.01=small; 0.06=medium; 0.14=large effect size).27 A one-way repeated-measures ANOVA was used to compare error rates by task complexity.

For interrupted tasks, differences in resumption lag between simple and complex tasks were examined using Student t test. The relative magnitude of the difference was examined using ηp2, which is the proportion of variance in the resumption lag (dependent variable) that is explained by task complexity (independent variable). All statistical analyses were undertaken using SPSS v17.0 software.
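The resumption-lag comparison is a Student t test on within-subject differences. The analyses were run in SPSS, but the computation can be sketched in pure Python (hypothetical helper with illustrative data, not the study's measurements):

```python
import math

def paired_t(xs, ys):
    """Paired Student t statistic for two within-subject measurements,
    eg each participant's resumption lag on the complex vs the simple
    interrupted task. Returns (t statistic, degrees of freedom)."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1

# Illustrative resumption lags in seconds (complex vs simple tasks):
t_stat, df = paired_t([9.0, 11.5, 8.2, 9.7], [3.1, 4.0, 2.9, 3.6])
```

The t statistic is then compared against the t distribution with n−1 degrees of freedom to obtain a p value.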

Results

We recruited 34 participants from a cohort of 80 junior doctors and residents who worked at the hospital. The average age of the participants was 27.5 years (SD 3.2) and 59% (N=19) were female. On average, doctors had 2–5 years of clinical experience and reported 6–9 months' experience in using the hospital CPOE for electronic prescribing. Participants indicated having good computer skills, using the CPOE at an average frequency of 2–6 times/week in the last month. Data from two participants were excluded from analysis: one participant's data were lost due to a technical error, and a second participant, who was interrupted for a considerable length of time because of an emergency event, was also excluded.

Impact of interruption on prescribing tasks

We examined the mean error rate for simple and complex tasks with and without interruptions (table 4). The effects of interruption on error rate (F(1, 31)=0.899, p=0.350 (ηp2 = 0.028, observed power = 0.151)) and task complexity (F(1, 31)= 3.229, p=0.082 (ηp2 = 0.094, observed power = 0.414)) were not statistically significant. The interaction interruption × task complexity also did not reach statistical significance (F(1,31)=0.018, p=0.893 (ηp2 = 0.001, observed power = 0.052)).

Table 4
Error rate and task completion times for simple and complex tasks

Task completion times for simple and complex prescribing tasks with and without interruptions are listed in table 4. Complex tasks took significantly longer to complete than simple tasks (F(1, 27)=137.9, p<0.001 (ηp2 = 0.836, observed power = 1.000)). Using ηp2 as the measure of association, task complexity accounted for 83.6% of the total variability in the task completion time. Neither the effect of interruption on task completion time (F(1, 27)=0.54, p=0.469 (ηp2 = 0.020, observed power = 0.109)) nor the interaction interruption × task complexity reached statistical significance (F(1, 27)=1.41, p=0.245 (ηp2 = 0.050, observed power = 0.209)).

Resumption lag was examined for interrupted tasks. Complex tasks required almost three times longer to resume compared to simple tasks (resumption lagcomplex=9.6 s, SD=5.6; resumption lagsimple=3.4 s, SD=1.7; t(28)=6.186, p<0.001; ηp2= 0.415).

Types of prescribing errors

To understand types of error associated with CPOE use we pooled data from interrupted and non-interrupted tasks to quantitatively and qualitatively examine error rates for specific types of prescribing errors (table 3). The main errors observed were:

Missed allergy

Failure to enter allergy information into the allergy field accounted for the highest error rate in simple tasks (error rate =16%; N=32).

Medication name

Incorrect selection of a medication from the list presented by the CPOE resulted in prescription of the wrong medication (error rate up to 0.5%; N=192).

Dose

Incorrect selection of a dose from a list presented by the CPOE resulted in prescription of the wrong dose (error rate up to 2.3%; N=128).

Route

Incorrect selection from a CPOE list resulted in prescription of tablets instead of intravenous medication (error rate up to 0.8%; N=128).

Administration time

Failure to change default times for administration of medications was common. The error rate was 2.8% for scenario A (N=320) and 3.8% for scenario B (N=416).

Frequency

Incorrect selection from a CPOE drop-down menu resulted in the wrong frequency (scenario A: error rate=1.0%; N=192). For complex tasks, incorrect selection of the frequency involved PRN doses and a weekly dose (scenario C error rate=3.9%; N=128).

Medication omissions

These involved failure to add new medications when updating an existing electronic medication chart (error rate=0.8% and 3.2% for scenarios C and D, respectively; N=128).

Date error

Incorrect selection of the start date resulted in early commencement of a medication.

Qualifier omission

Omissions in including a text qualifier for administration of a topical medication in scenario B were associated with an error rate of 12% (N=32).

Cessation error

Failure to cease a medication accounted for the highest proportion of errors in scenario C (error rate=8%; N=64). In comparison, the error rate associated with failure to cease a medication in scenario D was 3% (N=64).

PRN max dose omission

A relatively high proportion of errors in scenario C involved errors or omissions in specifying the maximum dose for a PRN medication (error rate=6%; N=64).

Tables 5 and 6 list examples of each prescribing error type. A comparison of error rates by task complexity showed a significant effect (F(29, 3)=4.51, p=0.010 (ηp2 = 0.318, observed power = 0.834)). Error rates for the complex scenario C were higher than for the simple scenario B (error rateC=2.59%, N=1120; error rateB=1.51%, N=1792; p=0.007).

Table 5
Medications associated with errors in simple tasks
Table 6
Medications associated with errors in complex tasks

Discussion

Main findings and implications

Impact of interruptions

We did not detect any impact of interruptions on prescribing errors in our experiment. One possible reason is that the effect of interruptions on error rate may have been minimized by cues available within the user interface of the CPOE, which may have aided resumption of interrupted tasks. Moreover, the laboratory setting is not a true reflection of the cognitive loads of the busy work environment, where interruptions have been shown to disrupt other types of clinical tasks, resulting in procedural failures and clinical errors.16 In our experiment, upon task resumption participants had ample time to recall their next action, and the task environment provided cues to aid task resumption. This is consistent with observations from other studies of interruptions to computer-based tasks in which participants were aided by the screen environment and did not fail to resume an interrupted task.20 29 Performance under cognitive load from more demanding competing tasks in a clinical environment may have resulted in a different outcome. Furthermore, interruptions are a complex phenomenon in which multiple variables, including the characteristics of the primary tasks, the interruptions themselves, and the environment, may influence their impact on clinical tasks and errors.30 As interruptions seem to be inherent to clinical work and cannot be completely eliminated, it is necessary to understand the circumstances under which interruptions are likely to be dangerous and to design systems that are interruption-tolerant.

While the size of our sample was adequate to detect a difference in resumption lag, the study was not adequately powered to examine the smaller differences in prescribing errors, mainly because the error rates we measured were much lower than the initial estimates used for the sample size calculations. Given that total error rates for our prescribing tasks ranged from 0.92% to 2.59% (N=1088 to N=1120) (table 3), error rate may not be a sufficiently sensitive measure to detect the disruptive effect of interruptions, in contrast to resumption lag, which has been reliably measured in other laboratory studies.31 Observed power for the effects of interruption and task complexity and for the interaction effect of interruption × task complexity on error rate was less than 0.80, which suggests insufficient power of the experiment rather than absence of an effect of interruption on error rate. Further experiments that are adequately powered to detect small differences in error rate are required to rule out any effect interruption might have on CPOE error rates.

Interruptions did not have any reliable effect on the overall task completion times of simple or complex tasks. A possible explanation is that overall task completion time is a coarse measure and participants could have developed strategies (eg, speeding up task execution) to compensate for the disruptive effect of interruptions throughout the task execution period.32 As a consequence, any disruptive effect of interruption may not have been sufficiently large to be reflected in the overall task completion time. As with error rate, the observed power for the effects of interruption and the interaction effect of interruption × task complexity on task completion time was less than 0.80, which suggests insufficient power of the experiment rather than absence of an effect of interruption on task completion time.

After interruption, complex tasks required almost three times longer to resume than simple tasks (resumption lagcomplex=9.6 seconds, resumption lagsimple=3.4 seconds). This finding is consistent with Burmistrov and Leonova's19 study showing a disruptive effect on complex cognitive operations in a computer-based editing task. However, some caution is needed in interpreting this result, as a disparity in the task resumption environment between simple and complex prescribing tasks may have exaggerated the magnitude of the resumption lag. During resumption, the computer screen for the simple tasks did not list any medications and the next correct action was to begin prescribing by clicking the ‘Prescribe’ button (Appendix C available as an online data supplement at http://jamia.bmj.com and figure 1). In comparison, the resumption screen for the complex tasks listed six existing medications, and the next action could relate to any of the six medications (Appendix C: figure 1). Thus the complex resumption screen provided more competing information cues than the simple resumption screen and may have caused participants to spend more time differentiating among competing cues in order to recall their next action. This notion of making the right memory associations with environmental cues upon task resumption is well founded in the activation-based goal memory model,33 which has been widely used in the study of cognitive processes in interruption.24 34 35 Better measurement of the context surrounding interruptions may be necessary in future studies.

Types of prescribing errors

We have quantified the types of prescribing errors when using a CPOE system in a controlled setting. Most prescribing errors we observed could be described as slips in using the CPOE system to create and update electronic medication charts, including failure to enter allergy information; selection of incorrect medication, dose, route, formulation, or frequency of administration from lists and drop-down menus presented by the CPOE; incorrect entry or omission in entering administration times, start date, and free-text qualifiers; and omissions in prescribing and ceasing medications. Such slips are errors in the execution or storage phase in an action sequence and they occur with the correct intention from users.36 37 Prescribing errors resulting from slips occur not because of deficiency in the clinician's knowledge about prescribing or use of the CPOE but because of human fallibility in executing routine procedures despite having the correct how-to knowledge.

Other CPOE errors in specifying administration times involved changes to default times specified by the system. This highlights a need to train users to double-check administration times for medications such as diuretics and angiotensin-converting enzyme (ACE) inhibitors, which may at times be exceptions to the standard times specified in the national inpatient medication chart upon which the CPOE default times were based (scenario A: error rate=2.8%, N=320; scenario B: error rate=3.8%, N=416). Specification of PRN medications (error rate=6%, N=64) and weekly medications (error rate=4.7%, N=128) was particularly problematic, underlining a need to improve user skills to minimize mistakes in prescribing these medications.

Some types of prescribing errors may have more serious consequences than others. For example, the top two error types related to missed allergies and omission of free-text qualifiers. The potential significance of missing allergy information in 10 out of 64 new medication charts is likely to be greater than that of missing directions for administration in 4 out of 32 prescriptions for a topical medication. The absolute error rate for most other types of prescribing errors ranged from 0.5% (N=192) to 7.8% (N=64), which may have significant consequences for medications such as cytotoxics. Moreover, the impact of prescribing errors could be magnified by the greater number of opportunities for making such errors in a hospital setting.

Evaluating safe designs

Our method used hypothetical clinical scenarios in a controlled setting. Such an approach provides a quantitative basis to compare different CPOE systems and the design of corrective or preventive strategies to minimize errors. Some of the errors caused by slips can be prevented through improved user interface design and user training. Indeed, two of the problems identified in this experiment have been rectified in the new version of the CPOE deployed at the hospital. Missed allergies that now appear in red text are much more likely to be noticed by a clinician. A monthly calendar has also been introduced to simplify specification of weekly doses.

Comparison with the literature

Few studies have examined the impact of interruptions in healthcare. Task disparity makes direct comparison difficult with other domains, such as aviation, where the effects of interruption have traditionally been studied. The hypotheses (H1, H2) of the current study were not supported despite being derived from the interruption literature. One of the main reasons is that experimental studies have examined the effects of interruption across a range of different tasks. Our hypotheses were based on studies from aviation,18 production management,20 21 and computer-based editing,19 all of which differ from prescribing with a CPOE system; this difference in task nature may have contributed to the different findings. However, H3 was supported and H4 partially supported, and these findings can be related to existing studies. We found that complex tasks took significantly longer to resume, which is consistent with Burmistrov and Leonova,19 who found that interrupting cognitively complex operations in a computer-based editing task resulted in longer resumption lags than interrupting simpler cognitive operations.
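The resumption-lag comparison reported in the abstract (t(28)=6.186) follows a paired, within-subject design. A sketch of that style of analysis is shown below; the per-participant lags here are hypothetical illustrative values, not the study's data, and the t statistic is computed from first principles using only the standard library:

```python
import math
import statistics

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for two matched samples."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)          # sample SD of the differences
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1

# Hypothetical resumption lags (seconds) for the same participants
# under complex and simple task conditions
complex_lags = [9.1, 10.4, 8.7, 12.0, 7.9, 9.8]
simple_lags = [3.2, 3.9, 2.8, 4.1, 3.0, 3.5]
t_stat, df = paired_t(complex_lags, simple_lags)
```

Because each participant contributes a lag under both conditions, the paired test removes between-participant variability from the comparison.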

Our findings about the mechanisms of CPOE-related error are comparable to those of other studies of online clinical information systems. As in a previous study of online decision support systems using hypothetical scenarios,38 we found that slips in using the CPOE system were responsible for the majority of the prescribing errors we identified. The types of errors we identified are consistent with the mechanisms of prescribing errors found in evaluations of CPOE in clinical settings, including missed drug allergies7; incorrect medication, dose, route, formulation, administration time, or frequency4 8; and incorrect entry or omission of start dates4 and of the free-text qualifiers needed to administer drugs.8 9

Limitations

There are several limitations to the design of the current study. First, the laboratory environment may not be representative of the settings in which a CPOE system is generally used for electronic prescribing. Within a laboratory setting there is a trade-off between establishing causal relationships among controlled variables and ensuring ecological validity; it is therefore possible that participants devoted more time to completing the tasks than they would in practice. Second, the interruptions in our study also related to use of the CPOE system and may not have had the same impact as typical interruptions in a clinical setting, which may require a clinician to walk away from the primary task. Third, we used simulated prescribing scenarios for creating and updating electronic medication charts in a specific CPOE system at one teaching hospital attached to the university. Our results may therefore not be generalizable to other CPOE systems, which may have a different user interface and usage model. Finally, the 32 participants in this study (mean age 27.5 years) were recruited from a cohort of 80 junior doctors and residents employed by the same hospital and may not be representative of other populations of doctors. The prescribing tasks examined are primarily undertaken by this group of doctors as part of their routine clinical duties; we therefore recruited our participants from this cohort to minimize bias. As all junior doctors and residents are required to complete a formal CPOE training session conducted by the hospital and routinely use the CPOE system, we would expect the skills of the participants to be comparable to those of non-participants, although this was not measured.

Conclusion

Most electronic prescribing errors found in this study could be described as slips in using the CPOE system to create and update electronic medication charts. Laboratory-based evaluation of the types of prescribing error using hypothetical clinical scenarios provides a quantitative basis for comparing different CPOE systems and for designing corrective or preventive strategies to minimize errors. No impact of interruptions on prescribing errors or task completion times was detected in our experiment, which was underpowered to detect such effects. Cues available within the user interface may have aided resumption of interrupted tasks, making CPOE systems robust to some interruption effects. Further experiments that are adequately powered to detect small differences in error rate are needed to rule out any effect interruptions might have on CPOE error rates. It is also necessary to understand the circumstances under which interruptions to clinical tasks are likely to be dangerous.
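Adequately powering a follow-up experiment means estimating, in advance, how many error opportunities are needed to detect a plausible difference in error rates. A minimal sketch of such a calculation is given below, using the standard normal-approximation formula for comparing two proportions; the proportions (5% vs 8%) are illustrative assumptions, not figures from this study:

```python
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05,
                power: float = 0.80) -> int:
    """Per-group sample size to detect a difference between two proportions
    with a two-sided test, using the normal approximation."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = z(power)            # value corresponding to the desired power
    pbar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * pbar * (1 - pbar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Detecting a rise in error rate from 5% to 8% at 80% power requires
# on the order of a thousand error opportunities per condition
n = n_per_group(0.05, 0.08)
```

The small absolute differences in error rates observed here imply very large numbers of error opportunities per condition, which helps explain why no interruption effect was detectable with 32 participants.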

Supplementary Material

Web Only Data:

Acknowledgments

We wish to thank the 34 doctors who gave their time to take part in the laboratory study; Drs N Brennan, S Beveridge, G Nicholls, and M Manghani; Maureen Heywood, Silvia Fazekas, Natasha McBrearty, Kelly Tank, and Jo Fowler for their contribution to the development and pilot testing of the clinical scenarios related to the prescribing tasks examined in this study; Dr F Lin for his assistance in pilot testing the scenarios and extracting data from Morae.

Footnotes

Funding: This research is supported in part by grants LP0775532 and DP0772487 from the Australian Research Council. FM is supported by an ARC APDI Fellowship and the University of New South Wales, Faculty of Medicine.

Competing interests: None.

Ethics approval: This study was conducted with the approval of the University of New South Wales and hospital research ethics committees.

Provenance and peer review: Not commissioned; externally peer reviewed.

References

1. Coiera E, Westbrook J, Wyatt J. The safety and quality of decision support systems. Methods Inf Med 2006;45(Suppl 1):20–5.
2. Ash JS, Berg M, Coiera E. Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. J Am Med Inform Assoc 2004;11:104–12.
3. Ash JS, Sittig DF, Dykstra R, et al. The unintended consequences of computerized provider order entry: findings from a mixed methods exploration. Int J Med Inform 2009;78(Suppl 1):S69–76.
4. Donyai P, O'Grady K, Jacklin A, et al. The effects of electronic prescribing on the quality of prescribing. Br J Clin Pharmacol 2008;65:230–7.
5. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293:1197–203.
6. Mahoney CD, Berard-Collins CM, Coleman R, et al. Effects of an integrated clinical information system on medication safety in a multi-hospital setting. Am J Health Syst Pharm 2007;64:1969–77.
7. Spencer DC, Leininger A, Daniels R, et al. Effect of a computerized prescriber-order-entry system on reported medication errors. Am J Health Syst Pharm 2005;62:416–19.
8. Shulman R, Singer M, Goldstone J, et al. Medication errors: a prospective cohort study of hand-written and computerised physician order entry in the intensive care unit. Crit Care 2005;9:R516–21.
9. Singh H, Mani S, Espadas D, et al. Prescription errors and outcomes related to inconsistent information transmitted through computerized order entry: a prospective study. Arch Intern Med 2009;169:982–9.
10. Coiera E. Clinical communication: a new informatics paradigm. Proc AMIA Annu Fall Symp 1996:17–21.
11. Coiera E, Tombs V. Communication behaviours in a hospital setting: an observational study. BMJ 1998;316:673–6.
12. Parker J, Coiera E. Improving clinical communication: a view from psychology. J Am Med Inform Assoc 2000;7:453–61.
13. Zhan C, Hicks RW, Blanchette CM, et al. Potential benefits and problems with computerized prescriber order entry: analysis of a voluntary medication error-reporting database. Am J Health Syst Pharm 2006;63:353–8.
14. Grundgeiger T, Sanderson P. Interruptions in healthcare: theoretical views. Int J Med Inform 2009;78:293–307.
15. Rivera-Rodriguez AJ, Karsh BT. Interruptions and distractions in healthcare: review and reappraisal. Qual Saf Health Care 2010;19:304–12.
16. Westbrook JI, Woods A, Rob MI, et al. Association of interruptions with an increased risk and severity of medication administration errors. Arch Intern Med 2010;170:683–90.
17. Collins S, Currie L, Patel V, et al. Multitasking by clinicians in the context of CPOE and CIS use. Stud Health Technol Inform 2007;129:958–62.
18. Latorella KA. Investigating interruptions: an example from the flightdeck. In: Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting, 1996:249–53.
19. Burmistrov I, Leonova A. Do interrupted users work faster or slower? The micro-analysis of a computerized text editing task. In: Jacko J, Stephanidis C, eds. Human–computer interaction: theory and practice (Part I). Mahwah, NJ: Lawrence Erlbaum Associates, 2003:621–5.
20. Speier C, Vessey I, Valacich JS. The influence of task interruption on individual decision making: an information overload perspective. Decision Sci 1999;30:24.
21. Speier C, Vessey I, Valacich JS. The effects of interruptions, task complexity, and information presentation on computer-supported decision-making performance. Decision Sci 2003;34:771–97.
22. Magrabi F. Using cognitive models to evaluate safety-critical interfaces in healthcare. In: CHI '08 Extended Abstracts on Human Factors in Computing Systems. Florence, Italy: ACM, 2008.
23. Wood RE. Task complexity: definition of the construct. Org Behav Human Dec Process 1986;37:60–82.
24. Li SYW, Blandford A, Cairns P, et al. The effect of interruptions on postcompletion and other procedural errors: an account based on the activation-based goal memory model. J Exp Psychol Appl 2008;14:314–28.
25. Byrne MD, Bovair S. A working memory model of a common procedural error. Cogn Sci 1997;21:31–61.
26. Tullis T, Albert A. Measuring the user experience: collecting, analyzing, and presenting usability metrics. Amsterdam; Boston: Elsevier/Morgan Kaufmann, 2008.
27. Cohen J. Statistical power analysis for the behavioral sciences. 2nd edn. Hillsdale, NJ: Lawrence Erlbaum Associates, 1988.
28. Bailey BP, Konstan JA. On the need for attention-aware systems: measuring effects of interruption on task performance, error rate, and affective state. Comput Human Behav 2006;22:685–708.
29. Dodhia RM, Dismukes RK. Interruptions create prospective memory tasks. Appl Cogn Psychol 2009;23:73–89.
30. Magrabi F, Li SYW, Dunn AG, et al. Why is it so difficult to measure the effects of interruptions in healthcare? To be presented at the 13th International Congress on Medical Informatics, Medinfo 2010.
31. Trafton JG, Altmann EM, Brock DP, et al. Preparing to resume an interrupted task: effects of prospective goal encoding and retrospective rehearsal. Int J Hum Comput Stud 2003;58:583–603.
32. Mark G, Gudith D, Klocke U. The cost of interrupted work: more speed and stress. In: Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems. Florence, Italy: ACM, 2008.
33. Altmann EM, Trafton JG. Memory for goals: an activation-based model. Cogn Sci 2002;26:39–83.
34. Hodgetts HM, Jones DM. Interruption of the Tower of London task: support for a goal-activation approach. J Exp Psychol 2006;135:103–15.
35. Monk CA, Trafton JG, Boehm-Davis DA. The effect of interruption duration and demand on resuming suspended goals. J Exp Psychol 2008;14:299–313.
36. Norman DA. Categorization of action slips. Psychol Rev 1981;88:1–15.
37. Reason J. Human error. Cambridge: Cambridge University Press, 1990.
38. Graham TA, Kushniruk AW, Bullard MJ, et al. How usability of a web-based clinical decision support system has the potential to contribute to adverse medical events. AMIA Annu Symp Proc 2008;6:257–61.

Articles from Journal of the American Medical Informatics Association : JAMIA are provided here courtesy of American Medical Informatics Association
