AMIA Annu Symp Proc. 2008; 2008: 257–261.
Published online 2008.
PMCID: PMC2655970

How Usability of a Web-Based Clinical Decision Support System Has the Potential to Contribute to Adverse Medical Events

Timothy A.D. Graham, MD MSc, CCFP(EM),1 Andre W. Kushniruk, PhD,2 Michael J. Bullard, MD, FRCPC,1 Brian R. Holroyd, MD, FRCPC,1 David P. Meurer, RN, BScN,1 and Brian H. Rowe, MD, MSc, CCFP(EM)1



Clinical decision support systems (CDSS) have the potential to reduce adverse medical events, but improper design can introduce new forms of error. CDSS pertaining to community acquired pneumonia and neutropenic fever were studied to determine whether usability of the graphical user interface might contribute to potential adverse medical events.


Automated screen captures of four CDSS, used by volunteer emergency physicians, were analyzed using structured methods.


422 events were recorded over 56 sessions. In total, 169 negative comments, 55 positive comments, 130 neutral comments, 21 application events, 34 problems, 6 slips, and 5 mistakes were identified. Three mistakes could have had life-threatening consequences.


Evaluation of CDSS will be of utmost importance in the future with increasing use of electronic health records. Usability engineering principles can identify interface problems that may lead to potential medical adverse events, and should be incorporated early in the software design phase.


Introduction:

Many health organizations in the developed world are striving to provide clinicians with information systems that will grant electronic access to a patient’s complete set of medical data. Perhaps the most exciting aspect of this trend is the potential for computers to add value by providing decision-support at the point of care. It is this aspect of Health Informatics that has the potential to truly revolutionize medicine.1

Clinical decision support has been defined as any electronic or non-electronic system designed to aid directly in clinical decision-making that uses patient-specific information or characteristics to generate assessments or recommendations that are subsequently presented to clinicians for consideration.2 While having access to such clinical decision-support systems (CDSS) should improve health outcomes, it is also true that they have the potential to have the opposite effect.3–7 This could occur if programs function poorly, education is ineffective, information is erroneous and/or such systems are integrated into existing workflows without regard for the changes they will induce. Before widespread implementation it is essential that CDSS be comprehensively studied to ensure that they do no harm to patients and add improvements for clinicians.

Medical errors may have a number of causes, ranging from the cognitive limitations of humans, to temporary slips in knowledge, to problems with health care workflow.7,9 Information technology has the potential to decrease medical errors by streamlining workflow and providing features such as automated alerts and reminders.7 Several studies have documented reductions in adverse events, including unnecessary laboratory testing, drug–drug interactions and transcription errors.9–13 Such tasks are ideally suited to computerization, and it makes intuitive sense that well-designed computer systems will lead to fewer adverse drug events. Despite this, systems and applications that have historically been considered a source of error reduction may actually introduce other, latent types of error. In some cases they may contribute to new errors and to more errors overall.3–7 These types of error can be termed technology-induced errors.7,8

For health care CDSS to be employed, they must not only be factually accurate, but also designed in a way users will find sensible and useful. As such, certain principles of usability should be employed in the design of CDSS. Usability is a quality attribute relating to how easy something is to use. More specifically, it refers to how quickly people can learn to use something, how efficient they are while using it, how memorable it is, how error-prone it is, and how much users enjoy its use. If people can't or won't use a feature, it might as well not exist.14 Another definition states that usability can be broadly defined as the capacity of a system to allow users to carry out their tasks safely, effectively, efficiently, and enjoyably.15

A variety of cognitive approaches for assessing the usability of health information systems have been developed based on ideas from cognitive and usability engineering. The methods typically take an interdisciplinary perspective, drawing from a number of areas including cognitive psychology, computer science, systems engineering, and the field of usability engineering.15 Starting in the 1990s, the emerging field of usability engineering began to apply scientific methods to this multidisciplinary approach in an effort to fully elaborate the complexities of human-computer interaction.16,17

This study applies usability engineering methods (described below) to two existing intranet-based CDSS pertaining to Community Acquired Pneumonia (CAP) and Neutropenic Fever (NF), which have been in use in a tertiary care Canadian Emergency Department (ED) for several years, and to two new prototype redesigns of these tools. It focuses on how interface design might contribute to medical errors and potential adverse events, and how such problems can be detected.


Methods:

Study Setting:

This study took place in the emergency department (ED) of a tertiary care teaching hospital staffed by full time emergency physicians 24 hours a day with approximately 60 patient care spaces and more than 60 computer terminals. During the 2006–2007 study period the annual patient volume was about 73,000.


The emergency physicians have access to an intranet-based CDSS that is hosted by the regional health authority and used by EDs in the region. It contains a variety of electronic clinical tools, including patient informational handouts, order sets, assessment tools, clinic referral forms, clinical practice guidelines with varying degrees of interactivity, other information such as public health notices, and an email feedback and issue reporting system. Clinicians access these resources for appropriate patients, then complete and print a PDF version, which is added to the paper chart.

The CDSS described in this study are for CAP and NF patients, and are accessible on the intranet. They were developed by and for emergency physicians and emergency nurses, and offer predefined order sets and treatment guidelines based on current evidence and regional expert consensus. For example, a physician treating a patient with possible pneumonia can access the CAP CDSS, which will suggest pertinent blood tests, radiographs and antibiotics (based on local resistance patterns). The interface can select all the suggested tests and therapies by default, or allow each to be chosen individually. It also assists in the calculation of the Pneumonia Severity Index, a well-described decision support instrument that predicts inpatient and outpatient mortality in patients with pneumonia and therefore guides admission decisions.18 A similar format is used for NF; however, the NF form provides more interactivity, teaching points, and access to the drug formulary for rare treatments (e.g., granulocyte colony-stimulating factor [G-CSF]).
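As an illustration of the kind of scoring the CAP tool supports, the following is a simplified sketch of Pneumonia Severity Index point assignment, using only a small subset of the predictors described by Fine et al.18 It is not the CDSS implementation; the function and parameter names are invented for this example.

```python
# Illustrative, simplified sketch of Pneumonia Severity Index (PSI) scoring.
# Only a subset of the predictors from Fine et al. is shown; the full index
# uses roughly 20 demographic, comorbidity, exam, and laboratory variables.

def psi_points(age, female=False, nursing_home=False,
               neoplastic_disease=False, chf=False,
               altered_mental_status=False, resp_rate=16, systolic_bp=120):
    """Return a partial PSI point total (illustrative subset only)."""
    points = age - (10 if female else 0)  # age in years; women score age - 10
    if nursing_home:
        points += 10
    if neoplastic_disease:
        points += 30
    if chf:
        points += 10
    if altered_mental_status:
        points += 20
    if resp_rate >= 30:
        points += 20
    if systolic_bp < 90:
        points += 20
    return points

def psi_class(points):
    """Map a point total to PSI risk classes II-V (class I is rule-based)."""
    if points <= 70:
        return "II"
    if points <= 90:
        return "III"
    if points <= 130:
        return "IV"
    return "V"

# Omitting a single checkbox (e.g., nursing-home residence) can shift the
# computed risk class, producing the kind of severity under-estimation a
# poorly designed form can invite.
with_flag = psi_points(age=68, nursing_home=True)      # 78 -> class III
without_flag = psi_points(age=68, nursing_home=False)  # 68 -> class II
```

Note how a single missed input changes the risk class, and hence potentially the admission decision the index is meant to guide.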

In this study two prototypes (one for CAP and one for NF) were also studied. These prototypes were developed with extensive input from the developers of the original application. Some of the critical design improvements included bi-directional flow of data, better user interface, more emphasis on ease of use and effective workflow with electronic charting, and HL7 messaging to enable communication between the application and the regional electronic health record (Alberta Netcare). The prototypes were, however, still in development with the understanding that much work was required before they could be considered ready for deployment in a clinical environment.
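The paper does not describe the prototypes' HL7 interface in any detail. Purely as a hedged illustration of what HL7 v2 messaging looks like, the sketch below assembles a minimal pipe-delimited order message; all application names, identifiers and field values are invented and do not represent the actual interface to Alberta Netcare.

```python
# Minimal, illustrative HL7 v2.x order (ORM^O01) message assembled as
# pipe-delimited segments. Every identifier and field value here is
# invented for illustration.

def build_orm_message(patient_id, last, first, order_id, order_code, order_text):
    segments = [
        # MSH: message header (sending/receiving app, timestamp, type, version)
        "MSH|^~\\&|CDSS|ED|EHR|REGION|20080101120000||ORM^O01|MSG0001|P|2.3",
        # PID: patient identification
        f"PID|1||{patient_id}||{last}^{first}",
        # ORC: common order segment (NW = new order)
        f"ORC|NW|{order_id}",
        # OBR: observation request (the service being ordered)
        f"OBR|1|{order_id}||{order_code}^{order_text}",
    ]
    return "\r".join(segments)  # HL7 v2 segments are separated by <CR>

msg = build_orm_message("12345", "Doe", "Jane", "ORD001", "LACT", "Lactate level")
```

Bi-directional data flow of the kind the prototypes aimed for would involve both sending such order messages and consuming result/demographic messages from the regional record.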

Usability Testing:

Seven attending emergency physicians were observed using the CDSS in scripted scenarios that mimicked clinical encounters. Subjects were experienced with the existing intranet CDSS web portal; the majority had used it more than 50 times. They also viewed a five-minute video tutorial and received a verbal walkthrough describing how to use the prototype CDSS applications. Screen activity was recorded on a Toshiba Satellite P100 research laptop (Model PSPA3C-JR100E) with an Intel T2300 processor running at 1.66 GHz, 512 MB of RAM and a 100 GB hard drive. Automated screen and audio capture was obtained with HyperCam™ software installed on the laptop and a MicFlex™ USB microphone plugged into the laptop’s USB port. A second audio recording was captured using a Panasonic RR-US36 portable audio recorder. The audio recordings were transcribed into text files by a transcriptionist and coded following methods described by Kushniruk et al.15


Each subject physician was observed through four sessions (two for each of the existing CAP and NF CDSS). In the first session with each tool, the subject was given a task such as “please use the CAP CDSS to derive the Pneumonia Severity Index based on the following clinical scenario”, and asked to use the application as they normally would. They were asked not to explore parts of the CDSS merely because they were being studied, but rather to use it as they normally would in the ED. In the second run-through of each application, they were asked to “think aloud” (i.e., verbalize their thoughts) as they used the system, specifically addressing aspects of the interface that they liked, disliked or wished to see changed.

For the CAP CDSS the subjects were asked to use the information provided to calculate the Pneumonia Severity Index Score using the CDSS, and then complete the web-based interface. At the end of each think-aloud session, a researcher (TG) asked a series of questions to elicit more information from the subjects as to their experience with the system. These included questions such as: 1) Did you have any problems using the system or completing the task?; 2) What do you like about the system?; 3) What do you dislike about the system?; etc. At the end of the question period the audio recording was left running. Often at this point further spontaneous feedback was offered about some of the features of the system.

Development of Coding Categories:

Prior to analysis, coding categories were selected for characterizing the systems’ usability, following the work of Kushniruk et al.6,7 The scheme included categories for subject comments regarding the following: interface consistency, response time, comprehensibility of system messages, help availability, comprehension of graphs and tables, challenges to entering data and comments regarding the entry of chronological information. Coding categories for identifying usability problems from the analysis of video based data included:

  1. interface problems - including categories for identifying problems with: data entry, provision of too much or too little information, display visibility, navigation, locating information, definitions of terms and general layout
  2. content problems – including categories for problems with: database content (e.g. the content of the medication database does not include a desired dose), clinical decision support (e.g. the ability to flag drug interactions with allergies), and relevance of information to the ED
  3. slips (errors like incorrect medication entry which at some point the user notices and corrects - typically unintentional events such as typographical errors) and mistakes (errors like incorrect medication entry which are not corrected by the subject)7
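One way to make such a coding scheme concrete is as a fixed vocabulary that annotations are validated against before being recorded. The sketch below paraphrases the categories listed above; the study's exact labels may have differed.

```python
# Sketch of the usability coding scheme as a data structure. Category names
# paraphrase the scheme described in the text, not the study's actual labels.

CODING_SCHEME = {
    "INTERFACE": [
        "DATA_ENTRY", "INFORMATION_AMOUNT", "DISPLAY_VISIBILITY",
        "NAVIGATION", "LOCATING_INFORMATION", "TERM_DEFINITIONS", "LAYOUT",
    ],
    "CONTENT": [
        "DATABASE_CONTENT", "DECISION_SUPPORT", "ED_RELEVANCE",
    ],
    "ERROR": [
        "SLIP",     # noticed and corrected by the user
        "MISTAKE",  # never corrected by the user
    ],
}

def is_valid_code(group, code):
    """Check an annotation against the scheme before recording it."""
    return code in CODING_SCHEME.get(group, [])
```

Validating annotations against a closed vocabulary like this keeps inter-rater coding consistent across long video sessions.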

Annotation Phase:

This involved coding the automated screen capture for the presence of the categories described above. The text files of the transcribed audio were studied as the screen capture was running in another window. As problems were identified in the video, the corresponding event in the text was “time-stamped” by inserting the time of the occurrence and the type of problem. For instance, if the video showed the subject searching for a way to proceed to the next screen of the system and saying something like “I find this difficult to find”, the annotation would read 01:33 COMMENT: CRITIQUE NAVIGATION. In this manner each subject’s interactions with the existing and prototype applications were coded from beginning to end for the presence of the usability coding categories.
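A time-stamped annotation format like this lends itself to automated tallying. The sketch below infers a "MM:SS TYPE: DETAIL" line format from the single example given in the text; the study's actual transcript files may have been structured differently.

```python
import re
from collections import Counter

# Tally time-stamped annotations of the (assumed) form "MM:SS TYPE: DETAIL"
# from a coded transcript. The transcript lines below are invented examples.

ANNOTATION = re.compile(r"^(\d{2}:\d{2})\s+([A-Z]+):\s*(.+)$")

def tally_annotations(lines):
    counts = Counter()
    for line in lines:
        m = ANNOTATION.match(line.strip())
        if m:
            _, kind, detail = m.groups()
            counts[(kind, detail)] += 1
    return counts

transcript = [
    "01:33 COMMENT: CRITIQUE NAVIGATION",
    "02:10 PROBLEM: NAVIGATION",
    "04:51 COMMENT: CRITIQUE NAVIGATION",
]
counts = tally_annotations(transcript)
# counts[("COMMENT", "CRITIQUE NAVIGATION")] == 2
```

Counts aggregated this way feed directly into the per-category event totals reported in the results.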


Results:

From 56 recorded sessions, a total of 422 events were identified. The events were grouped into seven main categories: 1) Negative Comments, 2) Positive Comments, 3) Neutral Comments, 4) Application Events, 5) Problems, 6) Slips and 7) Mistakes. In total there were 169 Negative Comments, 55 Positive Comments, 130 Neutral Comments, 21 Application Events, 34 Problems, 6 Slips, and 5 Mistakes identified.
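The percentage figures quoted below for the smaller categories can be reproduced from these counts and the reported total:

```python
# Category counts as reported in the text; used only to reproduce the
# quoted percentage figures for Application Events (~5%) and Problems (~8%).
counts = {
    "negative_comments": 169,
    "positive_comments": 55,
    "neutral_comments": 130,
    "application_events": 21,
    "problems": 34,
    "slips": 6,
    "mistakes": 5,
}
TOTAL = 422  # total events as reported

shares = {k: round(100 * v / TOTAL) for k, v in counts.items()}
# shares["application_events"] == 5, shares["problems"] == 8
```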

Application Events accounted for 5% of the recorded events. Sixty-two percent of these were events where the System Precluded the Desired Choice of the subject; most were subjects wanting to enter a therapeutic or testing option that was not present. For example, one subject noted that there was no option to order lactate, considered an important part of the work-up of a potentially septic patient.

In 24% of cases (5 instances), subjects purposefully either ignored or overrode the CDSS. During the CAP Prototype think-aloud session one subject noted that there was no box to record crackles (a common lung sound that can be heard with pneumonia) and recorded an answer that he felt was “close enough”: “And we have right sided bronchial breath sounds…interesting... I think that we don’t have a crackles category here. In the absence of a crackles category if I heard crackles I would probably just put bronchial breath sounds as crackles for that category.” Another subject seemingly invented data at one point to fill in a box that was empty: “Um, symptoms began 2 days ago and I’m going to imagine they have just got worse since onset, or worse in the last 24 hours. So, worse in the last 1 day.” Two subjects ordered intravenous levofloxacin for their patients with pneumonia despite the CDSS recommendation (and pop-up box) reminding them that oral antibiotics would be indicated in this type of patient.

Eight percent of all recorded events were categorized as Problems; 57% of these were navigational. On the NF CDSS, if the height and weight were not entered on the first page, ordering weight-based medications on the second page required scrolling back to the beginning of the form, which frustrated users.

Three subjects neglected to tick the nursing home box on the CAP Prototype. In one case this resulted in an under-estimation of disease severity, with a corresponding under-estimation of mortality (from 9% to 1%). Had the subject acted on this by sending the patient home, it might have led to an adverse event, including death.

Five Mistakes were noted, two of which could potentially have had life-threatening consequences. One subject forgot to order the appropriate antibiotics on the NF CDSS. This occurred after the subject appeared to become frustrated and flustered while scrolling up and down the PCO trying to order the patient’s weight-based medication dosing correctly. Another subject completely missed the antibiotic section in the NF Prototype. This could be a symptom of a lack of Visibility of System Status,16 i.e., there was no prompt to inform the user that something important lay “behind” a minimized subsection. Failure to order timely antibiotics in patients with impaired immune function from their cancer treatment could have fatal consequences.


Discussion:

As electronic health records enter more and more clinical domains, evaluations and analyses of CDSS such as those described here will be critically important. Other industries have long incorporated rigorous testing and standardization into their design and implementation processes. It would be unthinkable for the airline industry to conduct the first trial of an airplane’s flight capabilities with real passengers, yet something not far off is currently occurring in health care, where systems are being rapidly implemented with no true understanding of their risks and benefits.

Campbell et al.19 report a study of types of unintended consequences of Computerized Physician Order Entry (CPOE) after doing focus group sessions and field observations in five hospitals that had successfully implemented CPOE. Unintended adverse consequences fell into nine major categories (in order of decreasing frequency): 1) more/new work for clinicians; 2) unfavourable workflow issues; 3) never-ending system demands; 4) problems related to paper persistence; 5) untoward changes in communication patterns and practices; 6) negative emotions; 7) generation of new kinds of errors; 8) unexpected changes in the power structure and; 9) overdependence on the technology. Clinical decision support features introduced many of these unintended consequences. The authors concluded that identifying and understanding the types (and, in some instances, the causes) of unintended adverse consequences associated with CPOE will enable system developers and implementers to better manage implementation and maintenance of future CPOE projects.

Using a usability engineering approach, others have shown that particular usability problems were associated with the occurrence of error in entering medications.6,7 For example, the problem of inappropriate default values automatically populating the screen of the system under study was closely related to errors in entering wrong dosages of medications. Furthermore, it was found that certain types of errors were associated with “mistakes” (not detected by users) while others were associated with “slips” often consisting of unintentional errors. They found that subjects only recognized 50% of the mistakes that they made. The authors note that the high incidence of errors and the inability of users to notice them is food for thought as the health care system moves quickly towards implementing similar systems.

It is important to recognize that the CDSS used for this study are research-based tools with design limitations that the original designers were aware of and working to address. In addition, certain limitations (for example the list of available drugs) were out of the control of the software designers and in the hands of external committees. Nevertheless, we noted that the usability of a CDSS graphical user interface could lead directly to mistakes by clinician end-users. On one interface, there were a series of choices that could be clicked to delineate what parts of the physical examination were positive. Absence of the desired category led users to choose options that were similar but not identical. It seemed that the users felt compelled to click on the interface to give some sort of answer rather than leaving it blank if the exact answer they were looking for was not present. If such a system were implemented widely, it could lead to incomplete information being disseminated. In other cases, users experienced frustration when the interface allowed them to leave parts incomplete without flagging them, which could lead to serious consequences. If there is no feedback to a user that a form or order set is incomplete, work pressure may cause them to fail to correct the oversight.
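The last point suggests a simple design remedy: require every mandatory section of an order set to be explicitly addressed before submission, and surface what is missing to the user. The sketch below is a hypothetical illustration of such a completeness check, not part of the studied systems; the section names are invented.

```python
# Hypothetical completeness check for a CDSS order form: required sections
# must hold a non-empty value before the form can be submitted, so silent
# blanks are flagged back to the user rather than quietly accepted.

REQUIRED_SECTIONS = ["vitals", "antibiotics", "investigations"]

def incomplete_sections(form):
    """Return the required sections the user has not filled in."""
    missing = []
    for section in REQUIRED_SECTIONS:
        value = form.get(section)
        if value is None or value == "":
            missing.append(section)
    return missing

# An antibiotics section left blank (the mistake observed in the NF tool)
# would be caught here instead of slipping through unnoticed.
form = {"vitals": {"hr": 110}, "antibiotics": "", "investigations": None}
missing = incomplete_sections(form)
# missing == ["antibiotics", "investigations"]
```

Surfacing such a list at submission time addresses the Visibility of System Status heuristic16 that the missed-antibiotics mistake was attributed to.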


Conclusion:

This study provides a framework for evaluating CDSS applications in a clinical environment and has identified specific areas for improvement in the applications studied. Furthermore, the number of interface issues identified in this relatively small sample that could lead directly to adverse medical events gives pause for thought about the potential for similar undocumented problems in other clinical applications currently in use or being developed for implementation. End-users must demand to be involved in rigorous testing of healthcare CDSS before implementation. Application of usability engineering principles can help identify interface problems that may lead to medical adverse events, and these principles need to be incorporated early in the design phase so that problems can be corrected while there is still time and it is cost-effective to do so. Such techniques should be applied widely to ensure the systems we create do not inadvertently lead to medical errors.


References:

1. Handler JA, Feied CF, Coonan K, et al. Computerized physician order entry and online decision support. Acad Emerg Med. 2004 Nov;11(11):1135–1141. [PubMed]
2. Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA. 1998 Oct 21;280(15):1339–1346. [PubMed]
3. Ornstein C. Hospital heeds doctors, suspends use of software. Los Angeles Times. 2003 Jan 22;:B1.
4. Han YY, Carcillo JA, Venkataraman ST, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2005 Dec 1;116(6):1506–1512. [PubMed]
5. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA. 2005 Mar 9;293(10):1197–1203. [PubMed]
6. Kushniruk AW, Triola MM, Borycki EM, Stein B, Kannry JL. Technology induced error and usability: the relationship between usability problems and prescription errors when using a handheld application. Int J Med Inform. 2005 Aug;74(7–8):519–526. [PubMed]
7. Kushniruk A, Triola M, Stein B, Borycki E, Kannry J. The relationship of usability to medical error: an evaluation of errors associated with usability problems in the use of a handheld application for prescribing medications. Medinfo. 2004;11(Pt 2):1073–1076. [PubMed]
8. Leape LL, Lawthers AG, Brennan TA, Johnson WG. Preventing medical injury. QRB Qual Rev Bull. 1993 May;19(5):144–149. [PubMed]
9. Bates DW, Gawande AA. Improving safety with information technology. N Engl J Med. 2003 Jun 19;348(25):2526–2534. [PubMed]
10. Bates DW, Leape LL, Cullen DJ, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA. 1998 Oct 21;280(15):1311–1316. [PubMed]
11. Evans RS, Pestotnik SL, Classen DC, et al. A computer-assisted management program for antibiotics and other antiinfective agents. N Engl J Med. 1998 Jan 22;338(4):232–238. [PubMed]
12. Obradovich JH, Woods DD. Users as designers: how people cope with poor HCI design in computer-based medical devices. Hum Factors. 1996 Dec;38(4):574–592. [PubMed]
13. Tierney WM. Improving clinical decisions and outcomes with information: a review. Int J Med Inform. 2001 Jun;62(1):1–9. [PubMed]
14. Nielsen J, Loranger H. Prioritizing Web Usability. Berkeley, CA: New Riders; 2006.
15. Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform. 2004 Feb;37(1):56–76. [PubMed]
16. Nielsen J. Usability Engineering. New York: Academic Press; 1993.
17. Rosson MB, Carroll JM. Usability Engineering. New York: Morgan Kaufmann Publishers; 2002.
18. Fine MJ, Auble TE, Yealy DM, et al. A prediction rule to identify low-risk patients with community-acquired pneumonia. N Engl J Med. 1997 Jan 23;336(4):243–250. [PubMed]
19. Campbell EM, Sittig DF, Ash JS, Guappone KP, Dykstra RH. Types of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2006 Sep-Oct;13(5):547–556. [PMC free article] [PubMed]
