AMIA Annu Symp Proc. 2012; 2012: 979–987.
Published online 2012 Nov 3.
PMCID: PMC3540572
PMID: 23304373

The Orderly and Effective Visit: Impact of the Electronic Health Record on Modes of Cognitive Control

Charlene Weir, PhD, RN,1,2,4 Frank A Drews, PhD,2,5 Molly K Leecaster, PhD,2,3 Robyn J Barrus, MS,2 James L Hellewell, MD,2,4 and Jonathan R Nebeker, MS, MD1,2,3

Abstract

The clinical Joint Cognitive System (JCS) includes the clinicians, the electronic health record (EHR), and the other infrastructure that together maintain control in the system in the service of accomplishing clinical goals. The purpose of this study is to examine the relationship between levels of control under the COCOM model (scrambled, opportunistic, tactical, and strategic) and patterns of EHR use. Forty-five primary care visits were observed and audio-recorded. Each was coded for COCOM level of control (IRR = 90%). Screen changes were recorded and time-stamped (as either searching or entering). Level of control was significantly related to preparation intensity (F(2,23) = 6.62; p = 0.01) and to the number of screen changes involved in both searching (F(2,30) = 6.54; p = 0.004) and entering information (F(2,22) = 9.26; p = 0.001). Combined with the qualitative data, this pattern of EHR usage indicates that the system as designed may not provide effective cognitive support.

Introduction

The clinical healthcare environment is a “Joint Cognitive System” (JCS) that includes the provider, the electronic health record, auxiliary staff, work processes, and other infrastructure.1,2 Together, these components of the JCS function to maintain control and support performance toward accomplishing clinical goals. The electronic health record (EHR) is a key component of the JCS, providing patient-specific clinical information, serving as a platform for communication, and acting as a central repository for recording clinical events. From the JCS perspective, the computerized provider order entry (CPOE) system is an active partner in the process of care. Providers adapt their behavior to the limitations of the CPOE system and also alter the functions of the information system to better meet their own needs.3,4 These reciprocal adaptations and work-arounds are often effective in improving the performance of the JCS, but they may also come at a performance cost.

The degree to which an EHR improves clinical performance is under scrutiny. Several converging lines of evidence suggest significant limits on the degree to which the EHR can live up to expectations to improve care. First, several literature reviews on computerized decision support and patient outcomes have failed to find generalizable effects.5,6,7 Second, EHRs may actually decrease the performance of the JCS or cause harm. Recent publications have noted unanticipated negative consequences of CPOE,8 higher mortality rates after implementation of a CPOE system,9 and increased medication errors.10

Even in the absence of harm, research suggests that EHRs often fail to provide adequate cognitive support.11,12,13 Information that is organized only temporally and by category takes significant cognitive resources to absorb. Providers have to keep the task in mind while trying to identify relevant information within a very large pool of non-relevant and scattered information. They must organize the information to support a variety of active cognitive goals (e.g., decision-making, patient education, and/or documentation), and they must integrate it across multiple clinical problems. Most systems do not support this level of synthesis efficiently.13,14,15

Other authors have noted that the use of an EHR may change how providers use information and reason about clinical cases, how they organize medical knowledge, and how they interact with the patient.3 Others have observed the development of skillful information-management strategies to cope with overload and imprecision.4 Consensus continues to grow regarding the need for greater emphasis on understanding the basis of effective cognitive support in the medical record.

The purpose of this paper is to report an exploration of the association between JCS performance in outpatient visits and patterns of use of the electronic medical record. Although a number of studies examine the experience of using an EHR system, few link patterns of use with a measure of judged performance. We use Hollnagel and Woods’s contextual control model as the basis for evaluating performance.1,2

Contextual Control Model (COCOM)

COCOM is a cognitive engineering model defined by three main concepts: competence, control, and constructs. Competence is the capacity of the system, the set of possible actions, to meet the requirements of the situation. Constructs refer to the “information” embodied in the system, or what is known. Control refers to the “orderliness” of performance and is characterized by four control modes of increasing order. In the scrambled mode (the lowest form of control), minimal information is available, goals or problems are underspecified and considered one at a time, time horizons are limited, and decision heuristics are rarely used. In the opportunistic mode, goals are poorly defined, only current information is considered, uncertainty is not well recognized, and decision heuristics are strongly influenced by habits and pattern recognition. In the tactical mode, goals are more defined, past and current information is considered, uncertainty is recognized and strategies are in place to cope with it, but decision heuristics are rigidly based on guidelines. In the strategic mode, interactions among high-level goals are considered, timelines are much longer with significant planning and forecasting, and decision heuristics are flexibly adapted to the current context.2 Movement between the modes is a function of the quality of planning, which, in turn, is a function of the decisions made, the time available, and the inherent goals (see http://www.ida.liu.se/~eriho/COCOM_M.htm for a fuller explanation of the model).

We operationalized COCOM by using the following dimensions to assess control mode during the visit: 1) Goal interactions: the degree to which goals are adequately defined and integrated; specifying goals allows for feed-forward control and adequate “online” monitoring of performance. 2) Time horizon: the breadth of information considered, the conceptualization of change over time, and the adequacy of forecasting of anticipated future events. 3) Assessment of uncertainty: how well the limitations of available information are recognized and explained, and whether solutions are planned to adjust for them. 4) Decision heuristics: the degree to which the action plan is customized to the current situation.
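As a concrete illustration, the sketch below renders the four control modes and the four coded dimensions as a small Python data structure, together with the visit-level mode average used later in the analysis. The schema and field names are illustrative only; the study’s actual coding instrument was built in Excel (see Methods).

```python
from dataclasses import dataclass
from enum import IntEnum

class ControlMode(IntEnum):
    """COCOM control modes, in increasing order of control."""
    SCRAMBLED = 1
    OPPORTUNISTIC = 2
    TACTICAL = 3
    STRATEGIC = 4

@dataclass
class ConditionCoding:
    """Coding of one clinical condition within a visit (field names
    are illustrative, not the study's instrument)."""
    goal_interactions: int  # degree to which goals are defined and integrated
    time_horizon: int       # breadth of information and adequacy of forecasting
    uncertainty: int        # recognition of, and coping with, uncertainty
    heuristics: int         # customization of the action plan to the situation
    mode: ControlMode       # overall mode judgment for this condition

def visit_mode(codings: list[ConditionCoding]) -> float:
    """Average mode across all conditions coded in a visit; this
    visit-level average was the rating used in the analysis."""
    return sum(c.mode for c in codings) / len(codings)
```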

Although this study is exploratory, we hypothesized that visits rated at higher levels of control would be associated with more intensive planning prior to the visit and with less searching behavior in the computerized record during the visit. Other studies have noted variations in how computers are used during the visit, but they did not relate patterns of use to performance.16,17

Methods

Design

This study was part of a larger study focused on designing and developing an effective system of cognitive support for the electronic health record. For this portion of the study, a descriptive, observational design was employed. The authors judged this approach the most appropriate way to efficiently gather information about providers’ mental models of medication management, the goals associated with ordering and information-search behavior, and the information needs relevant to generating hypotheses. In prior work, we had significant difficulty getting unbiased responses from providers by asking them directly.

Settings

VA medical centers were randomly sampled, stratified by geographical distribution, size, and academic affiliation (i.e., the presence of resident training programs). Potential site PIs were identified and asked whether they would like to volunteer to participate. Sites were identified and recruited until four sites, somewhat evenly distributed by size, location, and presence or absence of resident training, had agreed to participate. In addition to the main site, Salt Lake City UT, the participating sites were Asheville NC, West Haven CT, Seattle WA, and American Lake WA. Each participating provider selected a patient scheduled to be seen that day who had hypertension or was considered complex. The research assistant sat with the provider while the provider prepared for the visit and during the visit itself. All interactions (including the visit) were audio-recorded, and the recordings were transcribed.

VA Electronic Medical Record

The VA’s Computerized Patient Record System (CPRS) is an integrated system covering both inpatient and outpatient clinical areas, with computerized provider order entry (CPOE), electronic documentation, and decision support. Progress notes and procedure results can be printed, but printouts are usually not used.

Participants

Within each site, up to 10 provider participants were selected based on their staffing in primary care outpatient clinics. Eight to ten primary care providers were recruited at each site: West Haven (N=9), Asheville (N=9), Salt Lake City (N=9), American Lake (N=8), and Seattle (N=10), for a total of 45 providers. Provider participants were recruited by email and word of mouth; employees’ supervisors were not asked to recruit participants. Additional selection criteria included being a prescribing provider and having at least one year of experience in the VA. Patient involvement was based on provider participation and the presence of hypertension. Once a provider volunteered, one of his or her patients who met the criterion was approached, invited to participate in the study, and consented.

A description of the participating providers included in this analysis is presented in Table 1. Clinical pharmacists and a nurse case manager were excluded because the focus was on the primary care visit itself and how the provider treated the disease(s) in question; the nurse case manager was instead involved in a discussion about information needs. One nurse practitioner was excluded because the visit did not mention any of the five target diseases. In addition, in several cases the tablet data were incomplete because of researcher error, leaving 33 transcripts for inclusion in this study.

Table 1.

Description of participants

Site | Attending | Resident | Physician Assistant | Nurse Practitioner
1 | 3 | 2 | 2 |
2 | 6 | 1 | 1 |
3 | 3 | 1 | |
4 | 5 | 3 | |
5 | 5 | 1 | |

Data Collection Procedures

Observations

Primary care provider participants were observed and audio-recorded in three phases by a social psychologist/RN (Weir) and a trained research assistant. The three phases consisted of: 1) observing the provider preparing for the visit (including how they used the computer) while they thought aloud; 2) sitting in on the visit, which was audio-recorded with no questions asked; and 3) a short follow-up interview to collect data on how busy the provider felt and how complex they viewed the patient to be.

Phase 1: During this phase providers were observed while they prepared for the patient visit using a “think aloud” protocol. In all cases, they used the electronic medical record system (CPRS) as part of the review. The researcher asked the participant, “How are you doing on time right now? Are you on time to see this patient, behind or ahead? If behind or ahead, by how many minutes?” Providers were then instructed to “Please think aloud what is in your mind while you do this task. Please indicate where you are in CPRS, what tab you are on, what you are doing and what information you are attending to at each step. Also let us know when you are entering orders or creating notes. Please be as descriptive and explicit as possible. There are no right or wrong things to say.”

If they had prepped for the patient earlier in the day and/or set up their note beforehand, they were asked to describe what they did and how they did it. In order to identify information-management tasks and key decision goals, they were prompted when necessary, for example: “Please tell me what you’re doing explicitly during the templating and updating process. You’re doing this in order to do what? And you’re doing that by doing what?” This phase lasted around 5–10 minutes.

Phase 2: Phase 2 consisted of the patient visit itself. The task of observing the visits was divided between two research observers; only one observer attended any given visit. The observer sat silently in the room and started the audio recorder and the PC tablet data-collection program, a data-entry interface built in Microsoft Access to code the provider’s behavior and record observation notes in real time. Every provider had a computer in the patient exam room. Whenever the provider changed screens, the location and time were noted with a single click, along with the apparent purpose (entering notes or orders vs. searching for information). Possible screen categories included notes, orders, medications, consults, and labs/procedure results. The observer could also note when the provider was talking to or examining the patient and when the observer left the room to give the patient and provider privacy. The observer always positioned herself close enough to the screen to identify what the provider was doing and when screen changes occurred. Qualitative notes and comments regarding the visit were also recorded on the tablet in narrative form.
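The following minimal Python sketch illustrates the kind of event stream this protocol produced; the actual tool was a Microsoft Access interface, and the names and categories below are a hypothetical rendering of the protocol just described.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Screen categories and purposes taken from the observation protocol above.
SCREENS = {"notes", "orders", "medications", "consults", "labs_results", "cover_page"}
PURPOSES = {"searching", "entering"}

@dataclass
class ScreenEvent:
    timestamp: datetime
    screen: str   # which CPRS screen the provider changed to
    purpose: str  # apparent purpose: "searching" or "entering"

@dataclass
class VisitLog:
    events: list = field(default_factory=list)

    def record(self, screen: str, purpose: str) -> None:
        """Log one time-stamped screen change (one click in the tool)."""
        assert screen in SCREENS and purpose in PURPOSES
        self.events.append(ScreenEvent(datetime.now(), screen, purpose))

    def proportion(self, purpose: str) -> float:
        """Share of all screen changes with a given purpose; these
        proportions are the quantities analyzed in the Results."""
        return sum(e.purpose == purpose for e in self.events) / len(self.events)
```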

Phase 3: A short semi-structured interview was conducted after the visit. The provider was asked questions about how they managed hypertension (or the main problem addressed during the visit) and about aspects of their decision-making and information-gathering, as well as structured questions about how well the provider knew the patient, how many patients they would see that day, and so on. This discussion lasted about 10–15 minutes.

Qualitative Notes

At all stages of observation, qualitative notes were collected and transcribed for later analysis. These notes were not extensive (time rarely allowed), but they were useful in interpreting the quantitative data.

Data Coding Procedures

The COCOM theory was adapted to medicine, and the adapted version served as the underlying basis for a quantitative coding instrument (built in Excel) used to code the transcribed visits and interviews. The instrument captured specific behaviors such as the number of data points the provider considered in assessing blood pressure over time, the goals of care identified, the integration of goals across clinical processes, the breadth of data included in decision making, and the forecasting of future expectations. One coding sheet with these behaviors was created for each of five diseases: congestive heart failure, coronary artery disease, diabetes, depression, and hypertension. Of the 33 transcripts containing complete interviews about one of the five diseases, hypertension was the main disease in 29 visits, two visits focused on diabetes, two on congestive heart failure, and none on coronary artery disease or depression.

Seven coders were divided into expert (clinician) and non-expert (non-clinician) groups based on level of clinical expertise. The expert group consisted of one primary care physician, one family practice physician, one infectious disease physician, and one pharmacist. The non-expert group consisted of three non-clinical study team members: two at the master’s level and one at the PhD level.

Multiple training sessions were conducted for both groups to ensure that each coder understood the categories and concepts in the instrument. Each coder first coded a pilot transcript, and percent agreement was calculated across both groups. The groups were then brought together to discuss the coding, answer questions, clear up misunderstandings, and provide further training as needed. Next, all coders were assigned a full visit transcript; percent agreement was again computed, and another meeting was held for discussion and additional training as needed. At this point percent agreement was high enough (>80%) for coders to begin coding individually.

Each coder was randomly assigned transcripts to code using the “=randbetween()” function in Excel. One non-clinician coder had also conducted more than half of the observations; she was assigned only transcripts from observations that she did not conduct, and as a result coded fewer transcripts than the other two non-clinician coders. To ensure reliability, the non-clinicians coded in rounds: each was randomly assigned three separate transcripts, and when those were coded, all three coders were assigned the same fourth transcript, from which percent agreement was computed. If percent agreement was too low (<80%), the three non-clinicians met to discuss their coding, reach agreement, and receive further training. This cycle was conducted three times. In the end, the three non-clinicians each coded between nine and fourteen transcripts.
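A minimal sketch of the percent-agreement check used between coding rounds; the study’s computations were done in Excel, and this Python version is purely illustrative.

```python
def percent_agreement(coder_a: list, coder_b: list) -> float:
    """Percentage of items on which two coders assigned the same code;
    the study required agreement above 80% before independent coding."""
    assert coder_a and len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Two coders agreeing on 3 of 4 mode judgments -> 75.0
print(percent_agreement(
    ["tactical", "strategic", "opportunistic", "tactical"],
    ["tactical", "strategic", "opportunistic", "strategic"]))
```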

One of the four clinical coders was able to code only one transcript. The other three were each assigned one transcript from each site, for a total of 16 clinician-coded transcripts. To compare non-clinician and clinician coding, each clinician was paired with each non-clinician at least once.

After coding the provider behaviors noted above for one to two of the five diseases, the coder categorized the visit into one of the four control modes. For this study, the mode used for analysis was the average of the modes across conditions; the number of coded conditions addressed in a visit ranged from 1 to 4. Inter-rater reliability for mode categorization was established prior to coding (after multiple training sessions) and reached a kappa of 0.88.
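For the final mode categorization, chance-corrected agreement was reported as Cohen’s kappa. A standard implementation of the statistic, shown here for illustration rather than as the study’s code, is:

```python
from collections import Counter

def cohens_kappa(coder_a: list, coder_b: list) -> float:
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each coder's marginal code frequencies."""
    n = len(coder_a)
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance)
```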

Results

Descriptive

Mode ratings were averaged over all clinical conditions to provide a single rating for the visit. Most visits were rated as opportunistic (n=15), followed closely by tactical (n=13); very few visits were rated overall as strategic (n=5). The overall mean number of screen changes was 18.5 per visit (searching + entering), with 60% being searching actions and 40% associated with entering notes or orders.

There was no significant difference between modes in years since graduation, familiarity with the patient, rated level of mental stress, or overall visit time (see Table 2).

Table 2.

Mean values across modes.

Measure | Opportunistic | Tactical | Strategic
Years* | 16 | 18 | 10
Role* | 11 | 9 | 11
Work Pressure* | 4.9 | 4.3 | 3.0
Time* | 27.4 | 26.3 | 28.7

* Not significant

Pre-planning Activity

The intensity and completeness of planning was coded on a scale from 1 (not well) to 7 (very well). Coding of level of planning was done by coders unaware of the mode categorization, to keep the two coding protocols independent. The overall mean across all 33 providers included in this analysis was 4.9. The hypothesis that the intensity of pre-visit planning would differ across performance modes was supported by a statistically significant ANOVA (F(2,23) = 5.64; p = 0.01), and significance remained after controlling for years of experience (F(2,23) = 6.62; p = 0.01). Figure 1 displays the means by mode.

Figure 1. Pre-planning mean levels by mode.
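The analysis pattern used here and in the following sections, a one-way ANOVA followed by the same contrast with a covariate, can be reproduced with standard tools. The sketch below uses statsmodels with synthetic stand-in data, not the study’s data.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Synthetic stand-in data: one row per visit (NOT the study's data).
df = pd.DataFrame({
    "mode":     ["opportunistic"] * 5 + ["tactical"] * 5 + ["strategic"] * 5,
    "planning": [3, 4, 4, 5, 3, 5, 5, 6, 4, 5, 6, 7, 6, 7, 6],
    "years":    [16, 18, 15, 17, 14, 18, 20, 17, 19, 16, 10, 9, 11, 12, 8],
})

# One-way ANOVA of planning intensity across control modes ...
print(anova_lm(smf.ols("planning ~ C(mode)", data=df).fit()))
# ... and the same contrast controlling for years since graduation.
print(anova_lm(smf.ols("planning ~ years + C(mode)", data=df).fit()))
```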

Searching Activity

The proportion of computer interactions in which the provider searched for information was used for this analysis. This category included searching for information on any screen. Typically the medication list, labs, and notes screens were accessed, although occasionally other screens were selected, such as the cover page (where other appointments were listed) or even links to other VA facilities. Differences between modes were statistically significant, with higher levels of performance associated with a lower incidence of search activity (F(2,30) = 6.54; p = 0.004). Figure 2 displays the means.

Figure 2. Proportion of computer interactions associated with searching.

However, the difference between modes appeared to be driven by large differences between the opportunistic and strategic modes rather than by differences between the tactical and strategic modes. The difference remained statistically significant after controlling for pre-planning activity (F(2,22) = 4.2; p = 0.03).

Entering Notes and Orders

The proportion of computer screen changes used to enter information was used for this analysis. This category included ordering (medications, labs, etc.) and entering progress notes (which often included clinical reminders). Figure 3 displays the means. The differences were statistically significant in an ANOVA (F(2,22) = 9.26; p = 0.001) and remained significant after controlling for pre-visit preparation (F(2,22) = 4.9; p = 0.02). Visits rated at a higher level of control had a higher proportion of computer interactions involving ordering.

Figure 3. Proportion of total screen changes associated with entering orders or notes.

Discussion

In this study, we found a systematic pattern of results showing that higher levels of performance are associated with different patterns of information-system use. Visits rated at a higher level of performance were more likely to be preceded by more intensive pre-visit planning. Providers whose visits were rated as opportunistic used the computer much more extensively during the visit for searching. In contrast, visits rated at a high level of performance tended to have many more screen changes for ordering purposes.

These quantitative results were congruent with the qualitative observations. In the most well-organized visits, providers tended to assemble a new progress note prior to the visit that listed all previous activities, lab values in chronological order, and results of procedures (cut and pasted into the note). During the visit, they simply had to glance at the note to recall the different lab values and their relationship over time. As a result, the visit appeared much more orderly and integrative. In contrast, when clinicians had to stop the conversation to look something up, their focus went to the computer and they literally started talking about where they could find the information, often going through many screens in rapid succession. Although direct evidence of how an extensive search interrupted the visit or the provider’s thought processes was not available, these transcripts were rated as less strategic by independent raters. Most providers completed all of their orders, if possible, before the patient left. When asked about this pattern, they replied that they would otherwise likely forget, and that they invariably needed to discuss changes with the patient.

The JCS includes the patient as well as the provider, the computer, and all other distributed information. The complexity of the patient was treated as part of the visit itself. Coding was done on transcripts of the dialogue between the patient and the provider, as well as on the interviews and observations. In that sense, the visit had to be considered a whole: a unit of analysis comprising the patient, the provider, and the transaction between the two.

Implications for Cognitive Support

The failure of many information systems to provide adequate cognitive support was noted in a recent National Academies report: “IT applications appear designed largely to automate tasks or business processes. They are often designed in ways that simply mimic existing paper-based forms and provide little support for the cognitive tasks of clinicians or the workflow of the people who must actually use the system.” (p. 3)13

Interpreted in light of the qualitative findings, the results presented here suggest that the system lacks effective cognitive support. The EHR offers little support for finding and integrating information. Providers operating at higher modes of control had to work around the EHR by imposing their own order on information: they used mental (as opposed to electronic) templates of the information they needed to guide and document relevant information prior to the patient encounter. Other factors, such as years of experience or time pressure, did not explain variation in mode of control.

The implication for EHR design is that, to facilitate higher modes of control in more providers with less effort, the EHR should offer affordances that gather relevant information about patient conditions and present it to the provider before he or she starts the face-to-face portion of the encounter.

There are also implications for training clinicians to use the computer more effectively. Organizing information is part of the preparation stage, and users adept at tracking and pasting information did well. However, the need for such training should be minimized if proper design and tools are incorporated into the EHR.

Limitations

The sample size in this study was small, although geographically diverse. The coding protocols for COCOM control modes are new and will have to be verified by future research. A cause-and-effect relationship between performance and patterns of EHR usage cannot be established from this design, as only associations were assessed. The mode categorization was based on human judgment and is therefore subject to bias; however, standardized training and classification protocols, with resulting high inter-rater reliability, suggest that bias was minimized and that the results can be reproduced using the same methods.

Conclusion

Information relevant to a patient encounter is scattered throughout the EHR. Providers spend significant time organizing information in the EHR, either by searching and sifting or by constructing pre-visit documents that organize the material in a useful format. When providers organize information prior to the visit, they achieve higher levels of control and conduct a more orderly encounter. When they attempt to collect and organize information during the visit, the visit is more disorderly. The mode of control of the visit is not explained by other factors such as experience or time pressure. The VA’s EHR, like most others available today, does little to help providers collect and organize pertinent information. These findings imply that utilities built into the EHR to help organize information before patient encounters are likely to improve healthcare processes.

Acknowledgments

This project was funded under contract/grant number R18 HS017186 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The opinions expressed in this document are those of the authors and do not reflect the official position of AHRQ or the U.S. Department of Health and Human Services.

APPENDIX 1: INTERVIEW QUESTIONS (selected for relevancy)

Before asking questions, instruct them: “I am going to ask you a series of questions. Again, I am interested in how physicians gather information about their patients. There is no right or wrong thing to say and all of your answers will be confidential – no names are kept. I am just interested in how you think about the process of gathering information. If any of these questions seem redundant and you’ve already answered the question, then you can just say that.”

For the purposes of the transcript, can you tell me this patient’s active problems and current medications? Feel free to refer to your notes. (Don’t ask if a structured summary of meds was already given.)

Now we will talk about the patient’s problems. We will first talk about hypertension (if no HTN, the most important problem). If we have enough time, we will review a second problem.

PROBLEM-SPECIFIC QUESTIONS (may repeat for second problem)

Could you please summarize how you are managing this problem for this patient?

Can you elaborate on the different factors, steps, or components you have to consider when thinking about managing this problem? I am looking for information such as social and administrative issues, pros and cons of a particular intervention, and when you consider assigning tasks to other people to help manage the problem.

For this problem, what would you say you are trying to achieve, what are your goals?

(If clinical goals not mentioned) Can you elaborate on the goals for achieving clinical outcomes?

How do the goals for this problem interact with the patient’s other medical problems?

(If this is not a new problem) Are you achieving your goals and how did you decide that?

(If hypertension) What is the metric or indicator you track for this patient? For example, diastolic versus systolic, home versus clinic, nurses’ measurements versus your own, etc.

(If hypertension) If you have other goals related to hypertension management, what do you explicitly track to evaluate those goals?

(If not hypertension) An example of an explicit metric for hypertension is office measurement of systolic blood pressure. Is there any way you explicitly measure progress to know whether you have achieved your clinical goals for this problem? What is the metric or indicator you track?

(If thresholds not given) So let’s say you have feedback on (this metric) _________ and the other metrics you may have mentioned. What are the values (upper and lower limits) that would indicate you are not achieving your goals for this patient’s problem? This may not be so-called “normal limits.”

You have talked about the information needed for follow-up. Now talk about the plan for obtaining and evaluating this information. Please provide explanations for your choices.

If you made changes in the treatment plan today for this problem, please briefly summarize those changes again and explain why you made them. If there were no changes, why not?

Based on your understanding of this patient’s problem, what do you expect to happen with each of your goals (restate if necessary) and within what time frame?

Some people use nationally accepted protocols, some people use their own based on their clinical expertise and understanding of the literature, and some don’t use a protocol. Do you have a standard protocol for this problem? Briefly, what is it? If you used a protocol, how well does the protocol you used apply to this patient’s problem?(i)

Not Well    1    2    3    4    5    6    7    8    9    Very Well

Did you have any problems with the information being inadequate or unreliable? What were they?

How did you deal with uncertainty caused by inadequate or unreliable information when making your treatment plan?

(i) Looking for adaptation of decision strategy. Characteristic: decision strategy.

References

1. Hollnagel E, Woods DD. Joint cognitive systems: foundations of cognitive systems engineering. Taylor & Francis; 2005. [Google Scholar]
2. Woods DD, Hollnagel E. Joint Cognitive Systems: Patterns in Cognitive Systems Engineering. Boca Raton, FL: Taylor & Francis; 2006. [Google Scholar]
3. Patel VL, Kushniruk AW, Yang S, Yale JF. Impact of a computer-based patient record system on data collection, knowledge organization, and reasoning. J Am Med Inform Assoc. 2000;7(6):569–85. [PMC free article] [PubMed] [Google Scholar]
4. Weir CR, Nebeker JJ, Hicken BL, Campo R, Drews F, Lebar B. A cognitive task analysis of information management strategies in a computerized provider order entry environment. J Am Med Inform Assoc. 2007 Jan-Feb;14(1):65–75. [PMC free article] [PubMed] [Google Scholar]
5. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10):742–52. [PubMed] [Google Scholar]
6. Garg AX, Adhikari NK, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005;293(10):1223–38. [PubMed] [Google Scholar]
7. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med. 2003;163(12):1409–16. [PubMed] [Google Scholar]
8. Ash JS, Sittig DF, Poon EG, Guappone K, Campbell E, Dykstra RH. The extent and importance of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2007;14:415–23. [PMC free article] [PubMed] [Google Scholar]
9. Han YY, Carcillo JA, Venkataraman ST, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2005;116(6):1506–12. [PubMed] [Google Scholar]
10. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA. 2005;293(10):1197–203. [Google Scholar]
11. Rosenbloom ST, Geissbuhler AJ, Dupont WD, et al. Effect of CPOE user interface design on user-initiated access to educational and patient information during clinical care. J Am Med Inform Assoc. 2005;12:458–73. [PMC free article] [PubMed] [Google Scholar]
12. Weir CR, Nebeker JR. Critical issues in an electronic documentation system. AMIA Annu Symp Proc; 2007. pp. 786–90. [PMC free article] [PubMed] [Google Scholar]
13. Stead WW, Lin HS. Computational technology for effective health care: immediate steps and strategic directions. Washington, D.C.: National Academies Press; 2009. [Google Scholar]
14. Berg M. The search for synergy: interrelating medical work and patient care information systems. Methods Inf Med. 2003;42(4):337–44. [PubMed] [Google Scholar]
15. Horsky J, Zhang J, Patel VL. To err is not entirely human: complex technology and user cognition. J Biomed Inform. 2005;38(4):264–6. [PubMed] [Google Scholar]
16. Zheng K, Padman R, Johnson MP, et al. An interface-driven analysis of user interactions with an electronic health records system. J Am Med Inform Assoc. 2009;16(2):228–37. [PMC free article] [PubMed] [Google Scholar]
17. Linder JA, Schnipper JL, Tsurikova R, Melnikas AJ, Volk LA, Middleton B. Barriers to electronic health record use during patient visits. AMIA Annu Symp Proc; 2006. pp. 499–503. [PMC free article] [PubMed] [Google Scholar]
