NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Committee on Measuring the Impact of Interprofessional Education on Collaborative Practice and Patient Outcomes; Board on Global Health; Institute of Medicine. Measuring the Impact of Interprofessional Education on Collaborative Practice and Patient Outcomes. Washington (DC): National Academies Press (US); 2015 Dec 15.


Appendix A
Review: Measuring the Impact of Interprofessional Education (IPE) on Collaborative Practice and Patient Outcomes



Although the complexity of patient care demands that health care teams collaborate effectively, there remains a paucity of high-quality research measuring the impact of interprofessional education (IPE) on practice processes and patient outcomes. A recent Cochrane review identified a total of 15 articles published between 1999 and 2011 that met its stringent methodologic criteria for inclusion (Reeves et al., 2013). While those studies did provide evidence that IPE interventions can produce positive outcomes, a need remains to identify best practices for research that effectively links IPE interventions with measurable changes in practice processes and patient outcomes.


The two objectives of this review are to

  • examine the best currently available methods for measuring the impact of IPE on collaborative practice, patient outcomes, or both; and
  • describe the challenges to conducting high-quality research that seeks to link IPE interventions with measurable changes in practice and patient outcomes.


This review focuses on studies reviewed in the Reeves and colleagues (2013) Cochrane review, and on any national and international studies published from January 2011 to July 2014.

Criteria for Considering Studies for This Review

Types of Studies

This review includes randomized controlled trials (RCTs), controlled before-and-after (CBA) studies, interrupted time series (ITS) studies, and uncontrolled before-and-after (BA) studies.

Types of Participants

This review includes various types of health care professionals (physicians, dentists, chiropractors, midwives, nurses, nurse practitioners, physical therapists, occupational therapists, respiratory therapists, speech and language therapists, pharmacists, technicians, psychotherapists, and social workers).

Types of Interventions

As defined by Reeves and colleagues (2013, p. 5), “An IPE intervention occurs when members of more than one health or social care (or both) profession learn interactively together, for the explicit purpose of improving interprofessional collaboration or the health/well-being (or both) of patients/clients. Interactive learning requires active learner participation, and active exchange between learners from different professions.”

Types of Outcome Measures

Outcome measures include

  • objectively measured patient/client outcomes (disease incidence; morbidity, mortality, readmission, and complication rates; length of stay; patient/family satisfaction);
  • objectively measured health care process measurements (changes in efficiency [resources, time, cost]; teamwork; approach to patient care or follow-up); and
  • subjective self-reported outcomes, included only when objective measures were also reported.

Search Methods

For this review, the following search methods were used:

  • A search was conducted of Ovid, PubMed, and CINAHL (Cumulative Index to Nursing and Allied Health Literature) via MeSH (Medical Subject Headings) terms “Interprofessional education AND (Cochrane terms OR Quality OR Clinical Outcomes OR Patient Outcomes OR Cost Benefit OR Quality OR Patient Safety OR Patient Satisfaction OR Provider Satisfaction OR Morbidity)” from January 2011 to the present.
  • A keyword search from PubMed using “interprofessional education” or “team training” in the title/abstract (limit 2008-July 2014) was also conducted.
  • Articles were hand-pulled from the Reeves et al. (2013) Cochrane review.
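The PubMed leg of the strategy above can be reproduced programmatically. The sketch below builds (but does not execute) an NCBI E-utilities esearch request approximating the title/abstract keyword search; the helper name and the exact field tags are illustrative assumptions, not the query string the review authors actually ran.

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint (public; light use requires no API key)
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_search(term, mindate, maxdate):
    """Construct an esearch URL restricted by publication date.

    Hypothetical helper for illustration; it only builds the URL so the
    query can be inspected before being sent.
    """
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",   # filter on publication date
        "mindate": mindate,
        "maxdate": maxdate,
        "retmax": 200,
        "retmode": "json",
    }
    return f"{EUTILS_ESEARCH}?{urlencode(params)}"

# Approximation of the keyword search described above (2008 to July 2014)
url = build_pubmed_search(
    '"interprofessional education"[Title/Abstract] OR "team training"[Title/Abstract]',
    mindate="2008/01",
    maxdate="2014/07",
)
```

Requesting the resulting URL returns a JSON list of matching PMIDs, which could then be deduplicated against the Ovid and CINAHL results before abstract screening.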

Data Collection and Analysis

Two of the review authors (EKP and JKM) jointly reviewed 2,347 abstracts retrieved by the searches to identify all those that indicated

  • an IPE intervention was implemented;
  • health care clinicians of various backgrounds were trained; and
  • patient outcomes (patient safety, patient satisfaction, quality of care, cost, clinical outcomes, community health outcomes, etc.) and/or provider outcomes (provider satisfaction, measures of collaborative practice, communication) were reported.

Abstracts were excluded if

  • the interprofessional intervention lacked a concrete educational component;
  • interprofessional activities involved only students;
  • learning outcomes were the only outcomes measured; or
  • reported outcomes included only feelings, beliefs, attitudes, or perceptions.

Forty-seven studies were identified from the abstract search as potentially meeting these inclusion criteria. The full text of each of these articles as well as each of the 15 articles pulled from the Cochrane review was independently reviewed by three of the review authors (EKP, JKM, VLB). An appraisal form was developed specifically for this review that evaluated the studies for

  • type of study (RCT, CBA, ITS, or BA study with historical control, contemporaneous control, or no control);
  • outcome measures;
  • outcome tool;
  • sample size and composition;
  • setting;
  • type of IPE intervention; and
  • findings (a brief overview of findings is included in a detailed table in the annex at the end of this appendix, but findings are not discussed as part of this review, which is focused on methodology).

These data were entered into a spreadsheet, and any disagreements and uncertainties were resolved by discussion. These studies were then given an overall rating based on the following definitions:

X: Study did not meet inclusion criteria
LEVEL I: RCT or experimental study
LEVEL II: Quasi-experimental (no manipulation of independent variable; may have random assignment or control)
LEVEL III: Nonexperimental (no manipulation of independent variable; includes descriptive, comparative, and correlational studies; uses secondary data)
LEVEL III: Qualitative (exploratory [e.g., interviews, focus groups]; starting point for studies where little research exists; small sample sizes; results used to design empirical studies)

The following descriptions were used as general guidelines for quality ratings (the A/B/C grades referenced later, e.g., IIB, IIC):

A (High Quality)

  • Consistent, generalizable results
  • Sufficient sample size
  • Adequate control
  • Definitive conclusions
  • Consistent recommendations based on a comprehensive literature review that includes thorough reference to scientific evidence

B (Good Quality)

  • Reasonably consistent results
  • Sufficient sample size for the study design
  • Some control
  • Fairly definitive conclusions
  • Reasonably consistent recommendations based on a fairly comprehensive literature review that includes some reference to scientific evidence

C (Low Quality)

  • Little evidence with inconsistent results
  • Insufficient sample size for study design
  • Conclusions cannot be drawn


In addition to the 15 studies from the Cochrane review, 24 additional studies met all criteria and were included in this review. Table A-1 presents an overview of the results of the review.

TABLE A-1. Overview of Results.



Study Types

Randomized Controlled Trials: Three new RCTs (Hoffmann et al., 2014; Nurok et al., 2011; Riley et al., 2011) were added to the seven RCTs described in the 2013 Cochrane review (Reeves et al., 2013). These three studies suffered from many of the same methodologic limitations noted for the studies discussed in the Cochrane review, such as the lack of concealed allocation, inadequate blinding in the assessment of outcomes, and evidence of selective outcome reporting. These studies were also characterized by additional sources of error that are common in evaluating educational programs (Sullivan, 2011), including differences in the quality of the education intervention (e.g., type of learners trained, variation in learner and instructor experience and training) and difficult-to-measure endpoints.

Controlled Before-and-After Studies: No new CBAs were added during this review. As described in the 2013 Cochrane review, the CBAs were characterized by many of the same limitations described for RCTs, except that there was often a better-documented effort to ensure that baseline characteristics of the intervention and control groups were similar.

Interrupted Time Series Studies: One additional ITS (Pettker et al., 2009) was added to those listed in the 2013 Cochrane review. The primary strength of this study was the documentation of long-term changes in outcomes. There was also a sequential introduction of interventions in an effort to isolate the effect of the IPE intervention from numerous other practice changes introduced during the study period. However, while the trend in outcomes was calculated on a monthly basis, it is not clear from the analysis whether the team training alone significantly affected outcome trends.

Before-and-After Studies: The 20 BA studies that were included in this review were carefully chosen for having used credible research methods based on our rating scale (i.e., IIB or IIC, as defined earlier). These studies were highly diverse in their outcome measures, measurement tools, setting, number and composition of participants, presence of historical controls, and type and quality of IPE interventions. Two BA studies that were rated IIC were included because of the quality of their design, but their interpretation of the results went beyond what the data could support (Capella et al., 2010; Pingleton et al., 2013). One study rated IIC was included because it was conducted in an unusual but important care setting (Lang et al., 2010).

Outcome Measures

Studies chosen for inclusion in this review reported objective and measurable outcomes. Patient outcome measures addressed many important issues in care quality, such as number of adverse events, specific indices of disease progression, length of stay, improvement in symptoms, morbidity, and mortality, most often derived from review of clinical databases before and after the IPE intervention. Two studies assessed provider-with-patient communication skills (Brown et al., 1999; Helitzer et al., 2011). Only four studies measured patient satisfaction (Banki et al., 2013; Brown et al., 1999; Campbell et al., 2001; Morey et al., 2002), and one measured family satisfaction (Shaw et al., 2014).

Practice outcome measures most often addressed clinical decision making, behaviors related to patient safety, care efficiency, error reporting, adherence to guidelines, use of checklists, organization of care, and specific care competencies. Nine studies included objective observation of teamwork skills in the actual delivery of care (Bliss et al., 2012; Capella et al., 2010; Halverson et al., 2009; Mayer et al., 2011; Morey et al., 2002; Nurok et al., 2011; Patterson et al., 2013; Steinemann et al., 2011; Weaver et al., 2010), and two studies reported observed team behaviors in the simulated setting in addition to the care delivery site (Knight et al., 2014; Patterson et al., 2013). Only one study directly measured changes in practice costs (Banki et al., 2013).

Several studies measured outcomes over many months and even years to assess for sustained changes in patient or provider outcomes (Armour Forse et al., 2011; Helitzer et al., 2011; Mayer et al., 2011; Morey et al., 2002; Pettker et al., 2009; Phipps et al., 2012; Pingleton et al., 2013; Rask et al., 2007; Sax et al., 2009; Thompson et al., 2000b; Wolf et al., 2010). For these studies, improvements were sustained over the study period, although some reported partial decay over time. Another complication is that while these studies included graphics that listed outcomes at multiple time points before and after the IPE intervention, only two were actual ITS studies (Hanbury et al., 2009; Pettker et al., 2009). One based its conclusions on the single lowest and highest pre- and postintervention values (Pingleton et al., 2013), and the rest based their conclusions on the average of before and after outcomes.
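The distinction drawn above matters statistically: a simple comparison of pre- and postintervention averages cannot separate an abrupt level change from a preexisting trend, whereas an ITS analysis can. The following sketch illustrates one common ITS approach, segmented regression; it is an assumption for illustration, not a method taken from any of the reviewed studies.

```python
import numpy as np

def segmented_regression(rates, intervention_month):
    """Fit outcome = b0 + b1*t + b2*post + b3*(t - intervention_month)*post.

    b0: baseline level; b1: preintervention trend;
    b2: immediate level change; b3: change in trend after the intervention.
    Illustrative only; a real ITS analysis would also model autocorrelation.
    """
    y = np.asarray(rates, dtype=float)
    t = np.arange(len(y))
    post = (t >= intervention_month).astype(float)
    t_since = post * (t - intervention_month)
    X = np.column_stack([np.ones_like(y), t, post, t_since])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    names = ["baseline", "pre_trend", "level_change", "trend_change"]
    return dict(zip(names, coeffs))

# Synthetic monthly adverse-event rates: flat at 10, dropping to 7 after month 12
monthly_rates = [10.0] * 12 + [7.0] * 12
effects = segmented_regression(monthly_rates, intervention_month=12)
# effects["level_change"] recovers the immediate drop of about -3
```

For this flat series an average comparison would give the same answer, but for a series with a preexisting downward trend the averaging approach used by most of the reviewed studies would misattribute part of that trend to the intervention, while the b3 term here would expose it.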

Patient and Practice Outcome Tools

The most commonly used measurement tool for both provider and patient outcomes involved chart review/clinical database access for retrieving specific patient data, error/adverse event/incident reporting, and OR reports. Most observational studies used validated tools such as the Trauma Oxford Non-Technical Skills scale (Steinemann et al., 2011), Teamwork Evaluation of Non-Technical Skills tool (Mayer et al., 2011), American College of Surgeons National Surgical Quality Improvement Program tool (Bliss et al., 2012), Behavioral Markers for Neonatal Resuscitation Scale (Patterson et al., 2013), Medical Performance Assessment Tool for Communication and Teamwork (Weaver et al., 2010), and Trauma Team Performance Observation Tool (Capella et al., 2010). One study used the validated Roter Interaction Analysis System provider–patient communication tool (Helitzer et al., 2011). Shaw and colleagues (2014) used a validated Family Satisfaction in the ICU tool to link teamwork with family-perceived provider communication. Patient satisfaction was measured using the Press Ganey Patient Satisfaction Tool in one study (Banki et al., 2013), and the Patient Safety Satisfaction Survey in another (Campbell et al., 2001). A blended tool taken from several sources was used in one study (Morey et al., 2002), and a tool designed by the researchers was used in another (Brown et al., 1999).

Sample Size and Composition of Providers Trained

All but 3 of the 10 RCTs (Brown et al., 1999; Helitzer et al., 2011; Nurok et al., 2011) and 1 CBA (Weaver et al., 2010) described in this updated review had large sample sizes involving multiple practice sites. For example, one cluster RCT trained more than 1,300 providers whose outcomes were measured in 15 military and civilian hospitals across multiple states (Nielsen et al., 2007). Sample size in the ITS and BA studies varied widely, and several studies failed to report the specific number of participants trained (Armour Forse et al., 2011; Knight et al., 2014; Nurok et al., 2011; Theilen et al., 2013). The composition of providers trained also varied significantly. All studies included nurses (registered nurses [RNs] or advanced practice registered nurses [APRNs]), and only two did not include physicians (Lang et al., 2010; Rask et al., 2007); however, the specific number of participating physicians often was not reported. Four studies specifically listed doctor of pharmacy (PharmD) participation, eight reported therapist participation, nine reported technician participation, and four reported social worker participation. Other reported participants included nutritionists, housekeeping staff, schedulers, physician assistants, unit secretaries, chaplains, psychologists, and security officers. The accuracy of these counts is limited because some of these participants may have been included under broad descriptions such as “ancillary personnel,” “support personnel,” “OR team,” “health care team,” and “health care assistants.” The number of patient and provider outcomes measured in each study also varied widely. For example, one study reported patient outcomes for only 21 patients (Helitzer et al., 2011), whereas another reported outcomes for 21,409 patients (Thompson et al., 2000a).


Setting

This review included studies reflecting a broad range of locales, including inpatient and outpatient settings. Interestingly, there were similar numbers of U.S. studies conducted in community hospitals and practices (14) and in academic health centers (15). The OR was the most commonly studied academic setting, accounting for six studies (Armour Forse et al., 2011; Bliss et al., 2012; Halverson et al., 2009; Nurok et al., 2011; Sax et al., 2009; Wolf et al., 2010). Acute care settings accounted for 10 of the 13 U.S. studies conducted in the community, while primary care clinics (including mental health) accounted for only 3 studies (Taylor et al., 2007; Thompson et al., 2000b; Young et al., 2005). The Veterans Health Administration hosted three large studies (Neily et al., 2010; Strasser et al., 2008; Young-Xu et al., 2011). Five international studies were included (Britain = three, Germany = one, Mexico = one). An unusual setting for reporting team training was U.S. combat operations in Iraq. Finally, one nursing home and one free-standing MRI facility were included.

Type of IPE Intervention

The type of IPE intervention varied widely. The two most cited interventions were Crew Resource Management (n = 9) and TeamSTEPPS (n = 6) (see Table A-2 in Annex A-1); however, these were almost always implemented in a modified format. Several other standardized programs were used (see Annex A-1), but in-house-designed programs were the most common type of IPE intervention. The descriptions of these programs varied from general and nonspecific to highly detailed. Several studies combined teamwork training with training focused on selected care outcomes, such as prevention of venous thromboembolism (Pingleton et al., 2013; Tapson et al., 2011) or best practices in diabetes management (Barceló et al., 2010; Janson et al., 2009; Taylor et al., 2007).

TABLE A-2. Measuring the Impact of IPE on Collaborative Practice and Patient Outcomes: Detailed Data Table.



Overview of Findings

Learner teamwork competencies and communication skills were improved in most of the observational studies. Morbidity and mortality were directly measured in some of the larger studies, especially those focused on the OR (Armour Forse et al., 2011; Bliss et al., 2012; Neily et al., 2010; Young-Xu et al., 2011) and labor and delivery (Riley et al., 2011). One study looked at teamwork during resuscitations in the ICU and found significant improvements in survival (Knight et al., 2014). Care quality was improved in the majority of studies included in this review, most often reported as changes in practice processes, such as adherence to best practices, use of checklists, and participation in briefings. For most of these studies, team training was implemented as one part of a more comprehensive approach to practice changes (e.g., procedure manuals, mandatory OR briefings, checklists, new reporting systems). Improvements in specific patient care quality outcomes, such as HbA1c, cholesterol, blood pressure, and mobility after stroke, were reported in four studies (Barceló et al., 2010; Janson et al., 2009; Strasser et al., 2008; Taylor et al., 2007). Patient safety outcomes were also improved in most studies as measured by decreases in adverse outcomes (Bliss et al., 2012; Mayer et al., 2011; Patterson et al., 2013; Pettker et al., 2009; Phipps et al., 2012; Pingleton et al., 2013; Riley et al., 2011) and error reporting (Hoffmann et al., 2014). A reduction in error rates was reported in two studies (Deering et al., 2011; Morey et al., 2002). Patient satisfaction was improved in two studies (Banki et al., 2013; Campbell et al., 2001) and unchanged in two others (Brown et al., 1999; Morey et al., 2002). Care efficiency improvements were measured in several studies (Banki et al., 2013; Capella et al., 2010; Wolf et al., 2010), and direct improvements in costs were reported in one study (Banki et al., 2013).

Overview of Methodologic Limitations

The following methodologic limitations were noted:

  • for controlled studies, inability to control for differences between control and intervention study groups, lack of concealed allocation, inadequate blinding in the assessment of outcomes, evidence of selective outcome reporting, differences in the type and quality of the educational intervention, and difficult-to-measure endpoints;
  • inadequate control for multiple other simultaneous practice changes that affect patient outcomes;
  • lack of adequate timeline to document sustained changes in practice or patient outcomes;
  • paucity of evidence for patient-centered changes in care;
  • lack of studies addressing cost outcomes (business case);
  • poor description of participants (how many, which disciplines);
  • lack of clarity as to whether those trained together actually worked as a team in the practice setting;
  • lack of evidence that teamwork training resulted in improved teamwork behaviors prior to assessment of clinical outcomes; and
  • lack of adequate description of the type and quality of the IPE intervention as significant variables influencing outcomes.


The number of studies that link IPE with changes in practice and patient outcomes is growing. However, methodologic limitations continue to confound interpretation and generalization of the results.

While the RCT is considered the “gold standard” methodology for clinical studies, in educational research RCTs (like CBAs) suffer from poorly matched controls resulting from differences both within and among care delivery settings. Smaller studies are particularly vulnerable to the impact of differences among study groups. These barriers can be minimized to some degree by large-scale studies in which many clinician learners and practice settings can be randomized; however, differences among study sites likely remain, limiting meaningful comparisons of measured outcomes. Other methodologic challenges related to participant allocation, investigator blinding, and variations in the quality of the IPE intervention cannot be completely avoided (Sullivan, 2011). As was stated in an Institute of Medicine (IOM) report on continuing medical education, “While controlled trial methods produce quantifiable end points, they do not fully explain whether outcomes occur as a result of participation in CE [continuing education], thus, a variety of research methods may be necessary” (IOM, 2010, p. 39).

Regardless of the study type, the implementation of other practice changes during the course of the study makes it difficult to ascribe documented changes in outcomes directly to the IPE intervention alone. One can argue that a combination of teamwork training and other practice changes would likely be even more effective in improving care (Weaver et al., 2014). Nevertheless, it is still important to better understand the independent and relative impact of teamwork training given the challenges inherent in scheduling and appropriately implementing effective IPE interventions.

The choice of outcome measures and measurement tools is a complex decision. Most of the studies in this review used retrieval of data from medical records to identify patient and practice outcome measures. While broad justifications are included in the background or introduction portions of these articles, few of the investigators make clear why particular outcome measures were chosen. At least three limitations should be considered when interpreting these data. First, studies using aggregate data collected from medical records pre- and postintervention are less likely to account for other changes in care unrelated to the IPE intervention than are studies in which specific cohorts of patients are carefully monitored and compared over time. Second, as described in the 2013 Cochrane review (Reeves et al., 2013), careful reading suggests that at least some studies engaged in selective reporting of outcomes, which complicates interpretation of the effectiveness of the intervention. Finally, it is of concern that only four studies in this review focused on patient and family satisfaction. While objective measurement of practice and patient outcomes is essential, a patient-centered approach requires a more focused and nuanced tool for linking teamwork-based changes in care with the patient and family experience. Patients should not only be safe and well cared for, but should also feel safe and well cared for, and it is important to identify those teamwork factors that best promote that perception. Future research should focus on developing IPE interventions that teach patient-centered skills along with those skills needed to affect objective outcomes.

As with any education intervention, there is concern that the impact on knowledge, skills, and behavior will decay over time. All 11 of the long-term studies included in this review document a sustained impact on provider or patient outcomes, although the effects tended to decay over time. This is consistent with a 2007 comprehensive analysis of the effectiveness of continuing medical education (CME) in imparting knowledge and skills, changing attitudes and practice behavior, and improving clinical outcomes (Marinopoulos et al., 2007). While fewer than half of the studies in that analysis measured outcomes beyond 30 days postintervention, those that did found sustained changes in practice behaviors. Additional studies are needed to explore the best timing, content, format, and length of IPE interventions to provide the most sustained impact.

One challenge is that care delivery in most institutions does not occur in the context of stable teams composed of professionals who train and work together in an intact group. Teams are most often ad hoc and may change on a weekly, daily, or even hourly basis for any given patient. With the exception of some of the operating room studies in this review, it is not clear whether the teams that trained together actually functioned as a team at the bedside. Although one meta-analysis suggests that improvements in team performance with team training are similar for intact and ad hoc teams (Salas et al., 2008), it may be that a team needs a “critical mass” of trained members in order to function effectively. Furthermore, while many of the studies provide the overall number of trainees and a list of participating professions, few document whether the teams that participated in any specific training session actually represented an appropriate number of trainees from each profession. These limitations suggest that the demonstration of improved teamwork skills in the actual clinical setting is an essential step before measuring practice or patient outcomes. While the Hawthorne effect is a consideration, there is evidence that observation in the clinical setting does not result in prolonged contamination of the data (Hohenhaus et al., 2008; Schnelle et al., 2006). Observation of actual changes in team behaviors provides stronger evidence for the link between team training and measurable changes in practice and patient outcomes (Morey and Salisbury, 2002).

It is interesting to note that few of the studies in this review gave in-depth consideration to the influence of IPE intervention implementation factors (timing, content, format, length, instructor and learner preparation) on outcomes. Even when researchers used well-respected programs such as Crew Resource Management and TeamSTEPPS, the programs were frequently modified for logistical reasons. It is impossible to know how the modifications affected the outcomes; for that reason, the studies cannot be compared as if the same intervention were tested. The majority of investigators created IPE interventions of their own design. Many of the most effective IPE interventions in this review combined team training with “taskwork” training related to best practices for a specific patient population (e.g., diabetes patients). Salas and colleagues (2008) report that both teamwork and taskwork are effective in improving outcomes; however, the relative emphasis of each in the interventions reviewed here is not well described. IPE interventions that are created by local stakeholders to address institutional priorities have the advantage of eliciting increased participation by providers, integrating faculty development, and allowing for assessment of specific teamwork behaviors and competencies (Owen et al., 2012), but they often vary widely in scope, content, format, and duration. There is a great deal of information available to inform the design and implementation of continuing IPE programs. Core principles that should be applied include ensuring adequate incorporation of effective theoretical foundations, adult learning principles, interprofessional learning objectives, and strategies for increased knowledge transfer and retention (IOM, 2013; Merriam and Leahy, 2005; Owen et al., 2014; Reeves and Hean, 2013). Yet for many of the studies in this review, it is not clear whether evidence-based principles were applied to the design and implementation of the IPE interventions.
More guidance may be needed to help investigators choose the best approach.

Given the many methodologic limitations of these studies, outcome data must be interpreted carefully. Yet it is important to note that the majority of studies in this review found improvements in care processes, patient outcomes, or both. While the diversity of approaches and methodologic limitations make it difficult to draw clear conclusions with respect to best practices for linking IPE with patient and practice outcomes, this limited review suggests that the characteristics of those studies with the most significant improvements in outcomes include

  • high learner participation rates or self-selection to intervention group,
  • combination of IPE and goal-specific education (teamwork + taskwork),
  • combination of IPE and other changes in practice processes,
  • use of simulation and videotaping,
  • repetition of IPE interventions with regular feedback to learners, and
  • correlation of IPE intervention with observed and measurable changes in teamwork behaviors/skills.

While this review has attempted to describe the limitations of current research methodologies so that recommendations for future research can be made, it is important to recognize that many of the studies in this review represent high-quality groundbreaking research in a highly complex area of investigation. As stated in the 2010 IOM report, “In health care settings, it may remain difficult to measure dependent variables because linking participation in CE to changes in the practice setting is a complex process that cannot easily be tracked using current methods” (IOM, 2010, p. 35). In a recent synthesis of the team-training research literature, Weaver and colleagues (2014) note that research in this area is still plagued with limitations, including “small sample sizes, weak study design and limited detail regarding the team training curriculum or implementation strategy.” When research limitations are compounded by the complexities of bringing together professionals from diverse backgrounds and perspectives, it is unsurprising that much work remains to be done.


Based on this extensive review, it is the authors’ opinion that key recommendations necessary for meaningful research linking IPE interventions with sustained changes in practice and patient outcomes include the following:

  • Conduct large-scale controlled studies that minimize confounding variables; when this is not possible, consideration should be given to conducting well-designed ITS studies with careful monitoring of the study cohort to account for other variables that may impact outcomes.
  • Use objective, relevant provider and patient outcome measures chosen prospectively, and report all results.
  • Implement the IPE intervention at a defined time, adequately isolated from other practice changes.
  • Collect pre- and postintervention data at multiple time points over several years.
  • Include in patient outcome data an assessment of patient-centered team-based care.
  • Observe and measure team behaviors in the actual practice setting before collecting practice or patient outcome data.
  • Ensure that the IPE intervention is evidence- and competency-based, builds on sound theoretical underpinnings, is conducted by well-trained instructors, and is provided to the proper mix of learners.
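To illustrate the ITS design recommended above, the sketch below fits a standard segmented regression to hypothetical monthly outcome data. All data and variable names are invented for illustration; a real study would use the actual pre- and postintervention measurements collected at multiple time points.

```python
import numpy as np

# Hypothetical monthly outcome (e.g., an adverse-event rate per 1,000
# patient-days): 24 months before and 24 months after an IPE intervention.
rng = np.random.default_rng(0)
months = np.arange(48)
intervention_month = 24
post = (months >= intervention_month).astype(float)          # level-change indicator
time_since = post * (months - intervention_month)            # post-intervention slope term

# Simulated series: mild secular trend, a level drop at the intervention,
# a steeper post-intervention decline, plus noise.
outcome = (10 - 0.02 * months - 1.5 * post - 0.05 * time_since
           + rng.normal(0, 0.3, size=months.size))

# Segmented regression:
#   outcome ~ intercept + baseline trend + level change + slope change
X = np.column_stack([np.ones(months.size), months, post, time_since])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
intercept, trend, level_change, slope_change = coef
```

A negative `level_change` estimates the immediate shift in the outcome at the intervention, while `slope_change` estimates how the underlying trend differs afterward; distinguishing these two effects from the secular trend is what lets an ITS design account for changes that would have occurred anyway.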


References

  • Armour Forse R, Bramble JD, McQuillan R. Team training can improve operating room performance. Surgery. 2011;150(4):771–778. [PubMed: 22000190]
  • Banki F, Ochoa K, Carrillo ME, Leake SS, Estrera AL, Khalil K, Safi HJ. A surgical team with focus on staff education in a community hospital improves outcomes, costs and patient satisfaction. American Journal of Surgery. 2013;206(6):1007–1014. discussion 1014-1015. [PubMed: 24139667]
  • Barceló A, Cafiero E, de Boer M, Mesa AE, Lopez MG, Jimenez RA, Esqueda AL, Martinez JA, Holguin EM, Meiners M, Bonfil GM, Ramirez SN, Flores EP, Robles S. Using collaborative learning to improve diabetes care and outcomes: The VIDA project. Primary Care Diabetes. 2010;4(3):145–153. [PubMed: 20478753]
  • Bliss LA, Ross-Richardson CB, Sanzari LJ, Shapiro DS, Lukianoff AE, Bernstein BA, Ellner SJ. Thirty-day outcomes support implementation of a surgical safety checklist. Journal of the American College of Surgeons. 2012;215(6):766–776. [PubMed: 22951032]
  • Brown JB, Boles M, Mullooly JP, Levinson W. Effect of clinician communication skills training on patient satisfaction. A randomized, controlled trial. Annals of Internal Medicine. 1999;131(11):822–829. [PubMed: 10610626]
  • Campbell JC, Coben JH, McLoughlin E, Dearwater S, Nah G, Glass N, Lee D, Durborow N. An evaluation of a system-change training model to improve emergency department response to battered women. Academic Emergency Medicine. 2001;8(2):131–138. [PubMed: 11157288]
  • Capella J, Smith S, Philp A, Putnam T, Gilbert C, Fry W, Harvey E, Wright A, Henderson K, Baker D. Teamwork training improves the clinical care of trauma patients. Journal of Surgical Education. 2010;67(6):439–443. [PubMed: 21156305]
  • Deering S, Rosen MA, Ludi V, Munroe M, Pocrnich A, Laky C, Napolitano PG. On the front lines of patient safety: Implementation and evaluation of team training in Iraq. Joint Commission Journal on Quality and Patient Safety. 2011;37(8):350–356. [PubMed: 21874970]
  • DeVita MA, Schaefer J, Lutz J, Wang H, Dongilli T. Improving medical emergency team (MET) performance using a novel curriculum and a computerized human patient simulator. Quality & Safety in Health Care. 2005;14(5):326–331. [PMC free article: PMC1744065] [PubMed: 16195564]
  • Halverson AL, Andersson JL, Anderson K, Lombardo J, Park CS, Rademaker AW, Moorman DW. Surgical team training: The Northwestern Memorial Hospital experience. Archives of Surgery. 2009;144(2):107–112. [PubMed: 19221320]
  • Hanbury A, Wallace L, Clark M. Use of a time series design to test effectiveness of a theory-based intervention targeting adherence of health professionals to a clinical guideline. British Journal of Health Psychology. 2009;14(Pt. 3):505–518. [PubMed: 18851769]
  • Helitzer DL, Lanoue M, Wilson B, de Hernandez BU, Warner T, Roter D. A randomized controlled trial of communication training with primary care providers to improve patient-centeredness and health risk communication. Patient Education and Counseling. 2011;82(1):21–29. [PMC free article: PMC3539754] [PubMed: 20219315]
  • Hoffmann B, Muller V, Rochon J, Gondan M, Muller B, Albay Z, Weppler K, Leifermann M, Miessner C, Guthlin C, Parker D, Hofinger G, Gerlach FM. Effects of a team-based assessment and intervention on patient safety culture in general practice: An open randomised controlled trial. BMJ Quality and Safety. 2014;23(1):35–46. [PubMed: 23955468]
  • Hohenhaus SM, Powell S, Haskins R. A practical approach to observation of the emergency care setting. Journal of Emergency Nursing. 2008;34(2):142–144. [PubMed: 18358355]
  • IOM (Institute of Medicine). Redesigning continuing education in the health professions. Washington, DC: The National Academies Press; 2010.
  • IOM. Interprofessional education for collaboration: Learning how to improve health from interprofessional models across the continuum of education to practice: Workshop summary. Washington, DC: The National Academies Press; 2013.
  • Janson SL, Cooke M, McGrath KW, Kroon LA, Robinson S, Baron RB. Improving chronic care of type 2 diabetes using teams of interprofessional learners. Academic Medicine. 2009;84(11):1540–1548. [PubMed: 19858812]
  • Knight LJ, Gabhart JM, Earnest KS, Leong KM, Anglemyer A, Franzon D. Improving code team performance and survival outcomes: Implementation of pediatric resuscitation team training. Critical Care Medicine. 2014;42(2):243–251. [PubMed: 24158170]
  • Lang EV, Ward C, Laser E. Effect of team training on patients' ability to complete MRI examinations. Academic Radiology. 2010;17(1):18–23. [PubMed: 19734060]
  • Marinopoulos SS, Dorman T, Ratanawongsa N, Wilson LM, Ashar BH, Magaziner JL, Miller RG, Thomas PA, Prokopowicz GP, Qayyum R, Bass EB. Effectiveness of continuing medical education. Rockville, MD: Agency for Healthcare Research and Quality; 2007. (Evidence Report/Technology Assessment No. 149).
  • Mayer CM, Cluff L, Lin WT, Willis TS, Stafford RE, Williams C, Saunders R, Short KA, Lenfestey N, Kane HL, Amoozegar JB. Evaluating efforts to optimize TeamSTEPPS implementation in surgical and pediatric intensive care units. Joint Commission Journal on Quality and Patient Safety. 2011;37(8):365–374. [PubMed: 21874972]
  • Merriam SB, Leahy B. Learning transfer: A review of the research in adult education and training. PAACE Journal of Lifelong Learning. 2005;14:1–24.
  • Morey JC, Salisbury M. Introducing teamwork training into healthcare organizations: Implementation issues and solutions. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Santa Monica, CA: Human Factors and Ergonomics Society; 2002. pp. 2069–2073.
  • Morey JC, Simon R, Jay GD, Wears RL, Salisbury M, Dukes KA, Berns SD. Error reduction and performance improvement in the emergency department through formal teamwork training: Evaluation results of the MedTeams project. Health Services Research. 2002;37(6):1553–1581. [PMC free article: PMC1464040] [PubMed: 12546286]
  • Neily J, Mills PD, Young-Xu Y, Carney BT, West P, Berger DH, Mazzia LM, Paull DE, Bagian JP. Association between implementation of a medical team training program and surgical mortality. Journal of the American Medical Association. 2010;304(15):1693–1700. [PubMed: 20959579]
  • Nielsen PE, Goldman MB, Mann S, Shapiro DE, Marcus RG, Pratt SD, Greenberg P, McNamee P, Salisbury M, Birnbach DJ, Gluck PA, Pearlman MD, King H, Tornberg DN, Sachs BP. Effects of teamwork training on adverse outcomes and process of care in labor and delivery: A randomized controlled trial. Obstetrics & Gynecology. 2007;109(1):48–55. [PubMed: 17197587]
  • Nurok M, Evans LA, Lipsitz S, Satwicz P, Kelly A, Frankel A. The relationship of the emotional climate of work and threat to patient outcome in a high-volume thoracic surgery operating room team. BMJ Quality and Safety. 2011;20(3):237–242. [PubMed: 21209131]
  • Owen JA, Brashers VL, Peterson C, Blackhall L, Erickson J. Collaborative care best practice models: A new educational paradigm for developing interprofessional educational (IPE) experiences. Journal of Interprofessional Care. 2012;26(2):153–155. [PubMed: 22233285]
  • Owen JA, Brashers VL, Littlewood KE, Wright E, Childress RM, Thomas S. Designing and evaluating an effective theory-based continuing interprofessional education program to improve sepsis care by enhancing healthcare team collaboration. Journal of Interprofessional Care. 2014;28(3):212–217. [PubMed: 24593326]
  • Patterson MD, Geis GL, LeMaster T, Wears RL. Impact of multidisciplinary simulation-based training on patient safety in a paediatric emergency department. BMJ Quality and Safety. 2013;22(5):383–393. [PubMed: 23258388]
  • Pettker CM, Thung SF, Norwitz ER, Buhimschi CS, Raab CA, Copel JA, Kuczynski E, Lockwood CJ, Funai EF. Impact of a comprehensive patient safety strategy on obstetric adverse events. American Journal of Obstetrics and Gynecology. 2009;200(5):492.e1–492.e8. [PubMed: 19249729]
  • Phipps MG, Lindquist DG, McConaughey E, O'Brien JA, Raker CA, Paglia MJ. Outcomes from a labor and delivery team training program with simulation component. American Journal of Obstetrics and Gynecology. 2012;206(1):3–9. [PubMed: 21840493]
  • Pingleton SK, Carlton E, Wilkinson S, Beasley J, King T, Wittkopp C, Moncure M, Williamson T. Reduction of venous thromboembolism (VTE) in hospitalized patients: Aligning continuing education with interprofessional team-based quality improvement in an academic medical center. Academic Medicine. 2013;88(10):1454–1459. [PubMed: 23969376]
  • Rask K, Parmelee PA, Taylor JA, Green D, Brown H, Hawley J, Schild L, Strothers HS III, Ouslander JG. Implementation and evaluation of a nursing home fall management program. Journal of the American Geriatric Society. 2007;55(3):342–349. [PubMed: 17341235]
  • Reeves S, Hean S. Why we need theory to help us better understand the nature of interprofessional education, practice and care. Journal of Interprofessional Care. 2013;27(1):1–3. [PubMed: 23256880]
  • Reeves S, Perrier L, Goldman J, Freeth D, Zwarenstein M. Interprofessional education: Effects on professional practice and healthcare outcomes (update). Cochrane Database of Systematic Reviews. 2013;(3):CD002213. [PMC free article: PMC6513239] [PubMed: 23543515]
  • Riley W, Davis S, Miller K, Hansen H, Sainfort F, Sweet R. Didactic and simulation nontechnical skills team training to improve perinatal patient outcomes in a community hospital. Joint Commission Journal on Quality and Patient Safety. 2011;37(8):357–364. [PubMed: 21874971]
  • Salas E, DiazGranados D, Klein C, Burke CS, Stagl KC, Goodwin GF, Halpin SM. Does team training improve team performance? A meta-analysis. Human Factors: The Journal of the Human Factors and Ergonomics Society. 2008;50(6):903–933. [PubMed: 19292013]
  • Sax HC, Browne P, Mayewski RJ, Panzer RJ, Hittner KC, Burke RL, Coletta S. Can aviation-based team training elicit sustainable behavioral change? Archives of Surgery. 2009;144(12):1133–1137. [PubMed: 20026831]
  • Schnelle JF, Ouslander JG, Simmons SF. Direct observations of nursing home care quality: Does care change when observed? Journal of the American Medical Directors Association. 2006;7(9):541–544. [PubMed: 17095417]
  • Shaw DJ, Davidson JE, Smilde RI, Sondoozi T, Agan D. Multidisciplinary team training to enhance family communication in the ICU. Critical Care Medicine. 2014;42(2):265–271. [PubMed: 24105452]
  • Steinemann S, Berg B, Skinner A, DiTulio A, Anzelon K, Terada K, Oliver C, Ho HC, Speck C. In situ, multidisciplinary, simulation-based teamwork training improves early trauma care. Journal of Surgical Education. 2011;68(6):472–477. [PubMed: 22000533]
  • Strasser DC, Falconer JA, Stevens AB, Uomoto JM, Herrin J, Bowen SE, Burridge AB. Team training and stroke rehabilitation outcomes: A cluster randomized trial. Archives of Physical Medicine & Rehabilitation. 2008;89(1):10–15. [PubMed: 18164324]
  • Sullivan GM. Getting off the “gold standard”: Randomized controlled trials and education research. Journal of Graduate Medical Education. 2011;3(3):285–289. [PMC free article: PMC3179209] [PubMed: 22942950]
  • Tapson VF, Karcher RB, Weeks R. Crew resource management and VTE prophylaxis in surgery: A quality improvement initiative. American Journal of Medical Quality. 2011;26(6):423–432. [PubMed: 21609940]
  • Taylor CR, Hepworth JT, Buerhaus PI, Dittus R, Speroff T. Effect of crew resource management on diabetes care and patient outcomes in an inner-city primary care clinic. Quality & Safety in Health Care. 2007;16(4):244–247. [PMC free article: PMC2464938] [PubMed: 17693668]
  • Theilen U, Leonard P, Jones P, Ardill R, Weitz J, Agrawal D, Simpson D. Regular in situ simulation training of paediatric medical emergency team improves hospital response to deteriorating patients. Resuscitation. 2013;84(2):218–222. [PubMed: 22796407]
  • Thompson C, Kinmonth AL, Stevens L, Peveler RC, Stevens A, Ostler KJ, Pickering RM, Baker NG, Henson A, Preece J, Cooper D, Campbell MJ. Effects of a clinical-practice guideline and practice-based education on detection and outcome of depression in primary care: Hampshire depression project randomised controlled trial. Lancet. 2000a;355(9199):185–191. [PubMed: 10675118]
  • Thompson RS, Rivara FP, Thompson DC, Barlow WE, Sugg NK, Maiuro RD, Rubanowice DM. Identification and management of domestic violence: A randomized trial. American Journal of Preventive Medicine. 2000b;19(4):253–263. [PubMed: 11064229]
  • Weaver SJ, Rosen MA, DiazGranados D, Lazzara EH, Lyons R, Salas E, Knych SA, McKeever M, Adler L, Barker M, King HB. Does teamwork improve performance in the operating room? A multilevel evaluation. Joint Commission Journal on Quality and Patient Safety. 2010;36(3):133–142. [PubMed: 20235415]
  • Weaver SJ, Dy SM, Rosen MA. Team-training in healthcare: A narrative synthesis of the literature. BMJ Quality and Safety. 2014;23(5):359–372. [PMC free article: PMC3995248] [PubMed: 24501181]
  • Wolf FA, Way LW, Stewart L. The efficacy of medical team training: Improved team performance and decreased operating room delays: A detailed analysis of 4863 cases. Annals of Surgery. 2010;252(3):477–483. [PubMed: 20739848]
  • Young AS, Chinman M, Forquer SL, Knight EL, Vogel H, Miller A, Rowe M, Mintz J. Use of a consumer-led intervention to improve provider competencies. Psychiatric Services. 2005;56(8):967–975. [PubMed: 16088014]
  • Young-Xu Y, Neily J, Mills PD, Carney BT, West P, Berger DH, Mazzia LM, Paull DE, Bagian JP. Association between implementation of a medical team training program and surgical morbidity. Archives of Surgery. 2011;146(12):1368–1373. [PubMed: 22184295]


Copyright 2015 by the National Academy of Sciences. All rights reserved.
Bookshelf ID: NBK338366

