

Henriksen K, Battles JB, Keyes MA, et al., editors. Advances in Patient Safety: New Directions and Alternative Approaches (Vol. 2: Culture and Redesign). Rockville (MD): Agency for Healthcare Research and Quality (US); 2008 Aug.




Harnessing the Potential of Health Care Collaboratives: Lessons from the Keystone ICU Project

Christine A. Goeschel, RN, MPA, MPS, and Peter J. Pronovost, MD, PhD*

The Johns Hopkins University School of Medicine, Department of Anesthesiology and Critical Care Medicine, Quality and Safety Research Group
*Address correspondence to: Christine A. Goeschel, RN, MPA, MPS, The Johns Hopkins University School of Medicine, Department of Anesthesiology and Critical Care Medicine, Quality and Safety Research Group, 1909 Thames Street, 2nd Floor, Baltimore, MD 21231; e-mail: cgoesch1@jhmi.edu.

In October 2003, the Quality and Safety Research Group of the Johns Hopkins University School of Medicine, the Michigan Health and Hospital Association, and 108 intensive care units (ICUs) from 77 hospitals began a collaborative improvement project. Goals to improve care included creating a culture of safety, reducing central line-associated bloodstream infections (CLABSI) and ventilator-associated pneumonia (VAP), and improving compliance with evidence-based practices for ventilator care. Improvement teams were assembled in each ICU to do the work, and the chief executive officer of each hospital partnered to support project efforts. The teams achieved a 50 percent improvement in safety climate, attained a median CLABSI rate of zero, and reported 99 percent compliance with evidence-based ventilator care practices. Understanding how and why this collaborative succeeded may expedite progress in other improvement efforts. In this article we present some of the lessons we learned while leading the Keystone ICU project.

Introduction

In October 2003, 108 intensive care units in 77 hospitals, the Michigan Health and Hospital Association, and the Quality and Safety Research Group at The Johns Hopkins University School of Medicine embarked on a collaborative journey with bold but focused goals. Our purpose was to improve intensive care in Michigan by creating a culture of safety, reducing central line-associated bloodstream infections (CLABSI) and ventilator-associated pneumonia (VAP), and improving compliance with evidence-based ventilator care. By September 2005 in participating units, the median rate of CLABSI was zero, safety culture had improved more than 50 percent, and compliance with evidence-based ventilator care was 99 percent. Additional data on these measures were recently received but have not yet been analyzed. VAP rates also fell, but those data are not yet published.

Given the effectiveness of these interventions and the potential for replicating this project (the Keystone ICU project), it is important to understand how this program differed from other quality improvement efforts and why it was successful. The limitations of qualitative reports of “lessons learned” from quality improvement collaboratives are widely acknowledged in the literature.1, 2, 3 However, a more complete picture of the collaborative process is possible when qualitative reports are coupled with complementary, objective, and rigorously conducted outcome evaluations.1, 4, 5 The significant improvement in CLABSI rates achieved by the Keystone ICU teams has been published.6

Thus, the specific aim of this paper is to present some of the lessons we learned as coleaders of the Keystone ICU (KICU) collaborative. Our intention is to further the dialogue and advance the science of large-scale quality and safety improvement projects. Given the magnitude of improvements in quality of care in this collaborative relative to improvements with pay for performance,7 use of this collaborative model may increase in frequency. Strategies to improve the efficiency and effectiveness of collaboratives will therefore be an important research priority.

Lessons Learned from the Keystone ICU Collaborative

A virtual learning community evolved during our work with the Keystone ICU teams, with frequent dialogue occurring among teams outside the formal collaborative structure. Thus, we recognized that the perspectives of other health care organizations or researchers regarding what contributed to the success of this project were important and could vary from the perspectives we present in this article. We offer the following lessons, not as an exhaustive list, but as a starting point for others to consider when embarking on a large-scale initiative (Table 1).

Table 1. Lessons from the Keystone ICU Collaborative.

Understand the Differences Between Leadership and Authority: Cultivate Leaders

Leading change efforts (e.g., an improvement collaborative) is a challenge that benefits from an understanding of the differences between leadership and authority. Social psychology literature describes five bases of social power in organizations: legitimate power, referent power, expert power, reward power, and coercive power.8 Individuals with legitimate power (also referred to as “position power”) have organizational authority and are typically expected to control conflict, maintain norms, and provide direction, protection, and orientation to role and place. Within the collaborative structure, the engagement and support of organizational authorities are necessary but not sufficient for success.9 Authorities may or may not exhibit strong leadership skills.

Leadership is often independent of authority. In collaboratives, changes in practice and adherence to measurement criteria are often led by staff who exhibit strong informal leadership traits. Such individuals share knowledge (“expert power”) and build consensus (“referent power”), even though they hold no job position that confers formal authority.8, 10

Leaders involve other people in setting directions, facing challenges, adjusting values, changing perspectives, and developing new behaviors.10 Within the context of the KICU collaborative, we encouraged teams to acknowledge and support both formal (e.g., CEO) and informal leaders (e.g., bedside nurse) at every level of the organization. For example, to help facilitate engagement, we asked for input and recognized the perspectives and contributions of everyone involved in the project. As a result, leaders on the frontlines—such as attending physicians, nurses, therapists, pharmacists, unit coordinators and environmental health workers—and organizational leaders with formal authority—such as ICU medical directors, managers and hospital executives—were all well-informed. They understood the goals of the project, recognized why these were important, supported the interventions, believed the performance measures were feasible and valid, and committed the effort needed to improve care.

Get Both the Technical and Adaptive Work Right

Conceptually, technical and adaptive challenges are separate beasts to conquer, yet they often intersect in the process of completing work in a collaborative. Technical problems are those that can be clearly defined and most often have existing solutions. Technical work (content) can certainly be difficult, but we typically know what to do or can solicit the assistance of an expert for advice.10 In an improvement collaborative, technical work enlists all participants.

Examples of technical work at the team level in the KICU project included stocking chlorhexidine in each ICU; developing a method to procure and organize the supplies needed for a line-insertion procedure using evidence-based practice guidelines; educating everyone involved with line insertion on the evidence supporting the project interventions; reliably collecting required data elements; and reporting the results.

Examples of technical work at the project coordinator level for any collaborative initiative include:

  • Defining the focus for improvement.
  • Selecting the measures to assess this improvement.
  • Defining the variables and data collection methods.
  • Developing a database and data management plan.
  • Analyzing data and generating reports.
  • Providing feedback to local leaders and frontline staff.
  • Determining the consequences if teams do not consistently meet project participation requirements.

In KICU, the stringent technical work described above resulted in reliable data we could analyze to track safety culture, CLABSI rates, and compliance with the evidence-based ventilator care bundle.11

Adaptive challenges are often difficult to clearly identify. Yet, the ability to pinpoint and manage these challenges is critical to the success of any improvement initiative. Adaptive work (context) involves changing hearts, minds, and behaviors and typically takes longer than technical work, since it generates disequilibrium in existing systems. Individuals who care deeply but may not have formal authority to mandate change are often the champions of adaptive work. Adaptive challenges share several properties:12

  • There is a gap between aspirations and reality.
  • Narrowing the gap requires difficult changes.
  • The people with the problem are the problem as well as the solution. Problem-solving responsibility must shift from authoritative experts to those required to change their hearts, minds, or behaviors for resolution to occur.
  • Solving adaptive challenges requires moving beyond a comfort zone or “stepping outside the box.”

In KICU and our other collaborative initiatives, the adaptive challenges far outweighed the technical work. We believe success was tied to addressing both effectively. As an example, disseminating the evidence supporting strategies to reduce CLABSI and the methods to measure these infections was a technical problem, but getting clinicians to ensure that every patient reliably received these evidence-based interventions was an adaptive challenge.

Teams were encouraged to share data showing the gap between the project goals and the unit’s current performance. Many teams reported sharing data throughout their unit, often posting performance reports on unit bulletin boards or in staff lounges or conference rooms. We also heard from teams that routinely reported their data at medical and nursing staff meetings, project team meetings with their executive partner, and management and board meetings. Thus, the technical work of rigorous data collection and transparency of performance reinforced the need to modify practice behaviors.13

The importance of addressing both content (“technical work”) and context (“adaptive work”) in quality improvement studies is not new, although most published studies have focused on content (e.g., evidence-based medicine).14, 15, 16 During this project, we developed a change model designed to help participants understand and address both technical and adaptive challenges.13 This model targets senior leaders, team leaders, and staff through four phases of work. The first and third phases are primarily, though not exclusively, adaptive; the second and fourth are primarily, though not exclusively, technical. The phases are:

  1. Engage staff in the importance of the work through stories, sharing of baseline data, and identification of performance gaps.
  2. Educate staff about the evidence supporting practices known to improve outcomes.
  3. Execute the work to ensure patients in the organization receive these evidence-based practices.
  4. Evaluate the outcome to answer whether we are safer.

Each phase of the change model is important, but the engagement phase deserves detailed discussion here because it provides the foundation for all other phases and relies on a role-specific understanding of how the requested interventions will make the world better. Achieving engagement requires capturing both the “head” and the “heart” of individuals.17 Our cognitive engagement strategies included the use of baseline performance reports to estimate the number of preventable infections, deaths, and potentially avoidable ICU days, compared with optimal performance. While such estimates are helpful as tools for engagement, they are fraught with bias and should be used cautiously, if at all, to evaluate outcomes of a collaborative. Effective emotional engagement strategies included the use of stories about patients who developed a preventable complication (e.g., CLABSI) and the impact it had on their lives.

It is critical to communicate regularly and transparently with project leaders and frontline staff both to establish and maintain this engagement. Enthusiasm is often easy to muster at the beginning of a project because it is new and offers hope for better clinical outcomes, camaraderie, and improved teamwork. However, sustaining this enthusiasm requires real and constant work at the project leadership and local levels.

Teams that consistently reported their data to all levels of the organization (from senior leaders to bedside staff) perceived the data as valid, continued to bring forth stories of harm, and seemed to build an institutional support network that sustained project vitality. Moreover, the collaborative created a virtual learning community that enabled teams to learn together and from each other. This support system helped many solve local adaptive challenges. As coordinators of the collaborative, we periodically sent aggregate project updates to hospital leaders and ICU teams. We also encouraged local teams to provide monthly project updates to their senior leaders.

Strive to Find the “Sweet Spot” Between Scientifically Sound and Feasible Interventions and Measures

We learned that an important principle for effective collaborative work, and likely quality improvement in general, is to find a balance between what is scientifically sound and what is feasible. We selected interventions with the strongest evidence (e.g., the lowest number needed to treat) and the fewest barriers to implementation. For example, from the Centers for Disease Control and Prevention’s nearly 100-page guideline for preventing catheter-related infections, we culled five behaviors that were closely associated with reduced infection rates and were also easy to implement with few barriers.18 Educating staff about these five simple behaviors was sound, feasible, and effective.6
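The “number needed to treat” criterion can be made concrete with a small calculation. The sketch below uses made-up rates for illustration only; the function name and numbers are our own, not data from the project:

```python
def number_needed_to_treat(control_rate: float, treated_rate: float) -> float:
    """NNT = 1 / absolute risk reduction (ARR)."""
    arr = control_rate - treated_rate
    if arr <= 0:
        raise ValueError("intervention shows no absolute risk reduction")
    return 1 / arr

# Illustrative (made-up) rates: a 5% infection risk without an
# intervention and 1% with it gives ARR = 0.04, so roughly 25 patients
# must receive the intervention to prevent one infection.
nnt = number_needed_to_treat(0.05, 0.01)
```

A lower NNT signals a larger per-patient benefit, which is one reason to favor interventions with the lowest NNT when resources for implementation are limited.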

Accurately evaluating the impact of any quality improvement collaborative is contingent upon the data teams provide. Thus, respecting both the importance of science and the realities of data collection burden, we chose to minimize the quantity of data collected without sacrificing data quality. For example, we used a rigorous definition for CLABSI data, but we did not collect data on the types of organisms causing the infection. We also implemented a robust data quality control program that sought to reduce measurement error and missing data.
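As an illustration of how lean this data set can be, the headline CLABSI measure needs only two elements per unit-month. The sketch below uses hypothetical numbers and assumes the standard per-1,000-catheter-days denominator; it is not the project’s actual analysis code:

```python
from statistics import median

def clabsi_rate(infections: int, catheter_days: int) -> float:
    """CLABSI rate expressed as infections per 1,000 catheter-days."""
    if catheter_days <= 0:
        raise ValueError("catheter-days must be positive")
    return infections / catheter_days * 1000

# Hypothetical unit-month submissions: (infections, catheter-days).
reports = [(0, 310), (2, 480), (0, 150), (1, 620), (0, 275)]
rates = [clabsi_rate(i, d) for i, d in reports]

# Summarizing with the median across units keeps the headline figure
# robust to a few high-rate outliers; here the median is zero even
# though two units reported infections.
median_rate = median(rates)
```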

Match Project Goals, Objectives, and Database Design from the Outset

A major challenge for quality and safety improvement projects is the eagerness of teams to forge ahead without clearly defined goals, a plan to measure progress toward those goals, an estimate of baseline performance, and a system for measuring performance.2, 5 In our experience, the “just do it” mentality hinders or precludes the ability to estimate whether an intervention is associated with improved performance. Instead, before any work is done, it is important to spend ample time conveying explicit objectives and goals to teams and collecting enough baseline data to make a precise estimate for each measure. A database that allows you to efficiently and effectively manage your data is most often worth the time and resources required to develop or purchase and install it. Perhaps the best test of the database and the team’s ability to collect and submit data is to ensure that you have collected valid baseline data and can produce performance reports.

Stay Focused on Original Aims

A team’s enthusiasm to improve care or its frustration with the challenges of redesigning care often results in a desire to “add on” to a project, perhaps before the original goal is achieved. From our experience, project leaders should be clear and consistent, focus on the original project aims, and caution teams about adding more work. “Scope creep” will deplete scarce resources and add ambiguity when evaluating the impact of the original intervention(s) on the collaborative’s goal.

Moreover, when teams struggle to implement significant system redesign or achieve results, they often become frustrated and move on to a new intervention. In this collaborative, we asked teams to refrain from starting a new intervention until they achieved the stated goal(s) for the current intervention. For example, if teams were working on the CLABSI interventions, we urged them not to begin working on VAP until they had significantly reduced or eliminated CLABSI.

Link Culture Improvement and Clinical Outcomes

Organizations need to learn how to influence cultural change, just as they need to learn how to eliminate infections. Creating a culture of safety and teamwork can enhance a team’s capacity to implement clinical interventions and increase the likelihood of sustaining the results achieved.19, 20 Culture and climate are used interchangeably in many industries, yet they are conceptually distinct. Culture comprises the underlying values, beliefs, and assumptions that make an organization or group distinct, whereas climate is a snapshot of culture at a specific point in time. Climate can be measured using survey instruments.

At the beginning of the KICU project, teams measured safety and teamwork climate using the Safety Attitudes Questionnaire (SAQ).21 Teams were presented their SAQ scores relative to an aggregated score of the other ICU teams in Michigan and were asked to meet with their CEOs to review the results. Following this meeting, teams implemented the Comprehensive Unit-based Safety Program to improve culture.22

We found that teams wanted to understand why safety culture was important, identify their current culture, and implement interventions intended to improve culture. Culture change is not likely to occur by ordering clinicians to cooperate, work more collaboratively, or communicate more effectively. SAQ scores provided a context for teams to discuss their safety culture and evaluate their annual progress. Teams with the best safety and teamwork climate also demonstrated the most rapid improvement in CLABSI rates.a Additional analyses suggest similar associations between safety climate and other clinical outcomes.

Minimize the Bias in Data Collection

Because quality improvement studies are often conducted with less rigor and resources than clinical studies, and because we have a strong desire to improve safety, there is a propensity to perceive that an intervention improved safety when, in reality, it may not have (i.e., type I error).2, 5 Project leaders can take several steps to help reduce these errors:

  • First, create a manual of operations that includes a data dictionary with explicit definitions for each variable. The numerator (events), those at risk for the event (denominator), and methods to monitor both variables should also be explicitly defined.
  • Second, create, pilot test, and revise data collection forms.
  • Third, create a data quality control plan. At a minimum, this plan should include a process to train data collectors and evaluate their reliability, embed range checks in the database, generate monthly reports of missing data and outliers, and strategize ways to minimize missing data.23
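The steps above lend themselves to simple automation. The sketch below is hypothetical (the field names, plausibility ranges, and records are our own, not the project’s actual quality-control system) and shows how range checks and a missing-data report might work:

```python
# Hypothetical monthly submission records; None marks a missing field.
records = [
    {"unit": "ICU-A", "infections": 1, "catheter_days": 410},
    {"unit": "ICU-B", "infections": None, "catheter_days": 380},
    {"unit": "ICU-C", "infections": 0, "catheter_days": -5},
]

# Plausibility ranges, mirroring checks a database might embed.
RANGES = {"infections": (0, 100), "catheter_days": (1, 5000)}

def audit(records):
    """Return (missing, out_of_range) issue lists for a monthly report."""
    missing, out_of_range = [], []
    for rec in records:
        for field, (lo, hi) in RANGES.items():
            value = rec[field]
            if value is None:
                missing.append((rec["unit"], field))
            elif not lo <= value <= hi:
                out_of_range.append((rec["unit"], field, value))
    return missing, out_of_range

# ICU-B is flagged for a missing infection count; ICU-C for an
# impossible catheter-day value, prompting follow-up with the team.
missing, bad = audit(records)
```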

Reduce the Quantity, Not the Quality, of Data

The quality of data is more important than the quantity of data when evaluating the impact of your intervention(s). Yet, many projects collect relatively large amounts of poor quality data. In evaluating whether an intervention was associated with improvements in quality or safety, we believe that biased data are worse than no data at all. Our efforts to engage and maintain physician involvement with this project were tied to our capacity to collect and report data that had face validity (i.e., the data are important and useful to those who are collecting and using the data). Nevertheless, we recognize that it is often difficult to find the correct balance between data that are feasible to collect and also scientifically sound. Quality improvement studies would be better served by collecting a smaller amount of high-quality data than a large amount of biased data. In the KICU project, we repeatedly had to scale back our desired list of data elements to those that were meaningful and feasible.

Keep a “Laser-Sharp” Focus on Patients

Hospitals, clinicians, and local cultures are unique, and sometimes conflict will arise. By keeping a “laser-sharp” focus on what is best for patients, project leaders can help overcome political and power struggles. At the beginning of our collaborative, we asked hospital leaders and local ICU teams to publicly commit that harming patients was untenable. This statement galvanized our virtual learning community to keep patients first.

Because poor communication is a primary cause of many sentinel events (www.jointcommission.org), and because we know that physicians and nurses often have different mental models that result in communication lapses, we introduced tools to improve patient-focused communication, teamwork, and ultimately, patient safety. One tool, called “the goals sheet,” focused attention on the patient by prompting clinicians during morning rounds to clearly outline a plan of care and identify potential safety hazards for each patient.24 This tool and others (e.g., multidisciplinary rounds, morning briefings) provided safe methods to practice teamwork behaviors that kept the patient as the central focus.

Another example was the CLABSI checklist. The goal of this tool was to reduce or eliminate the possibility of a patient developing a preventable blood stream infection. CLABSIs are common, costly, and often lethal and are best reduced or avoided by following the CDC guidelines to prevent infections. The CLABSI checklist provides an independent check, in which the assisting nurse checks off each evidence-based step that should be completed for line insertion. We asked executives to support the authority of nurses to “stop the line” if evidence-based practices were not followed during line insertion, and each executive complied.

This shift in authority has the potential to create conflict between the nurse and physician or other provider inserting the line. However, we learned that executive support, the ability to reference an organizational commitment to the concept that harm is untenable, and use of the CLABSI tool helped clinicians stay focused on the patient. While we do not pretend culture change is easy, maintaining a focus on what is best for patients often dispels any role-related tensions.

Expect the Project to Stall at Times

Rather than becoming frustrated or moving on to a new intervention when projects stall, leaders should listen to “the music beneath the words,” as Heifetz10 has explained. Hear what teams are struggling with and why. With a better understanding of local needs, leaders can work in concert with project staff to develop and implement a go-forward strategy. Not surprisingly, projects seemed to lose momentum most often because of adaptive challenges (e.g., changing behaviors).

When the local KICU teams appeared unusually overwhelmed, we put a hold on any new project activities, allowed the atmosphere to calm, and listened. Adaptive leaders do not provide answers. Instead, they frame the right questions, identify the current realities that need to be addressed, and challenge people to identify creative solutions.

We routinely tried to role-model adaptive leadership by leading with a single question. If the project was struggling, the question was often “Why?” Starting a dialogue about barriers with a wide participant audience allowed potential solutions to surface. In reflecting on why momentum periodically faltered, it was helpful to recognize and remember that change in and of itself is not a barrier to improvement. We know that if change is perceived to be positive, like winning the lottery, we welcome it with open arms. What people fear more than change is loss.10, 12 When teams reported local resistance to change, we suggested they try to bring to the surface and openly discuss what individuals or groups anticipated losing as a result of the change. These forums often resulted in a realization that loss(es) were more perceived than real. Often, projects fall apart because leaders assume tension means the project is not working. In fact, we observed that the tension points were where the greatest learning and forward momentum were occurring.

For example, at one point, teams were invited to voluntarily beta-test the Joint Commission ICU measures. A subset of teams agreed to do so, but it took far more work than many teams realized. Once the beta-test was completed, the test teams requested time to recoup their energy and catch up on local efforts. While adhering to the project plan was important, and some teams urged moving forward, we had agreed at the outset of the project that no one would be left behind. In spite of the tension, once reminded of our commitment to each other, all teams agreed to slow down implementation of the project plan.

During the hiatus, several teams developed internal project newsletters and shared project-related protocols they had developed with other KICU teams. This short but forced hiatus taught many teams the value of what Heifetz12 would describe as stepping off the dance floor and moving to the balcony—that is, observing the project from an objective distance, rather than from the day-to-day flow of activity. It also resulted in some new communication tools and project vitality.

At another juncture, many teams were experiencing physician leadership change and did not know how to cultivate a new leader. Engaging new physicians in an existing project was an important pause point for us. We did not modify expectations; we modified our project plan by adding coaching calls on physician engagement. During the calls, we provided tools (e.g., shadowing another practice domain) to facilitate mutual appreciation of roles and time demands. We also refined our monthly team checkup tool to collect additional team turnover information, so we could more effectively anticipate team needs.

Delays also may be caused by project management issues at the collaborative level. Well into the first year of the project, just as things were implemented and running smoothly, we experienced a turnover in project management at the hospital association’s Keystone Center for Patient Safety and Quality. This turnover resulted in missed project deadlines. Rather than exhibit frustration, however, we viewed it as an important opportunity to examine the departing project manager’s tasks and determine if any could be automated.

Our leading question was “How?” How could we continue this important work with fewer disruptions the next time we experienced the inevitable staff turnover? It was during this unexpected stall that we created new electronic capabilities for communicating with participants and for receiving local data, including an enhanced participant Web site.

Improve Upon Quality Improvement Models

Quality improvement methods have varied over the past two decades, but most have incorporated a bias toward action over evidence. In spite of weak study designs, the effectiveness of these efforts was often accepted based on anecdotal accounts of success and intuition.1, 3, 4, 5, 14 Given the urgency to improve quality and the scarce resources devoted to quality improvement, we need to improve the efficiency and effectiveness of quality improvement efforts.

Our goal in conducting collaborative projects with methodologic rigor and strict data management is not to conduct research for the sake of publication or to impose burden on caregivers. Our goal is to learn with confidence what works to improve clinical outcomes and patient safety to benefit all patients.

The KICU project reinforced our belief that inferences regarding the benefit of a quality improvement intervention must be made with scientific rigor. We learned from participants that our approach was different from many, and in some cases all, previous quality improvement efforts in which they had been involved. We had a clear and articulated project plan, including a hypothesis/objective, study design, explicit interventions with timeframes for implementation, and valid outcome measures. Data collection forms and procedures were thoughtfully developed and pilot-tested for clarity and reliability. As these evolved during the project, we noted when and why, so changes could be accounted for in the analysis. We provided training for personnel collecting the data. The research team regularly reviewed submitted data to minimize the risk for data entry errors and contacted teams for resolution when data were missing or suspect. As in clinical research, we committed to describe, conduct, and report the analyses appropriately. This included reporting missing data, accounting for nonindependence of outcome data, adjusting for confounders, and providing an estimate of precision for results. We understand that the knowledge or skills within an institution to adhere to these criteria may not always exist.

However, suggesting that data for quality improvement can be held to a different standard than data for research does not serve patients or the goal of improving care.2 In our experience, we observed that teams follow in the shadow of the leader. Within a supportive shadow, where rigorous study constructs are provided, expectations are clear, and those with analytic skill provide regular feedback and support, all teams are capable of rising to the challenges and expectations of collecting and submitting valid data.

Conclusion

The science of how to broadly improve quality of care is growing. We believe that these efforts require both technical and adaptive work, and both need to be done well. In this paper, we outlined some of the insights we gained while leading a large and successful collaborative. We learned lessons from the KICU teams that have infused all of our subsequent patient safety efforts.

Resources are too scarce and the need to improve is too great to support quality improvement activities that are inefficient or ineffective. Project coordinators should strive to make certain that frontline wisdom is respected and reflected in the work; that resources needed to conduct the work are part of leadership’s commitment to participate, a commitment that must be honored; and that the impact of each intervention is measured in a manner that will allow the industry to understand whether clinical outcomes improved and whether patients are safer because of our efforts.

Acknowledgments

We thank the MHA Keystone Center, executives at each of the participating hospitals, and all the ICU teams for their tremendous efforts and their dedication to improving quality of care and the safety of their patients. We also thank Christine G. Holzmueller, BLA, for her assistance in editing this manuscript.

The Agency for Healthcare Research and Quality provided financial support under grant number 1UC1HS14246 for the Keystone ICU project but had no role in the design or conduct of the study or the collection, management, analysis, and interpretation of the data. Dr. Pronovost and Ms. Goeschel both receive honoraria for speaking about improving quality and patient safety.

References

1.
Mittman BS. Creating the evidence base for quality improvement collaboratives. Ann Intern Med. 2004;140:897–901. [PubMed: 15172904]
2.
Auerbach AD, Landefeld CS, Shojania KG. The tension between needing to improve care and knowing how to do it. N Engl J Med. 2007;357:608–613. [PubMed: 17687138]
3.
Plsek PE. Collaborating across organizational boundaries to improve the quality of care. Am J Infect Control. 1997;25:85–95. [PubMed: 9113283]
4.
Ovretveit J, Bate P, Cleary P, et al. Quality collaboratives: Lessons from research. Qual Saf Health Care. 2002;11:345–351. [PMC free article: PMC1757995] [PubMed: 12468695]
5.
Shojania KG, Grimshaw JM. Evidence-based quality improvement: The state of the science. Health Aff. 2005;24:138–150. [PubMed: 15647225]
6.
Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006;355:2725–2732. [PubMed: 17192537]
7.
Epstein AM. Pay for performance at the tipping point. N Engl J Med. 2007;356:515–517. [PubMed: 17259445]
8.
French JRP, Raven B. The bases of social power. In: Cartwright D, editor. Studies in social power. Ann Arbor, MI: University of Michigan Press; 1959.
9.
Woolston JL. Implementing evidence-based treatments in organizations. J Am Acad Child Adolesc Psychiatry. 2005;44:1313–1316. [PubMed: 16292125]
10.
Heifetz RA. Leadership without easy answers. Cambridge, MA: The Belknap Press of Harvard University Press; 1994.
11.
Berenholtz SM, Milanovich S, Faircloth A, et al. Improving care for the ventilated patient. Jt Comm J Qual Saf. 2004;30:195–204. [PubMed: 15085785]
12.
Heifetz RA, Linsky M. Leadership on the line: Staying alive through the dangers of leading. Boston: Harvard Business School Press; 2002.
13.
Pronovost PJ, Berenholtz SM, Goeschel CA, et al. Creating high reliability in healthcare organizations. Health Serv Res. 2006;41:1599–1617. [PMC free article: PMC1955341] [PubMed: 16898981]
14.
Leatherman S. Optimizing quality collaboratives. Qual Saf Health Care. 2002;11:307. [PMC free article: PMC1758014] [PubMed: 12468688]
15.
Graham ID, Logan J, Harrison MB, et al. Lost in knowledge translation: Time for a map? J Contin Educ Health Prof. 2006;26:13–24. [PubMed: 16557505]
16.
Shortell SM, Rundall TG, Hsu J. Improving patient care by linking evidence-based medicine and evidence-based management. JAMA. 2007;298:673–676. [PubMed: 17684190]
17.
Kotter JP. The heart of change: Real life stories of how people change their organizations. Boston: Harvard Business School Press; 2002.
18.
Berenholtz SM, Pronovost PJ, Lipsett PA, et al. Eliminating catheter-related bloodstream infections in the intensive care unit. Crit Care Med. 2004;32:2014–2020. [PubMed: 15483409]
19.
Thomas EJ, Sexton JB, Lasky RE, et al. Teamwork and quality during neonatal care in the delivery room. J Perinatol. 2006;26:163–169. [PubMed: 16493432]
20.
Sexton JB, Makary MA, Tersigni AR, et al. Teamwork in the operating room: Frontline perspectives among hospitals and operating room personnel. Anesthesiology. 2006;105:877–884. [PubMed: 17065879]
21.
Sexton JB, Helmreich RL, Neilands TB, et al. The safety attitudes questionnaire: Psychometric properties, benchmarking data, and emerging research. BMC Health Serv Res. 2006;6:44. [PMC free article: PMC1481614] [PubMed: 16584553]
22.
Pronovost P, Weast B, Rosenstein B, et al. Implementing and validating a comprehensive unit-based safety program. J Pat Saf. 2005;1:33–40.
23.
Pronovost PJ, Berenholtz SM, Goeschel CA. Commentary: Improving the quality of measurement and evaluation in quality improvement efforts. Am J Med Qual. 2008 Jan 29 [Epub ahead of print] [PubMed: 18230870]
24.
Pronovost P, Berenholtz S, Dorman T, et al. Improving communication in the ICU using daily goals. J Crit Care. 2003;18:71–75. [PubMed: 12800116]

Footnotes

a

Bryan Sexton, PhD, unpublished data, September 1, 2007.
