Institute of Medicine (US) Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington (DC): National Academies Press (US); 2001.
Substantial investments have been made in clinical research and development over the last 30 years, resulting in an enormous increase in the medical knowledge base and the availability of many more drugs and devices. Unfortunately, Americans are not reaping the full benefit of these investments. The lag between the discovery of more efficacious forms of treatment and their incorporation into routine patient care is unnecessarily long, in the range of about 15 to 20 years (Balas and Boren, 2000). Even then, adherence of clinical practice to the evidence is highly uneven.
A far more effective infrastructure is needed to apply evidence to health care delivery. Greater emphasis should be placed on systematic approaches to analyzing and synthesizing medical evidence for both clinicians and patients. Many promising private- and public-sector efforts now under way, including the Cochrane Collaboration, the ACP Journal Club, and the Evidence-Based Practice Centers supported by the Agency for Healthcare Research and Quality, represent excellent models and building blocks for a more comprehensive effort. Yet synthesizing the evidence is only the first step in making knowledge more usable by both clinicians and patients. Many efforts to develop clinical practice guidelines, defined as “systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances,” flourished during the 1980s and early 1990s (Institute of Medicine, 1992). Although the translation of evidence into clinical practice guidelines is an important step, the dissemination of guidelines alone has not been a very effective method of improving clinical practice (Cabana et al., 1999).
Far more sophisticated clinical decision support systems will be needed to assist clinicians and patients in selecting the best treatment options and delivering safe and effective care. Certain types of clinical decision support applications, most notably preventive service reminder systems and drug dosing systems, have been demonstrated to improve clinical decisions and should be adopted on a widespread basis (Balas et al., 2000; Bates et al., 1999). More complex applications, such as computer-aided diagnosis, are in earlier stages of development (Kassirer, 1994), but the potential for these systems to contribute to evidence-based practice and consumer-oriented care is great.
The spread of the Internet has opened up many new opportunities to make medical evidence more accessible to clinicians and consumers. The efforts of the National Library of Medicine to facilitate access to the medical literature by both consumers and health care professionals and to design Web sites that organize large amounts of information on particular health needs are particularly promising (Lindberg and Humphreys, 1999).
The development of a more effective infrastructure to synthesize and organize evidence around priority conditions and to improve clinician and consumer access to the evidence base through the Internet offers new opportunities to enhance quality measurement and reporting. A stronger and more organized evidence base should facilitate the development of valid and reliable quality measures for priority conditions that can be used for both internal quality improvement and external accountability. Broad-based involvement of private- and public-sector groups and strong leadership from within the medical and other health professions are critical to ensuring the success of this effort.
Recommendation 8: The Secretary of the Department of Health and Human Services should be given the responsibility and necessary resources to establish and maintain a comprehensive program aimed at making scientific evidence more useful and accessible to clinicians and patients. In developing this program, the Secretary should work with federal agencies and in collaboration with professional and health care associations, the academic and research communities, and the National Quality Forum and other organizations involved in quality measurement and accountability.
The infrastructure developed through this public- and private-sector partnership should focus initially on priority conditions (see Chapter 4, Recommendation 5). Its activities should include the following:
- Ongoing analysis and synthesis of the medical evidence
- Delineation of specific practice guidelines
- Enhanced dissemination efforts to communicate evidence and guidelines to the general public and professional communities
- Development of decision support tools to assist clinicians and patients in applying the evidence
- Identification of best practices in the design of care processes
- Development of quality measures for priority conditions
It is critical that private-sector leaders, including health care professionals, other health care leaders, and consumer representatives, be involved in all aspects of this effort to ensure its applicability and acceptability to clinicians and patients.
BACKGROUND
Early definitions of evidence-based medicine or practice emphasized the “conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients” (Sackett et al., 1996). In response to concerns that this definition failed to recognize the importance of other factors in making clinical decisions, more recent definitions explicitly incorporate clinical expertise and patient values into the decision-making process (Lohr et al., 1998). Contemporary definitions also clarify that “evidence” is intended to refer not only to randomized controlled trials, the “gold standard,” but also to other types of systematically acquired information.
For purposes of this report, the following definition of evidence-based practice, adapted from Sackett et al. (2000), is used:
Evidence-based practice is the integration of best research evidence with clinical expertise and patient values. Best research evidence refers to clinically relevant research, often from the basic health and medical sciences, but especially from patient-centered clinical research into the accuracy and precision of diagnostic tests (including the clinical examination); the power of prognostic markers; and the efficacy and safety of therapeutic, rehabilitative, and preventive regimens. Clinical expertise means the ability to use clinical skills and past experience to rapidly identify each patient's unique health state and diagnosis, individual risks and benefits of potential interventions, and personal values and expectations. Patient values refers to the unique preferences, concerns, and expectations that each patient brings to a clinical encounter and that must be integrated into clinical decisions if they are to serve the patient.
Evidence-based practice is not a new concept. One of its earliest proponents was Archie Cochrane, a British epidemiologist who wrote extensively in the 1950s and 1960s about the importance of conducting randomized controlled trials to upgrade the quality of medical evidence (Mechanic, 1998).
Evidence has always contributed to clinical decision making, but the standards for evidence have become more stringent, and the tools for its assembly and analysis have become more powerful and widely available (Davidoff, 1999). Prior to 1950, clinical evidence consisted of case reports, whereas during the latter half of the 20th century, results of about 131,000 randomized controlled trials of medical interventions were published. Study designs and methods of analysis have also become more sophisticated, and now include decision analysis, systematic review of the literature, meta-analysis, and cost-effectiveness analysis.
Prior to 1990, efforts to incorporate evidence-based decision making into practice encouraged clinicians to follow four steps. According to this approach, when a patient presents a problem for which the decision is not apparent, the clinician should (1) formulate a clear clinical question from that problem, (2) search for the relevant information from the best possible published or unpublished sources, (3) evaluate that evidence for its validity and usefulness, and (4) implement the appropriate findings (Davidoff, 1999).
During the last decade, it has become apparent that this strategy of training and encouraging clinicians to independently find, appraise, and apply the best evidence will not alone lead to major improvements in practice (Guyatt et al., 2000; McColl et al., 1998). The relevant information is widely scattered across the medical literature and of varying quality in terms of methodological rigor (Davidoff, 1999). Advanced study is required to master and apply state-of-the-art approaches to analysis of the literature. The demands and rigors of clinical practice do not allow clinicians the time required to undertake this process on a regular basis. Some have proposed a greater role for specially trained clinical librarians to assist clinicians in framing clinical questions and identifying the relevant literature (Davidoff and Florance, 2000). Many efforts are also under way to make it easier for clinicians and patients to access and interpret the findings of the literature.
SYNTHESIZING CLINICAL EVIDENCE
The most common approaches to synthesizing and integrating the results of primary studies are the conduct of systematic reviews and the development of evidence-based practice guidelines. Interest in applying both techniques has increased dramatically in the last 15 years (Chalmers and Haynes, 1994; Chalmers and Lau, 1993).
Systematic Reviews
Systematic reviews are scientific investigations that synthesize the results of multiple primary investigations. Conduct of a systematic review to answer a specific clinical question generally involves four steps (Cook et al., 1997):
- Conduct of a comprehensive search of potentially relevant articles using explicit, reproducible criteria in the selection of articles for review
- Critical appraisal of the scientific soundness of the research designs of the primary studies, including the selection of patients, sample size, and methods of accounting for confounding variables (Cook et al., 1997; Lohr and Carey, 1999)
- Synthesis of data
- Interpretation of results
There are two types of systematic reviews—qualitative and quantitative (Cook et al., 1997). In a qualitative review, the results of primary studies are summarized but not statistically combined. Quantitative reviews, sometimes called meta-analyses, use statistical methods to combine the data and results of two or more studies.
When applied properly, meta-analysis can be a powerful tool for reaching a decision about the efficacy of alternative treatments in a more timely fashion than is possible through the qualitative review of individual studies. A classic example is the case of the efficacy of thrombolysis in treating myocardial infarction (Davidoff, 1999). In a review of 33 randomized controlled trials published between 1959 and 1988 that examined the efficacy of thrombolysis in reducing acute mortality, it was found that most studies “suggested” some benefit of therapy; however, the outcomes varied considerably from one study to another, and for the most part, the studies did not achieve statistical significance (Lau et al., 1992). But through the use of meta-analysis techniques to combine the results of multiple studies (thus increasing the statistical power), it was possible to demonstrate by 1973 that the therapeutic efficacy of thrombolysis was statistically significant at the 0.05 level. Unfortunately, some medical textbooks in the early 1990s still contained statements that thrombolysis was an unproven therapy (Davidoff, 1999).
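To make the pooling mechanics concrete, the sketch below shows a minimal fixed-effect (inverse-variance) meta-analysis in Python. The study values are invented placeholders, not the Lau et al. data; the point is simply how several individually nonsignificant trials can yield a pooled estimate whose confidence interval excludes no effect.

```python
import math

# Illustrative (hypothetical) trials: (log odds ratio, standard error).
# Each trial alone is too small to reach statistical significance.
trials = [(-0.35, 0.30), (-0.20, 0.25), (-0.45, 0.40), (-0.25, 0.22)]

# Fixed-effect inverse-variance pooling: weight each study by 1 / SE^2.
weights = [1.0 / se**2 for _, se in trials]
pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled odds ratio: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(low):.2f}-{math.exp(high):.2f})")
```

With these placeholder numbers the pooled 95 percent confidence interval lies entirely below an odds ratio of 1.0, even though none of the individual trials is significant on its own, which is the effect the thrombolysis example illustrates.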
Systematic reviews are highly variable in their methodological rigor. In a critical evaluation of 50 articles describing a systematic review or meta-analysis of the treatment of asthma, for example, Jadad et al. (2000b) concluded that 40 publications had serious or extensive flaws. Reviews conducted by the Cochrane Collaboration, discussed below, were found to be far more rigorous than those published in peer-reviewed journals.
Two organized efforts are directed at conducting systematic reviews or meta-analyses. The first, the Cochrane Collaboration, was started in 1992 in Oxford, England. The second, the Agency for Healthcare Research and Quality's Evidence-Based Practice Centers program, started in 1997 and has resulted in the establishment of 12 centers, located mainly in universities, medical centers, and private research centers, that produce evidence-based reports on specific topics (Agency for Healthcare Research and Quality, 2000b).
The Cochrane Collaboration is an international network of health care professionals, researchers, and consumers that develops and maintains regularly updated reviews of evidence from randomized controlled trials and other research studies (Cochrane Collaboration, 1999). It currently comprises about 50 Collaborative Review Groups, which produce systematic reviews of various prevention and health care issues. The Collaboration maintains the Cochrane Library, a collection of several databases that is updated quarterly and distributed to annual subscribers on disk, on CD-ROM, and via the Internet. One of the databases, The Cochrane Database of Systematic Reviews, contains Cochrane reviews, and another, The Cochrane Controlled Trials Register, is a bibliographic database of controlled trials. The Database of Abstracts of Reviews of Effectiveness includes structured abstracts of systematic reviews that have been critically appraised by the National Health Service Centre for Reviews and Dissemination in York, England; the American College of Physicians' Journal Club; and the journal Evidence-Based Medicine. The library also includes a registry of bibliographic information on nearly 160,000 controlled trials that provide high-quality evidence on health care outcomes.
The Agency for Healthcare Research and Quality's 12 Evidence-Based Practice Centers conduct systematic, comprehensive analyses and syntheses of the scientific literature on clinical conditions/problems that are common, account for a sizable proportion of resources, and are significant for the Medicare or Medicaid populations (Agency for Healthcare Research and Quality, 2000b). The centers include universities (Duke University, The Johns Hopkins University, McMaster University, Oregon Health Sciences University, the University of California at San Francisco, and Stanford University); research organizations (Meta-Works, the Research Triangle Institute, and the RAND Corporation); and health care organizations and associations (New England Medical Center, and Blue Cross and Blue Shield Association). Since December 1998, evidence reports have been released on the following topics: sleep apnea, traumatic brain injury, alcohol dependence, cervical cytology, urinary tract infection, depression, dysphagia, sinusitis, testosterone suppression, attention deficit/hyperactivity disorder, and atrial fibrillation (Eisenberg, 2000a).
In response to the rapid increase in the volume of and interest in systematic reviews generated by the Cochrane Collaboration, the Evidence-Based Practice Centers, and many other smaller-scale efforts, numerous journals specializing in evidence-based publications have emerged. The first journal devoted exclusively to systematic reviews and meta-analyses was the ACP Journal Club, first published in 1991. There are now a number of evidence-based journals, including Evidence-Based Medicine, Journal of Evidence-Based Health Care, Evidence-Based Cardiovascular Medicine, Evidence-Based Mental Health, and Evidence-Based Nursing, as well as numerous “best-evidence” departments in other journals (Sackett et al., 2000).
One of the most recent evidence-based resources is Clinical Evidence, an “evidence formulary” resulting from a collaborative effort of the British Medical Journal and the American College of Physicians (Godlee et al., 1999). Clinical Evidence is noteworthy because of its focus and organization around common conditions. First published in June 1999, it includes summaries on the prevention and treatment of about 70 such conditions. The summaries are based on systematic reviews and, when these are lacking, individual randomized controlled trials. Clinical Evidence will be updated periodically, and eventually will lead to a family of products available in electronic and print form.
Practice Guidelines
Clinical practice guidelines can be defined as “systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances” (Institute of Medicine, 1992). Guidelines build on syntheses of the evidence, but go one step further to provide formal conclusions or recommendations about appropriate and necessary care for specific types of patients (Lohr et al., 1998). As a practical tool to influence practice, guidelines have been used in continuing medical education and clinical practice, as well as to make decisions about benefits coverage and medical necessity.
Guidelines have proliferated at a rapid pace during the last decade. During the early 1990s, the Agency for Health Care Policy and Research (now the Agency for Healthcare Research and Quality) sponsored an ambitious program for guideline development, which led to the specification of about 20 guidelines across a wide variety of clinical areas (Agency for Healthcare Research and Quality, 2000a; Perfetto and Stockwell Morris, 1996). The efforts in this area were eventually curtailed in favor of establishing the Evidence-Based Practice Centers in partnership with private-sector organizations (Lohr et al., 1998). Specialty societies, professional groups, health plans, medical centers, utilization review organizations, and others have also developed many practice guidelines.
Guidelines vary greatly in the degree to which they are derived from and consistent with the evidence base, for several reasons. First, as noted above, there is much variability in the quality of systematic reviews, which are the foundation for guidelines. Second, guideline development generally relies on expert panels to arrive at specific clinical conclusions. Judgment must be exercised in this process because the evidence base is sometimes weak or conflicting, or lacking in the specificity needed to develop recommendations useful for making decisions about individual patients in particular settings (Lohr et al., 1998).
In an effort to organize information on practice guidelines and to identify those having an adequate evidence base, the Agency for Healthcare Research and Quality, in partnership with the American Medical Association and the American Association of Health Plans, has developed a National Guideline Clearinghouse, which became fully operational in 1999 (Eisenberg, 2000a). The Clearinghouse provides online access to a large and growing repository of evidence-based practice guidelines.
Developing and disseminating practice guidelines alone has minimal effect on clinical practice (Cabana et al., 1999; Hayward, 1997; Lomas et al., 1989; Woolf, 1993). But a growing body of evidence indicates that guidelines implemented with patient-specific feedback and/or computer-generated reminders lead to significant improvements (Dowie, 1998; Grimshaw and Russell, 1993). More recent literature in this area also recognizes the importance of breaking down cultural, financial, organizational, and other barriers, both internal and external to health care organizations, to achieve widespread compliance with evidence-based guidelines (Solberg et al., 2000). To this end, up-front involvement of leaders from the health professions and representatives of patients in the guideline development process would likely help to ensure widespread adoption of the guidelines developed.
USING COMPUTER-BASED CLINICAL DECISION SUPPORT SYSTEMS
Until now, we have believed that the best way to transmit knowledge from its source to its use in patient care is to first load the knowledge into human minds …and then expect those minds, at great expense, to apply the knowledge to those who need it. However, there are enormous ‘voltage drops' along this transmission line for medical knowledge.—Lawrence Weed, 1997
A clinical decision support system (CDSS) is defined as software that integrates information on the characteristics of individual patients with a computerized knowledge base for the purpose of generating patient-specific assessments or recommendations designed to aid clinicians and/or patients in making clinical decisions.1 Work on such systems has been under way for decades with minimal impact on health care delivery. Interest in CDSSs has grown dramatically during the last decade, however, in part because of the promise such systems hold for assisting clinicians and patients in applying science to practice.
Publications reporting the results of clinical trials evaluating the effectiveness of CDSSs have also increased in number and quality in recent years. In a systematic review of controlled clinical trials assessing the effects of CDSSs on physician performance and patient outcomes, Hunt and colleagues identified 68 publications during the period 1974 through 1998, with 40 of these having been published in the most recent 6-year period (Hunt et al., 1998; Johnston et al., 1994).
CDSS applications assist clinicians and patients with three types of clinical decisions: preventive and monitoring tasks, prescribing of drugs, and diagnosis and management. Applications in the first category and most applications to date in the second category deal with less complex and frequently occurring clinical decisions. The software required to assist clinicians and patients with these types of decisions can be constructed using relatively simple rule-based logic, often based on practice guidelines (Delaney et al., 1999; Shea et al., 1996). Applications in the third category are far more complex and require more comprehensive patient-specific data, access to a much larger repository of up-to-date clinical knowledge, and more sophisticated probabilistic mathematical models.
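As a rough illustration of the rule-based logic described above, the following Python sketch encodes two hypothetical preventive-care reminder rules. The rule thresholds, field names, and patient record are placeholders invented for illustration; they are not recommendations drawn from this report or from any guideline.

```python
from datetime import date

def preventive_reminders(patient: dict, today: date) -> list[str]:
    """Return reminder messages triggered by simple, guideline-style rules.

    `patient` is a small record with age, sex, and dates of prior services;
    the rules and thresholds below are purely illustrative.
    """
    reminders = []

    # Rule 1 (hypothetical): influenza vaccination due annually for age >= 65.
    flu = patient.get("last_flu_vaccination")
    if patient["age"] >= 65 and (flu is None or (today - flu).days > 365):
        reminders.append("Influenza vaccination due")

    # Rule 2 (hypothetical): mammography every 2 years for women aged 50-74.
    mammo = patient.get("last_mammogram")
    if (patient["sex"] == "F" and 50 <= patient["age"] <= 74
            and (mammo is None or (today - mammo).days > 730)):
        reminders.append("Mammography screening due")

    return reminders

# Example usage with a hypothetical patient record.
patient = {"age": 68, "sex": "F", "last_flu_vaccination": date(1999, 10, 1)}
print(preventive_reminders(patient, date(2001, 1, 2)))
```

Because each rule depends only on a handful of structured fields, systems of this kind can run against routinely collected data, which is one reason reminder applications have spread faster than diagnostic ones.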
Use of a CDSS for prevention and monitoring purposes has been shown to improve compliance with guidelines in many clinical areas. In a meta-analysis of 16 randomized controlled trials, computer reminders were found to improve preventive practices for vaccinations, breast cancer screening, colorectal cancer screening, and cardiovascular risk reduction, but not for cervical cancer screening or other preventive services (e.g., glaucoma screening, TB skin test) (Shea et al., 1996). In another meta-analysis of 33 studies of the effect of prompting clinicians, 25 of which used computer-generated prompts, the technique was found to enhance performance significantly in all 16 preventive care procedures studied (Balas et al., 2000). Computer-generated reminder systems targeting patients have also been shown to be effective (Balas et al., 2000; McDowell et al., 1986, 1989).
Computerized prescribing of drugs offers great potential benefit in such areas as dosing calculations and scheduling, drug selection, screening for interactions, and monitoring and documentation of adverse side effects (Schiff and Rucker, 1998). Many studies have been conducted on the use of CDSSs to improve drug dosing, and most (9 out of 15) show some positive effect (Hunt et al., 1998). The use of CDSSs for drug selection, screening for interactions, and monitoring and documentation of adverse side effects is far more limited because these applications generally require the linkage of more comprehensive patient-specific clinical information with the medication knowledge base. Although comprehensive medication order entry systems have been implemented in only a limited number of health care settings, the results of several recent studies have demonstrated that these systems reduce medical errors and costs (Bates et al., 1997, 1998, 1999). Computer-assisted disease management programs in areas in which decision making about medications is complex, such as the use of antibiotic and anti-infective agents, also have been shown to have a positive impact on quality and cost reduction (Classen et al., 1992; Evans et al., 1998).
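One common building block of the dosing applications described above is a renal-function check before a dose is confirmed. The sketch below uses the Cockcroft-Gault estimate of creatinine clearance to flag a hypothetical dose-reduction threshold; the threshold and the alert wording are invented for illustration and are not clinical guidance.

```python
def creatinine_clearance(age: int, weight_kg: float,
                         serum_creatinine_mg_dl: float, sex: str) -> float:
    """Cockcroft-Gault estimate of creatinine clearance (mL/min)."""
    crcl = ((140 - age) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if sex == "F" else crcl

def dosing_alert(crcl_ml_min: float) -> str:
    # Hypothetical threshold for a hypothetical renally cleared drug.
    if crcl_ml_min < 30:
        return "ALERT: reduce dose or extend dosing interval (renal impairment)"
    return "No dose adjustment flagged"

crcl = creatinine_clearance(age=78, weight_kg=60, serum_creatinine_mg_dl=2.1, sex="F")
print(f"Estimated CrCl: {crcl:.0f} mL/min -> {dosing_alert(crcl)}")
```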
The third category, computer-assisted diagnostic and management aids, is by far the most challenging. These systems require (1) an expansive knowledge base covering the full range of diseases and conditions, (2) detailed patient-specific clinical information (e.g., history, physical examination, laboratory data), and (3) a powerful computational engine that employs some form of probabilistic decision analysis.
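A minimal sketch of the probabilistic reasoning such engines use is shown below: a naive Bayes update over a toy knowledge base of two diseases and three findings. The priors and likelihoods are invented numbers, and real diagnostic systems rely on far larger knowledge bases and more careful models; the sketch only illustrates how patient-specific findings shift the probabilities.

```python
# Toy knowledge base: prior probability of each disease and
# P(finding present | disease). All numbers are invented.
priors = {"disease_A": 0.01, "disease_B": 0.05}
likelihoods = {
    "disease_A": {"fever": 0.90, "cough": 0.70, "rash": 0.10},
    "disease_B": {"fever": 0.40, "cough": 0.80, "rash": 0.05},
}

def posterior(findings: dict[str, bool]) -> dict[str, float]:
    """Naive Bayes posterior over diseases given present/absent findings.

    For simplicity this normalizes over the listed diseases only.
    """
    scores = {}
    for disease, prior in priors.items():
        p = prior
        for finding, present in findings.items():
            p_f = likelihoods[disease][finding]
            p *= p_f if present else (1 - p_f)
        scores[disease] = p
    total = sum(scores.values())
    return {d: s / total for d, s in scores.items()}

print(posterior({"fever": True, "cough": True, "rash": False}))
```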
Interest in computer-assisted diagnosis goes back more than four decades, and yet there have been only a few evaluations of its performance (Kassirer, 1994). In a systematic review of 68 CDSS controlled trials between 1974 and 1998, Hunt and colleagues found only 5 studies (4 of the 5 published before 1990) that assessed the role of CDSSs in diagnosis, only one of which found a benefit from their use (Chase et al., 1983; Hunt et al., 1998; Pozen et al., 1984; Wellwood et al., 1992; Wexler et al., 1975; Wyatt, 1989).
These early studies generally evaluated how well a computer performed in making or generating plausible diagnoses as compared with the decisions of experts, not the ability of a computer in partnership with a practicing clinician to perform better than the clinician alone (Kassirer, 1994). One recent study compared the performance of practicing clinicians with and without the aid of a diagnostic CDSS, and found that clinicians using the system showed a significant improvement in the generation of correct diagnoses in hypothesis lists (Friedman et al., 1999). The study included faculty, residents, and fourth-year medical students; while all three groups performed better with the help of the computer, the magnitude of the improvement was greatest for students and smallest for faculty.
Studies conducted to date do not provide a convincing case in support of CDSS diagnostic tools. Yet it is important to recognize that changes under way in health care and computing will likely result in the development of far superior tools in the near future, for four reasons. First, CDSS diagnostic programs have been limited to date in terms of their clinical knowledge base. The cost of maintaining updated syntheses of the evidence for most conditions and translating these syntheses into decision rules has been prohibitively high for commercial developers of these systems. As discussed above, however, interest in evidence-based practice has led to a rapid increase in systematic reviews of the clinical evidence on particular clinical questions, which are available in the public domain.
Second, advances in computer technology, accompanied by dramatic decreases in the cost of hardware and software, have greatly reduced concerns about the computing requirements of CDSS diagnostic systems. Furthermore, there are early signs of CDSS diagnostic systems becoming available on the Internet, thus further reducing the capital investment and operational costs incurred at the level of a clinical practice (McDonald et al., 1998).
Third, the Internet has opened up new opportunities to address issues related to patient data. As noted, to be effective, CDSS diagnostic systems require detailed, patient-specific clinical information (history, physical results, medications, laboratory test results), which in most health care settings resides in a variety of paper and automated datasets that cannot easily be integrated. Past efforts to develop automated medical record systems have not been very successful because of the lack of common standards for coding data, the absence of a data network connecting the many health care organizations and clinicians involved in patient care, and a number of other factors. The Internet has the potential to overcome many of these barriers to automated patient data. The World Wide Web offers much of the standardization technology needed to combine independent sources of clinical data (McDonald et al., 1998). The willingness of patients and clinicians to use these systems will depend to a great extent on finding ways to adequately address concerns about the confidentiality of personally identifiable clinical information and a host of technical, legal, policy, and organizational issues that currently impede many health applications on the Internet. But numerous efforts are under way to address these issues as they apply to both the current and the next-generation Internet (Elhanan et al., 1996; National Research Council, 2000).
Fourth, the extraordinary advances achieved in molecular medicine in recent years will further increase the complexity of both the evidence base and the clinical decision-making process, making it imperative that clinicians use computer-aided decision supports. Molecular medicine introduces a huge new body of knowledge that will affect virtually every area of practice, and also opens up the possibility of developing individualized treatments linked to a patient's genetic definition (Rienhoff, 2000). CDSS programs offer the prospect of applying more sophisticated forms of decision analysis to the evaluation of various treatment options, taking into account both the patient's genetic definition and preferences (Lilford et al., 1998).
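The kind of decision analysis referred to here can be sketched as a simple expected-utility calculation that weights each treatment's possible outcomes by patient-specific probabilities and patient-supplied utilities. The option names, probabilities, and utility values below are invented placeholders; a real analysis would draw them from the evidence base and from elicitation with the individual patient.

```python
# Hypothetical treatment options: outcome probabilities (which might one day
# be conditioned on a patient's genetic profile) mapped to named outcomes.
options = {
    "treatment_X": {"full_recovery": 0.60, "partial_recovery": 0.30, "no_benefit": 0.10},
    "treatment_Y": {"full_recovery": 0.45, "partial_recovery": 0.50, "no_benefit": 0.05},
}
# Utilities elicited from the patient (0 = worst, 1 = best); invented values.
patient_utilities = {"full_recovery": 1.0, "partial_recovery": 0.7, "no_benefit": 0.2}

def expected_utility(outcome_probs: dict[str, float]) -> float:
    """Probability-weighted sum of the patient's utilities for each outcome."""
    return sum(p * patient_utilities[outcome] for outcome, p in outcome_probs.items())

best = max(options, key=lambda name: expected_utility(options[name]))
for name, probs in options.items():
    print(f"{name}: expected utility {expected_utility(probs):.2f}")
print(f"Preferred under these preferences: {best}")
```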
Given the potential of CDSSs to enhance evidence-based practice and provide greater opportunity for patients to participate in clinical decision making, the committee believes greater public investment in research and development on such systems is warranted. In fiscal year 1999, the Agency for Healthcare Research and Quality began a new initiative, Translating Research into Practice, aimed at implementing evidence-based tools and information in health care settings (Eisenberg, 2000a). The focus of the initiative is on cultivating partnerships between researchers and health care organizations for the conduct of practice-based, patient outcome research in applied settings. In fiscal year 1999, 3-year grants were awarded in support of projects to identify effective approaches to smoking cessation, chlamydia screening of adolescents, diabetes care in medically underserved areas, and treatment of respiratory distress syndrome in preterm infants. The resources for this program should be expanded to support an applied research and development agenda specific to CDSSs.
MAKING INFORMATION AVAILABLE ON THE INTERNET
The Internet is rapidly becoming the principal vehicle for communication of health information to both consumers and clinicians. It is predicted that 90 percent of households will have Internet access by 2005–2010 (Rosenberg, 1999). The number of Americans who use the Internet to retrieve health-related information is estimated to be about 70 million (Cain et al., 2000). The connectivity of health care organizations has also increased. For example, between 1993 and 1997, the percentage of academic medical libraries with Internet connections increased from 72 to 96 percent, and that of community hospital libraries rose from 24 to 72 percent (Lyon et al., 1998).
The volume of health care information available on the Internet is enormous. Estimates of the number of health-related Web sites vary from 10,000 to 100,000 (Benton Foundation, 1999; Eysenbach et al., 1999). A survey conducted by USA Today found that consumers access health-related Web sites to research an illness or disease (62 percent), seek nutrition and fitness information (20 percent), research drugs and their interactions (12 percent), find a doctor or hospital (4 percent), and look for online medical support groups (2 percent) (USA Today, 1998).
It is easy for a user to be overwhelmed by the volume of information available on the Web. For example, there are some 61,000 Web sites that contain information on breast cancer (Boodman, 1999), and a simple search for “diabetes mellitus” returns more than 40,000 sites (National Research Council, 2000). Information available on the Internet is also of varying quality: some is incorrect, and some is misleading (Achenbach, 1996; Biermann et al., 1999). Several options have been proposed to assist users in distinguishing the good information from the bad. Silberg et al. (1997) have encouraged Web site sponsors to adhere voluntarily to a set of rules including (1) inclusion of information on authors, along with their affiliations and credentials; (2) attribution, including references and sources for all content; (3) disclosure of Web site ownership, sponsorship, advertising, underwriting, commercial funding, and potential conflicts of interest; and (4) dates on which content was posted and updated.
To identify valuable information, users can rely on a number of rating services that review and rate Web sites, but there are problems with many of these rating services as well. In a recent review, Jadad and Gagliardi (1998) identified 47 rating services, of which only 14 provided a description of the criteria used to produce the ratings, and none gave information on interobserver reliability or construct validity.
One of the richest sources of clinical information on the Internet is the National Library of Medicine's (NLM) MEDLINE. MEDLINE contains more than 9 million citations and abstracts of articles drawn mainly from professional journals (Miller et al., 2000). In June 1997, NLM made MEDLINE available free of charge on the Web, and usage jumped about 10-fold to 75 million searches annually (Lindberg and Humphreys, 1998).
When MEDLINE was established, it was assumed that its primary audience would be health care professionals, but it is now recognized that the lay public has a keen interest in accessing the clinical knowledge base as well. It is estimated that about 30 percent of MEDLINE searches are by members of the general public and students, 34 percent by health care professionals, and 36 percent by researchers (Lindberg, 1998). In 1998, NLM added 12 consumer health journals to MEDLINE to increase its coverage of information written for the general public, and also launched MEDLINEplus, a Web site specifically for consumers (Lindberg and Humphreys, 1999). MEDLINEplus is divided into eight sections (e.g., health topics, databases, organizations, clearinghouses), each of which provides links to reputable Web sites maintained by the National Institutes of Health, the Centers for Disease Control and Prevention, the Food and Drug Administration, and professional organizations and associations.
The MEDLINEplus section HealthTopics provides users with access to pre-formulated MEDLINE searches on common topics, most of which are diseases or conditions. The topics included were identified through an analysis of the most common search terms used on the NLM home page, which revealed that 90 percent or more were for specific diseases, conditions, or other common medical terms (e.g., Viagra, St. John's Wort) (Miller et al., 2000). The HealthTopics list numbers more than 300, with some of the most frequently searched topics being diabetes, shingles, prostate, hypertension, asthma, lupus, fibromyalgia, multiple sclerosis, and cancer.
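The idea of a pre-formulated search can be made concrete with a short sketch. The code below issues a canned MEDLINE query through NCBI's present-day E-utilities web interface; that interface and its parameters are not described in this report, and the search term is purely illustrative.

```python
from urllib.request import urlopen
from urllib.parse import urlencode
import json

def medline_search(term: str, max_results: int = 5) -> list[str]:
    """Return PubMed IDs for a canned MEDLINE search term."""
    params = urlencode({
        "db": "pubmed",       # search the MEDLINE/PubMed database
        "term": term,         # the pre-formulated query string
        "retmax": max_results,
        "retmode": "json",
    })
    url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"
    with urlopen(url) as response:
        data = json.load(response)
    return data["esearchresult"]["idlist"]

# Example: a canned search resembling a consumer "health topic".
print(medline_search("diabetes mellitus AND patient education"))
```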
There are many other sources of filtered evidence-based information as well, including the Cochrane Library discussed above. Access to evidence-based guidelines is provided in the United States by the National Guideline Clearinghouse (sponsored by the Agency for Healthcare Research and Quality), the American Medical Association, and the American Association of Health Plans (Agency for Healthcare Research and Quality et al., 2000), and in Canada by the CPG Infobase (sponsored by the Canadian Medical Association) (Canadian Medical Association, 2000). NOAH (New York Online Access to Health) is a library collaboration for bilingual consumer health information on the Internet (Voge, 1998).
Thus many efforts are under way to assist users in accessing useful health care information on the Web. Some believe, however, that much more could be done to achieve a more “powerful and efficient synergy” between the Internet and evidence-based decision making (Jadad et al., 2000a).
DEFINING QUALITY MEASURES
The enhanced interest in and infrastructure to support evidence-based practice have implications for quality measurement, improvement, and accountability (Eisenberg, 2000b). The use of priority conditions as a framework for organizing the evidence base, as discussed in Chapter 4, may also have implications for external accountability programs.
Systematic reviews and practice guidelines provide a strong foundation for the development of a richer set of quality measures focused on medical care processes and outcomes. To date, a good deal of quality measurement for purposes of external accountability has focused on a limited number of “rate-based” indicators—rates of occurrence of desired or undesired events. The National Committee for Quality Assurance, through its Health Plan Employer Data and Information Set, makes comparative quality data available on participating health plans and includes such measures as childhood immunization rates, mammography rates, and the percentage of diabetics who had an annual eye exam (National Committee for Quality Assurance, 1999). The Joint Commission on the Accreditation of Healthcare Organizations sponsors the ORYX system for hospitals, which includes measures such as infection rates and postsurgical complication rates. Syntheses of the evidence base and the development of practice guidelines should contribute to more valid and meaningful quality measurement and reporting.
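To show what a rate-based indicator is in operational terms, the sketch below computes one such measure, the share of diabetic patients with an eye exam in the past year, from a small set of hypothetical patient records; the records and the one-year window are illustrative assumptions, not a specification from any measurement set.

```python
from datetime import date

# Hypothetical patient records; only the fields needed for the measure.
patients = [
    {"id": 1, "diabetic": True,  "last_eye_exam": date(2000, 5, 3)},
    {"id": 2, "diabetic": True,  "last_eye_exam": None},
    {"id": 3, "diabetic": False, "last_eye_exam": None},
    {"id": 4, "diabetic": True,  "last_eye_exam": date(1998, 11, 20)},
]

def annual_eye_exam_rate(records, as_of: date) -> float:
    """Rate-based indicator: diabetics with an eye exam in the last year."""
    denominator = [p for p in records if p["diabetic"]]
    numerator = [p for p in denominator
                 if p["last_eye_exam"] and (as_of - p["last_eye_exam"]).days <= 365]
    return len(numerator) / len(denominator)

print(f"Annual eye exam rate: {annual_eye_exam_rate(patients, date(2001, 1, 2)):.0%}")
```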
As systematic reviews, development of practice guidelines, and efforts to disseminate evidence focus increasingly on priority conditions—a unit of analysis that is meaningful to patients and clinicians—so, too, must accountability processes. To date, efforts to make comparative quality data available in the public domain have focused on types of health care organizations, for the most part health plans and hospitals, and, as noted above, measurement of a limited number of discrete quality indicators for these organizations. Numerous efforts are under way, however, to develop comprehensive measurement sets for various conditions and quality reporting mechanisms. These include the efforts of the Foundation for Accountability, the Health Care Financing Administration's peer review organizations, and a variety of collaborations involving leading medical associations and accrediting bodies.
The Foundation for Accountability (2000b) has developed condition-specific measurement guides related to a number of common conditions: adult asthma, alcohol misuse, breast cancer, diabetes, health status under age 65, and major depressive disorders. The Foundation continues to work on child and adolescent health, coronary heart disease, end of life, and HIV/AIDS. In addition, it has created FACCT|ONE, a survey tool designed to gather information directly from patients about important aspects of their health care (Foundation for Accountability, 2000a). The first phase of the survey addresses quality of care for people living with the chronic illnesses of asthma, diabetes, and coronary artery disease. It assesses performance related to patient education and knowledge, obtaining of essential treatments, access, involvement in care decisions, communication with providers, patient self-management behaviors, coping, symptom control, maintenance of regular activities, and functional status.
Since 1992, the Health Care Financing Administration, through its Peer Review Organizations, has been developing core sets of performance measures for a number of common conditions, including acute myocardial infarction, heart failure, stroke, pneumonia, breast cancer, and diabetes (Health Care Financing Administration, 2000). Comparative performance data for Medicare fee-for-service beneficiaries by state were recently released for each of these conditions (Jencks et al., 2000). Quality-of-care measures for beneficiaries experiencing acute myocardial infarction have been piloted in four states as part of the Cooperative Cardiovascular Project (Ellerbeck et al., 1995; Marciniak et al., 1998).
The Diabetes Quality Improvement Project, a collaborative quality measurement effort involving the American Diabetes Association, the Foundation for Accountability, the Health Care Financing Administration, the National Committee for Quality Assurance, the American Academy of Family Physicians, the American College of Physicians, and the Veterans Administration, has been under way for several years. The project has identified seven accountability measures (i.e., hemoglobin A1c tested, poor hemoglobin A1c control, eye exam performed, lipid profile performed, lipids controlled, monitoring for kidney disease, and blood pressure controlled), six of which will be included in the National Committee for Quality Assurance's Year 2000 Health Plan Employer Data and Information Set (Health Care Financing Administration, 1999).
The American Medical Association, working with experts from national medical specialty societies and the quality measurement community, has developed measure sets for physician clinical performance in the areas of adult diabetes, prenatal testing, and chronic stable coronary artery disease. The core measure set for adult diabetes, developed with input from the Iowa Foundation for Medical Care, was approved by the American Medical Association in July 2000, while the other two measure sets are undergoing public review and comment (American Medical Association, 2000).
It will be important for the National Quality Forum, a not-for-profit public-private partnership established in 1999 with the participation of both public and private purchasers to foster collaboration across public and private oversight organizations, to consider carefully how best to align comparative quality reporting with the developing infrastructure in support of evidence-based practice and consumer-centered health care. The Forum is currently developing a strategic measurement framework to guide the future development of external quality reporting for purposes of accountability and consumer choice (Kizer, 2000). This activity presents a unique opportunity to influence the direction of quality measurement.
REFERENCES
- Achenbach, Joel. Reality Check. You Can't Believe Everything You Read. But You'd Better Believe This. Washington Post . E-C01, Dec. 4, 1996.
- Agency for Healthcare Research and Quality. 2000. a. “Clinical Practice Guidelines Online.” Online. Available at http://www.ahcpr.gov/clinic/cpgonline.htm [accessed Jan. 2, 2001].
- ——. 2000. b. “Evidence-based Practice Centers. Synthesizing Scientific Evidence to Improve Quality and Effectiveness in Clinical Care. AHRQ Publication No. 00-P013.” Online. Available at http://www.ahcpr.gov/clinic/epc/ [accessed Oct. 11, 2000].
- Agency for Healthcare Research and Quality, American Medical Association, and American Association of Health Plans. 2000. “National Guideline Clearinghouse.” Online. Available at http://www.guideline.gov [accessed Jan. 2, 2001].
- American Medical Association. Adult Diabetes Core Physician Performance Measurement Set. Chicago, IL: American Medical Association, 2000.
- Balas, E.Andrew and Suzanne A.Boren. Managing Clinical Knowledge for Health Care Improvement. Yearbook of Medical Informatics . National Library of Medicine, Bethesda, MD: 65–70,2000. [PubMed: 27699347]
- Balas, E.Andrew, Scott Weingarten, Candace T.Garb, et al. Improving Preventive Care by Prompting Physicians. Arch Int Med 160(3):301–8,2000. [PubMed: 10668831]
- Bates, David W., Lucian L.Leape, David J.Cullen, et al. Effect of Computerized Physician Order Entry and a Team Intervention on Prevention of Serious Medication Errors. JAMA 280(15): 1311–6,1998. [PubMed: 9794308]
- Bates, David W., Nathan Spell, David J.Cullen, et al. The Costs of Adverse Drug Events in Hospitalized Patients. JAMA 277(4):307–11,1997. [PubMed: 9002493]
- Bates, David W., Jonathan M.Teich, Joshua Lee, et al. The Impact of Computerized Physician Order Entry on Medication Error Prevention. J Am Med Inform Assoc 6(4):313–21,1999. [PMC free article: PMC61372] [PubMed: 10428004]
- Benton Foundation. 1999. “Networking for Better Care: Health Care in the Information Age.” Online. Available at http://www.benton.org/Library/health/ [accessed Sept. 18, 2000].
- Biermann, J.Sybil, Gregory J.Golladay, Mary Lou V.H.Greenfield, and Laurence H.Baker. Evaluation of Cancer Information on the Internet. Cancer 86(3):381–90,1999. [PubMed: 10430244]
- Boodman, Sandra G. Medical Web Sites Can Steer You Wrong. Study Finds Erroneous and Misleading Information on Many Pages Dedicated to a Rare Cancer. Washington Post . Health-Z07, Aug. 10, 1999.
- Cabana, Michael D., Cynthia S.Rand, Neil R.Powe, et al. Why Don't Physicians Follow Clinical Practice Guidelines? A Framework for Improvement. JAMA 282(15):1458–65,1999. [PubMed: 10535437]
- Cain, Mary M., Robert Mittman, Jane Sarasohn-Kahn, and Jennifer C.Wayne. Health e-People: The Online Consumer Experience . Oakland, CA: Institute for the Future, California Health Care Foundation, 2000.
- Canadian Medical Association. 2000. “CMA Infobase—Clinical Practice Guidelines.” Online. Available at http://www.cma.ca/cpgs/index.asp [accessed Jan. 2, 2001].
- Chalmers, Iain and Brian Haynes. Systematic Reviews: Reporting, Updating, and Correcting Systematic Reviews of the Effects of Health Care. BMJ 309:862–5,1994. [PMC free article: PMC2541052] [PubMed: 7950620]
- Chalmers, T.C. and J.Lau. Meta-Analytic Stimulus for Changes in Clinical Trials. Statistical Methods in Medical Research 2:161–72,1993. [PubMed: 8261256]
- Chase, Christopher R., Pamela M.Vacek, Tamotsu Shinozaki, et al. Medical Information Management: Improving the Transfer of Research Results to Presurgical Evaluation. Medical Care 21(3):410–24,1983. [PubMed: 6843194]
- Classen, David C., R.Scott Evans, Stanley L.Pestotnik, et al. The Timing of Prophylactic Administration of Antibiotics and the Risk of Surgical-Wound Infection. N Engl J Med 326(5):281–6,1992. [PubMed: 1728731]
- Cochrane Collaboration. 1999. “Cochrane Brochure.” Online. Available at http://hiru.mcmaster.ca/cochrane/cochrane/cc-broch.htm [accessed Jan. 2, 2001].
- Cook, Deborah J., Cynthia D.Mulrow, and R.Brian Haynes. Systematic Reviews: Synthesis of Best Evidence for Clinical Decisions. Ann Int Med 126(5):376–80,1997. [PubMed: 9054282]
- Davidoff, Frank. In the Teeth of the Evidence. The Curious Case of Evidence-Based Medicine. The Mount Sinai Journal of Medicine 66(2):75–83,1999. [PubMed: 10100410]
- Davidoff, Frank and Valerie Florance. The Informationist: A New Health Profession? Ann Int Med 132(12):996–8,2000. [PubMed: 10858185]
- Delaney, Brendan C., David A.Fitzmaurice, Amjid Riaz, and F.D.Richard Hobbs. Changing the Doctor-Patient Relationship: Can Computerised Decision Support Systems Deliver Improved Quality in Primary Care? BMJ 319:1281,1999. [PMC free article: PMC1129060] [PubMed: 10559035]
- Dowie, Robin. A Review of Research in the United Kingdom to Evaluate the Implementation of Clinical Guidelines in General Practice. Family Practice 15(5):462–70,1998. [PubMed: 9848434]
- Eisenberg, John M. Quality Research for Quality Healthcare: The Data Connection. Health Services Research 35:xii–xvii , 2000. a.
- ——. A Research Agenda for Quality. Washington, D.C.: Presentation at the Institute of Medicine Thirtieth Annual Meeting, The National Academies, 2000. b.
- Elhanan, G., S.A.Socratous, and J.J.Cimino. Integrating DXplain into a Clinical Information System Using the World Wide Web. Proc AMIA Annual Fall Symp : 348–52,1996. [PMC free article: PMC2233176] [PubMed: 8947686]
- Ellerbeck, Edward F., Stephen F.Jencks, Martha J.Radford, et al. Quality of Care for Medicare Patients With Acute Myocardial Infarction: A Four-State Pilot Study from the Cooperative Cardiovascular Project. JAMA 273(19):1509–14,1995. [PubMed: 7739077]
- Evans, R.Scott, Stanley L.Pestotnik, David C.Classen, et al. A Computer-Assisted Management Program for Antibiotics and Other Antiinfective Agents. N Engl J Med 338(4):232–8,1998. [PubMed: 9435330]
- Eysenbach, Gunther, Eun Ryoung Sa, and Thomas L.Diepgen. Shopping Around the Internet Today and Tomorrow: Towards the Millennium of Cybermedicine. BMJ 319:1294,1999.
- Foundation for Accountability. 2000. a. “FACCT|ONE: A Tool for Evaluating the Performance of Health Care Organizations.” Online. Available at http://www.facct.org/measures/Develop/FACCTONE.htm [accessed Jan. 2, 2001].
- ——. 2000. b. “Supporting Quality-Based Decisions. The FACCT Consumer Information Network, Comparative Information for Better Health Care Decisions.” Online. Available at http://www.facct.org/information.html [accessed Jan. 2, 2001].
- Friedman, Charles P., Arthur S.Elstein, Fredric M.Wolf, et al. Enhancement of Clinicians' Diagnostic Reasoning by Computer-Based Consultation. JAMA 282(19):1851–6,1999. [PubMed: 10573277]
- Godlee, Fiona, Richard Smith, and David Goldmann. Clinical Evidence: This month sees the publication of a new resource for clinicians. BMJ 318:1570–1,1999.
- Grimshaw, Jeremy M. and Ian T.Russell. Effect of Clinical Guidelines on Medical Practice: A Systematic Review of Rigorous Evaluations. The Lancet 342:1317–22,1993. [PubMed: 7901634]
- Guyatt, Gordon H., Maureen O.Meade, Roman Z.Jaeschke, et al. Practitioners of Evidence Based Care: Not All Clinicians Need to Appraise Evidence from Scratch but All Need Some Skills. BMJ 320:954–5,2000. [PMC free article: PMC1117895] [PubMed: 10753130]
- Hayward, Robert S.A. Clinical Practice Guidelines on Trial. Can Med Assoc J 156:1725–7,1997. [PMC free article: PMC1227587] [PubMed: 9220924]
- Health Care Financing Administration. 1999. “Quality of Care—National Projects. Diabetes Quality Improvement Project (DQIP).” Online. Available at http://www.hcfa.gov/quality/31.htm [accessed Jan. 2, 2001].
- ——. 2000. “Quality of Care—PRO Priorities. National Clinical Topics (Task 1).” Online. Available at http://www.hcfa.gov/quality/11a.htm [accessed Jan. 2, 2001].
- Hunt, Dereck L., R.Brian Haynes, Steven E.Hanna, and Kristina Smith. Effects of Computer-Based Clinical Decision Support Systems on Physician Performance and Patient Outcomes: A Systematic Review. JAMA 280(15):1339–46,1998. [PubMed: 9794315]
- Institute of Medicine. Guidelines for Clinical Practice: From Development to Use . Marilyn J.Field, editor; and Kathleen N.Lohr, editor. , eds. Washington, D.C.: National Academy Press, 1992. [PubMed: 25121254]
- Jadad, Alejandro R. and Anna Gagliardi. Rating Health Information on the Internet: Navigating to Knowledge or to Babel? JAMA 279(8):611–4,1998. [PubMed: 9486757]
- Jadad, Alejandro R., R.Brian Haynes, Dereck Hunt, and George P.Browman. The Internet and Evidence-Based Decision-Making: A Needed Synergy for Efficient Knowledge Management in Health Care. Journal of the Canadian Medical Association 162(3):362–5,2000. a. [PMC free article: PMC1231018] [PubMed: 10693595]
- Jadad, Alejandro R., Michael Moher, George P.Browman, et al. Systematic Reviews and Meta-Analysis on Treatment of Asthma: Critical Evaluation. BMJ 320(7234):537,2000. b. [PMC free article: PMC27295] [PubMed: 10688558]
- Jencks, Stephen F., Timothy Cuerdon, Dale R.Burwen, et al. Quality of Medical Care Delivered to Medicare Beneficiaries: A Profile at State and National Levels. JAMA 284(13):1670–6,2000. [PubMed: 11015797]
- Johnston, Mary E., Karl B.Langton, R.Brian Haynes, and Alix Mathieu. Effects of Computer-Based Clinical Decision Support Systems on Clinician Performance and Patient Outcome: A Critical Appraisal of Research. Ann Int Med 120:135–42,1994. [PubMed: 8256973]
- Kassirer, Jerome P. A Report Card on Computer-Assisted Diagnosis—The Grade: C. N Engl J Med 330(25):1824–5,1994. [PubMed: 8190163]
- Kizer, Kenneth W. The National Quality Forum Enters the Game. International Journal for Quality in Health Care 12(2):85–7,2000. [PubMed: 10830664]
- Lau, Joseph, Elliott M.Antman, Jeanette Jimenez-Silva, et al. Cumulative Meta-Analysis of the Therapeutic Trials for Myocardial Infarction. N Engl J Med 327(4):248–54,1992. [PubMed: 1614465]
- Lilford, R.J., S.G.Pauker, D.A.Draunholtz, and Jiri Chard. Getting Research Findings into Practice: Decision Analysis and the Implementation of Research Findings. BMJ 317:405–9,1998. [PMC free article: PMC1113676] [PubMed: 9694762]
- Lindberg, Donald A.B. 1998. “Fiscal Year 1999 President's Budget Request for the National Library of Medicine.” Online. Available at http://www.nlm.nih.gov/pubs/staffpubs/od/budget99.html [accessed Sept. 18, 2000].
- Lindberg, Donald A.B. and Betsy L.Humphreys. Updates Linking Evidence and Experience. Medicine and Health on the Internet: The Good, the Bad, and the Ugly. JAMA 280(15):1303–4,1998. [PubMed: 9794299]
- ——. A Time of Change for Medical Informatics in the USA. Yearbook of Medical Informatics National Library of Medicine, Bethesda, MD: 53–7,1999. [PubMed: 27699364]
- Lohr, Kathleen N. and Timothy S.Carey. Assessing “Best Evidence:” Issues in Grading the Quality of Studies for Systematic Reviews. Journal on Quality Improvement 25(9):470–9,1999. [PubMed: 10481816]
- Lohr, Kathleen N., Kristen Eleazer, and Josephine Mauskopf. Health Policy Issues and Applications for Evidence-Based Medicine and Clinical Practice Guidelines. Health Policy 46:1–19,1998. [PubMed: 10187652]
- Lomas, Jonathan, Geoffrey M.Anderson, Karin Domnick-Pierre, et al. Do Practice Guidelines Guide Practice? The Effect of a Consensus Statement on the Practice of Physicians. N Engl J Med 321(19):1306–11,1989. [PubMed: 2677732]
- Lyon, Becky J., P.Zoë Stavri, D.Colette Hochstein, and Holly Grossetta Nardini. Internet Access in the Libraries of the National Network of Libraries of Medicine. Bull Med Libr Assoc 86(4):486–90,1998. [PMC free article: PMC226439] [PubMed: 9803289]
- Marciniak, Thomas A., Edward F.Ellerbeck, Martha J.Radford, et al. Improving the Quality of Care for Medicare Patients With Acute Myocardial Infarction: Results from the Cooperative Cardiovascular Project. JAMA 279(17):1351–7,1998. [PubMed: 9582042]
- McColl, Alastair, Helen Smith, Peter White, and Jenny Field. General Practitioners' Perceptions of the Route to Evidence Based Medicine: A Questionnaire Survey. BMJ 316:361–5,1998. [PMC free article: PMC2665572] [PubMed: 9487174]
- McDonald, Clement J., J.Marc Overhage, Paul R.Dexter, et al. Canopy Computing: Using the Web in Clinical Practice. JAMA 280(15):1325–9,1998. [PubMed: 9794311]
- McDowell, Ian, Claire Newell, and Walter Rosser. Comparison of Three Methods of Recalling Patients for Influenza Vaccination. Can Med Assoc J 135:991–7,1986. [PMC free article: PMC1491283] [PubMed: 3093045]
- ——. A Randomized Trial of Computerized Reminders for Blood Pressure Screening in Primary Care. Medical Care 27(3):297–305,1989. [PubMed: 2494397]
- Mechanic, David. Bringing Science to Medicine: The Origins of Evidence-Based Practice. Health Affairs 17(6):250–1,1998.
- Miller, Naomi, Eve-Marie Lacroix, and Joyce E.B.Backus. MEDLINEplus: Building and Maintaining the National Library of Medicine's Consumer Health Web Service. Bull Med Libr Assoc 88(1):11–7,2000. [PMC free article: PMC35193] [PubMed: 10658959]
- National Committee for Quality Assurance. Health Plan Employer Data and Information Set, Version 3.0. Washington, D.C.: National Committee for Quality Assurance, 1999.
- National Research Council. Networking Health: Prescriptions for the Internet . Washington D.C.: National Academy Press, 2000. [PubMed: 20669497]
- Perfetto, Eleanor M. and Lisa Stockwell Morris. Agency for Health Care Policy and Research Clinical Practice Guidelines. The Annals of Pharmacotherapy 30:1117–21,1996. [PubMed: 8893120]
- Pozen, Michael W., Ralph B.D'Agostino, Harry P.Selker, et al. A Predictive Instrument to Improve Coronary-Care-Unit Admission Practices in Acute Ischemic Heart Disease. N Engl J Med 310(20):1273–8,1984. [PubMed: 6371525]
- Rienhoff, Otto. Retooling Practitioners in the Information Age. Information Technology Strategies from the United States and the European Union: Transferring Research to Practice for Health Care Improvement . E.Andrew Balas, editor. , ed. Washington, D.C.: IOS Press, 2000.
- Rosenberg, Matt. Popularity of Internet Won't Peak for Years: Not Until Today's Middle-Schoolers Reach Adulthood Will the Technology Really Take Off. Puget Sound Business Journal. May 24, 1999. Online. Available at http://www.bizjournals.com/seattle/stories/1999/05/24/focus9.html [accessed Jan. 22, 2001].
- Sackett, David L., William M.C.Rosenberg, J.A.Muir Gray, et al. Evidence Based Medicine: What It Is and What It Isn't. BMJ 312:71–2,1996. [PMC free article: PMC2349778] [PubMed: 8555924]
- Sackett, David L., Sharon E.Straus, W.Scott Richardson, et al. Evidence-Based Medicine: How to Practice & Teach EBM . 2nd edition. London, England: Churchill Livingstone, 2000.
- Schiff, Gordon D. and T.Donald Rucker. Computerized Prescribing: Building the Electronic Infrastructure for Better Medication Usage. JAMA 279(13):1024–9,1998. [PubMed: 9533503]
- Shea, Steven, William DuMouchel, and Lisa Bahamonde. A Meta-Analysis of 16 Randomized Controlled Trials to Evaluate Computer-Based Clinical Reminder Systems for Preventive Care in the Ambulatory Setting. J Am Med Inform Assoc 3(6):399–409,1996. [PMC free article: PMC116324] [PubMed: 8930856]
- Silberg, William M., George D.Lundberg, and Robert A.Musacchio. Assessing, Controlling, and Assuring the Quality of Medical Information on the Internet. JAMA 277(15):1244–5,1997. [PubMed: 9103351]
- Solberg, Leif I., Milo L.Brekke, Charles J.Fazio, et al. Lessons from Experienced Guideline Implementers: Attend to Many Factors and Use Multiple Strategies. Joint Commission Journal on Quality Improvement 26(4):171–88,2000. [PubMed: 10749003]
- USA Today . Health-Related Activities Conducted Online. Health, July 10, 1998.
- Voge, Susan. NOAH-New York Online Access to Health: Library Collaboration for Bilingual Consumer Health Information on the Internet. Bull Med Libr Assoc 86(3):326–34,1998. [PMC free article: PMC226378] [PubMed: 9681167]
- Wellwood, J., S.Johannessen, and D.J.Spiegelhalter. How Does Computer-Aided Diagnosis Improve the Management of Acute Abdominal Pain? Annals of the Royal College of Surgeons of England 74:40–6,1992. [PMC free article: PMC2497469] [PubMed: 1736794]
- Wexler, Jerry R., Phillip T.Swender, Walter W.Tunnessen, and Frank A.Oski. Impact of a System of Computer-Assisted Diagnosis: Initial Evaluation of the Hospitalized Patient. Am J Dis Child 129:203–5,1975. [PubMed: 1091140]
- Woolf, Steven H. Practice Guidelines: A New Reality in Medicine. III. Impact on Patient Care. Arch Int Med 153:2646–55,1993. [PubMed: 8250661]
- Wyatt, J.R. Lessons Learnt from the Field Trial of ACORN, An Expert System to Advise on Chest Pain. Proceedings of the Sixth World Conference on Medical Informatics, Singapore . 111–5,1989.
Footnotes
1. This definition is adapted from a physician-oriented definition developed by Hunt et al., 1998.