J Am Med Inform Assoc. 2008 Sep-Oct; 15(5): 638–646.
PMCID: PMC2528038

Clinical Decision Velocity is Increased when Meta-search Filters Enhance an Evidence Retrieval System

Enrico Coiera, MBBS, PhD,a,* Johanna I. Westbrook, MHA, PhD,b and Kris Rogers, PhDa

Abstract

Objective

To test whether the use of an evidence retrieval system that uses clinically targeted meta-search filters can enhance the rate at which clinicians make correct decisions, reduce the effort involved in locating evidence, and provide an intuitive match between clinical tasks and search filters.

Design

A laboratory experiment under controlled conditions asked 75 clinicians to answer eight randomly sequenced clinical questions using one of two randomly assigned search engines. The first, Quick Clinical (QC), was equipped with meta-search filters (the combined use of meta-search and search filters) designed to answer typical clinical questions (e.g., about treatment or diagnosis); the second, a 'library model' system (LM), offered free access to an identical evidence set with no filter support.

Measurements

Changes in clinical decision making were measured by the proportion of correct post-search answers provided to questions, the time taken to answer questions, and the number of searches and links to documents followed in a search session. The intuitive match between meta-search filters and clinical tasks was measured by the proportion and distribution of filters selected for individual clinical questions.

Results

Clinicians in the two groups performed equally well pre-search. Post-search answers improved overall by 21%, with 52.2% of answers correct with QC and 54.7% with LM (χ2 = 0.33, df = 1, p > 0.05). Users of QC obtained a significantly greater percentage of their correct answers within the first two minutes of searching compared to LM users (QC 58.2%; LM 32.9%; χ2 = 19.2, df = 1, p < 0.001). There was a statistical difference between the QC and LM survival curves, which plotted overall time to answer questions, both irrespective of answer (Wilcoxon, p = 0.019) and for the average time to provide a correct answer (Wilcoxon, p = 0.006). QC users conducted significantly fewer searches per scenario (m = 3.0, SD = 1.15 versus m = 5.5, SD = 1.97; t = 6.63, df = 72, p < 0.0001). Clinicians using QC also followed fewer document links than those who used LM (3.9 links, SD = 1.20 versus 4.7 links, SD = 1.79; t = 2.13, df = 72, p = 0.037). In six of the eight questions, two meta-search filters accounted for 89% or more of clinicians' first choices, suggesting the choice of filter intuitively matched the clinical decision task at hand.

Conclusions

Meta-search filters result in clinicians arriving at answers more quickly than unconstrained searches across information sources, and appear to increase the rate at which correct decisions are made. In time-restricted clinical settings, meta-search filters may thus improve overall decision accuracy, as fewer searches that could otherwise lead to a correct answer are abandoned. Meta-search filters appear to be intuitive to use, suggesting that the simplicity of the user model would fit well into clinical settings.

Introduction

Numerous studies show that clinicians have many unanswered questions during clinical encounters and that failure to answer these may impact the quality and safety of decisions made. Medical practitioners may generate up to six questions per patient and pursue answers to only 30% of these. 1–3 However, when questions are answered, clinicians will change their decisions in response to the information obtained. 4

Increasingly, evidence retrieval systems are seen as one of the main tools for finding the evidence needed to answer clinical questions. 5,6 Our own research has demonstrated substantial use of online evidence retrieval systems at the point of care by hospital clinicians 7 and general practitioners, 8 and has shown that such systems significantly increase the ability of clinicians to correctly answer clinical questions. 9

Of particular interest is that the search for clinical information often occurs in time-poor settings. Doctors may spend no more than two minutes searching for an answer 10,11 and often abandon searches when no answer is forthcoming. 12–14 Consequently, the rate at which clinicians are able to make decisions, their decision velocity, could be an important metric both of clinical performance in general and for benchmarking the performance of evidence retrieval systems.

The clinical impact of an evidence retrieval system is likely to be a product both of the quality of evidence it retrieves and of the likelihood that the evidence is easily assimilated and then actually used to support clinical decisions. 15 We can hypothesize that evidence retrieval systems that enable greater decision velocities will ultimately have greater impact at the point of care, as more clinicians will find correct answers to their questions and fewer will abandon searches within the time limits imposed by the clinical setting.

This paper reports the results of a controlled laboratory study that explores the dynamic nature of clinical decision making when using evidence retrieval systems. It compares the impact of two versions of a specific clinical search engine on the decision velocity of clinicians. Both versions accessed the same evidence set, with one version providing free access to those resources, and the other enhanced through customized clinical search filters.

Searching for the Best Clinical Evidence with Search Filters—Quick Clinical

The first step in evidence-based practice is to formulate answerable clinical questions, 4 yet many clinical evidence retrieval systems are not structured around clinical questions. Instead they replicate information structures designed for libraries, oriented either to the type of resource (journal, textbook, guideline) or to topic structures aligned with biomedical classifications such as disease types. Recent work has looked at using pre-defined queries or search filters to improve the chance that relevant documents are retrieved. 16 For example, the PubMed interface to MEDLINE offers a small set of 'Clinical Queries', which are pre-defined and validated search filters that are most likely to retrieve documents that are clinically relevant. PubMed users can focus their searches on articles that emphasize etiology, diagnosis, therapy or prognosis. Search filters have been shown to substantially enhance the specificity and sensitivity of different classes of search, and an ongoing body of research is devoted to creating filters to support different search tasks. 17–20
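
To make the filter idea concrete, the short sketch below shows how a 'Clinical Queries'-style filter can be appended to a clinician's terms before querying PubMed through the NCBI E-utilities interface. The filter string is one published version of the therapy (narrow) Clinical Queries filter; the helper function and URL construction are an illustrative sketch, not code from QC or from PubMed itself.

    # A minimal sketch: append a validated "Clinical Queries"-style filter
    # to a clinician's terms before querying PubMed via NCBI E-utilities.
    # The helper name is illustrative, not part of any QC code.
    from urllib.parse import urlencode

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    # One published version of the therapy (specific/narrow) filter used
    # by PubMed Clinical Queries.
    THERAPY_NARROW = (
        '(randomized controlled trial[Publication Type] OR '
        '(randomized[Title/Abstract] AND controlled[Title/Abstract] '
        'AND trial[Title/Abstract]))'
    )

    def filtered_search_url(user_terms: str, filter_expr: str = THERAPY_NARROW) -> str:
        """Combine the user's terms with a pre-defined filter; return the URL."""
        query = f"({user_terms}) AND {filter_expr}"
        return f"{EUTILS}?{urlencode({'db': 'pubmed', 'term': query, 'retmax': 20})}"

    print(filtered_search_url("asthma"))  # URL for a therapy-focused asthma search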

Meta-search, where a query is sent to multiple different search engines in parallel, is another strategy shown to improve search precision. 21 The Quick Clinical (QC) evidence retrieval system is a knowledge-based search system that combines meta-search with search filters, or meta-search filters. 22 Users, who are always free to search individual databases or journal sites, have the additional option of selecting pre-defined meta-search filters when they have a clinical question. The meta-search filters (or Profiles) in QC can be thought of as encodings of search strategies that capture expert knowledge on how to search for an answer. Profiles might capture, for example, the search skills of a biomedical librarian that would not normally be available to typical clinical users. A search strategy might describe which evidence sources are most appropriate for answering a given question, and how best to ask the question with different resources.
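
As a rough illustration of the meta-search component, the sketch below fans a single query out to several sources in parallel and merges the hits. The source connectors are hypothetical stand-ins; a real system such as QC would wrap PubMed, a formulary, guideline collections, and so on.

    # A minimal meta-search sketch: one query sent to several sources in
    # parallel, results merged. The source functions are hypothetical
    # stand-ins for real connectors.
    from concurrent.futures import ThreadPoolExecutor

    def search_pubmed(query): return [f"pubmed:{query}:1", f"pubmed:{query}:2"]
    def search_formulary(query): return [f"formulary:{query}:1"]
    def search_guidelines(query): return [f"guideline:{query}:1"]

    SOURCES = [search_pubmed, search_formulary, search_guidelines]

    def meta_search(query, timeout=10.0):
        """Send `query` to every source concurrently and merge the hit lists."""
        with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
            futures = [pool.submit(src, query) for src in SOURCES]
            results = []
            for f in futures:
                results.extend(f.result(timeout=timeout))
        return results

    print(meta_search("asthma"))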

Search profiles can be designed to support specific user groups, and different tasks and contexts associated with each group. For example, a search strategy for information on the treatment of a disease for consumers would need to return different documents, probably from different resources, compared to that needed for expert physicians. Similarly a clinician asking a question in a busy work setting needing an immediate answer to a patient problem will expect information in a form easily assimilated in the work context, such as an extract from a guideline, and within a set time frame. The same question asked in preparation for a team discussion might return more of the primary research literature.

Search profiles can be designed for typical queries such as 'diagnosis' or 'prescribing' and are crafted to find the evidence most likely to satisfy the query type. The current components of a search profile in QC are as follows (Table 1; a sketch of one possible representation appears after the table):

  • The sources to be searched for the query, including web sources such as PubMed, formularies, guideline collections and web versions of biomedical journals, as well as local sources such as local practice guidelines;
  • Search filters to be used when querying each specified source. Search filters reformulate a user query into the syntax native to a given source, and may add additional keywords known to improve search quality. For example, if a clinician selects the 'diagnosis' filter and enters the search term 'asthma', QC can, when querying PubMed, add the additional terms "sensitivity and specificity"[MESH] OR "sensitivity"[WORD] OR "diagnosis"[SH] OR "diagnostic use"[SH] OR "specificity"[WORD]. Multiple filters may be constructed for a single source, reflecting different strategies for asking a question.
  • Context settings. Searches can be time-limited to reflect time constraints. The display of results can also be configured to blend the results obtained from different evidence sources in a manner most likely to support the intended user. For example, the order in which results from different sources are presented and the number of results to be retrieved from any given resource can also be specified.

Table 1. Example Search Profile for "Diagnosis", Describing Sources to be Searched, the Position of the Source in the Results Page (P), and the Number of Results Displayed from a Source (R)
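
One way to picture a search profile is as a small data structure tying together the components above and the P and R settings of Table 1. The sketch below is a hypothetical representation, not the actual QC implementation; the MeSH expansion string is taken from the 'diagnosis' example above, and all class and field names are illustrative.

    # A hypothetical representation of a QC-style search profile, covering
    # the three components described above: sources, per-source filters,
    # and context settings. Not the actual QC implementation.
    from dataclasses import dataclass, field

    @dataclass
    class SourceConfig:
        name: str            # e.g., "PubMed"
        filter_terms: str    # terms appended to the user query for this source
        position: int        # position of this source on the results page (P)
        max_results: int     # number of results displayed from this source (R)

    @dataclass
    class SearchProfile:
        task: str                          # e.g., "diagnosis"
        sources: list = field(default_factory=list)
        time_limit_s: int = 120            # context setting: cap on search time

        def reformulate(self, source: SourceConfig, user_terms: str) -> str:
            """Rewrite the user's query into the source's native syntax."""
            return f"({user_terms}) AND ({source.filter_terms})"

    diagnosis = SearchProfile(
        task="diagnosis",
        sources=[SourceConfig(
            name="PubMed",
            filter_terms='"sensitivity and specificity"[MESH] OR '
                         '"sensitivity"[WORD] OR "diagnosis"[SH] OR '
                         '"diagnostic use"[SH] OR "specificity"[WORD]',
            position=1,
            max_results=5,
        )],
    )
    print(diagnosis.reformulate(diagnosis.sources[0], "asthma"))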

From the user's point of view, QC first assists clinicians to structure their search by asking them to specify the type of question they are asking (for example, about diagnosis), and then suggests keyword classes for that question (for example, 'drug name' or 'symptom') (Figure 1). The QC system has previously been evaluated in a prospective cohort study in general practice, where general practitioners reported that use of the system improved the quality of care. 8

Figure 1. Quick Clinical (QC) search page.

This paper reports the results of a laboratory trial under controlled conditions to determine what additional benefit QC search profiles confer over free access to the same literature set. We hypothesised that the use of search profiles would improve search effectiveness, defined as i) the proportion of correct decisions made after searching, and ii) decision velocity, as measured by the time and effort devoted to arriving at an answer to a question. We also hypothesised that well-constructed search profiles would be intuitive to use. Results from this dataset describing the combined impact of both search systems (QC and LM) on clinician confidence, and search outcomes according to professional group, have been reported previously. 9,23

Methods

We compared the evidence-searching performance of two groups of clinicians: one group randomised to use the QC system and one given access to QC without profiles (the library model, or LM).

Measures

We aimed to test whether clinical decision making was enhanced using QC by measuring the proportion of correct post-search answers provided to clinical questions, the time taken to answer these questions, and the number of searches and links to documents followed in a search session. The intuitive match between profiles and clinical tasks was measured by the proportion and distribution of profiles selected for individual clinical questions by clinicians who used QC.

Participants

A sample of 75 clinicians (44 doctors and 31 clinical nurse consultants; CNCs) was recruited. Doctors were recruited using invitation letters seeking volunteers sent via mail and email to clinical departments at two major teaching hospitals and via professional organisations representing family practitioners. The CNCs were recruited via a CNC list-server. Clinical nurse consultants are registered nurses who have at least five years' experience and have completed post-registration qualifications in a specialty area.

Procedures

Clinicians attended a university computer laboratory for a two-hour session. Following written informed consent, clinicians were presented with eight randomly ordered clinical scenarios, which posed clinical questions (Table 2). The response categories for six scenarios were: i) yes, ii) no, iii) conflicting evidence, iv) don't know. Two scenarios required a one-word free text answer. The scenario development process was described previously. 9

Table 2. Eight Clinical Questions and Expected Answers

Online Evidence System Intervention

Clinicians were randomly allocated to use either the QC or LM evidence retrieval system. Each clinician sat alone at a computer workstation and first completed the eight questions without the aid of a search system. Following randomisation to one of the two search systems, participants were again presented with the eight scenarios and used the evidence retrieval system to find and document the evidence to support their answers. In total 36 clinicians were randomised to QC (14 CNCs and 22 doctors) and 39 to LM (17 CNCs and 22 doctors). The time from commencing a search until an answer was provided was automatically logged. A researcher was present throughout the session and gave minimal technical assistance when necessary. Ethics approval for the study was received from the University of New South Wales Human Research Ethics Committee.

The QC system incorporated search filters; the second version, called the library model (LM), stripped away the search filter function. To control for the impact of user interface design, both systems appeared identical to users with the exception of the presence or absence of search filters. To control for the impact of the evidence retrieved, both systems had identical access to the same six selected evidence sources: PubMed, MIMS (a pharmaceutical database), Therapeutic Guidelines (http://www.tg.com.au/), the Merck Manual, Harrison's textbook, and HealthInsite (a government-funded, consumer-oriented health database at http://www.healthinsite.gov.au/).

With QC, five types of filters or "profiles" were available (disease etiology, diagnosis, treatment, prescribing and patient education). Clinicians searched by first selecting a filter, and then entering keywords into one of four fields (disease, drug, symptom, other) (Figure 1). Users of the LM selected which of the six evidence sources they wanted to query, and then entered keywords into a single field (Figure 2).

Figure 2. Library model (LM) search page. All resources available in QC could be selected individually in LM.

Statistical Analyses

Clinicians' written responses to the scenario questions pre- and post-search were compared. Pre-search answers were coded as 'correct' according to an expert panel's predetermined scenario answers, and post-search answers were coded as correct if the answer was correct and a relevant evidence source was documented (e.g., the name of a journal article). Pre-search answers of 'don't know' were classified as incorrect.

The proportions of correct pre- and post-search answers for both intervention groups were calculated and compared using chi-square analyses. Data about search times, the number of searches performed, and document links followed were extracted from computer logs, and differences between the intervention groups were compared using chi-square analyses and t-tests.
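
For illustration, a comparison of correct-answer proportions of this kind can be run with a standard chi-square test. The sketch below uses scipy with placeholder counts; these are not the study's raw data.

    # A sketch of a chi-square comparison of correct post-search answers
    # between two groups. The counts are illustrative placeholders.
    from scipy.stats import chi2_contingency

    #                  correct  incorrect
    table = [[150, 138],   # QC users (illustrative counts)
             [171, 141]]   # LM users (illustrative counts)

    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")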

The product-limit (Kaplan-Meier) survival analysis method was used to analyze the difference between the two systems in the amount of time taken to answer questions. The total time to answer each question (irrespective of answer) was summed for each participant to test whether exposure to the QC system affected the time taken to provide an answer. An additional analysis of the average time for a participant to provide a correct answer was also included. Heterogeneity of the survival function was tested with the log-rank and Wilcoxon tests. A small probability value in these tests is evidence that there is heterogeneity (i.e., a difference) in survival curves between the systems. The log-rank test places more weight on differences between the curves at larger values of time, while the Wilcoxon test places greater weight at smaller values of time. It is possible for the two tests to disagree on the same data: for example, a significant Wilcoxon test alongside a non-significant log-rank test indicates that the survival functions of the two systems differ at small values of time. Point-wise confidence intervals were calculated for each of the survival functions.
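
A minimal sketch of this analysis pipeline, using the open-source lifelines library with fabricated duration data (the study's logs are not reproduced here): Kaplan-Meier curves with pointwise confidence bands for each system, compared with both an unweighted log-rank test and a Wilcoxon-weighted variant. The weightings argument is available in recent lifelines versions; the group labels and parameters are placeholders.

    # A sketch of the survival analysis described above, with fake data.
    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(0)
    qc_times = rng.exponential(150, 36)   # seconds to answer, QC group (fake)
    lm_times = rng.exponential(220, 39)   # seconds to answer, LM group (fake)
    qc_answered = np.ones_like(qc_times)  # 1 = answered (event), 0 = censored
    lm_answered = np.ones_like(lm_times)

    km = KaplanMeierFitter()
    km.fit(qc_times, event_observed=qc_answered, label="QC")
    ax = km.plot_survival_function(ci_show=True)   # pointwise 95% CI band
    km.fit(lm_times, event_observed=lm_answered, label="LM")
    km.plot_survival_function(ax=ax, ci_show=True)

    # Log-rank weights later times more; the Wilcoxon weighting emphasises
    # early times, where the QC/LM curves differed in the study.
    lr = logrank_test(qc_times, lm_times, qc_answered, lm_answered)
    wx = logrank_test(qc_times, lm_times, qc_answered, lm_answered,
                      weightings="wilcoxon")
    print(f"log-rank p = {lr.p_value:.3f}, Wilcoxon p = {wx.p_value:.3f}")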

Results

Correctness of Answers Independent of Time

Clinicians in the two groups performed equally well in the pre-search stage with no significant difference in the proportion of correct answers provided by clinicians randomized to the QC or LM system (χ2 = 3.34, df = 1, p > 0.05), suggesting the two groups were matched for clinical reasoning skills and knowledge.

As reported previously, there was an aggregate 21% improvement in clinicians' answers post-search, from 29% (95% CI 25.4-32.6) correct before system use to 50% (95% CI 46.0-54.0) after. 9 If time to answer was not considered, users of both systems performed equally well in terms of the proportion of accurate answers provided to the scenario questions post-search, with 52.2% of answers correct with the QC system compared to 54.7% with the LM system (χ2 = 0.33, df = 1, p > 0.05).

Outcomes in the First Two Minutes of Searching

Users of QC were significantly more likely to obtain correct answers within the first two minutes of searching than LM users: QC users found 58% of their correct answers in the first two minutes, while LM users found 33% of theirs in that time (χ2 = 19.2, df = 1, p < 0.001).

For QC users, longer search times resulted in a greater proportion of incorrect answers: searches lasting longer than two minutes had a greater percentage of wrong answers than searches under two minutes (60% versus 41% wrong; χ2 = 10.93, df = 1, p < 0.001). For LM users, search time was not associated with answer correctness; there was no statistical difference in the provision of incorrect answers between searches within the first two minutes and those lasting longer (χ2 = 2.24, df = 1, p > 0.05).

Decision Velocity

There was a statistical difference for the survival curves of QC and LM users, which plotted the overall time to answer questions for each participant, both irrespective of answer (Figure 3; Wilcoxon, p = 0.019) and for the average time to provide a correct answer (Figure 4; Wilcoxon, p = 0.006). No difference was found between the curves using the log-rank test, which places greater weight on longer values of time (Table 3).

Figure 3. Survival functions of time taken to provide an answer (in seconds) for all questions aggregated (solid line: LM; dotted line: QC). Light grey shading shows the 95% pointwise confidence interval around each survival function, and dark grey shading marks where the two intervals overlap.
Figure 4. Survival functions of average time taken to provide a correct answer (in seconds) for both systems (solid line: LM; dotted line: QC). Light grey shading shows the 95% pointwise confidence interval around each survival function, and dark grey shading marks where the two intervals overlap.
Table 3. Log-rank and Wilcoxon Tests of Homogeneity of Survival Curves for Overall Time to Answer Question, and for Time to Correct Answer, for Each Question

Comparison of Number of Searches Conducted and Document Links Followed

Overall, clinicians who used the QC system conducted significantly fewer searches per scenario (m = 3.0, SD = 1.15 versus m = 5.5, SD = 1.97; t = 6.63, df = 72, p < 0.0001) (Figure 5). Clinicians using the QC system also followed fewer document links than did those who used the LM system (3.9 links, SD = 1.20 versus 4.7 links, SD = 1.79; t = 2.13, df = 72, p = 0.037).

Figure 5. Average number of searches per scenario by system intervention group (QC: dark; LM: hatched).

Intuitiveness of QC Search Profiles

To examine the extent to which the search profiles in the QC system were intuitive to clinicians given the scenario questions posed, we examined clinicians' first choice of search profile for each of the eight scenarios. In six of the eight scenarios, two profiles accounted for 89% or more of clinicians' first choices of search profile. For the remaining two scenarios (MI and IVF) there was greater variation in the first search profile selected (Figure 6).

Figure 6. First search profiles selected by clinicians using the QC system for each of the eight scenarios.

Discussion

We now have good experimental evidence that the rate at which clinicians can make decisions, their decision velocity, can be improved by adding meta-search filter functions to a search engine, without compromising overall decision performance. More importantly, clinicians increased the rate at which they made correct decisions within a 2-minute time window when they had access to search filter technology. Others have shown that 2 minutes is a crucial time limit in clinical settings, with 66% of physicians' information-seeking attempts lasting 2 minutes or less. 11 Previous research has shown that when physicians freely search across a set of online resources, similar to the LM intervention, they take about 5–10 minutes, which is consistent with our results. 6 Our findings thus indicate that the generic use of search filters, as exemplified by QC, should have practical benefits in time-poor settings.

Interestingly, when time is no longer a factor in our comparative analysis, there is no difference in the quality of answers produced by users of the two systems. This makes sense: users of both systems were indistinguishable in the answers they provided pre-search, indicating they were operating from similar levels of experience, and given sufficient time and effort they should have found similar evidence with either system. The net effect of using search filters with QC was probably that those who were going to get a correct answer did so more quickly. In time-poor settings, where searches are likely to be abandoned, this should translate into an increase in the rate of correct answers provided by clinicians using search filters.

Our statistical testing of the survival curves supports these conclusions: we detected significant differences using the Wilcoxon test, which places greater weight on differences at shorter times, but not with the log-rank test, which places greater weight on differences at longer times.

As a corollary, QC users with longer search times had worse outcomes. This may be because, by shifting forward in time those who were likely to find a correct answer, the filters exposed a 'tail' of users who were unlikely to obtain a correct answer at all. Other explanations are also possible, including the possibility that for some subgroups the search filters led them significantly astray. Teasing this effect apart represents an interesting direction for future research.

During a search session, a user may need to conduct several searches, swapping between different sources, or re-expressing queries. The significantly fewer searches and document links followed by QC users may at least partially account for the significantly faster search times among this group of clinicians. As QC combines meta-search with search filters, we are unable in this experiment to report the relative contribution of either component to the improvements in decision rates observed. Future work should explore their contributions separately and combined.

In Table 3 there is variation in performance across the eight questions. The sample size for individual questions makes it hard to determine whether variation at the question level is the result of sampling or some other effect. In future work it will be of great interest to determine whether there is a uniform improvement in search and decision performance, or whether specific subsets of users, clinical questions, or other factors contribute to any variation in performance.

Search profiles seem to be an intuitive model for search, as measured by the profile first chosen by clinicians when presented with a question. Subjects typically chose one profile to answer their question, in preference to the others available. In a small number of scenarios (e.g., IVF, Figure 6) we do not see this pattern, and there is a more uniform distribution of profiles selected. This may suggest that the question contained a number of subtasks relating to different profiles, or that there was ambiguity about which profile was most appropriate. In such cases, this may be a trigger for system developers to design a new search profile that more effectively meets the needs of a typical question type. However, one of the benefits of the current profile design is that only a small number of high-level archetypical questions are represented, making the choice of profile a simple task. It is unclear what user penalties might be incurred when a large number of profiles is available, or whether there would be any significant improvement in decision outcomes. One may speculate that there would be diminishing returns on decision accuracy from providing an increasing number of profile choices. In these experiments search filters were constructed by hand, but there is no reason, in principle, why such filters could not be automatically generated by machine learning methods. 24
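
As a toy illustration of that machine-learning route, the sketch below trains a simple text classifier whose scores could, in principle, stand in for a hand-crafted filter. The corpus and labels are placeholders; reference 24 describes a real system of this kind.

    # A toy sketch of machine-learned filtering: a text classifier scores
    # articles for clinical usefulness. Corpus and labels are placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    abstracts = [
        "randomized controlled trial of inhaled steroids in asthma",
        "case report of an unusual rash",
        "double blind placebo controlled trial of statin therapy",
        "editorial commentary on hospital funding",
    ]
    is_high_quality = [1, 0, 1, 0]   # placeholder labels

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(abstracts, is_high_quality)
    print(clf.predict_proba(["randomized trial of antibiotics"])[:, 1])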

Limitations

The study was conducted in a laboratory setting and may not replicate the specific circumstances under which questions are answered in a clinical setting. As a consequence, we may see different decision accuracy and velocity rates in clinical settings.

The questions in this experiment were simulated, and only eight in number. A larger or more representative sample of questions could potentially produce different results from those reported here. However, the sample size of our experiment was sufficiently large to generate clear, statistically significant differences.

Subjects in this study were drawn from hospital and primary care backgrounds and covered both the medical and nursing professions. However, this sample may not be representative of other populations. We measured our study population's prior knowledge, as reflected in their ability to answer questions before searching, which should allow other studies to be compared with ours.

While QC and LM had nearly identical interfaces, there was a minor variation to allow for the entry of keywords, which may have contributed to a difference in performance. However, whilst the QC screen was the more complex, prompting for keywords in multiple categories, it produced the faster response times, suggesting any increase in time spent at the interface was swamped by the benefit of meta-search filters.

Conclusion

Search profiles, or meta-search filters, appear to be an effective addition to the tools available to support clinical decision making and evidence-based practice. They result in clinicians arriving at answers more quickly than unconstrained searches across information sources, and also appear to specifically increase the rate at which correct decisions are made. It is thus possible that in time-restricted clinical settings meta-search filters will improve overall decision accuracy, as fewer searches that could otherwise lead to a correct answer are abandoned. Meta-search filters appear to be intuitive to use, suggesting that the simplicity of the user model would fit well into busy, time-poor clinical settings.

Footnotes

The design and implementation of Quick Clinical was supported by funding from Australian Research Council SPIRT Grant C00107730, NHMRC Development Grant 300591, and Merck Sharp and Dohme (Australasia). Evaluation work was in part supported by ARC Discovery Grant DP0452359 and the National Institute of Clinical Studies. The authors thank the clinicians who gave their time to take part in the study, and the clinicians who contributed to the development of the scenarios: Karolyn Vaughan, Karen Lintern, Dr Madlen Gazarian, Dr Vitali Sinchenko and Dr Barbara Booth. Staff from the Centre for Health Informatics assisted with setting up and running the experiment, including Nerida Creswick, Keri Bell and Annie Lau. Michelle Wensley from New South Wales Health assisted with recruitment of clinicians. The QC development team included Ken Nguyen, Martin Walther, Hugh Garsden, Victor Vickland and Luis Chuquipiondo.

Conflict of Interest: Quick Clinical was developed by researchers at the Centre for Health Informatics at the University of New South Wales, and the university and some of the authors could benefit from commercial exploitation of QC or its technologies.

References

1. Gorman PN, Helfand M. Information seeking in primary care: how physicians choose which clinical questions to pursue and which to leave unanswered. Med Decis Making 1995;15(2):113-119. [PubMed]
2. Covell DG, Uman GC, Manning PR. Information needs in office practice: are they being met? Ann Intern Med 1985;103(4):596-599. [PubMed]
3. Smith R. What clinical information do doctors need? BMJ 1996;313(7064):1062-1068. [PMC free article] [PubMed]
4. Sackett DL, Straus SE. Finding and applying evidence during clinical rounds—the "evidence cart". JAMA 1998;280(15):1336-1338. [PubMed]
5. Hersh W. Information Retrieval: A Health and Biomedical Perspective. 2nd ed. New York: Springer-Verlag; 2003.
6. Schwartz K, Northrup J, Israel N, Crowell K, Lauder N, Neale V. Use of on-line evidence-based resources at the point of care. Fam Med 2003;35(4):251-256. [PubMed]
7. Westbrook J, Gosling AS, Coiera E. Do clinicians use online evidence to support patient care? A study of 55,000 clinicians. J Am Med Inform Assoc 2004;11(2):113-120. [PMC free article] [PubMed]
8. Magrabi F, Coiera EW, Westbrook JI, Gosling AS, Vickland V. General practitioners' use of online evidence during consultations. Int J Med Inform 2005;74(1):1-12. [PubMed]
9. Westbrook J, Coiera E, Gosling AS. Do online information retrieval systems help experienced clinicians answer clinical questions? J Am Med Inform Assoc 2005;12(3):315-321. [PMC free article] [PubMed]
10. Ely JW, Osheroff JA, Ebell MH, et al. Analysis of questions asked by family doctors regarding patient care. BMJ 1999;319:358-361. [PMC free article] [PubMed]
11. Ramos K, Schafer S. Real-time information-seeking behavior of residency physicians. Fam Med 2003;35(4):257-260. [PubMed]
12. Scott I, Heyworth R, Fairweather P. The use of evidence-based medicine in the practice of consultant physicians: results of a questionnaire survey. Aust N Z J Med 2000;30(3):319-326. [PubMed]
13. Darmoni S, Benichou J, Thirion B, Hellot M, Fuss J. A study comparing centralized CD-ROM and decentralized intranet access to MEDLINE. Bull Med Libr Assoc 2000;88(2):152-156. [PMC free article] [PubMed]
14. Ely J, Osheroff J, Ebell M, et al. Obstacles to answering doctors' questions about patient care with evidence: qualitative study. BMJ 2002;324:710-722. [PMC free article] [PubMed]
15. Coiera E. Maximising the uptake of evidence into clinical practice—an information economics approach. Med J Aust 2001;174:467-470. [PubMed]
16. Pratt W, Sim I. Physician's Information Customizer (PIC): using a shareable user model to filter the medical literature. International Conference on Medical Informatics (MEDINFO '95); 1995. [PubMed]
17. Bachmann LM, Coray R, Estermann P, Ter Riet G. Identifying diagnostic studies in MEDLINE: reducing the number needed to read. J Am Med Inform Assoc 2002;9(6):653-658. [PMC free article] [PubMed]
18. Ingui BJ, Rogers MA. Searching for clinical prediction rules in MEDLINE. J Am Med Inform Assoc 2001;8(4):391-397. [PMC free article] [PubMed]
19. Haynes RB, Wilczynski N, McKibbon KA, Walker CJ, Sinclair JC. Developing optimal search strategies for detecting clinically sound studies in MEDLINE. J Am Med Inform Assoc 1994;1(6):447-458. [PMC free article] [PubMed]
20. Haynes RB, Wilczynski NL. Optimal search strategies for retrieving scientifically strong studies of diagnosis from Medline: analytical survey. BMJ 2004;328(7447):1040. [PMC free article] [PubMed]
21. Gauch S, Wang G, Gomez M. ProFusion: intelligent fusion from multiple different search engines. Journal of Universal Computer Science 1996;2(9).
22. Coiera E, Walther M, Nguyen K, Lovell NH. An architecture for knowledge-based and federated search of online clinical evidence. J Med Internet Res 2005;7. [PMC free article] [PubMed]
23. Westbrook J, Gosling A, Coiera E. The impact of an online evidence system on confidence in decision making in a controlled setting. Med Decis Mak 2005;25:178-185. [PubMed]
24. Aphinyanaphongs Y, Tsamardinos I, Statnikov A, Hardin D, Aliferis CF. Text categorization models for high-quality article retrieval in internal medicine. J Am Med Inform Assoc 2005;12(2):207-216. [PMC free article] [PubMed]
