J Med Libr Assoc. Jan 2005; 93(1): 104–115.
PMCID: PMC545129

Development and evaluation of evidence-based nursing (EBN) filters and related databases*

Mary A. Lavin, ScD, RN, FAAN, Associate Professor and Director, Center for Interprofessional Education and Research; Mary M. Krieger, MLIS, RN, Reference Librarian; Geralyn A. Meyer, PhD, RN, Assistant Professor; Mark A. Spasser, PhD, MALS, Chief, Library and Information Services/Associate Professor; Tome Cvitan, MS, Information Management Consultant; Cordie G. Reese, EdD, RN, Professor; Judith H. Carlson, MSN, RN, Associate Professor; Anne G. Perry, EdD, RN, FAAN, Professor and Department Chair, Primary Care and Health Systems Nursing; and Patricia McNary, MALS, RN, Reference Librarian

Abstract

Objectives: Difficulties encountered in the retrieval of evidence-based nursing (EBN) literature and recognition of terminology, research focus, and design differences between evidence-based medicine and nursing led to the realization that nursing needs its own filter strategies for evidence-based practice. This article describes the development and evaluation of filters that facilitate evidence-based nursing searches.

Methods: An inductive, multistep methodology was employed. A sleep search strategy was developed for uniform application to all filters for filter development and evaluation purposes. An EBN matrix was next developed as a framework to illustrate conceptually the placement of nursing-sensitive filters along two axes: horizontally, an adapted nursing process, and vertically, levels of evidence. Nursing diagnosis, patient outcomes, and primary data filters were developed recursively. Through an interface with the PubMed search engine, the EBN matrix filters were inserted into a database that executes filter searches, retrieves citations, and stores and updates retrieved citations sets hourly. For evaluation purposes, the filters were subjected to sensitivity and specificity analyses and retrieval set comparisons. Once the evaluation was complete, hyperlinks providing access to any one or a combination of completed filters to the EBN matrix were created. Subject searches on any topic may be applied to the filters, which interface with PubMed.

Results: Sensitivity and specificity for the combined nursing diagnosis and primary data filter were 64% and 99%, respectively; for the patient outcomes filter, the results were 75% and 71%, respectively. Comparisons were made between the EBN matrix filters (nursing diagnosis and primary data) and PubMed's Clinical Queries (diagnosis and sensitivity) filters. Additional comparisons examined publication types and indexing differences. Review articles accounted for the majority of the publication type differences, because “review” was accepted by the CQ filter but was “NOT'd” out by the EBN filter. Indexing comparisons revealed that although the term “nursing diagnosis” is in Medical Subject Headings (MeSH), the nursing diagnoses themselves (e.g., sleep deprivation, disturbed sleep pattern) are not indexed as nursing diagnoses. As a result, abstracts deemed appropriate nursing diagnosis citations by the EBN filter were not retrieved by the CQ diagnosis filter.

Conclusions: The EBN filter capture of desired articles may be enhanced by further refinement to achieve a greater degree of filter sensitivity. Retrieval set comparisons revealed publication type differences and indexing issues. The EBN matrix filter “NOT'd” out “review,” while the CQ filter did not. Indexing issues were identified that explained the retrieval of articles deemed appropriate by the EBN filter matrix but not included in the CQ retrieval. These results have MeSH definition and indexing implications as well as implications for clinical decision support in nursing practice.

INTRODUCTION

This article describes the development of filters that facilitate evidence-based nursing (EBN) searches. Evidence-based practice, regardless of the discipline, consists of discrete steps that vary in number from five to eight [1–3]. Although the number of steps varies, the second step, conducting an evidence-based search, is common to all. A successful and efficient evidence-based search depends on more than the search terms or search string entered. It also depends on the sophistication of the search logic employed and on the searcher's ability to leverage key database features, including terminological control. One tool used to search databases for the best available evidence is a carefully constructed filter.

Numerous examples of filters have been developed to facilitate the retrieval of research or evidence-based health care citations [4–15]. While a review of the history of search filter development is beyond the scope of this paper, the work of Wilczynski, Haynes, and the Hedges Team at McMaster University is clearly foundational. They developed search filters (or “hedges”) to identify clinical research studies in the MEDLINE database. Specifically, they developed four sets of study design–specific filters to retrieve research articles in therapy, diagnosis, prognosis, and causation or etiology categories. Their seminal work is implemented in the Clinical Queries interface to MEDLINE.

Important to the present work is the growing awareness that the filters developed for one health profession might not be optimal for retrieving strong, evidence-based research for other health care professions. Murphy [16, 17] tested the efficacy of extant search filters for finding information for evidence-based veterinary medicine in the Commonwealth Agricultural Bureau (CAB) Abstracts and PubMed. The search strategies devised by Haynes et al. were not effective for locating evidence-based research for veterinary medicine practice. Search precision was so low the authors concluded more sensitive veterinary medical filters needed to be developed.

More specific filters may be necessary for specialties in medicine as well. Ward and Meadows [18] documented the development of therapeutic and diagnostic search strategies designed to retrieve strong, evidence-based research for family practitioners from MEDLINE, PreMEDLINE, and Current Contents. These works suggest that search strategies developed from one set of disciplinary standards are not optimally configured for locating research literature for other health sciences disciplines.

The need for discipline-specific filters for the conduct of nursing literature searches is recognized. Ovid MEDLINE filters were adapted to create evidence-based filters for Ovid CINAHL at McMaster University by McKibbon and Walker-Dilks [11] and at the University of Rochester Miner Library by Nesbit [19]. More recently, Saranto and Tallberg [20] described the process of developing a controlled nursing vocabulary to index and retrieve nursing-sensitive information for evidence-based practice. Using a delphi technique, the expert panel concluded that to facilitate specifically nursing-sensitive research, new terms needed to be added to the Finnish thesaurus (FinMeSH) as an independent theme. Lavin et al. [21] suggested that differences between the discipline-specific, standardized disease diagnosis terminologies of medicine and the health problem/life process response diagnoses of nursing impeded the efficient retrieval of evidence-based nursing literature from PubMed's MEDLINE database. These works provided the background for this investigation.

BACKGROUND

Although this article describes the development and evaluation of filters that facilitate evidence-based searches, the work began with a simpler intent. A team of researchers headquartered at Saint Louis University School of Nursing decided to develop an annotated bibliography on the diagnoses of sleep disturbance and sleep deprivation, as defined by NANDA International (formerly, the North American Nursing Diagnosis Association).

Nursing diagnosis classification is not a new phenomenon. The work begun at the First National Conference on the Classification of Nursing Diagnosis, held in St. Louis, Missouri, in 1973 [22], eventually led to the founding of the organization now called NANDA International. NANDA's Classification of Nursing Diagnoses was the first nursing terminology recognized by the American Nurses Association (ANA) and the first included in the Unified Medical Language System (UMLS).

NANDA defines nursing diagnosis as a “clinical judgment about individual, family or community responses to actual or potential health problems/life processes” [23]. The term “nursing diagnosis” is included in the regulatory or statutory language of the Nurse Practice Act in forty-one of the fifty States and the District of Columbia and in the 2002 Model Nurse Practice Act of the National Council of State Boards of Nursing [24]. The establishment of a nursing diagnosis constitutes one of the standards of nursing practice as enunciated by the ANA [25] and has been a part of the ANA's definition of nursing since 1982 [26].

The intended purpose of the annotated bibliography was to illustrate the number and quality of evidence-based articles available on sleep-related nursing diagnoses. It was to include a brief overview of the purpose of each reviewed article, its study design and results, and a critical appraisal of the levels of evidence apparent in the articles, as well as an evaluation of the contribution the article made to the development of nursing practice. The following observations changed the direction of the work:

  • the inefficiency of the advanced search strategy in retrieving desired, discipline-specific, EBN results
  • the less standardized and more ambiguous study-design language used by nurse researchers as compared with the study-design language used by medical researchers
  • the inability of nursing research, as a whole, to fit neatly in the relatively narrow and limited hierarchy of study design methods used in medical research, a point similarly made by Cohen et al. when analyzing criticisms of EBM [27]

Accordingly, a different model needed to be employed to take into consideration three fundamental differences between EBN and evidence-based medicine (EBM).

The first two differences between EBN and EBM relate to the focus of research and designs used. While the nursing profession is increasing the number of studies of treatment effectiveness, a considerable amount of nursing research focuses on the analysis of data collected at the point of patient contact, descriptions or narratives of patient experiences, and evaluations of programs instituted to improve the delivery of care. Second, the breadth of the nursing profession's study designs contrasts with the medical profession's concentrated focus on treatment effectiveness and epidemiological research.

A third difference between EBM and EBN is that the two professions define the term diagnosis differently. A medical diagnosis refers to a disease; a nursing diagnosis refers to a human response to an actual or potential health problem or life process [28]. Although nursing diagnoses are complementary to medical diagnoses, nursing diagnoses are not dependent upon medical diagnoses. For example, when Florence Nightingale was told upon her arrival in the Crimea that soldiers were dying of their wounds, she disagreed [29]. She said they were dying of lack of hygiene, of cold, and of malnourishment. She called these “conditions” and intervened appropriately. The resultant dramatic decline in the mortality rate occurred not because there were fewer wounds, but because of the successful treatment of nursing conditions or diagnoses. Similar data exist today. Research by Halloran et al. [30] indicated that variance in length of hospital stay was explained by a patient's nursing diagnosis in a different manner than the variance explained by the patient's medical diagnosis. These reflections on the differences between EBN and EBM led to the decision to develop nursing-sensitive filters that reflected the knowledge base of the nursing profession and facilitated EBN searches.

METHODS

The research team, headquartered at Saint Louis University, consisted of nursing faculty, reference librarians, and an information management specialist. Their areas of expertise included standardized nursing terminology, search strategy construction, and relational database development. To build and test EBN filters, they used an inductive methodology, consisting of seven sequential steps.

Step 1: search strategy formulation and search engine selection

Although the primary purpose of the research was to develop and evaluate evidence-based filters, a subject search strategy was needed to apply to the filters for testing purposes. Sleep, as a topic, was selected because problems associated with sleep exist across the lifespan, occur in all care settings, and are amenable to nursing treatments in all specialty groups. An advanced sleep search strategy was developed by the reference librarians and nursing members of the research team.

The sleep strategy was tested in both CINAHL and MEDLINE databases. The advantages and disadvantages of each database were considered. PubMed was chosen as the search engine to be used in developing the EBN search filters for several reasons. First, PubMed relies on MEDLINE, the most comprehensive biomedical database and the gold standard for subject analysis and indexing. Second, this database includes journals representative of all the health professions. As of August 11, 2004, its professional nursing journals numbered 340. Third, unlike CINAHL, MEDLINE is freely available through the PubMed interface worldwide to consumers, researchers, faculty and students in academic settings, and clinicians for clinical decision support purposes. Fourth, PubMed provided access to in-process as well as fully indexed citations. Publisher-supplied records are added daily, making it the most current database. Finally, and especially useful to the team's work, is PubMed's Cubby feature that allows search strategies to be securely saved, easily edited, and regularly updated.

Step 2: development of the evidence-based nursing (EBN) matrix as a framework

While conceptually analogous to the Clinical Queries using research methodology filters developed by the National Library of Medicine, the EBN matrix was designed to categorize a broader array of options. In developing this matrix, two professional health sciences librarians and four nursing faculty with expertise in standardized nursing terminology reviewed the glossary of terms developed by the Centre for Evidence-Based Medicine at the University Network-Mount Sinai Hospital in Toronto, Canada [31]. They also examined the levels of evidence described by the:

  • Centre for Evidence-Based Medicine at Oxford University [32]
  • University of Illinois at Chicago [15]
  • PubMed's Clinical Queries using research methodology filters and related literature [4, 8]

However, these filter table categories did not allow for categorizing many of the qualitative and program evaluation designs used in nursing research. Therefore, the team decided to develop one filter, called a primary data filter, to conduct a search that captured or retrieved the full scope of nursing studies based on data collected at the point of patient contact.

This decision was based on several assumptions. First, it was assumed that a simple rather than complex retrieval method was needed to capture nursing evidence, which relied on multiple study designs. Second, it was assumed that the citations retrieved from a primary data filter represented a first cut at EBN literature. The third assumption was that the filter's purpose was not to appraise the quality and applicability of the evidence but to find it. This assumption was important because EBN levels of evidence criteria vary considerably. For example, in developing clinical guidelines, Lyons and Specht [33] used a two-level hierarchy, while Folkedahl and Frantz [34] applied a four-level hierarchy and Hodgins et al. [35] adapted the US Public Health Service levels of evidence and grades of recommendation. Recently, Cesario et al. [36] developed a new classification to evaluate qualitative studies. The final assumption was that use of a primary data filter did not preclude use of other available filters. It merely increased the available number of filter choices.

The development of the primary data filter left a large number of citations uncategorized. These citations were divided into two groups, called secondary and tertiary data. Secondary data were defined as evidence based on all studies reporting data collected from secondary databases such as the Center for Medicare and Medicaid Services' Minimum Data Set [37], in addition to cost-effectiveness or decision-analysis studies or studies based on data in the literature, such as meta-analyses or systematic literature reviews. Reasons for placing meta-analyses in the secondary filter category and not in the primary data filter category were based on definitions. Primary data were data collected at the point of patient contact. By definition, a meta-analysis is “a systematic review of the literature that uses quantitative methods to summarize the results” [31]. Tertiary data were defined as data relying on expert opinion, including studies of the expert opinion of groups of nurses, essays, reflections, opinion pieces, fictionalized case studies, and literature reviews that did not indicate levels of evidence or grades of recommendation.

The primary, secondary, and tertiary data categories constituted the rows in the matrix, presented in Figure 1 in its most basic framework form. The research team created the following categories to serve as column headings in the EBN matrix: diagnosis, related factors, diagnostic tests, interventions, and outcomes. They called the resulting framework the evidence-based nursing matrix (Figure 1).

Figure 1
Evidence-based nursing (EBN) matrix framework

Time constraints limited the number of nursing categories for which filters could be developed. The logic underlying the decision to develop nursing diagnosis and patient outcomes filters was straightforward. First, the nurses on the team were experts in the field of nursing diagnosis. Second, clinical nursing outcomes were diagnostic specific. Take, for example, the nursing diagnosis of acute or chronic pain and its outcomes: pain intensity levels, disruptive effects of pain, and psychological responses to pain [38]. Finally, the economic influence of nursing diagnoses on outcomes was documented in the literature [21, 30, 39].

Step 3: recursive development of nursing diagnosis, outcome, and primary data filters

A similar process was used in the development of the three filters. The team developed search strings by connecting terms one at a time, using Boolean “AND,” “OR,” or “NOT” to connect, equate, or exclude concepts. This process was labor intensive. Nurse members contributed their clinical and standardized nursing terminology expertise. Reference librarians contributed their expertise in search-technique and information-retrieval systems. If appropriate citations were added, the term and its connector were kept. The three filters, ultimately agreed upon, were saved in PubMed's Cubby (Figure 2). These Cubby-saved filters may be updated easily as new terms make their way into the journal literature.
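The term-by-term construction described above can be sketched in code. This is a minimal illustration only: the term lists and the `[pt]` publication-type exclusion below are hypothetical placeholders, not the actual EBN filter terms agreed upon by the team.

```python
# Sketch of assembling a PubMed filter string from term lists.
# The terms below are illustrative placeholders, not the published EBN filters.
diagnosis_terms = ['"nursing diagnosis"', '"sleep deprivation"', '"disturbed sleep pattern"']
excluded_types = ["review[pt]", "editorial[pt]"]

def build_filter(include_terms, exclude_terms):
    """OR the included terms together, then NOT out the excluded terms."""
    included = " OR ".join(include_terms)
    excluded = " OR ".join(exclude_terms)
    return f"({included}) NOT ({excluded})"

query = build_filter(diagnosis_terms, excluded_types)
print(query)
```

In actual filter development, each candidate term would be added one at a time and the retrieval inspected before the term and its connector were kept, as the paragraph above describes.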

Figure 2
EBN matrix filters

Step 4: insertion of the filters into the EBN matrix

Before the filters could be tested, they needed to interface with a search engine. The information management specialist inserted the nursing diagnosis, primary data, and patient outcomes filters into the EBN matrix framework and connected them to the National Library of Medicine's PubMed search engine. An automated script conducted a search on PubMed once every hour and retrieved citations for each filter. The citations were then stored in the hourly updated EBN matrix database, available on the Research Center Web page of the Network for Language in Nursing Knowledge Systems (NLINKS) <nlinks.org>.
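The article does not describe the retrieval script's internals; a minimal sketch of how such hourly polling could be done against NCBI's public E-utilities `esearch` endpoint follows. The endpoint and its `db`/`term`/`retmode` parameters are the documented E-utilities API, but the filter string and the scheduling approach shown here are assumptions, not the NLINKS implementation.

```python
# Hedged sketch: polling PubMed through the NCBI E-utilities esearch API.
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_url(filter_string, retmax=100):
    """Compose an esearch URL for the given filter string."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": filter_string,
        "retmode": "json",
        "retmax": retmax,
    })
    return f"{ESEARCH}?{params}"

def fetch_pmids(filter_string):
    """Return the list of PMIDs matching the filter (network call)."""
    with urllib.request.urlopen(build_url(filter_string)) as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]

# A scheduler (cron, or a loop around time.sleep(3600)) would call
# fetch_pmids once an hour and upsert the PMIDs into the local database.
```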

When the retrieval script executes, the sleep search is integrated into the existing filters yielding results that are inserted into the database and displayed on the EBN matrix (Figure 3). The numbers in the cells of the EBN matrix represent the number of citations retrieved when the filters are activated. The numbers are underlined because they represent hyperlinks, which, when clicked, yield the PubMed search for that cell. On March 16, 2004, the diagnosis column had 221 primary data sleep citations and the outcomes column had 129 citations. The 304 citations in the total column of the primary data row represent the total number of primary data sleep citations and not the numerical sum of 221 and 129. The diagnosis and outcomes columns are not discrete insofar as some citations are common to both, because some articles address both diagnoses and outcomes. Figure 3 retrieval data are time sensitive and reflect the number of citations retrieved on the date indicated. The numbers increase as PubMed adds new citations to its database. The blank cells represent the filters to be developed. Figure 4 displays the first few items of the PubMed retrieval obtained when the sleep search strategy is applied to the EBN diagnosis and primary data filters, activating the retrieval of 221 citations.

Figure 3
Number of citations retrieved from an advanced sleep search strategy* applied to the EBN matrix filters, which interface with the PubMed search engine (retrieved from http://nlinks.org/research_ebn_matrix.phtml on March 16, 2004)
Figure 4
Sample advanced sleep search using NLINKS EBN nursing diagnosis filter, limited to English language, abstracts only, and nursing journals subset

Step 5: evaluation of the filters

Table 1 presents the results of the tests conducted on the filters. Sensitivity is the probability that the filter will retrieve an article, given that the article is truly appropriate for the intended search. Specificity is the probability that the filter will reject an article that is truly inappropriate for the intended search. The only way to determine if the filter retrieves appropriate articles and rejects inappropriate ones is to compare the filter results against a manual review of the abstracts [40, 41]. Sensitivity then becomes the probability the filter accepts the same articles the manual reviewer accepts; specificity becomes the probability the filter rejects the same articles the manual reviewer rejects.
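The definitions above reduce to the standard 2×2 calculations against the manual review. In this sketch, the counts are illustrative values chosen to reproduce the 64% and 99% reported for the combined nursing diagnosis and primary data filter; they are not the actual cell counts from the study.

```python
# Sensitivity and specificity of a filter against a manual gold standard.
def sensitivity(true_pos, false_neg):
    """Fraction of reviewer-accepted abstracts the filter also accepted."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of reviewer-rejected abstracts the filter also rejected."""
    return true_neg / (true_neg + false_pos)

# Illustrative: of 100 abstracts the reviewer accepted, the filter caught 64;
# of 200 the reviewer rejected, the filter also rejected 198.
print(round(sensitivity(64, 36), 2))   # 0.64
print(round(specificity(198, 2), 2))   # 0.99
```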

Table 1 Results of sensitivity and specificity analyses conducted on EBN matrix filters

To conduct the needed sensitivity and specificity analyses:

  • inter-rater reliability (IR) was established
  • the gold standard manual reviewers were selected
  • the abstract pools were identified
  • a manual review of the abstract pool was conducted to accept appropriate and reject inappropriate abstracts
  • the EBN filter strategies were applied to the same pool to determine which abstracts the filter selected as appropriate and rejected as inappropriate
  • the sensitivity and specificity of the filters were calculated

To establish IR, each of five independent nurse reviewers was presented with sets of identical abstracts and with decision rules for including or excluding abstracts on the basis of the presence or absence of a nursing diagnosis, primary data, or patient outcomes content in the abstracts. When agreement was less than desired, the decision rules were clarified and a new set of abstracts was distributed to the reviewers. This process was repeated until the reviewers agreed unanimously on the inclusion or exclusion of 80% or more of the abstracts. Two reviewers achieved unanimous agreement on the inclusion of abstracts more than 90% of the time.
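The agreement check described above amounts to a percent-agreement calculation over paired include/exclude decisions. A minimal sketch, with illustrative booleans (`True` = include the abstract) rather than actual reviewer data:

```python
# Percent agreement between two reviewers' include/exclude decisions.
def percent_agreement(decisions_a, decisions_b):
    """Share of abstracts on which two reviewers made the same call."""
    matches = sum(a == b for a, b in zip(decisions_a, decisions_b))
    return matches / len(decisions_a)

reviewer_1 = [True, True, False, True, False, True, False, True, True, True]
reviewer_2 = [True, True, False, False, False, True, False, True, True, True]
print(percent_agreement(reviewer_1, reviewer_2))  # 0.9
```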

Two filter sets were examined: patient outcome and the nursing diagnosis and primary data filters. Each set needed its own pool of abstracts. The principal investigator, who was not a manual reviewer, selected the abstract pools. The patient outcome abstract pool was selected by applying the patient outcome filter to the PubMed search engine and limiting the results to abstracts only, English language, and the nursing journal subset. Citations were sorted by journal and the number of citations per journal was counted. All abstracts from the four journals that contributed the greatest number of outcome-related citations constituted the abstract pool. The pool was limited to January 1, 1997, to December 31, 2001. Four of the five nurses with prior IR ratings participated in this analysis; one was unable to participate. Decision rules for accepting or rejecting abstracts based on the presence or absence of patient outcomes content were distributed to the reviewers. The manual reviewers were each assigned one of the four journals and analyzed the first 100 abstracts drawn from that journal. Their decisions regarding the acceptance or rejection of the abstract on the basis of its patient outcome content were considered the gold standard against which the ability of the filter to accept or reject the same was compared.

The first analysis was disappointing. While the patient outcome filter's specificity ranged from 52% to 91%, its sensitivity only ranged from 44% to 55%. The filter was adjusted as a result: Terms were added with the intent of increasing its sensitivity. The sensitivity and specificity of the improved patient outcomes filter was tested against the gold standard manual review of the investigator with the highest IR (Table 1).

The nursing diagnosis and primary data filters were evaluated next. To obtain a larger abstract pool, a different sampling method was used. The sleep search strategy identified in step 1 was applied to the nursing diagnosis and primary data filters and searched in PubMed. The results were limited to the nursing journal subset, abstracts only, with articles in the English language. Seventy-seven citations were retrieved. For each abstract, the journal and year were noted. All abstracts in the same year for each of the journals were retrieved, yielding a total of 4,330 abstracts.

Over a 3-month period, each abstract was reviewed by the investigator who earlier had been unable to participate. This reviewer's prior IR was greater than 90%. Her review constituted the gold standard against which the sensitivity and specificity of the nursing diagnosis and primary data filters were evaluated. An abstract was accepted if it included sleep-related nursing diagnosis terminology (the intent of the nursing diagnosis filter) and data collected at the point of patient contact (the intent of the primary data filter). Sleep-related nursing diagnosis terminology was defined according to the 2001–2002 NANDA definitions and defining characteristics of disturbed sleep patterns and sleep deprivation.

Step 6: retrieval comparisons

After the sensitivity and specificity analyses were completed, the retrieval of the EBN diagnosis and primary data filters was compared with the PubMed Clinical Queries (CQ) retrieval. It was logical to assume that the results would be different, because the filters used different terms. The point of the retrieval comparison was to identify and characterize retrieval differences.

The sleep search strategy (step 1) was applied to the EBN matrix nursing diagnosis and primary data and the CQ diagnosis/sensitivity filter options. The rationale underlying this comparison assumes that the EBN matrix and the CQ filters were similar in that both interfaced with PubMed and were intended to be diagnosis filters. Furthermore, in the absence of the EBN matrix, CQ was the only available research methodology filter for use by nurses in the public domain and hence the only research methodology filter widely available for decision support purposes in clinical nursing. Its capability of retrieving appropriate EBN references was, therefore, of interest to nursing practice.

Comparisons between the EBN patient outcomes and the CQ prognosis filters were not made, because nursing-sensitive patient outcomes did not equate well with disease prognoses and the intent of the two filters varied too greatly. For example, pain intensity is a diagnostic-specific patient outcome in nursing, but it is not a prognostic statement, that is, not a statement of how well the patient will or will not recover in the long run.

Step 7: presentation of databases for public use

Once retrieval was compared between the EBN matrix and CQ filters (step 6), the team decided to establish NLINKS hyperlinks to the EBN databases, so that others could use the databases for subject searches of their own choosing.

RESULTS

Sensitivity and specificity results are presented in Table 1. Sensitivity less than 90% means that the filter would benefit from being made less restrictive, so that a greater number of appropriate articles would be captured. Specificity less than 90% means that the rejection ability of the filter needs to be improved. Sensitivity and specificity are not necessarily inversely related. Having both sensitivity and specificity above 90% is possible. The gains attained, however, need to be weighed against the cost, because refinement and testing of sensitivity and specificity is a labor-intensive process. Additionally, other evaluation methods are available, such as retrieval comparisons.

The EBN matrix (nursing diagnosis/primary data) and the CQ (diagnosis/sensitivity) retrievals were compared after applying the same sleep search (step 1) to each. CQ retrieved 215 citations; the corresponding EBN matrix retrieved 221 citations. Seventy citations were common to both retrieval sets, leaving 151 records unique to the EBN set and 145 records unique to CQ.
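The overlap arithmetic reported above can be verified with simple set operations. The PMIDs here are synthetic stand-ins sized to match the reported counts, not the actual retrieved citations.

```python
# Retrieval set comparison sketch: 215 CQ citations, 221 EBN citations,
# constructed so that exactly 70 are common to both sets.
cq_set = set(range(0, 215))               # 215 Clinical Queries citations
ebn_set = set(range(145, 145 + 221))      # 221 EBN matrix citations

common = cq_set & ebn_set                 # intersection
ebn_unique = ebn_set - cq_set             # records unique to the EBN matrix
cq_unique = cq_set - ebn_set              # records unique to CQ

print(len(common), len(ebn_unique), len(cq_unique))  # 70 151 145
```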

Comparisons of publication type revealed differences between the unique CQ records. Thirty-six percent of the unique CQ records were review articles. Because the intent of the EBN nursing diagnosis and primary data filters was to retrieve nursing diagnosis abstracts reporting data collected at the point of patient contact, “review” was “NOT'd” out of the EBN filter. Seventeen percent of the unique CQ records represented primary data articles that included content on sleep deprivation or sleep disorders and should have been retrieved by the EBN nursing diagnosis or primary data filter. This finding supported the need for improvement in the sensitivity of the EBN matrix. The remaining CQ unique records represented articles based on tertiary data (e.g., expert opinion research, reflections, essays). Clearly, the EBN filter excluded what it intended to exclude (specificity), while its sensitivity could be enhanced.

Records unique to the EBN matrix were evaluated by one nursing member of the team and two reference librarians. Most were deemed appropriate nursing diagnosis or primary data selections by the nurse and librarians, yet CQ had not retrieved them. CQ's inability to retrieve the diagnosis literature unique to the EBN matrix filters was essentially a problem of indexing. Few or no records in PubMed about sleep deprivation or disturbed sleep patterns were indexed to nursing diagnoses. They were indexed to etiology, therapy, or nursing subheadings, which were then attached to terms indexed as medical signs, symptoms, or disease. Because the EBN matrix filter searched the same PubMed database as CQ did, its EBN filter terms and structure overcame this indexing problem.

An example may help clarify the effects of these indexing constraints on the retrieval of standardized nursing diagnosis literature. The research team tracked another condition that can be a nursing diagnosis or a medical sign or symptom: pain. In the MeSH browser, “pain” is listed as a sign or symptom and as a diagnosis. As a diagnosis, however, it is described only in relationship to the role it plays in the differential diagnosis of disease and not in the role it plays as a health problem response or nursing diagnosis amenable to nursing interventions. These differences have implications for retrieval and clinical decision support, especially if databases are to maximize their usefulness as support tools for evidence-based clinical decisions across health profession fields.

When the team completed the evaluation process, they decided to make the EBN matrix databases available for clinical, educational, or research use. The following filter databases were positioned in vertical order at the top of the Web page for the Research Center:

  • The Nursing Diagnosis Database
  • Nursing Diagnosis and Primary Data Database
  • Nursing Sensitive Patient Outcomes Database
  • The Primary Data Database
  • Nursing Sensitive Patient Outcomes and Primary Data Database

Adjacent to each database is a User's Guide hyperlink with a pop-up message box containing these instructions:

  1. Click on the database link.
  2. Notice that a PubMed Web page appears.
  3. Notice that search terms are already entered into the search box. This is the database filter. Do not add to or subtract from these terms if you wish to use the filter as it was designed and tested.
  4. To use the database filter, add your own search term, terms, or search string in the PubMed search box before the filter terms already present in the box.
  5. Connect your search term or terms to the filter with a Boolean AND.
  6. Be sure to leave a space before and after AND.
  7. Be sure to type the Boolean AND in capital letters.
  8. Click on the “Go” button.
  9. Your search results will appear.
  10. To limit your search, click on “Limits” in the PubMed tool bar and then limit by year, abstracts only, subset, etc. Note that one of the choices in the subset drop-down menu is “Nursing Journals,” meaning that you can limit your search to nursing journals.
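The same combination can be performed programmatically through NCBI's E-utilities interface, which accepts the same query syntax as the PubMed search box. A minimal sketch, assuming network access; the filter string shown is an illustrative placeholder, not one of the published EBN matrix filters:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def combine_with_filter(subject: str, filter_terms: str) -> str:
    """Prepend the subject search to the filter with a Boolean AND,
    leaving a space on each side, as the User's Guide instructs."""
    return f"{subject} AND ({filter_terms})"

def pubmed_count(query: str) -> int:
    """Run the combined query via E-utilities esearch and return the hit count."""
    url = ESEARCH + "?" + urlencode({"db": "pubmed", "term": query})
    with urlopen(url) as resp:
        xml = resp.read().decode()
    # esearch returns XML; the first <Count> element holds the total
    return int(xml.split("<Count>")[1].split("</Count>")[0])

# Illustrative placeholder filter, NOT the published EBN filter terms
ebn_filter = "nursing diagnosis[mh] OR nursing assessment[mh]"
query = combine_with_filter("sleep deprivation", ebn_filter)
print(query)  # sleep deprivation AND (nursing diagnosis[mh] OR nursing assessment[mh])
# print(pubmed_count(query))  # uncomment to run the live search
```

Wrapping the filter in parentheses preserves its internal Boolean structure when the subject term is AND'ed in front of it, mirroring what happens when a user types a term before the prefilled filter in the search box.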

CONCLUSIONS

This paper reports on the development and evaluation of the EBN matrix search filters, the foundation for the five related databases presented here. Through an interface with the PubMed search engine, the EBN filters have been inserted into a database that executes filter searches, retrieves citations, and stores and updates retrieved citation sets hourly. The filters add value to the field of evidence-based practice. They are easy to use and save search time, requiring only that the searcher combine the filter with the desired subject search (e.g., a specific nursing diagnosis) using the Boolean operator “AND.” Other useful EBN filter features include:

  • Predefined and pre-saved search strategies assist the searcher in retrieving high-quality research papers [8].
  • EBN search filters can be easily saved in PubMed by using PubMed's Cubby function.
  • Through their connection with PubMed, EBN search filters are in the public domain, benefiting all interested researchers and clinicians worldwide.

Indexing surfaced as an issue upon examining unique records of the EBN matrix not retrieved by CQ. This indexing issue is more complicated than the analysis of sensitivity and specificity or the number of review publication types in CQ. Indexing procedures seemed to have prevented the CQ filter from retrieving the articles deemed appropriate to the search by the EBN filter and reviewers. Issues relating to definition, in turn, helped explain the indexing issue. Abstracts retrieved by the EBN matrix filter and deemed appropriate by research reviewers were not indexed as such by PubMed, because the definition of nursing diagnosis in MeSH related nursing diagnoses to assessment, interventions, or therapy, but the assessment and intervention or therapy definitions did not relate to or refer back to nursing diagnoses. Furthermore, the MeSH definition for nursing diagnosis used in indexing was a process definition. It referred to diagnoses being the conclusions drawn from nursing assessment, which would be analogous to defining medical diagnoses as conclusions drawn from a history and physical exam. Neither addressed the subject matter of the diagnosis: disease for medical diagnoses and health problem or life process responses for nursing diagnoses.

The result is that standardized nursing diagnoses (e.g., sleep deprivation or disturbed sleep pattern) apparently are not indexed as nursing or diagnosis or as “nursing diagnosis,” unless they occur concomitantly with the MeSH term “nursing diagnosis.” This indexing is analogous to not indexing hypertension as a disease or diagnosis unless it is accompanied by the term “disease diagnosis” or “medical diagnosis.” Because the same subject search strategy was applied to the EBN matrix and the CQ filters, the retrieval comparisons show that even advanced search strategies can be constrained by indexing decisions.

These observations have implications for clinical decision support in nursing practice. Nurses using PubMed or CQ alone are likely to have limited success accessing nursing diagnosis information because of these indexing issues. The nursing diagnosis and primary data EBN matrix filters, which interface with PubMed, overcome these indexing problems.

While using filters to search the medical literature is not new, tools developed for EBM literature retrieval are not sufficiently broad to locate EBN literature [21]. EBN search filters, or any other valid nursing-sensitive filters accessible via the Internet, complement methodologies for clinical decision support and facilitate literature reviews conducted for practice, research, or educational purposes. Through their connection with PubMed, EBN search filters are in the public domain, benefiting all interested researchers and clinicians worldwide. Assessing the evidence-based practice (EBP) literature is increasingly recognized as a critical skill required for answering everyday clinical questions. Barriers—such as lack of time, difficulty formulating or translating questions for EBP, or difficulty developing an optimal search strategy—all militate against effective use of the research literature [16]. Most importantly, not all health professions draw on the same pool of evidence. While evidence-based nursing and medicine have areas of overlap, they have essential and consequential differences as well [21].

Footnotes

* This research was funded by the Saint Louis University Faculty Development Fund.

 NLINKS is a virtual, international partnership of individuals, groups, and organizations designed to advance the development, testing, and refinement of language and informatics in nursing knowledge systems. Its Research Center is devoted to the development of evidence-based nursing filters and databases in the public domain.

REFERENCES

  • University of Toronto Libraries, Centre for Evidence-Based Medicine. Practising EBM. [Web document]. 2004. [cited 22 May 2004]. <http://www.cebm.utoronto.ca/practise/>.
  • Jenicek M. Clinical case reporting in evidence-based medicine. New York, NY: Oxford University Press, 2001.
  • McKibbon KA, Eady A, and Marks S. PDQ: evidence-based principles and practice. Hamilton, ON, Canada: B. C. Decker, 1999.
  • National Library of Medicine. PubMed's Clinical Queries research methodology filters. [Web document]. [cited 22 May 2004]. <http://www.ncbi.nlm.nih.gov/entrez/query/static/clinicaltable.html>.
  • Haynes RB, Wilczynski NL. (Hedges Team). Optimal search strategies for retrieving scientifically strong studies of diagnosis from Medline: analytical survey. [Web document]. BMJ 2004;328(7447). Available from: <http://bmj.bmjjournals.com/cgi/content/full/328/7447/1040>. [cited 11 May 2004]. [PMC free article] [PubMed]
  • Wilczynski NL, Haynes RB. Developing optimal search strategies for detecting clinically sound causation studies in MEDLINE. Proc AMIA Annu Symp 2003:719–23. [PMC free article] [PubMed]
  • Wilczynski NL, Haynes RB. Robustness of empirical search strategies for clinical content in MEDLINE. Proc AMIA Annu Symp 2002:904–8. [PMC free article] [PubMed]
  • Haynes RB, Wilczynski N, McKibbon KA, Walker CJ, and Sinclair JC. Developing optimal search strategies for detecting clinically sound studies in MEDLINE. J Am Med Inform Assoc 1994 Nov–Dec;1(6):447–58. [PMC free article] [PubMed]
  • Wilczynski NL, Walker CJ, McKibbon KA, and Haynes RB. Assessment of methodologic search filters in MEDLINE. Proc Annu Symp Comput Appl Med Care 1993:601–5. [PMC free article] [PubMed]
  • Edward G. Miner Library. Evidence-based filters for Ovid MEDLINE [Nesbit guide to evidence based resources]. [Web document]. 2004. [cited 22 May 2004]. <http://www.urmc.rochester.edu/hslt/miner/digital_library/tip_sheets/OVID_eb_filters.pdf>.
  • Edward G. Miner Library. Evidence based filters for Ovid CINAHL [Nesbit guide to evidence based resources]. [Web document]. 2004 [cited 22 May 2004]. <http://www.urmc.rochester.edu/hslt/miner/digital_library/tip_sheets/Cinahl_eb_filters.pdf>.
  • Glover J. Finding the evidence on PubMed. [Web document]. 2004. [rev. 4 Feb 2002; cited 22 May 2004]. <http://info.med.yale.edu/library/reference/publications/pubmed/#simple>.
  • Duke University Medical Center Library. Evidence based medicine: filters/strategies for Ovid Medline [Web document]. 1998 [rev. 14 Oct 1998; cited 22 May 2004]. <http://www.mclibrary.duke.edu/respub/guides/filters.html>.
  • Oxford Centre for Evidence-Based Medicine. Searching for the best evidence in clinical journals. [Web document]. [cited 22 May 2004]. <http://www.cebm.net/searching.asp>.
  • Library of the Health Sciences Peoria, University of Illinois at Chicago. Evidence-based medicine: finding the best clinical literature: is all evidence created equal? [Web document]. 2004. [rev. 2 Dec 2003; cited 13 May 2004]. <http://www.uic.edu/depts/lib/lhsp/resources/levels.shtml>.
  • Murphy SA. Research methodology search filters: are they effective for locating research for evidence-based veterinary medicine in PubMed? J Med Libr Assoc 2003 Oct;91(4):484–9. [PMC free article] [PubMed]
  • Murphy SA. Applying methodological search filters to CAB Abstracts to identify research for evidence-based veterinary medicine. J Med Libr Assoc 2002 Oct;90(4):406–10. [PMC free article] [PubMed]
  • Ward D, Meadows S. Family practice network is an opportunity for expert searchers. MLA News 2003;360:1,12.
  • Hardin Library for the Health Sciences. Evidence-based nursing filters (CINAHL). [Web document]. Iowa City, IA: University of Iowa, 2004. [rev. 23 Feb 2004; cited 13 May 2004]. <http://www.lib.uiowa.edu/hardin/ebmfilt_nursing.html>.
  • Saranto K, Tallberg M. Enhancing evidence-based practice—a controlled vocabulary for nursing practice and research. Int J Med Inf 2003;70:249–53. [PubMed]
  • Lavin MA, Meyer G, Krieger M, McNary P, Carlson J, Perry A, James D, Cvitan T. Essential differences between evidence-based nursing and evidence-based medicine. Int J Nurs Terminol Classif 2002;13(3):101–6. [PubMed]
  • Gebbie KM, Lavin MA. Classification of nursing diagnoses: proceedings. St. Louis, MO: Mosby, 1975.
  • North American Nursing Diagnosis Association. NANDA nursing diagnoses: definitions and classification 2001–2002. Philadelphia, PA: The Association, 2001.
  • National Council of State Boards of Nursing. Model nurse practice act, article ii, section 2b. [Web document]. (2002). [cited 27 Aug 2004]. <http://www.ncsbn.org/regulation/nursingpractice_nursing_practice_model_practice_act.asp>.
  • American Nurses Association. Nursing: scope and standards of practice. Washington, DC: American Nurses Association, 2004.
  • American Nurses Association. Nursing's social policy statement. 2nd ed. Washington, DC: American Nurses Association, 2003.
  • Cohen AM, Stavri PZ, and Hersh WR. A categorization and analysis of the criticisms of evidence-based medicine. Int J Med Inf 2004 Feb;73(1):35–43. [PubMed]
  • NANDA nursing diagnoses: definitions and classification, 2003–2004. Philadelphia, PA: Nursecom, 2003.
  • Nightingale F. A contribution to the sanitary history of the British army during the late war with Russia. London, UK: Harrison, 1859.
  • Halloran EJ, Welton JM, Englebardt SP, and Thorson MW. A comparison of nursing and medical diagnoses in predicting hospital outcomes. Proc AMIA Symp, 1999:171–5. [PMC free article] [PubMed]
  • Centre for Evidence Based Medicine, University Network-Mount Sinai Hospital, University of Toronto Libraries. Glossary of EBM terms. [Web document]. 2004. [cited 22 May 2004]. <http://www.cebm.utoronto.ca/glossary/>.
  • Centre For Evidence-Based Medicine, Oxford University. Levels of evidence and grades of recommendation. [Web document]. Oxford, UK: Oxford University. [cited 22 May 2004]. <http://www.cebm.net/levels_of_evidence.asp>.
  • Lyons SS, Specht JKP. Prompted voiding for persons with urinary incontinence. Iowa City, IA: University of Iowa Gerontological Nursing Interventions Research Center, Research Dissemination Core, 1999.
  • Folkedahl BA, Frantz RA. Prevention of pressure ulcers. Iowa City, IA: University of Iowa Gerontological Nursing Interventions Research Center, Research Dissemination Core, 2002.
  • Hodgins C, Mosley M, and Pola-Strowd M. Recommendations for the diagnosis and management of recurrent aphthous stomatitis. Austin, TX: University of Texas at Austin, School of Nursing, 2003.
  • Cesario S, Morin K, Santa-Donato A. Evaluating the level of evidence of qualitative research. J Obstet Gynecol Neonatal Nurs 2002;31(6):708–14. [PubMed]
  • Centers for Medicare & Medicaid Services. December 2002 revised long term care resident assessment instrument user's manual for the minimum data set (MDS) version 2.0. [Web document]. [Dec 2002]. [rev. 23 Apr 2004; cited 22 May 2004]. <http://www.cms.hhs.gov/quality/mds20/>.
  • Johnson M, Moorhead S, and Maas M. Nursing outcomes classification (NOC). St. Louis, MO: Mosby, 2000.
  • Moorhead S, Johnson M. Diagnostic-specific outcomes and nursing effectiveness research. Int J Nurs Terminol Classif 2004;15(2):49–57. [PubMed]
  • Wilczynski NL, Walker CJ, McKibbon KA, and Haynes RB. Quantitative comparison of pre-explosions and subheadings with methodologic search terms in MEDLINE. Proc Annu Symp Comput Appl Med Care 1994:905–9. [PMC free article] [PubMed]
  • McKibbon KA. Telephone communication, May 2002.

Articles from Journal of the Medical Library Association : JMLA are provided here courtesy of Medical Library Association