Melissa Rethlefsen

Comments 1 to 8 of 8

  • Partial breast irradiation for early breast cancer.

    Hickey BE. Cochrane Database Syst Rev. 2016. (1 comment)

    Melissa Rethlefsen, 2017 Feb 08, 10:46 a.m. (5 of 5 people found this helpful)

    I thank the authors of this Cochrane review for providing their search strategies in the document's Appendix. In trying to reproduce the Ovid MEDLINE search strategy, however, we came across several errors. It is unclear whether these are transcription errors or errors in the search as actually performed, though the former is likely.

    For instance, in line 39, the search reads "tumour bed boost.sh.kw.ti.ab" [quotes not in original]. The correct syntax would be "tumour bed boost.sh,kw,ti,ab" [no quotes], with commas rather than periods separating the field codes. The same is true of line 41, where periods again appear in place of commas.

    In line 42, the search reads "Breast Neoplasms /rt.sh" [quotes not in original]. It is not entirely clear what the authors meant, but they likely intended to search the MeSH heading Breast Neoplasms with the subheading radiotherapy. If so, the search should have been "Breast Neoplasms/rt" [no quotes].

    In lines 43 and 44, it appears the authors were trying to search the MeSH term "Radiotherapy, Conformal" with two different subheadings, which they spell out in full and end with a subject heading field search (i.e., Radiotherapy, Conformal/adverse events.sh). In Ovid syntax, however, the correct form would be "Radiotherapy, Conformal/ae" [no quotes], using the two-letter subheading code rather than the spelled-out subheading, and without the extraneous .sh.

    In line 47, there is another minor error: .sh is again extraneously appended to the search term "Radiotherapy/" [quotes not in original].
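    For clarity, here is a consolidated sketch of the corrected Ovid lines as we read them (the search term in line 41 and the second subheading in line 44 are not quoted above, so those are left schematic):

        39  tumour bed boost.sh,kw,ti,ab
        41  [same fix: commas, not periods, between the field codes]
        42  Breast Neoplasms/rt
        43  Radiotherapy, Conformal/ae
        44  Radiotherapy, Conformal/[two-letter code for the second subheading]
        47  Radiotherapy/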

    Though these errors are minor and almost certainly transcription errors, each of these lines produces an error in Ovid when replicated as written. A searcher who does not know how to fix these problems cannot replicate the search. Because the search could not have run as published, this is unlikely to be how the search was actually performed; it is nonetheless a good case study in how even small details matter greatly for the reproducibility of search strategies.

  • Melissa Rethlefsen, 2017 Jan 05, 7:00 p.m. (3 of 3 people found this helpful)

    Though the authors clearly recognize a need to search multiple databases to gather as many relevant references as possible, a major concern is that the information sources listed as searched in this article are largely not databases, but database platforms.

    The authors searched "Scopus, EBSCOhost, Ovid, and Web of Science platforms." Of those four, only Scopus is a unique database. EBSCOhost is a platform containing many different databases; I counted 71 databases beginning with the letter "A" on its title list (https://www.ebscohost.com/title-lists). Ovid is similarly a platform with many database options, though not quite as many as EBSCOhost: Ovid advertises "over 100" (http://www.ovid.com/site/catalog/databases/index.jsp). Web of Science is likewise a platform with multiple database offerings (22 of them, with different date range options available): http://thomsonreuters.com/en/products-services/scholarly-scientific-research/scholarly-search-and-discovery/web-of-science.html

    Unfortunately, this article does not include a replicable search strategy in the text or in a supplementary document, so it is not possible to tell which databases were used or what search strategies were run against them. Because this is a mixed methods review without an established protocol, it may be unreasonable to expect the authors to report their search methods as stringently as in a "true" systematic review; but since the authors describe the work as a systematic review, it would have been appropriate to document and report the search methods according to known standards (e.g., PRISMA, MOOSE).

    This study might have benefited from the inclusion of a librarian or information specialist on the team to improve the documentation and reporting of this key methodology. Additional peer review by librarians and information specialists could also help identify reporting concerns, including missing search details and incomplete descriptions of the information sources used.

  • Melissa Rethlefsen, 2017 Jan 05, 6:09 p.m. (edited; 5 of 5 people found this helpful)

    The authors have wisely searched multiple sources to inform this systematic review. Even better, they have listed the number of references found per information source, prior to deduplication, in their flow diagram. Upon closer examination of the flow diagram, I noted that after deduplication the authors were left with 395 citations. This seems highly unusual, since some of the individual information sources (e.g., Scopus and "ISI Web of Science") each retrieved more than 395 citations on their own.

    Though it is possible that there is some duplication within Scopus itself, it is highly unlikely that there were 76 duplicates in that database. Because the search strategies are not reported in full (in a form requiring no interpretation) and would not work in any database other than MEDLINE (the "major medical subject heading 'pulmonary arterial hypertension'" is MEDLINE specific), it is difficult to ascertain whether Scopus was searched multiple times, perhaps with multiple search strategies, without deduplication before the results were exported. This is the likely explanation, but it remains unclear.
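    As an aside for readers unfamiliar with the syntax: a major ("focused") subject heading search in Ovid MEDLINE is written with a leading asterisk on the MeSH descriptor, along these lines (assuming the descriptor intended maps to the MeSH heading Hypertension, Pulmonary):

        *Hypertension, Pulmonary/

    Scopus and the non-MEDLINE Web of Science indexes have no MeSH field, so a line like this has no direct equivalent on those platforms.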

    For ISI Web of Science, it is markedly more challenging to guess where the gap between the 680 results and the 395 total deduplicated results comes from, because the missing 285 articles could be explained in several ways. First and foremost, ISI Web of Science is not a database; it is a platform that hosts many databases, depending on the subscriptions held by the institution. At my institution, for example, the ISI Web of Science platform hosts: Science Citation Index Expanded (SCI-EXPANDED), 1900-present; Social Sciences Citation Index (SSCI), 1900-present; Arts & Humanities Citation Index (A&HCI), 1975-present; Emerging Sources Citation Index (ESCI), 2015-present; BIOSIS Previews; MEDLINE; Russian Science Citation Index; SciELO Citation Index; and KCI: Korean Journal Database. It is unclear which ISI Web of Science database(s) were searched in this instance. If multiple databases on the platform were searched individually (e.g., Science Citation Index and MEDLINE), the deduplicated total could fall below the platform's combined count. It could also be that multiple searches were performed without deduplication before the results were exported. Since the search strategy loosely described in the text would not work in ISI Web of Science except in the MEDLINE database within it, it is unclear what searches were actually performed on that platform.

    Though the authors make a good attempt at transparent methods, the limited reporting of the search strategies and of which databases were used makes their data flow hard to track and reproduce. Having a librarian or information specialist on the team might have helped improve both the search process and its reporting.

  • Melissa Rethlefsen, 2017 Jan 05, 5:30 p.m. (1 of 1 people found this helpful)

    Upon reading this article, I came across a small detail I am curious about. In the flow diagram (Figure 1), the authors note in the final step that they found 127 reports to 15 trials. Generally, I would interpret this as the authors having located 127 references discussing 15 different trials. Other Cochrane reviews often note this as X references to Y studies, and all of the references for each study are then listed under the main study name in the included studies list. In this case, there appear to be only 15 references to 15 studies in the included studies list. Is this a typo in the flow diagram, or were the remaining 112 references that could have been ascribed to the 15 studies erroneously omitted?

  • Melissa Rethlefsen, 2016 Nov 01, 11:55 a.m. (4 of 4 people found this helpful)

    I thank Dr. Thombs for his responses, particularly for pointing out the location of the search strategy in the appendix of Thombs BD, 2014. I am still uncertain whether the search strategies in question were the ones used to validate whether the primary studies would be retrieved ("In addition, for all studies listed in MEDLINE, we checked whether the study would be retrieved using a previously published peer-reviewed search [9]."), for two reasons: 1) the cited study (Sampson M, 2011, about the method of validation) does not include the search strategy Dr. Thombs notes below, though the strategy is cited earlier, where the search to identify meta-analyses meeting the inclusion criteria is discussed; and 2) the search strategy in Thombs BD, 2014 is very specific to the "Patient Health Questionnaire." Was this search strategy modified to include other instruments? Or was the Patient Health Questionnaire the only depression screening tool in this project? It appeared as though other scales were included, such as the Geriatric Depression Scale and the Hospital Anxiety and Depression Scale, hence my confusion.

    I greatly appreciate the information about the reduction in the number of citations to examine; this is indeed highly beneficial information. I am curious whether the high number of citations came primarily from the inclusion of one or more Web of Science databases. Looking at the Thombs BD, 2014 appendix, multiple databases (SCI-EXPANDED, SSCI, A&HCI, CPCI-S, CPCI-SSH) were searched on the Web of Science platform. Were one or more of those a major contributor to the extra citations retrieved?

    Though Dr. Thombs and his colleagues make excellent points about the need to make the best use of limited resources, even at some cost to completeness, which I fully agree with, my concern is that studies performing post hoc analyses of database contributions to systematic reviews can lead those without information retrieval expertise to believe that searching one database is comprehensive, when in fact the skill of the searcher is the primary factor in recall and precision. Most systematic review teams do not have librarians or information specialists, much less ones with the skill and experience of Dr. Kloda. I appreciate that Dr. Thombs acknowledges the importance of including information specialists or librarians on systematic review teams, and I agree with him that the use of previously published, validated searches is a highly promising method for reducing resource consumption in systematic reviews.

  • Melissa Rethlefsen, 2016 Oct 21, 6:18 p.m. (6 of 6 people found this helpful)

    The authors are asking an important question: which database(s) should be searched in a systematic review? Current guidance from the Cochrane Collaboration, the Institute of Medicine, and most information retrieval specialists holds that a comprehensive search of the literature requires multiple databases, but searching several databases can be time-consuming and may return more citations than are manageable to review. In this paper, the authors posit that searching MEDLINE alone would be sufficient to locate relevant studies when conducting systematic reviews with meta-analysis on depression screening tools.

    Though the authors’ methodology is detailed, one limitation acknowledged in the paper reads: “we were unable to examine whether the search strategies used by authors in each meta-analysis did, in fact, identify the articles indexed in MEDLINE as most included meta-analyses did not provide reproducible search strategies.” Because of this, the conclusions of this study must be viewed with caution. If the searches conducted by the original authors did not locate the studies in MEDLINE, the fact that the studies could theoretically have been located in MEDLINE is irrelevant. Finding results in MEDLINE depends largely on the ability of the searcher, the sensitivity of the search design, and the skill of the indexer (Wieland LS, 2012; Suarez-Almazor ME, 2000; Golder S, 2014; O'Leary N, 2007). Searching for known items to assess database utility in systematic reviews has been done before (see, for example, Gehanno JF, 2013), but it has been critiqued for its lack of search strategy assessment (Boeker M, 2013; Giustini D, 2013).

    Using what they state was “a previously published peer-reviewed search”, an uncited strategy run in an unspecified version of MEDLINE on the Ovid SP platform, the authors could find only 92% of the included articles, versus the 94% available in the database. Unfortunately, there is little reason to suppose that authors of systematic reviews can be expected to conduct a “reasonable, peer-reviewed search strategy.” In fact, researchers have repeatedly shown that even fully reproducible reported search strategies often contain fatal errors and major omissions in search terms and controlled vocabulary (Sampson M, 2006; Rethlefsen ML, 2015; Koffel JB, 2016; Golder S, 2008). Though working with a librarian or information specialist is recommended as a way to enhance search strategy quality, studies have shown that certain disciplines never work with librarians on their systematic reviews (Koffel JB, 2016), and the disciplines where it is more common still do so only about a third of the time (Rethlefsen ML, 2015). Tools like PRESS were developed to improve search strategies (McGowan J, 2016), but search peer review is rarely done (Rethlefsen ML, 2015).

    The authors also state that “searching fewer databases in addition to MEDLINE will result in substantively less literature to screen.” This has not been shown by this study. The authors do not report the number of articles retrieved by their own search or by any of the searches in the 16 meta-analyses they evaluate. Furthermore, no data on precision, recall, or number needed to read were given for either their search or the meta-analyses' searches. These data could be reconstructed and would give readers concrete information to support the claim, which would be particularly helpful in light of the information provided about the number and names of the databases searched. Other studies of database performance for systematic reviews have included precision and recall for the original search strategies and/or the reported included items (Preston L, 2015; Bramer WM, 2013; Bramer WM, 2016). These studies have largely found that searching multiple databases is of benefit.
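    For reference, the standard definitions of these measures (stated here generally, not drawn from the paper under discussion) are:

        precision             = relevant records retrieved / total records retrieved
        recall (sensitivity)  = relevant records retrieved / total relevant records
        number needed to read = 1 / precision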

  • Melissa Rethlefsen, 2016 Sep 19, 6:15 p.m. (1 of 1 people found this helpful)

    I applaud the authors' reporting of all of the search strategies behind this Cochrane review. There is, however, one problem area: the MEDLINE search strategy for this review update is not reproducible, for two reasons. First, there is no PRISMA-style flow diagram indicating the number of references found overall or per database, nor in-text reporting of the total number of references identified by the searches before and after deduplication. Second, there is a major Boolean error in the MEDLINE search strategy, which is missing one or more lines that would combine the gynecological cancer terms with the obstruction terms. The intended logic can be partly extrapolated from the more complete search strategies reported for EMBASE and the other databases, but without a flow chart it is not clear whether the extrapolated search is what the authors actually performed. It appears that the correct addition would be to combine lines 15 and 23 with AND, plus add the additional date limitation.
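    Schematically, the missing step might look like the following (the new line numbers are hypothetical, and the exact date restriction is left as reported elsewhere in the review; it is shown here as an Ovid year limit, though an entry-date limit may have been intended):

        24  15 and 23
        25  limit 24 to yr="..."

    Line 24 would intersect the gynecological cancer set (line 15) with the obstruction set (line 23), and line 25 would apply the update's date restriction.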

  • Melissa Rethlefsen, 2016 Mar 04, 2:08 p.m. (10 of 10 people found this helpful)

    We attempted to replicate the authors’ Embase search strategy as reported in the Appendix (pages 13-14). According to the PRISMA flow chart, also in the Appendix (page 37), the authors retrieved 1,371 results for their Embase search, prior to deduplication of records. The authors specifically mention in the "search strategy and selection criteria" panel of the primary manuscript that no language limit was imposed on this search, and no other limits are mentioned.

    In case the phrase “for randomised controlled trials” in the "search strategy and selection criteria" panel indicated a limit to ‘randomized controlled trials’ as a study type (using the EMTREE term), we restricted the search to this term. There was also no indication of the dates of coverage of the version of Embase (via the Ovid platform) used; there are multiple possible date ranges (1947-1979; 1980-1987; 1988-1995; 1947 to present; 1974 to present; 1980 to present; 1988 to present; 1989 to present; and 1996 to present). We tried each of the date ranges, in each case limiting to entry dates up to February 1, 2014, to mimic the published search as closely as possible. No exact search date was given, but the authors did note that the search went "up to January 2014"; we used a February 1, 2014 entry date to allow for a late January 2014 search, though it was not clear whether the authors searched in January 2014 or limited to publications published before January 2014. Even with the assumed ‘randomized controlled trials’ EMTREE term applied, we could not replicate the search count for any year range available by default in any Embase via Ovid database option.
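    For anyone repeating this exercise, the restrictions we applied can be sketched as follows for Ovid Embase (the EMTREE limit reflects our assumption about the authors' intent, and the entry-date restriction is described rather than given exact field syntax, since that varies by database segment):

        1  [the authors' published Embase strategy, as reported in the Appendix]
        2  randomized controlled trial/
        3  1 and 2
        4  [line 3 restricted to records with an entry date before February 1, 2014]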

    The search as presented by the authors retrieves over 50,000 records in Embase (up to an entry date of February 1, 2014). With the ‘randomized controlled trials’ EMTREE term applied as a limit, it retrieves over 3,100 articles. Even mimicking the smallest Embase database coverage available at the time of the authors’ search (1996 to present, limited to an entry date before February 1, 2014), we still see nearly 3,000 articles: a difference of roughly 1,600 records between the authors’ published result and what their published search strategy retrieves, even with these assumptions applied.

    This study highlights the need for more accurate and comprehensive reporting of search strategies in systematic reviews and other literature search-based research syntheses, and for better peer review of search strategies by information specialists and medical librarians. Though the searches in the Appendix appear at face value to be replicable and of high quality, on closer inspection they do not meet the reporting standards outlined in PRISMA Statement items #7 and #8: “Describe all information sources in the search (e.g., databases with dates of coverage, contact with study authors to identify additional studies) and date last searched” and “Present the full electronic search strategy for at least one major database, including any limits used, such that it could be repeated.”
