Sustainable collaboration for community outreach: Literature/Uso Directo (SALUD) project



INTRODUCTION AND BACKGROUND
Health sciences librarians have an active interest in community health information outreach projects [1–8]. Successful community partnerships are guided by each partner clearly defining goals. Accomplishing these goals requires time and personal contact to develop trust, active engagement of all partners, and careful planning [9, 10]. The investment required to develop partnerships makes project sustainability a desired outcome. The academic health sciences library exists to support the educational mission of the institution. By collaborating with other academic units, the library can meet its educational goals while building the sustainable relationships necessary for successful outreach projects.
At the University of Illinois at Chicago (UIC), the Library of the Health Sciences (LHS) and College of Nursing (CON) developed an outreach project that directly supports the mission of the university while strengthening relationships between the library, CON, and surrounding community [11]. As part of a core course in the undergraduate nursing curriculum, CON faculty supervise students in clinical rotations at two Chicago Department of Public Health (CDPH) clinics in nearby neighborhoods, engaging them in the challenges and rewards of community practice. In the fall of 2005, CON and LHS faculty saw an opportunity for outreach. The resulting Spanish Access to Literature/Uso Directo (SALUD) Public Health Information Pilot Project provided education on evaluating and using online health education materials by leveraging community health nursing students and existing relationships between these organizations. This paper describes the project's core activities with an emphasis on the project's techniques for sustainability.

THE SPANISH ACCESS TO LITERATURE/USO DIRECTO (SALUD) PROJECT
The core project team responsible for implementing the SALUD project included health sciences librarians and a nurse clinical consultant. Members of the project planning group met with CDPH clinic staff, administrators, and members of the community advisory board to identify challenges and develop an outreach plan to address the health information needs of Latino patients at these clinics. Based on quarterly chart audits and patient questionnaires from the year preceding the SALUD project, administrators were interested in exploring ways to improve their health education activities. Staff expressed a need for information that is up to date, at a low reading level, and in languages other than English. Both clinics serve a majority Latino population, many of whom speak Spanish as their primary language.
Librarians were familiar with many websites, such as MedlinePlus, that offer health education materials written at a lower reading level and in Spanish or other languages [12]. These online materials are convenient, inexpensive, and easily distributed to patients. However, health care providers continue to face barriers to their use: lack of technology to access the Internet, lack of awareness or skills to locate materials, and the dispersed selection of materials relevant to the clinic population.
Librarians had learned from previous outreach that the time necessary to develop and implement an outreach program is often underestimated [3]. With this in mind, the goals and implementation plan for the project were designed to not only meet community needs, but also closely align with institutional objectives (Table 1). This approach would allow for continuation of successful project activities and improvements on unsuccessful activities.
Team members were cognizant of building on the strengths of project partners, distributing the resource burden across organizations, and leveraging collaboration as features that appeal to funding agencies. Librarians agreed to develop resources, conduct training, and provide administrative support for the project. The consultant, a CON faculty member supervising students in clinics, agreed to act as an essential liaison to clinic staff and provide a "reality check" on the feasibility of activities. Clinic resources were limited, but administrators offered support to the SALUD liaison and were helpful in organizing training sessions. The SALUD project was approved by the UIC Institutional Review Board.
In response to the needs identified by clinic staff, librarians created a web portal designed for clinic staff use [13]. The web portal included links to health education websites and direct links to handouts in English and Spanish addressing topics commonly seen in the clinics, such as diabetes, domestic violence, and obesity. The SALUD website was designed to prevent staff with lower computer literacy from being overwhelmed by a large number of sites with vastly different interfaces and search systems. One mobile computer-printer station was placed in each clinical cluster (adult, pediatric, and women's health) at clinic I, and, due to wiring constraints, a single station was installed at clinic II. Lastly, librarians provided one-on-one, hands-on training on online health education resources for staff at both clinics.
With improvements made to clinic infrastructure, the SALUD project team was ready to implement the heart of the project: a train-the-trainer model. Librarians trained community health nursing students, as part of their clinical orientation process, to access and evaluate health education materials that are reading-level and language appropriate for community health patients. Students then applied their knowledge during their clinical rotations in the target clinics. Students, working in pairs, also gave a graded presentation about health education resources to staff at another community health clinic in the city, selected from a list of approximately thirty clinics with which CON has established relationships.

EVALUATION AND OUTCOMES
Evaluation activities were ongoing throughout the SALUD project to track progress and assess outcomes. Measures were both quantitative (e.g., numbers of participants) and qualitative. The two key methods used were written evaluations and focus groups (two student groups with a total of thirteen participants, two staff groups with a total of fifteen participants).
During the SALUD project, librarians taught six sessions to sixty-nine nursing students. In two focus groups at the close of the funded project year, students (n = 13) all indicated that they had learned about online health education resources and were able to describe trustworthy resources. Students identified several barriers to putting their knowledge into practice, including computer location and lack of time. However, they believed health education to be an important activity and hoped to have more time to use online health education resources in their own practice.
Approximately 128 clinic staff from across the city learned about online resources providing access to materials in languages other than English. Clinic staff were invited to provide oral and written feedback throughout, and a focus group was convened at each of the two CDPH clinic sites at the conclusion of the SALUD project (n = 15). Focus group participants described continued barriers to effective resource use. Good computer placement led to use of online resources at clinic I; poor placement at clinic II meant that resources were rarely used by staff. Staff at clinic II in particular noted that material on the SALUD website was at reading levels too high for their patients. Their comments were not entirely surprising, as during the project, librarians had reported difficulty in finding language- and reading-level-appropriate materials on the topics staff had identified. Web usage statistics available for the first months of the project (March, April, and May 2005) showed an average of 121 visits per month. It is unclear what percentage of these visits were made by nursing students versus clinic staff. Staff in focus groups indicated that one-on-one training was valuable, especially when tailored to different levels of knowledge and experience. Anecdotally, librarians noted that medical assistants in particular had appreciated the training, as they have few opportunities for professional development but may have a role to play in patient education.

DISCUSSION
SALUD project staff and students continue to face challenges reported in the library outreach literature: computer skill level and access, need for comprehensive yet audience-specific resources, staffing, and time demands [1–8, 14]. These barriers limit Latino patients' access to quality reading-level- and language-appropriate health education materials. The library continues to maintain the SALUD web portal for the two clinics and will periodically reevaluate its use. Usage statistics for March through May of 2007, almost a year following the end of the project funding period, show an average of 375 visits per month, more than three times the average during the project year itself.
The component deemed most successful by all parties is the sustainable collaboration. The CDPH has devoted additional resources to continuing elements of the SALUD project. Students are highly motivated trainees and trainers because participation is a mandatory and graded portion of their curriculum. Because a new group of nursing students begins approximately every five to eight weeks during the academic semester, CDPH clinic staff throughout the city receive education about a broad range of topics, and their knowledge base is refreshed as changes are made to materials, websites, and modes of access.
CDPH administrators have embraced this continuing education model, which helps them meet their goal of improved health education. A portion of the CDPH clinics' budget has been allocated to place a computer station for health education activities in the clinics. Staff at clinic II have expressed increased interest in online health education since learning that two new computers from CDPH will be installed in more convenient locations. This additional support from CDPH will improve access and training for health education activities.
A valuable outcome of this project is the strengthened relationship between library and nursing faculty. Even after completion of the pilot project, and in spite of turnover in original staff, the project has been fully integrated into the UIC curriculum for community health nursing. In the year following the contract period, the number of librarian sessions on health education resources increased from six to ten. Librarians developed a tutorial that is available via the university's online course management system. Integrating instruction into an established curriculum is an activity that librarians can sustain practically. Relationships developed through the SALUD pilot project have also enabled librarians to provide additional instruction throughout the nursing curriculum. Through better integration, they can now provide a richer educational experience for students. The librarian-led curriculum prepares nursing students to deliver patient health information in the CDPH clinics and later in their own practice. Future work could investigate the extent to which students use these resources following graduation.
Perhaps most importantly, because the collaboration between the library, CON, and clinics continues, there is an opportunity to explore new means of improving health education in the community and to develop interventions with more rigorous outcome measures. In planning for a project, it is important not only to include evaluation activities, but also to seek out assessment expertise in partner organizations. CON has both clinical and research expertise that could be utilized in future projects. For instance, the SALUD project team is curious about whether patient satisfaction improves when patients are provided with information in their preferred language and at an appropriate reading level. Future research could systematically investigate what health education materials staff would use in a clinic and how many appropriate materials are available for those topics.
Librarians at UIC focused on the library's educational mission and their own strengths as teachers in the institution. By fitting these strengths with the talent and existing relationships of the CON faculty, the SALUD project team was able to form an effective academic-community partnership. The relationship formed between the library, CON, and CDPH clinics during the project has built a foundation that will give a positive edge to future efforts to address the health education needs of Latino patients. In light of increasing demands and decreasing resources, academic health sciences librarians could consider the SALUD model in seeking opportunities to build sustainable collaborations that will serve as a foundation for outreach initiatives. Interdisciplinary, collaborative relationships diffuse the resource burden, appeal to funders, leverage diverse skills of partners, and, over time, build expertise and systems for successful outreach. Sustainable collaboration between the library and other academic units is a model for a new generation of outreach programs that can meet both educational goals and community service needs.

INTRODUCTION/BACKGROUND
Online tutorials can be a useful facet of a library's instructional strategies. According to the Instructional Technologies Committee of the Association of College & Research Libraries (ACRL), web tutorials should include interactive exercises such as simulations or quizzes [1]. These activities encourage active learning and allow students to respond to what is taught while self-assessing their own learning. Web tutorials should also provide a way to contact a librarian with questions or to give feedback about the tutorial's design or usefulness [1].
While previous studies have looked for examples of active learning in tutorials, they did not focus on academic medical libraries [2,3]. Dewald analyzed 20 tutorials (19 for post-secondary education; 1 for kindergarten-8th grade students) selected by the Research Committee of the Library Instruction Round Table of the American Library Association [3]. Hrycaj examined 65 tutorials created by member libraries of the Association of Research Libraries (ARL) [2]. Both studies emphasized the importance of including active learning in tutorials. Examples of active learning described in these articles include quizzes at the end of tutorial modules, questions integrated into the tutorial modules, exercises used in tutorial modules, quizzes requiring the use of separate browser windows, or options for sending quiz results to an instructor [2]. Dewald's 1999 study found that 37% of the tutorials included active learning features, and Hrycaj's 2005 study found that 60% of the tutorials contained some element of active learning.
The purpose of the current project was to identify and analyze freely available online tutorials created by medical libraries. The project team was interested in identifying the topics of tutorials created by medical libraries, determining common design features used in tutorials, and assessing elements of active learning in the identified library-created tutorials. The team also generated a list of third-party tutorials to which libraries link.

METHODS
Using the list of the Association of American Medical Colleges' member schools, the team identified websites for 124 academic medical libraries in the United States, which served as the review subjects [4]. (Supplemental Tables 1, 2, 3, 4, and 6 and an appendix are available with the online version of this journal.) The project team divided this list so that each team member reviewed 31 medical library websites. Each team member then searched the library sites using terms such as "tutorials," "online tutorials," and "web tutorials." Team members also browsed the library websites to locate any reference to tutorials.
Prior to examining the library sites, the team reviewed the literature to create a checklist of common tutorial features. Using the tutorial design tips obtained from this literature review and the project team's own subjective list of effective tutorial design elements, the team generated a list of ten tutorial questions (Appendix online) to use as they accessed each medical library website. Team members identified tutorials created by the library under examination as well as tutorials created by third parties (a vendor or another library) to which libraries linked. If the medical library designed its own tutorials, the team member collected data about the tutorials via the subsequent questions. The team repeated this process for each tutorial created by a medical library. If a library identified a resource as a "tutorial," the team counted the item as such, even when it appeared to be a simple handout or electronic presentation.
The team also evaluated elements of active learning. The team members counted the tutorial as being interactive (question 4) if the user was required to perform searches, complete exercises, or click on appropriate boxes for additional information. Tutorials that asked the patron to open up the database or software product in a new window and follow along with the steps in the tutorial were counted as interactive. Tutorials that simply required the patron to click a forward button to navigate the tutorial were not considered interactive. The team also collected data on whether a tutorial included a quiz or a test (question 5).
During the data collection phase, the team members consulted one another in an attempt to remain consistent in data collection. The team collected data from the 124 websites in January and February 2007 and compiled the data in an Excel spreadsheet for analysis.
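A tabulation like the one the team performed in Excel can be sketched in a few lines of Python. The records below are invented stand-ins for the per-tutorial checklist answers the team collected, not the study's actual data:

```python
# Invented per-tutorial records standing in for the checklist answers
# recorded for each library-created tutorial (yes/no per question).
tutorials = [
    {"interactive": True,  "quiz": False, "printable": True},
    {"interactive": False, "quiz": True,  "printable": False},
    {"interactive": False, "quiz": False, "printable": True},
    {"interactive": False, "quiz": False, "printable": False},
]

def percent_with(feature, records):
    """Share of tutorials (as a percentage) answering 'yes' to a question."""
    return 100 * sum(r[feature] for r in records) / len(records)

print(percent_with("interactive", tutorials))  # 25.0
print(percent_with("printable", tutorials))    # 50.0
```

With the full dataset, the same tally yields figures like the 7% interactive and 26% printable rates reported below.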

Tutorials created by third parties
Seventy-eight out of 124 library websites (63%) included links to tutorials that were created outside the library, such as by a vendor or another library (Table 1 online). Some libraries had designated sections on their websites for tutorials. In other cases, the links to tutorials were included on a Subject List page or beside links used to access databases.
Sixty-five of the 124 libraries linked to the National Library of Medicine's (NLM's) PubMed tutorial, the most commonly linked-to tutorial.

Software used to create tutorials
Some tutorials were created using more than one type of software, such as hypertext markup language (HTML) editors and electronic presentation programs.

Active learning
In most of the tutorials, the patron is passive and simply reads content or watches a demonstration of how to search a database. Seven percent of the tutorials (19/274) were considered interactive (Table 5).

Feedback
Out of the 274 tutorials observed, 66 (24%) included a survey or feedback option (Table 5). If a tutorial included a librarian's contact information or an "Ask-a-Librarian" link, the team counted it as providing a feedback option.

Target audiences
Two hundred fifty-four tutorials were designed for anyone using the library and its resources (Table 6 online). The total number of tutorials addressing targeted groups was 281, as some tutorials mentioned multiple target groups. The team found tutorials designed specifically for chemistry (6 tutorials), distance education (4 tutorials), nursing (3 tutorials), first-year medical (3 tutorials), dentistry (1 tutorial), and third-year medical students (1 tutorial). One tutorial was geared toward faculty, and one was created for researchers.

Printable parts
Only 26% of the tutorials (72/274) had printable parts, such as accompanying handouts (Table 5). If the tutorial itself was formatted to be printed, such as in Word or PDF documents, the team counted the tutorial as having printable parts. The tutorials were not counted as printable if users could only print one screen of the tutorial at a time.

CONCLUSION
Some libraries created tutorials for resources or content specific to their institutions, while relying on vendor or third-party tutorials for educating users about how to search databases. The project team believes that many libraries may be choosing to link to vendor-produced tutorials instead of creating their own due to frequent interface changes. Moreover, although the majority of observed libraries had created tutorials, most of the tutorials had simplistic designs that did not require responses from the user. Most of the libraries used HTML editors to create tutorials. Screen recording software, which is easy to use, can help librarians create sophisticated tutorials with interactive elements more quickly than using HTML alone. Further, the authors believe that quizzes and/or printable parts, like handouts, are particularly helpful for tutorials that do not include a search simulation. After users watch a demonstration of searching a resource, they can print the handout to refer to while they attempt their own search. Such simple additions as printable handouts and/or follow-up quizzes would likely increase the levels of active learning for tutorial users.
While the team consulted with each other to ensure consistency during data collection, some level of collector error may have occurred. One problem the team encountered was determining whether the medical library or the larger main campus library created the tutorial. The project team attempted to collect data only on tutorials created by medical librarians. The team did not allot time to contact libraries to confirm whether the observed tutorials were created by medical librarians or by other academic librarians.
This study was designed to look only at freely available, web-based tutorials. It is important to recognize that course-integrated tutorials and password-restricted tutorials might have more sophisticated designs. Therefore, the team could have extended the timeline of the project to request access to password-restricted tutorials, which likely would have increased the number of tutorials using active learning techniques. More libraries may be creating tutorials for specific patron groups, but access to these tutorials may also be restricted. This study might have been more systematic if the team had established a definition of a tutorial beforehand; instead, the team decided to evaluate what individual libraries called tutorials.
Further research is needed to determine whether interactive library tutorials are more effective than passive ones. The literature review could be expanded to other disciplines, such as education, which may also be researching effective design elements of tutorials. More research is also needed on the feedback and usage data that medical libraries obtain from their tutorials.
Overall, while the project team's survey of online academic medical libraries' websites revealed a large number of self-produced web tutorials, few of those tutorials incorporated active learning elements such as interactive interfaces or printable handouts. Medical libraries might want to explore incorporating such elements into tutorials to encourage learner engagement.

INTRODUCTION
Information professionals are called on to determine how best to measure the impact of an author's articles, and citation counts are often regarded as one method for obtaining a quantitative expression of the utilization and contribution of a particular published paper. As Meho states, citation analysis assumes that influential works or scientists are cited more often than others [1]. Egghe and Rousseau claim that citation counts are based on four important assumptions: an article's citation implies use of that document by the citing author; the citation reflects the merit (quality, significance, impact) of the article; the references are from the best possible works on the topic; and the cited articles are related in content to the one in which they are used [2].
Traditionally, the peer-review process has been used to assess article quality. Currently, there is a global trend toward the development, refinement, and increased use of quantitative metrics, particularly those resulting in "quantifiable, post publication quality assessment" [1, 3, 4]. However, determining impact by citation analysis can be controversial; in some cases, works are cited to point out errors and inaccuracies in the research. Additionally, long articles are often cited more frequently, and some reference lists contain erroneous citations, which can skew results. Finally, journal visibility and prestige affect dissemination, and self-citation can artificially inflate citation counts [1, 3, 5–8]. Despite these concerns, citation analysis remains a useful tool for assessing faculty research publication.
The journal impact factor (JIF) was developed to facilitate comparison between citation rates of journals and evolved into a measurement of journal quality on the assumption that a higher citation rate equaled a higher-quality journal [9]. This assumption causes concern, as Amin and Mabe indicated, because the JIF is often used as the "chief quantitative measure of the quality of a journal, its research papers, and the researchers who wrote the paper" [10]. Many authors have noted other factors that affect the actual impact factor number: (1) research field, (2) type of journal, (3) average number of authors per paper, (4) size of the journal, and (5) the two-year measurement window. Other limitations are that JIFs are biased toward US publishers, only a small percentage of articles is highly cited, and the JIF may be easily manipulated [1, 3, 10–12]. Also of note is the fact that a journal may not yet be indexed in Web of Science (WOS) or tracked in the Journal Citation Reports (JCR) database long enough to have an impact factor. For these reasons, many have cautioned against using the JIF to judge the quality or impact of individual papers or authors [9, 13].
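The two-year measurement window mentioned above refers to how the JIF is computed: citations received in a given year to items the journal published in the two preceding years, divided by the number of citable items published in those years. A minimal sketch of the calculation, with illustrative figures rather than actual JCR data:

```python
def journal_impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year journal impact factor: citations received in year Y to
    items published in years Y-1 and Y-2, divided by the number of
    citable items the journal published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Illustrative figures: 420 citations in 2007 to articles published in
# 2005-2006, and 150 citable items published in those two years.
jif = journal_impact_factor(420, 150)
print(jif)  # 2.8
```

The two-year window itself is one of the limitations listed above: fields whose literature ages slowly accrue many citations outside the window, depressing their journals' JIFs.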
Vieira and Faraino, however, used JCR to analyze the research record of their institution's list of faculty publications [14]. They pointed out that JCR can be an important research tool in indicating how faculty authors were citing the literature. More recently, Saha et al. found a strong correlation between the quality ratings of surveyed physicians of nine general medicine journals and their impact factors [15], while Yue et al. found that clinical and research neurologists' ratings of journal quality also correlated with impact factors [16]. Rice et al. provided critical information about the statistical formulas used to calculate the reliability and validity of citation data [17].

THE UNIVERSITY OF ALABAMA AT BIRMINGHAM EXPERIENCE
In October 2006, the Reference Department of the Lister Hill Library (LHL) of the Health Sciences at the University of Alabama at Birmingham (UAB) received a request from a university administrator to ascertain which papers or journal articles written by several UAB authors over the past ten years have had the greatest impact. The administrator made no distinction between research articles or other article types.
To fulfill this request, WOS searches were performed for each author, using the same search strategy. The requestor and the librarians mutually agreed that the search would utilize the author's last name with first and middle initials. To address issues of locale, the city "Birmingham" was used in the city (CI) field instead of zip codes. The CI field was included in the search strategy because the administrator was only interested in the publications that the authors had written while affiliated with UAB. Using the CI field also helped eliminate authors at other institutions with similar last name and initial combinations. The librarian informed the requestor of the limitations of this search methodology: the articles must all be signed using the same naming convention, and the city "Birmingham" may be located outside Alabama. Due to the uniqueness of the authors' names coupled with the city, false drops were not expected. The search was limited to the years 1995 to 2006, and results were then sorted by the number of times cited.
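As a rough sketch, the strategy described above (author last name with initials, combined with the CI field) can be expressed as an advanced-search query string. The field-tag syntax and the author name below are illustrative assumptions, not the actual searches the librarians ran:

```python
def build_author_query(last_name, initials, city="Birmingham"):
    """Assemble a WOS-style advanced query pairing an author field
    (last name plus initials) with a city (CI) field, to exclude
    similarly named authors at other institutions."""
    return f"AU=({last_name} {initials}) AND CI=({city})"

# Hypothetical author, not one of the UAB authors from the request.
query = build_author_query("Smith", "JA")
print(query)  # AU=(Smith JA) AND CI=(Birmingham)
```

A year limit and a times-cited sort would then be applied in the database interface, as described above.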
The librarians then used the WOS Results Analysis feature to obtain a report showing the title of the journal and the number of articles by the author being searched that were published in that journal. Results were sorted by record count with the minimum record count set to one.
The librarians then utilized JCR's journal summary feature to sort the journals in specific subjects by their impact factors. For this particular request, the source title list was reviewed and the librarians identified the major categories (e.g., surgery, internal medicine) using the subject categories identified in the JCR record for each journal. A list of impact factors for journal titles in the appropriate areas was generated and included in the packet that the librarians hand-delivered to the requestor. A distinction was made by subject to provide a more representative comparison given variations of impact factors in subject categories.
The requestor's packet included the following items: WOS author search sorted by times cited, WOS results analysis with the records ranked by record count, JCR subject category list, and the journal summary list for each subject category, sorted by impact factor. The packet also included a cover letter describing the search process and explaining that determining the impact of an author's work requires caution. The variables to be considered included: (1) number of times the journal article has been cited (are self citations included?); (2) author's position in the author string (if the article is the product of an author's lab, the author will usually be listed last); (3) impact factor of the journal (viewed generally or within its subject categories); (4) date of publication (more recent articles may not have been published long enough to have been cited numerous times); and (5) subject area of the journal (determining if this is a large subject area in terms of the number of journals published in that subject or a really narrow specialty). The librarians informed the requestor that, given the information provided, it was the requestor's responsibility to analyze the data and determine the appropriate value or weight to give to each piece of information.

DISCUSSION
Reference departments in other medical centers may struggle with similar requests. The published literature indicates that various approaches and tools are available for assessing the impact and quality of a researcher's work. While the LHL librarians decided to use JCR and WOS, the emergence of additional web-based citation analysis tools has changed citation analysis and provides a number of new quantitative measures to consider. In 2004, two primary competitors to WOS became available: Elsevier's Scopus and the freely available Google Scholar (GS). Several groups have compared these databases and concluded that each of the three returned unique material [1,5,8,18,19]. Scopus includes a larger number of international and open access journals than WOS, thus providing complementary coverage. Although GS has limited search features, it includes other unique items such as book chapters, dissertations, electronic prints, and research reports.
For 25 highly cited authors in the field of information science, a comparison of WOS, Scopus, and GS found that Scopus and GS increased the citation counts by 35% and 160%, respectively, revealing the importance of using several citation sources to judge the true impact of a scientist's work [1]. Jacsó compared citations to a single paper (Science 1955;122:108-11) for the 1996-2005 time period [19]. Although WOS, GS, and Scopus returned a similar number of records, only 33 citing papers were common to the 3 result sets, leading to the conclusion that "a single database cannot provide comprehensive citation coverage." In addition, the various databases offer different strengths as administrative tools and provide alternative ways to analyze the data [8].
Other web-based tools provide different approaches for measuring quality or impact. Introduced in 2001, the subscription-based Faculty of 1000 offers a peer-reviewed alternative to citation analysis. Each month, over 1,000 experts select 2-4 papers in the biomedical fields and provide comments and grades for each [3]. An editorial in Nature Neuroscience noted a study suggesting that this tool correlates well with JIF in the field of neuroscience [20].
Measuring the number of times an article is downloaded is also under discussion as a measurement tool [21]. Dong et al. contend that online availability increases JIF [6]. Meho noted a strong positive correlation among download counts, citation counts, and JIF [1]. More research comparing measurement tools and the impact of downloaded articles is needed.
Two interesting new approaches to citation analysis are PubFocus and the h-index. PubFocus [22] is a web service that performs statistical analysis of MEDLINE/PubMed search results, enriches them with journal-ranking information, and incorporates forward-citation counts from PubMed Central or Google Scholar. The algorithm prioritizes citations and evaluates an author's impact on a field [23]. The h-index, proposed by Hirsch, measures the impact of a scientist's body of work: a scientist has index h if h of his or her papers have been cited at least h times each. The h-index correlates positively with citation counts, impact factors, publication counts, and peer evaluation of research impact and quality [24]. Currently, in WOS, the h-index is included in the Citation Report available with an author search, and it can easily be determined by using the Citation Tracker feature with an individual author search in Scopus.
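To make Hirsch's definition concrete, the following is a minimal sketch in Python; the function name and the sample citation counts are invented for illustration, not drawn from the article or from either tool's implementation:

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that the author has
    h papers with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cited in enumerate(counts, start=1):
        if cited >= rank:
            h = rank          # this paper still clears the threshold
        else:
            break             # counts are sorted, so no later paper can
    return h

# Five papers cited 10, 8, 5, 4, and 3 times: four papers have
# at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```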

CONCLUSION
An analysis of both the quality and impact of an author's contribution requires a complete knowledge of the context of the request to determine the best approach, as too much is at stake if the process is oversimplified. Though this analysis focused exclusively on WOS, additional tools remain to be explored. Thomson Scientific's JCR and its JIF are still important tools that researchers readily understand; however, these traditional tools carry limitations in use and interpretation. Given the availability of multiple tools that may be considered in addition to JCR's citation analysis, such as GS and Scopus, it is up to librarians to explain carefully to researchers what tools are available, what criteria they use, and how the various pieces of the puzzle fit together to reach an answer that has both merit and validity. Further research is needed to determine whether these emerging citation analysis tools can withstand the rigorous testing and analysis to which WOS and JCR have been subjected. While there is great demand for easy quantitative methods to inform decisions about salary raises, tenure, promotion, and hiring, the experience of these reference librarians demonstrates that librarians and other information professionals have a significant role in educating faculty and administrators about the risks of relying too heavily on any one instrument or approach when making these decisions.

INTRODUCTION
Systematic reviews provide answers to focused clinical questions through a rigorous and comprehensive methodology designed to limit bias [1]. The search for evidence to answer these questions therefore should be as thorough as resources permit [2]. As in other fields, systematic reviews of library and information science topics can answer questions in the field and inform best practices. This paper reports on the productivity of sources of evidence for such reviews and determines which are most efficient, alone and in combination.

METHODS
Three consecutive and recently completed systematic reviews on issues of information retrieval provided an opportunity to retrospectively analyze the sources of relevant evidence:
- The Checking Reference Lists (CRL) review [3] examined research into the utility of checking reference lists as a method to identify studies for systematic reviews.
- The Updating Systematic Reviews (Updating) project identified and summarized existing methods and strategies for updating as a first step in an ongoing research initiative [4].
- The Peer Review of Electronic Search Strategies (PRESS) review [5,6] analyzed common errors in search strategies and proposed safeguards.
In the original 3 reviews, reviewers read 14,727 bibliographic records resulting from searches conducted to support the reviews and, when needed, the full-text articles to assess them against the reviews' eligibility criteria. This process yielded 142 relevant documents to include in at least 1 of the 3 reviews.
In the current study, 11 databases were examined for coverage of these 142 eligible studies: 3 MEDLINE search interfaces (Ovid MEDLINE; Ovid HealthSTAR, a version of HealthSTAR with coverage to the present [7]; and PubMed); EMBASE; Library, Information Science and Technology Abstracts (LISTA); Library and Information Science Abstracts (LISA); Cochrane Methodology Register (CMR); CINAHL; PsycINFO; Cochrane Database of Methodology Reviews (CDMR) (later absorbed into the Cochrane Database of Systematic Reviews); and Health and Psychosocial Instruments (HAPI). The databases in which the records were originally found had been recorded at the time of the search for each systematic review. The selected databases were searched post hoc for each of the 142 eligible studies to determine where the included items were indexed. Except where noted, eligible records served as the denominator for calculations of recall (proportion of relevant studies retrieved) and the numerator for calculations of precision (proportion of retrieved studies that are relevant) [8]. Bibliometric characteristics such as the distribution of citations among journals were calculated using Reference Manager databases of the saved citations. Based on scope of coverage, journals were classified as library science or informatics, medical librarianship or medical informatics, or medicine (including evidence-based health care and epidemiology). (Supplemental Tables 1, 2, and 4 are available with the online version of this journal.)

RESULTS
Electronic bibliographic database searches were the means of identification for 101 of 142 (71%) relevant documents in the original reviews. The rest were identified by methods such as reference list scanning and peer nomination. The most common identifying sources for materials used in the original reviews were MEDLINE (28%) and LISA (21%).
Although 71% of the overall pool of relevant material was originally identified through bibliographic databases, 92% (131 of the 142 documents) were actually indexed in at least 1 of the tested bibliographic databases. Using the number of documents actually indexed in bibliographic databases as the denominator, rather than the total number of relevant documents, overall recall of the original searches was 77%.
Precision of the 3 original searches was low. With 142 documents found to be relevant, the overall precision of the original searches was 0.9% (0.5%, 1.2%, and 0.6% for CRL, PRESS, and Updating, respectively).
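The recall and precision arithmetic above can be reproduced directly from the reported figures. A minimal sketch; the helper functions are illustrative, not the study's actual computation:

```python
def recall(relevant_retrieved, relevant_total):
    """Proportion of relevant studies that the searches retrieved."""
    return relevant_retrieved / relevant_total

def precision(relevant_retrieved, total_retrieved):
    """Proportion of retrieved records that were relevant."""
    return relevant_retrieved / total_retrieved

# 101 of the 142 relevant documents came from the original database searches.
print(round(recall(101, 142), 2))        # 0.71
# Restricting the denominator to the 131 documents actually indexed anywhere:
print(round(recall(101, 131), 2))        # 0.77
# 142 relevant documents among 14,727 screened records:
print(round(precision(142, 14727), 4))   # 0.0096, i.e., just under 1%
```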

Coverage
The MEDLINE search interfaces (Ovid MEDLINE, Ovid HealthSTAR, and PubMed) provided the highest coverage of relevant documents, indexing almost half of the relevant material (Table 1 online). Relative coverage of relevant material was equivalent among the three interfaces. LISA also covered almost half of the relevant material. CMR, LISTA, and EMBASE followed closely; each indexed over one-third of relevant documents. However, CMR had the largest unique component, that is, documents not available from any other database tested (13 documents, 9% of the total). CINAHL covered roughly one-quarter of the relevant literature, while PsycINFO, CDMR, and HAPI provided little or no coverage. About 70% of articles found in any other single source were also indexed in the MEDLINE interfaces.
The relatively low unique contribution of various databases can be better understood by examining the overlap, or degree of redundancy, between databases [9]. Overall, the highest overlap was between LISA and LISTA, the 2 information science databases (Table 2 online). All relevant material indexed by LISA was also indexed by LISTA, and 91% of relevant material indexed in LISTA could be found in LISA. The greatest overlap among biomedical databases was between EMBASE and the MEDLINE interfaces: 69% of relevant material indexed by the MEDLINE interfaces was also indexed in EMBASE, while 94% of relevant material indexed in EMBASE was also found in the MEDLINE interfaces. CMR, the database with the largest unique contribution, had moderate overlap with the biomedical databases but little overlap with the information science sources.
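Pairwise overlap of this kind is straightforward to compute from per-database sets of record identifiers. A minimal sketch; the record IDs below are invented to mirror the LISA/LISTA asymmetry, not the study's data:

```python
def overlap(a, b):
    """Share of database a's relevant records that database b also indexes.
    Note the measure is asymmetric: overlap(a, b) != overlap(b, a)."""
    if not a:
        return 0.0
    return len(a & b) / len(a)

# Hypothetical record IDs: LISA's records are a subset of LISTA's.
lisa  = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
lista = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}
print(overlap(lisa, lista))            # 1.0 -> everything in LISA is in LISTA
print(round(overlap(lista, lisa), 2))  # 0.91 -> 10 of 11 LISTA records in LISA
```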
Resource combinations were examined using the 131 articles indexed in at least 1 of the studied databases as the denominator for calculating coverage (Table 3). Maximum coverage possible by searching 3 databases was 97%, achieved through the combination of 1 MEDLINE interface, LISA, and CMR. Maximum coverage possible through searching 2 databases was 87%, achieved through the combination of a MEDLINE interface and LISA.
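Finding the best 2- or 3-database combination is a small exhaustive search over unions of per-database record sets. A sketch under the assumption that each database's set of relevant records is known; the sets below are toy data, not the study's counts:

```python
from itertools import combinations

def best_combination(db_coverage, k):
    """Exhaustively find the k databases whose combined holdings cover the
    most relevant records; db_coverage maps database name -> set of IDs."""
    best_combo, best_count = (), 0
    for combo in combinations(db_coverage, k):
        covered = set().union(*(db_coverage[d] for d in combo))
        if len(covered) > best_count:
            best_combo, best_count = combo, len(covered)
    return best_combo, best_count

# Toy data with invented record IDs (not the study's actual coverage):
dbs = {
    "MEDLINE": {1, 2, 3, 4, 5},
    "LISA":    {4, 5, 6, 7},
    "CMR":     {8, 9},
    "CINAHL":  {1, 2, 6},
}
combo, n = best_combination(dbs, 3)
print(sorted(combo), n)   # ['CMR', 'LISA', 'MEDLINE'] 9
```

With only 11 databases the search space is tiny (165 triples), so brute force is entirely adequate here.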

Precision
Of the 2 standard performance indicators for information retrieval, recall and precision, recall is of greater concern to systematic reviewers, as complete identification of relevant studies is thought to protect against bias [10]. Given that the MEDLINE interfaces had equal recall, precision or budget limitations may become a deciding factor. The precision of the MEDLINE searches used in the 3 reviews when run in Ovid MEDLINE and Ovid HealthSTAR was compared. The HealthSTAR retrieval was smaller in all cases. Overall precision was 0.10% for MEDLINE and 0.11% for HealthSTAR. This is a small absolute difference, but it translated into a 13% decrease in screening burden, avoiding 879 irrelevant records across the 3 reviews.
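The screening-burden arithmetic can be made concrete. The retrieval total below is back-calculated from the reported 879 avoided records and 13% reduction, purely for illustration; the article does not give the combined retrieval size:

```python
# Back-calculated illustration (not a figure from the article): if 879
# avoided records represent a 13% reduction, the combined Ovid MEDLINE
# retrieval was roughly 879 / 0.13, about 6,762 records across the 3 reviews.
avoided = 879
medline_screened = 6762                  # assumed combined retrieval
healthstar_screened = medline_screened - avoided
reduction = avoided / medline_screened
print(f"{reduction:.0%}")                # 13%
```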

Document type
Journal articles were the most common type of document retrieved by the searches (n=112, 79%), followed by conference abstracts (n=15, 10%). All journal articles were indexed in 1 or more of the examined databases, as was the single dissertation. However, only one-third of electronic documents were included in the searched resources, and thus two-thirds were unavailable for retrieval by the search.

Bibliometric characteristics
The frequencies of authors and journals both followed standard bibliographic distributions [11,12], with a few highly productive sources and the remaining material widely scattered. Three journals yielded 10 or more items each and together accounted for almost a quarter of the material (23%) (Table 4 online). The source journals divided approximately equally among library science or informatics; medical librarianship or medical informatics; and medicine, evidence-based health care, or epidemiology.

DISCUSSION
In an article proposing a practical framework for evidence-based librarianship, Crumley and Koufogiannakis describe six domains or categories of questions based on the daily practice of librarians [13]. The topics of the systematic reviews studied here fit in the domain of "Information Access and Retrieval," but they were also interdisciplinary, as important evidence came from both health and library databases and journals. The library and information science journals found to be most productive overlap to some degree with those in Koufogiannakis et al.'s 2004 survey of librarianship research [14].
The best coverage of the evidence base for the systematic reviews in question was obtained through a combination of one MEDLINE interface, CMR, and LISA. The MEDLINE interfaces provided equivalent coverage of relevant material, so other factors will influence selection of databases for systematic reviews. Ovid MEDLINE is widely used by systematic reviewers [15], but the current analysis indicates that cost savings are possible by searching PubMed. An increase in precision with no loss of recall may be possible by selecting the HealthSTAR subset. When subscription access to databases is an issue, the combination of PubMed and LISTA, both available without cost, provided nearly as much coverage as Ovid MEDLINE and LISA, both of which have access fees.
Evidence for the three systematic reviews came not only from the journal literature, but also from abstracts, books, and technical reports. Gray literature (specifically, conference abstracts, technical reports, electronic citations, and dissertations) composed 12% of the evidence base for these reviews, and another 8% came from books and book chapters. Alberani and Pietrangeli found that 22% of references in scientific publications in selected information science journals were to gray literature, although they noted that over half of these citations were to technical reports and tended to occur more frequently in journals that focused on technical aspects of the field [16]. The current results correspond with their work when only conference reports and theses are considered.
Gray literature is not easily identified from database searching and must be sought through means such as research registries, library catalogs, web searching, citing reference searching, and personal communications [17]. The CMR was an important source for the three reviews, having the most unique coverage of any of the examined databases and, in particular, coverage of gray literature. Many of the CMR abstracts may eventually be published as full articles, or they may represent pilot research that may remain as gray literature, corresponding with Eldredge's comment that librarians have had few incentives to publish in the past [18]. Still, a similar database of research articles and abstracts from all areas of librarianship could be an important contribution to research capacity in librarianship by capturing a significant portion of the published and gray literature in one resource.
This work, like other such surveys of the literature, is based on a relatively small sample of reviews. However, the distribution of included studies conforms to findings of those previous surveys, increasing confidence in these results [19][20][21]. While specific findings may not generalize to other domains of librarianship, they reflect the sources contributing to one of the areas at the forefront of evidence-based librarianship.

CONCLUSIONS
This study of information sources for 3 systematic reviews demonstrates that the evidence base for information science can be multidisciplinary and, in this case, is drawn from the literature in health care, published literature in information science, and unpublished literature. The searching combination of 1 MED-LINE interface, LISA, and CMR provided the most comprehensive coverage, capturing 95% of the relevant literature included in the original 3 reviews. Freely available sources provided nearly equivalent coverage to subscription sources, removing one potential barrier to the successful execution of systematic research in this area. Access to the unpublished library conference literature could be an important enhancement to research capacity in librarianship. Library and