Health Info Libr J. 2016 Jun;33(2):140-9. doi: 10.1111/hir.12140.

Inter-rater reliability of h-index scores calculated by Web of Science and Scopus for clinical epidemiology scientists.

Author information

1. Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada.
2. Department of Anesthesiology & Department of Innovation in Medical Education, The Ottawa Hospital - General Campus, Ottawa, ON, Canada.

Abstract

OBJECTIVE:

We investigated the inter-rater reliability of Web of Science (WoS) and Scopus when calculating the h-index of 25 senior scientists in the Clinical Epidemiology Program of the Ottawa Hospital Research Institute.
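For readers unfamiliar with the metric, a minimal sketch of how an h-index is derived from a list of per-publication citation counts is shown below. This is illustrative only; the study relied on the automatic calculators built into WoS and Scopus, not custom code, and the example citation counts are hypothetical.

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h publications each cited at least h times."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one scientist's publications
print(h_index([25, 18, 12, 9, 6, 4, 2, 1]))  # -> 5
```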

MATERIALS AND METHODS:

Bibliometric information and the h-indices for the subjects were computed by four raters using the automatic calculators in WoS and Scopus. Correlation and agreement between ratings were assessed using Spearman's correlation coefficient and a Bland-Altman plot, respectively.
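As an illustration of the reliability statistics named above (not the authors' analysis code), the following Python sketch uses SciPy and NumPy to compute Spearman's rank correlation between two raters' h-index scores and the bias and limits of agreement that underlie a Bland-Altman plot. The rater arrays are hypothetical, and the pairwise setup is a simplification of the four-rater design described in the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical h-index scores assigned to the same scientists by two raters
rater_a = np.array([12, 25, 8, 31, 17, 22, 9, 14])
rater_b = np.array([13, 24, 8, 29, 18, 21, 10, 15])

# Spearman's rank correlation: how consistently the two raters rank the scientists
rho, p_value = spearmanr(rater_a, rater_b)

# Bland-Altman statistics: mean difference (bias) and 95% limits of agreement
diff = rater_a - rater_b
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
print(f"Bland-Altman bias = {bias:.2f}, limits of agreement [{loa_low:.2f}, {loa_high:.2f}]")
```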

RESULTS:

Data could not be gathered from Google Scholar due to feasibility constraints. Spearman's rank correlation for the h-indices of scientists calculated with WoS was 0.81 (95% CI 0.72-0.92) and with Scopus was 0.95 (95% CI 0.92-0.99). The Bland-Altman plot showed no significant rater bias for either WoS or Scopus; however, agreement between ratings was higher in Scopus than in WoS.

CONCLUSION:

Our results showed a stronger relationship and greater agreement between raters when the h-index of a scientist was calculated using Scopus rather than WoS. The higher inter-rater reliability and simpler user interface of Scopus may make it the more effective database for calculating the h-index of senior scientists in epidemiology.

KEYWORDS:

Bibliographic databases; bibliometrics; citation analysis; database searching

PMID: 27168256
DOI: 10.1111/hir.12140
[Indexed for MEDLINE]
