
# Does the *h* index have predictive power?

Author contributions: J.E.H. designed research, performed research, analyzed data, and wrote the paper.

## Abstract

Bibliometric measures of individual scientific achievement are of particular interest if they can be used to predict future achievement. Here we report results of an empirical study of the predictive power of the *h* index compared with other indicators. Our findings indicate that the *h* index is better than other indicators considered (total citation count, citations per paper, and total paper count) in predicting future scientific achievement. We discuss reasons for the superiority of the *h* index.

**Keywords:** citations, prediction, achievement

The *h* index of a researcher is the number of papers coauthored by the researcher with at least *h* citations each (1). We have recently proposed it as a representative measure of individual scientific achievement. Other commonly used bibliometric measures of individual scientific achievement are total number of papers published (*N*_{p}) and total number of citations garnered (*N*_{c}). Recently, Lehmann *et al.* (2, 3) have argued that the mean number of citations per paper (*n*_{c} = *N*_{c}/*N*_{p}) is a superior indicator. Here we wish to address the question: which of these four measures is best able to predict future scientific achievement?
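As a concrete restatement of this definition, here is a minimal sketch of computing the *h* index from a list of per-paper citation counts (the counts below are hypothetical illustrative data, not from any sample in this article):

```python
# Compute the h index: the largest h such that the author has h papers
# with at least h citations each. Citation counts are illustrative.
def h_index(citations):
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break
    return h

# Four papers have >= 4 citations each, but not five with >= 5.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```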

For the purposes of this article, we do not wish to dwell on the controversial question of what is the optimal definition of scientific achievement.^{†} We are not interested in measuring the *past* achievement of an individual, e.g., for the purpose of awarding a prize or for election to a prestigious academy, but rather in predicting *future* achievement. So we could simply bypass this question by defining “scientific achievement” by the bibliometric measure under consideration and ask: which measure is better able to predict its future values? For example, how likely is a researcher who today has a large number of citations to gain a large number of citations in future years? To the extent that a bibliometric measure reflects particular traits of the researcher rather than random events, it should have higher predictive power than another measure that is more dependent on random events. For example, we argued in ref. 1 that the total number of citations, *N*_{c}, “may be inflated by a small number of ‘big hits,’ which may not be representative of the individual if he/she is coauthor with many others on those papers.” For that individual, the present *N*_{c} value is not likely to be a good predictor of his/her future *N*_{c} values.

Alternatively, among the indicators listed in the first paragraph, it may be argued that the total number of citations, *N*_{c}, is the one that best reflects scientific achievement because it gives a measure of the integrated impact of a scientist's work on the work of others. Then, we would like to know: which indicator is best able to predict *N*_{c} at a future time? It is certainly not obvious that the answer is *N*_{c} itself.

There are two slightly different questions of interest. (*i*) Given the value of an indicator at time *t*_{1}, *I*(*t*_{1}), how well does it predict the value of itself or of another indicator at a future time *t*_{2}, *I′*(*t*_{2})? This question is of interest, for example, in trying to decide between different candidates for a faculty position. A possible consideration might be: how likely is each candidate to become a member of the National Academy of Sciences 20 years down the line? For that purpose, one would like to rank the candidates by their expected *cumulative* achievement after 20 years. This means, in particular, that citations obtained *after* time *t*_{1} to papers written *before* time *t*_{1} are relevant. (*ii*) How well do the different indicators predict *future* scientific output? To award grant money or other resources for future research at time *t*_{1}, one would like to rank the candidates by their expected scientific output from time *t*_{1} on to some future time. In deciding who should get a grant, it should be irrelevant how many more citations the earlier papers of that individual are expected to collect in future years.

## Procedure

We use the ISI Web of Science database in the “General Search” mode (http://isiwebofknowledge.com).^{‡} ISI has recently incorporated tools under “Author Finder” that help to discriminate between different researchers with the same name. Once the publications of a researcher are identified, ISI provides in the “Citation Report” the total number of citations *N*_{c}, total number of papers *N*_{p}, citations per paper *n*_{c} = *N*_{c}/*N*_{p}, and the *h* index, all *at the present time*.

ISI also allows one to restrict the time frame of the papers' publication date, so it is easy to find *N*_{p}(*t*_{1}), the number of papers published up to year *t*_{1}, or *N*_{p}(*t*_{1}, *t*_{2}), the number of papers published between times *t*_{1} and *t*_{2}. However, ISI does not provide a similarly simple way to obtain these values for the other indicators under consideration. To obtain this information, one needs to export (save) the information in the Citation Report into an Excel file and add up the citations in the time interval desired. It is a straightforward but tedious procedure.
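The tallying step just described can be sketched as follows, assuming a hypothetical export format in which each paper carries a year-by-year citation breakdown (the actual ISI/Excel column layout may differ):

```python
# Sum citations received within a time window, given a hypothetical export
# in which each paper is a dict mapping year -> citations received that year.
def citations_in_window(papers, t1, t2):
    """Total citations received between years t1 and t2, inclusive."""
    return sum(
        n
        for per_year in papers
        for year, n in per_year.items()
        if t1 <= year <= t2
    )

papers = [
    {1985: 2, 1986: 5, 1990: 1},  # paper 1
    {1986: 3, 1987: 4},           # paper 2
]
print(citations_in_window(papers, 1986, 1987))  # -> 12
```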

For illustration, we show in Fig. 1 the *h* index and the number of citations as a function of time for two prominent physicists: a theorist and an experimentalist. As we conjectured in ref. 1, the *h* index follows approximately a linear behavior with time, and the total number of citations is approximately quadratic with time. Similar behavior is found in many other cases.

## Sample PRB80

Because citation patterns vary between fields and also between subfields, and there are also trends with time, we chose to look at authors in a single subfield and of comparable scientific age, to compare the predictive power of the various indicators. Ideally we would like to pick a random subset of all physicists who earned their Ph.D. in a given subfield in a given year and published in that subfield throughout their career. However, we have no practical way to make such a selection. As an alternative, we picked a sample of 50 physicists who started publishing around 1980, by using the following procedure:

- We considered papers published in the journal *Physical Review B: Condensed Matter and Materials Physics* in 1985 that today have citations in the range of 45 to 60 (an arbitrary choice, simply to avoid extremes). In practice, we started with papers with 60 citations and went as far down as needed to get the number of authors desired for our sample.
- From the authors of those papers, we selected those who had published their first paper between 1978 and 1982.

Because of the journal used for the selection (*Phys Rev B*), the sample contained mostly physicists who published in the field of condensed matter physics throughout their career. There was, however, a small subset of the sample who subsequently switched to other subfields.

We then looked at the publication records of these authors during the first 12 years of their career (starting with their first published paper) and in the subsequent 12 years. In Table 1 we show the average and standard deviation values of the four indicators considered in the first 12 years, first 24 years, and years 13–24, all measured from the publication year of the first paper. It can be seen that *h*, *N*_{p}, and *n*_{c} increase by approximately a factor of 2 in comparing 12-yr and 24-yr periods and *N*_{c} by approximately a factor of 4, as expected. The last column of the table shows the average and standard deviation of *a* = *N*_{c}/*h*^{2} for this sample. In ref. 1, *a* was observed to be typically in the range of 3 to 5.

As discussed earlier, we would like to know:

- How well does the performance during the first 12 years predict the cumulative achievement over the entire 24-yr period?
- How well does the performance during the first 12 years predict performance in the subsequent 12 years?

Because the number of citations is expected to grow quadratically with time, we used $\sqrt{{N}_{\text{c}}}$ as a measure of total citations.

First, we consider the predictive power of the various indicators after the first 12 years (*t*_{1}) for the cumulative achievement in the 24-yr period (*t*_{2}). In Fig. 2 we show the total number of citations after 24 years vs. each indicator after 12 years for each member of the sample, and their correlation coefficient, *r* (= covariance/product of standard deviations). It can be seen that the *h* index and the number of citations *N*_{c} at time *t*_{1} are the best predictors of cumulative citations at the future time *t*_{2}, with correlation coefficient *r* = 0.89. The number of papers correlates somewhat less (*r* = 0.74), and the number of citations per paper, *n*_{c}, has lowest correlation with cumulative citations, with *r* = 0.54.
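The correlation coefficient used throughout, *r* = covariance/(product of standard deviations), can be sketched as follows (the data below are illustrative, not drawn from the sample):

```python
import math

# Pearson correlation coefficient: r = covariance / (sigma_x * sigma_y).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

# Perfectly linear data give r = 1 (up to floating-point rounding).
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # -> 1.0
```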

Fig. 2. Total number of citations, *N*_{c}, after *t*_{2} = 24 yr vs. the value of the various indicators at *t*_{1} = 12 yr (*t* measured from the date of the first publication). *h*, *h* index; *N*_{p}, number of papers; *n*_{c}, mean number of citations per paper; *r*, correlation coefficient.

According to these results, if one wishes to select from among various candidates at time *t*_{1} the one(s) who will have the largest number of citations at the later time *t*_{2}, the *h* index or the number of citations at time *t*_{1} are good selection criteria. A candidate with low *h* or low *N*_{c} at time *t*_{1} will not have a high *N*_{c} at time *t*_{2}. By contrast, a candidate with low *N*_{p} or low *n*_{c} at time *t*_{1} has a much higher chance of ending up with a high *N*_{c} at time *t*_{2}.

Fig. 3 shows the ability of each indicator to predict its own cumulative value. Here, the differences between indicators are smaller, and the correlation coefficient is high in all cases. Still, the *h* index shows the largest predictive power, with *r* = 0.91. That is, a researcher with a high *h* index after 12 years is highly likely to have a high *h* index after 24 years.

Fig. 3. Predictive power of each indicator at time *t*_{1} = 12 yr for the value of the same indicator at time *t*_{2} = 24 yr for sample PRB80.

It is more difficult for the indicators at time *t*_{1} to predict scientific achievement occurring only in the subsequent period, i.e., without taking into account the citations after time *t*_{1} to work performed prior to *t*_{1}. As discussed, one would like to make such predictions to decide on allocation of research resources. In Fig. 4, the ability of the indicators at time *t*_{1} to predict citations to papers written in the *t*_{1} − *t*_{2} time interval is considered. The highest correlation coefficient occurs for the *h* index (*r* = 0.60) and the lowest for mean number of citations per paper (*r* = 0.21). Similarly, as shown in Fig. 5, the ability of each index to predict itself is highest for the *h* index (*r* = 0.61) and lowest for number of citations per paper (*r* = 0.23).

Fig. 4. Predictive power of each indicator at time *t*_{1} = 12 yr for the number of citations to papers published in the *t*_{1} − *t*_{2} time interval, with *t*_{2} = 24 yr, for sample PRB80.

Fig. 5. Predictive power of each indicator at time *t*_{1} = 12 yr for the value of the same indicator for the papers published in the *t*_{1} − *t*_{2} time interval, with *t*_{2} = 24 yr, for sample PRB80.

So, if we choose to measure scientific achievement either by total citation count, *N*_{c}, or by the *h* index, these results imply that (at least in this example) the *h* index has the highest ability to predict *future* scientific achievement. In fact, even choosing the number of papers, *N*_{p}, as the measure of achievement, the *h* index yields the highest predictive power, as shown in Fig. 6: *r* = 0.49, vs. *r* = 0.43, *r* = 0.42, and *r* = 0.092 for *N*_{p}, *N*_{c}, and *n*_{c} as predictors, respectively. In allocating research resources (e.g., grant funding) to otherwise comparable researchers, if the goal is to maximize the expected return on the investment as measured by *N*_{c}, the *h* index, or *N*_{p}, we suggest that these results should be considered. If one chose instead to use as indicator of scientific achievement the mean number of citations per paper [following Lehmann *et al.* (2, 3)], our results suggest that (as in the stock market) “past performance is not predictive of future performance.”

## Sample APS95

As a second example, we consider the set of physicists elected to fellowship in 1995 by the Division of Condensed Matter Physics of the American Physical Society (APS). (The list is available at http://dcmp.bc.edu/page.php?name=fellows_95.) From the list of 29 individuals, 2 were excluded because it was difficult to identify their publications due to name overlaps. We evaluated the indicators for this group up to the year 1994 (right before being elected to fellowship), up to 2006, and in the 12 years from 1995 to 2006. The averages and standard deviations are shown in Table 2.

Fig. 7 shows the number of citations in the 12 years after being elected to fellowship vs. each of the indicators up to the year 1994. The correlations here are weaker than in the first example; nevertheless, the *h* index shows a stronger correlation (*r* = 0.49) than all the other indicators. Similarly, Fig. 8 shows that the *h* index is a better predictor of itself (*r* = 0.54) than any of the other indicators.

Incidentally, note the large dispersion in the values of the indicators at time *t*_{1} (e.g., *h* ranging from 9 to 43, *N*_{c} from 482 to 7,471, and *N*_{p} from 19 to 248), which indicates that the APS fellowship committee does not rely (for better or for worse) on any of these numerical indicators as a deciding factor for election to fellowship.

The data for cumulative achievement up to 2006 are shown in Figs. 9 and 10. It can be seen that the pattern is similar to Figs. 2 and 3, the corresponding graphs for sample PRB80.

It is easy to understand why the correlations here are weaker than in the first example. Scientists are elected to APS fellowship at very different stages in their careers, so the horizontal axis variables in these figures are not time-normalized. For example, a member of this group might have had a large *N*_{c} in 1994 because he/she had been publishing for many years at a slow rate, and his/her productivity in the subsequent 12 years would not be expected to be larger than that of another scientist of this group who started his/her career much later and had a higher publication rate.

Note also that the 12-yr productivity and impact of the APS fellows sample (Table 2) are, on average, substantially higher than those of the random sample PRB80 (Table 1) and that there are no points on the *x* axes in the figures for the period *t*_{1} − *t*_{2} for the APS sample (Figs. 7 and 8), in contrast to those of the PRB80 sample (Figs. 4–6). These differences are to be expected because election to APS fellowship is not a random process.

## Combining *h* and *N*_{c}

Our results indicate that the *h* index and the total number of citations are better than the number of papers and the mean citations per paper to predict future achievement, with achievement defined by either the indicator itself or the total citation count, *N*_{c}. Furthermore, we found a small consistent advantage of the *h* index compared with *N*_{c}.

It has been argued in the literature that one drawback of the *h* index is that it does not give enough “credit” to very highly cited papers, and various modifications have been proposed to correct this, in particular, Egghe's *g* index (4), Jin *et al.*'s AR index (5), and Kosmulski's *H*^{(2)} index (6). These modified indices reward authors with higher citation numbers in the papers that contribute to the *h* count.

To test the possibility that giving a higher weight to highly cited papers may enhance the predictive power of the *h* index, we considered the following expression:

$$h_{\alpha}(t_{1}) = h(t_{1})\left[\frac{N_{\text{c}}(t_{1})}{h(t_{1})^{2}}\right]^{\alpha} \qquad \textbf{[1]}$$

and asked the question: which value of α will result in *h*_{α}(*t*_{1}) best predicting the citation count of future work, *N*_{c}(*t*_{1}, *t*_{2})? That is, we considered the cases of Figs. 4*a* and 7*a* with *h*_{α}(*t*_{1}) in the abscissa instead of *h*(*t*_{1}).

The resulting correlation coefficients as a function of α are shown in Fig. 11. Surprisingly, a small *negative* α (α ≈ −0.1) yields the largest correlation coefficient in both samples considered. For positive α, the correlation coefficients decrease monotonically and approach the values corresponding to the predictor *N*_{c}, *r* = 0.53 and *r* = 0.43, respectively, corresponding to Figs. 4*b* and 7*b*.

Consequently, the best predictor of future achievement (with achievement defined as number of citations) inferred from our data (e.g., sample PRB80) would be a linear regression fit to $\sqrt{{N}_{\text{c}}({t}_{1},{t}_{2})}$ vs. *h*_{α}(*t*_{1}) with α = −0.1 (*r* = 0.62), leading to the paradoxical result that, given two researchers with the same *h* index, the one with *lower* *N*_{c}(*t*_{1}) should be expected to earn a *higher* number of citations in the subsequent time period.
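The α scan just described can be sketched as follows, taking the combined index to have the form *h*_{α} = *h*(*N*_{c}/*h*^{2})^{α}, so that α = 0 recovers *h* and α = 1/2 recovers $\sqrt{{N}_{\text{c}}}$; the sample below is hypothetical, not the paper's data:

```python
import math

# Combined index: alpha = 0 recovers h, alpha = 1/2 recovers sqrt(N_c).
def h_alpha(h, nc, alpha):
    return h * (nc / h ** 2) ** alpha

# Pearson correlation coefficient (covariance / product of std. deviations).
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    norm = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return cov / norm

# Hypothetical sample: (h at t1, N_c at t1, citations to new work in t1-t2).
sample = [(10, 350, 400), (15, 900, 800), (8, 700, 250), (20, 1500, 1500)]
future = [math.sqrt(f) for _, _, f in sample]  # sqrt(N_c) grows linearly
for alpha in (-0.2, -0.1, 0.0, 0.1, 0.5):
    pred = [h_alpha(h, nc, alpha) for h, nc, _ in sample]
    print(f"alpha={alpha:+.1f}  r={pearson(pred, future):.3f}")
```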

By using the relationship *N*_{c} = *ah*^{2}, we can rewrite Eq. **1** as

$$h_{\alpha} = a^{\alpha}\,h \qquad \textbf{[2]}$$

The fact that a negative α yields larger predictive power indicates that authors with large values of *a* = *N*_{c}/*h*^{2} are, on average, less likely to earn a larger number of citations in future work than authors with smaller *a*. We believe that this effect is principally due to the effect of coauthorship, as discussed in the next section.

## Discussion

In summary, we found that the *h* index appears to be better able to predict future achievement than the other three indicators—number of citations, number of papers, and mean citations per paper—with achievement defined by either the indicator itself or the total citation count, *N*_{c}. In addition, the *h* index was found to be a better predictor of productivity (*N*_{p}) than *N*_{p} itself. Furthermore, in attempting to combine *h* with *N*_{c} to enhance the predictive power of *h*, we found that *N*_{c} should enter with a *negative* weight.

It is interesting, and not obvious, that the *h* index is able to predict both itself and the productivity *N*_{p} better than *N*_{p} can predict itself. Perhaps it indicates that some of the prolific authors with small citation counts feel less incentive to continue being prolific because they perceive that their work is not having an impact.

We believe the superiority of *h* compared with *N*_{c} as a predictor is due to the issue of coauthorship, touched on in ref. 1 and in the Introduction. Let us elaborate on this further.

Consider a paper *j* with *N*_{c}^{j} citations coauthored by several scientists with different levels of seniority and ability, each of whom made different contributions to this paper. If we are counting citations, each coauthor gets the same “credit,” i.e., adds *N*_{c}^{j} citations to his/her total citation count, independent of his/her individual contribution to this paper.

Instead, if we are considering *h*, this paper will or will not contribute to the *i*th author's *h* index, *h*_{i}, depending on whether *N*_{c}^{j} > *h*_{i} or *N*_{c}^{j} < *h*_{i}. If it contributes, that author only “needs” *h*_{i} of the *N*_{c}^{j} citations to increase his/her *h* by 1. So one may say that each author *i* gets “allocated” only *h*_{i} of the *N*_{c}^{j} citations. Both junior and less able authors are likely to have a lower *h* than senior and more able authors, *and* they are likely to have made a lesser contribution to the paper.^{§} Hence, it is appropriate that they benefit from a smaller portion of the total *N*_{c}^{j}.
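The allocation just described can be made concrete in a small sketch; the paper's citation count and the coauthors' *h* values below are hypothetical:

```python
# A hypothetical paper with N_c^j = 100 citations and three coauthors whose
# current h indices differ. Citation counting credits every coauthor with the
# full N_c^j; the h-index accounting effectively allocates at most h_i of
# those citations to author i (the portion "needed" to sustain h_i).
paper_citations = 100
coauthors = {"junior": 5, "mid": 15, "senior": 40}  # hypothetical h_i values

credit_nc = {name: paper_citations for name in coauthors}
credit_h = {name: min(paper_citations, h_i) for name, h_i in coauthors.items()}
print(credit_nc)  # every coauthor credited with all 100 citations
print(credit_h)   # {'junior': 5, 'mid': 15, 'senior': 40}
```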

In other words, to “first order,” using *h* rather than *N*_{c} as a measure of scientific achievement automatically reduces an important source of distortion when multiply coauthored papers are involved, by allocating a smaller portion of the credit to those authors who are likely to have contributed less. The argument is not foolproof, and exceptions undoubtedly will occur, but the “injustice” done to powerful junior coauthors in the early stages of their careers will automatically be remedied in due time as their *h* index rapidly increases.

Furthermore, it is interesting and revealing that the advantage of *h* over *N*_{c} in predicting future *N*_{c} values is lost when *cumulative* citations, rather than only citations to new papers, are considered (Figs. 2 vs. 4 and Figs. 7 vs. 9). We suggest that this also reflects the effect of coauthorship. Highly cited papers in the initial period will usually continue to garner a high number of citations in the subsequent period also for those among the coauthors who made only minor contributions to the paper. Although the cumulative citations of those individuals will be high, they should be less likely to make major *new* contributions in the subsequent period. Thus we argue that, even for a decision focused on optimizing expected *cumulative* achievement, *h* should be favored as an indicator because it appears better able to predict *individual* cumulative achievement.

Other recently proposed bibliometric measures that give more weight to very highly cited papers, such as Egghe's *g* index (4), Jin *et al.*'s AR index (5), and Kosmulski's *H*^{(2)} index (6), are likely to suffer from the same drawback as *N*_{c} because they will assign the citations of highly cited papers equally to all coauthors without discrimination. Thus, we conjecture that the predictive power of these modified indices is likely to be worse than that of the *h* index, as our analysis of the *h*_{α} index (Eq. **2**) also suggests.

With respect to the indicator *n*_{c}, the mean number of citations per paper, our results indicate that it has very little predictive value. The low correlation found between *n*_{c} values in the two time frames (initial 12 years and subsequent 12 years) is due to a variety of reasons. In some cases, the individual's productivity, *N*_{p}, remained similar but the total impact, *N*_{c}, changed substantially; sometimes productivity declined and the total impact declined even more; and sometimes productivity increased and the mean impact per paper also increased.

These results are at odds with the conclusions of the recent studies by Lehmann *et al.* (2, 3). They start from the reasonable assumption that “the quality of a scientist is a function of his or her full citation record” and address the question of which single-number indicator is best to discriminate between scientists, aiming “to assign some measure of quality.” They argue that an indicator is of no practical use unless “the uncertainty in assigning it to individual scientists is small.” They perform a Bayesian analysis of citation data from a large sample extracted from the SPIRES database (www.slac.stanford.edu/spires/hep/) and find that, among the three indicators (*i*) mean number of citations per paper, *n*_{c}, (*ii*) number of papers published per year, *N*_{p}/*n*, and (*iii*) *h* index, *n*_{c} is far superior in discriminating between scientists. They conclude that “*compared with the* h *index, the mean number of citations per paper is a superior indicator of scientific quality, in terms of both accuracy and precision*,” and hence that “the mean or median citation counts (per paper) can be a useful factor in the academic appointment process.”

Bornmann and Daniel (7) echo their conclusions and state that the Lehmann *et al.* (2, 3) study “raises some doubt as to the accuracy of the (*h*) index for measuring scientific performance,” that “the mean, median, and maximum numbers of citations are reliable and permit accurate measures of scientific performance,” and, instead, that “the *h* index is shown to lack the necessary accuracy and precision to be useful.”

We argue that these conclusions are deeply flawed. Our results here have shown that the *h* index is a far better predictor of future scientific achievement than the mean number of citations per paper, and surely the same would hold for the median. For example, the correlation coefficient between the number of citations in the subsequent 12 years and the *h* index in the previous 12 years in sample PRB80 was found to be *r* = 0.60, much larger than *r* = 0.21, the correlation found with the mean number of citations per paper in the previous 12 years. The *h* index was also far superior at predicting itself, *r* = 0.61 vs. *r* = 0.23 for *n*_{c}. A similar pattern was found in our other sample, and it is likely that similar results would be obtained with the sample used by Lehmann *et al.* (2, 3).

This example illustrates the danger in using sophisticated mathematical analysis to jump to practical conclusions (of sometimes life-changing consequences) in the delicate issues under consideration. Although the Lehmann *et al.* (2, 3) study may be correct in concluding that the mean number of citations is better to “discriminate” between scientists for a given fixed time period according to their definition, the fallacy in their argument appears to be that this does not imply that the indicator is associated with an identifiable individual trait that would be expected to persist with time, and certainly not with “scientific quality.” Otherwise, one is forced to conclude, in light of the results of the present article, that scientific quality (as defined by Lehmann *et al.*) in the past is nearly uncorrelated with scientific quality in the future for individual scientists, a conclusion that defies common sense.

Instead, a variety of studies [refs. 1, 8 (including an extensive list of references), and 9] have shown that the *h* index by and large agrees with other objective and subjective measures of scientific quality in a variety of different disciplines (10–15), and the present study shows that the *h* index is also effective in discriminating among scientists who will perform well and less well in the future. We conclude tentatively (assuming that future empirical studies will corroborate the results of this article) that the *h* index is a useful indicator of scientific quality that can be profitably used (together with other criteria) to assist in academic appointment processes and to allocate research resources.

## Acknowledgments

I thank Marie McVeigh for helpful advice on extracting information from the ISI Web of Science database, P. Ball for calling ref. 2 to my attention, and M.C. for stimulating discussions.

## Footnotes

The author declares no conflict of interest.

This article is a PNAS Direct Submission.

^{†}Ball P, Meeting of the Deutsche Physikalische Gesellschaft, March 26–30, 2007, Regensburg, Germany.

^{‡}In using the very valuable ISI resource for individual evaluations, one should keep in mind that it has limitations, e.g., (*i*) it will, of course, miss citations where the author's name is misspelled; (*ii*) books, book chapters, and most conference proceedings are not included; (*iii*) citations to “Rapid Communications” papers in *Phys Rev B* that include (R) in the citation are currently not counted by ISI.

^{§}Of course, it will often be the case that a junior coauthor will have performed most of the actual work for the paper. Nevertheless, if the paper has senior coauthors and ended up with a large number of citations, it will often be the case that the senior coauthor(s) will have played the crucial role.

*Proceedings of the National Academy of Sciences of the United States of America*, Dec 4, 2007; 104(49):19193.