Research Article

Universality of citation distributions: Toward an objective measure of scientific impact

Filippo Radicchi, Santo Fortunato, and Claudio Castellano
PNAS November 11, 2008 105 (45) 17268-17272; https://doi.org/10.1073/pnas.0806977105
Edited by Michael E. Fisher, University of Maryland, College Park, MD, and approved September 17, 2008 (received for review July 18, 2008)


Abstract

We study the distributions of citations received by a single publication within several disciplines, spanning broad areas of science. We show that the probability that an article is cited c times has large variations between different disciplines, but all distributions are rescaled on a universal curve when the relative indicator cf = c/c0 is considered, where c0 is the average number of citations per article for the discipline. In addition, we show that the same universal behavior occurs when citation distributions of articles published in the same field, but in different years, are compared. These findings provide a strong validation of cf as an unbiased indicator for citation performance across disciplines and years. Based on this indicator, we introduce a generalization of the h index suitable for comparing scientists working in different fields.

  • bibliometrics
  • analysis
  • h index

Citation analysis is a bibliometric tool that is becoming increasingly popular for evaluating the performance of different actors in the academic and scientific arena, ranging from individual scholars (1–3) to journals, departments, universities (4), and national institutions (5), up to whole countries (6). The outcome of such analysis often plays a crucial role in deciding which grants are awarded, how applicants for a position are ranked, and even the fate of scientific institutions. It is therefore crucial that citation analysis be carried out in the most precise and unbiased way possible.

Citation analysis has a very long history, and many potential problems have been identified (7–9), the most critical being that a citation often does not reflect, nor is it intended to reflect, the scientific merit of the cited work (in terms of quality or relevance). Additional sources of bias include, to mention just a few, self-citations, implicit citations, the growth of the total number of citations over time, and the correlation between the number of authors of an article and the number of citations it receives (10).

In this work we consider one of the most relevant factors that may hamper a fair evaluation of scientific performance: field variation. Publications in certain disciplines are typically cited much more or much less than in others. This may happen for several reasons, including uneven number of cited papers per article in different fields or unbalanced cross-discipline citations (11). A paradigmatic example is provided by mathematics: the highest 2006 impact factor (IF) (12) for journals in this category (Journal of the American Mathematical Society) is 2.55, whereas this figure is 10 times larger or more in other disciplines (for example, in 2006, New England Journal of Medicine had IF 51.30, Cell had IF 29.19, and Nature and Science had IF 26.68 and 30.03, respectively).

The existence of this bias is well known (8, 10, 12), and it is widely recognized that comparing bare citation numbers is inappropriate. Many methods have been proposed to alleviate this problem (13–17). They are based on the general idea of normalizing citation numbers with respect to some properly chosen reference standard. The choice of a suitable reference standard, which can be a journal, all journals in a discipline, or a more complicated set (14), is a delicate issue (18). Many possibilities also exist for the detailed implementation of the standardization procedure. Some methods are based on ranking articles (scientists, research groups) within one field and comparing relative positions across disciplines. In many other cases relative indicators are defined, that is, ratios between the bare number of citations c and some average measure of the citation frequency in the reference standard. A simple example is the Relative Citation Rate of a group of articles (13), defined as the total number of citations they received divided by the weighted sum of the impact factors of the journals where the articles were published. The use of relative indicators is widespread, but empirical studies (19–21) have shown that distributions of article citations are very skewed, even within single disciplines. One may then wonder whether it is appropriate to normalize by the average citation number, which gives only a very limited characterization of the whole distribution. We address this issue in this article.

The problem of field variation affects the evaluation of performance at many possible levels of detail: publications, individual scientists, research groups, and institutions. Here, we consider the simplest possible level, the evaluation of the citation performance of single publications. When considering individuals or research groups, additional sources of bias (and of arbitrariness) exist that we do not tackle here. As the reference standard for an article, we consider the set of all articles published in journals that are classified in the same Journal Citation Reports scientific category as the journal where the publication appeared (see details in Methods). We take as the normalizing quantity for citations of articles belonging to a given scientific field the average number c0 of citations received by all articles in that discipline published in the same year. We perform an empirical analysis of the distribution of citations for publications in various disciplines and show that the large variability in the number of bare citations c is fully accounted for when cf = c/c0 is considered. The distribution of this relative performance index is the same for all fields: no matter whether, for instance, Developmental Biology, Nuclear Physics, or Aerospace Engineering is considered, the chance of attaining a particular value of cf is the same. Moreover, we show that cf allows us to properly take into account the differences, within a single discipline, between articles published in different years. This provides a strong validation of the use of cf as an unbiased relative indicator of scientific impact for comparison across fields and years.
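To make the normalization operational, the following minimal sketch (in Python; not the authors' code, and with hypothetical record fields) computes cf = c/c0 for a list of articles, using the average citation count of each (discipline, year) stratum as c0.

```python
from collections import defaultdict

# Minimal sketch: compute the relative indicator c_f = c / c0.
# Each record carries the raw citation count c plus the discipline and
# publication year that select its reference standard. Records are illustrative.
articles = [
    {"id": "a1", "discipline": "Nuclear Physics", "year": 1999, "c": 42},
    {"id": "a2", "discipline": "Nuclear Physics", "year": 1999, "c": 3},
    {"id": "a3", "discipline": "Developmental Biology", "year": 1999, "c": 120},
    {"id": "a4", "discipline": "Developmental Biology", "year": 1999, "c": 15},
]

# c0: average citations per article within each (discipline, year) stratum.
totals, counts = defaultdict(float), defaultdict(int)
for a in articles:
    key = (a["discipline"], a["year"])
    totals[key] += a["c"]
    counts[key] += 1
c0 = {key: totals[key] / counts[key] for key in totals}

# The relative indicator: raw citations divided by the field-year average.
for a in articles:
    a["cf"] = a["c"] / c0[(a["discipline"], a["year"])]
    print(a["id"], round(a["cf"], 2))
```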

Variability of Citation Statistics in Different Disciplines

First, we show explicitly that the distribution of the number of articles published in a given year and cited a certain number of times depends strongly on the discipline considered. In Fig. 1 we plot the normalized distributions of citations to articles that appeared in 1999 in all journals belonging to several different disciplines according to the Journal Citation Reports classification.

Fig. 1.

Normalized histogram P(c,c0) of the number of articles published in 1999 that received c citations. We plot P(c,c0) for several scientific disciplines with different average numbers c0 of citations per article.

From this figure it is apparent that the chance of a publication being cited strongly depends on the category to which the article belongs. For example, a publication with 100 citations is ≈50 times more common in Developmental Biology than in Aerospace Engineering. This has obvious implications for the evaluation of outstanding scientific achievements: the simple count of citations is a patently misleading way to assess whether an article in Developmental Biology is more successful than one in Aerospace Engineering.

Distribution of the Relative Indicator cf

A first step toward properly taking into account field variations is to recognize that the differences in the bare citation distributions are essentially not due to specific discipline-dependent factors, but are instead related to the pattern of citations in the field, as measured by the average number of citations per article c0. It is natural then to try to factor out the bias induced by the difference in the value of c0 by considering a relative indicator, that is, measuring the success of a publication by the ratio cf = c/c0 between the number of citations received and the average number of citations received by articles published in its field in the same year. Fig. 2 shows that this procedure leads to a very good collapse of all curves for different values of c0 onto a single shape. The distribution of the relative indicator cf then seems universal for all categories considered and resembles a lognormal distribution. To make these observations more quantitative, we have fitted each curve in Fig. 2 for cf ≥ 0.1 with a lognormal curve

P(c_f) = \frac{1}{c_f \sqrt{2\pi\sigma^2}} \exp\left[ -\frac{(\ln c_f - \mu)^2}{2\sigma^2} \right],

where the relation σ2 = −2μ, which holds because the expected value of the variable cf is 1, reduces the number of fitting parameters to 1. All fitted values of σ2, reported in Table 1, are compatible within 2 standard deviations, except for one (Anesthesiology) that is, in any case, within 3 standard deviations of all of the others. Values of χ2 per degree of freedom, also reported in Table 1, indicate that the fit is good. This allows us to conclude that, in rescaling the distribution of citations for publications in a scientific discipline by their average number, a universal curve is found, independent of the specific discipline. Fitting a single curve to all categories, a lognormal distribution with σ2 = 1.3 is found, which is reported in Fig. 2.
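To illustrate the one-parameter fit, here is a minimal sketch (an assumed workflow, not the authors' fitting code): the constraint μ = −σ2/2 enforces a unit mean for cf, and the single remaining parameter σ2 is fitted to synthetic binned data generated from the same functional form.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_unit_mean(cf, sigma2):
    """Lognormal density with mu = -sigma2/2, which fixes the mean of c_f to 1."""
    mu = -sigma2 / 2.0
    return np.exp(-(np.log(cf) - mu) ** 2 / (2 * sigma2)) / (
        cf * np.sqrt(2 * np.pi * sigma2)
    )

# Synthetic stand-in for a binned citation distribution with c_f >= 0.1:
# density values drawn from the model (sigma^2 = 1.3) with 5% noise.
rng = np.random.default_rng(0)
cf_vals = np.logspace(-1, 1.5, 20)
density = lognormal_unit_mean(cf_vals, 1.3) * rng.normal(1.0, 0.05, cf_vals.size)

# One free parameter: sigma^2. The paper reports ~1.3 across disciplines.
(sigma2_hat,), _ = curve_fit(lognormal_unit_mean, cf_vals, density, p0=[1.0])
print(f"fitted sigma^2 = {sigma2_hat:.2f}")
```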

Fig. 2.

Rescaled probability distribution c0 P(c,c0) of the relative indicator cf = c/c0, showing that the universal scaling holds for all scientific disciplines considered (see Table 1). The dashed line is a lognormal fit with σ2 = 1.3.

Table 1.

List of all scientific disciplines considered in this article

Interestingly, a similar universality for the distribution of the relative performance is found, in a totally different context, when the number of votes received by candidates in proportional elections is considered (22). In that case, the scaling curve is also well-fitted by a lognormal with parameter σ2 ≈ 1.1. For universality in the dynamics of academic research activities, see also ref. 23.

The universal scaling obtained provides a solid grounding for comparison between articles in different fields. To make this even more visually evident, we have ranked all articles belonging to a pool of different disciplines (spanning broad areas of science) according either to c or to cf. We have then computed the percentage of publications of each discipline that appear in the top z% of the global rank. If the ranking is fair, the percentage for each discipline should be ≈z% with small fluctuations. Fig. 3 clearly shows that when articles are ranked according to the unnormalized number of citations c, there are wide variations among disciplines. Such variations are dramatically reduced, instead, when the relative indicator cf is used. This occurs for various choices of the percentage z. More quantitatively, assuming that articles of the various disciplines are scattered uniformly along the rank axis, one would expect the average bin height in Fig. 3 to be z% with a standard deviation

\sigma_z = \sqrt{ \frac{1}{N_c} \sum_{i=1}^{N_c} \frac{z(100 - z)}{N_i} },

where Nc is the number of categories and Ni the number of articles in the ith category. When the ranking is performed according to cf = c/c0, we find (Table 2) very good agreement with the hypothesis that the ranking is unbiased, but strong evidence that the ranking is biased when c is used. For example, for z = 20%, σz = 1.15% for the cf-based ranking, whereas σz = 12.37% if c is used, as opposed to the value σz = 1.09% expected for an unbiased ranking. Figs. 2 and 3 allow us to conclude that cf is an unbiased indicator for comparing the scientific impact of publications in different disciplines.
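The fairness check described above is easy to reproduce on synthetic data. The sketch below (hypothetical data, not the authors' code) pools two disciplines whose cf values are drawn from the same universal lognormal, extracts the top z%, and compares each discipline's share against the expected spread σz.

```python
import numpy as np

rng = np.random.default_rng(1)
z = 20.0  # top-20% threshold

# Two hypothetical disciplines whose c_f values follow the same universal
# distribution: lognormal with sigma^2 = 1.3 and mu = -sigma^2/2 = -0.65.
disciplines = {
    "Nuclear Physics": rng.lognormal(mean=-0.65, sigma=np.sqrt(1.3), size=3000),
    "Developmental Biology": rng.lognormal(mean=-0.65, sigma=np.sqrt(1.3), size=5000),
}

# Pool everything and find the c_f value delimiting the global top z%.
pooled = np.concatenate(list(disciplines.values()))
threshold = np.percentile(pooled, 100 - z)

# Each discipline's share of the top z%; under unbiased ranking this is ~z%.
for name, cf in disciplines.items():
    share = 100.0 * np.mean(cf > threshold)
    print(f"{name}: {share:.1f}% in the global top {z:.0f}%")

# Expected binomial spread of the bin heights under unbiased ranking.
sizes = np.array([cf.size for cf in disciplines.values()])
sigma_z = np.sqrt(np.mean(z * (100 - z) / sizes))
print(f"expected sigma_z ~ {sigma_z:.2f}%")
```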

Fig. 3.

We rank all articles according to the bare number of citations c and the relative indicator cf. We then plot the percentage of articles of a particular discipline present in the top z% of the general ranking, for the rank based on the number of citations (A and C) and based on the relative indicator cf (B and D). Different values of z (different graphs) lead to a very similar pattern of results. The average values and the standard deviations of the bin heights shown are also reported in Table 2. The numbers identify the disciplines as they are indicated in Table 1.

Table 2.

Average and standard deviation for the bin heights in Fig. 3

For the normalization of the relative indicator, we have considered the average number c0 of citations per article published in the same year and in the same field. This is a very natural choice, giving the numerical value of cf a direct interpretation as the relative citation performance of the publication. In the literature this quantity is also referred to as the "item oriented field normalized citation score" (24), an analogue for a single publication of the popular Centre for Science and Technology Studies, Leiden (CWTS), field-normalized citation score or "crown indicator" (25). In agreement with the findings of ref. 11, c0 shows very little correlation with the overall size of the field, as measured by the total number of articles.

The previous analysis compares distributions of citations to articles published in a single year, 1999. It is known that different temporal patterns of citations exist, with some articles beginning to receive citations soon after publication, whereas others ("sleeping beauties") go unnoticed for a long time before being recognized as seminal and attracting a large number of citations (26, 27). Other differences exist between disciplines, with noticeable fluctuations in the cited half-life indicator across fields. It is then natural to wonder whether the universality of distributions for articles published in the same year extends over time, so that the relative indicator allows comparison of articles published in different years. For this reason, in Fig. 4 we compare plots of c0P(c,c0) vs. cf for publications in the same scientific discipline that appeared in 3 different years. The value of c0 obviously grows as older publications are considered, but the rescaled distribution remains conspicuously the same.

Fig. 4.

Rescaled probability distribution c0 P(c,c0) of the relative indicator cf = c/c0 for 3 disciplines (Hematology, Neuroimaging, and Nuclear Physics) for articles published in different years (1990, 1999, and 2004). Despite the natural variation of c0 (c0 grows as a function of the elapsed time), the universal scaling observed over different disciplines naturally holds also for articles published in different time periods. The dashed line is a lognormal fit with σ2 = 1.3.

Generalized h Index

Since its introduction in 2005, the h index (1) has enjoyed a spectacularly quick success (28): it is now a well-established standard tool for the evaluation of the scientific performance of scientists. Its popularity is partly due to its simplicity: the h index of an author is h if h of his N articles have at least h citations each, and the other N − h articles have, at most, h citations each. Despite its success, as with all other performance metrics, the h index has some shortcomings, as already pointed out by Hirsch himself. One of them is the difficulty in comparing authors in different disciplines.
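The definition just quoted translates directly into code; the following is a minimal sketch, not code from the paper.

```python
def h_index(citations):
    """Hirsch's h: the largest h such that h papers have at least h citations.
    A direct transcription of the definition quoted above."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank  # the rank-th best paper still has >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
```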

The identification of the relative indicator cf as the correct metric for comparing articles in different disciplines naturally suggests its use in a generalized version of the h index, properly taking into account different citation patterns across disciplines. However, just ranking articles according to cf, instead of on the basis of the bare citation number c, is not enough. A crucial ingredient of the h index is the number of articles published by an author. As Fig. 5 shows, this quantity also depends on the discipline considered; in some disciplines, the average number of articles published by an author in a year is much larger than in others. However, also in this case, the variability is rescaled away if the number N of publications in a year by an author is divided by its average value N0 in the discipline. Interestingly, the universal curve is fitted reasonably well over almost 2 decades by a power-law behavior P(N, N0) ≈ (N/N0)^(−δ) with δ = 3.5(5) (the digit in parentheses being the uncertainty on the last figure).

Fig. 5.

The distributions shown in Inset, rescaled by the average number N0 of publications per author in 1999 in each discipline. The dashed line is a power law with exponent −3.5. (Inset) Distributions of the number of articles, N, published by an author during 1999 in several disciplines.

This universality allows one to define a generalized h index, hf, that factors out also the additional bias due to different publication rates, thus allowing comparisons among scientists working in different fields. To compute the index for an author, his/her articles are ordered according to cf = c/c0 and this value is plotted versus the reduced rank r/N0 with r being the rank. In analogy with the original definition by Hirsch, the generalized index is then given by the last value of r/N0 such that the corresponding cf is larger than r/N0. For instance, if an author has published 6 articles with values of cf equal to 4.1, 2.8, 2.2, 1.6, 0.8, and 0.4, respectively, and the value of N0 in his discipline is 2.0, his hf index is equal to 1.5. This is because the third best article has r/N0 = 1.5 < 2.2 = cf, whereas the fourth has r/N0 = 2.0 > 1.6 = cf.
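The procedure, including the worked example above, can be sketched as follows (a minimal illustration, not the authors' reference implementation).

```python
def h_f_index(cf_values, n0):
    """Generalized index h_f as described above: order articles by c_f and
    return the last reduced rank r/N0 whose c_f still exceeds it."""
    hf = 0.0
    for rank, cf in enumerate(sorted(cf_values, reverse=True), start=1):
        reduced_rank = rank / n0
        if cf > reduced_rank:
            hf = reduced_rank
        else:
            break
    return hf

# The worked example from the text: six articles and N0 = 2.0 give h_f = 1.5,
# since the third-best article has r/N0 = 1.5 < c_f = 2.2 while the fourth
# has r/N0 = 2.0 > c_f = 1.6.
print(h_f_index([4.1, 2.8, 2.2, 1.6, 0.8, 0.4], n0=2.0))  # -> 1.5
```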

Conclusions

In this article we have presented strong empirical evidence that the widely scattered distributions of citations for publications in different scientific disciplines are rescaled onto the same universal curve when the relative indicator cf is used. We have also seen that the universal curve is remarkably stable over the years. The analysis presented here justifies the use of relative indicators to compare in a fair manner the impact of articles across different disciplines and years. This may have strong and unexpected implications. For instance, Fig. 2 leads to the counterintuitive conclusion that an article in Aerospace Engineering with only 20 citations (cf ≈ 3.54) is more successful than an article in Developmental Biology with 100 citations (cf ≈ 2.58). We stress that this does not imply that the article with the larger cf is necessarily more "important" than the other. In an evaluation of importance, other field-related factors may play a role: an article with an outstanding value of cf in a very narrow specialist field may be less important (for science in general, or for society) than a publication with a smaller cf in a highly competitive discipline with potential implications in many areas.

Because we consider single publications, the smallest possible entities whose scientific impact can be measured, our results must always be taken into account when tackling other, more complicated tasks, such as the evaluation of the performance of individuals or research groups. For example, in situations where the mean number of citations per publication is deemed important, one should compute the average of cf (not of c) to evaluate impact independently of the scientific discipline. Regarding the assessment of single authors' performance, we have defined a generalized h index (1) that allows a fair comparison across disciplines, taking into account also the different publication rates.

Our analysis deals with 2 of the main sources of bias affecting comparisons of publication citations. It would be interesting to tackle, along the same lines, other potential sources of bias, for example the number of authors, which is known to correlate with a higher number of citations (10). It is natural to define a relative indicator, the number of citations per author. Is this normalization the correct one, leading to a universal distribution for any number of authors?

Finally, from a more theoretical point of view, an interesting goal for future work is to understand the origin of the universality found and how its precise functional form comes about. An attempt to investigate what mechanisms are relevant for understanding citation distributions is in ref. 29. Further activity in the same direction would definitely be interesting.

Methods

Our empirical analysis is based on data from Thomson Scientific's Web of Science (WOS; www.isiknowledge.com) database, where the number of citations of an article is counted as the total number of times it appears as a reference of a more recently published article. Scientific journals are divided into 172 categories, from Acoustics to Zoology. Each category comes with a list of journals, and we consider articles published in any of these journals to be part of the category. Notice that the division into categories is not mutually exclusive: for example, Physical Review D belongs both to the Astronomy and Astrophysics and to the Physics, Particles and Fields categories. For consistency, among all records contained in the database we consider only those classified as "article" and "letter," thus excluding reviews, editorials, comments, and other published material likely to have an uncommon citation pattern. A list of the categories considered, with the relevant parameters that characterize them, is reported in Table 1.

The category Multidisciplinary Sciences does not fit perfectly into the universal picture found for the other categories, because its distribution of the number of citations is a convolution of the distributions corresponding to the single disciplines represented in the journals. However, if one focuses only on the 3 most important multidisciplinary journals (Nature, Science, and PNAS), this category fits very well into the global universal picture. Our calculations neglect uncited articles; we have verified, however, that their inclusion produces just a small shift in c0, which does not affect the results of our analysis.

In the plots of the citation distributions, data have been grouped in bins of exponentially growing size, so that they are equally spaced along a logarithmic axis. For each bin, we count the number of articles with a citation count within the bin and divide by the number of all potential values for the citation count that fall in the bin (i.e., all integers). This holds as well for the distribution of the normalized citation count cf, because the latter is obtained by dividing the citation count by the constant c0, so it is a discrete variable just like the original citation count. The resulting ratios obtained for each bin are finally divided by the total number of articles considered, so that the histograms are normalized to 1.
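As an illustration of this binning procedure, the sketch below (hypothetical data; not the authors' code) builds exponentially growing bins, divides each bin's count by the number of integer citation values it covers, and normalizes by the total number of articles.

```python
import numpy as np

def log_binned_distribution(citations, n_bins=20):
    """Histogram with exponentially growing bins, normalized as in Methods:
    each bin's count is divided by the number of integer citation values it
    covers and by the total number of articles."""
    citations = np.asarray(citations)
    c_max = citations.max()
    # Integer bin edges spaced evenly on a logarithmic axis, starting at 1.
    edges = np.unique(np.round(np.logspace(0, np.log10(c_max + 1), n_bins)))
    counts, _ = np.histogram(citations, bins=edges)
    widths = np.diff(edges)  # ~ number of integers falling in each bin
    density = counts / (widths * citations.size)
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
    return centers, density

# Hypothetical citation counts (uncited articles excluded, as in the text).
rng = np.random.default_rng(2)
c = np.maximum(rng.lognormal(mean=2.0, sigma=1.2, size=10000).astype(int), 1)
centers, density = log_binned_distribution(c)
print(centers[:5], density[:5])
```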

Footnotes

  • To whom correspondence should be addressed. E-mail: claudio.castellano@roma1.infn.it
  • Author contributions: F.R., S.F., and C.C. designed research; F.R., S.F., and C.C. performed research; F.R. analyzed data; and C.C. wrote the paper.

  • The authors declare no conflict of interest.

  • This article is a PNAS Direct Submission.

  • © 2008 by The National Academy of Sciences of the USA

References

  1. Hirsch JE (2005) An index to quantify an individual's scientific research output. Proc Natl Acad Sci USA 102:16569–16572.
  2. Egghe L (2006) Theory and practise of the g-index. Scientometrics 69:131–152.
  3. Hirsch JE (2007) Does the h index have predictive power? Proc Natl Acad Sci USA 104:19193–19198.
  4. Evidence Ltd (2007) The use of bibliometrics to measure research quality in UK higher education institutions. Available at: http://bookshop.universitiesuk.ac.uk/downloads/bibliometrics.pdf. Accessed Oct. 7, 2008.
  5. Kinney AL (2007) National scientific facilities and their science impact on nonbiomedical research. Proc Natl Acad Sci USA 104:17943–17947.
  6. King DA (2004) The scientific impact of nations. Nature 430:311–316.
  7. Brooks TA (1986) Evidence of complex citer motivations. J Am Soc Inf Sci 37:34–36.
  8. Egghe L, Rousseau R (1990) Introduction to Informetrics: Quantitative Methods in Library, Documentation and Information Science (Elsevier, Amsterdam).
  9. Adler R, Ewing J, Taylor P (2008) Citation statistics. IMU Report. Available at: http://www.mathunion.org/Publications/Report/CitationStatistics. Accessed Oct. 7, 2008.
  10. Bornmann L, Daniel H-D (2008) What do citation counts measure? A review of studies on citing behavior. J Docum 64:45–80.
  11. Althouse BM, West JD, Bergstrom T, Bergstrom CT (2008) Differences in impact factor across fields and over time. arXiv:0804.3116v1.
  12. Garfield E (1979) Citation Indexing: Its Theory and Applications in Science, Technology, and Humanities (Wiley, New York).
  13. Schubert A, Braun T (1986) Relative indicators and relational charts for comparative assessment of publication output and citation impact. Scientometrics 9:281–291.
  14. Schubert A, Braun T (1996) Cross-field normalization of scientometric indicators. Scientometrics 36:311–324.
  15. Vinkler P (1996) Model for quantitative selection of relative scientometric impact indicators. Scientometrics 36:223–236.
  16. Vinkler P (2003) Relations of relative scientometric indicators. Scientometrics 58:687–694.
  17. Iglesias JE, Pecharroman C (2007) Scaling the h-index for different scientific ISI fields. Scientometrics 73:303–320.
  18. Zitt M, Ramanana-Rahary S, Bassecoulard E (2005) Relativity of citation performance and excellence measures: From cross-field to cross-scale effects of field-normalisation. Scientometrics 63:373–401.
  19. Redner S (1998) How popular is your paper? Eur Phys J B 4:131–134.
  20. Naranan S (1971) Power law relations in science bibliography: A self-consistent interpretation. J Docum 27:83–97.
  21. Seglen PO (1992) The skewness of science. J Am Soc Inf Sci 43:628–638.
  22. Fortunato S, Castellano C (2007) Scaling and universality in proportional elections. Phys Rev Lett 99:138701.
  23. Plerou V, Nunes Amaral LA, Gopikrishnan P, Meyer M, Stanley HE (1999) Similarities between the growth dynamics of university research and of competitive economic activities. Nature 400:433–437.
  24. Lundberg J (2007) Lifting the crown-citation z-score. J Informetrics 1:145–154.
  25. Moed HF, Debruin RE, Vanleeuwen TN (1995) New bibliometric tools for the assessment of national research performance: Database description, overview of indicators and first applications. Scientometrics 33:381–422.
  26. Van Raan AF (2004) Sleeping Beauties in science. Scientometrics 59:461–466.
  27. Redner S (2005) Citation statistics from 110 years of Physical Review. Phys Today 58:49–54.
  28. Ball P (2005) Index aims for fair ranking of scientists. Nature 436:900.
  29. Van Raan AF (2001) Competition amongst scientists for publication status: Toward a model of scientific publication and citation distributions. Scientometrics 51:347–357.