The Opinion Piece: Impact Factor

In the third and final installment of The Opinion Piece series, editorial intern Joe Davies gives us his take on the sometimes controversial way of measuring journal quality: the Impact Factor.


The term ‘impact factor’ often draws either disgruntled mumbles or apathetic sighs from those made to talk about it. A 2012 paper in Scientometrics [1] found that there is ‘neither [a] positive nor negative’ sentiment towards impact factor. In my own experience this apathy is often replaced with outright negativity. During my degree I overheard numerous conversations about impact factor, and none of them presented it in a positive light. So what should be done, if anything, to make it a more palatable concept? Can it, or rather, should it be saved?


A brief history of Impact Factor

Impact factor (IF) has its beginnings in a suggestion Eugene Garfield made in Science in 1955 [2]. With support from the National Institutes of Health and Irving H Sher, this led to the creation of the Journal Impact Factor (JIF) by the early 1960s [3]. The motivation was the observation that while some journals publish more articles (and so might seem more impactful overall), others gather more citations per individual paper. For example, between 2000 and 2010 the average number of citations per paper in clinical medicine was 12.93 [4], yet the most cited article from around that period and discipline had 9,723 citations [5] (as of July 2014)!

The worry was that smaller journals, unable to publish as many papers, might be seen as less impactful regardless of how many citations their papers received. Libraries with finite budgets might then decline to stock journals that are in fact important, simply because of a lower perceived importance.


What is it?

Impact factor is a measure reflecting the average number of citations to recent articles published in a given journal, and is often used to determine how important a particular journal is within its field. Taking the year 2018 as an example, a journal’s impact factor for that year is calculated by [6]:

JIF (2018) = (citations received in 2018 by articles the journal published in 2016 and 2017) ÷ (number of citable articles the journal published in 2016 and 2017)
As such, a journal only gets an impact factor after its first two years of publishing. This number is then used to determine how ‘impactful’ a journal is, which in turn serves as a measure of prestige or importance [7]. The journal with the highest impact factor at the moment is CA: A Cancer Journal for Clinicians, which has a very large JIF of 244.585 [8]. For comparison, Nature, the widely known general science journal, has an impact factor of 41.577 (though this is still in the top 20).
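As a hypothetical illustration, the two-year calculation can be sketched in a few lines of Python. The journal figures below are invented purely for illustration:

```python
# Minimal sketch of the two-year Journal Impact Factor calculation.
# All numbers are hypothetical, invented for illustration only.

def journal_impact_factor(citations_this_year: int, citable_items: int) -> float:
    """JIF for a given year: citations received that year to articles
    from the previous two years, divided by the number of citable
    articles published in those two years."""
    if citable_items == 0:
        raise ValueError("journal has no citable articles yet")
    return citations_this_year / citable_items

# Example: a journal published 40 + 50 citable articles in 2016-2017,
# and those articles were cited 360 times during 2018.
jif_2018 = journal_impact_factor(citations_this_year=360, citable_items=40 + 50)
print(jif_2018)  # 4.0
```

The division itself is trivial; the controversy discussed below lies entirely in what the numerator and denominator are allowed to include.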

Impact factor can be extended to Personal Impact Factor (PIF), which, as the name suggests, measures the impact of one researcher. The equation for this is similar to JIF:

PIF (2018) = (citations received in 2018 by papers the author published in 2016 and 2017) ÷ (number of papers the author published in 2016 and 2017)
However, PIF is not as widely used as another measure of personal impact [6]: the h-index. Invented by Jorge E Hirsch in 2005 [9], it was originally a tool for gauging the relative quality of a theoretical physicist’s research, but is now applied far more widely.

The h-index combines citation impact with productivity. It is slightly more awkward to calculate than impact factor, as it requires ranking the author’s papers by number of citations: an author has an h-index of h if h of their papers have each been cited at least h times. The benefit is that it accounts not just for how many papers an author writes, but for how many citations each one receives, so a long tail of rarely cited papers cannot inflate the score.
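The ranking step described above can be sketched directly in Python, using invented citation counts:

```python
# Sketch of the h-index calculation: rank papers by citations, then find
# the largest h such that h papers each have at least h citations.

def h_index(citation_counts):
    """Largest h such that the author has h papers cited at least h times."""
    ranked = sorted(citation_counts, reverse=True)  # the manual ordering step
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

# Five papers with citation counts 10, 8, 5, 4 and 3: four papers have at
# least 4 citations each, but there are not five with at least 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how one enormously cited paper cannot raise h on its own: `h_index([1000])` is still 1, which is exactly the property that distinguishes it from a mean-based score like JIF.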


The pros and cons of impact factor

Pros:

  1. It’s the best we’ve got – a point made by Dr C Hoeffel in Allergy: “Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation.” [10] The idea here is that it may not be perfect, but it’s here and no one has found a better way so far.
  2. Ease of extension – the impact factor can be extended to both journals and individuals relatively easily.
Cons:

  1. Citation distribution - citation distribution across papers in a journal is skewed. In 1992, analysis by Dr Per Seglen found that typically just 15% of papers account for 50% of the citations in a journal [11]. This means roughly 85% of papers have a below-average citation count, so the arithmetic mean (which is what both JIF and PIF are) is dragged upwards by a handful of highly cited papers and misrepresents the typical article.
  2. Inconsistent across disciplines - papers from different disciplines have different peak citation times. For Nature this is 2-3 years after publication, but for Ecology peak citations come at 7-8 years [12]. The impact factor ignores this, counting only citations to articles published in the previous two years, so journals and researchers cannot be fairly compared across disciplines.
  3. Affected by journal practices - journals could simply publish more review articles than research papers in order to game the system: reviews are usually cited more often, so this raises the journal’s impact factor without it having contributed any new research. Journals could also decline certain types of paper that are less likely to be cited (like medical case reports), increasing the ratio of citations to papers.
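The citation-distribution point is easy to see numerically. Here is a toy example with invented counts (not Seglen’s data), shaped so that a couple of papers hoard the citations:

```python
# Invented citation counts for 20 papers in a single journal, skewed so
# that two papers collect most of the citations.
citations = [120, 60] + [3] * 18

total = sum(citations)                              # 234
mean = total / len(citations)                       # 11.7 -- this is the "JIF-style" average
top_share = (citations[0] + citations[1]) / total   # ~0.77
below_mean = sum(1 for c in citations if c < mean)  # 18 of 20 papers

print(f"mean citations per paper: {mean:.1f}")
print(f"share of citations held by the top 2 papers: {top_share:.0%}")
print(f"papers below the journal's own average: {below_mean}/20")
```

In this toy journal, 10% of the papers hold about 77% of the citations and 90% of papers sit below the journal’s own average, which is exactly why a mean-based score says little about a typical article.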


The Opinion Piece

So should we keep the IF? In my opinion, the short answer is no – for the reasons listed above, it isn’t a particularly good measure of how impactful a journal or individual is, and it doesn’t allow fair comparison across disciplines. However, nobody has yet conceived of a method that does the job better. What the IF attempts to measure is: how much does a journal or author impact the world at large?

This is hard to quantify: for example, if a paper with almost no citations inspired someone to create a cure for cancer, would that paper, and its author, have a high or low impact factor? Presumably low, despite its real-world impact: the low number of citations would mean a low score no matter the method of calculation. At the extreme opposite end, the most cited paper ever has over 300,000 citations [13]. Its title is:

Protein Measurement with the Folin Phenol Reagent

but this only has relevance in biochemistry, itself a sub-group of two other huge disciplines. Ask a physicist and you may be met with a scratching of heads as to what the use is!


So what is to be done? We could use impact factor for now whilst we work on another, better method for measuring how important a journal is. We could look at the content and scope of the paper, along with how it is received by peers. We could look at the percentage of review articles compared to research papers that a journal publishes. We could do all of this.

However, I think we should choose another option: scrap impact factor altogether. To me it seems somewhat archaic to think that the impact of any research on the world at large can be quantified. The importance of a paper can be far-reaching. When Friedrich Hund discovered quantum tunnelling in 1927, I doubt he predicted that we would one day walk around with phones containing billions of transistors that rely on the principle! Impact is such a nuanced concept that trying to pigeonhole it is futile, because the impact something has on both the research community and the wider public can manifest in so many ways: relativity leads to GPS tracking, alchemy leads to gunpowder, and trying to make fridges leads to non-stick pans.

In a time when high-risk, high-reward companies like SpaceX and Waymo (Alphabet’s self-driving car company) are capturing attention, it may be time to switch to a research stance less focused on what kind of immediate impact can be made. We could make sure that we recognize the importance of blue-skies research and focus less on science for impact, moving instead towards science for understanding.


Article originally published 14 September 2018, author Joe Davies. The article is an opinion piece: all views expressed are the author's own, and do not necessarily reflect the official stance of the RAS.



[2] Garfield E., “Citation indexes to science: a new dimension in documentation through association of ideas”, Science, 1955; 122: 108-111.


[4] (free subscription required to access)





[9] Hirsch, J. E., “An index to quantify an individual’s scientific research output”, PNAS, 2005; 102 (46): 16569-72. doi:10.1073/pnas.0507655102.



[12] Vanclay, J. K., Scientometrics, 2012; 92 (2): 211-238. doi:10.1007/s11192-011-0561-0.