The Problem With Impact Factors: A Tale of a One-Hit Wonder

Impact factors are an increasingly controversial measure in the scientific literature. The general principle is simple enough: a journal’s impact factor for a given year is the number of citations that year to papers the journal published in the previous two years, divided by the number of citable papers it published over that period. The result is meant to give a general idea of how often work in that journal is cited, and thus its overall impact on the field.
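To make the arithmetic concrete, here’s a minimal sketch of the calculation in Python, using entirely made-up numbers for a hypothetical journal (the real figures come from Web of Science citation data):

```python
# Impact factor for 2016: citations in 2016 to papers from 2014-2015,
# divided by citable items published in 2014-2015. All numbers here are
# hypothetical, purely to illustrate the formula.
citations_in_2016_to_2014_papers = 310
citations_in_2016_to_2015_papers = 250
citable_items_2014 = 140
citable_items_2015 = 160

impact_factor_2016 = (
    (citations_in_2016_to_2014_papers + citations_in_2016_to_2015_papers)
    / (citable_items_2014 + citable_items_2015)
)
print(f"2016 impact factor: {impact_factor_2016:.2f}")  # 1.87
```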

In practice, it mostly shows how general an audience a journal has. But it gets used as a measure of prestige, as if publishing results in a high-impact journal automatically makes them high impact. It’s a convenient way of assessing a journal without having to know anything about the field and what’s big in it.

The problem with impact factors

There are many criticisms of the system. Is the true value of a paper how often it’s cited immediately, or how long it continues to be cited after publication? In many ways, impact factors just illustrate how busy a given sub-field already is, which can discourage journals from publishing groundbreaking discoveries in smaller fields. “High impact” journals tend to get a lot of submissions that don’t really fit their stated scope. And journals that publish review articles will always have higher impact factors than journals that don’t, since reviews pick up far more citations than primary research papers.

But probably the biggest problem is that they’re a straight average. Much like Spiders Georg, who lives in a cave and eats ten thousand spiders a day, can single-handedly skew the average number of spiders each human supposedly eats per year, a few highly cited papers can grossly inflate a journal’s impact factor.

Or even one. Which is what happened in 2009/2010.
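Here’s what that looks like with a toy set of per-paper citation counts (entirely made up): most papers in the window pick up a handful of citations, one picks up thousands, and the straight average lands nowhere near a typical paper.

```python
# Hypothetical citation counts for the papers a small journal published
# in its two-year window. One outlier dominates the mean.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 6000]

mean = sum(citations) / len(citations)           # what the impact factor uses
median = sorted(citations)[len(citations) // 2]  # what a typical paper looks like

print(f"mean:   {mean:.1f}")  # 668.4
print(f"median: {median}")    # 2
```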

[Figure: bar graph showing Acta Crystallographica Section A’s impact factor, 2000–2016]

Citations Georg was an outlier and should not have been counted.

Acta Crystallographica, whether Section A, B, C, D, E or F, is a highly niche journal. Sections C and E have reputations for being largely used for “Well, the project never really went anywhere, but we found some X-ray-quality crystals at the back of the freezer, and they turned out to be something that wasn’t in the Cambridge Structural Database yet, so we should publish this somewhere to get it in there.” Some journals are the place projects go to die. This is where they go when they were already dead. Section A, which we’re talking about today, isn’t viewed as being quite so dire, but it’s a journal that’s only of interest to people who care deeply about the nitty-gritty details of determining these structures.

The One-Hit Wonder

But then in 2008, George Sheldrick published a paper called “A Short History of SHELX” (Acta Cryst. 2008, A64, 112). SHELX is an open-source software program used for determining crystal structures. More specifically, it or a program based on it is used by essentially everyone determining crystal structures. And the abstract for the paper even concludes with this line:

“This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination.”

The vast majority of organic and inorganic synthesis papers include at least one crystal structure determined with this software. And suddenly they were all citing this paper: over six thousand citations in the following year. It didn’t matter that the rest of the papers in the journal were being cited in the single digits. This single paper caused Acta Crystallographica A’s impact factor to skyrocket.

Suddenly, this incredibly specialized journal had the second-highest impact factor. Not the second-highest impact factor in chemistry, the second-highest impact factor, period. The only journal that beat it out while SHELX was driving its citation count was CA: A Cancer Journal for Clinicians. It knocked The New England Journal of Medicine out of second place.

Did anything actually change about Acta Crystallographica for those two years? Of course not. Did the other papers suddenly start getting cited more often? Also no. Once two years had passed and the SHELX paper was no longer inside the citation window, the impact factor dropped back down to where it had been before. But the two years when it appeared to be one of the top journals in all of scientific research provided a beautiful illustration of why impact factors are a flawed metric.
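A toy model (again, entirely made-up numbers) shows the mechanics of the spike and fall: one hugely cited paper moves the impact factor only while its publication year sits inside the two-year window, then drops out of the calculation entirely.

```python
# Toy model of a journal with one outlier paper. All numbers are
# hypothetical; only the two-year-window mechanics are the point.
ITEMS_PER_YEAR = 120   # citable items published each year
TYPICAL_CITES = 2      # citations a typical paper picks up per year
OUTLIER_YEAR = 2008    # the outlier's publication year
OUTLIER_CITES = 6000   # citations per year to the outlier alone

def impact_factor(year: int) -> float:
    """Citations in `year` to items from the two prior years, divided
    by the citable items published in those two years."""
    window = (year - 2, year - 1)
    citations = sum(
        ITEMS_PER_YEAR * TYPICAL_CITES
        + (OUTLIER_CITES if y == OUTLIER_YEAR else 0)
        for y in window
    )
    return citations / (ITEMS_PER_YEAR * len(window))

for y in (2008, 2009, 2010, 2011):
    print(y, impact_factor(y))
# 2008  2.0   (window 2006-2007: no outlier yet)
# 2009 27.0   (window 2007-2008: outlier counts)
# 2010 27.0   (window 2008-2009: outlier still counts)
# 2011  2.0   (the outlier has aged out of the window)
```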

UPDATE 2017/08/24: As of 2014, Sheldrick’s paper was already the thirteenth most-cited paper of all time.