The Scholarly Web

Weekly transmissions from the blogosphere

August 23, 2012

Last week's attack by Stephen Curry, professor of structural biology at Imperial College London, on the use of impact factors to measure journal quality has ignited debate.

"The impact factor might have started out as a good idea, but its time has come and gone," writes Professor Curry on his Reciprocal Space blog. He argues that the method became problematic by the end of the 1990s because the pattern of citation distribution was so "skewed".

"Analysis by Per Seglen (professor in the department of cell biology at the Institute for Cancer Research at the Norwegian Radium Hospital, Oslo) in 1992 showed that typically only 15% of the papers in a journal account for half the total citations," he writes.

"Therefore only this minority of the articles has more than the average number of citations denoted by the journal impact factor. Take a moment to think about what that means: the vast majority of the journal's papers - fully 85% - have fewer citations than the average.

"The impact factor is a statistically indefensible indicator of journal performance; it flatters to deceive, distributing credit that has been earned by only a small fraction of its published papers."
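The arithmetic behind this claim is easy to check with a toy example. The figures below are invented for illustration, not drawn from Seglen's study, but they reproduce the pattern he describes: in a skewed distribution, a small minority of papers supplies half the citations, and most papers sit below the journal-wide average that an impact factor reports.

```python
# Toy illustration (invented data): citation counts for 20 papers
# in a hypothetical journal with a heavily skewed distribution.
citations = [120, 95, 60, 10, 8, 7, 6, 5, 5, 4,
             3, 3, 2, 2, 1, 1, 1, 0, 0, 0]

total = sum(citations)
mean = total / len(citations)  # the impact-factor-style average

# How few papers account for half of all citations?
running, top_papers = 0, 0
for c in sorted(citations, reverse=True):
    running += c
    top_papers += 1
    if running >= total / 2:
        break

# How many papers fall below the journal's average?
below_mean = sum(1 for c in citations if c < mean)

print(f"top {top_papers} of {len(citations)} papers supply half the citations")
print(f"{below_mean} of {len(citations)} papers fall below the mean of {mean:.1f}")
```

With these numbers, just 2 of the 20 papers supply half the citations, and 17 of 20 (85 per cent) fall below the mean of 16.7, matching the proportion Professor Curry cites.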

What raises Professor Curry's ire further still is the application of impact factors to papers and people, a "malady" that afflicts science, technology and medicine researchers in particular, who he says have grown dependent on this valuation system, despite its "falsity".

"We spend our lives fretting about how high an impact factor we can attach to our published research because it has become such an important determinant in the award of the grants and promotions needed to advance a career," he writes.

"We submit to time-wasting and demoralising rounds of manuscript rejection, retarding the progress of science in the chase for a false measure of prestige."

He suggests that a more accurate measure of impact comes from the "buzz" generated by mega-journals such as PLoS One.

"The trick will be to crowd-source the task [of measuring impact]," he writes. "I am not suggesting we abandon peer review; I retain my faith in the quality control provided by expert assessment of manuscripts before publication, but this should simply be a technical check on the work, not an arbiter of its value."

Professor Curry says that rather than relying on pre-publication assessment of a paper's worth, "we need to find ways to attach to each piece of work the value that the scientific community places on it through use and citation".

He adds: "The rate of accrual of citations remains rather sluggish, even in today's wired world, so attempts are being made to capture the internet buzz that greets each new publication; there are interesting innovations in this regard from the likes of PLoS."

Although Professor Curry accepts that there will be an "old guard" of academics who will mutter darkly about such innovations, he believes they have nothing to worry about.

"Any working scientist will have experienced the thrill of hearing exciting new findings reported at a conference where results do not need to be wrapped between the covers of a particular journal for their significance to be appreciated," he adds.

Send links to topical, insightful and quirky online comment by and about academics to john.elmes@tsleducation.com.
