Let's review the peer review process

We must hold up a mirror to scientific peer review if we are to stamp out fraud and uphold the discipline’s reputation, argues Philip Moriarty

April 18, 2013


The website Science-fraud.org was established in July 2012 by the pseudonymous Frances de Triusce (an anagram of “science fraudster”) with the aim of highlighting suspicious papers in the scientific literature. Barely six months later, having brought to light around 500 examples of what might best be called questionable data, and with a daily readership in the thousands, the website was shut down.

Its founder’s true identity had been uncovered - he was Paul Brookes, an associate professor at the University of Rochester Medical Center in the US - and an email had been sent to around 85 scientists whose data had been questioned on the site, encouraging them to sue him for defamation. The email, which described Science-fraud.org as a “hate site” and, rather ironically, as a menace to “scientific society”, was also copied to Brookes’ superiors at Rochester (including its president), the editors of journals in which he had published and prominent people in his field who might be expected to be involved in peer reviewing his grants and papers.

Brookes’ immense frustration with both this deplorable act and the current state of scientific peer review was clear in his final post on the Science Fraud site: “As I have learned the hard way, anyone who dares to stick their neck on the line and question the data of their peers is ostracised, stone walled and subjected to lawsuits.” He went on to argue that the way forward would be to assemble what he called a “coalition of the willing” - to be known as the Association for Anonymous Post-Publication Peer Review - to effectively police the literature, flagging up questionable data and papers. A Science Fraud 2.0, in other words, albeit with a less incendiary name.

I have an immense amount of respect for Brookes, and for all those who will join his coalition. His integrity and commitment to science are laudable and inspiring. But why should he have to stick his neck on the line again? Do we really have to rely on what amounts to academic vigilantism to preserve the integrity of the scientific record? And who decides what constitutes a breach of scientific integrity in any case?

This latter question is, perhaps surprisingly, rather vexed. Even in cases of outright fraud - the manipulation, modification or direct fabrication of data - establishing the authors’ guilt beyond doubt is rarely straightforward. But at least journal editors’ responsibility in these circumstances is clear: the paper must be retracted.

A study by medical communications consultant R. Grant Steen of papers retracted from the PubMed database between 2000 and 2010, which was published in the Journal of Medical Ethics, found that retractions had increased from fewer than 10 in 2000 to close to 180 in 2010. Even more worryingly, in nearly a third of cases journals did not even highlight (via, for example, a watermark on the article) that the paper had been retracted. However, only around a quarter of the retractions were because of fraud; the rest were attributable either to “undisclosed” reasons or what appears to be genuine scientific error.

But exactly what constitutes scientific error? Should a paper be retracted when there are clear flaws in its methodology? Or when its interpretations are unsupported by the data? Or even when subsequent experiments show its conclusions are incorrect - even if the experimental and theoretical work were carried out to a very high standard? The last suggestion was supported in a widely publicised and very controversial blog posting last year by PLOS Medicine chief editor Virginia Barbour and PLOS Pathogens editor-in-chief Kasturi Haldar. They wrote: “We work with authors…to publish corrections if we find parts of articles to be inaccurate. If a paper’s major conclusions are shown to be wrong we will retract the paper. By doing so, and by being open about our motives, we hope to clarify once and for all that there is no shame in correcting the literature. Despite the best of efforts, errors occur and their timely and effective remedy should be considered the mark of responsible authors, editors and publishers.”

There is much that is laudable in this statement. The journal PLOS One, in particular, has an admirable track record of providing a forum for post-publication critiquing of its articles, and it should also be noted that Barbour is chair of the Committee on Publication Ethics, whose guidelines should be - but currently aren’t - embedded in the codes of practice of all scientific publishers. Yet the suggestion that a paper should be retracted if its conclusions are wrong is a step much too far, and would be baulked at by the majority of scientists.

If a paper’s data are reliable, its methodologies sound and its conclusions plausible at the time and based on the data (rather than authors’ wishful thinking), it is broadly valid and should remain part of the scientific record.

But what if the data and methods aren’t reliable? What if, for example, the researchers are unaware of experimental artefacts that provide a more plausible explanation of their data than the more novel and exciting interpretation they have advanced? Some might argue that the primary responsibility for identifying this type of problem lies with peer reviewers, but, as Richard Feynman said in his 1974 commencement address at the California Institute of Technology, scientists need to go the extra mile in self-criticism before they submit their work for publication.

“If you’re doing an experiment, you should report everything that you think might make it invalid - not only what you think is right about it…The first principle is that you must not fool yourself - and you are the easiest person to fool…After you’ve not fooled yourself, it’s easy not to fool [others],” he said.

It should be said that Feynman’s advice is not entirely consistent with a culture where scientists’ primary goal can too often be to ensure that the paper “gets past the referees”. Nevertheless, it remains received wisdom that science proceeds via a process of self-correction, such that errors will, eventually, be exposed. As comforting a picture as this is, the evidence is stacking up against it.


Responding last year to criticism of their field in the wake of the serial fraud committed by Diederik Stapel, three social psychologists - Wolfgang Stroebe, Tom Postmes and Russell Spears - published a paper in Perspectives on Psychological Science, titled “Scientific Misconduct and the Myth of Self-Correction in Science”. This provided compelling evidence that, across the disciplines, peer review fails to root out fraud. This is worrisome enough. Yet even basic errors in the literature can now be extremely difficult to correct on any reasonable timescale.

One big problem is the attitude of journals. An editorial in Lab Times early last year highlighted how hard it can be to convince leading journals to accept requests for corrections. It was Paul Brookes, again, who delivered the most damning appraisal, concentrating his fire on Nature Publishing Group: “You can have all the heavy hitters on your side, but if you challenge something in [an NPG] journal, you will have a fight to even get in the door, followed by a pitched battle to get something published, with every possible curve-ball thrown at you during the review and revision process. NPG does not like it when you find mistakes that should have been found in peer review.”

A recent controversy in my own research area, nanoscience, highlighted similar issues with other journals. The minutiae of the case are discussed at length on the blog of Raphael Levy, a nanoscientist at the University of Liverpool, but, in short, both Levy and I believe that common artefacts in microscopy images have been misinterpreted by a prominent research group as “stripes” of hair-like molecules, called ligands, on the surface of gold nanoparticles. The first paper proposing the existence of the stripes - which are claimed to have a major influence on the properties of nanoparticles - was published in 2004, and was followed by a series of papers by the same group in the top tier of scientific journals. As discussed in a Times Higher Education article earlier this year, it took three years for Levy’s discussion of significant inconsistencies in these papers to make it into print.

Rick Trebino, a physicist at the Georgia Institute of Technology, didn’t even get that far. His account of the farcical responses of journal editors when he attempted to publish a comment on a paper that criticised his work went viral a couple of years ago. Witty and amusing though it is, “How to Publish a Scientific Comment in 123 Easy Steps” is also a damning account of the almost pathological resistance of editors to publishing comments on previously published work.

Nor are many individual scientists in the habit of questioning what is published - particularly by leading journals. Their brands have become so powerful that publication by them is often taken - wrongly - as an absolute quality standard.

So how can we revive science’s powers of self-correction? Blogging is one obvious forum for the timely critiquing of published work. Levy’s blog, for example, features a large number of well-argued posts (my own notwithstanding) and comments by contributors who include an NPG editor. But blogging is not enough. First, too many scientists unfortunately still see it as lacking respectability, rigour and professionalism. They will too easily dismiss valid criticisms of published work simply because the concerns were not raised through what are seen as the appropriate channels. In addition, a number of science bloggers themselves have questioned the extent to which peer-review-by-blog might descend into the type of slanging match or character assassination often encountered in internet forums. Rather more stringent checks and balances, coupled with a somewhat different ethos, are required.

We need to embed the benefits of blogging within the peer review and publishing process itself. This will involve, at the minimum, publishing alongside the article all (still anonymous) referees’ reviews and authors’ responses, as well as all raw data (including unprocessed images) associated with the paper.

Online comments and corrections on papers should also be made universally possible, with PLOS One’s guidelines on good practice being widely adopted. Some might argue that there will still be a dearth of comments on many articles. But so what? This is what we’d hope to see, implying that the majority of papers are sound. It is those papers based on erroneous data, misinterpretations or fraud that the proposals are designed to catch. Such papers should be clearly flagged even when retractions are not thought appropriate.

The most radical step in the proposals would be to make the online comments section citeable: that is, to embed post-publication review in the primary literature. This would take some time to implement, but it is crucial if online debate is to gain the status it needs to directly influence the progress of science. There would then also be academic kudos in posting important comments, allowing young academics to boost their standing in their research communities.

Given the major upheavals in scientific publishing that will, it is hoped, come about as a result of the current global moves towards open access, such innovations may not be as fanciful as they first appear. Indeed, open-access regulations may well mandate some of the changes above. So let’s seize the initiative and push for these innovations to be adopted by publishers and funders.

And if anyone has any better ideas, by all means post them below the online version of this article or write a letter for publication. After all, publication should represent the start, not the end, of scientific debate.


Reader's comments (1)

@Tony Orlando. Apologies for the ~ 10 month delay in responding! You are absolutely correct -- it was remiss of me not to mention PubPeer. The 'stripy' nanoparticle controversy I discuss in the article has recently moved to PubPeer. See the following link. http://blogs.discovermagazine.com/neuroskeptic/2014/01/04/reanalysis-science/#.UtVPifRFOXg Best wishes, Philip
