
Peer review is f***ed up

by Judith Curry

But the truth is that peer review as practiced in 21st-century biomedical research poisons science. It is conservative, cumbersome, capricious and intrusive. It slows down the communication of new ideas and discoveries, while failing to accomplish most of what it purports to do. And, worst of all, the mythical veneer of peer review has created the perception that a handful of journals stand as gatekeepers of success in science, ceding undue power to them, and thereby stifling innovation in scientific communication.


So begins a post “Peer review is f***ed up”  on the blog it is NOT junk, by evolutionary biologist Michael Eisen.  Some excerpts:

There are too many things that are wrong with this process, but I want to focus on two here:

1) The process takes a really long time. In my experience, the first round of reviews rarely takes less than a month, and often takes a lot longer, with papers sitting on reviewers’ desks being the primary rate-limiting step. But even more time-consuming is what happens after the initial round of review, when papers have to be rewritten, often with new data collected and analyses done. For typical papers from my lab it takes 6 to 9 months from initial submission to publication.

The scientific enterprise is all about building on the results of others – but this can’t be done if the results of others are languishing in the hands of reviewers, or suffering through multiple rounds of peer review. There can be little doubt that this delay slows down scientific discovery and the introduction to the public of new ways to diagnose and treat disease [this is something Pat Brown and I have talked about trying to quantify, but I don’t have anything yet].

2) The system is not very good at what it purports to do. The values that people primarily ascribe to peer review are maintaining the integrity of the scientific literature by preventing the publication of flawed science; filtering the mass of papers to identify those one should read; and providing a system for evaluating the contribution of individual scientists for hiring, funding and promotion. But it doesn’t actually do any of these things effectively.

The kind of flawed science that people are most worried about is deceptive or fraudulent papers, especially those dealing with clinical topics. And while I am sure that some egregious papers are prevented from being published by peer review, the reality is that with 10,000 or so journals out there, most papers that are not obviously flawed will ultimately get published if the authors are sufficiently persistent. The peer reviewed literature is filled with all manner of crappy papers – especially in more clinical fields. And even the supposedly more rigorous standards of the elite journals fail to prevent flawed papers from being published (witness the recent Arsenic paper published by Science). So, while it might be a nice idea to imagine peer review as some kind of defender of scientific integrity – it isn’t.

And even if you believed that peer review could do this – several aspects of the current system make it more difficult. First, the focus on the importance of a paper in the publishing decision often deemphasizes technical issues. And, more importantly, the current system relies on three reviewers judging the technical merits of a paper under a fairly strict time constraint – conditions that are not ideally suited to recognize anything but the most obvious flaws. In my experience the most important technical flaws are uncovered after papers are published. And yet, because we have a system that places so much emphasis on where a paper is published, we have no effective way to annotate previously published papers that turn out to be wrong: once a Nature paper, always a Nature paper.

The Scientist has a post “I hate your paper,” which identifies additional problems with peer review.  Some excerpts:

Problem #1:  Reviewers are biased by personal motives

Solution: Eliminate anonymous peer review ( Biology Direct, BMJ, BMC); run open peer review alongside traditional review (Atmospheric Chemistry and Physics); judge a paper based only on scientific soundness, not impact or scope (PLoS ONE)

One of the most hotly debated aspects of peer review is the anonymity of the reviewers. On the one hand, concealing the identity of the reviewers gives them the freedom to voice dissenting opinions about the work they are reviewing, but anonymity also “gives the reviewer latitude to say all sorts of nasty things,” says Kaplan. It also allows for the infiltration of inevitable personal biases—against the scientific ideas presented or even the authors themselves—into a judgment that should be based entirely on scientific merit.

An alternative way to limit the influence of personal biases in peer review is to limit the power of the reviewers to reject a manuscript. “There are certain questions that are best asked before publication, and [then there are] questions that are best asked after publication,” says Binfield. At PLoS ONE, for example, the review process is void of any “subjective questions about impact or scope,” he says. “We’re literally using the peer review process to determine if the work is scientifically sound.” So, as long as the paper is judged to be “rigorous and properly reported,” Binfield says, the journal will accept it, regardless of its potential impact on the field, giving the journal a striking acceptance rate of about 70 percent.

Problem #2:  Peer review is too slow, affecting public health, grants, and credit for ideas

Solution: Shorten publication time to a few days (PLoS Currents Influenza); bypass subsequent reviews (Journal of Biology); publish first drafts (European Geosciences Union journals)

A handful of other journals have taken a different tactic altogether to tackle the problem of publication time lags—keep the traditional peer review process but first publish a preliminary version of a submitted paper. Atmospheric Chemistry and Physics, launched by the European Geosciences Union (EGU) in 2001, along with the 10 or so sister journals that have subsequently been launched by the EGU, employs a “two-stage” process of publication and peer review, concurrent with an interactive public discussion. After a quick prescreening by one of the journal’s expert editors, a submitted manuscript is immediately published on the journal’s website as a “discussion paper,” and is available for anyone to see and comment on for 8 weeks. At the same time, the manuscript is passed on to referees who are familiar with the subject, and their comments (for which they can claim authorship or remain anonymous) are also posted alongside the discussion paper, public comments, and authors’ replies. The manuscript can then be accepted for publication, at which point a revised paper is published in the main, open-access journal.
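
To make the EGU two-stage process concrete, here is a rough sketch in Python of the lifecycle a manuscript might follow; the class, field, and method names are illustrative assumptions, not any journal's actual editorial software.

from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum, auto
from typing import List, Optional

class Stage(Enum):
    SUBMITTED = auto()
    DISCUSSION = auto()   # posted immediately as a public "discussion paper"
    PUBLISHED = auto()    # revised paper in the main open-access journal

@dataclass
class Manuscript:
    title: str
    stage: Stage = Stage.SUBMITTED
    discussion_opened: Optional[date] = None
    comments: List[str] = field(default_factory=list)  # referee and public comments

    def pass_prescreening(self, today: date) -> None:
        # A quick pre-screen by an expert editor puts the paper online for discussion.
        self.stage = Stage.DISCUSSION
        self.discussion_opened = today

    def discussion_closes(self) -> date:
        # The interactive public discussion stays open for eight weeks.
        return self.discussion_opened + timedelta(weeks=8)

    def add_comment(self, text: str) -> None:
        # Referee reports (signed or anonymous), public comments, author replies.
        self.comments.append(text)

    def accept_revision(self) -> None:
        # After revision, the final paper appears in the main open-access journal.
        self.stage = Stage.PUBLISHED

# Example: a paper pre-screened on 1 November 2011 stays open for comment for eight weeks.
paper = Manuscript("Aerosol effects on cloud formation")
paper.pass_prescreening(date(2011, 11, 1))
print(paper.discussion_closes())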

Open post-publication peer review

From an article “Open post-publication peer review” posted on the blog The future of science:

Open: Any scientist can instantly publish a peer review on any published paper. The scientist will submit the review to a public repository. The repository will link each paper to all its reviews, such that readers are automatically presented with the evaluative meta-information. Peer review is open in both directions: (1) Any scientist can freely submit a review on any paper. (2) Anyone can freely access any review.

Post-publication: Reviews are submitted after publication, because the paper needs to be publicly accessible in order for any scientist to be able to review it. For example, a highly controversial paper appearing in Science may motivate a number of supportive and critical post-publication reviews. The overall evaluation from these public reviews will affect the attention given to the paper by potential readers. The actual text of the reviews may help readers understand and judge the details of the paper.

Peer review: In an open peer-review system, writing a review is the equivalent of getting up to comment on a talk presented at a conference. Because these reviews do not decide about publication, they are less affected by politics. Because they are communications to the community, their power depends on how compelling their arguments are to the community. This is in contrast to secret peer review, where uncompelling arguments can prevent publication because editors largely rely on reviewers’ judgments.

Signed or anonymous: The open peer reviews can be signed or anonymous. In analyzing the review information to rank papers, signed reviews can be given greater weight if there is evidence that they are more reliable.
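
The repository described above amounts to a simple data structure: each paper linked to all of its reviews, with signed reviews optionally given more weight when ranking papers. Below is a minimal sketch of that idea in Python; the names and the numeric rating scale are assumptions for illustration, not part of any existing system.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Review:
    paper_id: str                    # e.g. a DOI
    rating: float                    # assumed 0-10 overall evaluation
    text: str
    signed_by: Optional[str] = None  # None means the review is anonymous

@dataclass
class ReviewRepository:
    # Public repository linking each paper to all of its reviews.
    reviews: Dict[str, List[Review]] = field(default_factory=dict)

    def submit(self, review: Review) -> None:
        # Any scientist can freely submit a review on any published paper.
        self.reviews.setdefault(review.paper_id, []).append(review)

    def reviews_for(self, paper_id: str) -> List[Review]:
        # Anyone can freely access any review.
        return self.reviews.get(paper_id, [])

    def score(self, paper_id: str, signed_weight: float = 2.0) -> float:
        # Weighted average rating; signed reviews can count more than anonymous
        # ones if there is evidence that they are more reliable.
        revs = self.reviews_for(paper_id)
        if not revs:
            return 0.0
        weights = [signed_weight if r.signed_by else 1.0 for r in revs]
        return sum(w * r.rating for w, r in zip(weights, revs)) / sum(weights)

A reader-facing front end could then sort papers by score() and display the review texts alongside each paper, which is the "evaluative meta-information" the excerpt refers to.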

From another post on the same blog:

Free instant publishing: Once open post-publication peer review provides the critical evaluation function, papers themselves will no longer strictly need journals in order to become part of the scientific literature. They can be published like the reviews: as digitally signed documents that are instantly publicly available. Post-publication review will provide evaluative information for any sufficiently important publication. With post-publication review in place, there is no strong argument for pre-publication review. Publication on the internet can, thus, be instant and reviews will follow as part of the integrated post-publication process of reception and evaluation.
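
A "digitally signed document that is instantly publicly available" could be as simple as a manuscript file published together with the author's cryptographic signature. Here is a minimal sketch using the third-party Python cryptography package; the key handling is deliberately simplified and is an assumption of mine, not a proposal from the quoted post.

# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

manuscript = b"Title, authors, and full text of the paper ..."

author_key = Ed25519PrivateKey.generate()   # in practice, a long-lived author key
signature = author_key.sign(manuscript)     # published alongside the document

# Any reader or review repository can verify integrity and authorship:
public_key = author_key.public_key()
try:
    public_key.verify(signature, manuscript)
    print("signature valid: the document is exactly what the author published")
except InvalidSignature:
    print("signature invalid: the document was altered after signing")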

So, does post-publication peer review work? The open-access publisher PLoS has been experimenting with it, and so far there hasn’t been much of an impact.

JC comments: During the past few weeks, we have seen two interesting examples of peer review: the pre-publication extended peer review of the BEST papers, and the post-publication extended peer review of the Ludecke et al. papers. In both instances, the extended peer review conducted in the blogosphere was far more substantial than the review the papers were likely to receive in the normal peer review process, yet it was not part of the formal peer review process. Scientists who do not check the blogs might be completely unaware that this extended peer review has occurred.

I am a big fan of preprint servers such as arXiv, and also of online discussion journals such as Atmospheric Chemistry and Physics. ACPD actually posts the reviews along with the paper, and allows people to comment during the peer review process. Extending this to include blog discussions of the paper would be great.

The prestige journals such as Nature, Science, and PNAS do not allow any pre-publication of the paper and serve a gate-keeping role in determining “significance.” Personally I am not a fan of this approach, but it seems to have worked in terms of generating high impact factors for these journals.

Your thoughts and ideas?
