Separation of review powers into feedback and importance assessment could radically improve peer review

At the 1945 Yalta conference on the postwar order, Stalin promised free and open elections in the countries under Soviet control. Western Europe had free and secret elections. My cynical history teacher told us that Franklin D. Roosevelt knew exactly what he was doing when he did not demand that the elections in the East also be secret.

Openness can be used to quell dissent. Transparency also makes central publish-or-perish micromanagement based on metrics easier. Freedom of Information laws are abused to punish scientists for publishing results that are politically or financially inconvenient.

Had this been a blog for advocates of closed review, I would have started with something nasty about a lack of transparency. The minutes of an ASAPbio meeting on open peer review show that there are advantages and disadvantages. Openness and transparency can be helpful, but they are not automatically so, and they should not be goals in themselves. Faster scientific progress, better articles, better reviews, and less power abuse are worthwhile goals.

Applying these thoughts to peer review in science, it helps to distinguish two roles of peer review: 1) it provides detailed feedback on your work, and 2) it advises the editor on whether the article is good enough for the journal. This feedback normally makes the article better, but it is somewhat uncomfortable to discuss it with reviewers who hold a lot of power because of their second role. This leads to power abuse and worse scientific articles.

Your Manuscript On Peer Review by redpen/blackpen.

It also helps to distinguish between openness about the identity of the reviewer and openness about the content of the review. This separation is unfortunately not clean: if you know your field well, it is often possible to guess who the officially anonymous reviewer is based on the content alone. Scientists have their pet peeves and typical formulations or language problems. With multiple reviews available in the open, this becomes even easier.

I have started a grassroots journal on the homogenisation of climate data and only recently realized that this will also produce a valuable separation of feedback, publishing and assessment of scientific studies. That by itself can lead to a much healthier and more productive quality control system.

A grassroots journal assesses all published articles and manuscripts of a scientific community. One could also see it as a continually up-to-date review article. At least two reviewers write a review of the strengths and weaknesses of each article, everyone can comment on parts of the article, and the editors write a synthesis of the reviews. A grassroots journal thus does not publish the articles themselves, but collects and assesses articles published anywhere.

Every article also gets a quantitative assessment. This is more accurate than the current estimate of an article's importance by the journal it was able to get into. Helpfully, it does not reward people for submitting their articles to a journal that is too big for them in the hope of getting lucky, which creates unnecessary double work. For example, the publisher Frontiers reviews 2.4 million manuscripts and bounces about 1 million valid papers.

Grassroots journals also do not reward studies that fool some reviewers, or that sounded good on paper at the time of publication but were quickly found wanting as soon as scientists worked on the topic. With the up-to-date rolling review of grassroots journals, articles with lasting value are rewarded.

These open assessments will be FAIR (findable, accessible, interoperable and reusable) objects. Whether the reviewers want their names to be known is up to them. The editors of every assessment are named and have to approve all contributions; they vouch for the seriousness and fairness of the contributions. As the assessed works are mature papers, the reviews are typically shorter than full reviews that also include feedback, which makes it harder to guess the identity of the reviewer and thus easier for the reviewer to be honest.

The more the assessments made by grassroots journals (or similar initiatives such as Peeriodicals and APPRAISE) are accepted as valid quality control systems, the less it matters where articles are published. In the end, scientists may simply publish their manuscripts on a pre-print server.

Scientists would likely still want some feedback from their colleagues on the manuscript. Several initiatives are currently springing up to review manuscripts before they are submitted to journals, for example Peer Community In (PCI). Currently, PCI runs several rounds of review until the reviewers "endorse" a manuscript, so that in principle a journal could publish such a manuscript without further peer review.

With a separate, independent assessment of the quality of the published article, there would no longer be any need for the "feedback reviewers" to give their endorsement. The authors would have much more freedom to decide whether the changes feedback reviewers suggest are actually improvements. The authors, and not the reviewers, would decide when the manuscript is finished and can be published. If they make the wrong decisions, that would naturally be reflected in a bad assessment. If they do not cite the peer reviewer six times, that would be fine.

In such a system the discussion on openness changes. Giving feedback is mostly doing the authors a favour, and naming the reviewers would thus be easier. Rather than cumbersome, month-long rounds of review, it would even be possible to simply write an email or pick up the phone to clarify contentious points.

The hardest part of doing science is that Popper's falsifiability criterion forces us to write up our thoughts very clearly and thus make ourselves vulnerable. Given that we work at the edges of what is known, we are bound to make mistakes, and avoiding this would be counterproductive. In this light I would imagine that authors will often prefer the feedback round to be closed, rather than have all their mistakes published for eternity. There would also be no need to see these reviews: the fixed mistakes are not interesting, and the advantage of open review in reducing power abuse would no longer apply.

In the assessment round, on the other hand, the review itself is valuable for other scientists, while anonymity makes it easier to give an honest assessment; I expect this part to be performed mostly openly, but anonymously. The named editors of a grassroots journal determine what is published and can thus ensure that no one abuses their anonymity.

By splitting feedback and quality assessment, and with the right interplay of openness and privacy, we can create a publishing system that is far superior to the current one.

Victor Venema is a climatologist who works on the homogenisation of climate observations. His main blog is Variable Variability, but he blogs about grassroots journals at Grassroots Publishing. The easiest introduction to the concept of a grassroots journal may be the example journal on homogenisation.

Cartoon Your Manuscript On Peer Review by kind permission of redpen/blackpen. Photo of scientific journals by Tobias von der Haar, used under an Attribution 2.0 Generic license.
