The job of a peer reviewer is thankless. Collectively, academics spend around 70 million hours every year evaluating each other’s manuscripts on behalf of scholarly journals, and they usually receive no monetary compensation and little if any recognition for their effort. Some do it as a way to keep abreast of developments in their field; some simply see it as a duty to the discipline. Either way, academic publishing would likely crumble without them.

In recent years, some scientists have begun posting their reviews online, mainly to claim credit for their work. Sites like Publons allow researchers either to share entire referee reports or simply to list the journals for which they’ve carried out reviews. Just seven years old, Publons already boasts more than 1.7 million users.

The rise of Publons suggests that academics are increasingly placing value on the work of peer review and asking others, such as grant funders, to do the same. While that’s vital in the publish-or-perish culture of academia, there’s also immense value in the data underlying peer review. Sharing peer review data could help journals stamp out fraud, inefficiency, and systemic bias in academic publishing. In fact, there’s a case to be made that open peer review, in which the content of reviews is published, sometimes along with the names of the reviewers who carried them out, should become the default option in academic publishing.

The open peer review model is already gathering some support. Some academic journals, including the interdisciplinary publication F1000 Research and the medical journal BMC Medicine, have been posting referee reports online for years. And a recent survey of more than 3,000 researchers found that a majority of respondents thought open reviewing should be mainstream practice, but only if the reviewers remained anonymous. Even that, though, might be enough to provide some insight into the inner workings of what has traditionally been an opaque process.

For instance, a word-count analysis of more than 300,000 referee reports posted on Publons found that physicists tend to write shorter reviews than their counterparts in psychology, Earth and space sciences, and the life sciences. One can imagine that more sophisticated analyses could begin to tease out differences in the actual quality of reviews across scientific disciplines. Currently, however, very few journals allow peer reviews to be published or reviewers to reveal their names. According to a 2017 analysis by Publons, only around 2 percent of the approximately 3,700 journals with peer review policies in its database at the time permitted the content of reviews to become public.

Peer review data could also help root out bias. Last year, a study based on peer review data for nearly 24,000 submissions to the biomedical journal eLife found that women and non-Westerners were vastly underrepresented among peer reviewers. Only around one in every five reviewers was female, and fewer than 2 percent of reviewers were based in developing countries. (Women and researchers based in non-Western nations were also underrepresented among journal editors and in the coveted “last author” position in group-authored papers, the slot usually reserved for the most senior scientist on the research team.)

The eLife results weren’t terribly surprising; similar trends probably exist at other journals. Nevertheless, eLife did the right thing by unveiling the data and opening its review process to public scrutiny. Armed with that data, the journal can now go a step further and begin eliminating potential biases to make the process fairer. At many other journals, decisions about the peer review process will continue to be informed by little more than anecdote. One might argue that publishers and journals could, and perhaps already do, analyze their peer review processes internally. But for the sake of transparency, and to justify whatever decisions a journal makes, the wiser option is to open the data to the public.

Openly publishing peer review data could perhaps also help journals address another problem in academic publishing: fraudulent peer reviews. For instance, a minority of authors have been known to use phony email addresses to pose as outside experts and review their own manuscripts. More than 500 studies, most authored by scientists in China, have already been retracted because of such manipulation of peer review. Merely knowing how long a peer review took to complete (that is, when the reviewer was invited to evaluate the manuscript and when they submitted their completed report) could help give readers a sense of how rigorous the review was. Increased transparency could also dissuade peer reviewers from stealing papers and publishing them as their own, asking authors to cite their work as a quid pro quo for a positive review, and engaging in other unethical practices.

Opponents of open peer review commonly argue that confidentiality is vital to the integrity of the review process; referees may be less critical of manuscripts if their reports are published, especially if they reveal their identities by signing them. Some also worry that open reviewing may deter referees from agreeing to judge manuscripts in the first place, or that fear of scrutiny will make them take longer to do so.

But a recent study of more than 18,000 reviews from five journals published by Elsevier, the world’s biggest academic publisher, found that publishing referee reports alongside manuscripts does not compromise the reviewing process, though only around 8 percent of referees chose to sign their reports. Open peer review also didn’t take longer; it didn’t result in fewer referees agreeing to carry out the work, and it didn’t affect whether referees recommended that papers be accepted or rejected, the study found. In light of the results, Elsevier said in February that it is considering rolling out open peer review at more of its journals.

Even when the content of reviews and the identities of reviewers can’t be shared publicly, journals could perhaps share the data with outside researchers for study. Or they could release other figures that wouldn’t compromise reviewers’ anonymity but that might answer important questions: how long the reviewing process takes, how many researchers editors have to contact, on average, to find one who will carry out the work, and how peer reviewers are distributed geographically.

Of course, opening up data underlying the reviewing process will not fix peer review entirely, and there may be instances in which there are valid reasons to keep the content of peer reviews hidden and the identity of the referees confidential. But the norm should shift from opacity in all cases to opacity only when necessary.

The change will not be easy. For the study of Elsevier’s open peer review trial, it took the researchers two years of back-and-forth with legal professionals to gain access to the data underlying the peer review process. To simplify matters in the future, Flaminio Squazzoni, a sociologist at the University of Milan in Italy and a co-author of that study, and his colleagues have developed a standard protocol that they hope publishers will use to share peer review data. “Our experience shows that journals that share information on all aspects of the peer-review process can foster transparency and accountability in publishing, while protecting the interests of authors, reviewers, editors and researchers,” they write.

Let’s just hope that the academic publishers are paying attention.


Dalmeet Singh Chawla is a freelance science journalist based in London.

This article was originally published on Undark. Read the original article.
