
Scholarly publishing round-up: AI, burnout, and cracks in peer review


ReviewerOne

13 Mar 2026 | Read Time: 2 mins


It might be time to audit how your journal manages peer review

Meghan McDevitt and David Allen’s summary of an August 2025 ISMTE North American Conference presentation makes a straightforward case: if your journal hasn’t examined its editorial management system in the last five to seven years, an audit is likely overdue.

Several of the workflow management systems journals rely on were built before the current pressures around open access, AI-generated content, and research integrity began to dominate conversations in scholarly publishing. Legacy platforms can be rigid, creating friction for staff, authors, and reviewers alike. McDevitt and Allen recommend budgeting at least 12 to 18 months for any platform migration, involving stakeholders early, and continuing to gather feedback well after launch. They also note that audits often uncover features journals are already paying for but never use. Read the full article here

What being an editor feels like

Himel Mondal’s essay in European Science Editing is the kind of piece that deserves to be read by anyone who has ever felt frustrated waiting on a journal decision.

Across roughly 20 manuscripts, Mondal routinely sent 30 or more reviewer invitations per article to secure two reviewers. For one paper, he sent over 100 requests and received zero responses. He describes checking his inbox at dinner and before bed, carrying the weight of pending decisions everywhere, until he eventually resigned from both editorial boards he served on. The account isn’t meant to discourage others from taking on editorial roles; rather, it asks the community for empathy with the struggles editors face daily. Read the full article here

What happens when AI writes the paper and reviews it too?

César Hidalgo’s essay on his website documents an interesting experiment with Claude: using Claude Code to generate a research paper, then running it through an AI peer review system, and watching what happened when the two were looped together.

The AI peer reviewer flagged legitimate problems, which Hidalgo fed back to Claude for revision. Unlike a human reviewer, however, the AI never accepted an acknowledged limitation and moved on; it kept generating new critiques, with no natural stopping point. Hidalgo calls the missing ingredient “strategic forgiveness”: the unspoken agreement in human peer review about what is fixable and what simply has to be noted and accepted. It is one of the sharper observations about how peer review actually functions. Hidalgo’s broader point is that AI won’t make research easier so much as intensify competition, so good judgment and good questions will continue to matter. Read the full article here
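To make that dynamic concrete, here is a minimal, purely illustrative Python sketch of a generate-review-revise loop. The function names (ai_review, ai_revise) are hypothetical stand-ins, not Hidalgo’s actual tooling; the point is that a reviewer with no notion of strategic forgiveness leaves the loop nothing to terminate on except an arbitrary round cap.

```python
# Illustrative sketch only: stub functions, not Hidalgo's actual setup.

def ai_review(draft):
    """Stub reviewer with no 'strategic forgiveness': every pass yields a fresh critique."""
    return [f"critique #{len(draft)}"]

def ai_revise(draft, critiques):
    """Stub revision step: each critique gets addressed, which invites the next one."""
    return draft + [f"addressed {c}" for c in critiques]

def revise_until_accepted(draft, max_rounds=10):
    for _ in range(max_rounds):
        critiques = ai_review(draft)
        if not critiques:       # a human reviewer eventually reaches this branch;
            return draft        # the stub reviewer above never does
        draft = ai_revise(draft, critiques)
    return draft                # only the arbitrary round cap ends the process

paper = revise_until_accepted(["initial draft"])
print(len(paper))  # 11: the draft grew on every single round
```

In human peer review, the “no remaining critiques” branch is reached by social convention rather than by exhausting all possible objections; encoding that convention is the hard part.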

A librarian found what peer review missed

This Retraction Watch report puts a hospital librarian at the center of the story. Jessica Waite at Royal Hallamshire Hospital in England was asked to track down two references from a paper in Digestive Diseases and Sciences (DDS), a Springer Nature journal. The references in question didn’t exist. Checking the full list, she found 12 of the 14 references were fabricated, most likely hallucinated by an AI tool used during drafting. The author attributed the error to accidentally submitting the wrong draft, and the publisher confirmed that it was investigating the issue.

Waite expressed surprise that the hallucinated references were not caught during peer review or pre-publication editorial checks, adding that while she expects inaccuracies on the open internet, she does not expect them in journals that clinicians use to inform patient care. Read the full article here
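For readers curious what this kind of verification can look like, the sketch below is one plausible first pass, not the workflow Waite used: it asks Crossref’s public REST API whether a cited DOI resolves to a real record. References cited without DOIs would need fuzzier bibliographic searching, which is where hallucinated citations tend to hide.

```python
# A hedged sketch: checking whether cited DOIs resolve via Crossref's public
# REST API. One plausible first pass, not the workflow from the story.
import urllib.error
import urllib.parse
import urllib.request

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, False on a 404."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

# Watson & Crick (1953) has a real DOI; the second string is deliberately fake.
for doi in ["10.1038/171737a0", "10.1234/does.not.exist"]:
    print(doi, "->", "found" if doi_exists(doi) else "no record")
```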

If you’ve come across a piece lately that sparked reflection or raised important questions, feel free to share it with the ReviewerOne community.

About the Author


ReviewerOne

ReviewerOne is a reviewer-centric initiative focused on strengthening peer review by supporting the people who make it work. ReviewerOne provides current and aspiring reviewers with AI-powered tools and resources to help them review more confidently, consistently, and fairly, without removing the human judgment that peer review depends on.

The ReviewerOne ecosystem brings together a reviewer-friendly peer review platform with structured guidance and AI-assisted checks; a community forum to foster networking and collaboration; a Reviewer Academy with practical learning resources on peer review, AI, ethics, and integrity; and meaningful recognition through verified credentials and professional profiles. ReviewerOne aims to reduce friction in peer review while elevating reviewer expertise, effort, and contribution.


