
Scholarly publishing round-up: AI, peer review, and research integrity in focus


ReviewerOne

24 Apr 2026 | Read Time: 3 mins


AI in peer review: Mapping promise and risk

Ali Nabavi and co-authors’ review in the International Journal of Medical Informatics explores how artificial intelligence is being used in scholarly peer review and what it means for the future of research evaluation.

The review analyzes 189 sources from 2024 to 2025 and identifies two main roles for AI – assistive tools that help with tasks like manuscript screening and reviewer support, and autonomous systems that attempt to generate or evaluate reviews independently. The findings show that AI can improve efficiency and consistency, but current systems lack the domain expertise and ethical judgment required for independent decision-making.

The study also highlights critical risks, including bias amplification, confidentiality concerns, and inconsistent governance across publishers. A key insight is that AI does not just introduce new challenges but amplifies existing issues such as reviewer fatigue and lack of transparency.

The authors emphasize that AI should be used as a transparent and auditable support tool under human oversight, supported by stronger governance and aligned policies. Read the full article here

What defines a high-quality peer review report?

An editorial in Prehospital and Disaster Medicine outlines what makes a strong and effective peer review report.

The article positions peer review as a system of quality improvement rather than a transactional step toward publication. It clarifies that the primary role of peer reviewers is to assess scientific validity, clarity, and accuracy, while leaving language and formatting to editorial processes.

A practical framework is provided to guide reviewers through each section of a manuscript, from title and abstract to methods, results, and conclusions. The emphasis is on asking the right questions and providing feedback that is specific, actionable, and aligned with the journal’s expectations.

The authors also share ten guiding principles, including maintaining professionalism, staying within one’s expertise, avoiding conflicts of interest, and ensuring consistency between comments and final recommendations. The overarching message reinforces that peer review should be constructive, clear, and fair. Read the full article here

Responsible use of research content in GenAI

Todd A Carpenter’s post in The Scholarly Kitchen reports on a new framework from the International Association of Scientific, Technical & Medical Publishers (STM) addressing how generative AI should use scholarly research content.

The framework responds to growing concerns about the reliability of AI-generated outputs in research contexts. It highlights risks such as inaccurate citations, use of non-peer-reviewed material, and lack of transparency in how information is sourced and presented.

To address these challenges, the STM guidance emphasizes core scholarly principles such as proper attribution, clear citation, prioritization of the version of record, and inclusion of corrections and retractions. It also identifies the need for safeguards across different layers of AI systems, including training data, output generation, and user-facing presentation.

The report calls for collaboration between publishers, technology providers, and the research community to align AI tools with established standards of research integrity. Read the full article here

The rise of hallucinated citations in research

A report published in Nature examines the growing presence of AI-generated false citations in scientific literature.

The analysis suggests that tens of thousands of publications from 2025 may contain invalid references, driven by increased use of AI tools in writing and literature review processes. These citations range from entirely fabricated sources to realistic-looking combinations of real research elements.

The article notes that this issue goes beyond traditional citation errors and introduces a new category of risk where references may not exist at all. Publishers and editors are responding with stricter screening processes and tools designed to detect problematic citations, but accurately identifying and correcting these issues continues to be a challenge.
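As a rough illustration of the kind of screening described above, a minimal first-pass check is to flag reference entries that contain no DOI-like string at all, marking them for manual verification. This sketch is an assumption for illustration only: real detection pipelines also resolve DOIs against registries such as Crossref, which this offline example does not do, and the function names here are hypothetical.

```python
import re

# Loose DOI pattern: "10." followed by a 4-9 digit registrant code,
# a slash, and a suffix. Syntactic only; does not prove the DOI resolves.
DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/\S+\b')

def extract_dois(reference: str) -> list[str]:
    """Return DOI-like strings found in a reference entry."""
    return DOI_PATTERN.findall(reference)

def flag_missing_doi(references: list[str]) -> list[str]:
    """Return references containing no DOI-like string --
    candidates for manual verification, not proof of fabrication."""
    return [ref for ref in references if not extract_dois(ref)]
```

A flagged entry is not necessarily fabricated (many legitimate references simply lack DOIs), which is one reason the article notes that accurately identifying problematic citations remains a challenge.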

The findings highlight the scale and complexity of maintaining research integrity in an environment where AI-generated content is becoming more common. Read the full article here

If you’ve come across a piece lately that sparked reflection or raised important questions, feel free to share it with the ReviewerOne community.

About the Author


ReviewerOne

ReviewerOne is a reviewer-centric initiative focused on strengthening peer review by supporting the people who make it work. ReviewerOne provides current and aspiring reviewers with AI-powered tools and resources to help them review more confidently, consistently, and fairly, without removing the human judgment that peer review depends on.

The ReviewerOne ecosystem brings together a reviewer-friendly peer review platform with structured guidance and AI-assisted checks; a community forum to foster networking and collaboration; a Reviewer Academy with practical learning resources on peer review, AI, ethics, and integrity; and meaningful recognition through verified credentials and professional profiles. ReviewerOne aims to reduce friction in peer review while elevating reviewer expertise, effort, and contribution.


