AI in peer review: Mapping promise and risk
Ali Nabavi and co-authors’ review in the International Journal of Medical Informatics explores how artificial intelligence is being used in scholarly peer review and what it means for the future of research evaluation.
The review analyzes 189 sources from 2024 to 2025 and identifies two main roles for AI: assistive tools that support tasks such as manuscript screening and reviewer matching, and autonomous systems that attempt to generate or evaluate reviews independently. The findings show that AI can improve efficiency and consistency, but current systems lack the domain expertise and ethical judgment required for independent decision-making.
The study also highlights critical risks, including bias amplification, confidentiality concerns, and inconsistent governance across publishers. A key insight is that AI does not just introduce new challenges but amplifies existing issues such as reviewer fatigue and lack of transparency.
The authors emphasize that AI should be used as a transparent and auditable support tool under human oversight, supported by stronger governance and aligned policies. Read the full article here.
What defines a high-quality peer review report
An editorial in Prehospital and Disaster Medicine outlines what makes a strong and effective peer review report.
The article positions peer review as a system of quality improvement rather than a transactional step toward publication. It clarifies that the primary role of peer reviewers is to assess scientific validity, clarity, and accuracy, while leaving language and formatting to editorial processes.
The editorial offers a practical framework to guide reviewers through each section of a manuscript, from title and abstract to methods, results, and conclusions. The emphasis is on asking the right questions and providing feedback that is specific, actionable, and aligned with the journal’s expectations.
The authors also share ten guiding principles, including maintaining professionalism, staying within one’s expertise, avoiding conflicts of interest, and ensuring consistency between comments and final recommendations. The overarching message reinforces that peer review should be constructive, clear, and fair. Read the full article here.
Responsible use of research content in GenAI
Todd A. Carpenter’s post in The Scholarly Kitchen reports on a new framework from the International Association of Scientific, Technical & Medical Publishers (STM) addressing how generative AI should use scholarly research content.
The framework responds to growing concerns about the reliability of AI-generated outputs in research contexts. It highlights risks such as inaccurate citations, use of non-peer-reviewed material, and lack of transparency in how information is sourced and presented.
To address these challenges, the STM guidance emphasizes core scholarly principles such as proper attribution, clear citation, prioritization of the version of record, and inclusion of corrections and retractions. It also identifies the need for safeguards across different layers of AI systems, including training data, output generation, and user-facing presentation.
The report calls for collaboration between publishers, technology providers, and the research community to align AI tools with established standards of research integrity. Read the full article here.
The rise of hallucinated citations in research
A report published in Nature examines the growing presence of AI-generated false citations in scientific literature.
The analysis suggests that tens of thousands of publications from 2025 may contain invalid references, driven by increased use of AI tools in writing and literature review processes. These citations range from entirely fabricated sources to realistic-looking combinations of real research elements.
The article notes that this issue goes beyond traditional citation errors and introduces a new category of risk in which cited references may not exist at all. Publishers and editors are responding with stricter screening processes and tools designed to detect problematic citations, but accurately identifying and correcting these references remains a challenge.
The findings highlight the scale and complexity of maintaining research integrity in an environment where AI-generated content is becoming more common. Read the full article here.
If you’ve come across a piece lately that sparked reflection or raised important questions, feel free to share it with the ReviewerOne community.