The Use of AI in Peer Review
In the scholarly publishing ecosystem, peer review remains the foundational mechanism for quality control, critical appraisal, and trust in research output. However, the conventional peer review model is undergoing a transformation owing to factors such as reviewer fatigue, increasing submission volumes, and the rise of predatory publication practices. As AI technologies continue to advance, the scholarly publishing community is now focused on understanding how AI can be integrated into the peer review process without compromising its integrity.
Understanding AI-augmented peer review
The burgeoning scale of research and manuscript submissions has stretched the peer review system thin. To process the growing number of manuscripts, journal editors seek ever more peer reviewers, even as many reviewers report fatigue or decline invitations. AI offers the possibility of relieving some of this burden: it can make the peer review pipeline more efficient and enhance the consistency of routine aspects of a review.
What AI brings to the table
- Speed and scalability: Large language models (LLMs) and other AI-powered tools can process text quickly and help summarize large manuscripts, extract key features, match reviewers, or check adherence to reporting guidelines.
- Consistency in routine checks: AI can help flag missing information, detect formatting inconsistencies, check data in tables against the text, and surface possible instances of plagiarism or duplication.
- Enhancing reviewer capacity: By automating repetitive or mechanical tasks, reviewers may be able to devote more time and cognitive effort to higher-value contributions such as commenting on a study’s novelty, methodological depth, and significance.
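To make the routine-checks idea above concrete, here is a minimal sketch of an automated pre-screening pass in Python. The required section headings and the `flag_missing_sections` helper are illustrative assumptions for this post, not any particular journal's checklist or tool.

```python
import re

# Illustrative list of sections a reporting guideline might require;
# an actual journal's checklist would differ.
REQUIRED_SECTIONS = [
    "Methods",
    "Ethics Statement",
    "Data Availability",
    "Conflict of Interest",
]

def flag_missing_sections(manuscript_text: str) -> list:
    """Return the required section headings not found in the manuscript."""
    missing = []
    for heading in REQUIRED_SECTIONS:
        # Case-insensitive search for the heading anywhere in the text.
        if not re.search(re.escape(heading), manuscript_text, re.IGNORECASE):
            missing.append(heading)
    return missing

draft = "Introduction ... Methods ... Results ... Data availability ..."
print(flag_missing_sections(draft))
# Flags "Ethics Statement" and "Conflict of Interest" for human follow-up
```

A check like this only surfaces candidates for attention; a human editor or reviewer still decides whether each flag is a genuine omission.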
The limitations and risks of AI in peer review
The introduction of AI into peer review is far from straightforward or risk-free. Some of the complexities that it could introduce include:
- Lack of domain expertise, nuance, and context
While AI can be effective at language processing and pattern recognition, it lacks the deep subject-matter expertise and contextual understanding that human reviewers bring. For instance, assessing whether a manuscript truly advances the field, recognizing a novel insight, or interpreting a complex methodology remains a human strength. AI may also miss subtle methodological flaws or fail to grasp implications across disciplines.
- Bias, transparency, and accountability
AI models are trained on large datasets and can inherit the biases present in them. This includes “positivity bias” or uneven performance across specific aspects such as demographics. Moreover, the “black box” nature of many AI algorithms makes it more challenging to ensure transparency and accountability. It also raises questions such as who takes responsibility when an AI tool incorrectly flags content or misses a critical flaw.
- Confidentiality and data security
Peer review often involves unpublished, sensitive data. If such manuscripts are uploaded to external AI tools or cloud-based language models, there is a risk of inadvertent disclosure, data leakage, or unintended reuse.
- Cognitive off-loading and reviewer passivity
Over-reliance on AI may degrade human critical thinking. If reviewers default to AI-generated suggestions, they may become less engaged, thereby weakening review quality.
A balanced approach: AI as assistant, not replacement
The promise offered by AI as well as the risks it poses are too prominent to ignore. Treating AI as a partner rather than a substitute for human reviewers could offer constructive solutions. The ultimate responsibility for peer review decisions must remain with humans. In practical terms, this means designing workflows in which AI supports specific tasks, freeing up human reviewers’ time to focus on tasks that are aligned with their unique strengths. Here are a few best practices:
- Use AI to perform screening tasks such as grammar and formatting checks, compliance with reporting guidelines, and detection of obvious plagiarism or duplication.
- Use AI to generate structured peer review templates or highlight issues (e.g., missing ethics statements, conflicting data) that reviewers can then interpret and expand.
- Preserve the human reviewer’s central role in evaluating novelty, methodological robustness, ethical context, and field significance.
- Maintain clear transparency about AI use and disclose when it was used, how outputs were interpreted, and by whom. Training reviewers on how and when to trust AI output is also essential.
- Establish governance and controls. Secure review data, restrict the use of public AI models for confidential manuscripts, and set up monitoring mechanisms for bias and error rates.
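As one illustration of the transparency practice above, a platform could keep a structured record of each AI use in a review: which tool, for what task, and who interpreted the output. The `AIUseRecord` schema and its field names below are hypothetical sketches, not an established disclosure standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical disclosure log entry; field names are illustrative.
@dataclass
class AIUseRecord:
    tool: str          # which AI tool was used
    task: str          # what it was used for (e.g., a plagiarism screen)
    reviewed_by: str   # the human who interpreted the output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def disclosure_line(record: AIUseRecord) -> str:
    """Render a one-line disclosure suitable for a review report."""
    return (f"{record.task} assisted by {record.tool}; "
            f"output interpreted by {record.reviewed_by}")

entry = AIUseRecord(tool="ExampleLLM", task="Reporting-guideline check",
                    reviewed_by="Reviewer 2")
print(disclosure_line(entry))
```

Keeping such records makes it straightforward to disclose AI involvement to authors and editors, and to audit usage patterns over time.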
How the scholarly publishing community could use AI responsibly
AI tools offer genuine relief from tedious and time-consuming checks. Peer reviewers should view these tools as augmenting their judgement, not replacing it. Maintaining domain expertise, critical thinking, and a willingness to dig into subtleties remains more important than ever. For publishers and platforms, thoughtful implementation of AI is now essential. Deploying tools is not enough. Journals must monitor their impact on speed, quality, and bias; build policies around confidentiality and accountability; and clearly communicate to authors and reviewers how AI is used. If AI-augmented workflows deliver reviews faster and more consistently while alleviating reviewer fatigue, the throughput and fairness of peer review can improve. However, the community must also build safeguards to prevent and monitor the unintended effects of AI use, such as reduced diversity, institutional bias, or superficial reviews.
Where ReviewerOne stands
At ReviewerOne, we believe in the value of human judgement in peer review. AI is not a replacement for that value. It is a tool to support it. Our view is simple: peer reviewers bring domain expertise, experience, intuition, and accountability; AI brings speed, structure, and consistency. Together, they can form a more effective partnership.
Our objective is to make peer review more sustainable, reliable, and efficient while preserving the central role played by peer reviewers. In practice, that means designing workflows where reviewers have the benefit of AI-powered assistance (for example, upfront checks, structured feedback generation, flagging potential issues) and yet retain full ownership of their assessments and suggestions.
As the peer review landscape evolves, we’re committed to upholding transparency about how AI is used, to maintaining reviewer choice and control, and to safeguarding confidentiality and fairness at every step.
Looking ahead
The integration of AI into peer review is still nascent. Key questions to be addressed include:
- Identifying tasks that can be reliably aided by AI and those that require human judgement
- Monitoring and evaluating the impact of AI on review quality, bias, and speed
- Training and supporting peer reviewers to make the best use of AI without becoming overly dependent on it
- Understanding how access to AI tools will affect equity and inclusion in the review workforce
- Setting up governance frameworks and transparency standards to preserve trust
Today, it is clear that AI will play a role in peer review; what matters more is how we harness it to ensure rigorous, fair, and thoughtful review. With the right partnership between human expertise and machine assistance, peer review can evolve to meet the demands of today’s research ecosystem while preserving its core values.
ReviewerOne looks forward to working with reviewers, editors, and researchers to build the future of peer review, where human insight remains central and is supported by smart, trustworthy tools.