An overview of guidelines on the use of AI in peer review
We are no longer simply speculating about the possible use of AI in scholarly publishing. AI is very much here and has made its presence felt in both research and publication workflows. Conversations today are more about evaluating the limits of AI usage, deepening our own knowledge of its benefits and caveats, and defining ways to use AI ethically and transparently in academic and scientific research and scholarly publishing. What does this shift mean specifically for peer reviewers and editorial professionals? For editors and reviewers, it signals deliberate steps towards defining acceptable AI usage scenarios within broader editorial workflows.
The use of AI in editorial workflows
Several publishers and industry bodies are not just recognizing the promise of AI-powered solutions but also outlining policies and guidelines for ethical AI usage that help safeguard the core values of editorial and peer review processes: confidentiality, fairness, and integrity. Editorial desks are adopting AI tools to assist with tasks such as plagiarism detection, language refinement, and flagging potential ethical concerns. These applications reduce the burden on editors by handling routine manual checks and making editorial workflows more efficient. This in turn frees up humans to focus on intellectual evaluation and informed, experiential judgement.
The integration of AI into the peer review process has been complex and cautious. While publishers acknowledge AI’s potential to support and augment editorial workflows, they also acknowledge its limitations, especially when it is used indiscriminately for peer review. For instance, generative AI models may not be able to extract deep subject context or reflect the nuances of a researcher’s experience.
Transparency – a recurring theme
Across major publishing houses, transparency is a recurring theme in discussions of AI usage in scholarly publishing. Publishers and journal editors are signaling the need for openness about where and how AI is used, in addition to clear indications of how it should not be used. For example, Springer Nature’s guidance for reviewers and editors explicitly prohibits uploading manuscript content into openly accessible generative AI tools due to confidentiality concerns. The publisher also requires reviewers to declare any AI assistance used in their evaluations, which means peer reviewers need to share which AI tools they used and how. Similarly, MDPI emphasizes that reviewers are responsible for the content of their peer review reports and must not share unpublished manuscript text with third-party AI tools. However, limited use of AI to improve peer review may be acceptable, provided it is disclosed.
These guidelines reflect an industry-wide acknowledgement of AI that sets boundaries for its application and requires transparency about how it is used. The openness in discussing the prevalence of AI has also led to newer standards in publishing. New guidelines on AI usage, such as those outlined by the Committee on Publication Ethics (COPE), anchor emerging practices in established standards of transparency and accountability.
Guidelines on the use of AI in peer review
Industry guidance on AI use in peer review emphasizes several common themes:
- Peer review should be human-led: One core expectation is that peer review should remain a fundamentally human evaluation and not be driven or performed by AI. Peer reviewers’ judgement should be shaped by their scholarly expertise and nuanced understanding of a field. Their experience and perspectives are invaluable and cannot be delegated to a machine. Most publisher policies make this clear by restricting AI tools from generating substantive review content or recommendations.
- Maintaining confidentiality is key: Manuscripts under review represent the intellectual capital of researchers, and their confidentiality must be respected. Most guidelines on the use of AI in peer review prohibit reviewers from inputting any manuscript content into third-party AI tools that might store, learn from, or reuse that data.
- Transparent disclosures are encouraged: Guidelines on AI usage also focus on transparency and disclosure. If peer reviewers use an AI tool for minor tasks such as improving the quality of their writing, many publishers require that this be disclosed in the review report or to the handling editor. This mirrors the expectations placed on authors regarding AI assistance in manuscript preparation.
Industry bodies such as COPE provide broader ethical principles that reinforce these expectations. COPE’s position emphasizes responsible, transparent use of AI and situates AI policy within larger commitments to integrity, fairness, and accountability in publishing.
What do these developments indicate?
Taken together, these developments reflect a major transition in the scholarly publishing ecosystem. The community is no longer rejecting AI outright. Instead, the focus is on thoughtful integration that preserves core human effort, especially in peer review, while taking advantage of tools that improve efficiency and support rigorous evaluation. The emphasis on transparency and disclosure signals a willingness to engage with AI openly rather than ignoring its presence. Moreover, the linking of AI usage guidelines to established ethical frameworks suggests a move toward greater standardization across publishers.
For peer reviewers, this evolution highlights a professional expectation. Awareness of AI guidelines is now part of the role. Reviewers are expected to navigate technological possibilities with integrity and ethical sensitivity and to uphold the trust authors and editors place in them. This does not mean that reviewers are expected to become AI experts. Rather, understanding how industry guidelines frame the role of AI will help reviewers uphold the integrity of the scholarly record in a rapidly changing environment.
As the scholarly publishing ecosystem continues to evolve, staying attuned to emerging guidelines on the use of AI and thinking critically about its place in peer review will support ethical, fair, and rigorous evaluations.
References
- COPE – Focus on artificial intelligence
- Guidance for researchers on use of AI in publishing
- Nature Portfolio – Artificial Intelligence (AI)
- MDPI – Publisher Policies on AI
- Wiley – Using AI tools in your research
- BMJ Author Hub – AI use
- EASE – Recommendations on the use of AI in scholarly communication
- JAMA – Guidance for authors, peer reviewers, and editors on use of AI, language models, and chatbots
- Taylor & Francis – AI Policy
- Sage – Using AI in peer review and publishing
- ICMJE – Recommendations on AI: Advice for authors and peer reviewers
- Navigating the future of peer review in the age of generative AI
- Artificial intelligence in peer review: Ethical risks and practical limits
Like this post? Sign up early to join our peer reviewer community and engage with ongoing discussions shaping the future of scholarly publishing.