
Scholarly publishing round-up: Scale, preparedness, AI-generated research, and gift authorship

ReviewerOne

27 Feb 2026 | Read Time: 2 mins

The impact of structural inequities on small publishers

A report commissioned by The Knowledge Exchange examines how small European publishers are navigating the shift to open access. The team argues that open access itself is not the primary threat to small publishers. Instead, structural inequities favor larger organizations that are better positioned to manage compliance requirements, negotiate transformative agreements, and absorb financial risks. Smaller publishers often operate with limited staff and infrastructure, making it harder for them to compete within complex and evolving funding and policy frameworks. The report also warns of risks to bibliodiversity, particularly in the humanities, social sciences, and non-English-language publishing in Europe. Many small publishers are closely connected to regional research communities. Without proportionate regulations and equitable funding flows, parts of that ecosystem may weaken. Read the full report here.

Academic freedom as societal preparedness

This essay by Rice in Universitetsavisa argues that academic freedom should be understood as part of society’s preparedness. Rice notes that research funding debates are often shaped by short-term political logic or headline appeal, yet history shows it is difficult to predict which research will later prove essential. Examples include long-running mRNA research prior to COVID-19 vaccines and Norway’s offshore engineering expertise developed before the discovery of oil at Ekofisk. The central claim of the essay is that research systems must maintain breadth: if funding focuses primarily on immediate and measurable outcomes, research becomes narrower and less capable of responding to unforeseen challenges. Rice calls for governance models that combine transparency with a focus on long-term goals. Read the full article here.

AI and the automation of normal science

Seva Gunitsky’s post on his Substack reflects on changes in academic publishing during the rise of large language models. As the former associate editor of Security Studies, Gunitsky reports a significant increase in manuscript submissions, including AI-generated papers. While many low-quality submissions can be filtered through desk rejections, he focuses on a more consequential development. AI systems are increasingly capable of producing technically competent empirical research. Gunitsky suggests that if producing methodologically sound research becomes easier, the evaluation bottleneck shifts. Editors and reviewers will need to place greater emphasis on the significance of research questions, not only on technical correctness. He also anticipates a widening gap between highly selective journals prioritizing originality and other outlets publishing incremental, AI-assisted work. At the same time, he acknowledges potential benefits of AI for replication and error detection in empirical research. Read the full article here.

The global problem of gift authorship

Ch. Mahmood Anwar’s article on Reviewer Credits examines the persistence of gift authorship in academic publishing. Drawing on the International Committee of Medical Journal Editors (ICMJE) criteria, the article defines legitimate authorship as requiring substantial intellectual contribution, participation in drafting or revising the manuscript, approval of the final version, and accountability for the work. Gift authorship occurs when individuals are listed without meeting these standards. Anwar identifies common drivers, including hierarchical power dynamics, pressure to publish, reciprocity practices, and weak enforcement of guidelines. The article provides examples from China, South Korea, the United States, Europe, and India to show that the issue is global rather than region-specific. The consequences include distorted credit allocation, reduced accountability, inflated performance metrics, and weakened trust within research teams. The proposed responses include clearer institutional policies, stronger journal oversight, improved education on authorship criteria, and broader evaluation measures beyond publication counts. Read the full article here.

If you’ve come across a piece lately that sparked reflection or raised important questions, feel free to share it with the ReviewerOne community.

About the Author

ReviewerOne

Nova Techset

ReviewerOne is a reviewer-centric initiative focused on strengthening peer review by supporting the people who make it work. ReviewerOne provides current and aspiring reviewers with AI-powered tools and resources to help them review more confidently, consistently, and fairly, without removing the human judgment that peer review depends on.

The ReviewerOne ecosystem brings together a reviewer-friendly peer review platform with structured guidance and AI-assisted checks; a community forum to foster networking and collaboration; a Reviewer Academy with practical learning resources on peer review, AI, ethics, and integrity; and meaningful recognition through verified credentials and professional profiles. ReviewerOne aims to reduce friction in peer review while elevating reviewer expertise, effort, and contribution.
