06 Feb
Scholarly Publishing Round-Up: Authorship, Accountability, Access, and AI

In this round-up, we bring together a set of discussions that reflect where scholarly publishing finds itself today, balancing innovation with accountability and ambition with trust.

AI and ghost authorship

In this guest post on The Scholarly Kitchen, Ch. Mahmood Anwar examines generative AI not simply as a writing tool but as a growing authorship crisis. Anwar argues that the central ethical question is no longer whether AI should be listed as an author (a practice most publishers already prohibit) but the widespread, undisclosed use of large language models to generate substantial portions of scholarly content. When AI is used indiscriminately to draft literature reviews, discussions, or conclusions without disclosure, the practice closely resembles ghost authorship. The result is a loss of authenticity, transparency, and accountability, since AI cannot take responsibility for errors, fabricated citations, or flawed reasoning. Anwar proposes a practical framework that distinguishes minor language assistance, substantive text generation, and unethical uses such as data fabrication. The post also calls for stronger disclosure norms, clearer policies across publishers, and a renewed emphasis on human accountability. The central message is clear: if trust is the foundation of scholarly communication, transparency around AI use can no longer be optional. Read this guest post on The Scholarly Kitchen.

Correcting the record on robot-driven discovery

Journalist Dalmeet Singh Chawla reports in Chemical & Engineering News on the correction of a highly publicized Nature paper that claimed a robot laboratory had discovered 43 entirely new materials in just 17 days. The original study attracted global attention, but it soon emerged that many of the materials already existed in established databases. In January 2026, the publisher issued a correction addressing the study’s claims of novelty. While some critics no longer consider a retraction necessary, they argue that deeper scientific concerns remain unresolved, particularly around the model’s ability to predict realistic atomic structures. The episode highlights a recurring tension in AI-driven research: ambitious claims generate excitement and citations, but they also demand rigorous validation. Read the full article here.

Universities push back on publishing costs

In this article, Jack Grove reports that three research-intensive universities have chosen not to renew their Elsevier journal subscriptions, even after a nationally negotiated agreement was reached. The universities of Kent, Essex, and Sussex cited concerns about rising costs and disagreements with the publisher’s approach to open access. Although Jisc, a not-for-profit infrastructure and technology solutions agency, described the agreement as strong and competitive, these institutions concluded that the deal did not align with their financial realities or long-term publishing values. Librarians suggest that more such institutional opt-outs may follow, especially as universities face mounting budget pressures. Read the full article here.

When AI-generated references undermine trust

A recent case highlighted by Retraction Watch shows how even limited use of AI can compromise the scholarly record when safeguards fall short. The journal Intensive Care Medicine published a letter discussing potential AI applications in intensive care that was later found to be deeply flawed: it contained multiple non-existent references, including one that falsely cited the journal itself. Subsequent checks revealed that most of the cited sources could not be verified, leading the editor to retract the letter after losing confidence in its reliability and identifying failures in the peer review process. The authors stated that AI was used only to format references, an activity permitted under the journal’s guidelines, which nonetheless place full responsibility for content accuracy on authors. This case adds to a growing body of examples in which AI-generated citation errors have entered the scholarly record, reinforcing that even when AI is used for routine tasks, careful human verification remains essential. Read the full article here.

 

If you’ve come across a piece lately that sparked reflection or raised important questions, feel free to share it with the ReviewerOne community.
