Scholarly Publishing Round-Up: Integrity and Identity Under Siege
In this issue of our round-up, we bring you interesting perspectives on publication ethics across several fronts: identity falsification in peer review, citation manipulation in mathematics, and the complications generative AI introduces into scholarly publishing.
Can having an institutional email address confirm your identity as a peer reviewer?
This piece showcases a preprint that reports an alarming discovery of peer review identity theft in artificial intelligence (AI) research. The authors unearthed 94 fraudulent reviewer profiles that used genuine university email domains. They also found researchers creating multiple fake author profiles by using defunct or random .edu addresses and by exploiting aliases or institutional loopholes to pass as legitimate scholars. This indicates that institutional email addresses, generally considered a strong gatekeeping measure, are being compromised. Publishers may need to move beyond domain checks to layered identity verification (e.g., linked ORCID iDs, institutional endorsements, or emailed verification tokens). Read more.
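As a rough illustration of what layered verification could look like, here is a minimal Python sketch that combines several independent signals into a confidence score. The profile fields and the individual check functions are hypothetical placeholders, not any publisher's actual workflow or a real API.

```python
# Sketch: layered reviewer-identity verification (illustrative only).
# The check functions are placeholders for real integrations
# (ORCID lookup, institutional endorsement, emailed verification token).
from dataclasses import dataclass

@dataclass
class ReviewerProfile:
    name: str
    email: str
    orcid: str | None = None

def email_domain_is_institutional(profile: ReviewerProfile) -> bool:
    # Weakest signal: a university domain alone proves little, as the
    # preprint's 94 fraudulent reviewer profiles demonstrate.
    return profile.email.endswith(".edu")

def orcid_record_matches(profile: ReviewerProfile) -> bool:
    # Placeholder: a real check would query the ORCID public record and
    # compare the name and current affiliation against the profile.
    return profile.orcid is not None  # stub

def institution_endorses(profile: ReviewerProfile) -> bool:
    # Placeholder: e.g., a signed confirmation from the department or a
    # lookup in the institution's own staff directory.
    return False  # stub

def email_token_confirmed(profile: ReviewerProfile) -> bool:
    # Placeholder: a one-time token sent to the address and echoed back.
    return False  # stub

def identity_confidence(profile: ReviewerProfile) -> int:
    """Count independent signals; require more than the email domain."""
    checks = [
        email_domain_is_institutional(profile),
        orcid_record_matches(profile),
        institution_endorses(profile),
        email_token_confirmed(profile),
    ]
    return sum(checks)

profile = ReviewerProfile("A. Reviewer", "a.reviewer@example.edu")
# A domain match alone yields a score of 1 -- not enough to invite a review.
print(identity_confidence(profile))
```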
Citation manipulation in mathematics
In mathematics, the reliance on bibliometric indicators has long been both a practical necessity and a vulnerability. Because publication and citation volumes are lower than in many life-science or engineering fields, each citation carries outsized weight, making the discipline especially susceptible to metrics gaming. The International Mathematical Union (IMU) and the International Council for Industrial and Applied Mathematics (ICIAM) respond with joint recommendations that call for rebalancing incentives: institutions and funders should reduce their dependence on ranking systems and favor expert peer assessment over straightforward numerical tallies. Universities are encouraged to emphasize the quality of work rather than volume (or h-index thresholds) and to educate faculty about predatory journals and citation cartels. Mathematicians are urged to scrutinize journals for integrity, avoid dubious outlets, and raise red flags when anomalies appear. Read more.
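To see why each citation matters more in a low-volume field, consider a small, hypothetical h-index calculation; the citation counts below are invented purely for illustration.

```python
# Illustrative only: a few coordinated citations move the h-index much
# more in a low-citation field (e.g., mathematics) than in a
# high-citation one. All numbers below are made up.

def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(counts, start=1) if c >= i)

math_profile = [6, 5, 4, 3, 3, 2, 1]          # typical low-citation record
biomed_profile = [80, 60, 45, 30, 25, 20, 15]  # typical high-citation record

# A small citation ring adds 3 citations to every paper.
boost = lambda profile: [c + 3 for c in profile]

print(h_index(math_profile), h_index(boost(math_profile)))      # 3 -> 5
print(h_index(biomed_profile), h_index(boost(biomed_profile)))  # 7 -> 7
```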
Why academic publishers and universities fear generative AI
This article explores why generative AI provokes anxiety among academic publishers, universities, and the wider research ecosystem. Publishers’ subscription revenues are undermined if AI models trained on journal content can deliver synthesized answers directly, reducing readers’ need to go behind paywalls. Likewise, publishers’ traditional role as curators of the scholarly record weakens as AI generates outputs detached from original sources. AI-generated content may also mimic publishers’ authority while fabricating citations or introducing errors, diminishing the credibility of peer-reviewed research. The challenge could change the culture of the scientific enterprise: if graduate students enter research programs knowing how to prompt AI tools but without training in empiricism, critical reasoning, or research ethics, their ability to evaluate evidence and develop independent judgment is at risk. For universities, generative AI threatens the systems by which knowledge is evaluated and academic careers are assessed. The urgent question is how to train researchers in an AI-saturated environment and how to rethink evaluation when authorship itself is uncertain. Read more.
Ghost kitchens in academia
This article compares AI “wrapper” tools in academia to ghost kitchens in the food industry, which operate multiple brands out of a single kitchen. In scholarly contexts, AI wrapper tools market themselves as specialized research solutions for tasks like manuscript review, data analysis, or writing, yet many are simply thin interfaces built on top of existing large language models. While academic branding can make them seem credible, they often add little genuine innovation beyond custom prompts and glossy packaging, frequently at a significant cost markup. They can also compromise transparency and reproducibility, making it harder for other researchers to audit methods or replicate findings. Sensitive or unpublished data fed into such tools may be exposed, because the systems are weakly regulated and their safeguards unclear. These wrappers can also encourage misplaced trust by presenting generic AI outputs as authoritative, domain-specific insights. These are real threats that could erode critical judgment, accountability, and ultimately the integrity of research itself. Read more.
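To make the “wrapper” claim concrete, here is a hypothetical sketch of how such a tool can amount to little more than a fixed system prompt placed in front of a generic chat-completion API. The class names and prompt are illustrative placeholders, not any specific vendor’s product or SDK.

```python
# Illustrative sketch of an academic "AI wrapper": a branded front end
# that forwards the user's text to a general-purpose LLM behind a fixed
# system prompt. LLMClient is a stand-in, not a real SDK.

class LLMClient:
    """Stand-in for any generic chat-completion API."""
    def complete(self, system: str, user: str) -> str:
        raise NotImplementedError("call the underlying model here")

class ManuscriptReviewAssistant:
    """The 'specialized research solution' -- branding plus a prompt."""

    SYSTEM_PROMPT = (
        "You are an expert peer reviewer. Assess novelty, methods, "
        "and clarity, and return structured comments."
    )

    def __init__(self, llm: LLMClient):
        self._llm = llm

    def review(self, manuscript_text: str) -> str:
        # Everything the tool "adds" happens on this one line: the
        # manuscript (possibly unpublished data) is sent verbatim to a
        # third-party model the user never sees or audits.
        return self._llm.complete(self.SYSTEM_PROMPT, manuscript_text)
```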
Join the conversation
The ReviewerOne team is always looking for fresh perspectives and resources. If a blog post, toolkit, or discussion inspired you this week, let us know in the comments. We’d love to hear from you.