Scholarly publishing round-up: Funding reform, idea ownership, and AI in peer review
Early-career researchers speak out on funding reform
An open letter published on February 9th voices the concerns of more than 750 early-career researchers across the UK’s particle physics, astronomy, and nuclear physics communities about current funding reforms. While the UK’s national funding agency, UK Research and Innovation (UKRI), has highlighted rising overall investment in R&D, the letter argues that instability in grant cycles and delays in funding decisions place disproportionate pressure on fixed-term researchers and early-career scientists. The letter also argues that curiosity-driven research should be treated as essential, not discretionary, spending. Read the full article here.
When an idea is plagiarized before publication
Bianca Nogrady’s article in Nature examines what happens when a researcher believes their idea has been plagiarized before it has even been published. The article centers on an early-career neuroscientist in Japan who presented an innovative concept at a conference poster session and discussed it with another researcher. That researcher later published a preprint presenting a strikingly similar framework, with overlapping structure and wording but no acknowledgment of the original work. The original scientist now fears being accused of plagiarism if their own paper appears later.

The article distinguishes between text plagiarism and idea plagiarism. Text plagiarism is easier to prove because it involves copying protected written work. Idea plagiarism is much harder to establish, especially in collaborations: ideas are not protected in the same way, and proving ownership often depends on documentation, witnesses, or clear records. Research ethicists also point out that intent is difficult to determine.

One practical takeaway is prevention. Researchers, particularly those early in their careers, are encouraged to document and publicly timestamp their ideas early, including through preprints. The goal is not to discourage openness, but to combine openness with protection. Read the full article here.
AI-hallucinated citations enter the scholarly record
Sharon Goldman’s report in Fortune explores findings that dozens of accepted papers at the Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025) included AI-hallucinated citations. According to an analysis by the Canadian startup GPTZero, at least 53 accepted papers contained references that were either fully fabricated or partially altered: some citations listed non-existent authors or venues, while others were based on real papers but modified in ways that made them inaccurate. These submissions cleared peer review and were included in the official conference proceedings. In AI research, citations are critical for tracing methods and supporting reproducibility, so hallucinated references weaken that foundation. NeurIPS acknowledged the evolving use of large language models in academic writing and emphasized that incorrect references do not necessarily invalidate research findings. However, the sheer volume of submissions makes detailed citation checks increasingly difficult for volunteer reviewers. Read the full article here.
A reviewer’s practical approach for identifying hallucinated citations
In a recent LinkedIn post, Aldar C-F. Chan describes how peer reviewers are adapting to the challenges introduced by AI. With conferences now asking reviewers to check for hallucinated citations, one reviewer experimented with a streamlined workflow: they copied the full reference list into Microsoft Copilot and asked it to verify whether the cited papers exist, then manually confirmed any suspicious entries against official venues, the dblp computer science bibliography, or Google searches. The approach does not guarantee zero false negatives, but manually verifying flagged citations before raising them helps avoid wrongly accusing authors. In practice, the reviewer identified a hallucinated citation using this method. The post ends with a pointed question: if AI tools can be used to generate citations, why can’t they be used to verify citations as well? Read the LinkedIn post here.
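For readers curious what such a screening step could look like in practice, here is a minimal Python sketch that checks cited titles against the public dblp search API and flags poor matches for manual follow-up. It is only an illustration of the general idea, not the reviewer’s actual tooling (the post describes using Microsoft Copilot, not a script); the reference list, similarity threshold, and helper names below are assumptions made for the example.

```python
"""Sketch: flag cited titles with no close match on dblp for manual review."""
import json
import urllib.parse
import urllib.request
from difflib import SequenceMatcher

DBLP_API = "https://dblp.org/search/publ/api"

def dblp_titles(query: str, max_hits: int = 10) -> list[str]:
    """Return titles of the top dblp search results for a query."""
    params = urllib.parse.urlencode({"q": query, "h": max_hits, "format": "json"})
    with urllib.request.urlopen(f"{DBLP_API}?{params}", timeout=10) as resp:
        data = json.load(resp)
    hits = data.get("result", {}).get("hits", {}).get("hit", [])
    return [h["info"]["title"] for h in hits if "info" in h]

def flag_suspicious(cited_titles: list[str], threshold: float = 0.9) -> list[str]:
    """Flag titles whose best dblp match falls below the similarity threshold.

    A flagged title is only a candidate for manual checking: dblp mainly
    covers computer science venues, so absence is not proof of fabrication.
    """
    flagged = []
    for title in cited_titles:
        best = max(
            (SequenceMatcher(None, title.lower(), m.lower().rstrip(".")).ratio()
             for m in dblp_titles(title)),
            default=0.0,
        )
        if best < threshold:
            flagged.append(title)
    return flagged

if __name__ == "__main__":
    # Hypothetical reference list extracted from a submission under review.
    refs = [
        "Attention Is All You Need",                # real paper; should match
        "Quantum Gradient Descent for Large Cats",  # invented; should be flagged
    ]
    for title in flag_suspicious(refs):
        print(f"Check manually: {title}")
```

As in the reviewer’s workflow, the automated pass only narrows the list; every flagged entry still needs a human check against the official venue before any concern is raised with the authors.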
If you’ve come across a piece lately that sparked reflection or raised important questions, feel free to share it with the ReviewerOne community.
