End-to-end AI research moves closer to reality
A recent paper in Nature presents the concept of “The AI Scientist,” a system designed to automate the full research lifecycle.
The system can generate research ideas, conduct literature reviews, write and execute code, analyze results, draft manuscripts, and perform peer review. In an experimental setting, one AI-generated paper met the acceptance threshold in a workshop at a machine learning conference.
The study also introduces an automated reviewer that produces evaluations comparable to those of human reviewers. Results indicate that output quality improves with stronger models and increased computational resources. The system currently operates in computational domains such as machine learning, and its output quality remains variable. The authors note that “As with any impactful new technology, there could be important risks, including taxing overwhelmed review systems and adding noise to the scientific literature. However, if developed responsibly, such autonomous systems could greatly accelerate scientific discovery.” Read the full article here
Study identifies large-scale networks driving scientific fraud
A study from Northwestern University reports that scientific fraud is being carried out through coordinated global networks and that the problem is far larger than previously recognized.
The study identifies the role of paper mills, brokers, and compromised journals in producing and distributing fraudulent research. These operations sell authorship slots, fabricated manuscripts, and citations. The study also documents cases in which compromised journals published large volumes of papers.
The findings, based on analysis of publication databases, retraction records, and editorial data, indicate that fraudulent publications are increasing in volume and that multiple methods are being used to bypass editorial and peer review processes. Read the full article here
Round-up highlights preprints, policy gaps, and infrastructure needs
Alice Chadwick El-Ali and Haseeb Irfanullah’s round-up in the International Network for Advancing Science and Policy brings together a set of recent discussions on preprints and open science.
The featured pieces collectively point to gaps in how preprints are reflected in funder policies, noting that only a small number of funders currently include or mandate preprint sharing, with the Bill & Melinda Gates Foundation highlighted as one example. They also reflect ongoing conversations around research assessment systems, where journal-based outputs continue to carry more weight than early or open sharing.
Across the round-up, contributors emphasize the role of infrastructure in supporting preprints, including the need for sustained investment in community-led platforms. The collection also highlights ongoing awareness and capacity-building efforts led by groups such as ASAPbio and PREreview. Read the full article here
Journal retracts nearly 150 papers following peer review concerns
A report by Retraction Watch covers the retraction of 147 papers by the American Society for Testing and Materials (ASTM) International from its Journal of Testing and Evaluation.
The retractions follow an investigation into irregularities in the peer review process, particularly within special issues managed by guest editors. The manuscripts in question were published between 2019 and 2024 and account for between 9 percent and 25 percent of the journal’s annual output during that period.
The publisher has issued multiple batches of retractions and indicated that further withdrawals may follow. The report also notes similar actions taken by other publishers, including Hindawi and Springer Nature, in response to concerns about compromised peer review in special issues. Read the full article here
If you’ve come across a piece lately that sparked reflection or raised important questions, feel free to share it with the ReviewerOne community.