30 Jan

Scholarly Publishing Round Up: Slowing Down, Counting Costs, Managing Risk, and Assessing Credibility

In this week’s round-up, we bring you four perspectives that, taken together, suggest the scholarly publishing ecosystem may be reaching a point of reckoning.

Making the case for slow science

In his article in Nature’s career column, Adrian Barnett discloses his decision to publish less, not more, making a strong personal case for publishing better. Reflecting on his own career, Barnett describes how rising publication expectations have normalized output levels that once seemed exceptional, often reducing quality to a numbers game focused on volume and journal prestige alone. He has deliberately chosen to cap his output at seven papers a year, roughly half his recent average. This, he explains, is not about doing less research, but about spending more time on each paper through deeper reading, stronger analysis, and more careful interpretation. Barnett situates this decision within a system under strain, where publication volumes have surged, peer review capacity has not, and quality is increasingly compromised. While acknowledging that early-career researchers face stronger constraints, he calls for a broader cultural shift toward rigor, reflection, and care.

Read the full article here

 

Who should pay to publish science?

Writing in Undark, Peter Andrey Smith offers a clear overview of growing dissatisfaction with the article processing charge (APC) model that dominates open-access publishing. As funders mandate openness, researchers are required to pay substantial fees to publish publicly funded work. Smith examines the backlash to proposals from the US National Institutes of Health (NIH) to cap or restrict publication fees. While these limits are intended to curb excessive charges, critics warn that they could further narrow publishing options and disproportionately affect early-career researchers and those with limited funding. Smith frames this debate as part of a larger reckoning with a publication model that rewards prestige and volume while shifting costs onto researchers and funders. The question at stake is not only affordability, but whether current models align with the goals of equitable and trustworthy research.

Read the full article here

 

Oversight, integrity, and the hidden cost of failure

On a Medium blog, Adam Day examines how research integrity failures translate into strategic and financial risk for journals. Using publication data and simple modeling, Day shows how surface-level submission growth can conceal deeper vulnerabilities. He differentiates between isolated misconduct cases and patterns that signal systemic risks. Individual incidents require time and resources to investigate and correct, and the long-term impact becomes visible when damaged trust in journals leads authors to submit elsewhere. Over time, this loss of confidence can reduce submission volumes and quality, weaken a journal’s position, and affect revenue stability. Day’s central argument is that effective oversight is less about reacting to failures after the fact and more about identifying risk patterns early enough to intervene. He also frames integrity safeguards as essential to the long-term sustainability of scholarly publishing, not just its ethical standing.

Read the full article here

 

Why credibility matters and why it is so hard to assess

A study published in PeerJ examines how researchers assess credibility when serving on hiring, promotion, and grant review committees, as well as the areas where they feel least supported as reviewers. After surveying nearly 500 biology researchers, the authors find that credibility is central to research assessment. The biggest gaps appear in integrity-related areas such as detecting fabrication, falsification, plagiarism, and ethical misconduct. Despite valuing intrinsic qualities like sound methods and well-supported conclusions, many assessors still rely on proxies such as journal reputation or impact factor due to time pressures and a lack of better signals. The authors propose focusing on rigor, integrity, and transparency when assessing credibility, and argue that clearer definitions and easy-to-use indicators would better align assessment practices with assessors’ values.

Read the full article here

 

Have you come across an article, post, or study lately that had an impact on you? Share it with the community below.
