Scholarly publishing round-up: A closer look at changing workflows, AI adoption, and what researchers need next
We are back, after a short break, with another round-up of interesting discussions in scholarly publishing. This week, we bring together four perspectives that offer a clearer view of where scholarly publishing may be heading. The themes include evolving workflows, shifting expectations, responsible AI use, and what researchers need from peer review.
How publishing workflows are being reshaped
Hong Zhou’s post in The Scholarly Kitchen offers a thoughtful high-level map of how publishing workflows are changing. Instead of treating workflow modernization as a purely technical upgrade, Zhou frames it as a deeper shift in how research moves from idea to publication. The themes are practical: better interoperability between systems, smoother submission experiences, increased automation in quality checks, and a more intentional approach to connecting the right stakeholders at the right time.
The takeaway is simple but important. Workflow evolution works best when it supports the people who make publishing happen – authors, editors, reviewers, and production teams – and when it reduces avoidable friction. Changes are less about reinventing the system and more about making it more humane, predictable, and connected.
Read the full article on The Scholarly Kitchen here.
Rethinking metrics through satire
Dariusz Jemielniak’s piece in Nature offers a sharp and witty reflection on academia’s ongoing fixation on research metrics. He traces how indicators such as the h-index have grown from simple evaluative tools into a culture of constant measurement, shaping how researchers publish, collaborate, and even plan their careers. Jemielniak highlights how new metrics continue to emerge with the promise of capturing scholarly impact more precisely, often complicating assessment rather than clarifying it.
His satirical proposal of a new “j-index,” calculated by dividing the total weight of an academic’s authored books by the years since their PhD, humorously exposes the limits of metric-driven thinking. Behind the humor is a reminder that meaningful scholarship cannot be reduced to numbers alone, and that overreliance on metrics risks overshadowing the substance of research itself.
Read Jemielniak’s full article in Nature here.
Early-career researchers need more support
In this article, Bright Huo, Gary Collins, Giovanni Cacciamani, and Gordon Guyatt zoom in on early-career researchers who increasingly feel unprepared for the realities of clinical research. Many report inadequate training, unclear expectations, and difficulty in navigating the administrative and ethical components of their roles.
This, they insist, is not simply a skills gap but a structural issue. They stress that supporting early-career researchers is essential for maintaining research quality, ensuring patient safety, and creating a more equitable research culture, and that stronger mentorship, standardized training, and more inclusive institutional practices should be key priorities.
If you are interested in understanding how publishing intersects with the larger research ecosystem, you might find this article particularly relevant. Read the article here.
Researchers are becoming more open to AI in peer review
A recent article in Research Information highlights findings from IOP Publishing’s latest global reviewer survey, which shows a widening divide in attitudes toward generative AI in peer review. Even though more researchers now see AI as potentially beneficial, concerns remain strong, especially when AI is used to assess manuscripts they have authored. Although IOPP currently prohibits AI in peer review, many reviewers are already using AI tools for tasks such as improving clarity or summarizing content, raising ongoing questions around confidentiality and data security.
The survey also reveals differences across gender and seniority, with women and more senior reviewers expressing greater caution and junior researchers being more optimistic about AI’s usefulness. Overall, the article stresses the need for clearer standards and secure, well-designed tools that support, rather than replace, human judgment. Read the full article here.
These four perspectives paint the picture of a publishing landscape that views AI with caution yet acknowledges its potential. They point to the need for more efficient workflows and for responsible AI use that preserves the judgment and care scholarship demands.
If you have come across an article, study, or commentary that shaped your thinking this month, we would love to hear about it.