ReviewerOne · 17 Sep

Will the brain and the bot be friends? The fine balance between AI and humans in scholarly publishing

Conversations around the use of artificial intelligence (AI) in scholarly publishing have picked up and how! In a system where editorial offices are often on the verge of buckling under the weight of submissions, delays, and reviewer fatigue, the rise of practices and players threatening the integrity of and trust in research further complicates the situation.

As such, AI introduces the sweet possibility of relief and speed for every stakeholder involved in conducting and disseminating research. The benefits of AI have been too tantalizing to ignore – improved speeds, the ability to process content and systems at scale, and the undeniable promise of help whenever, wherever, and however you need it. However, there is also a dark side. Unfiltered use of AI has been shown to threaten research integrity and even generate plausible-sounding gibberish. As a result, AI is often viewed as an opaque mechanism that could undermine humanity’s trust in science.

The truth, as is often the case, lies somewhere in between. The new reality in scholarly publishing need not be a battle between humans and bots but an evolving partnership of coexistence between the two. The challenge before us is figuring out how to balance what AI can do with what only humans should do. 

This balance is especially pressing in the context of peer review, a system that is considered the cornerstone of scholarly publishing. Peer review has always relied on the willingness of experts to volunteer their time and intellectual labor, often without tangible rewards. And when AI stepped into this delicate ecosystem, it shook things up by being consistently available, efficient, tireless, and seemingly objective. But is all of this enough? At what point does the human element become irreplaceable?  

What AI brings to the table 

Let’s start with the obvious. AI has transformed publishing workflows in ways that were almost unimaginable a decade ago. Today, manuscript submissions rarely arrive without first passing through multiple layers of machine assistance. Plagiarism detection tools, AI-driven grammar checks, and automated formatting systems already serve as invisible gatekeepers. More advanced applications can recommend suitable reviewers, analyze citations, and flag ethical violations.

For editors and publishers, this support is invaluable. The sheer volume of submissions in many fields is overwhelming. Journals with limited staff cannot realistically handle massive submission inflows without automation. AI not only helps editorial offices keep pace but also introduces a level of consistency that humans, prone to fatigue and subjectivity, could struggle to match. A plagiarism scan performed at 9 AM will yield the same results as one at midnight. A reviewer-matching algorithm will never forget to check for conflicts of interest. 
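To make the idea of a rule-bound, deterministic editorial check concrete, here is a minimal sketch of a conflict-of-interest screen of the kind a reviewer-matching tool might apply. This is an illustration only, not a description of any real system; all names, the `coauthor_index` structure, and the co-authorship rule itself are hypothetical assumptions.

```python
# Hypothetical sketch: screening reviewer candidates for conflicts of
# interest using a simple, deterministic rule (shared past co-authors).
# The names and the index below are invented for illustration.

def has_conflict(reviewer, manuscript_authors, coauthor_index):
    """Return True if the reviewer has co-authored with any manuscript author."""
    past_collaborators = coauthor_index.get(reviewer, set())
    return bool(past_collaborators & set(manuscript_authors))

# Toy index mapping each candidate reviewer to their past co-authors.
coauthor_index = {
    "Dr. Rao": {"Dr. Chen", "Dr. Okafor"},
    "Dr. Lee": {"Dr. Patel"},
}

authors = ["Dr. Chen", "Dr. Silva"]
candidates = ["Dr. Rao", "Dr. Lee"]

# The same inputs always yield the same answer, at 9 AM or at midnight.
eligible = [r for r in candidates if not has_conflict(r, authors, coauthor_index)]
print(eligible)  # Dr. Rao is excluded: co-authored with Dr. Chen
```

The point of the sketch is the consistency the article describes: such a rule never tires or forgets, but it also judges nothing beyond the rule it encodes.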

What AI has introduced, then, is scale, speed, and standardization. It excels at repetitive, rule-bound tasks. It can sift through mountains of data in seconds, freeing humans from the drudgery that drains time and energy. AI doesn’t need sleep, recognition, or motivation. It simply does the job it is programmed to do, but that is also its biggest weakness. AI does the job without the ability to distinguish between outputs that are ethically sound and those that are inherently problematic. Therein lies the paradox. Peer review is not merely a job to be done. It is essentially a human act of assessment, trust, and engagement, and AI could fall short here. 

What we (humans) bring to the table 

The human contribution to scholarly publishing is not only about efficiency; it is about meaning. Peer review is more than a technical check; it is a deeply meaningful exchange between “peers” – researchers who are committed to publishing robust research and rely on a culture of mutual respect and trust. 

  • Finding value: When peer reviewers evaluate a manuscript, they are not simply verifying data points or checking boxes. They are trying to find value, novelty, significance, and relevance. Such an evaluation requires experience in the field and familiarity with past and current debates. This effort cannot be neatly quantified or automated. It is deeply rooted in human interpretation and intellectual instinct. 
  • Connection: Humans also bring empathy and mentorship to the peer review process. A senior researcher might balance critique with encouragement, identifying the potential in an imperfect paper. An editor might mediate conflicting reviews, framing feedback in a way that pushes the author forward rather than shutting them down. These subtle but critical choices shape careers and build communities. They are essential to the exchange that facilitates knowledge. 
  • Publication ethics and research integrity: There’s also the matter of ethics and fairness. Humans can weigh highly complex context in ways AI cannot. Consider a manuscript from a researcher in a low-resource setting, where access to cutting-edge labs or datasets is limited. A human editor might still recognize the value and relevance of the work. Would an AI-based system, trained on limited data, reach the same conclusion?  

The human element is integral to scholarly publishing, especially in the context of peer review. Human peer reviewers bring perspective, compassion, and connection. Without these, the process risks becoming mechanical, transactional, and devoid of the values that underpin the scientific enterprise. The temptation to use AI in scholarly publishing is undeniable, but so are the risks of adopting it unchecked. This calls for a balanced, well-informed approach that integrates the benefits of AI without sacrificing the essential inputs only human involvement can deliver. 

The AI–human interaction in scholarly publishing 

What makes this moment in publishing compelling is not the prospect of either AI or human peer review expertise on its own, but rather the interaction between the two. Together, they are reshaping the landscape of peer review. 

  • The upside: On the positive side, AI seems to have taken on the role of a co-pilot by reducing editorial workloads and supporting authors. Peer reviewers, too, can devote more time to assessing research quality. If incorporated thoughtfully, this shift could help combat reviewer fatigue and burnout, a problem that has long threatened the sustainability of peer review. 
  • The downside: This human–AI interaction is not without risks. Increasing reliance on algorithms introduces new challenges. AI tools learn from existing data that often reflects systemic inequities, which can entrench biases favoring certain groups, regions, or publication practices. Many AI models are also black boxes, making it difficult to explain their outputs or hold them accountable. Additionally, there is a temptation to trust the machine without question. But what happens when the algorithm misses something crucial, or worse, makes a mistake that no one notices until after publication? What if the machine enables the fabrication of information that appears authentic and escapes scrutiny? 

These issues highlight a deeper tension. Peer review has been criticized for being slow, inconsistent, and subjective. AI promises to fix these flaws, but in doing so, it risks stripping away the very qualities of judgment, context, and debate that give peer review its legitimacy. 

Partnership, not paradox 

Framing this as an either/or choice is not the right approach. AI versus human. Machine efficiency versus human intuition. That framing is too simplistic and fails to capture the complexities of this relationship in the context of scholarly publishing and peer review. 

The real challenge lies in embracing a partnership. AI cannot (and should not) replace human peer reviewers and editors; it should amplify their strengths. While AI can be trained to carefully handle the heavy lifting of repetitive tasks, we humans can guide the intellectual and ethical heart of the process. 

When we stop seeing AI as a threat and start treating it as an enabler, the conversation shifts. The fine balance is not about dividing tasks or conflating the two but about recognizing complementarity. The more we lean into this mindset, the more publishing can benefit from AI without losing its human essence. 

The fine balance between AI and humans in scholarly publishing is not a static formula. It will continue to evolve. New tools will raise new questions. New ethical dilemmas will emerge to challenge our assumptions. What matters is that we remain anchored to the purpose of publishing itself: to advance knowledge, foster trust, and support the stakeholders who make science possible. AI should accelerate processes, but it should never replace the human ability to interpret, empathize, support, and decide. The credibility of scholarly publishing will rest not on how fast we can process manuscripts, but on how we balance machine efficiency with human wisdom. 

That balance is not easy to achieve. But if the brain and the bot manage to establish a friendship based on clear boundaries, scholarly publishing and peer review will not just become faster, but more trustworthy and humane. 

