
What to do when you suspect data fabrication in a manuscript


ReviewerOne

2 Apr 2026 | Read Time: 4 mins


Suspecting that data in a manuscript may be fabricated or falsified is one of the most serious situations a peer reviewer can face. The consequences for the scientific record, for other researchers who build on the findings, and, in clinical or applied fields, for people who may depend on them, are significant. At the same time, the bar for a conclusion of fabrication is high. It is critical not to mistake statistical anomalies, honest error, or methodological approaches you are less familiar with for deliberate misconduct. This post sets out what fabrication looks like to a reviewer, how to think about ambiguous signals, and how to respond appropriately.

Fabrication, falsification, and error: Understanding the distinction

Data fabrication means inventing data or results that were never actually collected or observed. Falsification means manipulating existing data, materials, or experimental processes to misrepresent findings. Both are serious forms of research misconduct under virtually all international guidelines, including the United States Office of Research Integrity (ORI) definition and the Singapore Statement on Research Integrity.

These are distinct from honest errors, incorrect calculations, mislabeled figures, or transcription mistakes, which can be corrected through a corrigendum and do not carry implications of misconduct. The distinction matters because the appropriate response differs significantly. Honest errors are addressed through correction; data fabrication and falsification may warrant further investigation and, if necessary, retraction.

As a peer reviewer, you are rarely in a position to distinguish definitively between data fabrication and error from a manuscript alone. What you can do is identify patterns or features that make the data implausible and raise a qualified concern with the editor. Your role is not forensic analysis but informed flagging.

Warning signs a reviewer might notice

Data that may have been fabricated or falsified sometimes exhibit recognizable patterns. No single warning sign is diagnostic on its own, but a cluster of them, or a single very striking anomaly, warrants attention.

Statistical anomalies: Results that are implausibly clean, standard deviations that are too small, probability values (p-values) that cluster suspiciously close to the significance threshold across multiple tests, effect sizes that are unrealistically consistent across conditions, or data distributions that look smoother than biology or social systems tend to produce.

Internal inconsistencies: Figures or tables that do not agree with each other, statistical results that could not have been produced by the analysis described, or reported sample sizes that conflict with the methods section. These may reflect honest errors or fabrication, but they need to be queried either way.

Implausible recruitment or collection figures: For studies reporting on human participants, the claimed sample size and recruitment period should be plausible for the stated setting. A study reporting 500 recruited participants in a setting where that volume is clearly impossible in the stated timeframe is worth questioning.

Image irregularities: Duplicated, spliced, or digitally altered images are a specific form of falsification. If figures purport to show different experimental conditions but appear to use the same underlying image, note this as a data integrity concern.

Discrepancies between registered protocol and reported results: If the paper references a pre-registration or registered clinical trial, check whether the primary outcomes, analysis plan, and participant numbers align. Significant unexplained deviations can indicate outcome reporting bias or, in more serious cases, manipulation.
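One concrete way to probe the internal inconsistencies mentioned above: when a paper reports group means, standard deviations, and sample sizes alongside a t statistic, the statistic can be recomputed from those summary values and compared against what the authors report. A minimal sketch, using hypothetical numbers (not from any real study):

```python
import math

def pooled_t(m1: float, s1: float, n1: int,
             m2: float, s2: float, n2: int) -> float:
    """Two-sample t statistic (pooled variance) from reported summary stats."""
    # Pooled variance across the two groups
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    # Standard error of the difference in means
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (m1 - m2) / se

# Hypothetical reported values: means 5.2 vs 4.1, SDs 1.3 and 1.5, n = 30 each
t = pooled_t(5.2, 1.3, 30, 4.1, 1.5, 30)
print(round(t, 2))  # prints 3.04
```

If the manuscript reports a t value far from what the summary statistics imply, that is a specific, checkable discrepancy worth raising with the editor; it may be a typo, a different test than described, or something more serious.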

Statistical warning signs

Statistically anomalous results are among the most common signals that reviewers may notice, but they require careful interpretation. A result that looks too clean is not proof of fabrication; small, well-controlled studies can produce tight data, and some fields routinely show stronger effects than others. The question is whether the pattern is plausible given the methods described.

Tools exist to support this kind of assessment. Granularity-Related Inconsistency of Means (GRIM) testing checks whether reported means are mathematically possible given the reported sample sizes and measurement scales. Sample Parameter Reconstruction via Iterative TEchniques (SPRITE) analysis reconstructs candidate datasets consistent with the reported summary statistics, revealing whether those statistics are plausible for the measures described. These tools are publicly available, though technically demanding. If you have expertise in biostatistics or quantitative methods, and something in the data looks wrong, they can help you articulate the concern more precisely for the editor.
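As a minimal illustration of the idea behind GRIM: if responses are integers (e.g., Likert items), the true sum of the data must be an integer, so only certain rounded means are achievable for a given sample size. A sketch of that arithmetic, with a hypothetical function name and example values:

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if reported_mean could be the mean of n integer values,
    assuming the mean was rounded to `decimals` places."""
    total = reported_mean * n
    # The true sum must be a whole number; test the integers nearest the product
    for candidate_sum in (int(total), int(total) + 1):
        if round(candidate_sum / n, decimals) == round(reported_mean, decimals):
            return True
    return False

# With n = 19 integer responses, 66/19 ≈ 3.47 and 67/19 ≈ 3.53,
# so a reported mean of 3.48 is mathematically impossible:
print(grim_consistent(3.48, 19))  # prints False
print(grim_consistent(3.47, 19))  # prints True
```

This simplified check covers only the basic integer-scale case; the published GRIM procedure handles multi-item scales and other subtleties, so treat an inconsistency flagged this way as a prompt for closer inspection, not a verdict.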

If you don’t have that depth of statistical expertise, a simpler framing is appropriate: note that specific aspects of the reported data appear unusually clean or internally inconsistent, and suggest the editor seek statistical review.

How to respond: The right process

The single most important rule: do not include accusations of fabrication or falsification in your written review. Your review is shared with the authors in most journals, and making an unsubstantiated accusation of misconduct in a document they will read is both unfair and potentially defamatory.

Instead, raise your concerns in the form of confidential comments to the editor, through a separate channel that authors do not see. Most submission systems have this as a distinct field; if the system does not, a brief email to the editor with the subject line clearly marked as confidential is appropriate.

In your confidential note, describe specifically what you have observed: which data, which figures, which tables, what appears implausible or inconsistent, and why. The more specific you are, the more actionable the concern is for the editor. Suggest that the editor consider requesting raw data files from the authors or seek a second opinion from a statistician or specialist in the relevant methods.

The Committee on Publication Ethics (COPE) flowchart for suspected fabricated data in a submitted manuscript sets out the steps editors should take: contacting the authors to seek an explanation, involving the authors’ institution if the response is unsatisfactory, and considering rejection or referral for formal investigation. As a peer reviewer, you will usually not be involved in these subsequent steps.

If you are wrong

Raising a concern that turns out to be unfounded is not an error to be afraid of. Editors understand that reviewers are working from limited information. A specific, good-faith concern about data plausibility that leads to an explanation from the authors, and then to a clean outcome, is exactly how the system is supposed to work. What would be wrong is noticing something genuinely troubling and saying nothing, allowing potentially fabricated research to enter the published record.

The critical requirement is proportionality. Raise a concern, not a verdict. Describe what you observed, not what you concluded. Allow the editor and the authors the opportunity to provide context before any judgment is made.


If you want to approach peer review with more clarity and confidence, join the ReviewerOne platform to be part of a network that supports peer reviewers.

About the Author


ReviewerOne

ReviewerOne is a reviewer-centric initiative focused on strengthening peer review by supporting the people who make it work. ReviewerOne provides current and aspiring reviewers with AI-powered tools and resources to help them review more confidently, consistently, and fairly, without removing the human judgment that peer review depends on.

The ReviewerOne ecosystem brings together a reviewer-friendly peer review platform with structured guidance and AI-assisted checks; a community forum to foster networking and collaboration; a Reviewer Academy with practical learning resources on peer review, AI, ethics, and integrity; and meaningful recognition through verified credentials and professional profiles. ReviewerOne aims to reduce friction in peer review while elevating reviewer expertise, effort, and contribution.


