
How to assess methodology as a peer reviewer


ReviewerOne

14 Mar 2026 | Read Time: 3 mins


If you had to identify the single most important section of a research manuscript for a peer reviewer to assess carefully, most experienced reviewers would point to the Methods section. A manuscript can present a compelling research question and interesting results and still be fundamentally flawed if the methodology is not sound. Conversely, a manuscript with modest findings but a rigorous, transparent methodology is more robust and represents a genuine contribution to the scientific record.

The methodology is one of the areas where peer reviewers should focus most intensely. It is often dense and technical, and it takes longer to read carefully than other sections. Its problems can also be less obvious than a poorly argued conclusion or a misleading figure. This post covers what to look for when assessing the methodology, with specific pointers to guide your evaluation.

1. The reproducibility test

The most fundamental question you can ask about any Methods section is whether someone with appropriate expertise and resources could reproduce this study using only the information provided. If the answer is no, something is missing.

Reproducibility is the practical standard that makes scientific knowledge cumulative. If a finding cannot be replicated because the methods are not described in sufficient detail, the contribution of the paper to the field is limited. As you read the methods, ask yourself: What would I need to know to run this study myself? Is that information here? Common gaps include:

  • insufficient details about materials or instruments used
  • missing information about data collection procedures
  • missing descriptions of how variables were operationalized
  • analytical steps that are described too vaguely to follow

2. Study design appropriateness

A more subtle but equally important question is whether the study design is appropriate for the research question being asked. This is distinct from whether the methods are well executed. A study can be carried out rigorously and still be the wrong approach for what the researchers are trying to find out. Examples of this include:

  • A cross-sectional study that draws causal conclusions rather than acknowledging its correlational limitations
  • A sample size that is too small to detect the effect the study claims to measure
  • A control condition that does not adequately control for the confounding variables the study is meant to address

Assessing design appropriateness requires holding the research question and the study design in mind together and asking whether the design can actually answer the question.

3. Sample and participant information

For studies involving human participants, animal subjects, or sampled data, the Methods section should provide clear information about:

  • How the subjects or participants were identified, selected, and recruited or obtained
  • Sample size and the justification for that size
  • Inclusion and exclusion criteria
  • For human studies, evidence of ethical approval and informed consent

It is not uncommon for the sample size justification to be missing or inadequate. Such studies can produce findings that appear significant but are not robust. Look for a power calculation or an equivalent justification, and note its absence where one would be expected.
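As a quick sanity check on a reported sample size, the conventional formula for a two-sample comparison can be sketched in a few lines of Python. This is a normal-approximation illustration with arbitrary example values, not a substitute for the authors' own power analysis or for exact methods.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sample t-test,
    using the normal approximation: n = 2 * ((z_{1-a/2} + z_power) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Example: a "medium" standardized effect (Cohen's d = 0.5),
# 5% two-sided alpha, 80% power
print(n_per_group(0.5))  # 63 per group under this approximation
```

If a manuscript claims to detect a medium effect with, say, 15 participants per group, a rough check like this flags the study as likely underpowered and worth a specific comment.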

4. Statistical methods

Statistical analysis is one of the areas where reviewers most often feel uncertain, particularly if the methods go beyond common techniques. The honest approach is to evaluate what you can and note explicitly when something is outside your expertise.

For common statistical approaches, check whether:

  • The correct test is being used for the type of data and the research question
  • The assumptions of the test are addressed
  • The results are reported completely, with effect sizes and confidence intervals rather than p-values alone

A specific pattern to watch for is outcome switching, where the primary outcome reported in the results does not match the one specified in the methods. Although this is particularly relevant for clinical and intervention studies, it occurs across disciplines.
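To make the "effect sizes and confidence intervals, not just p-values" point concrete, here is a minimal standard-library sketch of Cohen's d and a normal-approximation 95% confidence interval for a difference in means. The data and function names are made up for illustration; real analyses would typically use a statistics package with exact t-based intervals.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def cohens_d(a: list[float], b: list[float]) -> float:
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

def diff_ci(a: list[float], b: list[float], level: float = 0.95) -> tuple[float, float]:
    """Normal-approximation CI for the difference in means (Welch-style SE)."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = NormalDist().inv_cdf(0.5 + level / 2)
    d = mean(a) - mean(b)
    return (d - z * se, d + z * se)

# Hypothetical example data
treatment = [5.1, 6.0, 5.8, 6.4, 5.5]
control = [4.8, 5.2, 4.9, 5.6, 5.0]

print(round(cohens_d(treatment, control), 2))
lo, hi = diff_ci(treatment, control)
print(round(lo, 2), round(hi, 2))
```

A result reported this way tells the reader both how large the effect is and how precisely it was estimated, which a p-value alone cannot.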

5. Data availability and transparency

Many journals now expect authors to make their data and analysis code available, either as supplementary material or in a public repository. Check whether the journal requires this and whether the authors have complied. If data sharing is not required, consider whether the data are described in sufficient detail to allow independent scrutiny. A manuscript that asks readers to trust results without providing any mechanism for verification is making a different kind of claim than one that provides full transparency.

What to flag and how

When you identify a problem with the methods, describe it precisely and explain why it matters. ‘The Methods section is insufficient’ is not a useful comment. ‘The sample size is not justified, and the study appears to be underpowered to detect the effect size described in the introduction’ is a comment the authors can respond to. Some concerns can be fixed with additional information or analysis; others, such as a fundamentally inappropriate study design, are more difficult to address. Be clear in your report about which category you think each concern falls into.

We’d like to hear from you

What aspect of the Methods section do you find most challenging to evaluate? Sample size? Statistical approaches? Something else? Share it in the comments below. Your contribution could be useful for reviewers across the community.

For practical, structured guidance on methods evaluation and other aspects of peer review, join the ReviewerOne Community.

About the Author


ReviewerOne

ReviewerOne is a reviewer-centric initiative focused on strengthening peer review by supporting the people who make it work. ReviewerOne provides current and aspiring reviewers with AI-powered tools and resources to help them review more confidently, consistently, and fairly, without removing the human judgment that peer review depends on.

The ReviewerOne ecosystem brings together a reviewer-friendly peer review platform with structured guidance and AI-assisted checks; a community forum to foster networking and collaboration; a Reviewer Academy with practical learning resources on peer review, AI, ethics, and integrity; and meaningful recognition through verified credentials and professional profiles. ReviewerOne aims to reduce friction in peer review while elevating reviewer expertise, effort, and contribution.


