In Resource

In the AI era, fluent writing is becoming less and less of a differentiator in scientific publishing. Access to polished, idiomatic English is no longer limited by language background alone, and sentence-level fluency is now easier than ever to achieve.

What really distinguishes manuscripts that move forward, and always has done, is editorial insight: the ability to anticipate how a paper will be read, evaluated, and judged by others in the field.

Below are the key notes that need to be hit across each manuscript section for a paper to move forward. Feel free to download and print this list to keep alongside you when revising your next manuscript!

Title & Abstract

1. Precision of the Central Claim

Signal: Does the title reflect what the paper actually demonstrates, rather than what it hopes to suggest?
Why it matters: Over- or under-precision at the title level shapes expectations before the paper is even read.

2. Alignment Between Abstract and Manuscript

Signal: Does the abstract accurately mirror the strength, scope, and limits of the results?
Why it matters: Misalignment is one of the fastest ways to lose reviewer confidence.

3. Commitment Without Overreach

Signal: Are conclusions in the abstract explicit and defensible, or hedged by default?
Why it matters: The abstract is where intellectual ownership is first assessed.

Introduction

4. Framing of the Knowledge Gap

Signal: Is the gap clearly articulated as a problem that genuinely needs addressing?
Why it matters: Reviewers look for necessity, not just novelty.

5. Direction of the Narrative

Signal: Does the introduction clearly lead the reader toward a specific question or claim?
Why it matters: An introduction without direction weakens everything that follows.

6. Audience Awareness

Signal: Is it clear who this paper is written for, and what that audience already knows?
Why it matters: Writing for “everyone” often convinces no one.

Methods

7. Transparency vs. Over-Explanation

Signal: Are methodological details presented clearly and proportionately?
Why it matters: Excessive detail can obscure rigour just as much as insufficient detail.

8. Alignment With Claims

Signal: Do the methods clearly support the questions and claims posed in the introduction?
Why it matters: Misalignment raises doubts about study design, even when methods are sound.

Results

9. Narrative Progression

Signal: Do the results change the reader’s understanding step by step, or simply accumulate data?
Why it matters: Significance emerges through progression, not completeness.

10. Use of Emphasis

Signal: Is emphasis applied selectively, or spread evenly across all findings?
Why it matters: When everything is highlighted, nothing stands out.

11. Logical Transitions

Signal: Are transitions between results conceptually clear, not just grammatical?
Why it matters: Logic, not fluency, determines readability at expert level.

Figures & Data Presentation

12. Figure–Text Alignment

Signal: Do the figures clearly support the narrative in the Results, or do they compete with it?
Why it matters: Figures should carry the story forward, not force the reader to reconstruct it.

13. Interpretability

Signal: Can the main message of each figure be understood without excessive cross-referencing?
Why it matters: Reviewers often assess figures before reading the full text.

Discussion

14. Conceptual Interpretation

Signal: Does the discussion take a clear conceptual step beyond summarising results?
Why it matters: Readers and reviewers expect meaning, not repetition.

15. Proportionality of Claims

Signal: Are claims scaled appropriately to the strength and limits of the data?
Why it matters: Overreach and under-claiming are equally damaging.

16. Framing of Limitations

Signal: Are limitations framed as boundaries of interpretation rather than weaknesses?
Why it matters: Thoughtful framing strengthens trust and authority.

Whole-Manuscript Signals

17. Consistency of Concepts and Terminology

Signal: Are key concepts used consistently throughout, without subtle drift in meaning?
Why it matters: Conceptual drift confuses reviewers more than stylistic issues.

18. Anticipation of Reviewer Questions

Signal: Does the manuscript implicitly address obvious reviewer concerns?
Why it matters: Anticipation signals maturity and command of the field.

19. Decision Resolution

Signal: Has the manuscript resolved key questions of scope, audience, and claim strength?
Why it matters: Manuscripts stall when decisions are deferred.

20. Sense of Direction

Signal: Does the manuscript feel as though it knows where it is going?
Why it matters: Direction creates confidence; uncertainty invites scrutiny.

In Blog

As we approach the Lunar New Year, and the transition from the Year of the Snake to the Year of the Horse, it feels like an appropriate moment to reflect on how scientific manuscripts really move forward.

The Snake is often associated with reflection, intuition, and careful thought. These qualities are essential in research and in the early stages of drafting a paper. But reflection alone rarely gets a manuscript published. At some point, work has to move from careful thinking to confident decision-making.

That shift has become more visible in recent years. In an AI-normalised writing landscape, fluent text is no longer scarce. Well-phrased sentences, smooth transitions, and grammatically polished drafts are now easy to generate. What gives a paper the edge now is something else entirely: the ability to be decisive and to offer insight.

So what do we mean by insight in this context? We mean the ability to see what the data genuinely support, to decide how strongly to commit to a claim, to recognise where interpretation should stop, and to anticipate how a manuscript will be read by reviewers and by the field.

Across the hundreds of papers that cross our desks each year, it is at this level that we most often see manuscripts begin to struggle. It is not because the science is weak, nor because the English is poor, but because the decisions that give a paper its shape have not quite been made.

Over the past month, we’ve been highlighting several recurring editorial signals that point to this problem. Here, we bring them together as part of a broader view on what we believe actually moves manuscripts forward.

When clarity stops short of commitment

One of the most common patterns we see is writing that is clear but non-committal. The language flows, the structure is sound, but the central conclusion remains carefully hedged. In many cases, this reflects caution rather than uncertainty. Authors know their data are solid, but hesitate to state explicitly what those data demonstrate.

AI-assisted drafting tends to amplify this tendency, defaulting to safe, tentative phrasing precisely because interpretation and responsibility cannot be automated. For readers, this creates uncertainty: they finish the paper unsure what they should now believe, apply, or build on. For reviewers and editors, it raises questions about confidence and clarity of thinking.

This is a good point to pause, because what distinguishes convincing papers is not stronger language, but clearer decisions. Authors who do this well anchor their conclusions firmly to the data, define the conditions under which a claim holds, and accept responsibility for interpretation without overstating it. Commitment, in this sense, is not hype but proportion — and that is an essential distinction to make.

Why papers need narrative movement, not just data

A related issue often appears in Results sections. We frequently read papers that are technically correct and perfectly precise, yet narratively static. The data are presented accurately and the figures are described clearly, but the reader’s understanding does not progress.

When this happens, significance becomes difficult to grasp — not because it isn’t there, but because it has not been shaped into a story that moves forward.

Strong Results sections create momentum by making change visible. After each key finding, the reader should understand what they now know that they did not before, even if that change is incremental. This does not require interpretation in the Results section, but it does require narrative logic: an awareness of how each piece of data advances the overall argument.

The conceptual step that turns results into meaning

Discussion sections often compound these issues. Many do an excellent job of summarising findings and situating them within the literature, but stop short of taking the conceptual step that turns results into meaning.

You will no doubt have read papers where interpretation feels hesitant, as though the authors are reluctant to draw conclusions that might be challenged. Yet this interpretive step is exactly what readers and reviewers are looking for. Without it, discussions feel descriptive rather than engaged.

Papers that resonate are those in which authors clearly articulate how their findings refine, extend, or challenge existing understanding — with appropriate restraint. This is not about saying more; it is about saying what matters.

Strong papers don’t over-hype — they draw clear boundaries

A natural concern at this point is how to achieve all of this without over-hyping a story. After all, every study has limits. It is rare for a paper to close every loop or answer every question.

Problems tend to arise not because limitations exist, but because they are left implicit. When gaps are unspoken, reviewers often infer overreach, even where none was intended.

Clear boundaries do not weaken a manuscript; they strengthen it. They signal rigour, honesty, and control over the narrative. Papers that do this well are explicit about what their findings do not address and where interpretation should stop. This level of transparency builds trust and allows the contribution to stand on solid ground.

Why perspective makes the difference

Taken together, these patterns point to a single underlying issue: perspective.

When you are close to your own work, it is difficult to see where commitment is lacking, where narrative progression stalls, or where boundaries need to be named more clearly. This is not a failure of expertise, but a natural consequence of being deeply invested in the science. This challenge is precisely why Insight Editing London evolved into the company it is today.

We approach manuscripts with the distance needed to spot where they might stall — reading with the combined perspective of editor, reviewer, and reader, rather than author alone. Our primary emphasis is not on correcting language. It is on making the decisions that shape clarity, coherence, and confidence visible, so that authors can move their work forward with intent.

As the Year of the Horse begins, and you find yourselves drafting new manuscripts, revising submissions, or responding to reviewer comments, remember that momentum matters. Build it on clear decisions about claims, narrative, interpretation, and limits — and it will carry your work further, just as the Horse carries us into 2026.

Wishing all our collaborators, clients, and colleagues a happy Lunar New Year!

In Blog

How IEL Supports Researchers in the AI Era

Artificial intelligence has moved from a novelty to a day-to-day writing tool remarkably quickly. Many researchers now draft sections of a manuscript with the help of AI, tidy their English using automated editors, or explore tools that summarise literature and speed up early brainstorming. In laboratories, offices, and home workspaces around the world, AI has simply become part of the writing ecosystem.

At Insight Editing London, our perspective is straightforward: researchers will use AI, and should feel able to do so safely, transparently, and without losing their scientific voice. The question is no longer whether AI will shape academic writing, but how researchers and editors can use it responsibly while preserving clarity, accuracy, and integrity.

AI is here as a tool, not an author

Major publishers are increasingly integrating custom AI-screening tools into their manuscript triage systems. These tools are not designed to “catch” AI use outright, but to flag text that may require human review — such as overly generic phrasing, formulaic structures, or inconsistencies that sometimes arise in AI-assisted writing. In many ways, these automated screens now function like an additional quality and integrity checkpoint.

This is good news. It means researchers can use AI tools where helpful, as long as the manuscript remains clearly authored, clearly human, and clearly grounded in the authors’ understanding of their own work. What matters is remembering what AI cannot do.

AI assists — but it does not think, interpret, double-check, take ownership of ideas, or say “I don’t know”. And this is precisely where trusted editorial support becomes essential: to help ensure that AI-assisted text is accurate, authentic, and aligned with the standards of international scientific publishing.

Embracing AI without losing your voice

One of the most common concerns we hear from researchers — particularly those whose first language is not English — is that AI can flatten the tone of a manuscript. Text may become linguistically correct but stylistically generic; phrasing becomes repetitive; the narrative loses shape and clarity.

Researchers often tell us that, after AI-assisted editing, their manuscripts no longer sound like their own writing.

This isn’t a failure on the researcher’s part — it is simply how large language models operate. They predict patterns, smooth variation, and remove the idiosyncrasies that make scientific writing human.

Our role at IEL is to help you to restore that individuality: your structure, your reasoning, your emphasis, and your way of communicating your science. Our editors help ensure AI-assisted text remains genuine, coherent, and aligned with the conventions of scientific writing — and with your authorial voice.

Embracing AI without losing your skills

A second concern raised by supervisors and journal editors is that an over-reliance on AI can slowly weaken the communication skills researchers need throughout their careers. Clear scientific writing strengthens critical thinking, deepens understanding of the work, and helps researchers communicate confidently with funders, collaborators, and the wider community. When AI does too much of this work, those essential skills — structuring arguments, explaining results, polishing narratives — are practised less often, and eventually weakened.

None of this means AI should be avoided. It simply means that researchers benefit from using it thoughtfully, as a tool that supports the writing process rather than replaces it. IEL’s role is to help maintain that balance.

How can IEL help? We believe AI should support researchers, not replace the skills that underpin strong scientific communication. Online training has always been central to our work and is one of the key ways we differ from other editing companies.

Through clear in-text comments and concise editorial reports, we explain what we adjusted, why it matters, and how the writing could be strengthened further. This approach not only improves the current manuscript but also helps researchers build confidence for future papers. In an era where AI can tidy language but cannot teach judgement, narrative flow, or discipline-specific conventions, our focus remains on helping authors strengthen — not sidestep — their communication skills.

Supporting researchers who feel uncertain about AI

In many regions, the pressures surrounding AI use are acute. Researchers aiming for international English-language journals often worry about how their use of AI will be judged by editors, reviewers, and integrity-screening tools.

These concerns are legitimate, and the more AI develops, the more researchers need clear, human-led guidance to navigate it safely. For that reason, IEL helps authors use AI transparently and in line with journal policies, without compromising their voice or the integrity of their work.

Where human editorial expertise still matters most

While AI can tidy sentences, it cannot replace the analytical, interpretive, and ethical judgement that sits at the heart of scientific communication. This is where human editorial expertise remains essential.

We embrace this new era with optimism and realism. We are here to help you use AI confidently, preserve your scientific voice, and communicate your work with clarity, accuracy, and integrity. Your science deserves to be heard — in your words, in your way, and with the trusted support of experienced human editors.

In Training

We’re thrilled to share our free short training primer on research ethics and integrity — a concise introduction to the principles that underpin responsible, high-quality research.

Our IEL avatar brings the topic to life, guiding you through key ideas in this accessible 10-minute session.

This video offers a glimpse of the broader training we provide at Insight Editing London — from in-depth courses to focused primers covering every stage of the research process, including study design, writing, and publishing.

📩 Contact us to learn more about our tailored training options and how we can support your team’s professional development.

In Resource

Before you hit “submit,” it’s worth asking: is your manuscript really integrity-proof? Feel free to use our handy checklist to make sure you have covered all the main points:

Authorship — Have all contributors been properly acknowledged, and does everyone meet the journal’s authorship criteria?

Responsibility — Do all authors accept responsibility for the integrity and accuracy of the work as submitted?

Attribution — Have you cited all sources fairly, avoided text recycling, and checked for accidental plagiarism?

Data Reporting — Are all relevant findings (including negative or conflicting results) presented transparently, with limitations clearly stated?

Conflicts of Interest — Have you disclosed any financial, institutional, or personal interests that could be seen as influencing the work?

Data Availability — Have you deposited your data, code, or materials in a trusted repository, or clearly explained why this isn’t possible?

Journal Choice — Have you vetted the journal to make sure it’s reputable (e.g., indexed, clear peer review process, not predatory)?

Revisions — If revising after peer review, are your responses complete, professional, and transparent?

Ethics Approval — For human/animal studies, have you obtained and clearly stated ethics approval from the relevant committee?

Informed Consent — For human participants, have you documented informed consent (and assent where relevant)?

Trial Registration — If applicable, have you registered your clinical trial or study in a recognised registry?

Image/Data Integrity — Have figures, images, and data been prepared responsibly (no inappropriate manipulation, clear labels, raw data available on request)?

Funding Statement — Have you declared all sources of funding and specified the funders’ role (or non-role) in the study?

Acknowledgments — Have you credited non-author contributors (technicians, facilities, advisors) appropriately?

Supplementary Materials — Are supplementary data complete, accurate, and consistent with the main text?

Language Editing / AI Use — If AI tools or editing services were used, have you declared them in line with journal policy?

How did you do?
If you answered “yes” to all, you’re ready to go. If you hesitated on any, that’s exactly where IEL can help—supporting not just the polish, but the clarity, transparency, and integrity that journals and reviewers expect.

In Blog

At Insight Editing London, we see first-hand the challenges that researchers face in writing and publishing responsibly. Every week we edit manuscripts where authors are doing excellent science, but every now and then we come across examples where the way the science is written risks unintentionally undermining its integrity. And of course, there are also the wider systemic pressures: competition for funding, the pressure to publish quickly, and the rise of new challenges like AI and paper mills.

This month, we are highlighting some of the issues we encounter most often in scientific communication today. These aren’t abstract problems—you’ll recognise them in discussions in your lab, in peer review reports, and perhaps even in your own drafts. Our aim is to explain why these practices matter, what can go wrong if they are ignored, and how you can avoid the pitfalls.

Our aim here is not to cover every detail, but to raise awareness of the key issues that can affect integrity in scientific communication. To help you take this further, we’ve included links to trusted resources and guidelines where you can explore each topic in more depth.

1. Authorship & Contributorship: Giving Credit Where It’s Due

Questions about authorship are one of the most common integrity issues we hear about from clients. Should a lab head be included? What about the student who generated the data but has since left?

Gift authorship (adding someone with little or no input) and ghost authorship (leaving out someone who made a real contribution) both distort the record. They might feel like small compromises in the moment, but they create tension and unfairness.

We’ve seen the consequences: disputes between collaborators, strained relationships, and in some cases, corrections or even retractions. The best solution is transparency. Talk about authorship early, update agreements as the project develops, and use contributorship statements to spell out everyone’s role.

2. Attribution & Plagiarism: Respecting the Work of Others

As editors, we sometimes come across text that looks “borrowed.” Often it isn’t intentional plagiarism, but rather careless paraphrasing or recycling wording from an earlier paper. Still, it matters.

Plagiarism—whether direct copying, close paraphrasing, or recycling your own text without acknowledgement—misrepresents originality. Journals use detection software and take it seriously. Consequences can include rejection, retraction, or worse.

The solution is simple: cite properly, paraphrase carefully, and if in doubt, over-acknowledge rather than under.

3. Ethical Writing & Data Reporting: Telling the Whole Story

One of the issues we notice most in manuscripts is not outright misconduct, but overstatement. For example, presenting results as more definitive than they are, or ignoring contradictory findings. Sometimes this comes from enthusiasm—it’s natural to want to highlight your strongest results—but it can tip into misrepresentation.

Reviewers are quick to spot cherry-picking or claims that aren’t backed by the data. At best, you’ll be asked for revisions; at worst, trust in your work is damaged.

Our advice: follow reporting guidelines, present limitations openly, and write conclusions that reflect the evidence you have—not the evidence you wish you had.

4. Paper Mills & Predatory Publishers: Protecting the Literature

The rise of paper mills and predatory publishers is something we all need to be vigilant about. We hear from researchers who have been approached with offers of guaranteed publication or authorship slots. It can be tempting under pressure, but it comes with serious risks.

These outlets undermine trust, waste resources, and can permanently damage your reputation. We recommend always checking journals through resources like Think. Check. Submit. or by looking at the editorial board and peer review process.

5. Conflicts of Interest: Transparency Builds Trust

Conflicts of interest aren’t always financial. They can be professional, institutional, or even personal. But whatever the form, lack of disclosure erodes trust.

We sometimes notice missing or incomplete conflict declarations during editing, and we always flag it. It’s better to over-disclose than under-disclose—transparency reassures readers and reviewers.

6. AI Misuse: A New Frontier in Integrity

AI is the newest issue in integrity, and one we’re already seeing in manuscripts. We notice when text feels “AI-smooth”—grammatically perfect but oddly flat, with structural or logical weaknesses.

AI can be useful for refining sentences or cutting word counts, but it cannot interpret data or argue a case. Journals are clear that AI cannot be an author, and use must be declared. Our position is simple: use AI for small tasks if you wish, but never outsource the intellectual core of your paper.

7. Peer Review Ethics: Safeguarding the Process

We also hear stories from clients about unfair or biased peer reviews, or about confidential data leaking from the process. Peer review relies on trust and professionalism.

If you’re invited to review, only accept if you have the right expertise, be transparent about conflicts, and above all, respect confidentiality.

Integrity isn’t just about avoiding misconduct—it’s about building credibility and impact. As editors, we can tell you that reviewers do notice when manuscripts are overstated, under-acknowledged, or poorly structured. And they also notice when they are clear, transparent, and responsible.

At IEL, we help researchers strengthen their manuscripts not only for clarity and flow, but also for integrity. Through our reports, comments, and ongoing dialogue, we aim to make every paper not just publishable, but also a genuine contribution to the field.

If you’re preparing a manuscript and want to ensure it’s not only well written but also ethically sound, get in touch—we’ll be pleased to help.