What AI detectors actually check
Different vendors use different mixes of signals, but most detectors revolve around four broad ideas.
1) Predictability of your word choices (perplexity and burstiness)
Language models pick highly probable next words. Detectors measure how predictable your text is. If most words are very likely given the preceding context, the prose can look machine-like. Human writing usually has more “spiky” rhythm: a simple sentence here, an odd turn of phrase there, an example from lived experience, then a short punchy line. Tools summarise this as perplexity (overall surprise) and burstiness (variation in sentence length and structure). Very even prose with similar sentence lengths and repeated connective patterns (“Moreover…, Furthermore…, Additionally…”) can push a detector towards “AI-ish”.
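If you're curious how uneven sentence lengths show up numerically, here is a minimal sketch in Python. It only measures the spread of sentence lengths with a naive full-stop split; real detectors combine far richer features, and the sample text is invented for illustration.

```python
# A rough illustration of a burstiness-style signal: how much sentence lengths
# vary across a passage. Real detectors use richer features, but very low
# variation is one thing that can make prose look machine-like.
import re
import statistics

def sentence_lengths(text: str) -> list[int]:
    """Naive sentence split on ., ! and ?; counts words per sentence."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text: str) -> float:
    """Standard deviation of sentence length, in words (0 means perfectly uniform)."""
    lengths = sentence_lengths(text)
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

sample = ("Detectors estimate predictability. Human writing tends to vary. "
          "Sometimes a sentence runs long, picks up an aside, and only then stops. "
          "Then a short one lands.")
print(f"Sentence lengths: {sentence_lengths(sample)}")
print(f"Burstiness (std dev of lengths): {burstiness(sample):.1f}")
```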
2) Stylometric fingerprints
Stylometry is the study of writing style. Detectors look at function-word balance (the small glue words like “and”, “of”, “that”), parts-of-speech ratios, punctuation habits, collocations (words that often travel together), and syntactic patterns. Some tools compare the target text with a baseline for your cohort or course; others compare the opening section of your work with later sections to spot sudden, unexplained style shifts.
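To make "function-word balance" concrete, here is a toy sketch that builds a frequency profile of a handful of glue words and compares two passages. The word list and the distance measure are illustrative choices, not any vendor's actual method.

```python
# A toy stylometric profile: relative frequencies of common English function
# words. Comparing two profiles gives a crude sense of style distance.
# The word list and the distance measure here are illustrative choices only.
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "for", "it"]

def profile(text: str) -> dict[str, float]:
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return {w: counts[w] / total for w in FUNCTION_WORDS}

def style_distance(text_a: str, text_b: str) -> float:
    """Sum of absolute differences between two function-word profiles."""
    pa, pb = profile(text_a), profile(text_b)
    return sum(abs(pa[w] - pb[w]) for w in FUNCTION_WORDS)

opening = "The aim of this essay is to examine how detectors read a draft."
later = "It is clear that the approach is useful for the task in question."
print(f"Style distance: {style_distance(opening, later):.3f}")
```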
3) Token-level traces and sampling artefacts
Under the bonnet, large language models generate text token by token. Certain sampling settings (temperature, top-p) can leave statistical footprints: overuse of “safe” phrasing, neatly hedged claims, or an oddly uniform cadence. A few research tools (for example, GLTR-style visualisations) highlight how many tokens were among a model’s “top-k” easiest picks. That doesn’t prove AI use, but it adds to a risk picture.
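The top-k idea can be sketched in a few lines if you assume access to a small open model. The example below uses the Hugging Face transformers library with the public GPT-2 weights purely to illustrate the principle; it is not a detector, and its numbers prove nothing on their own.

```python
# A rough sketch of a GLTR-style check: for each token, ask a small language
# model how highly it ranked the word the author actually used.
# Assumes the Hugging Face `transformers` library and public GPT-2 weights;
# real detectors use larger models and many more signals than this.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_k_share(text: str, k: int = 10) -> float:
    """Fraction of tokens that were among the model's k most likely choices."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    hits = 0
    # Compare each token (from position 1 onwards) with the model's prediction
    # made from the preceding context.
    for pos in range(1, ids.shape[1]):
        top_k_ids = torch.topk(logits[0, pos - 1], k).indices
        if ids[0, pos] in top_k_ids:
            hits += 1
    return hits / max(ids.shape[1] - 1, 1)

print(top_k_share("The quick brown fox jumps over the lazy dog."))
```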
4) Non-textual and contextual hints
Some detectors read metadata (file history, creation timestamps, revision gaps), compare drafts, or examine whether citations actually support the sentences around them. Others cross-check for text reuse (traditional similarity checking) and for semantic recycling: the ideas are not copied word-for-word, but the structure and paragraph logic mirror well-known online explainers. Where institutions allow it, teachers also review process evidence such as notes, outlines and intermediate drafts to judge authenticity.
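Traditional similarity checking boils down to scoring overlap between texts. As a toy illustration only, the sketch below uses scikit-learn's TF-IDF and cosine similarity on two invented passages; real tools add document fingerprinting and paraphrase detection on top.

```python
# A toy lexical-similarity check of the kind traditional text-matching tools
# build on. Real systems use document fingerprinting and paraphrase detection;
# this only scores word overlap between two short passages (both invented here).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

draft = "AI detectors estimate how predictable each word is given its context."
explainer = "Detectors for AI writing measure the predictability of word choices in context."

vectors = TfidfVectorizer().fit_transform([draft, explainer])
score = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"Cosine similarity: {score:.2f}")  # unusually high scores invite a closer look
```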
Important: None of these checks is a silver bullet. The outputs are probabilistic, not verdicts. That’s why most universities frame detector results as a starting point for enquiry, not a final judgment.
Why perfectly honest students sometimes get flagged
Understanding false positives helps you avoid them without turning your essay into a contrived puzzle.
- Very polished language throughout. If every sentence is grammatically pristine, with similar length and a safe, general tone, the text can look model-like, even when you wrote it yourself after heavy use of grammar tools.
- Over-templated structure. Many students learn to write by following the same internet templates (“In this essay, I will…”, “Furthermore…”, “In conclusion…”). When hundreds of essays share the same skeleton, detectors see uniformity.
- Thin detail. General statements without local examples, numbers, or references feel like model drafts.
- Last-minute uniform editing. A document written in bursts over weeks but fully polished in one session can produce metadata that looks odd.
- Short answers. Extremely short responses (a paragraph or two) don’t give enough signal for a fair reading, so tools lean on surface regularities and can over-flag.
How to stay original (and sound like you)
You don’t need to “evade” detectors. You need to write in a way that’s anchored in your own thinking and sources. The steps below help your work read as unmistakably human, and simply make it better.
1) Start with a concrete angle and a question
Instead of “The impact of AI on education,” narrow it: “When do AI detectors misread students’ drafts, and what evidence can students keep to document honest work?” A specific research question forces specific writing.
2) Build a small evidence pack before you draft
Keep a live note of:
- three to five credible sources you’ll actually cite (standards bodies, official guidance, refereed articles, reputable news with named sources);
- two short case examples (from your workplace, course or public cases you can reference);
- any numbers you’ll use (dates, figures, definitions).
This pre-work naturally seeds names, dates and examples into your prose: the kind of detail language models often avoid or generalise.
3) Write a quick “process log” alongside your draft
A few lines each time you sit down to write: what you read, what you decided to include, what you cut, and why. If a tutor later asks about authenticity, you can show your process. It also helps you keep consistent style across sections.
4) Draft in your spoken voice first
Open a blank page and explain your answer the way you’d explain it to a course mate. Use short sentences. Add the fancy phrasing later. That first pass sets a human cadence with natural variation and personal choices.
5) Weave in “traceable” specifics
Name frameworks, cite policies, and ground points with a small number or date. For example, when mentioning institutional expectations about AI in assessment, you can point to national guidance such as the UK QAA’s resources for universities on generative AI (useful context for you and your reader).
6) Keep examples local
Where possible, choose examples you can reasonably claim: lab reports from your module, a case from your placement, a public policy your city council actually implemented, or a specific journal article you read. Add one sentence of commentary explaining why the example matters.
7) Mix sentence rhythms on purpose
After your first tidy-up, do a “rhythm pass”:
- Break a long sentence into two.
- Add a short, emphatic line after a complex paragraph.
- Swap a predictable opener (“Furthermore,”) for a direct statement or a question.
This disrupts the ultra-regular cadence that pings detectors.
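If you like a quick, mechanical sanity check after the rhythm pass, a few lines of Python can flag stretches of near-identical sentence lengths and stock connective openers. The thresholds below are arbitrary; treat the output as prompts for another read, not as rules.

```python
# A small self-check to run after your rhythm pass: flag stretches of
# near-identical sentence lengths and repeated connective openers.
# Thresholds here are arbitrary; use them as prompts, not rules.
import re

CONNECTIVES = ("furthermore", "moreover", "additionally", "in conclusion")

def rhythm_report(text: str) -> None:
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    for i in range(2, len(lengths)):
        window = lengths[i - 2:i + 1]
        if max(window) - min(window) <= 2:
            print(f"Sentences {i - 1}-{i + 1} are almost the same length: {window}")
    openers = [s for s in sentences if s.lower().startswith(CONNECTIVES)]
    if len(openers) >= 2:
        print(f"{len(openers)} sentences open with a stock connective.")

rhythm_report("Moreover, this is true. Furthermore, that is also true. "
              "Additionally, the point stands. The evidence supports it clearly.")
```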
8) Use citations the way scholars do
Quote when words are distinctive. Paraphrase when ideas matter more than wording. Always add your own commentary after a quote or paraphrase: one or two lines that connect it to your argument. That “glue” is your voice.
9) Revise to remove generic filler
Detectors and markers both dislike empty scaffolding (“In today’s rapidly evolving landscape…”). Replace it with a concrete claim, or cut it. If a sentence wouldn’t be missed, it probably shouldn’t be there.
10) Keep your drafts and exports
If you write in Google Docs or Word, keep version history on. Export a PDF of your notes and outline. Save web pages you cite. Authentic students sometimes lose the benefit of the doubt because they can’t show how the work was built. Keep the paper trail.
Ethical use of tools
Grammar checkers, reference managers and even idea-generators can be used ethically if your institution allows them and you acknowledge support where required. The line you mustn’t cross is submitting work written by someone, or something, else as if it were entirely yours.
- Request editing and feedback on your draft (structure, argument flow, clarity), not a ghost-written submission.
- Use source-finding support to gather reading lists, then read and cite those sources yourself.
- Keep ownership: your final submission should be in your words, reflecting your understanding, with appropriate acknowledgements if your course requires them.
That approach improves your work and keeps you comfortably inside academic-integrity rules.
A clear, original structure for this blog topic (that detectors also tend to like)
When you write for the web, you still want a coherent shape. Here’s a simple, reader-friendly structure you can adapt:
- Set the question. One paragraph that frames the problem in your own words.
- Explain how detectors think. Two to three paragraphs covering predictability, stylometry, sampling artefacts and context checks, with one grounded example.
- Show the pitfalls. A paragraph on why honest students get flagged.
- Give your method. Five to six paragraphs that show how you personally keep work original (angle, note-taking, drafting voice, local examples, citations, rhythm pass, saved drafts).
- Ethics and tools. A paragraph that sets expectations about tool use and external help, including how to work with a mentor or service ethically.
- Wrap up with a short checklist. Not a long bullet list, just a few lines readers can act on today.
That shape naturally raises your “human signals”: specific examples, varied rhythm, and a visible thought process.