Why AI-Written CVs Get Caught: A Recruiter Breakdown
A 12-year UK recruiter on how we actually detect AI-written CVs, the 7 telltale signals, the honest catch rate, and how to use AI without getting flagged.
AI-written CVs aren’t a future problem for recruiters. They’ve been here for over two years, they’re now the majority of what lands in my inbox, and the conversation about whether we can spot them is the wrong conversation. We can. The right question is which ones get caught and why.
From where I sit, screening 80-100 CVs a week for a London recruitment desk, I’d estimate 60% of CVs I see now have meaningful AI involvement. Two years ago that figure was closer to 15%. The shift happened fast and it isn’t reversing. But the catch rate inside that 60% is small. Around 7% are obvious enough that I flag them within ten seconds, and almost all of those are candidates who copy-pasted from ChatGPT and clicked submit.
The distinction that matters isn’t AI versus human. It’s AI-assisted versus AI-dumped. AI-assisted means the candidate drafted with a chatbot, edited heavily, kept their own voice, and the document reads like them. That’s fine. I don’t even know it’s AI-assisted half the time. AI-dumped means they pasted a job description into ChatGPT, copied the output, and sent it. That’s the version that gets caught, and the rest of this article is a breakdown of how — for journalists, for job seekers, and for hiring managers trying to calibrate. This sits inside the broader resume guide, which covers what to do once you’ve understood the problem.
How recruiters actually detect AI
Detection happens in three layers, in roughly this order.
Layer one is pattern recognition. This is the ten-second scan. After 15,000 CVs you don’t read individual words on the first pass, you read shapes. AI-generated text has a recognisable shape: parallel sentence structures, a particular vocabulary cluster, predictable paragraph rhythms. I’m not consciously analysing — I’m matching against a template my brain has built from years of reading both human and AI-written documents. The same way an art appraiser spots a forgery before they can articulate why.
Layer two is tonal mismatch. This catches CVs that survived layer one. I look at the CV alongside the cover letter, the LinkedIn profile, the email the candidate sent me, and any other written communication. Real humans have a consistent voice across these. AI-dumped CVs have a polished corporate voice in the CV and the candidate’s actual voice everywhere else. The gap between the two is the giveaway. It’s the same logic as a forensic accountant looking for inconsistencies between tax returns and bank statements.
Layer three is verifiable claim collapse. This is the interview filter. AI-written CVs contain plausible-sounding metrics and project descriptions that a human candidate cannot defend in detail. “Increased operational efficiency by 30%” survives the screen but dies the moment I ask which efficiency metric, measured how, baselined against what. The CV writes cheques the candidate can’t cash. By layer three the candidate has already been invited in, so the cost is theirs, not mine.
The 7 detection signals
Here’s the framework I use, named for what it is. Each signal alone proves nothing. Two together raise an eyebrow. Three or more, and I’m reading the rest of the document looking for confirmation.
Signal 1: The em-dash spike
ChatGPT uses em-dashes at roughly four times the rate of human British writers. Where you and I would use a comma or a full stop, the model reaches for an em-dash. A typical AI bullet looks like “Led product strategy — driving growth across three verticals — with measurable impact on retention.” Two em-dashes in a 13-word bullet is statistically near-impossible in human writing. I scan the punctuation density before I read the words.
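The punctuation-density scan can be sketched as a few lines of Python. The 0.05 threshold (roughly one em-dash per 20 words) is an illustrative assumption for the sketch, not a figure from the article:

```python
import re

def em_dash_density(text: str) -> float:
    """Em-dashes per word. Human prose rarely sustains more than ~0.05."""
    words = re.findall(r"[A-Za-z']+", text)  # count words, not dash tokens
    if not words:
        return 0.0
    return text.count("—") / len(words)

bullet = ("Led product strategy — driving growth across three "
          "verticals — with measurable impact on retention.")
print(round(em_dash_density(bullet), 3))  # 2 em-dashes / 13 words ≈ 0.154
```

Run against the sample bullet above, the density comes out around three times the hedged threshold, which is the point of the signal: you don't need to read the words to see the shape.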
Signal 2: Buzzword density
The cluster of “leveraged”, “synergy”, “spearheaded”, “results-driven”, “cross-functional”, “holistic” and “passionate” appearing within a single bullet, or two of them in the opening line, is a near-perfect AI tell. Real corporate writers know these words are dead. The AI tools were trained on resume databases that valued them, so the models still produce them at high frequency. I keep a running mental list and flag it when more than three appear on the page. There’s a fuller breakdown in the AI buzzwords recruiters hate.
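The "more than three on the page" rule of thumb is mechanical enough to sketch. The word list and threshold come from the article; both are heuristics, not a formal detector:

```python
import re

# The buzzword cluster named in the article.
BUZZWORDS = ["leveraged", "synergy", "spearheaded", "results-driven",
             "cross-functional", "holistic", "passionate"]

def buzzword_count(page_text: str) -> int:
    """Count buzzword hits anywhere on the page, case-insensitively."""
    pattern = r"\b(" + "|".join(map(re.escape, BUZZWORDS)) + r")\b"
    return len(re.findall(pattern, page_text, re.I))

def flags_as_ai(page_text: str) -> bool:
    """The article's rule of thumb: more than three hits raises a flag."""
    return buzzword_count(page_text) > 3
```

A line like "Passionate, results-driven leader who spearheaded cross-functional synergy" scores five hits on its own, which is the near-perfect tell the article describes.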
Signal 3: Tonal whiplash
The CV reads like a McKinsey deck. The cover letter reads like a WhatsApp message. The LinkedIn About section reads like a third writer again. Real candidates produce a consistent voice across all three because they came out of the same head. When the documents sound like three different writers, two of them are AI and one is the candidate. Nine times out of ten the human one is the cover letter, because candidates often skip AI for short documents.
Signal 4: The phantom metric
Specific-sounding percentages that don’t survive a follow-up question. “Improved team productivity by 27%.” Improved against what baseline? Measured how? Over what period? Real candidates have a fluent answer because they did the work. AI candidates pause, then either invent a baseline on the fly or admit they don’t remember. The metric was generated to sound credible on paper, not to be defended in conversation.
Signal 5: Identical structural rhythm
Every bullet starts with a verb. Every bullet has three clauses. Every bullet ends with an outcome. The whole CV has a metronome quality that real human writing never has. Humans write messy bullets, sometimes starting mid-thought, sometimes verb-first, sometimes outcome-first. When the rhythm is too clean across 15 bullets, it’s a model speaking, not a person.
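The metronome quality is crude enough to approximate in code. This is a rough heuristic, not how any recruiter actually screens: it treats a shared "-ed" opener and an identical clause count across every bullet as suspicious uniformity.

```python
def rhythm_uniformity(bullets: list[str]) -> bool:
    """True when every bullet opens with an -ed verb AND splits into the
    same number of clauses — the too-clean rhythm described above."""
    openers = {b.split()[0].lower().endswith("ed") for b in bullets}
    clauses = {1 + b.count(",") + b.count("—") for b in bullets}
    return openers == {True} and len(clauses) == 1
```

Fifteen bullets that all pass this test are a model speaking; a human set fails it almost immediately, because humans start mid-thought, vary length, and forget the outcome clause half the time.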
Signal 6: Sector-language mismatch
A warehouse operations CV that talks about “optimising stakeholder engagement” or “delivering transformational outcomes”. A graduate CV using the phrase “operational excellence frameworks”. The vocabulary doesn’t match the role. AI tools trained on senior corporate text import that vocabulary into junior or non-corporate roles where nobody in the actual industry speaks like that. Had a finance candidate last quarter whose CV said “orchestrated cross-functional alignment to drive enterprise-wide value realisation”. She was a junior treasury analyst. Nobody in treasury speaks like that. The model did.
Signal 7: Generic cover letter, identical company
Multiple applications from the same candidate to different roles at the same company group, with cover letters that are word-for-word identical except for the role title. This is the AI-at-scale tell. Candidates feed a job description into ChatGPT, get a cover letter, swap the role name, send it. The recruiter system catches it because we see all the applications in one place. This one’s fatal — not because of AI per se, but because the laziness signal it sends is unforgivable.
What happens when we catch it
Here’s the part that surprises candidates: most of us don’t reject outright. The legal and reputational risk of rejecting on suspicion alone is too high, the false positive rate would burn good candidates, and frankly the AI-written CV is rarely the worst CV in the stack on a busy week.
What actually happens is a flag-for-follow-up. The candidate moves from interview-list to maybe-list. If the role is competitive and we have stronger candidates, the AI-flagged one quietly drops. If the role is hard to fill and the experience looks credible, we bring them in for interview with a specific set of probing questions ready. “Walk me through the 30% productivity improvement on bullet two. What was the baseline. How was it measured.”
The honest catch rate at the screening stage is low — I’d estimate the 7% I flag visibly is the ceiling, and the actual fail-rate at interview from undetected-on-paper AI CVs is much higher, probably 25-30% of AI-dumped applicants. The CV gets you the meeting. The interview is where the AI-written-but-can’t-defend-it candidates collapse. They’ve optimised for the wrong checkpoint. The screen filter is permeable. The conversation isn’t. This is the same dynamic that plays out inside the ATS, where keyword optimisation gets you past the system but doesn’t help you in the room.
The cost to the candidate is invisible. They never hear “we suspected AI”. They hear “we went with another candidate”. They draw the wrong lessons and apply the same approach to the next role. That’s the real damage of AI-dumping a CV. It doesn’t look like rejection — it looks like bad luck.
How to use AI without getting caught
Five rules. I’ve watched candidates use AI well for two years now and none of them get flagged. The pattern is consistent.
Rule 1: AI for first-draft, you for the final 40%. Get the AI to produce a starting draft from your real experience, then physically rewrite at least four out of every ten sentences in your own words. This is the single highest-leverage rule. AI gets you to 60%. You finish.
Rule 2: Strip AI’s favourite verbs. Either ban them in your prompt (“never use the words leverage, spearhead, results-driven, dynamic, passionate, cross-functional, holistic, robust”) or grep them out manually after. Replace each with the verb you’d actually use describing the work. “Spearheaded a new pricing strategy” becomes “Built a new pricing model” or “Wrote the pricing review the team adopted”. Specificity beats grandeur.
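The "grep them out" step can be automated as a simple substitution pass. The mapping below is illustrative, drawn from the article's own examples rather than a complete ban list; note it lowercases the swapped-in verb, so you still re-read the result:

```python
import re

# Illustrative buzzword → plain-verb mapping; empty string means delete.
SWAPS = {
    "spearheaded": "built",
    "leveraged": "used",
    "dynamic": "",
    "robust": "",
}

def strip_buzzwords(text: str) -> str:
    """Replace each banned word, then collapse the leftover whitespace."""
    for word, plain in SWAPS.items():
        text = re.sub(r"\b" + re.escape(word) + r"\b", plain, text,
                      flags=re.I)
    return re.sub(r"\s{2,}", " ", text).strip()

print(strip_buzzwords("Spearheaded a robust pricing strategy"))
# → "built a pricing strategy"
```

The substitution is the easy half; the rule's real work is the manual step of choosing the verb you would actually use describing the work.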
Rule 3: Defend every bullet. Read each bullet on your CV and ask: could I answer three follow-up questions on this in an interview without flinching. If not, the bullet is wrong. Either it’s AI-inflated or it’s vague. Rewrite until you can defend every line. This single test eliminates the phantom metric problem entirely. Tools like ChatGPT are useful here too — paste your draft and ask it to generate the interview questions a recruiter would ask about each bullet, then test yourself.
Rule 4: Vary structure deliberately. Mix bullet shapes. Some short, some longer. Some verb-first, some outcome-first. Some with metrics, some without. The goal isn’t randomness — it’s the natural messiness of how a human actually writes about their own work. Identical-structure bullets are the loudest signal-five tell, and varying them is the easiest fix.
Rule 5: Read it aloud. This is the test that catches everything else. Read the CV aloud at normal speaking pace. Anything that sounds like a corporate brochure, a LinkedIn thought-leader post, or a McKinsey memo, rewrite. Anything that sounds like you describing your work to a friend at the pub, keep. Your voice on paper should match your voice in the room. That’s the whole game.
The bigger picture
AI detection in recruitment is asymmetric, and the asymmetry is permanent.
On one side, candidates have access to better and better models. ChatGPT in 2026 produces output substantially harder to spot than ChatGPT in 2024. The buzzwords have softened. The em-dash density has dropped slightly. Models are starting to produce bullets with deliberately varied structure if asked. Detection on the page alone is getting harder year by year.
On the other side, recruiter pattern-matching evolves too. I read more CVs in 2026 than I did in 2024, and a higher proportion of them are AI-influenced, which means my baseline for what AI looks like keeps recalibrating. I notice tells now I wouldn’t have noticed eighteen months ago. Tonal whiplash between the CV and the cover letter, for instance, only became a reliable signal once enough candidates were using AI for one but not the other. Detection methods I haven’t articulated yet are forming below my conscious awareness, and they’ll surface when the current methods stop working.
The arms race is permanent because the underlying incentive is permanent. Candidates want to optimise their applications. Recruiters want to find people who can actually do the job. Those goals are compatible at the level of AI-as-editor and incompatible at the level of AI-as-ghostwriter. The candidates who win this race long-term aren’t the ones who use the cleverest AI workflow. They’re the ones who use AI to think faster and write better — but not to write final.
Put differently: AI is a tool for compressing the time between having something to say and saying it well. It is not a tool for not having something to say.
The 3-point summary
If you take three things from this:
One. AI is now used by the majority of candidates, and using it isn’t the problem. Dumping it without editing is the problem. The catch rate of AI-assisted CVs is near-zero. The catch rate of AI-dumped CVs is roughly 7% caught at screen and another 25-30% failed at interview when they can’t defend the words on the page.
Two. The 7 signals — em-dash spike, buzzword density, tonal whiplash, phantom metric, identical rhythm, sector mismatch, generic cover letters at scale — are the framework recruiters use, consciously or not. Test your CV against them before you submit. Two signals raise a flag. Three or more is fatal.
Three. Use AI as an editor, not a ghostwriter. Bullet-dump your real work, get the AI to tighten it, rewrite at least 40% in your own words, ban the buzzwords, defend every bullet, vary your structure, and read it aloud. Do all of that and you’ll never be caught, because there’ll be nothing to catch. Your CV will be a faster, cleaner version of you — which is exactly what AI was supposed to be for.
If you want a side-by-side primer on what recruiters notice in real-time, see can recruiters tell you used AI. If you want to know which AI tool to use for the editor-not-ghostwriter workflow, the best AI resume builders 2026 roundup is the next stop.