JobLabs


Can Recruiters Tell If You Used AI? An Honest Recruiter Take

A 12-year recruiter on whether we can spot AI-written CVs, the 8 dead giveaways, and how to use AI without getting caught.

By Alex · Founder & Head of Recruitment Insights
12+ years in recruitment · Updated · 12 min read

It’s 4pm on a Friday. I’ve got 47 CVs to screen for a senior operations role and a train to catch in two hours. The third CV in the stack opens with: “Results-driven operations professional leveraging cross-functional expertise to spearhead transformative outcomes.” I set it aside without reading the rest.

Not because using AI is disqualifying. It isn’t. I set it aside because that opening tells me the candidate didn’t spend ninety seconds editing the AI output before sending it. Whatever else is in the document, I already know the experience section will be three-bullet patterns with suspiciously round percentages and zero specific detail.

So, can recruiters tell if you used AI? Yes. Most of us can spot it inside 10 seconds, and the better recruiters can spot which model you used (ChatGPT, Claude, Gemini all have slightly different fingerprints). But the more useful question is: does it matter, and what do we actually do about it? Twelve years and 15,000+ CVs in, here’s the honest answer — and where this fits into the wider resume guide.

The 8 dead giveaways recruiters spot in seconds

These are the patterns that flag AI use to me before I’ve consciously processed the content. None are individually fatal. Three or more together, and I’m reading with suspicion.

1. The “leveraged / spearheaded / results-driven” cluster. These were corporate buzzwords before ChatGPT existed, but the AI tools trained on resume databases have weaponised them. When I see all three on one CV, I know the candidate fed a job description into ChatGPT with no constraints. I wrote a full breakdown of the 13 buzzwords AI keeps producing if you want the complete pattern.

2. The three-part bullet with parallel verbs. AI loves the rhythm of “Designed X, implemented Y, delivered Z.” Real humans don’t write that way naturally. We write messier bullets, often starting mid-thought, with the verb in the middle. When every bullet on a CV follows the exact same syntactic structure, it’s a model speaking, not a person.

3. Suspiciously round metrics. 25%, 30%, 40%, 50%. These show up constantly because they’re the numbers AI defaults to when it’s hallucinating. Real metrics from real projects look like 23%, 41%, 17%. If I see “increased efficiency by 30%” three times on the same CV, I’m noting it for the interview.

4. The cover letter opener “I am writing to express my keen interest.” I get this in roughly four of every ten cover letters now. It’s the AI defaulting to formal English business correspondence as it was taught in textbooks. Nobody writes like this in 2026 unless they fed a prompt and didn’t edit.

5. Em-dash overuse in odd places. This one is a specific model fingerprint. ChatGPT uses em-dashes far more than humans do, and it puts them in places British writers wouldn’t (mid-bullet, between two short clauses). When a CV is studded with em-dashes that don’t quite earn their keep, the model’s hand is showing.

6. Industry jargon that doesn’t fit the candidate’s level. A graduate CV that uses phrases like “stakeholder alignment frameworks” or “operational excellence methodologies” raises my eyebrow. Either the candidate is wildly senior for their experience, or the AI has imported vocabulary from a director-level training set into a junior application. Nine times out of ten, it’s the second.

7. “Furthermore” and “Moreover” inside CV bullets. No human starts a CV bullet with “Furthermore.” It’s the model treating the bullet list like an essay, transitioning between points the way it would in flowing prose. Dead giveaway.

8. Identical paragraph rhythm. Every sentence on the CV is roughly the same length. Every bullet has roughly the same word count. Real human writing has natural variation. AI writing is metronomic. Once you see this pattern, you can’t unsee it.

What we actually do when we suspect AI

Here’s the bit nobody tells you. Most recruiters don’t pause the screening process to run a detection test. We don’t have time. The CV either makes the shortlist or it doesn’t, and a polished AI CV often passes the initial screen because it hits all the right keywords.

What we do instead is mark the CV mentally and probe in the interview.

If a candidate’s experience section says “Drove 32% improvement in operational efficiency through cross-functional process optimisation,” I’m going to ask: what was the baseline number? What was the specific process? Who was on the team? What was the hardest decision you made along the way? What didn’t work?

A candidate who lived that achievement answers in 30 seconds with specifics. A candidate whose AI invented it stalls, hedges, gives generic answers. The whole interview tilts at that moment. I’m no longer assessing fit. I’m checking whether anything on the CV is real.

About 1 in 6 candidates fail this test in interviews I run. They get rejected, but the rejection officially cites “communication style not aligned with the role” or similar. Hiring teams almost never write “we suspect this CV was AI-fabricated” in feedback. Legal risk is too high.

The 4 things AI detection software actually catches (and what it doesn’t)

A few candidates ask me whether tools like GPTZero and Originality.ai are reliable. Honest answer: they catch maybe 70% of obviously AI-generated text and they false-positive on 15-30% of perfectly human writing. Here’s what they actually measure.

Perplexity is how predictable each word is given the words around it. AI writing is more predictable, because the model is literally generating the most likely next word. Human writing has more surprising word choices. Useful in long passages, almost worthless on short CV bullets.

Burstiness is the variation in sentence length and complexity. Human writing bursts. Long sentence, short sentence, long sentence, fragment. AI writing is steadier. Detection software flags steady text as suspicious.

Pattern matching against known AI outputs. The detectors maintain databases of common AI phrases (“delve into,” “navigate the landscape,” “in today’s fast-paced”) and flag documents containing many of them.

Source comparison. Originality.ai cross-references against indexed AI training data. If your text matches passages used in model training, it scores high.
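Two of these measures are easy to sketch in code. Here is a minimal illustration in Python of burstiness (variation in sentence length) and phrase pattern matching; the phrase list is the one quoted above, but the implementation is a toy for demonstration, not what any real detector ships:

```python
import re
import statistics

# Stock AI phrases quoted in the article; real detectors maintain far
# larger databases.
AI_PHRASES = ("delve into", "navigate the landscape", "in today's fast-paced")

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths in words; low = metronomic."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

def phrase_hits(text: str) -> int:
    """Count occurrences of stock AI phrases."""
    lower = text.lower()
    return sum(lower.count(p) for p in AI_PHRASES)

# Metronomic, AI-style text vs. messier human-style text.
steady = "Designed the system. Implemented the rollout. Delivered the results."
human = ("Fixed the 2019 stocktake bug. Took three weeks, mostly arguing "
         "with the warehouse team about scanner firmware, but we got there.")

print(burstiness(steady))  # 0.0 — every sentence the same length
print(round(burstiness(human), 1))
print(phrase_hits("We delve into the data to navigate the landscape."))
```

The steady text scores zero variation; the human text bursts. This is also why these measures are nearly worthless on short CV bullets: two or three sentences give no stable statistics.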

What they don’t catch: AI text that’s been heavily edited by a human, AI text that’s been translated and back-translated, hybrid documents where the human wrote the structure and AI filled in the language, and any AI output where the user used good prompting (specific constraints, banned words, voice samples).

Most ATS providers don’t license these tools. The cost-per-scan combined with the false positive rate creates legal exposure that HR departments won’t accept. The biggest detector remains the human reading at 4pm on a Friday.

Why some AI-written CVs DO get through (and the brutal trade-off)

Here’s the part candidates don’t expect: generic AI CVs often pass the initial filters more easily than personal, well-crafted CVs do.

Why? ATS systems match keywords from the job posting. AI tools fed the job description spit out a CV stuffed with exactly those keywords, in the exact phrasing the JD used. The match score is artificially high — the mechanism I break down in how the ATS really works. The CV gets pulled into the recruiter’s queue.

Then comes the trade-off. Once a human reads the CV, the same generic phrasing that beat the ATS now flags as AI-generated. The candidate either gets rejected at screen, or makes it to interview where the gap between the polished CV and the actual person becomes painfully obvious.

The candidates who get hired use AI differently. They use it to structure rather than to generate. They write the raw material themselves (rough bullets, real numbers, actual project names), then ask the AI to tighten the language while preserving the specifics. Tools like ChatGPT, Teal and Rezi all support this workflow if you constrain the prompt properly. The output reads like a polished version of the candidate, not a polished version of nobody in particular.

This is the difference between AI as a ghostwriter (caught) and AI as an editor (invisible).

The honest fix: how to use AI without getting caught

The fix is process, not avoidance. Here’s the workflow I see working consistently:

  1. Always start from real material. Open a blank document and bullet-dump your actual experience. Real numbers. Real project names. Real problems you solved. Don’t ask the AI to write from scratch. It will hallucinate, and the hallucinations are what get caught.

  2. Constrain the AI hard. Your prompt should specify: banned words (results-driven, leveraged, spearheaded, dynamic, passionate), max 18 words per bullet, British English spelling, no em-dashes, no buzzwords, preserve specific numbers exactly as given. Without these constraints, the model defaults to the patterns I listed above.

  3. Replace 30% of every output with your own phrasing before saving. Open the AI output and the original side by side. For every bullet, change at least one word, ideally to something only you would write. This breaks the AI rhythm and inserts genuine voice.

  4. Read the whole CV aloud. If it sounds smooth, corporate, and like a LinkedIn thought-leader post, it’s wrong. If it sounds like you describing your work at the pub, it’s right. Real voice has hesitation, emphasis, slightly awkward phrasing in places.

  5. Test against the 8 dead giveaways above before sending. Run a buzzword search. Count em-dashes. Check that no bullet starts with “Furthermore.” Check that your metrics aren’t all round numbers. If anything fails, rewrite that section.
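Step 5 can be partly automated. A rough self-check sketch in Python, where the buzzwords and round numbers come from the article but the exact patterns and thresholds are my guesses rather than anything a recruiter actually runs:

```python
import re

# Buzzwords and round metrics named in the article; thresholds are guesses.
BUZZWORDS = ("results-driven", "leveraged", "spearheaded", "dynamic", "passionate")
ROUND_METRICS = re.compile(r"\b(25|30|40|50)%")

def check_cv(text: str) -> list[str]:
    """Return a list of giveaway flags found in the CV text."""
    flags = []
    lower = text.lower()
    for word in BUZZWORDS:
        if word in lower:
            flags.append(f"buzzword: {word}")
    if text.count("\u2014") > 2:  # em-dash overuse
        flags.append("too many em-dashes")
    if re.search(r"^\s*[-*]?\s*(Furthermore|Moreover)", text, re.M):
        flags.append("essay transition in a bullet")
    flags.extend(f"round metric: {m}%" for m in ROUND_METRICS.findall(text))
    return flags

cv = ("- Spearheaded change, increased efficiency by 30%.\n"
      "- Furthermore, leveraged synergies.")
for flag in check_cv(cv):
    print(flag)
```

Anything this flags, rewrite by hand; a clean pass is necessary but not sufficient, since rhythm and jargon-level mismatches still need a human read.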

The exact prompt templates I recommend, with the constraint syntax baked in, are in my ChatGPT prompts for resume guide. They’re built specifically to defeat the patterns recruiters spot.
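For illustration only, the constraint style from step 2 could be assembled into a prompt like this; the wording of the template is mine, not the guide's actual text:

```python
# Illustrative sketch: bakes the step-2 constraints into an editing prompt.
# The template wording is an assumption, not the author's exact recommendation.

BANNED = ("results-driven", "leveraged", "spearheaded", "dynamic", "passionate")

def build_prompt(raw_bullets: str) -> str:
    """Wrap the candidate's raw bullets in hard editing constraints."""
    rules = "\n".join([
        f"- Never use these words: {', '.join(BANNED)}.",
        "- Maximum 18 words per bullet.",
        "- British English spelling.",
        "- No em-dashes and no buzzwords.",
        "- Preserve every number exactly as given; invent nothing.",
    ])
    return ("Tighten these CV bullets. Follow every rule:\n"
            f"{rules}\n\nBullets:\n{raw_bullets}")

print(build_prompt("- Led stock system migration, cut picking errors 17%"))
```

The point of spelling constraints out one per line is that vague instructions ("make it professional") trigger exactly the default patterns listed above, while explicit banned words and hard caps do not.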

The “post-AI signal” recruiters now look FOR (counter-intuitive)

Here’s something that’s shifted in the last 12 months. Because AI text is now everywhere, recruiters have started actively looking for signs of human authorship as a positive signal. These are the things I now register as “this person wrote this themselves”:

Hand-fixed typos. A CV with one or two minor errors that have been corrected (you can sometimes tell by formatting inconsistency around the fix) reads as authentically human. Pristine grammar across 600 words is a flag, not a feature.

One slightly imperfect sentence. A bullet that’s structured a bit oddly, ends mid-thought, or uses an unusual word choice. AI doesn’t make these moves. Humans make them constantly. Don’t sand them all down.

Specific, weird, niche details. “Led the team that fixed the 2019 stocktake bug” beats “Drove operational improvements” every time. AI won’t invent the 2019 stocktake bug because it has no way to know about it. Specificity is uncopyable.

Proactive disclosure. Candidates who write something like “I drafted this cover letter with ChatGPT and edited it for my own voice” get bonus points from me. It’s honest, it’s self-aware, and it tells me the candidate has thought about their tooling. I’d rather hire that person than someone pretending they hand-wrote everything.

British-specific reference points. UK recruiters reading UK candidates can spot when the CV’s reference points are subtly American (using “Fortune 500” instead of “FTSE 250,” writing “labor” not “labour,” referring to “401k” rather than “pension”). When the cultural fingerprint of the CV doesn’t match the candidate’s stated location, AI is usually involved.

The signal humans are now sending each other is: I am a person who wrote this. AI users who don’t sand off the human-ness keep that signal intact.

My verdict

Use AI, edit ruthlessly, leave the seams visible — recruiters are no longer trying to catch you, we’re trying to find you under the polish.

FAQs

Will Applicant Tracking Systems detect AI in 2026?

Most major ATS platforms (Workday, Greenhouse, SmartRecruiters, iCIMS) still don’t run AI-content detection on incoming CVs as of April 2026. The reasons are practical: the tools have a 15-30% false positive rate on legitimate human writing, the legal risk of auto-rejecting a real candidate is high, and the cost-per-scan adds up across thousands of applications. A handful of enterprise employers pilot it for graduate schemes where volume is enormous, but in standard recruitment, the human reading at 4pm Friday is still the detector that matters.

Should I tell the recruiter I used AI?

Yes, briefly, and only if asked or if it fits naturally. A line like “I drafted this with ChatGPT and edited heavily for my own voice” in a cover letter signals self-awareness and honesty, both of which I value. What I don’t want is a defensive paragraph explaining your AI workflow. Treat it the way you’d treat using spellcheck. Useful tool, not a confession.

Is GPTZero accurate enough to rely on?

Not for hiring decisions. GPTZero, Originality.ai and Copyleaks all hover around 70-85% accuracy on long passages and drop sharply on short ones like CV bullets. They flag plenty of human-written text that happens to be polished, particularly from non-native English speakers. I’ve seen well-written graduate CVs scored 95% AI when the candidate wrote every word herself. If a recruiter is making decisions based on these tools alone, they’re making bad decisions.

Can recruiters tell if my LinkedIn About section is AI?

Often, yes. LinkedIn About sections are where AI tells are loudest because most people write theirs once and never touch it. The opening “As a passionate, results-driven professional with a track record of delivering exceptional value” is the digital equivalent of a flashing AI sign. We don’t reject candidates over LinkedIn copy, but it does set expectations low for the CV that follows.

What about AI cover letters specifically? Are they easier to spot than CVs?

Much easier. Cover letters are the worst place to lean on AI because the genre is so formulaic that GPT defaults to a near-identical template every time. The “I am writing to express my keen interest in the position of X at Y” opener appears in roughly 40% of cover letters I now receive. If you’re going to use AI for a cover letter, write the first paragraph yourself, feed that voice into the AI as an example, then only let the model fill in the middle. Or just write the whole thing. It’s 250 words. You can do this.

Does using AI hurt my chances even if the recruiter doesn’t notice?

Indirectly, yes. A generic AI CV passes the ATS keyword filter more easily, so you get more interviews. But you fail those interviews harder because your CV bullets don’t match how you actually talk about your work. The candidates I place reliably are the ones whose CV reads like the conversation we have on the phone. AI-generated CVs create a gap between the paper version of you and the spoken version, and that gap shows up the moment we ask follow-up questions.

Can recruiters tell if I used ChatGPT vs Claude vs Gemini?

The better recruiters can, yes. Each model has a fingerprint. ChatGPT loves em-dashes, the words “delve” and “navigate,” and three-part parallel sentences. Claude leans formal, slightly British, and over-uses “crucially” and “fundamentally.” Gemini defaults to bullet-heavy listicle structures and the phrase “in essence.” After 15,000 CVs, you start to spot the model the same way you spot a regional accent. We don’t record this formally as a hiring criterion, but it’s noted. The fix is the same regardless of model: heavy human editing, banned-word lists, and inserting your specific voice.

Will using AI to write my CV get me blacklisted?

No. There is no industry blacklist for AI-written CVs and there will not be one. The legal and reputational risk of maintaining such a list is too high for any recruiter or employer to touch. What can happen is informal: if you use the same generic AI CV across 30 applications at the same company group, the recruiter notices, and your name gets a quiet flag. The fix is to actually tailor each application. AI itself is not the problem. Lazy AI use sent at scale is the problem.

Should I use AI to rewrite my entire LinkedIn profile?

No. Rewrite section by section with heavy editing, and never let AI write your About section without your voice in the prompt. The About section is where AI tells are loudest because everyone fed the same prompt to ChatGPT and got near-identical openings. Write the first two sentences yourself in your own voice, then let AI structure the rest using those sentences as a tone sample. The Headline should be human-written, full stop: it’s 220 characters, and you can write that in five minutes. The Experience bullets are the safest place to use AI as an editor on your real source material.

What’s the safest way to use AI for a job application?

Use it as an editor, not a ghostwriter. Open a blank document and bullet-dump your real experience with real numbers, real project names, and the actual problems you solved. Then ask the AI to tighten the language without inventing anything, banning the words “leveraged,” “results-driven,” “spearheaded,” “dynamic,” “passionate,” and “cross-functional” in your prompt. Replace 30% of the output with your own phrasing before saving. Read it aloud. If it sounds like a LinkedIn thought-leader post, it’s wrong. If it sounds like you describing your work to a friend, it’s right.
