How to Know You're Actually Hiring a Human in 2026

Something quietly changed in hiring over the last eighteen months.

The signals we spent years learning to trust — a polished resume, a confident async video response, a well-articulated answer about strengths and weaknesses — are no longer reliable indicators of the person sitting on the other side of the process.

AI didn't just change how candidates apply. It changed what hiring teams are actually measuring without realising it.

The problem is not that candidates are using AI. Most of them are, and in many cases that is entirely reasonable. The problem is that the line between AI-assisted and AI-impersonated has blurred to the point where some hiring teams cannot tell the difference — and some are not even trying.

Here is what is actually happening, where the real risk sits, and what the most rigorous hiring teams are doing differently in 2026.

What's Actually Happening Out There

The reports started filtering through TA communities in 2024. A candidate performs flawlessly through three rounds of interviews, clears all assessments, and accepts an offer. Two weeks into the role, it becomes clear they cannot do the job. The resume was real. The references checked out. But the async video responses were scripted with AI, the take-home assessment was completed by a different person entirely, and the interview answers were fed in real-time through an earpiece while the candidate read from a second screen.

This is not a hypothetical. It is happening in volume, and it is getting more sophisticated.

The most documented cases involve remote technical hiring — engineering, data, product — where async video and take-home work are common. But it is spreading. Roles in finance, marketing, and People Operations are now seeing the same patterns. And the methods are evolving faster than most hiring processes can adapt.

The current landscape, in plain terms:

AI-generated applications are now table stakes. Most candidates — across every level — are using AI to write or polish their resumes and cover letters. This is not fraud. It is adaptation. The hiring teams still running processes designed to filter on resume quality are measuring AI fluency, not candidate capability.

AI-coached interviews sit in a grey zone. A candidate who used AI to prepare exhaustively for your interview questions is not the same as a candidate who is being fed answers live. Preparation has always been legitimate. The question is whether your interview design tests the thinking, or just the answer.

Live AI assistance during interviews is where the line is clearly crossed. Tools that transcribe the interview in real time, generate answers, and feed them back to the candidate through a second screen are widely available and actively marketed to job seekers. You cannot detect them visually. Standard video calls give you no signal.

AI-impersonated async video responses are the fastest-growing fraud vector. Candidates submit a video response that was pre-recorded, edited, or in some cases generated using their likeness. The face matches the resume photo. The answers are fluent. The person in the video may have rehearsed with AI assistance to the point where their responses are essentially scripted outputs.

Ghost candidates and identity fraud — where the person who interviewed is not the person who shows up on day one — have increased sharply in remote and hybrid hiring. Background checks catch some of this. Skills assessments catch more. But neither is designed specifically for this problem.

Where Hiring Teams Are Getting This Wrong

Mistake 1: Treating this as a technology problem.

The instinct is to fight AI with AI — buy a detection tool, add a layer to the ATS, flag AI-generated text. These tools exist and some are useful at the margins. But AI detection is an arms race you are not winning. Detection models are trained on last year's generation tools. Generation tools improve monthly. By the time a detection product ships an update, the fraud it was designed to catch has already evolved.

Detection is a tactic. It is not a strategy.

Mistake 2: Relying on async video as a verification layer.

Many teams added async video (HireVue, Spark Hire, Willo) after COVID as a way to bring a human element into early screening. In 2022, that was reasonable. In 2026, async video is one of the easiest stages to manipulate. If you are treating a strong async video response as a trust signal, you may be screening for production quality, not candidate quality.

Mistake 3: Ignoring this entirely.

The opposite failure mode. Some hiring leaders have decided that if everyone is using AI, it evens out and they should not worry about it. This logic collapses when you hire someone who cannot perform the role they interviewed brilliantly for. The cost of a bad hire — severance, lost productivity, re-hiring timeline, team disruption — has not changed just because the fraud method has.

Mistake 4: Making process changes without involving candidates honestly.

Some companies have gone hard on anti-AI policies in hiring — requiring declarations, running detection software, rejecting candidates for AI use. Candidates know this and respond by being more covert. Worse, aggressive anti-AI stances signal to strong candidates (who have options) that your culture is adversarial and paranoid. You filter out the honest people and keep the ones who are good at hiding things. The incentive structure is backwards.

What Actually Works

The hiring teams navigating this well are not doing something radically new. They are doing something old, rigorously: designing processes that test thinking, not outputs.

1. Replace output-based assessments with process-based ones.

Instead of asking candidates to submit a finished work product (a strategy doc, a sourcing plan, a data analysis), ask them to walk you through how they would approach the problem — live, out loud, in conversation. Better yet: give them a finished work product with deliberate flaws and ask them to critique it. AI cannot fake genuine critical thinking in real time with follow-up questions. A live problem-solving conversation reveals the person inside ten minutes.

2. Interview design that assumes preparation.

If your interview questions are googleable, candidates will google them (and now AI them). Stop evaluating the quality of their prepared answers. Start evaluating what happens when you follow up. "Tell me about a time you had to make a hard decision with incomplete information" is a mediocre question. "Tell me more about that — what did you know at the time versus what you found out later, and how did that change your assessment of the decision?" is where you find the real signal. AI-fed candidates run out of steam on second and third-order follow-ups. Genuine experience does not.

3. Use live assessments for anything high-stakes.

For roles where a take-home assessment is part of the process, move it to a live, shared-screen format for the final round. The candidate works through the problem in front of you. You can ask questions, explore their thinking, and see how they handle uncertainty. This is not more time-consuming — it replaces a lengthy review of a submitted document with a more diagnostic conversation.

4. Verify identity early for remote roles.

For fully remote positions, introduce a lightweight identity verification step before the final round — not as a punitive measure, but as a transparent part of your process. Frame it the same way you frame a background check: standard, non-negotiable, applies to everyone. Candidates who object are telling you something useful. Most candidates who have nothing to hide will not object at all.

5. Weight in-person or live interaction more heavily.

The fraud that is hardest to execute is a live, unscripted, face-to-face conversation with a skilled interviewer who follows the thread wherever it goes. If you have moved entirely to async-first, remote-only hiring processes, you may be optimising for candidate convenience at the cost of hiring accuracy. Reintroducing at least one unscripted live conversation — video or in-person — for any senior hire is worth the friction.

The Harder Question: What Are You Actually Testing For?

Behind all of this is a question that most hiring teams have not sat with long enough.

If AI can complete your assessments, write the answers to your interview questions, and produce work that is indistinguishable from a strong candidate's — what does that tell you about your assessments and interview questions?

In many cases, the answer is uncomfortable: you were already measuring the wrong things. You were filtering on communication polish, not communication clarity. On confidence, not judgment. On the ability to produce a formatted document, not the ability to think through a problem.

AI has done hiring teams an accidental favour by making visible what was always true: most hiring processes are designed to find people who are good at being hired, not people who are good at doing the job.

The teams that come out ahead in this environment are the ones who use this disruption as a forcing function — to redesign their processes around genuine job-relevant assessment, rather than patching them with detection layers and hoping the problem goes away.

The TPPG Community Is Talking About This

On May 12, we're hosting Hilary Gridley, Head of Core Product & AI at WHOOP, for a live AMA on exactly this territory — how AI is changing what it means to evaluate talent, what tools are worth using, and what the most thoughtful People leaders are doing right now.

This is not a panel discussion with polished talking points. It is a direct conversation with a practitioner who is living this problem at scale.

Reserve your spot for the May 12 AMA →

And in June, we are going deeper on the identity and trust side of AI hiring — with a theme built specifically around the question at the top of this article: how do you know who you are actually hiring?

A Note on Where This Goes Next

The hiring teams that will be in the best position in two years are the ones investing now in two things: better interview design and stronger identity infrastructure. Not because AI fraud is going away — it is not — but because those two investments make your hiring more accurate regardless of what candidates are doing on their end.

The question was never really "how do we stop candidates from using AI?" That ship sailed. The better question is: "Does our process actually surface the people who can do this job, and would we know if it didn't?"

Most teams, if they are honest, do not have a great answer to that question yet.


The People People Group is a community of 6,000+ senior HR and People leaders across North America. We host regular AMAs, events, and conversations with practitioners at the frontier of this work. Apply for membership →
