ok so 35%. That is the number that wrecked my week. 35% of candidates in a Fabric study of over 50,000 people were cheating with AI by December 2025, up from 15% six months earlier. i found that stat at like 1 AM on a Wednesday while trying to prove my buddy Derek wrong. Derek runs hiring at a SaaS company, maybe 400 people, and he had texted me "i think a third of my candidates are getting AI help live and i literally cannot tell which ones." i told him no way, a third seemed insane. Turns out he was basically right. i felt dumb.
That sent me down a month-long rabbit hole pulling every stat i could find on AI interview cheating. i only kept the ones with real sources, because every article i ran into would say "studies show" and then just not link anything lol. i could not find one place that had all the numbers, so this is me trying to be that place. It got long. i do not care.
Back to Fabric. Their deeper study of 19,368 interviews found 38.5% of all candidates got flagged. Technical roles sat at 48%, sales at 12%, roughly a 4x gap. Junior candidates with zero to five years of experience cheated at nearly double the rate of senior people. nearly double. my friend Sarah mentors new grads and she says a lot of them do not even view it as cheating, which honestly tracks with the one-third guess Derek texted me. She compared it to using a calculator on a math test and i told her that comparison would make hiring managers lose their minds lol.
The number that spooked Derek the most when i showed him was this. 61% of cheaters scored above the 7.0 passing threshold and would have gone to the next round undetected. More than half the people using AI help are getting through. Companies catch them later when the person falls apart in the first month, or they never catch them at all. Could some of those 61% be legitimately good and just wanted an edge? maybe. but the performance data says otherwise. Derek told me about one hire on his team last year who completely fell apart in the first sprint, and it took three months to manage that person out. That is not an anecdote that lives in a vacuum either. direct costs for a single bad hire in a hundred-fifty-thousand-dollar engineering role run 30 to 150% of first-year earnings. forty-five thousand dollars minimum. 42 days average time to fill before a restart. forty-five thousand dollars and 42 days because someone had ChatGPT whispering in their ear during a system design round lol
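if you want to sanity-check that bad-hire math yourself, here is the back-of-napkin sketch i ran. the salary and the 30 to 150% range are the figures quoted above; everything else is just arithmetic, not a real cost model.

```python
# back-of-napkin cost of one bad engineering hire,
# using only the figures quoted above (not a cost model)
salary = 150_000                  # first-year earnings for the role
low_pct, high_pct = 0.30, 1.50    # direct cost as a share of first-year pay
time_to_fill_days = 42            # average time to fill before a restart

low_cost = salary * low_pct
high_cost = salary * high_pct
print(f"direct cost range: ${low_cost:,.0f} to ${high_cost:,.0f}")
# -> direct cost range: $45,000 to $225,000
print(f"plus {time_to_fill_days} days to refill the seat")
```

so the floor of that range, by their own numbers, is forty-five thousand dollars, before you even count the 42 days of an empty seat.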
i asked Derek how he thought people were pulling it off and then Fabric actually confirmed his guess with a breakdown. dedicated AI tools, Cluely and Interview Coder and things like those, accounted for 45%. LLM voice mode, basically talking to ChatGPT on a second device, was 34%. tab switching and secondary screens 18%. live human help only 3%. my buddy Marcus works in cybersecurity and when i showed him that breakdown he said the move from improvised methods to dedicated tools reminds him of how phishing went from amateur emails to organized kits you can buy online. that stuck with me because it means the floor keeps going up. and here is the part that got me. among candidates who interviewed multiple times 47% never cheated in any interview. fine. 30% cheated in every single one. every. single. one. the other 23% were situational, flip a coin depending on the day. but that 30%, those people are not experimenting. that is standard procedure for them. which is exactly what Derek suspected when he texted me that night about his third of candidates.
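i kept re-adding the breakdowns to make sure i had not garbled anything in transcription, so here is the tiny sanity-check script i used. both splits are the Fabric figures from above, nothing new.

```python
# the two Fabric breakdowns from above, checked to sum to 100%
methods = {
    "dedicated AI tools (Cluely, Interview Coder, etc.)": 45,
    "LLM voice mode on a second device": 34,
    "tab switching / secondary screens": 18,
    "live human help": 3,
}
repeat_behavior = {
    "never cheated in any interview": 47,
    "cheated in every single one": 30,
    "situational": 23,
}
for name, split in [("methods", methods), ("repeat behavior", repeat_behavior)]:
    total = sum(split.values())
    print(f"{name}: {total}%")  # both breakdowns print 100%
```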
Karat saw a five-fold increase in cheating detection over two years, which lines up with everything Fabric found. 59% of hiring managers now suspect candidates are faking abilities with AI during live assessments. 62% said job seekers are better at faking than recruiters are at catching it. sixty-two percent. i read that one to Derek and he went quiet for a second. just quiet. then "yeah that sounds about right." 71% of recent job seekers admitted to cheating during hiring, everything from Googling answers to running full AI generators. one tech leader said 80% of candidates used an LLM on their top-of-funnel code test even though they were told not to. my friend Priya runs engineering hiring at a startup, and after she made their take-home explicitly open-book with AI allowed, the quality of submissions barely changed. her theory is most people were already using it, and honestly after looking at all these numbers i believe her.
Timing patterns are kind of funny. Fabric found a 3x increase in cheating from July to September 2025, with a spike in late 2025 that they called a shift from experimental to structural cheating. Sunday interviews had the highest rate at 47.1%, while weekdays sat between 35 and 40%. The Sunday thing makes total sense, people at home on a weekend with nobody watching. Marcus called Sunday "cheat day in more ways than one" and i wish i could tell you i did not laugh but i did. remember that 35% from the top? the Sunday number is 47%. almost half.
Gartner projects by 2028 entirely fabricated candidate profiles will be 25% of the candidate pool. one in four. Derek read that and said "so now i have to worry about whether the person even exists?" i mean. yeah.
Deepfakes are picking up speed too. 17% of HR managers in a mid-2025 survey said they ran into deepfake technology in a video interview. Q1 2025 alone had 179 deepfake incidents, which beat the entire 2024 total. AI deepfake scams rose 700% in 2025, and deepfake fraud losses in the US roughly tripled, from 360 million to 1.1 billion dollars. Sarah told me about a candidate last month whose facial lighting looked slightly off compared to the room behind them. She could not prove anything. Said it was weird though.
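the "tripled" claim checks out, by the way. here is the three-line version i ran to convince myself, using only the two loss figures quoted above.

```python
# quick check on the deepfake fraud loss figures quoted above
losses_before = 360_000_000     # US deepfake fraud losses, prior year
losses_after = 1_100_000_000    # US deepfake fraud losses after the jump
growth = losses_after / losses_before
print(f"{growth:.1f}x")         # -> 3.1x, i.e. roughly tripled
```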
On the tools side, there are twenty-plus interview AI tools on the market right now, from browser extensions to desktop apps. Pricing runs from twelve dollars a month for InterviewMan on the annual plan up to two hundred ninety-nine dollars for Interview Coder, which only does coding rounds. The Cluely breach in mid-2025 exposed 83,000 users, including which interviews each person used the tool in, after hackers found an admin password in a public GitHub repo. That one rattled a lot of people.
i use InterviewMan, and the thing i care about most is whether a tool protects your data or leaves you hanging. InterviewMan runs as a desktop app, not a browser extension, so it does not show up in proctoring logs. 57,000 users, 4.8 stars from 257 ratings, twenty-plus stealth features, twelve bucks a month on the annual plan. Across all those users i could not find one confirmed detection report on any proctored platform. Whether you think using these tools is right or wrong, the data says the same thing. The percentage of candidates using them is going up, and pretending otherwise puts you behind the 35% who are not pretending.
i sent a draft of this to Derek and Sarah and Marcus and Priya before posting. All four said the same thing in different words. The genie is out, companies need to rethink evaluation because the current system is already broken and the numbers just have not caught up to the conversation yet.
Ready to Ace Your Next Interview?
Join 57,000+ professionals using InterviewMan to get real-time AI assistance during their interviews.
