$ cat fbi-ai-scams-893-million.mdx

FBI: AI scams hit $893 million - first dedicated category in 25 years

Apr 17, 2026 · #fbi #ai scams #deepfake #voice cloning #cybersecurity #ic3 #fraud

For 25 years, the FBI’s Internet Crime Complaint Center has tracked every type of online fraud imaginable. But they’ve never had a category for AI. Until now. In 2025, Americans filed 22,364 AI-related fraud complaints totaling $893 million in losses.

[Image: dark computer screen with a security warning in the background]


The FBI has published its annual IC3 report since 2000. Over a quarter century, the categories evolved - phishing, ransomware, and crypto fraud each got their own sections. But artificial intelligence? That wasn’t on the list. Now it is, and the numbers are staggering.

Why did the FBI create a new category?

Because the problem got too big to ignore.

$893 million in losses isn’t a projection or an estimate - it’s the sum of reported cases. And we know most fraud never gets reported. The real number is likely several times higher.

Here’s what stands out: the FBI didn’t just tack AI onto an existing category. They created an entirely new one. That’s a signal. They’re treating AI-enabled fraud as a distinct phenomenon, not a variation of something they’ve seen before.

What do these scams look like?

The methods are terrifyingly simple to deploy and incredibly hard to detect.

Voice cloning - a few seconds of audio from social media is all it takes. Scammers grab a clip from Instagram, clone the voice, and call the victim’s parents. “Mom, I’ve been in an accident, wire me money.” It sounds exactly like your son or daughter.

Fake faces on video calls - live conversations with a person who looks real but doesn’t exist. Especially common in romance scams and fake job interviews.

Generated ID documents - driver’s licenses, passports, government IDs. Good enough to pass online verification checks.

Synthetic social profiles - entire identities built from scratch. Photos, posting history, social media activity. None of it real.


Where AI changes the game

Business email compromise - someone impersonates a CEO or CFO. With voice cloning, these scams became dramatically more effective. An accountant hears their boss’s voice and wires the money.

Romance scams - fake faces and voices create people who never existed but can maintain conversations for weeks.

Investment fraud - convincing presentations, fake advisors, AI-generated marketing materials that look polished and professional.

Employment scams - fake job interviews where the “recruiter” is a deepfake. You think you’re talking to a hiring manager at a real company. You’re not.


“Deepfake Digital Arrest” - a disturbing new trend

In India, the problem has taken an even darker turn. Reports indicate 92,000 cases of so-called “Deepfake Digital Arrest” scams. The playbook? Fraudsters video-call victims posing as police officers. They use fake faces, uniforms, even forged documents. They tell the victim they’re suspected of a crime and must immediately pay “bail.”

It works because people panic. And the face on the screen looks like a real cop.

The one sentence that should worry everyone

FBI experts put it bluntly - AI enables scaling. One person can run thousands of personalized conversations simultaneously. This isn’t a scammer manually typing messages. AI generates content in real time, adapts language to each victim, responds to questions naturally.

A fraudster used to manage maybe a dozen “romances” at once. Now? Thousands. And every single conversation sounds authentic.


My take

I think this is one of those moments where a government report tells us something we should’ve known already. $893 million is just the reported number. Real losses could be ten times that.

But the bigger point isn’t the dollar amount. It’s that the FBI officially says: AI is a new crime category. Not “in the future.” Not “potentially.” Right now.

How do you protect yourself? Set up a safety word with your family for suspicious calls. Seriously. A simple word someone has to say before anyone wires money. In a world where a voice can be cloned in seconds - that might be the only defense that actually works.

AITU #04 - full episode on YouTube.


$ cd ../