AI Cold War - Big Tech coalition vs Chinese model theft
Three of America’s biggest AI rivals just did something unprecedented - they started sharing intelligence with each other. The reason? Chinese companies are stealing their models at industrial scale.

Picture this. You spend years and billions of dollars building something. Then someone shows up, asks your product millions of questions, and uses the answers to build a cheaper knockoff. That’s exactly what’s happening in AI right now.
The Frontier Model Forum gets a new job
The Frontier Model Forum was created in 2023 by OpenAI, Anthropic, Google, and Microsoft. It was supposed to focus on AI safety. Now it’s being repurposed into something new: a threat-intelligence operation.
This isn’t a minor pivot. Companies that compete fiercely for every customer are now pooling data about attacks on their models - cooperation these rivals have never attempted before.
How the theft works
The method is called “adversarial distillation.” Sounds complicated, but the principle is dead simple.
You take someone else’s model. You query it millions of times. You collect the responses. Then you use those responses to train your own, cheaper model that behaves similarly to the original.
It’s not traditional code theft. Think of it more like a student who copies a classmate’s answers so many times that they eventually learn to produce similar answers on their own.
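To make that concrete, here’s a minimal Python sketch of the harvest-and-retrain pattern. Everything in it is a placeholder - the query_teacher function, the prompt list, and the file name are illustrative assumptions, not details from any of the reports - but it shows the two steps: collect the target model’s responses at scale, then use them as supervised fine-tuning data for a cheaper student model.

```python
# Illustrative sketch of distillation-style harvesting (hypothetical names).
import json

def query_teacher(prompt: str) -> str:
    """Placeholder for an API call to the target ("teacher") model."""
    # In a real pipeline this would be an HTTP request to the provider's
    # chat endpoint; here it just returns a dummy string.
    return "teacher's answer to: " + prompt

prompts = [
    "Explain gradient descent in one paragraph.",
    "Summarize the causes of World War I.",
    # ...in the reported attacks, millions of these, spread across many accounts
]

# Step 1: harvest prompt/response pairs from the teacher.
with open("distillation_data.jsonl", "w", encoding="utf-8") as f:
    for prompt in prompts:
        answer = query_teacher(prompt)
        # Store each pair in a chat-style supervised fine-tuning format.
        record = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": answer},
            ]
        }
        f.write(json.dumps(record) + "\n")

# Step 2 (not shown): fine-tune a smaller "student" model on
# distillation_data.jsonl so it imitates the teacher's behavior.
```

The point of the sketch is how little machinery this takes: the most valuable part of the original model - its outputs - is exactly what the API hands over with every response.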
The main targets? Three Chinese firms: DeepSeek, Moonshot AI, and MiniMax.
16 million exchanges through fake accounts
The scale is staggering. Anthropic alone revealed that Claude processed 16 million exchanges through roughly 24,000 fraudulent accounts. Each account systematically queried the model, harvesting responses for training data.
24,000 accounts. Not ten. Not a hundred. Someone built serious infrastructure for this.
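Neither Anthropic nor the coverage spells out how those accounts were identified, but the numbers hint at the pattern defenders look for: 16 million exchanges across roughly 24,000 accounts averages out to about 670 queries per account, presumably mostly unique prompts fired off on a schedule. Below is a hedged sketch of that kind of volume-and-novelty heuristic; the function name, thresholds, and log format are illustrative assumptions, not Anthropic’s actual detection logic.

```python
# Hypothetical heuristic for spotting distillation-style harvesting:
# accounts with high query volume and almost no repeated prompts look
# more like scripted data collection than like real users.
from collections import Counter

def flag_suspicious_accounts(logs, min_queries=500, max_repeat_ratio=0.05):
    """logs: iterable of (account_id, prompt) pairs. Thresholds are made up."""
    per_account = {}
    for account_id, prompt in logs:
        per_account.setdefault(account_id, Counter())[prompt] += 1

    flagged = []
    for account_id, prompt_counts in per_account.items():
        total = sum(prompt_counts.values())
        repeats = total - len(prompt_counts)  # queries duplicating an earlier prompt
        if total >= min_queries and repeats / total <= max_repeat_ratio:
            flagged.append(account_id)  # high volume, nearly every prompt is new
    return flagged

# A scripted harvester: 600 queries, every prompt unique.
logs = [("acct-0001", f"synthetic question {i}") for i in range(600)]
print(flag_suspicious_accounts(logs))  # ['acct-0001']
```

Which is presumably part of why the traffic was split across 24,000 accounts in the first place: spread thinly enough, each individual account looks unremarkable.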
OpenAI goes to Congress
OpenAI took things further. The company submitted a formal memo to the House Select Committee on China - the congressional committee focused on strategic competition with Beijing.
That’s a signal this has gone beyond business. When a tech company asks for government intervention, you know the situation is genuinely severe.
The catalyst: DeepSeek R1
What changed the equation? DeepSeek R1 - a reasoning model released in early 2025. It proved that distillation works. And it works well.
R1 achieved results comparable to far more expensive American models. The question everyone in the industry was asking: how do you get that quality at a fraction of the cost? The answer - at least partly - is mass-querying other people’s models.
My take
Here’s the thing. Theft is theft, even when nobody’s picking a lock. But this also exposes a fundamental problem with AI models served over the internet. How do you protect a product that reveals its value with every single response?
I think this is just the beginning. The OpenAI-Anthropic-Google coalition is step one. But if distillation is this effective, technical safeguards alone won’t cut it. We’re going to see regulations, sanctions, maybe even region-based access restrictions for frontier models.
AI cold war? It’s already warming up.
AITU episode 4 - full episode on YouTube.
Sources
- Reuters - “US AI labs share intelligence on Chinese model theft” (April 2026)
- The Information - “OpenAI Memo to House Select Committee on China” (March 2026)
- Anthropic - “Addressing adversarial distillation attempts” (March 2026)
- Financial Times - “Frontier Model Forum pivots to threat intelligence” (April 2026)
- Ars Technica - “DeepSeek R1 and the distillation controversy” (January 2025)
- CNBC - “Google, OpenAI, Anthropic form anti-theft coalition” (April 2026)
- TechCrunch - “Inside the 24,000 fake accounts used to steal AI models” (April 2026)
- Axios - “The AI cold war heats up” (April 2026)