Claude Mythos - the AI model too dangerous to release
Anthropic just told the world: we built something we can’t give you. Claude Mythos - their most powerful model ever - won’t be released publicly. Instead, they’re launching Project Glasswing, a gated program for roughly 40 handpicked organizations.

Rumors had been swirling for weeks. Leaks, speculation, half-truths. Now Anthropic has published an official blog post and confirmed it - yes, Claude Mythos is real. And no, they don’t plan to release it.
Why? Because what it can do rewrites the rules of cybersecurity.
Thousands of flaws nobody knew about
Mythos Preview was tested against every major operating system and browser. The result? Thousands of previously unknown security vulnerabilities - zero-days, flaws the software makers had no idea existed.
Every major OS. Every major browser. Thousands. Not hundreds - thousands.
This isn’t a model that writes your emails. It’s a model that finds weak spots in infrastructure used by billions of people.
Project Glasswing - who got the keys?
Instead of opening the model to everyone, Anthropic created a program called Project Glasswing. Gated access for roughly 40 organizations tasked with patching these vulnerabilities before someone exploits them.
The partner list reads like a tech industry roster:
- Amazon Web Services, Apple, Google, Microsoft, Nvidia
- Broadcom, Cisco, CrowdStrike
- JPMorgan Chase - the only financial firm on the list
Anthropic committed $100 million in compute credits to the program. That’s not a PR gesture. At that price tag, they’re expecting real results.
Why not release it publicly?
That’s the critical question. The model has what Anthropic describes as “strong agentic coding and reasoning capabilities.” It doesn’t just find vulnerabilities - it can write code to exploit them.
Picture a tool that automatically scans systems and writes working exploits. In the hands of security researchers - incredible. In the wrong hands - catastrophic.
Anthropic published a detailed assessment of Mythos’ cyber capabilities at red.anthropic.com. The transparency is there. But access? Only for the chosen few.
Precedent or marketing?
A skeptic would say this is the best ad campaign ever. “Our model is TOO POWERFUL to release” sounds like a marketing team’s dream.
And there’s probably some truth to that. But facts are facts - thousands of zero-days aren’t a slogan. They’re concrete vulnerabilities that Glasswing is supposed to help fix.
This is the first time an AI company has said outright: we have something too dangerous for the public. Not “too expensive.” Not “not ready yet.” Too dangerous.
Will other companies follow this path? Is this the beginning of models that never reach end users?
My take
I think Anthropic just put the rest of the industry in an awkward spot. If you’ve got a model that finds thousands of zero-days - do you release it and take the risk, or lock it down and give it to a select few?
The answer seems obvious. But it also means the best AI tools will only be available to the biggest players. $100M in compute credits doesn’t go to small security firms. It goes to Apple, Google, and Microsoft.
This could signal something bigger - a new class of AI models that are, by design, not for everyone. And honestly? I’m not sure I have a problem with that. Some tools should have restricted access. The question is - who gets to decide who’s on the list?
AITU #04 - weekly AI/tech news roundup.
Sources
- Anthropic - “Project Glasswing” (Apr 7, 2026)
- Anthropic - “Assessing Claude Mythos Preview’s cybersecurity capabilities” (Apr 7, 2026)
- TechCrunch - “Anthropic debuts preview of powerful new AI model Mythos” (Apr 7, 2026)
- Fortune - “Anthropic is giving some firms early access” (Apr 7, 2026)
- VentureBeat - “too dangerous to release publicly” (Apr 7, 2026)
- The Hacker News - “Thousands of Zero-Day Flaws” (Apr 7, 2026)
- Simon Willison - “Project Glasswing” (Apr 7, 2026)