$ cat quitgpt-pentagon-anthropic-openai.mdx

QuitGPT: Why Millions Uninstalled ChatGPT in One Weekend

Mar 26, 2026 · #quitgpt #chatgpt #anthropic #openai #pentagon #ai #artificial-intelligence #ai-ethics #claude #sam-altman #dario-amodei

The Pentagon blacklisted Anthropic, OpenAI signed a deal hours later, ChatGPT uninstalls surged 295%, and Claude hit #1 on the App Store.

US Capitol building in Washington D.C.


Late February 2026. The Pentagon wants AI companies to hand over full control of their models. One company says no. Hours later, its competitor signs the contract. Then two and a half million people join a boycott of ChatGPT.

That is not a simplification. That is literally what happened.

Background — The Pentagon Contracts

In the summer of 2025, the Pentagon awarded contracts worth up to $200 million each to four AI companies: Anthropic, OpenAI, Google, and Elon Musk’s xAI. Anthropic’s Claude was the only model approved for use on classified military networks.

In early 2026, the Pentagon pushed for revised terms. It wanted Anthropic’s AI available for “all lawful purposes” — no restrictions imposed by the company. In other words: give us the keys, we’ll decide what we do with it.

February 25–26 — The Ultimatum

On February 25, Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei and issued an ultimatum: accept the terms by 5:01 PM on February 28 or lose the contract.

Anthropic went public the next day. In a statement on its website, Amodei wrote:

“The new contract language made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons. We cannot in good conscience accede to their request.”

The company was not refusing to work with the military. It was insisting on two things: no mass domestic surveillance, and no autonomous weapons without human oversight. The Pentagon decided that was too much to ask.

February 27 — The Blacklist and the OpenAI Deal

On February 27, the Pentagon formally designated Anthropic as a “supply chain risk to national security.” According to Malwarebytes, this label had previously been reserved for companies from adversary nations — Huawei being the textbook example. It had never been used against an American company.

Anthropic called the designation “unlawful and politically motivated.”

Here is where it gets interesting. That same day — hours after Anthropic was blacklisted — OpenAI signed a contract with the Pentagon. The detail that matters: CEO Sam Altman had publicly backed Anthropic’s position that same morning.


The Fallout — 295% Surge in Uninstalls

People moved fast.

According to Sensor Tower data cited by Sovereign Magazine, ChatGPT uninstalls in the US jumped 295% on February 28. For context, the typical daily fluctuation is around 9%.

Anthropic’s Claude hit #1 among free apps on the US App Store — a first. According to almcorp.com, Claude had been sitting at #42 at the start of the year and trending downward.

QuitGPT organizers at quitgpt.org claimed over 2.5 million people took some form of action — uninstalling, cancelling subscriptions, or sharing the boycott. The number is self-reported and hasn’t been independently verified, but even if inflated, the scale is unlike anything a tech boycott has produced before.

On March 3, protesters showed up outside OpenAI’s San Francisco headquarters. Signs read “Sam Altman is watching you.” Chalk on the sidewalk: “Technology in service of humanity, not war.” Over 875 employees across Google and OpenAI signed an open letter backing Anthropic. They wrote:

“They’re trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.”


Altman Walks It Back

On March 3, Altman posted a memo on X. He admitted the deal “looked opportunistic and sloppy” and that he “shouldn’t have rushed.” He announced revised terms — a prohibition on surveillance of US citizens and on NSA use.

Sounds good, but the details matter. Professor Jessica Tillipman of George Washington University, quoted by MIT Technology Review, pointed out that the revised contract “does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use.” The new clauses rely on existing law — law that the government itself controls. It is a lock where the person being locked out keeps a spare key.

In an internal memo that leaked to The Information, Amodei called OpenAI’s communications around the deal “straight up lies” and “safety theater.” TechCrunch published excerpts.


Where Things Stand

Anthropic has sued the Pentagon over the designation. Microsoft publicly backed Anthropic on the legal question — which is notable given that Microsoft is also OpenAI’s largest investor. The case is ongoing.

The Pentagon is deploying models from OpenAI and xAI on classified networks. Ironically, Claude is still being used in military operations — including in the context of Iran — because of a six-month transition period. The model the Pentagon officially declared a security threat is still running in its systems.


$ cd ../