Anthropic bought power for all of Warsaw
Anthropic - the company behind Claude, ChatGPT's competitor - signed a 5 gigawatt compute deal with Amazon. Five gigawatts is the output of several nuclear power plants. What roughly two million people consume. Just to train AI.

A year ago, if someone said "AI is heavy industry", they got looks that said "dude, it's just code". Twelve months later, every AI company is hunting for gigawatts. Not megawatts. Gigawatts.
Anthropic - creator of the Claude models, one of OpenAI's biggest competitors - just signed a deal with Amazon for 5 gigawatts of compute capacity. The same day, it signed a deal with Broadcom for new chips starting in 2027.
The scale is the point: this isn't buying a few servers. This is buying entire new energy infrastructure.
What 5 gigawatts actually is
Concrete comparisons:
- One nuclear power plant produces on average 1-1.5 GW. Five gigawatts is three to five power plants.
- The city of Warsaw draws about 2-3 GW at peak demand. The Anthropic-AWS deal is roughly double that.
- Five gigawatts is roughly the average consumption of two million people.
- All of Denmark consumes about 6-7 GW at peak. This one deal is almost as much as an entire country.
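The comparisons above are simple arithmetic. A minimal sketch, using the article's own rough figures (per-plant output, Warsaw's peak, Denmark's peak are ballpark estimates, not official statistics):

```python
# Back-of-envelope: what 5 GW of contracted capacity compares to.
# All reference figures are rough midpoints of the article's ranges.
DEAL_GW = 5.0

NUCLEAR_PLANT_GW = 1.25   # midpoint of 1-1.5 GW per plant
WARSAW_PEAK_GW = 2.5      # midpoint of 2-3 GW peak demand
DENMARK_PEAK_GW = 6.5     # midpoint of 6-7 GW peak demand

print(f"Nuclear plants: {DEAL_GW / NUCLEAR_PLANT_GW:.1f}")          # 4.0
print(f"Warsaws at peak: {DEAL_GW / WARSAW_PEAK_GW:.1f}")           # 2.0
print(f"Share of Denmark's peak: {DEAL_GW / DENMARK_PEAK_GW:.0%}")  # 77%
```

Nudge the assumptions up or down and the headline conclusion barely moves: the deal sits between "a major European capital" and "a small country".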
And all of this to train one family of AI models: Claude. In a few years Claude will have a successor, then another. The infrastructure has to be ready before then.
Why Anthropic needs this
Training a language model like Claude Opus/Sonnet requires:
- GPU clusters - tens or hundreds of thousands of Nvidia cards (or custom chips from Google/Amazon).
- Power - each card draws on the order of a kilowatt. A 100,000-card cluster needs 100-200 MW for the chips alone.
- Cooling - training generates so much heat that data centers are sited by rivers to cool with water.
- Network - real-time data exchange between cards requires specialized interconnect infrastructure.
A model at GPT-5 / Claude Opus 5 scale requires a cluster with gigawatt-scale power behind it. Without that power, there is no progress.
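The cluster figures above can be sanity-checked with simple arithmetic. A minimal sketch, assuming roughly 1 kW per accelerator card and a PUE (power usage effectiveness) factor of about 1.4 to cover cooling and networking overhead - illustrative assumptions, not Anthropic's actual numbers:

```python
def cluster_power_mw(num_cards: int, watts_per_card: float = 1000.0,
                     pue: float = 1.4) -> float:
    """Estimate total facility power for a GPU cluster, in megawatts.

    PUE (power usage effectiveness) scales chip power up to account
    for cooling, networking, and other facility overhead.
    """
    chip_power_w = num_cards * watts_per_card
    return chip_power_w * pue / 1e6  # watts -> megawatts

# A 100,000-card cluster lands in the 100-200 MW range cited above:
print(cluster_power_mw(100_000))  # 140.0 (MW)

# And under the same assumptions, a 5 GW (5,000 MW) budget could
# power a cluster of several million cards:
print(5000 / cluster_power_mw(100_000) * 100_000)  # ~3.57 million
```

The exact per-card wattage and overhead factor vary by hardware generation and facility design, but any reasonable choice puts a frontier-scale cluster well into the hundreds of megawatts.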
Deal with Broadcom
Same day - Anthropic signs a second deal. With Broadcom - a company known for custom chips for Google (TPUs), Apple (wireless chips), and Meta. The goal: new chips designed specifically for Claude, starting in 2027.
This shows Anthropic doesn't want to depend solely on Nvidia. Just as Google has its TPU and Tesla is building its AI5, Anthropic will have custom silicon. It costs tens of billions of dollars but buys independence and higher efficiency.
What this means for you
Two direct consequences:
1. Electricity bills in Europe. AI data centers in the US are pushing up demand for power and chips, and prices rise globally. Europe is connected to the global energy market, so these giga-deals eventually translate into electricity prices in Warsaw, Berlin, and Paris. Maybe not today, maybe in two years. But it's coming.
2. AI usage prices. The more Anthropic spends on infrastructure, the higher Claude's prices - costs that eventually get passed on to customers. ChatGPT just doubled in price (per the GPT-5.5 news). Claude will follow soon enough.
My take
In my opinion, this is the first week where you can clearly see that AI is heavy industry. Not a startup with a laptop. Entire infrastructures.
For two years everyone repeated that "AI is accessible, democratic, anyone can train a model". That was true for small models. For Claude Opus, GPT-5, Gemini Ultra - this is industry on the scale of petrochemicals. It costs billions of dollars. It requires infrastructure. It hires engineers by the hundreds.
In five years this market will look a lot like commodity markets - a few companies controlling the chain from power through chips to distribution. And that's a signal to buy (or avoid) the right stocks.
Sources
- Motley Fool - “Anthropic Just Announced Huge News for Alphabet and Broadcom” (April 22, 2026)
- TechCrunch - “Exclusive: Google deepens Thinking Machines Lab ties” (April 22, 2026)
- Crescendo AI - “Latest AI News 2026”
- Bloomberg - “Google Releases New AI Agents” (April 22, 2026)
- Ofox AI - “GPT-5.5 Released: Infrastructure Context” (2026)