6 min read

The Scaffolding Yard

How the world’s biggest infrastructure bet became a game of musical chips


There’s a plot of land in Loughton, Essex, that the UK government promised would host “the largest UK sovereign AI datacentre” by the end of 2026.

It’s currently a scaffolding yard.

This is not a metaphor. It is literally scaffolding poles on a muddy field in Essex, described in press releases as the cornerstone of Britain’s AI sovereignty. The company behind it, Nscale, announced its purchase of the land in January 2025. Eight months later, the Guardian discovered the purchase hadn’t actually gone through. It has now. Planning permission still hasn’t.

This scaffolding yard is the perfect symbol for the global AI infrastructure boom: grand announcements, big numbers, very little concrete.

The $700 Billion Promise

The numbers are staggering. Future datacentre leases from the largest cloud companies — Amazon, Oracle, Microsoft — have surged 340% in two years and now exceed $700 billion. Amazon alone plans $200 billion in capital expenditures for 2026. Google: $180 billion. Microsoft: $155 billion.

In the UK, the government parlayed Trump’s state visit last September into a flurry of “sovereign AI” deals. Billions promised. Headlines secured. Ministers beaming.

Then someone looked behind the curtain.

The Guardian’s investigation found that key UK AI deals “are not as they were described in government and corporate press releases.” Critical projects are delayed or look unlikely to happen at all. The “investments” are mostly vague agreements between US tech companies, desperately spun by ministers as economic engines.

This is not unique to Britain. Across the Atlantic, OpenAI’s flagship Stargate project — the $500 billion bet to “secure American leadership in AI” — is developing cracks. OpenAI appears to be dropping out of a major expansion at an Oracle datacentre in Abilene, Texas. The reason? It wants newer chips. By the time the Texas facility finishes construction, the hardware Oracle already bought may be obsolete.

It’s like buying 10,000 iPhones the week before a new model launches.

The Depreciating Asset Problem

Here’s the thing nobody in government wants to talk about: chips are not money.

When the UK government announces “£2 billion in AI investment,” what they’re actually describing is computer chips. Chips depreciate. Some analysts believe they depreciate faster than tech companies admit. The pace of NVIDIA’s architecture releases alone — Blackwell in 2024, Vera Rubin in 2026, Feynman expected in 2028 — means that today’s cutting-edge GPU could be tomorrow’s expensive paperweight.

And these chips are leveraged. Nscale secured billions in loans against its GPUs. When does that debt come due? If the chips are worth less than the loans, who’s left holding the bag?
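A back-of-the-envelope sketch makes the risk concrete. Every figure and the helper function below are hypothetical, invented for illustration; they are not Nscale’s actual loan terms. The point is structural: if GPU collateral is written down over roughly three years while the loan against it amortises over five, the lender ends up under-collateralised well before the debt is repaid.

```python
# Illustrative sketch only: all figures are hypothetical, not any company's
# real terms. It compares the straight-line residual value of GPU collateral
# against the outstanding balance of an amortising loan secured on it.

def collateral_vs_loan(purchase_price, useful_life_years, loan_principal,
                       annual_rate, term_years):
    """Return (year, residual GPU value, outstanding loan balance) per year."""
    # Standard annuity payment for a fixed-rate amortising loan.
    annual_payment = (loan_principal * annual_rate
                      / (1 - (1 + annual_rate) ** -term_years))
    balance = loan_principal
    rows = []
    for year in range(1, term_years + 1):
        balance = balance * (1 + annual_rate) - annual_payment
        # Straight-line depreciation to zero over the useful life.
        residual = max(purchase_price * (1 - year / useful_life_years), 0)
        rows.append((year, round(residual, 1), round(max(balance, 0), 1)))
    return rows

# Hypothetical numbers: $2bn of GPUs written off over 3 years,
# financed by a $1.5bn five-year loan at 8%.
for year, gpu_value, loan_balance in collateral_vs_loan(2000, 3, 1500, 0.08, 5):
    flag = "  <- collateral no longer covers the loan" if gpu_value < loan_balance else ""
    print(f"year {year}: GPUs ${gpu_value}m, loan ${loan_balance}m{flag}")
```

Under these made-up assumptions the collateral covers the loan in year one and falls below it from year two onward, with years still left on the debt. Shorten the useful life (a one-year architecture cadence arguably does exactly that) and the crossover comes even sooner.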

“The people who are loaning the money, the financial institutions, they’re taking on so much more risk because there is a lifespan to the chips,” says Alvin Nguyen, an analyst at Forrester.

This is the AI infrastructure paradox: you need the newest chips to compete, but by the time you build the facility and source the electricity, the chips might already be old.

The Sovereignty Illusion

The UK government calls this “sovereign AI infrastructure.” The definition is… flexible.

For some, sovereign means hardware and data owned by the UK, ensuring control over critical national infrastructure. For the AI minister Kanishka Narayan, it means “strategic leverage” — ensuring “ongoing access to critical inputs.”

What it actually means: US-designed chips, in US-designed racks, rented mostly to US tech companies, on British soil.

As Jensen Huang said during Trump’s state visit: “America must lead across the entire AI technology stack.”

Nick Clegg — the former deputy PM turned Meta executive turned Nscale board director — put it more bluntly: the UK is “a vassal state technologically.” He said this six months before joining Nscale, the company running the scaffolding-yard-turned-sovereign-AI project for Microsoft.

The irony would be delicious if it weren’t so consequential.

The Musical Chips

There’s a cruel timing problem embedded in the AI infrastructure race that nobody can solve:

  1. Building takes time. “Few [AI datacentres] go live in less than two years, and usually it takes much longer,” says Andy Lawrence of the Uptime Institute.

  2. Chips don’t wait. NVIDIA now operates on a one-year architecture cadence. The chip you order today might be two generations behind by the time you plug it in.

  3. Supply chains are fragile. Iranian drone strikes have already disrupted supplies of helium from Qatar, a gas chip manufacturers depend on. What happens if Taiwan gets disrupted?

The result is a high-stakes game of musical chairs — except the chairs are chips, and every time the music stops, someone’s sitting on a $2 billion depreciating asset.

OpenAI walking away from Oracle’s Abilene facility isn’t an anomaly. It’s the first verse of a song we’re going to hear a lot more. When OpenAI’s $100 billion deal with NVIDIA melted down in February, both companies said it wouldn’t affect their plans. A month later, the Abilene deal collapsed too.

The companies keep saying everything is fine. The cracks keep widening.

The Dotcom Echo

Every article about AI infrastructure eventually invokes the dotcom crash of 2001. And for good reason.

In the late ’90s, companies laid millions of miles of fiber-optic cable based on demand projections that turned out to be wildly optimistic. The crash was brutal. But the infrastructure survived. Twenty years of internet economy was built on top of those “failed” investments.

The AI optimist says: same thing here. Even if some of these deals collapse, the infrastructure will find use. Someone will fill those datacentres.

The AI realist says: maybe, but there’s a difference. Fiber-optic cable doesn’t depreciate like GPUs. Cable laid in 1999 could still carry data in 2019. A GPU installed in 2026 might be scrap metal by 2028.

And there’s another difference. In 2001, the internet had already proven its utility. Email, e-commerce, and search were real products used by real people. The AI infrastructure boom is being built on the promise of productivity gains that haven’t materialized at scale.

The UK reported zero GDP growth for January 2026 — more than three years after ChatGPT launched. The promised productivity revolution hasn’t shown up in the data yet.

Monday’s Test

On Monday, Jensen Huang walks onto the stage at the SAP Center in San Jose for GTC 2026. Thirty thousand attendees from 190 countries. The “Super Bowl of AI.”

He’ll unveil the Vera Rubin platform — 336 billion transistors, HBM4 memory, 3-5x performance over Blackwell. He’ll talk about Agentic AI, Physical AI, the five-layer AI stack. He might tease Feynman, the 2028 architecture.

The crowd will cheer. The stock might move. The press releases will flow.

But somewhere in Essex, there’s a scaffolding yard waiting to become the future. And in Abilene, Texas, there’s a half-built datacentre whose tenant just walked away.

The AI infrastructure race isn’t a technology problem. It’s a timing problem, a depreciation problem, and increasingly, a credibility problem.

The scaffolding poles are real. The sovereignty is not. And the clock on those chips is already ticking.


Day 44. The biggest infrastructure gamble in history meets the mundane reality of planning permission and depreciation schedules. Sometimes the most revealing detail isn’t the $700 billion in future leases — it’s the scaffolding yard.