And who gets burned if the AI bubble deflates

Every week a new headline announces another hundred billion dollars flowing into AI datacenters. Microsoft, Amazon, Google, and Meta have collectively pledged over $650 billion in capital expenditure for 2026 alone — more than the GDP of Sweden. Investors are simultaneously thrilled and terrified.

But most commentary misses the fundamental mechanics. How does a datacenter actually make money? Why is AI infrastructure so different from traditional cloud? And in the bear scenario nobody wants to discuss — who actually loses?

Let me walk you through the complete picture, from a single GPU server to the trillion-dollar industry.


Part 1: How Traditional Cloud Works — The Proven Model

The Simple Version

Amazon Web Services, Microsoft Azure, and Google Cloud are, at their core, computing rental businesses. They build enormous physical facilities, fill them with servers, and charge businesses to use those servers by the hour. A company that used to run its own server room in the basement now pays AWS $0.025 per virtual CPU-hour instead.
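
For a sense of scale, here is a minimal sketch annualizing that rate; the only input is the $0.025/vCPU-hour example price above, and the rest is calendar arithmetic:

```python
# Annualizing the example rate of $0.025 per vCPU-hour.
RATE_PER_VCPU_HOUR = 0.025   # dollars; the example rate quoted above
HOURS_PER_YEAR = 24 * 365    # 8,760 hours in a non-leap year

annual_cost = RATE_PER_VCPU_HOUR * HOURS_PER_YEAR
print(f"One vCPU running continuously: ${annual_cost:.0f}/year")  # -> $219/year
```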

This seems mundane. The economics are extraordinary.

What It Costs to Build

A traditional cloud datacenter — running standard Intel or AMD CPU servers — costs approximately $17 million per megawatt of capacity to build. The breakdown (sanity-checked in the sketch after the list):

  • Servers and hardware (CPUs): $8M (47%) — standard servers cost $2,000–5,000 each, and thousands fit in a single megawatt of capacity
  • Electrical infrastructure: $3.8M (22%) — dedicated power substations, redundant feeds, UPS systems
  • Land and civil construction: $2.5M (15%) — the building itself
  • Cooling: $1.2M (7%) — air cooling is sufficient; standard HVAC at scale
  • Networking: $0.8M (5%)
  • Security and miscellaneous: $0.7M (4%)
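
Here is that sanity check: a short sketch that reproduces the percentages from the dollar figures. All component names and costs come straight from the list above:

```python
# Capex breakdown for a traditional 1 MW datacenter, from the list above.
capex_millions = {
    "Servers and hardware (CPUs)": 8.0,
    "Electrical infrastructure":   3.8,
    "Land and civil construction": 2.5,
    "Cooling":                     1.2,
    "Networking":                  0.8,
    "Security and miscellaneous":  0.7,
}

total = sum(capex_millions.values())
print(f"Total: ${total:.1f}M per MW")               # -> Total: $17.0M per MW
for component, cost in capex_millions.items():
    print(f"  {component}: ${cost}M ({cost / total:.0%})")
```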

The crucial characteristic of traditional cloud hardware: it is multi-tenant friendly. A single physical server can be sliced into dozens of virtual machines, each serving a different customer simultaneously. This keeps utilization high and unit economics excellent.
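
To make that concrete, here is a toy simulation of why multi-tenancy lifts utilization. Every number in it (customer count, duty cycle, the 99.9th-percentile sizing rule) is invented for illustration, not taken from any real fleet:

```python
import random

random.seed(0)

# Toy model of multi-tenancy. All numbers are invented for illustration:
# 1,000 customers, each needing a full core in ~8% of hours (bursty demand).
N_CUSTOMERS = 1_000
P_ACTIVE = 0.08
HOURS = 2_000

# Simulate hourly aggregate demand (cores needed) across all customers.
demand = [sum(random.random() < P_ACTIVE for _ in range(N_CUSTOMERS))
          for _ in range(HOURS)]

# Dedicated model: every customer owns a core, so fleet utilization equals
# the average duty cycle no matter how many customers there are.
dedicated_utilization = P_ACTIVE

# Multi-tenant model: size one shared pool for the 99.9th-percentile hour,
# then let all customers draw from it.
pool_size = sorted(demand)[int(0.999 * HOURS)]
shared_utilization = sum(demand) / (pool_size * HOURS)

print(f"Dedicated cores: {dedicated_utilization:.0%} utilization")
print(f"Shared pool of {pool_size} cores: {shared_utilization:.0%} utilization")
# Typical output: dedicated 8%, shared roughly 75%; same work, far less idle metal.
```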

What It Earns

At 80% utilization — realistic for a mature cloud facility — a 1 MW traditional datacenter generates roughly $11.1 million in annual revenue. Against $6.5 million in annual costs (depreciation, electricity, labor, networking), that produces $4.6 million in operating income — a ~41% infrastructure margin.

The payback period on a $17M investment at $4.6M/year is ~3.7 years. Over a 10-year datacenter life, that's ~$29M in profit on a $17M investment (10 × $4.6M = $46M, minus the $17M build). Excellent business.
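
The whole model fits in a few lines; every input below is a figure quoted above:

```python
# Unit economics of a 1 MW traditional datacenter, using the figures above.
CAPEX = 17.0            # $M to build
ANNUAL_REVENUE = 11.1   # $M/year at 80% utilization
ANNUAL_COSTS = 6.5      # $M/year: depreciation, electricity, labor, networking
LIFETIME_YEARS = 10

operating_income = ANNUAL_REVENUE - ANNUAL_COSTS              # $4.6M/year
margin = operating_income / ANNUAL_REVENUE                    # ~41%
payback_years = CAPEX / operating_income                      # ~3.7 years
lifetime_profit = operating_income * LIFETIME_YEARS - CAPEX   # ~$29M

print(f"Operating income: ${operating_income:.1f}M/yr ({margin:.0%} margin)")
print(f"Payback: {payback_years:.1f} years; 10-year profit: ${lifetime_profit:.0f}M")
```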

Where the Real Margins Live

But raw infrastructure is not where hyperscalers make their best money. On top of physical compute, they layer managed services — pre-packaged software that customers consume via API without managing any infrastructure themselves. Think Amazon RDS (managed database), AWS Lambda (serverless computing), or Azure Active Directory (identity management).

These managed services carry 65–75% gross margins because the hardware cost is shared across thousands of customers and the incremental cost of serving one more customer is nearly zero. When you add enterprise software on top — Microsoft 365, productivity suites, compliance tools — margins reach 75–88%. Pure software, no hardware cost.

This three-layer structure is the key to understanding hyperscaler economics:

Layer                  Gross Margin   Examples
Raw infrastructure     7–26%          CPU-hours, storage TB, data transfer
Managed services       65–75%         Databases, analytics, security tools
Enterprise software    75–88%         Microsoft 365, Google Workspace
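
To see why the stack matters, consider a hedged sketch of the blended margin. The per-layer margins below are midpoints of the table's ranges, but the revenue split is invented purely for illustration:

```python
# Blended gross margin across the three layers. Margins are midpoints of the
# table's ranges; the 40/35/25 revenue split is an assumption for illustration.
layers = {
    "Raw infrastructure":  (0.40, 0.17),   # (revenue share, gross margin)
    "Managed services":    (0.35, 0.70),
    "Enterprise software": (0.25, 0.82),
}

blended = sum(share * margin for share, margin in layers.values())
print(f"Blended gross margin: {blended:.0%}")   # -> 52% on this assumed mix
```

Shift the mix toward the software layers and the blended figure climbs fast; that leverage is what the company-level numbers below reflect.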

AWS operates at ~40% overall operating margin. Microsoft Azure runs at ~42%. These are among the most profitable businesses ever constructed, precisely because the software layer transforms modestly profitable infrastructure into a machine that generates tens of billions in annual cash flow.

Part 2: How AI Cloud Works — Same Building, Different Universe

