
$725 billion on infrastructure: Big Tech builds data centers, and you pay for their mistakes

Published on 5/1/2026

Engineering

When Google, Amazon, Microsoft, and Meta plan to spend $725 billion on capital expenditures (Capex) by 2026, 77% more than last year, these aren't just numbers for investor reports. Behind those billions are concrete decisions: where new data centers will be located, which chips to buy, and, most importantly, who ultimately pays for the imbalance between capacity and demand.

For us, as engineers working with infrastructure, these numbers are a signal not about the "future of AI" but about vendor lock-in and cost overruns that are already baked into cloud service pricing. If a cloud provider has invested $10 billion in a GPU farm, it won't keep prices at "pay-as-you-go" levels: it needs to recoup the investment, and you'll see that in your compute bills.

Why Capex grows faster than revenue

According to the report cited by Tom's Hardware, the Capex growth is driven primarily by GPU purchases (NVIDIA H100/B200) and by building data centers for AI workloads. But there's a catch: this capacity doesn't always match real demand. Analysts may dismiss the "bear thesis," but most of these investments are a bet that AI workloads will grow enough to justify the costs. If that doesn't happen, cloud prices will simply be raised to compensate for idle capacity.

In practice, this means that a company choosing a cloud provider for an AI product is taking on part of the Capex risk. You pay not only for the resources you consume, but also for your neighbors' empty racks.
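To make the "empty racks" point concrete, here's a toy calculation of how fleet utilization feeds into the break-even price of a GPU-hour. Every figure below is invented purely for illustration, not taken from any provider's books:

```python
# Illustrative only: how idle capacity inflates the effective price of the
# capacity that IS sold. All numbers below are made-up assumptions.

def effective_hourly_price(capex_usd: float,
                           amortization_years: float,
                           fleet_gpus: int,
                           utilization: float) -> float:
    """Price per GPU-hour the provider must charge just to recoup capex,
    given that only `utilization` of the fleet is actually billed."""
    hours = amortization_years * 365 * 24
    cost_per_gpu_hour = capex_usd / (fleet_gpus * hours)
    # Idle GPUs earn nothing, so paying customers cover them too.
    return cost_per_gpu_hour / utilization

# Hypothetical $10B GPU farm, 4-year amortization, 250,000 GPUs.
full = effective_hourly_price(10e9, 4, 250_000, 1.00)
half = effective_hourly_price(10e9, 4, 250_000, 0.50)
print(f"100% utilized: ${full:.2f}/GPU-hour")
print(f" 50% utilized: ${half:.2f}/GPU-hour")  # exactly 2x the price
```

The model is deliberately crude (no power, staff, or networking costs), but the shape of the relationship holds: halve the utilization and the break-even price per sold GPU-hour doubles.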

Build your own data center or rent — it's not a dilemma

We often hear: "major clouds are cheaper than your own hardware." That's true for 80% of workloads, but not for AI inference with tight latency requirements. In our experience, if you're serving an LLM in production at more than 1,000 requests per second, renting GPUs from a single provider can cost more than colocating your own servers, simply because of the managed-service markup. Big Tech builds into the price not just the hardware but also the amortization of infrastructure that may sit idle.

We'd recommend calculating TCO (Total Cost of Ownership) over a three-year horizon rather than looking at current rates. If your project grows, you become dependent on a vendor that has already invested your future money in new GPUs.
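As a starting point for that TCO exercise, here's a minimal sketch of the rent-versus-colocate comparison. Every rate, price, and server spec below is a placeholder assumption, not a real quote; substitute your own numbers before drawing any conclusions:

```python
# Back-of-the-envelope 3-year TCO: cloud GPU rental vs. colocation.
# All figures are placeholder assumptions for illustration only.

def cloud_tco(gpu_hour_rate: float, gpus: int, years: float = 3.0) -> float:
    """On-demand rental, billed 24/7 for the whole horizon."""
    return gpu_hour_rate * gpus * years * 365 * 24

def colo_tco(server_capex: float, gpus_per_server: int, gpus: int,
             rack_month: float, power_month: float, years: float = 3.0) -> float:
    """Buy the hardware up front, then pay rack space and power monthly."""
    servers = -(-gpus // gpus_per_server)  # ceiling division
    months = years * 12
    return servers * server_capex + (rack_month + power_month) * months

# Hypothetical inputs: $4/GPU-hour rental; $300k 8-GPU servers;
# $2k/month rack space; $3k/month power.
rent = cloud_tco(gpu_hour_rate=4.0, gpus=16)
own = colo_tco(server_capex=300_000, gpus_per_server=8, gpus=16,
               rack_month=2_000, power_month=3_000)
print(f"cloud 3y TCO: ${rent:,.0f}")
print(f"colo  3y TCO: ${own:,.0f}")
```

A real comparison would also include staff time, hardware failures, resale value, and utilization below 24/7, which is exactly why the three-year horizon matters: those terms scale differently over time than the hourly rate does.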

What an engineer can do about it

First, don't believe in "unlimited scaling" of the cloud; it has a ceiling, and that ceiling is tied to the provider's Capex. Second, build multi-cloud or hybrid options into your architecture, at least at the level of data storage and stateless compute. If your AI product is tied to a single cloud via managed Kubernetes or serverless, migration will cost more than it seems.
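One cheap way to keep the storage layer portable is to code against a narrow interface instead of a vendor SDK. A minimal Python sketch, where the class and function names are ours and purely illustrative:

```python
# Sketch: keep application code vendor-neutral by depending on a tiny
# storage interface. The local-disk backend is a stand-in; an S3 or GCS
# adapter would implement the same two methods.
from pathlib import Path
from typing import Protocol

class BlobStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class LocalStore:
    """Filesystem backend; a cloud backend would expose the same API."""
    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        (self.root / key).write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()

def save_checkpoint(store: BlobStore, run_id: str, blob: bytes) -> None:
    # Application code only sees BlobStore, never boto3 or google-cloud.
    store.put(f"checkpoint-{run_id}", blob)

store = LocalStore("/tmp/blobs")
save_checkpoint(store, "run42", b"weights")
print(store.get("checkpoint-run42"))  # b'weights'
```

The point isn't the four lines of indirection; it's that switching providers becomes "write one adapter," not "rewrite every call site."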

Third, keep an eye on news about capacity rollouts: when AWS announces a new region in your country, it's not just a "service improvement"; it's a signal that in 6–12 months compute prices may drop due to competition. Conversely, if a provider delays a data center launch, expect prices to rise.

$725 billion isn't just a record. It's the cost of a bet Big Tech is making on AI. And like any bet, the losers pay more.
