The Data Center Gold Rush: Why Everyone’s Building AI Supercomputers (And Why It Matters)

There’s a massive AI infrastructure arms race happening right now, and most people have no idea it’s even going on.

If you’ve been paying attention to tech news lately, you might’ve noticed something weird: companies are spending absolutely insane amounts of money building data centers specifically for artificial intelligence. We’re talking billions. Plural.

CoreWeave just filed for an IPO targeting a $35 billion valuation. Nebius signed a $17 billion deal with Microsoft. NVIDIA is basically printing money selling GPUs faster than they can make them. And according to industry analysts, spending on AI infrastructure is expected to hit over $200 billion by 2028.

So what the hell is going on? Why are we suddenly building massive computing facilities like we’re preparing for digital war?

The Problem: AI Needs Ridiculous Amounts of Computing Power

Here’s the thing about modern AI models – ChatGPT, Midjourney, and all the new ones that seem to drop every week – they require absolutely monstrous amounts of computing power to train and run.

Training a single large language model can cost tens of millions of dollars in compute alone. Running these models for millions of users simultaneously? That takes thousands upon thousands of specialized chips working together in perfect harmony.
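To see how the bill reaches “tens of millions” so quickly, here’s a rough back-of-the-envelope sketch. Every number in it is an illustrative assumption – cluster size, training time, and GPU rental price all vary enormously in practice:

```python
# Back-of-the-envelope training cost estimate.
# All numbers are illustrative assumptions, not figures from any real training run.

gpus = 10_000               # GPUs in the training cluster (assumption)
training_days = 90          # wall-clock training time (assumption)
price_per_gpu_hour = 3.00   # rough USD rental rate for a high-end GPU (assumption)

gpu_hours = gpus * training_days * 24
compute_cost = gpu_hours * price_per_gpu_hour

print(f"{gpu_hours:,.0f} GPU-hours ≈ ${compute_cost:,.0f} in compute")
# -> 21,600,000 GPU-hours ≈ $64,800,000 in compute
```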

Your regular cloud providers like Amazon Web Services or Microsoft Azure? They were built for normal internet stuff – websites, apps, databases. They work fine for that. But AI workloads are different. They need GPUs (graphics processing units) instead of regular CPUs, ultra-fast networking between thousands of chips, and cooling systems that can handle equipment pumping out heat like miniature suns.

It’s like trying to race a Formula 1 car on roads built for bicycles. It technically works, but it’s not optimal.
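If you want to see the gap for yourself, here’s a minimal sketch using PyTorch. It times the same big matrix multiplication – the core operation inside neural networks – on a CPU and then on a GPU. It assumes you have PyTorch installed; the GPU half only runs if a CUDA device is available, and the exact numbers depend entirely on your hardware:

```python
# Minimal sketch of why GPUs matter for AI: neural networks are mostly large
# matrix multiplications, which GPUs execute massively in parallel.
import time
import torch

a = torch.randn(8192, 8192)
b = torch.randn(8192, 8192)

start = time.time()
_ = a @ b                                 # matrix multiply on the CPU
cpu_time = time.time() - start

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()              # wait for the copy to finish
    start = time.time()
    _ = a_gpu @ b_gpu                     # same multiply on the GPU
    torch.cuda.synchronize()              # wait for the GPU to finish before timing
    gpu_time = time.time() - start
    print(f"CPU: {cpu_time:.2f}s  GPU: {gpu_time:.4f}s")
```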

Enter the New Players: Purpose-Built AI Clouds

This gap created an opportunity, and a bunch of companies jumped in to fill it. The biggest names right now are CoreWeave and Nebius (formerly part of Yandex), but there are others like Lambda and Crusoe also getting in the game.

These aren’t your traditional tech companies. They’re building data centers specifically designed for AI workloads from the ground up. And they’re moving fast.

CoreWeave started as a cryptocurrency mining operation (remember when I couldn’t get a GPU because of crypto miners? Yeah, they were part of that). When crypto crashed, they pivoted hard into AI infrastructure. Now they’re opening 10 new data centers in 2025 alone and have massive contracts with companies like Microsoft and OpenAI.

By the end of 2024, CoreWeave had opened 28 data centers globally, and they’re racing to deploy the latest NVIDIA H200 and upcoming Blackwell GPUs before anyone else can get their hands on them.

Nebius is a newer player, but they’re scaling fast. They posted 385% year-over-year revenue growth in the first quarter of 2025, and NVIDIA took an equity stake in them. They’re expanding across Europe and the US, and just signed that massive $17 billion deal with Microsoft. They’re still smaller than the big players, but they’re one to watch.

There are others too – Lambda Labs, Crusoe Energy, even traditional giants like AWS and Azure are adapting their infrastructure for AI workloads.

The basic playbook is the same: pack as many high-end NVIDIA GPUs into buildings as possible, connect them with ultra-fast networking, cool them with advanced systems, and rent out that computing power to anyone building AI products.

Why NVIDIA Owns This Entire Game

You can’t talk about AI infrastructure without talking about NVIDIA. They’re basically the kingmaker in this space.

CoreWeave built its AI cloud around NVIDIA’s stack, using BlueField-3 DPUs for efficiency, scalability, and performance, and they’re one of NVIDIA’s largest GPU customers. NVIDIA even took equity stakes in both CoreWeave and Nebius.

Here’s why NVIDIA dominates: their GPUs aren’t just faster – they’ve built an entire ecosystem. Their CUDA software platform, their networking technology (InfiniBand), their specialized chips (DPUs) – it all works together. And crucially, all the AI software developers already know how to use it.

Switching away from NVIDIA would be like trying to convince everyone to stop using iPhones and switch to some obscure phone OS nobody’s heard of. Technically possible, but good luck with that.
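To make that lock-in concrete, here’s a minimal sketch of what the ecosystem looks like from a developer’s chair, using PyTorch. The tiny model and data below are placeholders; the point is that targeting NVIDIA hardware is essentially a one-liner, because CUDA and its libraries sit underneath the framework:

```python
# Minimal sketch of the developer-side lock-in: in PyTorch, running on NVIDIA
# hardware is a one-line change because CUDA libraries sit under the framework.
# The linear "model" below is a placeholder just to show the pattern.
import torch
import torch.nn as nn

model = nn.Linear(1024, 10)            # stand-in for a real model
batch = torch.randn(32, 1024)          # stand-in for real data

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)               # everything downstream now runs on the GPU
output = model(batch.to(device))
print(output.shape)                    # torch.Size([32, 10])
```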

The Numbers Are Actually Insane

Let’s talk scale for a second because these numbers are hard to wrap your head around.

CoreWeave reported a revenue surge to $1.9 billion in 2024, marking a 737% increase from the previous year. That’s not a typo. Seven hundred thirty-seven percent growth in one year.

Microsoft, Google, Amazon, and Meta are planning to spend over $300 billion combined on AI infrastructure in the next few years, according to their public financial disclosures. Data center power demand in the US alone is expected to balloon to 80 gigawatts by 2030 – roughly the output of dozens of large nuclear reactors, just for data centers.
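If you want a rough sense of what 80 gigawatts means, here’s a quick sanity check. The one-gigawatt figure for a large reactor is an approximation, not a precise number:

```python
# Rough scale check on the projected 80 GW of US data center power demand.
projected_dc_demand_gw = 80    # projected US data center demand by 2030 (from the article)
gw_per_reactor = 1.0           # rough output of one large reactor (assumption)

reactors_equivalent = projected_dc_demand_gw / gw_per_reactor
print(f"≈ {reactors_equivalent:.0f} large reactors' worth of demand")
# -> ≈ 80 large reactors' worth of demand
```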

We’re witnessing one of the biggest infrastructure buildouts in modern history, and it’s all happening in service of AI.

Why This Should Worry You (At Least a Little)

Okay, so we’re building a ton of computing infrastructure for AI. That’s progress, right? Maybe. But there are some concerning aspects here.

The power consumption is nuts – We’re talking about facilities that consume as much electricity as small cities. And we’re building dozens of them. At a time when we’re supposed to be reducing energy consumption to fight climate change, we’re massively increasing it for AI.

It’s incredibly expensive – CoreWeave’s net loss widened to $863.4 million in 2024. These companies are burning cash at an incredible rate betting that AI demand will keep growing. What happens if it doesn’t?

Customer concentration risk – Here’s a scary stat: 77% of CoreWeave’s total revenue in 2024 came from its top two customers. If Microsoft or another big client decides to build their own infrastructure instead, these companies could be in serious trouble.

It’s creating a massive moat – The companies that control AI infrastructure will have enormous power over who can build AI and how expensive it is. We’re essentially creating a new class of gatekeeper, and they’re all racing to lock in their positions.

Environmental impact – Beyond just power consumption, these facilities require massive amounts of water for cooling, produce electronic waste, and need to be constantly upgraded as new chips come out.

What It All Means

We’re in the middle of a genuine infrastructure revolution. Just like how the internet required building millions of miles of fiber optic cables and data centers, the AI era requires building an entirely new computing infrastructure purpose-built for machine learning.

The companies winning this race right now aren’t just CoreWeave and Nebius – they also include traditional giants like AWS, Azure, and Google Cloud, which are rapidly adapting, plus specialized players like Lambda Labs. But make no mistake: NVIDIA is the real winner here. They’re positioned as the arms dealer in this gold rush, selling the picks and shovels (GPUs, networking, software) to everyone digging for AI gold.

The bigger concern? Access to compute is becoming genuinely scarce. Top-tier GPUs are backordered months out. Small startups often can’t get the compute they need. The companies controlling GPU inventory – whether that’s CoreWeave, the big cloud providers, or NVIDIA deciding who gets allocation priority – are effectively deciding who can build cutting-edge AI and who can’t.

That’s real gatekeeping power, and it’s spreading across multiple players rather than concentrating in one or two companies. Which might actually be better than having a single gatekeeper, but it’s still a bottleneck.

But here’s the question that keeps me up: what happens when we’ve built all this infrastructure and the AI boom slows down? We’ll have spent hundreds of billions building specialized facilities that eat enormous amounts of power. We can’t exactly repurpose them into apartments.

We’re making massive bets on a future that hasn’t arrived yet. Building infrastructure for technology we’re still figuring out. Spending money at a rate that would make even the dot-com bubble blush.

And maybe it all works out. Maybe AI really does transform everything and all this investment pays off spectacularly. Or maybe we’re building the most expensive white elephants in history.

The infrastructure is being built whether we like it or not. The question is whether we’re building it thoughtfully, or just racing to see who can throw money at GPUs the fastest.

Right now, it feels like the latter.

And by the time we figure out if we were right, we’ll have already spent the money.
