Powering Data in the Age of AI: Part 3 – Inside the AI Data Center Rebuild

In the first two parts of this series, we looked at how AI’s growth is now constrained by power — not chips, not models, but the ability to feed electricity to massive compute clusters. We explored how companies are turning to fusion startups, nuclear deals, and even building their own energy supply just to stay ahead. AI can’t keep scaling unless the energy does too.

But securing the power is only the start. That electricity still has to land somewhere, and that somewhere is the data center. Most older data centers weren't built for this: the cooling systems aren't cutting it, and the layout, the grid connection, and the way heat moves through the building all need to keep up with the changing demands of the AI era. In Part 3, we look at what's changing (or what should change) inside these sites: immersion tanks, smarter coordination with the grid, and the quiet redesign that's now critical to keeping AI moving forward.

Why Traditional Data Centers Are Starting to Break

The surge in AI workloads is physically overwhelming the buildings meant to support it. Traditional data centers were designed for general-purpose computing, with power densities around 7 to 8 kilowatts per rack, maybe 15 at the high end. However, AI clusters running on next-gen chips like NVIDIA’s GB200 are blowing past those numbers. Racks now regularly draw 30 kilowatts or more, and some configurations are climbing toward 100 kilowatts. 

According to McKinsey, the rapid increase in power density has created a mismatch between infrastructure capabilities and AI compute requirements. Grid connections that were once more than sufficient are now strained. Cooling systems, especially traditional air-based setups, can’t remove heat fast enough to keep up with the thermal load. 

(Chart: Brian Potter; Source: Semianalysis)

In many cases, the physical layout of the building itself becomes a problem, whether it’s the weight limits on the floor or the spacing between racks. Even basic power conversion and distribution systems inside legacy data centers often aren’t rated for the voltages and current levels needed to support AI racks.

As Alex Stoewer, CEO of Greenlight Data Centers, told BigDATAwire, “Given this level of density is new, very few existing data centers had the power distribution or liquid cooling in place when these chips hit the market. New development or material retrofits were required for anyone who wanted to run these new chips.” 

That’s where the infrastructure gap really opened up. Many legacy facilities simply couldn’t make the leap in time. Even when grid power is available, delays in interconnection approvals and permitting can slow retrofits to a crawl. Goldman Sachs now describes this transition as a shift toward “hyper-dense computational environments,” where even airflow and rack layout must be redesigned from the ground up.

The Cooling Problem Is Bigger Than You Think

If you walk into a data center built just a few years ago and try to run today's AI workloads at full intensity, cooling is often the first thing to give. It doesn't fail all at once. It breaks down in small but compounding ways: airflow gets tight, power usage spikes, reliability slips, and together those failures add up to a broken system.

Traditional air systems were never built for this kind of heat. Once rack power climbs above 30 or 40 kilowatts, the energy needed just to move and chill that air becomes its own problem. McKinsey puts the ceiling for air-cooled systems at around 50 kilowatts per rack. But today’s AI clusters are already going far beyond that. Some are hitting 80 or even 100 kilowatts. That level of heat disrupts the entire balance of the facility.
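
A bit of back-of-the-envelope physics shows why air runs out of headroom. The sketch below is our own illustration, not a figure from the article's sources: it assumes a 15 K inlet-to-outlet temperature rise across the rack and standard air properties, and applies the basic heat-transport relation to different rack densities.

```python
# Back-of-the-envelope: airflow needed to carry heat out of a rack.
# Q = rho * V_dot * cp * dT  =>  V_dot = Q / (rho * cp * dT)
# Assumptions: 15 K air temperature rise across the rack, sea-level air.

RHO_AIR = 1.2         # kg/m^3, air density at ~20 C
CP_AIR = 1005.0       # J/(kg*K), specific heat of air
DELTA_T = 15.0        # K, allowable inlet-to-outlet temperature rise
M3S_TO_CFM = 2118.88  # cubic meters per second -> cubic feet per minute

def airflow_cfm(rack_kw: float) -> float:
    """Volumetric airflow (CFM) required to remove rack_kw of heat."""
    v_dot = (rack_kw * 1000.0) / (RHO_AIR * CP_AIR * DELTA_T)  # m^3/s
    return v_dot * M3S_TO_CFM

for kw in (8, 30, 50, 100):
    print(f"{kw:>3} kW rack -> ~{airflow_cfm(kw):,.0f} CFM")
# Prints roughly 937, 3,514, 5,856, and 11,713 CFM respectively.
```

Moving nearly 12,000 cubic feet of air per minute through a single rack is the point where fan power, noise, and pressure drop stop scaling gracefully, which is roughly where the transition to liquid begins.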

This is why more operators are turning to immersion and liquid cooling. These systems pull heat directly from the source, using fluid instead of air. Some setups submerge servers entirely in nonconductive liquid. Others run coolant straight to the chips. Both offer better thermal performance and far greater efficiency at scale. In some cases, operators are even reusing that heat to power nearby buildings or industrial systems.

(Make more Aerials/Shutterstock)

Still, this shift isn't as straightforward as it might seem. Liquid cooling demands new hardware, plumbing, and ongoing support, along with space and careful planning. But as densities rise, staying with air isn't just inefficient; it sets a hard limit on how far data centers can scale. As operators realize there's no way to air-tune their way out of 100-kilowatt racks, other solutions must emerge, and they have.

The Case for Immersion Cooling

For a long time, immersion cooling felt like overengineering. It was interesting in theory, but not something most operators seriously considered. That’s changed. The closer facilities get to the thermal ceiling of air and basic liquid systems, the more immersion starts looking like the only real option left.

Instead of trying to force more air through hotter racks, immersion takes a different route. Servers go straight into nonconductive liquid, which pulls the heat off passively. Some systems even use fluids that boil and recondense inside a closed tank, carrying heat out with almost no moving parts. It’s quieter, denser, and often more stable under full load.

While the benefits are clear, deploying immersion still takes planning. The tanks require physical space, and the fluids come with upfront costs. But compared to redesigning an entire air-cooled facility or throttling workloads to stay within limits, immersion is starting to look like the more straightforward path. For many operators, it's no longer an experiment. It's the next step.

From Compute Hubs to Energy Nodes

Immersion cooling may solve the heat, but what about the timing? When can you actually pull that much power from the grid? That's where the next bottleneck is forming, and it's forcing a shift in how hyperscalers operate.

Google has already signed formal demand-response agreements with regional utilities like the Tennessee Valley Authority (TVA). The deals go beyond lowering total consumption; they shape when and where that power gets used. AI workloads, especially training jobs, have built-in flexibility.

With the right software stack, those jobs can migrate across facilities or delay execution by hours. That delay becomes a tool. It’s a way to avoid grid congestion, absorb excess renewables, or maintain uptime when systems are tight.
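
To make the idea concrete, here is a minimal sketch of how a deferrable training job might be scheduled against a grid signal. The 24-hour carbon-intensity forecast, the job fields, and the scheduling logic are all illustrative assumptions, not any operator's actual API.

```python
from dataclasses import dataclass

# Sketch: delay deferrable AI training jobs into cleaner grid hours.
# The forecast values and job parameters below are hypothetical.

@dataclass
class TrainingJob:
    name: str
    duration_hr: int   # how long the job runs once started
    deadline_hr: int   # hour of day by which it must finish

def plan_starts(jobs, carbon_intensity):
    """Pick each job's start hour to minimize average carbon intensity."""
    plan = {}
    for job in jobs:
        latest_start = job.deadline_hr - job.duration_hr
        def avg_ci(start):
            window = carbon_intensity[start:start + job.duration_hr]
            return sum(window) / len(window)
        plan[job.name] = min(range(latest_start + 1), key=avg_ci)
    return plan

# Hypothetical gCO2/kWh forecast: dirty daytime peak, clean overnight wind.
forecast = [420] * 6 + [350] * 12 + [180] * 6
jobs = [TrainingJob("llm-finetune", duration_hr=4, deadline_hr=24)]
print(plan_starts(jobs, forecast))  # {'llm-finetune': 18} -> runs overnight
```

The same pattern works with price or congestion signals in place of carbon intensity; the point is that a few hours of slack in a training job becomes capacity the grid can actually use.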

(Source: The Datacenter as a Computer, Morgan & Claypool Publishers, 2013)

It’s not just Google. Microsoft has been testing energy-matching models across its data centers, including scheduling jobs to align with clean energy availability. The Rocky Mountain Institute projects that data center alignment with grid dynamics could unlock gigawatts of otherwise stranded capacity.

Make no mistake: these aren't sustainability gestures. They're survival strategies. Grid queues are growing. Permitting timelines are slipping. Interconnect caps are becoming real limits on AI infrastructure. The facilities that thrive won't just be well-cooled; they'll be grid-smart, contract-flexible, and built to respond. In the shift from compute hubs to energy nodes, it's no longer just about how much power you need. It's about how well you can dance with the system delivering it.

Designing for AI Means Rethinking Everything

You can’t design around AI the way data centers used to handle general compute. The loads are heavier, the heat is higher, and the pace is relentless. You start with racks that pull more power than entire server rooms did a decade ago, and everything around them has to adapt.

New builds now work from the inside out. Engineers start with workload profiles, then shape airflow, cooling paths, cable runs, and even structural supports based on what those clusters will actually demand. In some cases, different types of jobs get their own electrical zones. That means separate cooling loops, shorter throw cabling, dedicated switchgear — multiple systems, all working under the same roof.
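
One way to picture the zoning idea is as a placement check: each zone carries its own power budget and cooling type, and a rack is admitted only where both constraints hold. The sketch below is our own illustration; the zone names and budgets are assumptions, and the 50-kilowatt air ceiling borrows the McKinsey figure cited earlier.

```python
from dataclasses import dataclass, field

AIR_RACK_LIMIT_KW = 50.0  # rough per-rack ceiling for air cooling

@dataclass
class Zone:
    name: str
    power_budget_kw: float
    cooling: str                       # "air", "direct-liquid", "immersion"
    racks: list = field(default_factory=list)

    def used_kw(self) -> float:
        return sum(self.racks)

    def place(self, rack_kw: float) -> bool:
        """Admit a rack only if cooling type and electrical headroom allow."""
        if self.cooling == "air" and rack_kw > AIR_RACK_LIMIT_KW:
            return False
        if self.used_kw() + rack_kw > self.power_budget_kw:
            return False
        self.racks.append(rack_kw)
        return True

zones = [
    Zone("general-compute", power_budget_kw=400, cooling="air"),
    Zone("ai-training", power_budget_kw=2000, cooling="direct-liquid"),
]
# A 100 kW training rack fails the air zone but fits the liquid-cooled one.
print(zones[0].place(100.0), zones[1].place(100.0))  # False True
```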

Power delivery is changing, too. In a conversation with BigDATAwire, David Beach, Market Segment Manager at Anderson Power, explained, “Equipment is taking advantage of much higher voltages and simultaneously increasing current to achieve the rack densities that are necessary. This is also necessitating the development of components and infrastructure to properly carry that power.”
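
The voltage point is simple arithmetic: for a fixed power draw, bus current falls as voltage rises, and conductor size and resistive losses fall with it. The quick illustration below is ours; the 48 V, 400 V, and 800 V levels are example DC bus voltages, not figures from the interview.

```python
# For a DC bus, I = P / V: the same 100 kW rack needs far less copper
# at higher distribution voltages. Voltage levels are illustrative.

def bus_current_amps(power_kw: float, volts: float) -> float:
    """Current on a DC bus delivering power_kw at the given voltage."""
    return power_kw * 1000.0 / volts

for v in (48, 400, 800):
    print(f"100 kW at {v:>3} V DC -> {bus_current_amps(100, v):,.0f} A")
# 48 V -> 2,083 A; 400 V -> 250 A; 800 V -> 125 A
```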

(Tommy Lee Walker/Shutterstock)

This shift isn’t just about staying efficient. It’s about staying viable. Data centers that aren’t built with heat reuse, expansion room, and flexible electrical design won’t hold up long. The demands aren’t slowing down. The infrastructure has to meet them head-on.

What This Infrastructure Shift Means Going Forward

We know that hardware alone doesn't move the needle anymore. The real advantage comes from bringing it online quickly, without getting bogged down by power, permits, and other obstacles. That's where the cracks are beginning to open.

Site selection has become a high-stakes filter. A cheap piece of land isn't enough. What you need is utility capacity, local support, and room to grow without months of negotiation. Even projects with exceptional resources are hitting walls.

Those who have been pulling ahead began early. Microsoft is already working on multi-campus builds that can handle gigawatt loads. Google is pairing facility growth with flexible energy contracts and nearby renewables. Amazon is redesigning its electrical systems and working with zoning authorities before permits even go live.

The pressure now is steady, and any delay ripples through everything. Lose a window and you lose training cycles; the pace of model development doesn't wait for infrastructure to catch up. Facility planning used to happen quietly in the background. Now, data center builders are the ones defining what happens next. Going forward, AI performance won't just be measured in FLOPs or latency. It will come down to who can build when it really matters.

Related Items 

New GenAI System Built to Accelerate HPC Operations Data Analytics

Bloomberg Finds AI Data Centers Fueling America’s Energy Bill Crisis

OpenAI Aims to Dominate the AI Grid With Five New Data Centers
