The era of data center air cooling is dead. Overstatement? No, I don't think it is. If you haven't visited it yet, take a look at our analysis of system power trends (LINK) in the wake of the rush to AI. Here's a quick review:

TDPs for performance-oriented CPUs are in the 300-350 watt range today. In the next year or so, both AMD and Intel will have top SKUs with TDPs of 500 watts. Today's high-end DDR5 memory is also power-hungry, with server-grade 64GB and 128GB RDIMMs consuming roughly 10 and 15 watts respectively. Do the math for a dual-socket server with 32 DIMM slots fully populated and that's up to an additional 320 or 480 watts of power (and heat) per server. Add another 100 watts or so for network cards, local storage, and the like.

So a 2U high-performance server next year could consume 1,580 watts. Put 20 of them in a 42U rack and you get a whopping 31kW (31,600 watts, to be precise) of power draw per rack. And remember that computing components are extremely efficient at converting electricity to heat: essentially every watt drawn ends up as heat that has to be removed. This will absolutely overwhelm air cooling.
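To make that arithmetic explicit, here's a minimal back-of-envelope sketch in Python. The component counts (two CPUs, 32 DIMMs, 20 servers per rack) are the assumptions behind the figures above, not vendor specifications.

```python
# Back-of-envelope power budget for a next-generation 2U server and a 42U rack.
# All figures are the assumptions from the text above, not vendor specifications.

CPU_TDP_W = 500          # projected top-SKU CPU TDP
CPUS_PER_SERVER = 2      # assumed dual-socket server
DIMM_POWER_W = 15        # 128GB DDR5 RDIMM (use 10 for a 64GB RDIMM)
DIMMS_PER_SERVER = 32    # assumed fully populated DIMM slots
MISC_POWER_W = 100       # NICs, local storage, etc.
SERVERS_PER_RACK = 20    # 2U servers in a 42U rack

server_power_w = (CPU_TDP_W * CPUS_PER_SERVER
                  + DIMM_POWER_W * DIMMS_PER_SERVER
                  + MISC_POWER_W)
rack_power_kw = server_power_w * SERVERS_PER_RACK / 1000

print(f"Per-server power: {server_power_w} W")   # 1580 W
print(f"Per-rack power:   {rack_power_kw} kW")   # 31.6 kW
```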

But we're entering the Age of AI, and soon the majority of applications and workflows will sport machine learning or inferencing features, driving up compute intensity. This means accelerators, and plenty of them. Today's Nvidia GPUs consume 400 watts each in their base-performance PCIe trim and up to 700 watts with the much faster NVLink interface. AMD's current MI300 GPU line can consume up to 760 watts, and their just-announced MI325X GPU will hit 1,000 watts. Intel's Gaudi 3 accelerator, which is targeted at LLMs, is rated at 1,200 watts, and their upcoming Falcon Shores GPU will eat 1,500 watts each. And, again, you'll need multiples of whatever flavor you decide to use. More power equals more heat that you'll have to get rid of.
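To put "multiples" in perspective, here's a rough sketch of the accelerator heat load per server using the per-device figures quoted above. The eight-accelerators-per-node configuration is an illustrative assumption, not a specific vendor design.

```python
# Rough accelerator heat load per node, using the per-device TDPs cited above.
# The 8-accelerators-per-node configuration is an illustrative assumption.

ACCEL_TDP_W = {
    "Nvidia GPU (PCIe)": 400,
    "Nvidia GPU (NVLink)": 700,
    "AMD MI300": 760,
    "AMD MI325X": 1000,
    "Intel Gaudi 3": 1200,
    "Intel Falcon Shores": 1500,
}
ACCELS_PER_NODE = 8  # assumption: dense accelerator node

for name, tdp_w in ACCEL_TDP_W.items():
    node_kw = tdp_w * ACCELS_PER_NODE / 1000
    print(f"{name:22s} ~{node_kw:.1f} kW of accelerator heat per node")
```

Even at the low end, that's several kilowatts of heat concentrated in a single chassis, before counting CPUs, memory, and the rest of the system.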

This is why you won't be able to rely exclusively on air cooling anymore; you will have to move to liquid cooling in some form or other. You could outsource all of your IT to a third party or the cloud, but that raises significant cost, flexibility, and other concerns that need to be rigorously examined. With that said, let's take a look at liquid cooling technology and how it works.

Think about the difference between sitting in front of an air conditioner, jumping into a lake, and standing in a river: liquid absorbs and carries away heat far more effectively than air, and moving liquid does it better still. That, in a broad sense, is how liquid cooling works. It comes in several forms:

DLC (direct liquid cooling): explain water blocks (cold plates) and where they can be located, and how DLC can remove anywhere from 80% to 95% of system heat. Cover manifolds, quick disconnects, and the two separate liquid loops: one that circulates through the system (chemically treated coolant) and one that carries the heat out of the building. Discuss "free cooling" through rooftop dry coolers, with chillers only if colder water is needed (it may not be, depending on ambient conditions). Talk about approach temperature and output temperature, and single-phase vs. two-phase designs. Note the monitoring of temperatures and flow rates, and how the loop is controlled. Does it require any server modifications? Most fans go away, so cover fan power draw. Explain what DLC does for density and how it frees up real estate. Also cover all-in-one liquid coolers, which are only suitable for small systems, and you're still dealing with the heat via air conditioning.
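When this section gets fleshed out, a short worked example may help tie heat load, coolant flow rate, and temperature rise together. Here's a minimal sketch assuming a single-phase, water-based loop; the 10°C coolant temperature rise is an illustrative assumption, and the heat load is the ~31.6kW rack from earlier.

```python
# Relationship between heat removed, coolant flow rate, and temperature rise
# in a single-phase loop: Q = m_dot * c_p * delta_T.
# The water-based coolant and the 10 K temperature rise are illustrative assumptions.

CP_WATER_J_PER_KG_K = 4186    # specific heat of water, J/(kg*K)
WATER_DENSITY_KG_PER_L = 1.0

heat_load_w = 31_600          # the ~31.6 kW rack from earlier
delta_t_k = 10.0              # assumed coolant temperature rise across the rack

mass_flow_kg_s = heat_load_w / (CP_WATER_J_PER_KG_K * delta_t_k)
flow_l_per_min = mass_flow_kg_s / WATER_DENSITY_KG_PER_L * 60

print(f"Required flow: {mass_flow_kg_s:.2f} kg/s (~{flow_l_per_min:.0f} L/min)")
```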

ADIABATIC COOLING - NEW

Immersion cooling: how it works, the dielectric fluids used, required server modifications, density gains, how it frees up real estate, and how it removes nearly 100% of system heat.

Rear door heat exchangers

Costs, TCO, benefits: see vendor documentation and ASHRAE publications on this.

Introduce Vendor table