
How data centers are adapting to a new wave of AI-infused enterprise and HPC workloads—through system design, power and cooling, workload behavior, and more efficient use of infrastructure.

AI Isn't Replacing HPC - It IS HPC

In recent discussions with industry vendor sales and marketing teams, I’ve been hearing that HPC demand is falling while AI system demand continues to increase. I’ve also seen articles implying that AI is displacing HPC.

This just isn’t the case. Period.

AI is a subset of the very broad HPC (High Performance Computing) category of workloads.

First, what is HPC? It’s not an application. It’s a loose term that covers applications and workflows across many domains—from financial services to pharma to manufacturing and more. These are workloads that are demanding enough and important enough to justify significant investment.

Here are some reasons why AI firmly belongs in the HPC category.

Both AI (including machine learning, inference, and generative workloads) and traditional HPC workflows are complex and computationally intensive. Garden-variety systems simply can’t meet the time-to-solution and accuracy requirements these workloads demand.

High-performance infrastructure is critical to both. You could run OpenFOAM or train an LLM on a laptop, but the time to solution would be so long that the results wouldn’t be relevant—or you’d be retired by the time they were. The complexity of models and the size of datasets would also have to be severely constrained.

There is a constant push for greater accuracy and the ability to solve larger problems. That means analyzing more compounds in more permutations, adding more data to machine learning models, or pushing model sizes to hundreds of billions of parameters.

Some will argue that AI uses accelerators like GPUs while many HPC applications do not, so they must be different. That’s not a relevant distinction. HPC is not an application—it’s a category. Many HPC applications already use accelerators, and many more will as they become increasingly infused with AI.

Just because some AI users don’t think of what they’re doing as HPC doesn’t mean it isn’t. Many HPC users don’t use the term “HPC” either. They call it technical computing or something else entirely. But under the hood, you’ll find HPC-style applications running on HPC-class infrastructure—fast CPUs, accelerators, clustered systems, high-speed interconnects, and MPI—making it all work at scale.
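The overlap is concrete at the software level. The same allreduce collective that MPI-based simulations use for global sums is the primitive distributed AI training uses to average gradients across nodes. Below is a minimal sketch of the classic ring-allreduce pattern in plain Python—simulating the ranks in-process rather than using a real MPI library, purely to illustrate the data movement:

```python
def ring_allreduce(rank_vectors):
    """Simulate a ring allreduce: every 'rank' ends up holding the
    elementwise sum of all ranks' vectors, while each rank exchanges
    only one chunk per step with its neighbor -- the communication
    pattern used by both MPI codes and AI training libraries."""
    n = len(rank_vectors)            # number of simulated ranks
    length = len(rank_vectors[0])
    assert length % n == 0, "sketch assumes vector length divisible by rank count"
    c = length // n                  # chunk size
    bufs = [list(v) for v in rank_vectors]

    def sl(idx):
        return slice(idx * c, (idx + 1) * c)

    # Phase 1: reduce-scatter. After n-1 steps, rank r holds the
    # complete sum for chunk (r + 1) % n.
    for step in range(n - 1):
        # Snapshot outgoing chunks first: all ranks "send" simultaneously.
        sends = [(r, (r - step) % n, bufs[r][sl((r - step) % n)])
                 for r in range(n)]
        for r, idx, data in sends:
            dst = (r + 1) % n
            bufs[dst][sl(idx)] = [a + b for a, b in zip(bufs[dst][sl(idx)], data)]

    # Phase 2: allgather. Circulate the completed chunks so every
    # rank ends with the full summed vector.
    for step in range(n - 1):
        sends = [(r, (r + 1 - step) % n, bufs[r][sl((r + 1 - step) % n)])
                 for r in range(n)]
        for r, idx, data in sends:
            bufs[(r + 1) % n][sl(idx)] = data

    return bufs
```

Whether the chunks being summed are partial forces from a molecular-dynamics step or gradient shards from a training batch, the interconnect sees the same traffic pattern—which is exactly why the same class of infrastructure serves both.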

Soon, we won’t even be having this discussion. AI will be folded into most applications and workflows. We’re already seeing this happen in enterprise software and commercial environments.

Drug discovery, fusion research, materials science, and manufacturing are all traditional HPC domains—and all are being augmented with AI. At the same time, entirely new areas are emerging that require HPC-class infrastructure: personalized healthcare analytics, optimized agriculture, improved manufacturing efficiency, cybersecurity, and more.

Industry analysts (myself included) like to talk about spending and market size. But trying to separate “AI spending” from “HPC spending” today is mostly guesswork—an exercise in parsing financial disclosures and press releases, plus a fair amount of speculation.

AI Augments HPC — It Doesn’t Replace It

All of the work we’re doing today will continue—whether it’s modeling molecules, designing aircraft, or improving weather prediction.

But new workloads are emerging as well. Determining the optimal price for a flight, for example, isn’t traditional HPC—but it is AI, and it still requires high-performance infrastructure to meet accuracy and time-to-solution requirements.

The industry supporting HPC isn’t shrinking—it’s expanding. AI and AI-infused HPC have fundamentally similar requirements. As a result, data centers are being forced to rethink their infrastructure to support these new workloads at scale—and at a cost that doesn’t break the bank.

The real question isn’t whether AI is replacing HPC.

It’s how data centers are going to adapt to this shift.