Enterprise data centers will need to evolve toward supercomputing

As workloads become more demanding, efficiency across the entire system becomes the key to performance

The Data Center Perfect Storm

The forces reshaping modern data center infrastructure.

Enterprise data centers are entering a period of rapid, disruptive change, and many aren't prepared for it.

The rapid adoption of artificial intelligence is driving computational demands to levels that traditional enterprise infrastructure was never designed to handle. Power consumption, heat density, operational complexity, and costs are rising dramatically — creating what we call The Data Center Perfect Storm.

These challenges are not entirely new. High-performance computing (HPC) environments have been dealing with extreme compute density, aggressive power and cooling requirements, and intense efficiency pressure for decades.

What HPC organizations learned long ago is that simply throwing more hardware at the problem is not enough. Real progress requires system-wide efficiency gains across power, cooling, system architecture, workload management, and operations.

This site exists to help non-HPC data centers navigate this storm.

We explain the real problems in clear, practical terms, show what actually works to reduce their impact, and point you to the vendors and solutions that can help — all focused on making on-prem data centers efficient and viable for the long term.

The Explosion of Compute Demand

AI is no longer just a specialized workload — it is rapidly becoming embedded in everyday business and technical applications, analytics, customer experiences, and operational systems.

Unlike traditional enterprise workloads, which are typically predictable, bursty, and relatively modest in their compute needs, AI workloads differ in several critical ways:

  • They are highly compute-intensive and require parallel processing to deliver timely results
  • They scale poorly without the right hardware and software stack
  • They create extreme power density and heat loads
  • They require tight integration between compute, memory, and networking

Even modest AI adoption can dramatically increase power consumption, heat generation, storage demands, and network traffic. More advanced use cases — such as large language models, real-time inference, computer vision, and AI-driven modeling and simulation — push these demands much higher.

In other words, the workloads now emerging across enterprise data centers increasingly resemble the kinds of computational problems long associated with high-performance computing.

While AI workloads can technically run on almost any computer, training and operating modern models within practical timeframes requires highly parallel systems equipped with accelerators such as GPUs, large memory pools, high-speed interconnects, and fast storage. These architectural characteristics closely mirror the systems used for decades in scientific computing and supercomputing environments.

The core idea behind Olds Research is to help data centers apply the hard-won wisdom from supercomputing and radically increase the amount of useful compute per dollar spent.

The Power Problem

As workloads grow in both volume and computational intensity, so does the electricity required to support them.

For many years, improvements in processor efficiency helped offset rising compute capability. That trend has now reversed. Modern CPUs, GPUs, and accelerators are pushing the limits of current semiconductor technology, consuming significantly more power than previous generations.

High-performance GPUs used for AI training and inference now operate at power levels that would have seemed extreme only a few years ago. Future generations are expected to push those limits even further. At the same time, high-speed networking, large memory configurations, and fast storage are also driving up power consumption inside modern servers.

The result is a rapid increase in power density. Rack configurations that once fit comfortably within traditional enterprise power envelopes are being replaced by systems that demand far more electrical capacity. In many environments, rack power levels once considered extreme are becoming the new normal.
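To make the density shift concrete, here is a minimal back-of-envelope sketch. The per-server wattages are illustrative assumptions, not vendor-quoted figures: roughly 10 kW for an 8-GPU AI server versus roughly 0.7 kW for a traditional dual-socket enterprise server.

```python
# Rough rack power comparison. Per-server figures are assumptions for
# illustration: ~10 kW for an 8-GPU AI server, ~0.7 kW for a traditional
# dual-socket enterprise server.

AI_SERVER_KW = 10.0          # assumed: eight ~700 W GPUs plus CPUs, memory, fans
TRADITIONAL_SERVER_KW = 0.7  # assumed: typical dual-socket enterprise server

def rack_power_kw(servers_per_rack: int, kw_per_server: float) -> float:
    """Total electrical load for one rack."""
    return servers_per_rack * kw_per_server

traditional = rack_power_kw(20, TRADITIONAL_SERVER_KW)  # 20 servers -> 14 kW
ai_rack = rack_power_kw(4, AI_SERVER_KW)                # 4 servers  -> 40 kW

print(f"Traditional rack: {traditional:.0f} kW")
print(f"AI rack:          {ai_rack:.0f} kW")
```

Even a partially populated AI rack can draw several times the power of a fully loaded traditional rack, which is why electrical capacity planning now comes first.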

One thing computers are exceptionally good at is converting electricity into heat. A lot of heat.

These trends have major implications for data center infrastructure. Power distribution, electrical capacity, and facility design are now critical considerations when deploying AI and other high-performance workloads.

Why Air Cooling Has Reached Its Limits

For decades, air cooling was sufficient for most enterprise data centers. It was simple, well-understood, and relatively inexpensive.

That era is ending.

Modern AI servers generate far more heat per square foot than traditional systems. A single high-density GPU rack can now produce heat loads that rival small industrial equipment. Air simply cannot remove this much heat efficiently enough to keep components within safe operating temperatures, especially as rack densities continue to climb.

The physics are unforgiving. Air has low thermal conductivity and low heat capacity. As power density increases, fans must spin faster, consuming more power and generating more noise. Eventually, you hit a wall where adding more fans or bigger heat sinks no longer provides meaningful improvement.
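The wall can be sketched with the basic heat-transfer relation Q = ρ · V · c_p · ΔT. The constants below are standard values for air, and the 15 K inlet-to-outlet temperature rise is an assumed design point:

```python
# Back-of-envelope airflow required to remove rack heat with air alone,
# from Q = rho * V * cp * dT. Air constants are standard values; the 15 K
# inlet/outlet delta-T is an assumed design point.

AIR_DENSITY = 1.2       # kg/m^3, near sea level
AIR_CP = 1005.0         # J/(kg*K), specific heat of air
CFM_PER_M3S = 2118.88   # cubic feet per minute in one m^3/s

def airflow_cfm(heat_kw: float, delta_t_k: float = 15.0) -> float:
    """Volumetric airflow (CFM) needed to carry away heat_kw at a given delta-T."""
    m3_per_s = (heat_kw * 1000.0) / (AIR_DENSITY * AIR_CP * delta_t_k)
    return m3_per_s * CFM_PER_M3S

for kw in (10, 40, 100):
    print(f"{kw:>3} kW rack -> ~{airflow_cfm(kw):,.0f} CFM")
```

Under these assumptions a 40 kW rack needs on the order of 4,700 CFM of air, several times what typical enterprise rack fans and raised-floor plenums were designed to move. Water, by contrast, carries roughly 3,500 times more heat per unit volume, which is why liquid cooling wins at high density.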

Many organizations are discovering that they can no longer rely on air cooling alone for their most demanding AI workloads. The transition to liquid cooling — whether direct-to-chip, rear-door heat exchangers, or immersion — is becoming a practical necessity rather than a future consideration.

We explore the different liquid cooling approaches, their trade-offs, and the vendors who can help implement them in the Cooling Technology section.

The New Cost Reality

The old assumption that computing would keep getting cheaper over time has broken down.

Each new generation of processors and accelerators still delivers more compute per dollar. But with the advent of AI, demand for compute has never been higher and is growing faster than those per-unit gains. At the same time, the absolute cost of the highest-end components has risen sharply, and their power consumption has increased even more dramatically.

This shift is changing the economics of data centers in a fundamental way. What used to be dominated by capital expenses (the upfront cost of servers and accelerators) is increasingly driven by operating expenses — especially power and cooling. For example, on the CapEx side, here’s a quick history of top-end compute processors.

Server CPUs – Price and Power Comparison

  • 2016/2017 era: Intel Xeon Platinum 8180 (28 cores) had an MSRP of ~$10,000 (street prices up to $13,000) with a TDP of 205 W.
  • Current (2025/2026): Flagship Intel Xeon Platinum models (e.g., 6980P or similar high-core Granite Rapids) carry MSRPs of $12,000–$17,800+ with TDPs of 500 W.

Compute GPUs – Price and Power Comparison

  • 2016/2017 era: NVIDIA Tesla P100 (data center compute GPU) launched at ~$5,700–$7,400 with a TDP of 250 W. The Tesla V100 followed shortly after at similar or higher pricing with 250–300 W TDP.
  • Current (2025/2026): NVIDIA H100 (SXM) typically costs $35,000–$40,000+ with a TDP up to 700 W (PCIe versions are lower power but still significantly more expensive than previous generations).

Memory follows the same pattern. A high-end server in 2016 might have used 256–512 GB of DDR4 at a total memory power draw of ~100–200 W. Today’s AI-capable servers routinely carry 1–2 TB+ of DDR5, with the memory subsystem often consuming 300–600 W or more under load — even though DDR5 runs at a slightly lower voltage.

On the OpEx side of the ledger, power is now a major driver of total cost of ownership. Higher TDPs mean more electricity consumed and more heat generated, which must be removed by the cooling system. This creates a compounding effect on both power and cooling budgets.
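That compounding effect is easy to quantify. The sketch below estimates three-year electricity cost per accelerator; the PUE of 1.5, $0.12/kWh rate, and 70% utilization are all assumptions chosen for illustration:

```python
# Three-year electricity cost for a single accelerator, including cooling and
# distribution overhead via PUE. The PUE, electricity rate, and utilization
# figures are illustrative assumptions, not measured values.

HOURS_PER_YEAR = 8760

def energy_cost(watts: float, years: float, pue: float = 1.5,
                usd_per_kwh: float = 0.12, utilization: float = 0.7) -> float:
    """Electricity cost in USD, scaling IT power by PUE for facility overhead."""
    kwh = (watts / 1000.0) * HOURS_PER_YEAR * years * utilization * pue
    return kwh * usd_per_kwh

gpu_250w = energy_cost(250, 3)   # roughly a 2016-era accelerator
gpu_700w = energy_cost(700, 3)   # roughly a current-generation accelerator
print(f"250 W GPU, 3 years: ${gpu_250w:,.0f}")
print(f"700 W GPU, 3 years: ${gpu_700w:,.0f}")
```

Under these assumptions, the jump from 250 W to 700 W nearly triples the lifetime electricity bill per device, and multiplying by thousands of accelerators shows why OpEx now rivals CapEx in planning discussions.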

Running a data center today is significantly more expensive than it was just a few years ago. Both the cost of modern AI-capable systems and the ongoing expenses for power and cooling have risen sharply, making efficiency improvements essential for controlling total cost of ownership.

What Supercomputing Already Learned

Many of the challenges now emerging in enterprise data centers are not new. High-performance computing environments have been operating under similar constraints for decades. Supercomputing centers routinely manage extremely dense computing systems, massive power requirements, complex cooling challenges, and constant pressure to maximize performance within limited budgets.

In these environments, simply adding more hardware has never been a sustainable solution. Supercomputing organizations learned long ago that achieving meaningful gains in capability requires improving efficiency across the entire computing environment. Infrastructure design, cooling systems, system architecture, interconnect performance, workload scheduling, and overall system utilization all play critical roles in determining how much useful computing work can be delivered.

As AI workloads spread across enterprise IT, many organizations are beginning to encounter the same physical and economic constraints that supercomputing centers have been managing for years. Power availability, cooling capacity, and infrastructure costs are becoming key factors that influence how computing systems are designed and deployed.

The technologies and operational practices developed within the HPC community provide valuable lessons for organizations navigating these challenges. High-speed interconnects, accelerator-based architectures, advanced cooling technologies, workload management systems, and a relentless focus on efficiency have long been essential components of successful supercomputing environments.

As enterprise data centers evolve to support AI-driven workloads, many of these same technologies and practices are becoming increasingly relevant outside the traditional HPC world.

The Infrastructure Efficiency Framework

Navigating the Data Center Perfect Storm requires more than simply deploying faster processors or larger systems. As computing demand grows and infrastructure constraints become more pronounced, organizations must focus on improving efficiency across the entire computing environment.

In high-performance computing environments, this approach has long been essential. Every watt of power, every unit of cooling capacity, and every cycle of compute time must be used as effectively as possible. Achieving that level of efficiency requires attention to many different aspects of the computing infrastructure, from the physical design of the data center to the way workloads are scheduled and executed.

As enterprise data centers adapt to the demands of AI-driven computing, these same considerations are becoming increasingly important. Organizations must understand how their existing infrastructure is being used, how efficiently their systems operate, and where improvements can be made to support new workloads.

At Olds Research we examine these issues across several key areas of infrastructure efficiency:

Data Center Assessment
Understanding how existing infrastructure is being used and identifying opportunities to improve efficiency and free up capacity.

Cooling
Technologies and approaches for managing dramatically increasing heat loads generated by modern computing systems.

Systems & Integrators
Architectures and integration approaches designed to support high-performance AI and data-intensive workloads.

System Components
Critical technologies—including accelerators, interconnects, memory, and storage—that determine system capability and efficiency.

Operational Efficiency
How compute resources are actually used and optimized—covering application profiling, workload management, utilization monitoring, policy control, and throughput and outcome tracking.

Infrastructure & Power Management
Facility-level technologies and strategies that support modern compute densities while managing energy consumption and operational costs.

Each of these areas plays an important role in helping organizations adapt their data centers to the demands of modern computing. Throughout this site we explore the technologies, operational practices, and vendor ecosystems that are shaping this transformation.

Looking Ahead

The forces shaping modern data centers are not temporary. Artificial intelligence, data-intensive applications, and increasingly complex digital services are continuing to drive demand for computing at unprecedented levels. At the same time, the physical realities of power consumption, heat generation, and infrastructure cost are becoming impossible to ignore.

For many organizations, this represents a fundamental shift in how computing infrastructure must be designed and operated. Systems will become denser, power requirements will grow, and the efficiency of the entire computing environment will become a critical factor in determining how much useful work a data center can deliver.

In many ways, the future of enterprise data centers will resemble environments that the high-performance computing community has been operating for years. The technologies, architectural approaches, and operational practices developed in those environments provide valuable guidance for organizations navigating this transition.

The goal of Olds Research is to help explain these changes, explore the technologies that are shaping them, and highlight the companies and ideas driving the evolution of modern data center infrastructure.

In each area, we examine the underlying technologies, how they improve efficiency, and where they are being applied in real-world environments. We also include curated lists of vendors with relevant solutions, along with direct links for further exploration.

The Data Center Perfect Storm is already underway. The organizations that adapt most effectively will be those that understand the changes happening now and begin preparing their infrastructure for the next generation of computing.
