Enterprise data centers will need to evolve toward supercomputing
As workloads become more demanding, efficiency across the entire system becomes the key to performance
The Data Center Perfect Storm
The forces reshaping modern data center infrastructure.
Enterprise data centers are entering a period of rapid, disruptive change — and many aren't prepared for it.
The rapid adoption of artificial intelligence is driving computational demands to levels that traditional enterprise infrastructure was never designed to handle. Power consumption, heat density, operational complexity, and costs are rising dramatically — creating what we call The Data Center Perfect Storm.
These challenges are not entirely new. High-performance computing (HPC) environments have been dealing with extreme compute density, aggressive power and cooling requirements, and intense efficiency pressure for decades.
What HPC organizations learned long ago is that simply throwing more hardware at the problem is not enough. Real progress requires system-wide efficiency gains across power, cooling, system architecture, workload management, and operations.
This site exists to help non-HPC data centers navigate this storm.
We explain the real problems in clear, practical terms, show what actually works to reduce their impact, and point you to the vendors and solutions that can help — all focused on making on-prem data centers efficient and viable for the long term.
The Explosion of Compute Demand
AI is no longer just a specialized workload — it is rapidly becoming embedded in everyday business and technical applications, analytics, customer experiences, and operational systems.
Unlike traditional enterprise workloads, which are typically predictable, bursty, and relatively modest in their compute needs, AI workloads differ in several critical ways:
- They are highly compute-intensive and require parallel processing to deliver timely results
- They scale poorly without the right hardware and software stack
- They create extreme power density and heat loads
- They require tight integration between compute, memory, and networking
Even modest AI adoption can dramatically increase power consumption, heat generation, storage demands, and network traffic. More advanced use cases — such as large language models, real-time inference, computer vision, and AI-driven modeling and simulation — push these demands much higher.
In other words, the workloads now emerging across enterprise data centers increasingly resemble the kinds of computational problems long associated with high-performance computing.
While AI workloads can technically run on almost any computer, training and operating modern models within practical timeframes requires highly parallel systems equipped with accelerators such as GPUs, large memory pools, high-speed interconnects, and fast storage. These architectural characteristics closely mirror the systems used for decades in scientific computing and supercomputing environments.
The core idea behind Olds Research is to help data centers apply the hard-won wisdom from supercomputing and radically increase the amount of useful compute per dollar spent.
The Power Problem
As workloads grow and become more computationally demanding, so does the electricity required to support them.
For many years, improvements in processor efficiency helped offset rising compute capability. That trend has now reversed. Modern CPUs, GPUs, and accelerators are pushing the limits of current semiconductor technology, consuming significantly more power than previous generations.
High-performance GPUs used for AI training and inference now operate at power levels that would have seemed extreme only a few years ago. Future generations are expected to push those limits even further. At the same time, high-speed networking, large memory configurations, and fast storage are also driving up power consumption inside modern servers.
The result is a rapid increase in power density. Rack configurations that once fit comfortably within traditional enterprise power envelopes are being replaced by systems that demand far more electrical capacity. In many environments, rack power levels once considered extreme are becoming the new normal.
One thing computers are exceptionally good at is converting electricity into heat. A lot of heat.
These trends have major implications for data center infrastructure. Power distribution, electrical capacity, and facility design are now critical considerations when deploying AI and other high-performance workloads.
Why Air Cooling Has Reached Its Limits
For decades, air cooling was sufficient for most enterprise data centers. It was simple, well-understood, and relatively inexpensive.
That era is ending.
Modern AI servers generate far more heat per square foot than traditional systems. A single high-density GPU rack can now produce heat loads that rival small industrial equipment. Air simply cannot remove this much heat efficiently enough to keep components within safe operating temperatures, especially as rack densities continue to climb.
The physics are unforgiving. Air has low thermal conductivity and low heat capacity. As power density increases, fans must spin faster, consuming more power and generating more noise. Eventually, you hit a wall where adding more fans or bigger heat sinks no longer provides meaningful improvement.
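The limits described above follow from a simple mass-and-energy balance: the airflow a rack needs scales linearly with its heat load. Here is a rough sketch of that calculation; the air density, specific heat, and example rack powers are illustrative assumptions, not figures from this article.

```python
# Rough airflow needed to remove a given heat load with air cooling.
# Assumes sea-level air: density ~1.2 kg/m^3, specific heat ~1005 J/(kg·K).

def required_airflow_cfm(heat_load_w: float, delta_t_c: float) -> float:
    """Volumetric airflow (CFM) needed so that air leaves the rack
    delta_t_c degrees warmer than it entered."""
    rho = 1.2       # kg/m^3, air density (assumed)
    cp = 1005.0     # J/(kg·K), specific heat of air
    m3_per_s = heat_load_w / (rho * cp * delta_t_c)  # energy balance
    return m3_per_s * 2118.88  # 1 m^3/s ≈ 2118.88 CFM

# A legacy 10 kW rack vs. a hypothetical 100 kW AI rack, 15 °C air rise:
print(round(required_airflow_cfm(10_000, 15)))   # → 1171
print(round(required_airflow_cfm(100_000, 15)))  # → 11713
```

Moving roughly ten times the air through the same rack footprint is where fans, noise, and fan power hit the wall the paragraph above describes.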
Many organizations are discovering that they can no longer rely on air cooling alone for their most demanding AI workloads. The transition to liquid cooling — whether direct-to-chip, rear-door heat exchangers, or immersion — is becoming a practical necessity rather than a future consideration.
We explore the different liquid cooling approaches, their trade-offs, and the vendors who can help implement them in the Cooling Technology section.
The New Cost Reality
The old assumption that computing would keep getting cheaper over time has broken down.
Each new generation of processors and accelerators is cheaper when measured per unit of compute. But with the advent of AI, the demand for compute has never been higher and is still increasing rapidly. At the same time, the absolute cost of the highest-end components has risen sharply, and power consumption has increased even more dramatically.
This shift is changing the economics of data centers in a fundamental way. What used to be dominated by capital expenses (the upfront cost of servers and accelerators) is increasingly driven by operating expenses — especially power and cooling. For example, on the CapEx side, here’s a quick history of top-end compute processors.
Server CPUs – Price and Power Comparison
- 2016/2017 era: Intel Xeon Platinum 8180 (28 cores) had an MSRP of ~$10,000 (street prices up to $13,000) with a TDP of 205 W.
- Current (2025/2026): Flagship Intel Xeon Platinum models (e.g., 6980P or similar high-core Granite Rapids) carry MSRPs of $12,000–$17,800+ with TDPs of 500 W.
Compute GPUs – Price and Power Comparison
- 2016/2017 era: NVIDIA Tesla P100 (data center compute GPU) launched at ~$5,700–$7,400 with a TDP of 250 W. The Tesla V100 followed shortly after at similar or higher pricing with 250–300 W TDP.
- Current (2025/2026): NVIDIA H100 (SXM) typically costs $35,000–$40,000+ with a TDP up to 700 W (PCIe versions are lower power but still significantly more expensive than previous generations).
Memory follows the same pattern. A high-end server in 2016 might have used 256–512 GB of DDR4 at a total memory power draw of ~100–200 W. Today’s AI-capable servers routinely carry 1–2 TB+ of DDR5, with the memory subsystem often consuming 300–600 W or more under load — even though DDR5 runs at a slightly lower voltage.
On the OpEx side of the ledger, power is now a major driver of total cost of ownership. Higher TDPs mean more electricity consumed and more heat generated, which must be removed by the cooling system. This creates a compounding effect on both power and cooling budgets.
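The compounding of power and cooling cost is easy to see with simple arithmetic: cooling overhead is commonly folded in by multiplying the IT load by a facility's PUE (power usage effectiveness). The rack power, PUE, and electricity rate below are illustrative assumptions, not figures from this article.

```python
# Back-of-the-envelope annual electricity cost for one rack, with
# cooling overhead captured by PUE. All inputs are assumed examples.

def annual_power_cost(rack_kw: float, pue: float, usd_per_kwh: float) -> float:
    """Yearly electricity cost: IT load x PUE x hours/year x rate."""
    hours_per_year = 8760
    return rack_kw * pue * hours_per_year * usd_per_kwh

# A 40 kW AI rack at PUE 1.5 and $0.10/kWh:
print(round(annual_power_cost(40, 1.5, 0.10)))  # → 52560
```

At those assumed rates, a single dense rack draws over $50,000 of electricity per year — and every additional watt of TDP pays that PUE multiplier again in cooling.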
Running a data center today is significantly more expensive than it was just a few years ago. Both the cost of modern AI-capable systems and the ongoing expenses for power and cooling have risen sharply.
And if you’re thinking that moving AI workloads to the public cloud is an easy escape from these realities, think again. Running sustained, moderate-to-high utilization workloads in the cloud is like selling your house to move into a hotel to save money — the numbers are very hard to make work over time.
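The house-versus-hotel point can be sketched with a simple amortization: spread the hardware's purchase price over its useful life, add hourly electricity (with cooling overhead), and compare against an on-demand cloud rate. Every number below — the GPU price, lifetime, power draw, PUE, electricity rate, and cloud price — is a placeholder assumption for illustration, not a quoted figure.

```python
# Hypothetical cloud-vs-on-prem comparison for a sustained GPU workload.
# All prices and rates are assumed placeholders, not vendor quotes.

def on_prem_cost_per_gpu_hour(capex: float, years: float,
                              power_kw: float, pue: float,
                              usd_per_kwh: float) -> float:
    """Amortized hardware cost plus electricity, per hour of ownership."""
    hours = years * 8760
    energy = power_kw * pue * usd_per_kwh  # $/hour for power + cooling
    return capex / hours + energy

# $35k GPU amortized over 4 years, 0.7 kW draw, PUE 1.4, $0.10/kWh:
onprem = on_prem_cost_per_gpu_hour(35_000, 4, 0.7, 1.4, 0.10)
cloud = 4.00  # assumed on-demand cloud price per GPU-hour
print(f"on-prem ${onprem:.2f}/hr vs cloud ${cloud:.2f}/hr")
```

Under these assumptions the on-prem GPU works out to roughly a quarter of the cloud rate — which is why the math only favors the cloud for bursty or low-utilization workloads, where you pay nothing during idle hours.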
What Supercomputing Already Learned
Many of the challenges now hitting enterprise data centers are not new — high-performance computing environments have been living with them for decades.
Supercomputing centers routinely manage extreme compute density, massive power demands, complex cooling requirements, and constant pressure to deliver maximum performance within limited budgets. In those environments, simply adding more hardware has never been a viable long-term strategy.
Instead, HPC organizations learned that real gains come from improving efficiency across the entire stack: better infrastructure design, advanced cooling, high-speed interconnects, optimized system architectures, intelligent workload scheduling, and relentless focus on utilization.
As AI workloads spread into mainstream enterprise IT, many organizations are now encountering the same physical and economic constraints that supercomputing centers have managed for years. Power availability and management, cooling capacity, infrastructure costs, and operational complexity are quickly becoming the dominant limiting factors.
The hard-won lessons from HPC — advanced cooling technologies, accelerator-based architectures, high-speed interconnects, sophisticated workload management, and a relentless drive for maximum efficiency — are becoming highly relevant to enterprise data centers.
The Infrastructure Efficiency Framework
Navigating the Data Center Perfect Storm requires more than simply deploying faster processors or larger systems. As computing demand grows and infrastructure constraints become more pronounced, organizations must focus on improving efficiency across the entire computing environment.
High-performance computing environments have long operated under this reality. Every watt of power, every unit of cooling capacity, and every cycle of compute time must be used as effectively as possible. Achieving that level of efficiency demands attention to multiple layers of the infrastructure stack.
At Olds Research, we examine these issues through a practical Infrastructure Efficiency Framework organized around six key areas:
- Data Center Assessment: Understanding how your existing infrastructure is being used, identifying inefficiencies, and finding opportunities to improve efficiency and free up capacity.
- Cooling Technology: Technologies and approaches for managing the dramatically increasing heat loads generated by modern AI systems.
- Power Infrastructure: Addressing rising electrical demands, reducing losses in power conversion and distribution, and improving overall power efficiency and capacity.
- Systems OEMs & Integrators: How to select the right partners and system-level designs to support high-performance, AI-driven workloads efficiently.
- System Components: Evaluating accelerators, composable/disaggregated infrastructure, and other critical components that determine real capability and efficiency.
- Operational Efficiency: Optimizing how compute resources are scheduled, monitored, and managed to maximize useful work delivered.
Each of these areas plays a vital role in helping organizations adapt their data centers to the demands of AI and other high-intensity workloads. Throughout this site we explore the practical technologies, operational practices, and vendor solutions that can make a real difference.
Looking Ahead
The forces shaping modern data centers are not temporary. Artificial intelligence, data-intensive applications, and increasingly complex digital services continue to drive compute demand at unprecedented levels. At the same time, the physical realities of power consumption, heat generation, and infrastructure cost are becoming impossible to ignore.
For many organizations, this represents a fundamental shift in how computing infrastructure must be designed and operated. Systems will become denser, power requirements will grow, and overall efficiency will determine how much useful work a data center can actually deliver.
In many ways, the future of enterprise data centers will look a lot like the environments the high-performance computing community has been managing for years. The technologies, architectural approaches, and operational practices developed in HPC provide valuable guidance for organizations navigating this transition.
The goal of Olds Research is simple: to help you understand these changes, explore the technologies that matter, and identify the practical solutions that can help your data center adapt and thrive.
Throughout this site we break down the key areas — assessment, cooling, power, systems, components, and operational efficiency — and show what actually works in real-world environments. We’re not picking winners and losers here. We present curated lists with clear differentiators and best-fit guidance based on our experience, along with direct links for further exploration.
The Data Center Perfect Storm is already here. The organizations that will succeed are those that recognize the shift happening now and begin preparing their infrastructure for the next generation of computing.
Worth a Look
According to researchers at Northwestern, fake science is spreading faster than real science — and they have the receipts to back it up. This is NOT good.