Why “Chiller-Free” AI Data Centers Are a Myth Even in the Age of Liquid-Cooled GPUs
Recent headlines around NVIDIA’s Vera Rubin architecture have sparked bold claims: AI chips that can run on cooling water so warm that chillers are no longer needed. It’s a compelling story, but it’s also incomplete.
While next-generation GPUs are indeed pushing thermal boundaries, the idea that data centers can abandon chillers altogether ignores how real facilities operate, how risk is managed, and how cooling systems interact with the broader power and mechanical ecosystem. The truth is simpler and more grounded: even the most advanced liquid-cooled AI data centers still need chillers.
Do liquid-cooled GPUs eliminate the need for chillers?
No. Even with direct-to-chip liquid cooling, chillers remain essential infrastructure.
Direct-to-chip liquid cooling removes heat efficiently from GPUs and CPUs, but those components are only part of the data center thermal load. Cold plates capture most, but not all, of a server’s heat; the remainder is rejected into the room air. The data hall itself must still be cooled, along with networking equipment, storage, power electronics, and auxiliary systems. CRAH (Computer Room Air Handler) and CRAC (Computer Room Air Conditioner) units are still required to maintain safe operating conditions, and these systems rely on chilled water.
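To make the split concrete, here is a minimal back-of-envelope sketch. The data-hall IT load, the share of IT on cold plates, and the cold-plate capture fraction are all assumed figures for illustration, not measurements from any particular facility or chip generation:

```python
# Back-of-envelope split of facility heat between the liquid loop and room air.
# All figures below are illustrative assumptions, not vendor specifications.

it_load_kw = 10_000              # assumed total IT load for a single data hall (10 MW)
liquid_cooled_fraction = 0.80    # assumed share of IT load served by direct-to-chip cold plates
cold_plate_capture = 0.85        # assumed share of a liquid-cooled server's heat the cold plates capture

# Heat that the liquid loop carries away
liquid_loop_kw = it_load_kw * liquid_cooled_fraction * cold_plate_capture

# Everything else still ends up in the room air: air-cooled IT (network, storage),
# plus the residual heat that escapes the cold plates into the chassis.
air_side_kw = it_load_kw - liquid_loop_kw

print(f"Heat rejected via the liquid loop: {liquid_loop_kw:,.0f} kW")
print(f"Heat still handled by CRAH/CRAC units: {air_side_kw:,.0f} kW")
# -> roughly 3,200 kW of air-side load that still needs chilled water in this scenario
```

Even with generous assumptions about liquid-cooling coverage, megawatts of heat still land on the air side, and that is exactly the load the chilled-water CRAH and CRAC plant has to absorb.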
In addition, power generation infrastructure inside the data center, such as gas turbines in on-site power plants, also requires chiller-based cooling, for example turbine inlet air chilling, to avoid sharp efficiency and output losses in high ambient temperatures. Liquid cooling at the chip level does not remove the need for mechanical cooling elsewhere in the facility.
Didn’t Blackwell already prove chips can run on warm water?
Yes—and that’s exactly the point.
NVIDIA’s Blackwell generation can operate at water temperatures above 40°C. Yet despite this capability, most data centers deploying Blackwell still design for 25°C to 30°C supply water temperatures. Why? Because chip tolerance is only one variable in a complex system.
Higher water temperatures introduce trade-offs across the facility: larger heat exchangers, increased airflow requirements, higher material stress, and reduced flexibility across mixed IT loads. Many operators find that traditional chiller-based architectures are still more space-efficient, operationally resilient, and easier to standardize than so-called “chiller-less” designs.
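One way to see the heat exchanger trade-off is a deliberately simplified sizing sketch. It uses a constant driving temperature difference rather than a full log-mean calculation, and every number (load, heat-transfer coefficient, hot-side temperature) is an assumption for illustration:

```python
# Why warmer supply water means bigger heat exchangers: for a fixed heat load Q
# and overall heat-transfer coefficient U, required surface area scales as 1 / deltaT.
# Figures are illustrative assumptions only.

q_kw = 1_000                  # assumed heat load on one coolant distribution unit (kW)
u_kw_per_m2_k = 2.5           # assumed overall heat-transfer coefficient (kW per m^2 per K)
server_return_temp_c = 55.0   # assumed hot-side temperature the exchanger sees

for supply_temp_c in (25.0, 30.0, 40.0):
    delta_t = server_return_temp_c - supply_temp_c     # driving temperature difference
    area_m2 = q_kw / (u_kw_per_m2_k * delta_t)         # A = Q / (U * deltaT)
    print(f"Supply water {supply_temp_c:.0f} C -> approx. {area_m2:.0f} m^2 of exchanger surface")
# Moving from 25 C to 40 C supply water roughly doubles the required surface area here.
```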
In short, just because chips can run hotter doesn’t mean the data center should.
What happens when ambient temperatures spike?
This is where theory meets reality.
In markets like Texas, Arizona, the Middle East, India, and Southeast Asia, outside air temperatures regularly approach or exceed 40°C. Even if chillers only run for short periods each year, not having them at all is a massive operational risk.
Extreme heat events don’t politely align with load forecasts. They arrive during peak demand and grid stress, often in the middle of critical AI workloads. A data center designed without chillers is betting its uptime on the assumption that ambient conditions will stay within narrow margins. That’s not resilience; that’s hope.
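A simple way to see the exposure is to compare the water temperature a chiller-less, dry-cooler-only design can actually deliver against what the facility needs on a hot day. The approach temperature and design supply temperature below are assumptions for illustration:

```python
# Why "no chillers" becomes a gamble on the weather: a dry cooler can only return
# water a few degrees warmer than the outside air (its approach temperature).
# Figures are illustrative assumptions only.

required_supply_c = 30.0   # assumed facility water supply temperature the design needs
approach_c = 5.0           # assumed dry-cooler approach above ambient dry-bulb

for ambient_c in (20.0, 35.0, 42.0):
    achievable_supply_c = ambient_c + approach_c
    ok = achievable_supply_c <= required_supply_c
    status = "free cooling works" if ok else "mechanical (chiller) cooling required"
    print(f"Ambient {ambient_c:.0f} C -> coolest achievable water {achievable_supply_c:.0f} C: {status}")
# At 42 C ambient the loop cannot get below ~47 C without a chiller,
# far above the 25-30 C supply temperatures most deployments still design for.
```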
Chillers are not just about efficiency; they are about insurance against worst-case scenarios.
Why traditional electric chillers are becoming a power problem
As AI density increases, electrical power becomes the scarcest resource in the data center. Ironically, the fewer hours chillers are needed, the more wasteful their reserved electrical capacity becomes.
Electrical infrastructure must still be sized for peak cooling loads even if those loads occur only a fraction of the year. That means megawatts of grid capacity sit idle most of the time, capacity that could otherwise be allocated to revenue-generating IT.
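A rough sketch of the stranded-capacity math, with the chiller plant’s peak draw and annual run hours assumed purely for illustration:

```python
# How much grid capacity sits reserved for cooling that rarely runs.
# Figures are illustrative assumptions only.

peak_chiller_demand_mw = 8.0     # assumed electrical draw of the chiller plant on a design day
chiller_hours_per_year = 400     # assumed hours per year the chillers actually run near peak
hours_per_year = 8760

utilization = chiller_hours_per_year / hours_per_year
print(f"Chiller plant peak demand: {peak_chiller_demand_mw:.1f} MW of reserved grid capacity")
print(f"Approximate utilization of that reservation: {utilization:.1%}")
# The utility interconnect, switchgear, and backup generation must all be sized for the
# 8 MW peak even though it is drawn for less than 5% of the hours in a year.
```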
This is where conventional thinking breaks down and where Tecogen steps in.
How Tecogen’s dual-power chillers change the equation
Tecogen’s dual-power source chillers fundamentally reframe the cooling conversation.
Instead of consuming valuable electrical capacity, Tecogen chillers operate primarily on natural gas, freeing up electrical power for AI compute. When mechanical cooling is required, whether for CRAHs, CRACs, or on-site power generation, those loads can be shifted off the grid without compromising performance or reliability.
For AI-driven data centers, this delivers three critical advantages:
- More power for IT: Electrical capacity previously reserved for chillers can be redeployed to high-density racks.
- Reduced grid dependency: Facilities become more resilient during grid congestion, heat waves, and utility delays.
- Future-proof flexibility: As chip architectures evolve, operators retain mechanical cooling without locking themselves into inefficient electrical designs.
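As a rough sketch of the first advantage, here is what shifting an electric chiller plant’s peak draw off the grid can mean for the IT budget. The interconnect size, chiller demand, and rack density are assumptions for illustration, parasitic pump and control loads are ignored, and none of these are Tecogen performance figures:

```python
# Rough sketch of how shifting the chiller plant from electric drive to natural gas
# drive frees grid capacity for compute. Figures are illustrative assumptions only.

site_grid_capacity_mw = 60.0      # assumed utility interconnect limit for the site
electric_chiller_peak_mw = 8.0    # assumed peak electrical draw of an electric chiller plant
rack_density_kw = 120.0           # assumed per-rack power for high-density AI racks (kW)

# With electric chillers, that 8 MW must be held back from the IT budget.
it_budget_electric_mw = site_grid_capacity_mw - electric_chiller_peak_mw
# With gas-driven chillers, the cooling load moves off the electrical bus.
it_budget_gas_mw = site_grid_capacity_mw

extra_racks = (it_budget_gas_mw - it_budget_electric_mw) * 1000 / rack_density_kw
print(f"IT budget with electric chillers: {it_budget_electric_mw:.0f} MW")
print(f"IT budget with gas-driven chillers: {it_budget_gas_mw:.0f} MW")
print(f"Roughly {extra_racks:.0f} additional {rack_density_kw:.0f} kW racks within the same interconnect")
```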
The bottom line
Liquid cooling is a powerful advancement, but it is not a silver bullet. AI data centers are complex systems, not chip demos. Chillers are still required, risk still exists, and power constraints are only tightening.
The winners in the AI era won’t be those who eliminate chillers at all costs but those who deploy them intelligently.
Tecogen enables exactly that. Contact us for a free assessment.