Hybrid Liquid Cooling Is the O...
Author: Dr. Kelley Mullick
The world is rushing headlong into an AI-driven future, yet the infrastructure to support it is still grappling with last-generation cooling methods, and that mismatch is beginning to threaten our momentum. As someone who has spent two decades bridging the gap between engineering and business strategy in data centers, I’m convinced we need to adopt a liquid-cooling “hybrid first” mindset now if AI-scale compute is going to remain sustainable.
Over the past two years, we’ve seen demand for AI compute surge in ways few anticipated. Models are larger, clusters are denser, and racks are drawing tens of kilowatts of power each, well beyond the design limits of traditional air-cooled architecture. Those air-cooled facilities are steadily giving way to high-density configurations: many now support rack densities exceeding 20 kW, and advanced deployments approach 240 kW per rack.
Meanwhile, the liquid-cooling methods that were once niche are becoming core. The proportion of data centers planning to use liquid cooling is set to rise from just over 20% in early 2024 to nearly 40% by 2026. These are the structural shifts that demand new thinking.
I believe the best path forward is to deploy a hybrid architecture that combines the maturity of single-phase cold plates with the scalability of immersion cooling, packaged in modular data centers that support heat reuse and enable rapid deployment. Single-phase cold plates, already widely adopted, offer reliability and familiarity. Immersion cooling delivers higher density, uniform heat extraction, and the ability to recapture a significant share of waste heat. By combining the two, we get a solution that’s high-performing, scalable, and sustainable. Furthermore, infrastructure designed to also accommodate two-phase cold plate integration future-proofs the data center for AI.
Some might argue that immersion cooling is still too new for large-scale commercial rollouts: standards are inconsistent, costs remain high, and retrofitting air-cooled facilities is complex. Similarly, two-phase cold plate solutions are not deployed at scale today, but given the exponential growth in the thermal design power of IT equipment, they cannot be ignored. Research into fluids supporting two-phase liquid cooling, and into infrastructure that can be integrated into the data center, is happening in parallel as the industry shifts to liquid.
Some data center providers might think that staying with enhanced air cooling, single-phase cold plate solutions, or rear-door heat exchangers for now is the safest option. On the surface, that seems reasonable. But in my experience, what feels safe today can turn out to be costly tomorrow.
Liquid cooling is rapidly gaining traction as a more effective solution for managing heat in high-performance computing environments. In fact, the global liquid cooling market is projected to grow at a compound annual growth rate (CAGR) of over 20% through 2030, signaling a fundamental shift.
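To put that growth rate in perspective, here is a quick compound-growth calculation (a minimal sketch; the 20% figure and endpoint year come from the projection above, and I assume the rate holds steadily):

```python
# Compound annual growth: size_n = size_0 * (1 + rate) ** years
cagr = 0.20          # the projected 20%+ CAGR, taken at its floor
years = 2030 - 2024  # six compounding periods

multiple = (1 + cagr) ** years
print(f"Market multiple over {years} years at {cagr:.0%} CAGR: {multiple:.2f}x")
# -> ~2.99x: even at the low end, the market roughly triples by 2030
```

In other words, a 20% CAGR is not incremental growth; it is a near-tripling of the market in six years.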
Here’s why the hybrid liquid approach comes out ahead. First, as AI racks become increasingly dense, they require cooling capabilities that air alone can no longer reliably provide. Liquid cooling solutions enable higher compute densities without a proportional increase in floor space, fan power, or water consumption.
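A back-of-the-envelope heat-transport comparison makes that concrete. The sketch below is illustrative only (the 20 kW rack load, 10 °C coolant temperature rise, and fluid properties are my assumed values, not figures from this article); it uses Q = ṁ·cp·ΔT to compare how much air versus water must move to carry the same rack heat load:

```python
# Heat transport: Q = m_dot * cp * dT  ->  m_dot = Q / (cp * dT)
# Illustrative air-vs-water comparison for one dense rack (assumed values).

Q = 20_000.0  # rack heat load in watts (assumed 20 kW)
dT = 10.0     # coolant temperature rise in kelvin (assumed)

cp_air, rho_air = 1_005.0, 1.2        # J/(kg*K), kg/m^3
cp_water, rho_water = 4_186.0, 997.0  # J/(kg*K), kg/m^3

m_air = Q / (cp_air * dT)      # ~2.0 kg/s of air
m_water = Q / (cp_water * dT)  # ~0.48 kg/s of water

vol_air_cfm = (m_air / rho_air) * 2118.88     # m^3/s -> cubic feet per minute
vol_water_lps = (m_water / rho_water) * 1000  # m^3/s -> litres per second

print(f"Air:   {m_air:.2f} kg/s (~{vol_air_cfm:.0f} CFM)")
print(f"Water: {m_water:.2f} kg/s (~{vol_water_lps:.2f} L/s)")
```

Water’s higher specific heat and density mean a modest pumped loop replaces thousands of cubic feet of air per minute, and the gap widens linearly as racks climb toward 100+ kW.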
Second, there’s a critical sustainability dimension. Many liquid-cooled systems are designed to capture and reuse waste heat for district heating, greenhouse operations, or industrial processes, a benefit simply not achievable when relying solely on large volumes of ambient air. In regions where water is scarce or energy costs are high, this capability offers a decisive advantage.
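Even conservative inputs make the reuse case compelling. As a sketch (every number here is my own illustrative assumption, not a figure from this article), consider a 1 MW data hall whose liquid loops capture 70% of its waste heat:

```python
# Annual recoverable heat from a liquid-cooled data hall (assumed values).
it_load_mw = 1.0        # assumed IT load in megawatts
capture_fraction = 0.7  # assumed share of heat recoverable via liquid loops
hours_per_year = 8_760  # continuous operation

recovered_mwh = it_load_mw * capture_fraction * hours_per_year
print(f"Recoverable heat: {recovered_mwh:,.0f} MWh thermal per year")
# -> ~6,100 MWh/yr of heat that district systems or greenhouses can offset
```

That is thermal energy a district heating network or greenhouse would otherwise have to generate by burning fuel, which is exactly the decisive advantage in energy-constrained regions.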
The risk of not making the shift is real. If we continue scaling air-cooled infrastructure, or narrow options that focus solely on direct-to-chip cooling for AI, we will face escalating operational costs, water and power bottlenecks, and limits to expansion. Traditional air cooling maxes out around 15–20 kW per rack, but high-density AI workloads will push far beyond that. At that point, infrastructure becomes the bottleneck for innovation.
Consider this. Transitioning to liquid cooling is like the shift from internal combustion to hybrid electric vehicles. Early hybrid cars paired familiar engine technology with battery systems to ease the leap. We are at a similar inflection point in data centers. Instead of ripping and rebuilding, we can integrate what we trust (cold plates) with what will carry us forward (immersion cooling) in modular, flexible units that can be deployed in urban and edge locations, closer to users, and enable heat reuse.
What should decision-makers do now? They need to build infrastructure that’s liquid-ready, including support for direct-to-chip cold plates as well as plumbing, power, and thermal loops designed for future immersion deployment. Moreover, they should explore modular data centers that combine both cooling methods and support rapid deployment where legacy infrastructure is limited. They must forge partnerships with heat-reuse stakeholders: greenhouses, industrial heating systems, or district energy providers. And they should engage in industry standards efforts to ensure plug-and-play plumbing, common rack form factors, and coolant distribution units that support multiple types of liquid cooling. We are entering the era of heterogeneous compute, where the components within a rack are interoperable and the data hall supports multiple types of advanced cooling solutions.
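To make that planning tangible, here is a hypothetical capacity-model sketch (the 20-rack hall, 80 kW racks, 80/20 cold-plate/immersion split, and 10 K temperature rise are all assumptions for illustration, not a prescribed design); it apportions each rack’s heat load between the cold-plate loop and the immersion side and sizes the corresponding coolant distribution unit (CDU) flow:

```python
from dataclasses import dataclass

@dataclass
class Rack:
    power_kw: float          # total rack heat load
    cold_plate_share: float  # fraction removed by direct-to-chip cold plates

def cdu_flow_lpm(heat_kw: float, dt_k: float = 10.0) -> float:
    """Water flow (L/min) needed to remove heat_kw at a dt_k temperature rise."""
    cp_water = 4.186  # kJ/(kg*K)
    kg_per_s = heat_kw / (cp_water * dt_k)
    return kg_per_s * 60  # water is ~1 kg per litre

# Hypothetical hall: 20 racks at 80 kW; cold plates take 80% of the load,
# immersion (or rear-door exchangers during transition) absorbs the rest.
racks = [Rack(power_kw=80.0, cold_plate_share=0.8) for _ in range(20)]

cold_plate_kw = sum(r.power_kw * r.cold_plate_share for r in racks)
immersion_kw = sum(r.power_kw * (1 - r.cold_plate_share) for r in racks)

print(f"Cold-plate loop: {cold_plate_kw:.0f} kW -> "
      f"{cdu_flow_lpm(cold_plate_kw):.0f} L/min of CDU flow")
print(f"Immersion side:  {immersion_kw:.0f} kW")
```

Even a toy model like this forces the right questions early: how much flow the plumbing and CDUs must carry on day one, and how the split shifts as immersion-ready racks arrive.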
The impact of this approach is substantial. Organizations that embrace hybrid liquid cooling alongside the reality of heterogeneous compute will unlock higher compute density, reduce energy and water usage, accelerate deployment cycles, and strengthen sustainability credentials. Regions that previously lacked hyperscale infrastructure due to power, water, or space constraints can outpace legacy deployments and host AI clusters using modular, liquid-cooled units. Ultimately, the computing backbone of the AI era becomes more agile, more efficient, and far more environmentally responsible.
AI is heating up, and the wrong cooling strategy will constrain its future. Air cooling alone is a bottleneck we cannot afford. Hybrid liquid cooling is mission-critical. If we want AI to scale smarter, faster, and cleaner, we need to build infrastructure that keeps pace. The future won’t wait for our hesitations, and neither should we.
About the Author
Dr. Kelley Mullick is the founder and CEO of Avayla, a consultancy specializing in AI infrastructure strategy, data center design, and liquid cooling solutions. She chairs the industry liaison team for the Open Compute Project, whose mission is to facilitate standards creation to accelerate adoption of liquid cooling. With nearly 20 years of experience spanning systems engineering, business development, and platform strategy for cloud and AI-driven data centers, she holds a PhD in engineering and has led the development of liquid cooling strategies for major infrastructure firms and industry consortia.