
Purpose-Built for AI: The Shift Toward Modular Data Center Infrastructure

26 March 2026 at 14:00

Originally posted on Compu Dynamics.

Discover how AI is transforming mission‑critical infrastructure. From modular data center design and liquid cooling to extreme power density and purpose‑built AI facilities, Steve Altizer, President and CEO of Compu Dynamics, covers these topics in a recent conversation.

At PTC 2026 in Hawaii, Isabel Paradis of HOT TELECOM sat down with Altizer to discuss how AI is reshaping the way modular data centers are designed today and will be in the future.

AI Is Rewriting the Rules of Data Center Design

AI is transforming data centers. While many operators are still trying to shoehorn AI workloads into traditional designs, that approach will only remain viable for a few more years. Hyperscalers are leading the way into an AI‑centric future, where liquid cooling – once a specialty – is becoming standard across the industry.

Retrofitting conventional colocation or cloud facilities for AI is not ideal: it is less cost-effective than a purpose-built design. Yet building AI‑only facilities also carries risk, because repurposing that heavy investment later is difficult. The industry is therefore moving toward modular infrastructure, which allows for hybrid, purpose‑built AI facilities that remain flexible enough to serve a range of customers.

To continue reading, please click here.

The post Purpose-Built for AI: The Shift Toward Modular Data Center Infrastructure appeared first on Data Center POST.

Data Center Liquid Cooling Market to Surpass USD 27.1 Billion by 2035

9 February 2026 at 16:00

The global data center liquid cooling market was valued at USD 4.8 billion in 2025 and is estimated to grow at a CAGR of 18.2% to reach USD 27.1 billion by 2035, according to a recent report by Global Market Insights Inc.

Rising energy costs, coupled with stringent sustainability requirements, are accelerating the adoption of liquid cooling technologies across data centers. Liquid cooling systems offer significantly lower Power Usage Effectiveness (PUE) ratios of 1.05-1.15, compared with 1.4-1.8 for traditional air-cooled facilities, which directly lowers electricity consumption and reduces carbon emissions. Regulatory mandates, including the EU Energy Efficiency Directive, Germany’s Energy Efficiency Act targeting a PUE of 1.3 by 2027, and California’s energy efficiency standards, are pushing operators toward advanced cooling solutions.
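To see what those PUE figures imply, recall that PUE is total facility power divided by IT power, so facility energy scales linearly with PUE for a fixed IT load. The sketch below uses an assumed 10 MW IT load and mid-range PUE values from each band (these inputs are illustrative, not taken from the report):

```python
# PUE = total facility power / IT power, so facility power = IT power * PUE.
# Illustrative comparison of annual facility energy for an assumed 10 MW
# IT load at mid-range liquid-cooled vs air-cooled PUE values.

HOURS_PER_YEAR = 8760

def annual_facility_mwh(it_load_mw: float, pue: float) -> float:
    """Total facility energy (MWh/year) for a constant IT load at a given PUE."""
    return it_load_mw * pue * HOURS_PER_YEAR

liquid = annual_facility_mwh(10.0, 1.10)  # mid-range liquid-cooled PUE
air = annual_facility_mwh(10.0, 1.60)     # mid-range air-cooled PUE

print(f"Liquid-cooled: {liquid:,.0f} MWh/yr")   # 96,360
print(f"Air-cooled:    {air:,.0f} MWh/yr")      # 140,160
print(f"Savings:       {air - liquid:,.0f} MWh/yr ({(air - liquid) / air:.0%})")
```

On these assumed numbers, the lower PUE cuts total facility energy by roughly 31% for the same compute, which is the mechanism behind the electricity and emissions claims above.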

Furthermore, the ability of liquid cooling systems to recover waste heat for district heating or industrial processes transforms data centers into contributors to circular energy economies, supporting corporate net-zero initiatives and enhancing operational sustainability. North America continues to lead the data center liquid cooling market, driven by a dense concentration of hyperscale cloud operators, semiconductor manufacturers, and systems integrators deploying high-density AI and HPC infrastructure.

The solution segment held a 71% share in 2025 and is forecast to grow at a CAGR of 15% from 2026 to 2035. Direct-to-chip cooling is the fastest-growing technology, employing cold plates and micro-channel coolers attached directly to processors, GPUs, and memory to remove 60-80% of heat before it enters the air. These systems circulate coolants such as water with inhibitors or glycol mixtures across chip surfaces, achieving thermal resistances as low as 0.01-0.05°C/W.
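To put the quoted thermal-resistance range in context: at steady state, chip temperature rise above the coolant is approximately R_th × P for a lumped cold-plate resistance R_th and heat load P. A minimal sketch, where the 700 W device and 30 °C coolant inlet are assumed example values rather than figures from the report:

```python
# Approximate steady-state chip temperature for a cold plate with thermal
# resistance r_th (°C/W): temperature rise above the coolant inlet is r_th * power.

def chip_temp_c(coolant_inlet_c: float, r_th_c_per_w: float, power_w: float) -> float:
    """Chip surface temperature (°C) at steady state, lumped-resistance model."""
    return coolant_inlet_c + r_th_c_per_w * power_w

# Hypothetical 700 W accelerator with a 30 °C coolant inlet:
for r_th in (0.01, 0.03, 0.05):
    print(f"R_th = {r_th:.2f} °C/W -> chip ~ {chip_temp_c(30.0, r_th, 700.0):.0f} °C")
```

Even at the top of the quoted range, a 700 W device on these assumptions runs at about 65 °C, well within typical silicon limits, which is what makes cold plates in this resistance class workable for dense GPU deployments.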

The single-phase liquid cooling systems segment reached USD 3.1 billion in 2025. These systems maintain coolant in liquid form throughout the cycle, transferring heat via conduction and convection without phase change. Coolants circulate through cold plates, immersion tanks, or heat exchangers at 18-50°C, depending on design, while facility chillers, dry coolers, or towers remove heat from the loop.
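The heat balance behind a single-phase loop is Q = ṁ · cp · ΔT: the flow rate, specific heat, and allowed temperature rise together set how much heat the loop can carry. A hedged sizing sketch, where the 100 kW rack load and 10 °C rise are assumed example figures:

```python
# Coolant flow required to carry a heat load Q with a temperature rise
# delta_T in a single-phase water loop: Q = m_dot * cp * delta_T.

CP_WATER = 4186.0  # J/(kg·K), approximate specific heat of water

def flow_rate_lpm(heat_kw: float, delta_t_c: float) -> float:
    """Required coolant flow in litres/minute (assuming water-like density of ~1 kg/L)."""
    m_dot_kg_s = heat_kw * 1000.0 / (CP_WATER * delta_t_c)
    return m_dot_kg_s * 60.0

# Hypothetical 100 kW rack with a 10 °C coolant temperature rise:
print(f"{flow_rate_lpm(100.0, 10.0):.0f} L/min")  # roughly 143 L/min
```

The inverse relationship is worth noting: halving the allowed temperature rise doubles the required flow, which is why loop supply temperatures and ΔT budgets are central design parameters.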

The U.S. data center liquid cooling market captured USD 1.29 billion in 2025. Federal initiatives, including AI and HPC programs, semiconductor funding under the CHIPS Act, and defense modernization projects incorporating AI, are key drivers of liquid cooling adoption in public sector data centers.

Leading companies in the data center liquid cooling market include Alfa Laval, Asetek, Boyd, CoolIT Systems, Green Revolution Cooling, LiquidStack, Rittal, Schneider Electric (Motivair), Stulz, and Vertiv. Key strategies adopted by companies in the market focus on technological innovation, such as developing high-efficiency immersion and direct-to-chip cooling solutions for next-generation processors and GPUs. Firms are forming strategic partnerships with hyperscale cloud providers, semiconductor manufacturers, and HPC integrators to expand deployment. Investments in R&D for energy-efficient, modular, and scalable systems strengthen product differentiation. Companies are also emphasizing geographic expansion into emerging markets, supporting sustainability initiatives, and integrating IoT-enabled monitoring tools to optimize performance, enhance reliability, and maintain long-term client relationships.

The post Data Center Liquid Cooling Market to Surpass USD 27.1 Billion by 2035 appeared first on Data Center POST.

Company Profile: STT GDC Philippines on Building the Philippines’ Largest AI-Ready Data Center Campus

4 February 2026 at 17:30

Data Center POST had the opportunity to connect with Carlo Malana, President and CEO of STT GDC Philippines, a joint venture among Globe Telecom, Ayala Corporation, and ST Telemedia Global Data Centres. The company provides secure, reliable, and sustainable data centers to enable digital transformation for global and local businesses.

Malana brings more than 20 years of leadership experience in Information Communications Technology (ICT), including strategic roles with AT&T across the United States, Mexico, and the Philippines, and a tenure as CIO of Globe. He has led both technology and business organizations in areas as diverse as strategy, program management, merger integration, retail, finance, customer operations, and sales. He earned a double degree from the University of California, Berkeley and an MBA from Southern Methodist University.

The interview below has been summarized to give readers clarity into who STT GDC Philippines is, what the company does, and the problems it is solving in the industry.

What does STT GDC Philippines do?  

ST Telemedia Global Data Centres (STT GDC) Philippines empowers business digital transformation through a service model integrating Colocation, Cross connect, and Support Services. We provide Colocation via scalable, sustainable, and secure infrastructure operated to strict global standards, a commitment recently validated by our flagship 124MW STT Fairview Data Center Campus achieving the IDCA G2 Design Certification and by our STT Cavite 1 data center earning the Uptime Institute Tier III Design Certification. Our Interconnect & Connectivity solutions provide a carrier-neutral platform optimized for seamless access to hybrid and multi-cloud environments, while our Support Services act as your extended technical team, managing critical facility operations so you can focus exclusively on your core business performance.

What problems does STT GDC Philippines solve in the market?

STT GDC Philippines addresses the critical shortage of high-quality digital infrastructure in Southeast Asia (SEA) by replacing outdated systems with massive, scalable facilities built for the future. We solve the capacity shortfall by delivering hyperscale-ready infrastructure, such as our 124MW STT Fairview campus, designed to meet the rigorous TIA-942 Rated 3 and Uptime Institute Tier III standards for concurrent maintainability. We specifically address the urgent demand for AI and high-performance computing by building AI-ready facilities equipped with high power density and advanced liquid cooling support. Most importantly, we eliminate downtime concerns by providing SLA-backed availability, ensuring your mission-critical business operations remain secure and stable 24/7 with a sustainable environment. Finally, we remove connectivity restrictions through our carrier-neutral ecosystem, providing a resilient platform that offers customers superior network choice and the flexibility to connect with the partners that best serve their requirements.

What are STT GDC Philippines’s core products or services?

Our core services are colocation, cross connect, and support services.

What markets do you serve?

ST Telemedia Global Data Centres (STT GDC) Philippines is a leading carrier-neutral provider dedicated to supporting the high-density requirements of Hyperscalers, AI companies, and large enterprises in the banking, financial services, and telecommunications sectors.

As a joint venture between Globe Telecom, Ayala Corporation, and STT GDC, we enable digital transformation by offering scalable, sustainable, and secure infrastructure designed for mission-critical applications. Our facilities are specifically optimized for high-performance workloads, leveraging strategic partnerships with industry leaders and partners to deploy advanced solutions such as liquid cooling for AI-driven demands.

Our data centers provide a flexible technology foundation with direct access to major global cloud platforms and a diverse ecosystem of connectivity partners. This carrier-neutral approach ensures optimal connectivity for hybrid and multi-cloud environments, while our strict operational excellence and 24/7 on-site technical expertise deliver industry-leading uptime. By integrating these best-in-class partnerships, we allow your organization to rely completely on our reliable infrastructure while you focus on driving your core business growth.

What challenges does the global digital infrastructure industry face today?

The industry is currently facing a massive energy and power crisis, where securing reliable electricity has become significantly harder than finding physical land. Because AI operations consume vast amounts of energy, they place an immense strain on local power grids, making it difficult for operators to find suitable locations while sticking to green energy goals.

Secondly, the rapid adoption of AI has created a thermal management challenge; the extreme heat generated by modern high-performance chips exceeds the limits of traditional air cooling, forcing a pivot toward advanced liquid cooling methods even as universal standards remain undefined.

Finally, geopolitical instability and supply chain disruptions are acting as a major brake on progress. Rising global tensions are complicating where secure networks can be built, while acute shortages of essential equipment, like high-voltage transformers and backup generators, are delaying construction and preventing the infrastructure from keeping pace with global demand.

How is STT GDC Philippines adapting to these challenges?

STT GDC Philippines is adapting by building flexible, high-capacity infrastructure, such as the 124 MW STT Fairview Data Center Campus, that is fully ready for AI and liquid cooling but remains adaptable to changing technology rather than being limited to a single purpose. We are addressing the energy challenge by committing to 100% renewable energy for our operations. To navigate global instability, we maintain a fairly neutral position as a carrier-neutral platform, ensuring resilience and open choices for all networks.

What are STT GDC Philippines’s key differentiators?

Our key differentiators begin with our adherence to global standards, ensuring that every facility in our portfolio operates with the same rigor and reliability found across our international platform. This foundation allows us to provide the most extensive capacity in the region, highlighted by the 124MW STT Fairview Data Center Campus, the largest, most interconnected carrier-neutral, and sustainable data center in the Philippines. Our commitment to international, sustainability-driven design is evident in our LEED Gold and TIA-942 Rated 3 certifications, as well as our “AI-ready” infrastructure that supports liquid cooling to reduce environmental impact.

Beyond physical assets, we prioritize our talent through the DC Power Up program, a milestone initiative that trains and certifies the next generation of data center professionals to ensure a future-ready workforce. Our operational excellence is the heartbeat of our business, utilizing advanced automation and AI-powered cooling to maintain peak efficiency 24/7. Finally, we leverage deep local expertise through our powerful partnership with Globe and Ayala, combining the country’s leading telecommunications reach and corporate heritage to provide customers with a seamless, trustworthy gateway into the Philippine digital economy.

What can we expect to see/hear from STT GDC Philippines in the future?  

STT GDC Philippines is focused on rapidly scaling its delivery capabilities, a goal already in motion as we begin operating with our first customers at STT Fairview 1. This marks a significant milestone for what will be the largest and most AI-ready data center campus in the Philippines, featuring infrastructure specifically engineered for high-density computing and advanced liquid cooling. Our commitment to innovation is further showcased at our AI Synergy Lab, where we demonstrate the future of thermal management and high-efficiency power solutions. To support this growth, we are accelerating partnerships across the ecosystem, recently onboarding key connectivity partners to ensure our facilities serve as the premier, carrier-neutral gateway for Southeast Asia’s digital future.

What upcoming industry events will you be attending? 

We are excited to represent STT GDC Philippines at two of the most influential technology gatherings in the region and the world this year. This February, our team will be in Jakarta for APRICOT 2026, the Asia Pacific region’s premier internet operations and networking summit. This event is a critical forum for us to collaborate with network engineers and policymakers to strengthen the digital fabric of Southeast Asia. Following this, we will be attending NVIDIA GTC in March in San Jose, California. Often called the “Super Bowl of AI,” GTC is where we engage with the latest breakthroughs in AI infrastructure and high-performance computing, ensuring that our data centers remain at the cutting edge of the global AI revolution.

Do you have any recent news you would like us to highlight?

We are excited to share several major milestones that underscore our rapid growth and commitment to the Philippines’ digital future. Most recently, in October 2025, we announced the onboarding of our first connectivity partners at our flagship STT Fairview Data Center campus. These partnerships are significant for our carrier-neutral ecosystem, providing customers with diverse network choices and the resilience needed for AI-powered growth. Additionally, the 124MW STT Fairview Data Center campus recently achieved the prestigious IDCA G2 Design Certification, recognizing its world-class N+1 design and operational excellence. On the sustainability front, we are proud to have transitioned to 100% renewable energy across all our operational data centers as of early 2025.

Is there anything else you would like our readers to know about STT GDC Philippines and capabilities?

Finally, we want your readers to know that STT GDC Philippines is actively pioneering the future of high-performance computing through our AI Synergy Lab. Launched in collaboration with industry leaders, the lab allows enterprises to run actual AI workloads in a controlled environment, providing a live showroom for high-density computing solutions that are essential for modern digital transformation. By bridging the gap between theoretical AI potential and real-world deployment, the AI Synergy Lab ensures that our partners can optimize their hardware configurations for maximum performance and efficiency. This initiative reinforces our commitment to making the Philippines a premier hub for AI innovation in Southeast Asia, providing the specialized environment required to support the next generation of intelligent computing.

Where can our readers learn more about STT GDC Philippines?  

Readers can learn more on our company website, www.sttelemediagdc.com/ph-en.

How can our readers contact STT GDC Philippines? 

You can contact us through Facebook, LinkedIn, or our website.

# # #

About Data Center POST

Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed and online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.

Data Center POST works hard to get the most current information and thought-provoking ideas most apt to add relevance to the success of the data center industry. Stay informed, visit www.datacenterpost.com.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com or submit your article here.

Want more digital infrastructure news? Stay in the know and subscribe to Data Center POST today!

The post Company Profile: STT GDC Philippines on Building the Philippines’ Largest AI-Ready Data Center Campus appeared first on Data Center POST.


2025 in Review: Sabey’s Biggest Milestones and What They Mean

26 January 2026 at 18:00

Originally posted on Sabey Data Centers.

At Sabey Data Centers, progress is more than a series of headlines. It’s a blueprint for what’s possible when infrastructure, efficiency and stewardship go hand in hand. From our award-winning sustainability initiatives to bold new campus designs and record-setting expansions, each milestone this year has demonstrated our commitment to powering tomorrow’s workloads with conscience and confidence.

As 2026 is already well underway and promising to be a banner year, we wanted to pause and reflect on the path we forged in 2025.

Capacity expansion: built for growth 

In 2025, we announced strategic power expansions across our Pacific Northwest, Ashburn, Austin and Quincy locations. In Seattle and Columbia, 30MW of new power is now anticipated to come online by 2027, enabling tenants to scale quickly while leveraging ultra-efficient energy (carbon-free in Seattle) and a regional average PUE as low as 1.2.

On the East Coast, Ashburn’s third and final building broke ground, set to introduce 54MW of additional capacity, with the first tranches coming online in 2026. This will be the first three-story facility in Sabey’s portfolio, purpose-built for air-cooled, liquid-cooled and hybrid deployments, with rack densities over 100kW and an average PUE of 1.35. In both regions, Sabey’s expansion balances hyperscale demand with customization, modular scale and resilient connectivity.

The launch of construction for Austin Building B was another major milestone, expanding our presence in the dynamic Round Rock tech corridor. This three-story, liquid-cooling-ready facility is designed to deliver 54 megawatts of total power capacity. Building B continues our commitment to scalable, energy-efficient digital infrastructure, tailored for enterprise and hyperscale workloads.

To continue reading, please click here.

The post 2025 in Review: Sabey’s Biggest Milestones and What They Mean appeared first on Data Center POST.

Sabey and JetCool Push Liquid Cooling from Pilot to Standard Practice

22 January 2026 at 17:00

As AI and high‑performance computing (HPC) workloads strain traditional air‑cooled data centers, Sabey Data Centers is expanding its partnership with JetCool Technologies to make direct‑to‑chip liquid cooling a standard option across its U.S. portfolio. The move signals how multi‑tenant operators are shifting from experimental deployments to programmatic strategies for high‑density, energy‑efficient infrastructure.

Sabey, one of the largest privately held multi‑tenant data center providers in the United States, first teamed with JetCool in 2023 to test direct‑to‑chip cooling in production environments. Those early deployments reported 13.5% server power savings compared with air‑cooled alternatives, while supporting dense AI and HPC racks without heavy reliance on traditional mechanical systems.

The new phase of the collaboration is less about proving the technology and more about scale. Sabey and JetCool are now working to simplify how customers adopt liquid cooling by turning what had been bespoke engineering work into repeatable designs that can be deployed across multiple sites. The goal is to give enterprises and cloud platforms a predictable path to high‑density infrastructure that balances performance, efficiency and operational risk.

A core element of that approach is a set of modular cooling architectures developed with Dell Technologies for select PowerEdge GPU‑based servers. By closely integrating server hardware and direct‑to‑chip liquid cooling, the partners aim to deliver pre‑validated building blocks for AI and HPC clusters, rather than starting from scratch with each project. The design includes unified warranty coverage for both the servers and the cooling system, an assurance that Sabey says is key for customers wary of fragmented support models.

The expanded alliance sits inside Sabey’s broader liquid cooling partnership program, an initiative that aggregates multiple thermal management providers under one framework. Instead of backing a single technology, Sabey is positioning itself as a curator of proven, ready‑to‑integrate cooling options that map to varying density targets and sustainability goals. For IT and facilities teams under pressure to scale GPU‑rich deployments, that structure promises clearer design patterns and faster time to production.

Executives at both companies frame the partnership as a response to converging pressures: soaring compute demand, tightening efficiency requirements and growing scrutiny of data center energy use. Direct‑to‑chip liquid cooling has emerged as one of the more practical levers for improving thermal performance at the rack level, particularly in environments where power and floor space are limited but performance expectations are not.

For Sabey, formalizing JetCool’s technology as a standard, warranty‑backed option is part of a broader message to customers: liquid cooling is no longer a niche or one‑off feature, but an embedded part of the company’s roadmap for AI‑era infrastructure. Organizations evaluating their own cooling strategies can find the full announcement here.

The post Sabey and JetCool Push Liquid Cooling from Pilot to Standard Practice appeared first on Data Center POST.

Power Should Serve Compute: Airsys’ Energy-First Approach to Data Center Cooling

21 January 2026 at 17:00

Data Center POST had the opportunity to connect with Paul Quigley, President of Airsys Cooling Technologies, Inc., ahead of PTC’26. With more than 36 years of experience spanning HVAC contracting, distribution, and executive leadership in manufacturing, Quigley is widely recognized for his work in large-scale, high-demand cooling environments. His career includes co-designing one of the largest VRF projects in North America and leading complex cooling initiatives across the United States. Since joining Airsys in 2021, Quigley has focused on advancing precision and liquid-assisted cooling technologies to help data center operators address power constraints driven by AI, high-density compute, and rapid digital infrastructure growth. In the Q&A below, Quigley shares his perspective on the challenges facing the global digital infrastructure industry and how Airsys is helping operators turn cooling efficiency into usable compute power.

Data Center Post (DCP) Question: What does your company do?  

Paul Quigley (PQ) Answer: Airsys is an energy-focused cooling solutions company founded on a simple idea: power should serve computation, not be lost protecting it. Every system we build returns power back to the data center where it belongs. Our innovative, award-winning technologies give operators more usable power for compute, improving capacity, resilience, and overall profitability.

DCP Q: What problems does your company solve in the market?

PQ A: Airsys helps data centers recover power lost to cooling. As AI clusters push energy systems past their limits, operators are forced to choose between compute, capacity, and cost. Our cooling technologies solve that problem by returning power back to the data center, reducing stranded capacity, and giving operators more usable power for compute.

DCP Q: What are your company’s core products or services?

PQ A: We design and build the world’s most advanced cooling solutions for data centers, including:

  • LiquidRack spray-cooling and liquid-assisted rack systems
  • High-efficiency DX and chilled-water cooling systems
  • Flooded-evaporator chillers (CritiCool-X)
  • Indoor and outdoor precision cooling systems
  • Edge, modular, and containerized data-center cooling
  • Control systems, energy-optimization tools, and PCE/ROIP performance frameworks

DCP Q: What markets do you serve?

PQ A:

  • Hyperscale and AI compute environments
  • Colocation and enterprise data centers
  • Modular and prefabricated data centers
  • Edge and telecom infrastructure
  • Education, industrial, government, and defense applications requiring mission-critical cooling

DCP Q: What challenges does the global digital infrastructure industry face today?

PQ A: The industry now operates in an environment of rapid compute expansion and structural power scarcity. Grid limitations, long construction timelines, inefficient legacy cooling, land constraints, and the explosive rise of AI workloads have created an energy bottleneck. Data centers no longer struggle with space… they struggle with power.

DCP Q: How is your company adapting to these challenges?

PQ A: Airsys has always shifted the conversation from cooling to energy stewardship. Every system we design focuses on reducing cooling overhead and returning power to computation. Our LiquidRack spray-cooling platform, low-lift CritiCool-X chillers, and high-efficiency DX systems enable operators to deploy more compute with less infrastructure. We’re also advancing new metrics – PCE and ROIP – to help operators quantify the financial value of returned power.

DCP Q: What are your company’s key differentiators?

PQ A:

  • Energy-first design philosophy — our systems return power to compute
  • Rapid delivery and global manufacturing — critical in today’s supply-strained market
  • LiquidRack spray cooling — enabling high-density AI clusters without stranded power
  • Flooded-evaporator chiller technology — high efficiency, low lift, faster deployment
  • End-to-end portfolio — DX, chilled water, liquid-assisted cooling, modular systems
  • Practical engineering — simple, reliable, maintainable designs
  • PCE / ROIP frameworks — financial and operational tools that change how operators evaluate cooling impact

DCP Q: What can we expect to see/hear from your company in the future?  

PQ A: You will see a major expansion of our LiquidRack platform, new ultra-efficient chiller technologies, deeper integration of PCE/ROIP metrics, and broader support for modular and edge deployments. We are continuing to push innovation toward one goal: giving operators more usable power for compute.

DCP Q: What upcoming industry events will you be attending? 

PQ A: Events Airsys is considering for 2026:

PTC’26 Hawaii; Escape the Cold Aisle Phoenix; Advancing DC Construction West; DCD>Connect New York; Data Center World DC; World of Modular Las Vegas; NSPMA; 7×24 Exchange Spring Orlando; Data Center Nation Toronto; Datacloud USA Austin; AI Infra Summit Santa Clara; DCW Power Dallas; Yotta Las Vegas; 7×24 Exchange Fall San Antonio; PTC DC; DCD>Connect Virginia; DCAC Austin; Supercomputing; Gartner IOCS; NVIDIA GTC; OCP Global Summit.

DCP Q: Do you have any recent news you would like us to highlight?

PQ A: Yes, Airsys has recently expanded its high-density cooling portfolio with several major advancements, and more announcements are planned for early 2026 as we continue to build out advanced cooling for high-density compute environments.

DCP Q: Is there anything else you would like our readers to know about your company and capabilities?

PQ A: Airsys is built on a simple idea: power should serve computation, not be lost protecting it. Our mission is to help operators turn cooling losses into compute gains and to make energy stewardship a competitive advantage in the AI era.

DCP Q: Where can our readers learn more about your company?  

PQ A: www.airsysnorthamerica.com

DCP Q: How can our readers contact your company? 

PQ A: www.airsysnorthamerica.com/contact

To learn more about PTC’26, please visit www.ptc.org/ptc26. The event takes place January 18-21, 2026 in Honolulu, HI.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com.

Stay in the know! Subscribe to Data Center POST today.

# # #


The post Power Should Serve Compute: Airsys’ Energy-First Approach to Data Center Cooling appeared first on Data Center POST.

Sabey Data Centers Taps OptiCool to Tackle High-Density Cooling

19 January 2026 at 15:30

Sabey Data Centers is adding OptiCool Technologies to its growing ecosystem of advanced cooling partners, aiming to make high-density and AI-driven compute deployments more practical and energy efficient across its U.S. footprint. This move highlights how colocation providers are turning to specialized liquid and refrigerant-based solutions as rack densities outstrip the capabilities of traditional air cooling.

OptiCool is known for two-phase refrigerant pumped systems that use a non-conductive refrigerant to absorb heat through phase change at the rack level. This approach enables efficient heat removal without chilled water loops or extensive mechanical plant build-outs, which can simplify facility design and cut both capital and operating costs for data centers pushing into higher power densities. Sabey is positioning the OptiCool alliance as part of its integrated cooling technologies partnership program, which is designed to lower barriers to liquid and alternative cooling adoption for customers. Instead of forcing enterprises to engineer bespoke solutions for each deployment, Sabey is curating pre-vetted architectures and partners that align cooling technology, facility infrastructure and operational responsibility. For operators planning AI and HPC rollouts, that can translate into clearer deployment paths and reduced integration risk.

The appeal of two-phase refrigerant cooling lies in its combination of density, efficiency and retrofit friendliness. Because the systems move heat directly from the rack to localized condensers using a pumped refrigerant, they can often be deployed with minimal disruption to existing white space. That makes them attractive for operators that need to increase rack power without rebuilding entire data halls or adding large amounts of chilled water infrastructure.
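The density advantage of phase-change heat absorption shows up in a back-of-the-envelope comparison: a boiling refrigerant absorbs its latent heat of vaporization per kilogram, while a single-phase water loop absorbs only sensible heat over its allowed temperature rise. A minimal sketch using textbook-style property values (the ~180 kJ/kg latent heat and 10 °C sensible rise are illustrative assumptions, not OptiCool figures):

```python
# Heat absorbed per kg of coolant: a boiling refrigerant absorbs its latent
# heat of vaporization, while a single-phase water loop absorbs only
# sensible heat over its allowed temperature rise.

H_FG_REFRIGERANT = 180.0  # kJ/kg, illustrative latent heat for a common refrigerant
CP_WATER = 4.186          # kJ/(kg·K), approximate specific heat of water
SENSIBLE_RISE_C = 10.0    # °C, assumed allowable rise in a water loop

def mass_flow_per_kw(heat_per_kg_kj: float) -> float:
    """Coolant mass flow (kg/s) needed per kW of heat load."""
    return 1.0 / heat_per_kg_kj

two_phase = mass_flow_per_kw(H_FG_REFRIGERANT)
single_phase = mass_flow_per_kw(CP_WATER * SENSIBLE_RISE_C)

print(f"Two-phase:    {two_phase * 1000:.1f} g/s per kW")
print(f"Single-phase: {single_phase * 1000:.1f} g/s per kW")
```

On these assumed values the pumped refrigerant moves the same heat with roughly a quarter of the mass flow, which helps explain why such systems can avoid chilled water loops and heavy mechanical plant.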

Sabey executives frame the partnership as a response to customer demand for flexible, future-ready cooling options. As more organizations standardize on GPU-rich architectures and high-density configurations, cooling strategy has become a primary constraint on capacity planning. By incorporating OptiCool’s technology into its program, Sabey is signaling to customers that they will have multiple, validated pathways to support emerging workload profiles while staying within power and sustainability envelopes.

As liquid and refrigerant-based cooling rapidly move into the mainstream, customers evaluating their own AI and high-density strategies may benefit from understanding how Sabey is standardizing these technologies across its portfolio. To explore how this partnership and Sabey’s broader integrated cooling program could support specific deployment plans, readers can visit Sabey’s website at www.sabeydatacenters.com.

The post Sabey Data Centers Taps OptiCool to Tackle High-Density Cooling appeared first on Data Center POST.

The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling

30 December 2025 at 15:00

As data centers evolve to support AI, edge computing, and high-density cloud architectures, the challenge is no longer just maintaining an optimal power usage effectiveness (PUE); it is achieving thermal reliability under unprecedented compute loads. Direct-to-chip liquid cooling continues to push the envelope on heat transfer performance, but one underestimated element of overall system reliability is the material composition of the fluid conveyance network itself. The hoses and fittings that transport coolant through these systems operate in extreme thermal and chemical environments, and their design directly influences uptime, maintenance intervals, and total cost of ownership.

Why Material Selection Matters

At its core, a liquid cooling system is only as reliable as its weakest component. If hoses, fittings, or seals fail due to poor material compatibility, the result can be leaks, contamination, or shortened system life, leading to downtime and costly remediation. Rubber and rubber-like materials are critical in hose assemblies: they must balance flexibility for installation and serviceability with long-term resistance to temperature, pressure, permeation, and the coolant and its additives.

The challenge is that not all rubbers and rubber-like materials are created equal. Each formulation is a complex recipe of polymers, plasticizers, fillers, and curing agents designed to deliver specific performance characteristics. The wrong selection can lead to fluid permeation, premature aging, or contamination of the coolant. In mission-critical environments like data centers, where even minor disruptions are unacceptable, this risk is magnified.

Temperature and Chemical Compatibility

Although data center cooling systems typically operate at temperatures between 45°C (113°F) and 65°C (149°F), sometimes reaching 100°C (212°F), those ranges can stress certain materials. Nitrile rubber, for example, performs well in oil-based environments but ages quickly in water-glycol systems, especially at higher temperatures. This can cause hardening, cracking, or coolant contamination.

By contrast, ethylene propylene diene monomer (EPDM) rubber has excellent resistance to water, glycols, and the additives commonly used in data center coolants, such as corrosion inhibitors and biocides. EPDM maintains its properties across the required operating range, making it a proven choice for direct-to-chip applications.

However, not all EPDM is the same. Developing the right EPDM for the application demands a deep understanding of polymer chemistry, filler interactions, and process control to precisely balance flexibility, heat resistance, and long-term stability.

Additionally, two curing processes, sulfur-cured and peroxide-cured, produce different performance outcomes. Sulfur-cured EPDM, while widely used, introduces zinc ions during the curing process. When exposed to deionized water, these ions can leach into the coolant, causing contamination and potentially degrading system performance. Peroxide-cured EPDM avoids this issue, offering higher temperature resistance, lower permeation rates, and greater chemical stability, making it the superior choice for modern liquid cooling.

Even among peroxide-cured EPDM compounds, long-term performance is not uniform. While the cure system defines the crosslink chemistry, other formulation choices, particularly filler selection and dispersion, can influence how the material performs over time.

The use of fillers and additives is common in rubber compounding. These ingredients are often selected to control cost, improve processability, or achieve certain performance characteristics such as flame resistance, strength, or flexibility.

The challenge is that some filler systems perform well during initial qualification but are not optimized for the long-term exposures of the operating environment. Certain fillers or processing aids can slowly migrate over time, introducing extractables into the coolant or subtly altering elastomer properties. For data center applications, EPDM compounds must therefore be engineered for long-term stability, reinforcing why EPDM should not be treated as a commodity material in critical cooling systems.

Risks of Non-Compatible Materials

Material incompatibility can have several cascading effects:

  • Contamination – Non-compatible materials can leach extractables into the coolant, leading to discoloration, chemical imbalance, and reduced thermal efficiency.
  • Permeation – Some rubbers allow fluid to slowly migrate through the hose walls, causing coolant loss or altering the fluid mixture over time.
  • Premature Failure – Elevated temperatures can accelerate aging, leading to cracking, swelling, or loss of mechanical strength.
  • Leakage – Rubber under compression may deform over time, jeopardizing seal integrity if not properly formulated for resistance to compression set and tear.

In a recent two-week aging test at 80°C using a water-glycol coolant, hoses made of nitrile and sulfur-cured EPDM showed visible discoloration of the coolant, indicating leaching and breakdown of the material. Peroxide-cured EPDM, on the other hand, maintained stability, demonstrating its compatibility and reliability in long-term data center applications.

The Gates Approach

Drawing on lessons from mission critical industries that have managed thermal challenges for decades, Gates engineers apply material science rigor to the design of liquid cooling hoses for data center applications.

Rather than relying solely on initial material ratings or short-term qualification criteria, Gates begins by tailoring compound design to the operating environment. This includes deliberate control of polymer selection, filler systems, and cure chemistry to manage long-term aging behavior, extractables, permeation, and retention of mechanical properties in high-purity coolant systems.

Compounds are validated through extended aging and immersion testing that reflects real operating conditions, including exposure to heat, deionized water, and water-glycol coolants. This allows potential material changes to be identified and addressed during development, before installation in the field.

This material-science-driven process is applied across Gates liquid cooling platforms, including the Data Master, Data Master MegaFlex, and newly released Data Master Eco product lines. By engineering for long-term stability rather than only initial compliance, Gates designs hose solutions intended to support reliable operation, predictable maintenance intervals, and extended service life in direct-to-chip liquid-cooled data center environments.

Looking Ahead

As data centers continue to scale, thermal management solutions must adapt in parallel. Advanced architectures, higher rack densities, and growing environmental regulations all point to a future where liquid cooling is standard. In this environment, material selection is no longer a secondary consideration; it is foundational to system reliability.

Operators who prioritize material compatibility in fluid conveyance lines will benefit from longer service intervals, improved coolant stability, and reduced risk of downtime. In other words, the proper rubber formulation doesn’t just move fluid, it moves the industry forward.

At Gates, sustainable, high-performance cooling begins with the details. By focusing on the science of materials, we help ensure that data center operators can confidently deploy liquid cooling systems designed for the challenges of today and the innovations of tomorrow.

# # #

About the Author

Chad Chapman is a Mechanical Engineer with over 20 years of experience in the fluid power industry. He currently serves as a Product Application Engineering Manager at Gates, where he leads a team that provides technical guidance, recommendations, and innovative solutions to customers utilizing Gates products and services.

Driven by a passion for problem-solving, Chad thrives on collaborating with customers to understand their unique challenges and deliver solutions that optimize performance. He is energized by learning about new applications and technologies, especially where insights can be shared across industries. At Gates, he has been exploring the emerging field of direct-to-chip liquid cooling, an exciting extension of his deep expertise in thermal management. The rapid advancements in IT technology and AI have made his journey an inspiring and rewarding learning experience.

The post The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling appeared first on Data Center POST.

The Rising Risk Profile of CDUs in High-Density AI Data Centers

10 December 2025 at 17:00

AI has pushed data center thermal loads to levels the industry has never encountered. Racks that once operated comfortably at 8-15 kW are now climbing past 50-100 kW, driving an accelerated shift toward liquid cooling. This transition is happening so quickly that many organizations are deploying new technologies faster than they can fully understand the operational risks.

In my recent five-part LinkedIn series:

  • 2025 U.S. Data Center Incident Trends & Lessons Learned (9-15-2025)
  • Building Safer Data Centers: How Technology is Changing Construction Safety (10-1-2025)
  • The Future of Zero-Incident Data Centers (10-15-2025)
  • Measuring What Matters: The New Safety Metrics in Data Centers (11-1-2025)
  • Beyond Safety: Building Resilient Data Centers Through Integrated Risk Management (11-15-2025)

— a central theme emerged: as systems become more interconnected, risks become more systemic.

That same dynamic influenced the Direct-to-Chip Cooling: A Technical Primer article that Steve Barberi and I published in Data Center POST (10-29-2025). Today, we are observing this systemic-risk framework emerging specifically in the growing role of Cooling Distribution Units (CDUs).

CDUs have evolved from peripheral equipment to a true point of convergence for engineering design, controls logic, chemistry, operational discipline, and human performance. As AI rack densities accelerate, understanding these risks is becoming essential.

CDUs: From Peripheral Equipment to Critical Infrastructure

Historically, CDUs were treated as supplemental mechanical devices. Today, they sit at the center of the liquid-cooling ecosystem governing flow, pressure, temperature stability, fluid quality, isolation, and redundancy. In practice, the CDU now operates as the boundary between stable thermal control and cascading instability.

Yet, unlike well-established electrical systems such as UPSs, switchgear, and feeders, CDUs lack decades of operational history. Operators, technicians, commissioning agents, and even design teams have limited real-world reference points. That blind spot is where a new class of risk is emerging, and three patterns are showing up most frequently.

A New Risk Landscape for CDUs

  • Controls-Layer Fragility
    • Controls-related instability remains one of the most underestimated issues in liquid cooling. Many CDUs still rely on single-path PLC architectures, limited sensor redundancy, and firmware not designed for the thermal volatility of AI workloads. A single inaccurate pressure, flow, or temperature reading can trigger incorrect system responses that affect multiple racks before anyone realizes something is wrong.
  • Pressure and Flow Instability
    • AI workloads surge and cycle, producing heat patterns that stress pumps, valves, gaskets, seals, and manifolds in ways traditional IT never did. These fluctuations are accelerating wear modes that many operators are just beginning to recognize. Illustrative Open Compute Project (OCP) design examples (e.g., 7–10 psi operating ranges at relevant flow rates) are helpful reference points, but they are not universal design criteria.
  • Human-Performance Gaps
    • CDU-related high-potential near misses (HiPo NMs) frequently arise during commissioning and maintenance, when technicians are still learning new workflows. For teams accustomed to legacy air-cooled systems, tasks such as valve sequencing, alarm interpretation, isolation procedures, fluid handling, and leak response are unfamiliar. Unfortunately, as noted in my Building Safer Data Centers post, when technology advances faster than training, people become the first point of vulnerability.
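A common mitigation for this controls-layer fragility is cross-validated sensor redundancy. As a minimal sketch of the idea (the function, tolerance, and readings here are illustrative, not drawn from any vendor's firmware), a 2-out-of-3 median vote keeps one drifting transducer from driving a loop-wide response:

```python
from statistics import median

def vote(readings, tolerance):
    """Return the median of redundant sensor readings plus the indices of
    any sensors deviating from that median by more than `tolerance`.
    Acting on the median, rather than any single sensor, prevents one
    faulty transducer from triggering a loop-wide response."""
    m = median(readings)
    outliers = [i for i, r in enumerate(readings) if abs(r - m) > tolerance]
    return m, outliers

# Three redundant supply-pressure sensors (psi); sensor 2 has drifted high.
pressure, suspect = vote([8.4, 8.5, 12.9], tolerance=1.0)
print(pressure, suspect)  # -> 8.5 [2]
```

In a real CDU the controller would act on the voted value and raise a maintenance alarm for the suspect sensor, rather than tripping the loop on a single bad reading.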

Photo: Borealis CDU (courtesy of AGT)

Additional Risks Emerging in 2025 Liquid-Cooled Environments

Beyond the three most frequent patterns noted above, several quieter but equally impactful vulnerabilities are also surfacing across 2025 deployments:

  • System Architecture Gaps
    • Some first-generation CDUs and loops lack robust isolation, bypass capability, or multi-path routing. Single points of failure, such as a valve, pump, or PLC, can drive full-loop shutdowns, mirroring the cascading-risk behaviors highlighted in my earlier work on resilience.
  • Maintenance & Operational Variability
    • SOPs for liquid cooling vary widely across sites and vendors. Fluid handling, startup/shutdown sequences, and leak-response steps remain inconsistent, creating conditions for preventable HiPo NMs.
  • Chemistry & Fluid Integrity Risks
    • As highlighted in the DTC article Steve Barberi and I co-authored, corrosion, additive depletion, cross-contamination, and stagnant zones can quietly degrade system health. ICP-MS analysis and other advanced techniques are recommended in OCP-aligned coolant programs for PG-25-class fluids, though not universally required.
  • Leak Detection & Nuisance Alarms
    • False positives and false negatives, especially across BMS/DCIM integrations, remain common. Predictive analytics are becoming essential despite not yet being formalized in standards.
  • Facility-Side Dynamics
    • Upstream conditions such as temperature swings, ΔP fluctuations, water hammer, cooling tower chemistry, and biofouling often drive CDU instability. CDUs are frequently blamed for behavior originating in facility water systems.
  • Interoperability & Telemetry Semantics
    • Inconsistent Modbus, BACnet, and Redfish mappings, naming conventions, and telemetry schemas create confusion and delay troubleshooting.
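A thin normalization layer is one pragmatic answer to the interoperability problem above: map every vendor-specific point name onto a single canonical schema before data reaches BMS/DCIM tooling. This is a sketch only; all tag names are hypothetical, not drawn from any Modbus, BACnet, or Redfish profile:

```python
# Map vendor-specific telemetry names onto one canonical schema so BMS/DCIM
# tooling sees consistent point names regardless of protocol. All tag names
# here are hypothetical examples, not vendor or standards-body identifiers.
TAG_MAP = {
    "CDU_SUP_TMP":  "supply_temp_c",     # e.g. a Modbus register label
    "SupplyTemp":   "supply_temp_c",     # e.g. a BACnet object name
    "CDU_FLOW_PRI": "primary_flow_lpm",
    "PrimaryFlow":  "primary_flow_lpm",
}

def normalize(raw: dict) -> dict:
    """Translate a raw telemetry snapshot into canonical point names,
    silently dropping unmapped tags (a real system would log them)."""
    return {TAG_MAP[k]: v for k, v in raw.items() if k in TAG_MAP}

print(normalize({"CDU_SUP_TMP": 30.2, "CDU_FLOW_PRI": 180.0, "Extra": 1}))
# -> {'supply_temp_c': 30.2, 'primary_flow_lpm': 180.0}
```

The design choice is deliberate: troubleshooting gets faster when every CDU, regardless of vendor, publishes the same point names into the monitoring stack.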

Best Practices: Designing CDUs for Resilience, Not Just Cooling Capacity

If CDUs are going to serve as the cornerstone of liquid cooling in AI environments, they must be engineered around resilience, not simply performance. Several emerging best practices are gaining traction:

  1. Controls Redundancy
    • Dual PLCs, dual sensors, and cross-validated telemetry signals reduce single-point failure exposure. These features do not have prescriptive standards today but are rapidly emerging as best practices for high-density AI environments.
  2. Real-Time Telemetry & Predictive Insight
    • Detecting drift, seal degradation, valve lag, and chemistry shift early is becoming essential. Predictive analytics and deeper telemetry integration are increasingly expected.
  3. Meaningful Isolation
    • Operators should be able to isolate racks, lines, or nodes without shutting down entire loops. In high-density AI environments, isolation becomes uptime.
  4. Failure-Mode Commissioning
    • CDUs should be tested not only for performance but also for failure behavior such as PLC loss, sensor failures, false alarms, and pressure transients. These simulations reveal early-life risk patterns that standard commissioning often misses.
  5. Reliability Expectations
    • CDU design should align with OCP’s system-level reliability expectations, such as MTBF targets on the order of >300,000 hours for OAI Level 10 assemblies, while recognizing that CDU-specific requirements vary by vendor and application.
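The "predictive insight" in item 2 can start very simply, for example by flagging when a signal departs from its exponentially weighted moving average. A toy sketch, where `alpha` and `threshold` are illustrative rather than tuned values:

```python
def ewma_drift(samples, alpha=0.1, threshold=0.5):
    """Return the index at which a signal departs from its exponentially
    weighted moving average by more than `threshold`, or None if it never
    does. A toy drift detector; real telemetry pipelines would also handle
    noise, sampling gaps, and sensor faults."""
    avg = samples[0]
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - avg) > threshold:
            return i
        avg = alpha * x + (1 - alpha) * avg
    return None

# Coolant supply temperature (degrees C) creeping up as a seal degrades.
temps = [30.0, 30.0, 30.1, 30.1, 30.2, 30.4, 30.9]
print(ewma_drift(temps))  # -> 6
```

Because the moving average adapts slowly, a gradual creep is tolerated until it accelerates, which is roughly the behavior wanted for catching seal degradation or valve lag before a hard alarm threshold is crossed.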

Standards Alignment

The risks and mitigation strategies outlined above align with emerging guidance from ASHRAE TC 9.9 and the OCP’s liquid-cooling workstreams, including:

  • OAI System Liquid Cooling Guidelines
  • Liquid-to-Liquid CDU Test Methodology
  • ASTM D8040 & D1384 for coolant chemistry durability
  • IEC/UL 62368-1 for hazard-based safety
  • ASHRAE 90.4, PUE/WUE/CUE metrics, and
  • ANSI/BICSI 002, ISO/IEC 22237, and Uptime’s Tier Standards emphasizing concurrently maintainable infrastructure.

These collectively reinforce a shift: CDUs must be treated as availability-critical systems, not auxiliary mechanical devices.

Looking Ahead

The rise of CDUs represents a moment the data center industry has seen before. As soon as a new technology becomes mission-critical, its risk profile expands until safety, engineering, and operations converge around it. Twenty years ago, that moment belonged to UPS systems. Ten years ago, it was batteries. Now, in AI-driven environments, it is the CDU.

Organizations that embrace resilient CDU design, deep visibility, and operator readiness will be the ones that scale AI safely and sustainably.

# # #

About the Author

Walter Leclerc is an independent consultant and recognized industry thought leader in Environmental Health & Safety, Risk Management, and Sustainability, with deep experience across data center construction and operations, technology, and industrial sectors. He has written extensively on emerging risk, liquid cooling, safety leadership, predictive analytics, incident trends, and the integration of culture, technology, and resilience in next-generation mission-critical environments. Walter led the initiatives that earned Digital Realty the Environment+Energy Leader’s Top Project of the Year Award for its Global Water Strategy and recognition on EHS Today’s America’s Safest Companies List. A frequent global speaker on the future of safety, sustainability, and resilience in data centers, Walter holds a B.S. in Chemistry from UC Berkeley and an M.S. in Environmental Management from the University of San Francisco.

The post The Rising Risk Profile of CDUs in High-Density AI Data Centers appeared first on Data Center POST.

Europe’s Digital Infrastructure Enters the Green Era: A Conversation with Nabeel Mahmood at Capacity Europe

13 November 2025 at 16:00

Interview: Jayne Mansfield, ZincFive, with Nabeel Mahmood

At this year’s Capacity Europe conference in London – the epicenter for conversations shaping the digital infrastructure landscape – one theme cut through every panel and hallway exchange: Europe’s data future must be both powerful and sustainable.

To unpack what that really means for investors, operators, and policymakers, we sat down with technology executive and Top 10 Global Influencer Nabeel Mahmood, who spoke at the event about the region’s evolving data-center ecosystem.

“Demand is exploding across the UK and Europe,” Mahmood told us. “AI, edge compute, high-density GPU workloads, and hyperscale cloud deployments are all converging – and they’re forcing a rethink of what infrastructure looks like.” 

The Shift from Scale to Strategy

Mahmood’s central message was that the market’s priorities are shifting from ‘how much’ capacity to ‘how and where’ it’s built. Across the region, sustainability and energy resilience are no longer nice-to-have checkboxes; they’re becoming the foundation of investment decisions.

“Infrastructure used to be a race for megawatts,” he explained. “Now it’s a race for smarter, greener, and more sustainable megawatts.”

That shift is already visible in the UK, where annual data-center investment is projected to soar from roughly £1.75 billion in 2024 to £10 billion by 2029. While London remains dominant, new projects are spreading beyond the M25 as developers chase available power and faster permitting timelines.

Mahmood pointed out that “the UK’s declaration of data centers as critical national infrastructure is a step in the right direction – it signals recognition that digital infrastructure underpins everything from jobs to national competitiveness.”

Europe’s Tightrope: Power, Land, and Policy

Across continental Europe, the picture is similar but more constrained. The so-called FLAP-D markets – Frankfurt, London, Amsterdam, Paris, and Dublin – are nearing record-low vacancy rates, with take-up expected to hit 855 MW in 2025, up 22% year-on-year.

“Grid capacity and land availability have become the new bottlenecks,” Mahmood said. “Those constraints are pushing investors to look at secondary markets – Milan, Nordic hubs, even parts of Southern Europe – where renewable energy integration and policy agility are improving.”

That migration is reshaping the map of European data infrastructure, with sustainability as the common denominator. Operators are incorporating liquid cooling, renewable sourcing, and battery-microgrid systems into new designs to support increasingly power-hungry AI clusters.

Why Power Chemistry Now Matters

In that context, Mahmood emphasized the critical role of next-generation battery technology – particularly nickel-zinc (Ni-Zn) – as a cornerstone of the sustainable data-center model.

“Battery systems are no longer just backup,” he said. “They’re becoming part of the strategic infrastructure footprint.”

Ni-Zn chemistry, he explained, offers a combination of high power density, safety, and circularity that aligns with Europe’s sustainability mandates. Unlike lithium-ion or lead-acid systems, Ni-Zn avoids thermal-runaway risks, reduces cooling needs, and offers recyclability benefits that fit the EU’s evolving battery-regulation framework.

“For operators, it’s not just an ESG checkbox,” Mahmood added. “It’s about freeing up space, cutting long-term costs, and demonstrating a credible pathway to low-carbon operations.”

A New Definition of Digital Infrastructure

Perhaps Mahmood’s most resonant message at Capacity Europe was philosophical: the way the industry defines “infrastructure” itself must evolve.

“Data centers aren’t just cost centers or tech assets,” he said. “They’re critical national infrastructure – pillars of the modern economy that touch climate policy, energy strategy, and digital sovereignty.”

That redefinition brings a new level of accountability. It means that as Europe scales for AI, cloud, and edge computing, the choices around power, cooling, materials, and footprint will determine not just commercial success but environmental integrity.

The Takeaway

Mahmood closed our conversation with a clear challenge to the industry:

“The digital-infrastructure boom sweeping through Europe must be anchored in responsible, resilient, and sustainable design. Adopting technologies like Ni-Zn isn’t just a technical upgrade – it’s a strategic differentiator. Those who embrace that mindset now will lead the next wave of growth.”

At Capacity Europe, optimism for digital expansion was everywhere – but so was a recognition that the future will belong to those who innovate responsibly. Mahmood’s vision distilled that reality perfectly: the next frontier of infrastructure isn’t just bigger. It’s smarter, greener, and built for permanence.

The post Europe’s Digital Infrastructure Enters the Green Era: A Conversation with Nabeel Mahmood at Capacity Europe appeared first on Data Center POST.

Self-contained liquid cooling: the low-friction option

Each new generation of server silicon is pushing traditional data center air cooling closer to its operational limits. In 2025, the thermal design power (TDP) of top-bin CPUs reached 500 W, and server chip product roadmaps indicate further escalation in pursuit of higher performance. To handle these high-powered chips, more IT organizations are considering direct liquid cooling (DLC) for their servers. However, large-scale deployment of DLC with supporting facility water infrastructure can be costly and complex to operate, and is still hindered by a lack of standards (see DLC shows promise, but challenges persist).

In these circumstances, an alternative approach has emerged: air-cooled servers with internal DLC systems. Referred to by vendors as either air-assisted liquid cooling (AALC) or liquid-assisted air cooling (LAAC), these systems do not require coolant distribution units or facility water infrastructure for heat rejection. This means that they can be deployed in smaller, piecemeal installations.

Uptime Intelligence considers AALC a broader subset of DLC — defined by the use of coolant to remove heat from components within the IT chassis — that includes options for multiple servers. This report discusses designs that use a coolant loop — typically water in commercially available products — that fit entirely within a single server chassis.

Such systems enable IT system engineers and operators to cool top-bin processor silicon in dense form factors — such as 1U rack-mount servers or blades — without relying on extreme-performance heat sinks or elaborate airflow designs. Given enough total air cooling capacity, self-contained AALC requires no disruptive changes to the data hall or new maintenance tasks for facility personnel.

Deploying these systems in existing space will not expand cooling capacity the way full DLC installations with supporting infrastructure can. However, selecting individual 1U or 2U servers with AALC can either reduce IT fan power consumption or enable operators to support roughly 20% greater TDP than they otherwise could — with minimal operational overhead. According to the server makers offering this type of cooling solution, such as Dell and HPE, the premium for self-contained AALC can pay for itself in as little as two years when used to improve power efficiency.
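That payback claim depends on the option premium, energy price, fan savings, and facility PUE. A back-of-envelope sketch shows the arithmetic; every number here is an illustrative assumption, not a Dell or HPE figure:

```python
# Back-of-envelope payback for the AALC option premium on one 1U server.
# Every value below is an illustrative assumption, not a vendor figure.
premium_usd = 150.0        # hypothetical AALC cost premium over a heat sink
server_power_w = 800.0     # hypothetical average server draw
saving_frac = 0.065        # midpoint of the 5% to 8% total-power reduction cited
pue = 1.4                  # facility overhead multiplies every IT watt saved
price_usd_per_kwh = 0.12   # hypothetical blended electricity rate

saved_kwh_per_year = server_power_w * saving_frac * pue * 8760 / 1000
payback_years = premium_usd / (saved_kwh_per_year * price_usd_per_kwh)
print(round(saved_kwh_per_year), "kWh/yr saved;", round(payback_years, 1), "yr payback")
# -> 638 kWh/yr saved; 2.0 yr payback
```

Under these assumptions the premium is recovered in about two years; sites with cheaper power, lower PUE, or lightly loaded servers would see a longer payback.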

Does simplicity matter?

Many of today’s commercial cold plate and immersion cooling systems originated and matured in high-performance computing facilities for research and academic institutions. However, another group has been experimenting with liquid cooling for more than a decade: video game enthusiasts. Some have equipped their PCs with self-contained AALC systems to improve CPU and GPU performance, as well as reduce fan noise. More recently, to manage the rising heat output of modern server CPUs, IT vendors have started to offer similar systems.

The engineering is simple: fluid tubing connects one or more cold plates to a radiator and pump. The pumps circulate warmed coolant from the cold plates through the radiator, while server fans draw cooling air through the chassis and across the radiator (see Figure 1). Because water is a more efficient heat transfer medium than air, it can remove heat from the processor at a greater rate — even at a lower case temperature.

Figure 1 Closed-loop liquid cooling within the server

Diagram: Closed-loop liquid cooling within the server

The coolant used in commercially shipping products is usually PG25, a mixture of 75% water and 25% propylene glycol. This formulation has been widely adopted in both DLC and facility water systems for decades, so its chemistry and material compatibility are well understood.

As with larger DLC systems, alternative cooling approaches can use a phase change to remove IT heat. Some designs use commercial two-phase dielectric coolants, and an experimental alternative uses a sealed system containing a small volume of pure water under partial vacuum. This lowers the boiling point of water, effectively turning it into a two-phase coolant.

Self-contained AALC designs with multiple cold plates usually have redundant pumps — one on each cold plate in the same loop — and can continue operating if one pump fails. Because AALC systems for a single server chassis contain a smaller volume of coolant than larger liquid cooling systems, any leak is less likely to spill into systems below. Cold plates are typically equipped with leak detection sensors.

Closed-loop liquid cooling is best applied in 1U servers, where space constraints prevent the use of sufficiently large heat sinks. In internal testing by HPE, the pumps and fans of an AALC system in a 1U server consumed around 40% less power than the server fans in an air-cooled equivalent. This may amount to as much as a 5% to 8% reduction in total server power consumption under full load. The benefits of switching to AALC are smaller for 2U servers, which can mount larger heat sinks and use bigger, more efficient fan motors.
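Those figures also imply how much of the server's draw the fans were consuming: if a roughly 40% cut in cooling power saves 5% to 8% of total power, fans and pumps must account for about 12.5% to 20% of the total. This is an inference from the cited numbers, not an HPE statement:

```python
# total_saving = fan_share * fan_cut, so fan_share = total_saving / fan_cut.
fan_cut = 0.40                     # ~40% reduction in fan + pump power
for total_saving in (0.05, 0.08):  # 5% to 8% of total server power
    print(f"implied fan share: {total_saving / fan_cut:.1%}")
# -> implied fan share: 12.5%
# -> implied fan share: 20.0%
```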

However, radiator size, airflow limitations, and temperature-sensitive components mean that self-contained AALC cannot match larger DLC systems, making it better suited as a transitional measure. Additionally, these systems are not currently available for GPU servers.

Advantages of AALC within the server:

  • Higher cooling capacity (up to 20%) than air cooling in the same form factor and for the same energy input; more even heat distribution and faster thermal response than heat sinks.
  • Requires no changes to white space or gray space.
  • Components are widely available.
  • Can operate without maintenance for the lifetime of the server, with low risk of failure.
  • Does not require space outside the rack, unlike “sidecars” or rear-mounted radiators.

Drawbacks of AALC within the server:

  • Closed-loop server cooling systems use several complex components that cost more than a heat sink.
  • Offers less IT cooling capacity than other liquid cooling approaches: systems available outside of high-performance computing and AI-specific deployments will typically support up to 1.2 kW of load per 1U server.
  • Self-contained systems generally consume more server fan power (a parasitic component of IT energy consumption) than larger DLC systems.
  • No control of coolant loop temperatures; control of flow rate through pumps may be available in some designs.
  • Radiator and pumps limit space savings within the server chassis.

Outlook

For some organizations, AALC offers the opportunity to maximize the value of existing investments in air cooling infrastructure. For others, it may serve as a measured step on the path toward DLC adoption.

This form of cooling is likely to be especially valuable for operators of legacy facilities that have sufficient air cooling infrastructure to support some high-powered servers but would otherwise suffer from hot spots. Selecting AALC over air cooling may also reduce server fan power enough to allow operators to squeeze another server into a rack.

Much of AALC’s appeal is its potential for efficient use of fan power and its compatibility with existing facility cooling capabilities. Expanding beyond this to increase a facility’s cooling capacity is a different matter, requiring larger, more expensive DLC systems supported by additional heat transport and rejection equipment. In comparison, server-sized AALC systems represent a much smaller cost increase over heat sinks.

Future technical development may address some of AALC’s limitations, although progress and funding will largely depend on the commercial interest in servers with self-contained AALC. In conversations with Uptime Intelligence, IT vendors have diverging views of the role of self-contained AALC in their server portfolios, suggesting that the market’s direction remains uncertain. Nonetheless, there is some interesting investment in the field. For example, Belgian startup Calyos has developed passive closed-loop cooling systems that operate without pumps, instead moving coolant via capillary action. The company is working on a rack-scale prototype that could eventually see deployment in data centers.


The Uptime Intelligence View

AALC within the server may only deliver a fraction of the improvements associated with DLC, but it does so at a fraction of the cost and with minimal disruption to the facility. For many, the benefits may seem negligible. However, for a small group of air-cooled facilities, AALC can deliver either cooling capacity benefits or energy savings.

The post Self-contained liquid cooling: the low-friction option appeared first on Uptime Institute Blog.
