
Received yesterday — 31 January 2026

LFB Group rebrands data centre division as Apx

30 January 2026 at 12:03

LFB Group’s dedicated data centre division has rebranded to Apx, in a move the company says reflects the “complexity, pace and performance expectations” now defining the European data centre market.

The rebrand comes as operators and developers grapple with rising compute intensity, with AI deployments pushing rack densities higher and putting greater scrutiny on cooling performance and delivery timelines. In that environment, Apx says closer collaboration earlier in the design and build cycle – including co-engineering and pre-commissioning – is becoming increasingly important.

The name should also feel familiar: LFB Group has used Apx before, as the name of an entire cooling infrastructure product series. Now the name is being extended to the whole division.

Apx retains the familiar dedicated team from LFB Group, which was previously part of Lennox, so the experience the company has built over the last 20 years carries over – just under a new name.

Why has LFB Group rebranded its data centre division to Apx? 

Given its established position in the market, why rebrand? The company says Apx is about market positioning: it has recently debuted three new products and is keen to capitalise on the explosive growth of the data centre market, especially in Europe.

The company is centring its positioning on pre-commissioning and early validation work, with capabilities it describes as spanning precision manufacturing, automated testing and climatic validation.

Matt Evans, CEO at Apx Data Centre Solutions, argued that the ability to validate performance earlier has become a differentiator as large projects are announced at pace. He noted, “The industry’s dams have well and truly burst, with billion-dollar projects and developments being announced almost every week. Keeping on top of this demand, though, has never been more important.

“Today, collaboration is everything. Operators are searching for partners who can offer them both flexibility and agility, enabling them to build for the future while reacting quickly to what’s happening right now. That’s where co-engineering becomes critical; by working with designers, contractors and operators from day one, we can shape decisions together, anticipate challenges and engineer solutions before they become problems.”

Evans added that front-loading engineering work is intended to reduce uncertainty once equipment reaches site. He continued, “While no one can predict what’s around the corner, one thing is clear: performance has to be proven earlier. It’s been one of our grounding principles since the start; the idea that pre-commissioning must be core to every product’s DNA. By front-loading engineering, validating performance up-front and removing uncertainty before components reach sites, we give operators the head space, and time, to meet the demand.

“The direction of travel is clear: scale, capacity and density. And I couldn’t be more excited about where we’ve taken this business. The new Apx name marks our next chapter, and it’s one we’re genuinely proud to be part of.”

While it has a new name, Apx will continue to sit within the wider LFB Group, which also includes HVAC specialist Redge and refrigeration business Friga-Bohn. The group says this structure provides industrial-scale manufacturing support and engineering expertise across refrigeration and mechanical disciplines.

Alongside the branding change, Apx is also expanding headcount. The company said it will recruit across project management, operations, controls, commissioning and sales support roles in France, Germany and the Netherlands. By 2027, its dedicated data centre team is expected to reach around 50 employees.

AI and cooling: toward more automation

AI is increasingly steering the data center industry toward new operational practices, where automation, analytics and adaptive control are paving the way for “dark” — or lights-out, unstaffed — facilities. Cooling systems, in particular, are leading this shift. Yet despite AI’s positive track record in facility operations, one persistent challenge remains: trust.

In some ways, AI faces a similar challenge to that of commercial aviation several decades ago. Even after airlines had significantly improved reliability and safety performance, making air travel not only faster but also safer than other forms of transportation, it still took time for public perceptions to shift.

That same tension between capability and confidence lies at the heart of the next evolution in data center cooling controls. As AI models improve in performance and become better understood, more transparent and more explainable, the question is no longer whether AI can manage operations autonomously, but whether the industry is ready to trust it enough to turn off the lights.

AI’s place in cooling controls

Thermal management systems, such as CRAHs, CRACs and airflow management, represent the front line of AI deployment in cooling optimization. Their modular nature enables the incremental adoption of AI controls, providing immediate visibility and measurable efficiency gains in day-to-day operations.

AI can now be applied across four core cooling functions:

  • Dynamic setpoint management. Continuously recalibrates temperature, humidity and fan speeds to match load conditions (see the sketch after this list).
  • Thermal load forecasting. Predicts shifts in demand and makes adjustments in advance to prevent overcooling or instability.
  • Airflow distribution and containment. Uses machine learning to balance hot and cold aisles and stage CRAH/CRAC operations efficiently.
  • Fault detection, predictive and prescriptive diagnostics. Identifies coil fouling, fan oscillation, or valve hunting before they degrade performance.
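To make the first function concrete, here is a minimal sketch, in Python, of what a dynamic setpoint loop can look like: it nudges a CRAH supply-air setpoint and fan speed toward a target rack-inlet temperature, with a placeholder term where a thermal-load forecast would feed in. It is purely illustrative; the gains, limits and names are assumptions, not any vendor's values.

```python
# Illustrative only: a minimal dynamic-setpoint loop for one CRAH unit.
# Gains, limits and sensor names are hypothetical, not vendor values.
from dataclasses import dataclass

@dataclass
class CrahState:
    supply_setpoint_c: float   # current supply-air setpoint (deg C)
    fan_speed_pct: float       # current fan speed (% of max)

TARGET_INLET_C = 25.0           # desired rack-inlet temperature
SETPOINT_LIMITS = (16.0, 27.0)  # allowable supply-air band (illustrative)
FAN_LIMITS = (30.0, 100.0)

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def step(state: CrahState, measured_inlet_c: float,
         forecast_delta_c: float = 0.0) -> CrahState:
    """One control step: proportional correction plus a forecast term
    (a thermal load forecast would supply forecast_delta_c)."""
    error = measured_inlet_c + forecast_delta_c - TARGET_INLET_C
    # If inlets run hot, lower the supply setpoint and raise fan speed.
    new_setpoint = clamp(state.supply_setpoint_c - 0.5 * error, *SETPOINT_LIMITS)
    new_fan = clamp(state.fan_speed_pct + 5.0 * error, *FAN_LIMITS)
    return CrahState(new_setpoint, new_fan)

if __name__ == "__main__":
    state = CrahState(supply_setpoint_c=22.0, fan_speed_pct=60.0)
    for inlet in (25.2, 26.0, 27.5, 24.8):   # simulated rack-inlet readings
        state = step(state, inlet)
        print(f"inlet={inlet:.1f}C -> setpoint={state.supply_setpoint_c:.1f}C, "
              f"fan={state.fan_speed_pct:.0f}%")
```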

A growing ecosystem of vendors is advancing AI-driven cooling optimization across both air- and water-side applications. Companies such as Vigilent, Siemens, Schneider Electric, Phaidra and Etalytics offer machine learning platforms that integrate with existing building management systems (BMS) or data center infrastructure management (DCIM) systems to enhance thermal management and efficiency.

Siemens’ White Space Cooling Optimization (WSCO) platform applies AI to match CRAH operation with IT load and thermal conditions, while Schneider Electric, through its Motivair acquisition, has expanded into liquid cooling and AI-ready thermal systems for high-density environments. In parallel, hyperscale operators, such as Google and Microsoft, have built proprietary AI engines to fine-tune chiller and CRAH performance in real time. These solutions range from supervisory logic to adaptive, closed-loop control. However, all share a common aim: improve efficiency without compromising compliance with service level agreements (SLAs) or operator oversight.

The scope of AI adoption

While IT cooling optimization has become the most visible frontier, conversations with AI control vendors reveal that most mature deployments still begin at the facility water loop rather than in the computer room. Vendors often start with the mechanical plant and facility water system because these areas involve a limited set of variables (temperature differentials, flow rates and pressure setpoints) and can be treated as closed, well-bounded systems.

This makes the water loop a safer proving ground for training and validating algorithms before extending them to computer room air cooling systems, where thermal dynamics are more complex and influenced by containment design, workload variability and external conditions.

Predictive versus prescriptive: the maturity divide

AI in cooling is evolving along a maturity spectrum — from predictive insight to prescriptive guidance and, increasingly, to autonomous control. Table 1 summarizes the functional and operational distinctions among these three stages of AI maturity in data center cooling.

Table 1: Predictive, prescriptive and autonomous AI in data center cooling (table not reproduced in this view).

Most deployments today stop at the predictive stage, where AI enhances situational awareness but leaves action to the operator. Achieving full prescriptive control will require not only a deeper technical sophistication but also a shift in mindset.

Technically, it is more difficult to engineer because the system must not only forecast outcomes but also choose and execute safe corrective actions within operational limits. Operationally, it is harder to trust because it challenges long-held norms about accountability and human oversight.
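To make the maturity divide concrete, the sketch below (an author's illustration, not the content of Table 1) shows how the same forecast excursion surfaces at each stage: predictive returns an alert, prescriptive returns a recommended action awaiting approval, and autonomous executes a corrective action within a pre-approved envelope and logs the decision. All thresholds and actions are assumptions.

```python
# Illustrative maturity stages for an AI cooling controller.
# All thresholds and actions are hypothetical.
from typing import Optional

def predictive(forecast_peak_c: float, limit_c: float = 27.0) -> str:
    """Predictive: inform the operator, take no action."""
    if forecast_peak_c > limit_c:
        return f"ALERT: inlet temperature forecast to reach {forecast_peak_c:.1f} C"
    return "No excursion forecast"

def prescriptive(forecast_peak_c: float, limit_c: float = 27.0) -> Optional[dict]:
    """Prescriptive: recommend an action, but wait for human approval."""
    if forecast_peak_c > limit_c:
        return {"recommendation": "stage one additional CRAH", "requires_approval": True}
    return None

def autonomous(forecast_peak_c: float, limit_c: float = 27.0,
               max_extra_units: int = 1) -> str:
    """Autonomous: execute the corrective action within pre-approved bounds,
    and log the decision for later review."""
    if forecast_peak_c > limit_c:
        units = min(1, max_extra_units)   # never exceed the approved envelope
        return f"ACTION: staged {units} additional CRAH unit(s); decision logged"
    return "No action required"

print(predictive(28.3))
print(prescriptive(28.3))
print(autonomous(28.3))
```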

The divide, therefore, is not only technical but also cultural. The shift from informed supervision to algorithmic control is redefining the boundary between automation and authority.

AI’s value and its risks

No matter how advanced the technology becomes, cooling exists for one reason: maintaining environmental stability and meeting SLAs. AI-enhanced monitoring and control systems support operating staff by:

  • Predicting and preventing temperature excursions before they affect uptime.
  • Detecting system degradation early and enabling timely corrective action.
  • Optimizing energy performance under varying load profiles without violating SLA thresholds.

Yet efficiency gains mean little without confidence in system reliability. It is also important to clarify that AI in data center cooling is not a single technology. Control-oriented machine learning models, such as those used to optimize CRAHs, CRACs and chiller plants, operate within physical limits and rely on deterministic sensor data. These differ fundamentally from language-based AI models such as GPT, where “hallucinations” refer to fabricated or contextually inaccurate responses.

At the Uptime Network Americas Fall Conference 2025, several operators raised concerns about AI hallucinations — instances where optimization models generate inaccurate or confusing recommendations from event logs. In control systems, such errors often arise from model drift, sensor faults, or incomplete training data, not from the reasoning failures seen in language-based AI. When a model’s understanding of system behavior falls out of sync with reality, it can misinterpret anomalies as trends, eroding operator confidence faster than it delivers efficiency gains.

The discomfort is not purely technical; it is also human. Many data center operators remain uneasy about letting AI take the controls entirely, even as they acknowledge its potential. In AI’s ascent toward autonomy, trust remains the runway still under construction.

Critically, modern AI control frameworks are being designed with built-in safety, transparency and human oversight. For example, Vigilent, a provider of AI-based optimization controls for data center cooling, reports that its optimizing control switches to “guard mode” whenever it is unable to maintain the data center environment within tolerances. Guard mode brings on additional cooling capacity (at the expense of power consumption) to restore SLA-compliant conditions; typical triggers include rapid drift or temperature hot spots. There is also a manual override option, which enables the operator to take control, supported by monitoring and event logs.

This layered logic provides operational resiliency by enabling systems to fail safely: guard mode ensures stability, manual override guarantees operator authority, and explainability, via decision-tree logic, keeps every AI action transparent. Even in dark-mode operation, alarms and reasoning remain accessible to operators.
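The sketch below shows one way such layered logic could be structured, with an optimizing mode, a guard mode that stages extra cooling when tolerances are breached, a manual override that always wins, and a decision log for explainability. It illustrates the pattern described above; it is not Vigilent's implementation, and the thresholds are assumptions.

```python
# Illustrative layered fail-safe logic for an AI cooling controller.
# Not any vendor's implementation; thresholds and actions are hypothetical.
from enum import Enum, auto

class Mode(Enum):
    OPTIMIZING = auto()   # AI adjusts cooling for efficiency
    GUARD = auto()        # add cooling capacity to restore SLA conditions
    MANUAL = auto()       # operator has taken control

class CoolingController:
    def __init__(self, sla_limit_c: float = 27.0):
        self.mode = Mode.OPTIMIZING
        self.sla_limit_c = sla_limit_c
        self.log = []   # explainability: every decision is recorded

    def operator_override(self):
        """Manual override always takes precedence over AI control."""
        self.mode = Mode.MANUAL
        self.log.append("Operator override: manual control")

    def step(self, max_inlet_c: float):
        if self.mode is Mode.MANUAL:
            self.log.append(f"Manual mode: inlet {max_inlet_c:.1f} C, no AI action")
            return
        if max_inlet_c > self.sla_limit_c:
            self.mode = Mode.GUARD
            self.log.append(
                f"Guard mode: inlet {max_inlet_c:.1f} C exceeds {self.sla_limit_c} C; "
                "staging additional cooling capacity")
        elif self.mode is Mode.GUARD and max_inlet_c < self.sla_limit_c - 1.0:
            self.mode = Mode.OPTIMIZING
            self.log.append("Conditions restored; returning to optimization")
        else:
            self.log.append(f"{self.mode.name}: inlet {max_inlet_c:.1f} C within limits")

ctrl = CoolingController()
for reading in (25.8, 27.6, 26.8, 25.4):
    ctrl.step(reading)
ctrl.operator_override()
ctrl.step(28.0)            # in manual mode the AI takes no action
print("\n".join(ctrl.log))
```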

These frameworks directly address one of the primary fears among data center operators: losing visibility into what the system is doing.

Outlook

Gradually, the concept of a dark data center (one operated remotely with minimal on-site staff) has shifted from interesting theory to desirable strategy. In recent years, many infrastructure operators have increased their use of automation and remote-management tools to enhance resiliency and operational flexibility while also mitigating the impact of low staffing levels. Cooling systems, particularly those governed by AI-assisted control, are now central to this operational transformation.

Operational autonomy does not mean abandoning human control; it means achieving reliable operation without the need for constant supervision. Ultimately, a dark data center is not about turning off the lights; it is about turning on trust.


The Uptime Intelligence View

AI in thermal management has evolved from an experimental concept into an essential tool, improving efficiency and reliability across data centers. The next step — coordinating facility water, air and IT cooling liquid systems — will define the evolution toward greater operational autonomy. However, the transition to “dark” operation will be as much cultural as it is technical. As explainability, fail-safe modes and manual overrides build operator confidence, AI will gradually shift from copilot to autopilot. The technology is advancing rapidly; the question is how quickly operators will adopt it.

The post AI and cooling: toward more automation appeared first on Uptime Institute Blog.

2025 in Review: Sabey’s Biggest Milestones and What They Mean

26 January 2026 at 18:00

Originally posted on Sabey Data Centers.

At Sabey Data Centers, progress is more than a series of headlines. It’s a blueprint for what’s possible when infrastructure, efficiency and stewardship go hand in hand. From our award-winning sustainability initiatives to bold new campus designs and record-setting expansions, each milestone this year has demonstrated our commitment to powering tomorrow’s workloads with conscience and confidence.

With 2026 already well underway and promising to be a banner year, we wanted to pause and reflect on the path we forged in 2025.

Capacity expansion: built for growth 

In 2025, we announced strategic power expansions across our Pacific Northwest, Ashburn, Austin and Quincy locations. In Seattle and Columbia, 30MW of new power is now anticipated to come online by 2027, enabling tenants to scale quickly while leveraging ultra-efficient energy (carbon-free in Seattle) and a regional average PUE as low as 1.2.

On the East Coast, Ashburn’s third and final building broke ground, set to introduce 54MW of additional capacity with the first tranches set to come online in 2026. This will be the first three-story facility in Sabey’s portfolio, purpose-built for air-cooled, liquid-cooled and hybrid deployments, with rack densities over 100kW and an average PUE of 1.35. In both regions, Sabey’s expansion balances hyperscale demand with customization, modular scale and resilient connectivity.

The launch of construction for Austin Building B was another major milestone, expanding our presence in the dynamic Round Rock tech corridor. This three-story, liquid-cooling-ready facility is designed to deliver 54 megawatts of total power capacity. Building B continues our commitment to scalable, energy-efficient digital infrastructure, tailored for enterprise and hyperscale workloads.

To continue reading, please click here.

The post 2025 in Review: Sabey’s Biggest Milestones and What They Mean appeared first on Data Center POST.

Received before yesterday

This Valve Could Halve EV Fast-Charge Times

17 December 2025 at 19:15


Fast, direct-current charging can take an EV’s battery from about 20 percent to 80 percent in 20 minutes. That’s not bad, but it’s still about six times as long as it takes to fill the tank of an ordinary petrol-powered vehicle.

One of the major bottlenecks to even faster charging is cooling, specifically uneven cooling inside big EV battery packs as they are charged. Hydrohertz, a British startup launched by former motorsport and power-electronics engineers, says it has a solution: deliver liquid coolant exactly where it’s needed during charging. Its device, announced in November, is a rotary coolant router that directs coolant to wherever temperatures spike, within milliseconds—far faster than any single-loop system can react. In laboratory tests, this cooling tech allowed an EV battery to safely charge in less than half the time possible with a conventional cooling architecture.

A Smarter Way to Move Coolant

Hydrohertz calls its solution Dectravalve. It looks like a simple manifold, but it contains two concentric cylinders and a stepper motor to direct coolant to as many as four zones within the battery pack. It’s installed in between the pack’s cold plates, which are designed to efficiently remove heat from the battery cells through physical contact, and the main coolant supply loop, replacing a tangle of valves, brackets, sensors, and hoses.

To keep costs low, Hydrohertz designed Dectravalve to be produced with off-the-shelf materials and seals, as well as dimensional tolerances that can be met with the fabrication tools used by many major parts suppliers. Keeping things simple and comparatively cheap could improve Dectravalve’s chances of catching on with automakers and suppliers notorious for frugality. “Thermal management is trending toward simplicity and ultralow cost,” says Chao-Yang Wang, a mechanical and chemical engineering professor at Pennsylvania State University whose research areas include dealing with issues related to internal fluids in batteries and fuel cells. Automakers would prefer passive cooling, he notes—but not if it slows fast charging. So, at least for now, intelligent control is essential.

“If Dectravalve works as advertised, I’d expect to see a roughly 20 percent improvement in battery longevity, which is a lot.”–Anna Stefanopoulou, University of Michigan

Hydrohertz built Dectravalve to work with ordinary water-glycol, otherwise known as antifreeze, keeping integration simple. Using generic antifreeze avoids a step in the validation process where a supplier or EV manufacturer would otherwise have to establish whether some special formulation is compatible with the rest of the cooling system and doesn’t cause unforeseen complications. And because one Dectravalve can replace the multiple valves and plumbing assemblies of a conventional cooling system, it lowers the parts count, reduces leak points, and cuts warranty risk, Hydrohertz founder and CTO Martyn Talbot claims. The tighter thermal control also lets automakers shrink oversize pumps, hoses, and heat exchangers, improving both cost and vehicle packaging.

The valve reads battery-pack temperatures several times per second and shifts coolant flow instantly. If a high-load event—like a fast charge—is coming, it prepositions itself so more coolant is apportioned to known hot spots before the temperature rises in them.

Multizone control can also speed warm-up to prevent the battery degradation that comes from charging at frigid temperatures. “You can send warming fluid to heat half the pack fast so it can safely start taking load,” says Anna Stefanopoulou, a professor of mechanical engineering at the University of Michigan who specializes in control systems, energy, and transportation technologies. That half can begin accepting load, while the system begins warming the rest of the pack more gradually, she explains. But Dectravalve’s main function remains cooling fast-heating troublesome cells so they don’t slow charging.
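As a way to picture the control problem, the sketch below allocates coolant flow across four zones in proportion to how far each exceeds a target temperature, with an optional bias that pre-positions flow toward zones expected to heat fastest during an upcoming fast charge. This is an illustration of the concept, not Hydrohertz's algorithm; the zone names, target and weighting are assumptions.

```python
# Illustrative multizone coolant allocation; not Hydrohertz's algorithm.
# Zone data, targets and weighting are hypothetical.

def allocate_flow(zone_temps_c, target_c=35.0, preheat_bias=None):
    """Split total flow (as fractions summing to 1) across zones in
    proportion to their positive temperature excess over the target.
    preheat_bias lets the valve pre-position flow toward zones expected
    to heat fastest during an upcoming fast charge."""
    bias = preheat_bias or {z: 0.0 for z in zone_temps_c}
    excess = {z: max(t - target_c, 0.0) + bias[z] for z, t in zone_temps_c.items()}
    total = sum(excess.values())
    if total == 0.0:
        # No zone is hot: distribute evenly at low flow.
        return {z: 1.0 / len(zone_temps_c) for z in zone_temps_c}
    return {z: e / total for z, e in excess.items()}

temps = {"zone1": 38.2, "zone2": 34.5, "zone3": 41.0, "zone4": 36.1}
# Before a fast charge begins, bias flow toward zones that historically run hot.
shares = allocate_flow(temps, preheat_bias={"zone1": 0.5, "zone2": 0.0,
                                            "zone3": 1.0, "zone4": 0.0})
for zone, share in shares.items():
    print(f"{zone}: {share:.0%} of coolant flow")
```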

Quick response to temperature changes inside the battery doesn’t increase the cooling capacity, but it leverages existing hardware far more efficiently. “Control the coolant with more precision and you get more performance for free,” says Talbot.

Charge Times Can Be Cut By 60 Percent

In early 2025, the Dectravalve underwent bench testing conducted by the Warwick Manufacturing Group (WMG), a multidisciplinary research center at the University of Warwick, in Coventry, England, that works with transport companies to improve the manufacturability of battery systems and other technologies. WMG compared Dectravalve’s cooling performance with that of a conventional single-loop cooling system using the same 100-kilowatt-hour battery pack. During fast-charge trials from 10 percent to 80 percent, Dectravalve held peak cell temperature below 44.5 °C and kept cell-to-cell temperature variation to just below 3 °C without intervention from the battery management system. Similar thermal performance for the single-loop system was made possible only by dialing back the amount of power the battery would accept—the very tapering that keeps fast charging from being on par with gasoline fill-ups.

Keeping the cell temperatures below 50 °C was key, because above that temperature lithium plating begins. The battery suffers irreversible damage when lithium starts coating the surface of the anode—the part of the battery where electrical charge is stored during charging—instead of filling its internal network of pores the way water does when it’s absorbed by a sponge. Plating greatly diminishes the battery’s charge-storage capacity. Letting the battery get too hot can also cause the electrolyte to break down. The result is inhibited flow of ions between the electrodes. And reduced flow within the battery means reduced flow in the external circuit, which powers the vehicle’s motors.

Because the Dectravalve kept temperatures low and uniform—and the battery management system didn’t need to play energy traffic cop and slow charging to a crawl to avoid overheating—charging time was cut by roughly 60 percent. With Dectravalve, the battery reached 80 percent state of charge in 10 to 13 minutes, versus 30 minutes with the single-cooling-loop setup, according to Hydrohertz.
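The claimed reduction follows directly from those figures; the short check below confirms that 10 to 13 minutes against a 30-minute baseline is a cut of roughly 57 to 67 percent.

```python
# Sanity check on the reported charge-time reduction.
baseline_min = 30.0
for fast_min in (10.0, 13.0):
    cut = 1.0 - fast_min / baseline_min
    print(f"{fast_min:.0f} min vs {baseline_min:.0f} min -> {cut:.0%} reduction")
# Prints 67% and 57%, i.e. roughly 60 percent as stated.
```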


When Batteries Keep Cool, They Live Longer

Using Warwick’s temperature data, Hydrohertz applied standard degradation models and found that cooler, more uniform packs last longer. Stefanopoulou estimates that if Dectravalve works as claimed, it could boost battery life by roughly 20 percent. “That’s a lot,” she says.

Still, it could be years before the system shows up on new EVs, if ever. Automakers will need years of cycle testing, crash trials, and cost studies before signing off on a new coolant architecture. Hydrohertz says several EV makers and battery suppliers have begun validation programs, and CTO Talbot expects licensing deals to ramp up as results come in. But even in a best-case scenario, Dectravalve won’t be keeping production-model EV batteries cool for at least three model years.

New Device Generates Power by Beaming Heat to Space

7 December 2025 at 21:00


Instead of absorbing energy from the sun to produce electricity, a new class of devices generates power by absorbing heat from its surroundings and beaming it at outer space. Such devices, which do not require exotic materials as their predecessors did, could help ventilate greenhouses and homes, researchers say.

In 2014, scientists invented superthin materials that can cool buildings without using electricity by beaming heat into outer space. When these materials absorb warmth, their compositions and structures ensure they emit heat outward as very specific wavelengths of infrared radiation, ones that air does not absorb. Instead, the radiation is free to leave the atmosphere, carrying energy with it and cooling the area around the material in a process called radiative cooling. The materials could help reduce demand for electricity. Air-conditioning accounts for nearly 15 percent of the electricity consumed by buildings in the United States alone.

Researchers then began exploring whether they could harness radiative cooling to generate power. Whereas solar cells produce electricity from the flow of energy into them from the sun, thermoradiative devices could generate power from energy flowing out from them into space.

“Thermoradiative devices operate like solar cells in reverse,” says Jeremy Munday, professor of electrical and computer engineering at the University of California, Davis. “Rather than pointing them at a hot object like the sun, you point them at a cool object, like the sky.”

However, these devices were typically semiconductor electronics that needed rare or expensive materials to operate efficiently. In a new study, Munday and his colleagues investigated using Stirling engines, which “are mechanically simple and do not rely on exotic materials,” he says. “They also directly produce mechanical power—which is valuable for applications like air movement or water pumping—without needing intermediate electrical conversion.”

A Stirling engine meets a heat-emitting antenna

At the heart of a Stirling engine is a gas sealed in an airtight chamber. When the gas is heated, it expands, and pressure increases within the chamber; when it is cooled, it contracts, reducing pressure. This creates a cycle of expansion and contraction that drives a piston, generating power.

Whereas internal combustion engines rely on large differences in temperature to generate power, a Stirling engine can extract useful work from even small differences in temperature.

“Stirling engines have been around since the early 1800s, but they always operated by touching some warm object and rejecting waste heat into the local, ambient environment,” Munday says. Instead, the new device is heated by its surroundings and cooled when it radiates energy into space.

The new device combines a Stirling engine with a panel that acts as a heat-radiating antenna. The researchers placed it on the ground outdoors at night.

A year of nighttime experiments revealed that the device could generate more than 10 degrees Celsius of cooling most months, which the researchers could convert to produce more than 400 milliwatts of mechanical power per square meter. The scientists used their invention to directly power a fan and also coupled it to a small electrical motor to generate current.
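A back-of-envelope bound helps explain why the output is modest: with roughly 10 °C between the ambient air and the radiatively cooled side, the Carnot limit on conversion efficiency is only a few percent. The ambient temperature used below is an assumed value for illustration, not a figure from the study.

```python
# Back-of-envelope Carnot limit for a small temperature difference.
# The ambient temperature is an assumed value, not from the study.
T_hot = 288.0            # assumed nighttime ambient, in kelvin (about 15 C)
delta_T = 10.0           # cooling reported by the researchers, in kelvin
T_cold = T_hot - delta_T

eta_carnot = 1.0 - T_cold / T_hot
print(f"Carnot limit: {eta_carnot:.1%}")   # about 3.5 percent
```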

Jeremy Munday’s experimental engine, which resembles a mechanical pinwheel, is mounted on a metal sheet. Photo: Jeremy Munday

Since the source of the new device’s energy is Earth’s ambient heat instead of the sun, its power output “is much lower than solar photovoltaics—roughly two orders of magnitude lower,” Munday says. “However, the goal is not to replace solar. Instead, this enables useful work when solar power is unavailable, such as at night and without requiring batteries, wiring, or fuel.”

The researchers calculated the device could generate more than 5 cubic feet per minute of air flow, the minimum air rate the American Society of Heating, Refrigerating and Air-Conditioning Engineers requires to minimize detrimental effects on health inside public buildings. Potential applications may include circulating carbon dioxide within greenhouses and improving comfort inside residential buildings, they say.

Munday and his colleagues note there are many ways in which they could further improve the device’s performance. For instance, they could replace the air sealed in the device with hydrogen or helium gas, which would reduce internal engine friction. “With more-efficient engine designs, we think this approach could enable a new class of passive, around-the-clock power systems that complement solar energy and help support resilient, off-grid infrastructure,” Munday says.

In the future, “we would like to set up these devices in a real greenhouse as a first proof-of-concept application,” Munday says. They would also like to engineer the device to work during the day, he notes.

The scientists detailed their findings in the journal Science Advances.

This article appears in the February 2026 print issue as “Engine Generates Power by Beaming Heat into Space.”

Sabey and JetCool Push Liquid Cooling from Pilot to Standard Practice

22 January 2026 at 17:00

As AI and high‑performance computing (HPC) workloads strain traditional air‑cooled data centers, Sabey Data Centers is expanding its partnership with JetCool Technologies to make direct‑to‑chip liquid cooling a standard option across its U.S. portfolio. The move signals how multi‑tenant operators are shifting from experimental deployments to programmatic strategies for high‑density, energy‑efficient infrastructure.

Sabey, one of the largest privately held multi‑tenant data center providers in the United States, first teamed with JetCool in 2023 to test direct‑to‑chip cooling in production environments. Those early deployments reported 13.5% server power savings compared with air‑cooled alternatives, while supporting dense AI and HPC racks without heavy reliance on traditional mechanical systems.

The new phase of the collaboration is less about proving the technology and more about scale. Sabey and JetCool are now working to simplify how customers adopt liquid cooling by turning what had been bespoke engineering work into repeatable designs that can be deployed across multiple sites. The goal is to give enterprises and cloud platforms a predictable path to high‑density infrastructure that balances performance, efficiency and operational risk.

A core element of that approach is a set of modular cooling architectures developed with Dell Technologies for select PowerEdge GPU‑based servers. By closely integrating server hardware and direct‑to‑chip liquid cooling, the partners aim to deliver pre‑validated building blocks for AI and HPC clusters, rather than starting from scratch with each project. The design includes unified warranty coverage for both the servers and the cooling system, an assurance that Sabey says is key for customers wary of fragmented support models.

The expanded alliance sits inside Sabey’s broader liquid cooling partnership program, an initiative that aggregates multiple thermal management providers under one framework. Instead of backing a single technology, Sabey is positioning itself as a curator of proven, ready‑to‑integrate cooling options that map to varying density targets and sustainability goals. For IT and facilities teams under pressure to scale GPU‑rich deployments, that structure promises clearer design patterns and faster time to production.

Executives at both companies frame the partnership as a response to converging pressures: soaring compute demand, tightening efficiency requirements and growing scrutiny of data center energy use. Direct‑to‑chip liquid cooling has emerged as one of the more practical levers for improving thermal performance at the rack level, particularly in environments where power and floor space are limited but performance expectations are not.

For Sabey, formalizing JetCool’s technology as a standard, warranty‑backed option is part of a broader message to customers: liquid cooling is no longer a niche or one‑off feature, but an embedded part of the company’s roadmap for AI‑era infrastructure. Organizations evaluating their own cooling strategies can find the full announcement here.

The post Sabey and JetCool Push Liquid Cooling from Pilot to Standard Practice appeared first on Data Center POST.

Power Should Serve Compute: Airsys’ Energy-First Approach to Data Center Cooling

21 January 2026 at 17:00

Data Center POST had the opportunity to connect with Paul Quigley, President of Airsys Cooling Technologies, Inc., ahead of PTC’26. With more than 36 years of experience spanning HVAC contracting, distribution, and executive leadership in manufacturing, Quigley is widely recognized for his work in large-scale, high-demand cooling environments. His career includes co-designing one of the largest VRF projects in North America and leading complex cooling initiatives across the United States. Since joining Airsys in 2021, Quigley has focused on advancing precision and liquid-assisted cooling technologies to help data center operators address power constraints driven by AI, high-density compute, and rapid digital infrastructure growth. In the Q&A below, Quigley shares his perspective on the challenges facing the global digital infrastructure industry and how Airsys is helping operators turn cooling efficiency into usable compute power.

Data Center Post (DCP) Question: What does your company do?  

Paul Quigley (PQ) Answer: Airsys is an energy-focused cooling solutions company founded on a simple idea: power should serve computation, not be lost protecting it. Every system we build returns power back to the data center where it belongs. Our innovative, award-winning technologies give operators more usable power for compute, improving capacity, resilience, and overall profitability.

DCP Q: What problems does your company solve in the market?

PQ A: Airsys helps data centers recover power lost to cooling. As AI clusters push energy systems past their limits, operators are forced to choose between compute, capacity, and cost. Our cooling technologies solve that problem by returning power back to the data center, reducing stranded capacity, and giving operators more usable power for compute.

DCP Q: What are your company’s core products or services?

PQ A: We design and build the world’s most advanced cooling solutions for data centers, including:

  • LiquidRack spray-cooling and liquid-assisted rack systems
  • High-efficiency DX and chilled-water cooling systems
  • Flooded-evaporator chillers (CritiCool-X)
  • Indoor and outdoor precision cooling systems
  • Edge, modular, and containerized data-center cooling
  • Control systems, energy-optimization tools, and PCE/ROIP performance frameworks

DCP Q: What markets do you serve?

PQ A:

  • Hyperscale and AI compute environments
  • Colocation and enterprise data centers
  • Modular and prefabricated data centers
  • Edge and telecom infrastructure
  • Education, industrial, government, and defense applications requiring mission-critical cooling

DCP Q: What challenges does the global digital infrastructure industry face today?

PQ A: The industry now operates in an environment of rapid compute expansion and structural power scarcity. Grid limitations, long construction timelines, inefficient legacy cooling, land constraints, and the explosive rise of AI workloads have created an energy bottleneck. Data centers no longer struggle with space… they struggle with power.

DCP Q: How is your company adapting to these challenges?

PQ A: Airsys has always shifted the conversation from cooling to energy stewardship. Every system we design focuses on reducing cooling overhead and returning power to computation. Our LiquidRack spray-cooling platform, low-lift CritiCool-X chillers, and high-efficiency DX systems enable operators to deploy more compute with less infrastructure. We’re also advancing new metrics – PCE and ROIP – to help operators quantify the financial value of returned power.

DCP Q: What are your company’s key differentiators?

PQ A:

  • Energy-first design philosophy — our systems return power to compute
  • Rapid delivery and global manufacturing — critical in today’s supply-strained market
  • LiquidRack spray cooling — enabling high-density AI clusters without stranded power
  • Flooded-evaporator chiller technology — high efficiency, low lift, faster deployment
  • End-to-end portfolio — DX, chilled water, liquid-assisted cooling, modular systems
  • Practical engineering — simple, reliable, maintainable designs
  • PCE / ROIP frameworks — financial and operational tools that change how operators evaluate cooling impact

DCP Q: What can we expect to see/hear from your company in the future?  

PQ A: You will see a major expansion of our LiquidRack platform, new ultra-efficient chiller technologies, deeper integration of PCE/ROIP metrics, and broader support for modular and edge deployments. We are continuing to push innovation toward one goal: giving operators more usable power for compute.

DCP Q: What upcoming industry events will you be attending? 

PQ A: Events Airsys is considering for 2026:

PTC’26 Hawaii; Escape the Cold Aisle Phoenix; Advancing DC Construction West; DCD>Connect New York; Data Center World DC; World of Modular Las Vegas; NSPMA; 7×24 Exchange Spring Orlando; Data Center Nation Toronto; Datacloud USA Austin; AI Infra Summit Santa Clara; DCW Power Dallas; Yotta Las Vegas; 7×24 Exchange Fall San Antonio; PTC DC; DCD>Connect Virginia; DCAC Austin; Supercomputing; Gartner IOCS; NVIDIA GTC; OCP Global Summit.

DCP Q: Do you have any recent news you would like us to highlight?

PQ A: Yes, Airsys has expanded its high-density cooling portfolio with several major advancements:

More announcements are planned for early 2026 as Airsys continues to expand its advanced cooling portfolio for high-density compute environments.

DCP Q: Is there anything else you would like our readers to know about your company and capabilities?

PQ A: Airsys is built on a simple idea: power should serve computation, not be lost protecting it. Our mission is to help operators turn cooling losses into compute gains and to make energy stewardship a competitive advantage in the AI era.

DCP Q: Where can our readers learn more about your company?  

PQ A: www.airsysnorthamerica.com

DCP Q: How can our readers contact your company? 

PQ A: www.airsysnorthamerica.com/contact

To learn more about PTC’26, please visit www.ptc.org/ptc26. The event takes place January 18-21, 2026 in Honolulu, HI.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com.

Stay in the know! Subscribe to Data Center POST today.

# # #

About Data Center POST

Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed and online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.

Data Center POST works hard to get the most current information and thought-provoking ideas most apt to add relevance to the success of the data center industry. Stay informed, visit www.datacenterpost.com.

ADVERTISE | CONTRIBUTE | SUBSCRIBE

The post Power Should Serve Compute: Airsys’ Energy-First Approach to Data Center Cooling appeared first on Data Center POST.

Sabey Data Centers Taps OptiCool to Tackle High-Density Cooling

19 January 2026 at 15:30

Sabey Data Centers is adding OptiCool Technologies to its growing ecosystem of advanced cooling partners, aiming to make high-density and AI-driven compute deployments more practical and energy efficient across its U.S. footprint. This move highlights how colocation providers are turning to specialized liquid and refrigerant-based solutions as rack densities outstrip the capabilities of traditional air cooling.

OptiCool is known for two-phase refrigerant pumped systems that use a non-conductive refrigerant to absorb heat through phase change at the rack level. This approach enables efficient heat removal without chilled water loops or extensive mechanical plant build-outs, which can simplify facility design and cut both capital and operating costs for data centers pushing into higher power densities. Sabey is positioning the OptiCool alliance as part of its integrated cooling technologies partnership program, which is designed to lower barriers to liquid and alternative cooling adoption for customers. Instead of forcing enterprises to engineer bespoke solutions for each deployment, Sabey is curating pre-vetted architectures and partners that align cooling technology, facility infrastructure and operational responsibility. For operators planning AI and HPC rollouts, that can translate into clearer deployment paths and reduced integration risk.

The appeal of two-phase refrigerant cooling lies in its combination of density, efficiency and retrofit friendliness. Because the systems move heat directly from the rack to localized condensers using a pumped refrigerant, they can often be deployed with minimal disruption to existing white space. That makes them attractive for operators that need to increase rack power without rebuilding entire data halls or adding large amounts of chilled water infrastructure.

Sabey executives frame the partnership as a response to customer demand for flexible, future-ready cooling options. As more organizations standardize on GPU-rich architectures and high-density configurations, cooling strategy has become a primary constraint on capacity planning. By incorporating OptiCool’s technology into its program, Sabey is signaling to customers that they will have multiple, validated pathways to support emerging workload profiles while staying within power and sustainability envelopes.

As liquid and refrigerant-based cooling rapidly move into the mainstream, customers evaluating their own AI and high-density strategies may benefit from understanding how Sabey is standardizing these technologies across its portfolio. To explore how this partnership and Sabey’s broader integrated cooling program could support specific deployment plans, readers can visit Sabey’s website for more information at www.sabeydatacenters.com.

The post Sabey Data Centers Taps OptiCool to Tackle High-Density Cooling appeared first on Data Center POST.

AI Is Moving to the Water’s Edge, and It Changes Everything

5 January 2026 at 15:00

A new development on the Jersey Shore is signaling a shift in how and where AI infrastructure will grow. A subsea cable landing station has announced plans for a data hall built specifically for AI, complete with liquid-cooled GPU clusters and an advertised PUE of 1.25. That number reflects a well-designed facility, but it highlights an emerging reality. PUE only tells us how much power reaches the IT load. It tells us nothing about how much work that power actually produces.

As more “AI-ready” landing stations come online, the industry is beginning to move beyond energy efficiency alone and toward compute productivity. The question is no longer just how much power a facility uses, but how much useful compute it generates per megawatt. That is the core of Power Compute Effectiveness (PCE). When high-density AI hardware is placed at the exact point where global traffic enters a continent, PCE becomes far more relevant than PUE.

To understand why this matters, it helps to look at the role subsea landing stations play. These are the locations where the massive internet cables from overseas come ashore. They carry banking records, streaming platforms, enterprise applications, gaming traffic, and government communications. Most people never notice them, yet they are the physical beginning of the global internet.

For years, large data centers moved inland, following cheaper land and more available power. But as AI shifts from training to real-time inference, location again influences performance. Some AI workloads benefit from sitting directly on the network path instead of hundreds of miles away. This is why placing AI hardware at a cable landing station is suddenly becoming not just possible, but strategic.

A familiar example is Netflix. When millions of viewers press Play, the platform makes moment-to-moment decisions about resolution, bitrate, and content delivery paths. These decisions happen faster and more accurately when the intelligence sits closer to the traffic itself. Moving that logic to the cable landing reduces distance, delays, and potential bottlenecks. The result is a smoother user experience.

Governments have their own motivations. Many countries regulate which types of data can leave their borders. This concept, often called sovereignty, simply means that certain information must stay within the nation’s control. Placing AI infrastructure at the point where international traffic enters the country gives agencies the ability to analyze, enforce, and protect sensitive data without letting it cross a boundary.

This trend also exposes a challenge. High-density AI hardware produces far more heat than traditional servers. Most legacy facilities, especially multi-tenant carrier hotels in large cities, were never built to support liquid cooling, reinforced floors, or the weight of modern GPU racks. Purpose-built coastal sites are beginning to fill this gap.

And here is the real eye-opener. Two facilities can each draw 10 megawatts, yet one may produce twice the compute of the other. PUE will give both of them the same high efficiency score because it cannot see the difference in output. Their actual productivity, and even their revenue potential, could be worlds apart.

PCE and ROIP (Return on Invested Power) expose that difference immediately. PCE reveals how much compute is produced per watt, and ROIP shows the financial return on that power. These metrics are quickly becoming essential in AI-era build strategies, and investors and boards are beginning to incorporate them into their decision frameworks.
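Because PCE and ROIP are not yet standardized, the sketch below uses deliberately simplified, assumed definitions (compute delivered per megawatt of facility power, and revenue attributable to that compute per megawatt) purely to illustrate the point: two facilities with identical PUE can differ sharply in productivity. All figures are hypothetical.

```python
# Illustrative comparison of two 10 MW facilities with the same PUE.
# The PCE and ROIP definitions here are simplified assumptions.

def pue(total_mw, it_mw):
    return total_mw / it_mw

def pce(compute_pflops, total_mw):
    """Assumed definition: useful compute delivered per MW of facility power."""
    return compute_pflops / total_mw

def roip(revenue_musd, total_mw):
    """Assumed definition: revenue attributable to compute per MW drawn."""
    return revenue_musd / total_mw

facilities = {
    # name: (total MW, IT MW, delivered PFLOPS, annual revenue $M) -- hypothetical
    "Site A": (10.0, 8.0, 400.0, 60.0),
    "Site B": (10.0, 8.0, 800.0, 120.0),   # same power draw, twice the compute
}

for name, (total, it, pflops, rev) in facilities.items():
    print(f"{name}: PUE={pue(total, it):.2f}  "
          f"PCE={pce(pflops, total):.0f} PFLOPS/MW  "
          f"ROIP=${roip(rev, total):.0f}M/MW")
```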

What is happening at these coastal sites is the early sign of a new class of data center. High density. Advanced cooling. Strategic placement at global entry points for digital traffic. Smaller footprints but far higher productivity per square foot.

The industry will increasingly judge facilities not by how much power they receive, but by how effectively they turn that power into intelligence. That shift is already underway, and the emergence of AI-ready landing stations is the clearest signal yet that compute productivity will guide the next generation of infrastructure.

# # #

About the Author

Paul Quigley is the former President and current Chief Strategic Partnership Officer of Airsys Cooling Technologies, and a global advocate for high density, energy efficient data center design. With more than three decades in HVAC and mission critical cooling, he focuses on practical solutions that connect energy stewardship with real world compute performance. Paul writes and speaks internationally about PCE, ROIP, and the future of data center health in the age of AI.

The post AI Is Moving to the Water’s Edge, and It Changes Everything appeared first on Data Center POST.

The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling

30 December 2025 at 15:00

As data centers evolve to support AI, edge computing, and high-density cloud architectures, the challenge is no longer just maintaining an optimal power usage effectiveness (PUE); it is achieving thermal reliability under unprecedented compute loads. Direct-to-chip liquid cooling continues to push the envelope on heat transfer performance, but one underestimated element in overall system reliability is the material composition of the fluid conveyance network itself. The hoses and fittings that transport coolant through these systems operate in extreme thermal and chemical environments, and their design directly influences uptime, maintenance intervals, and total cost of ownership.

Why Material Selection Matters

At its core, a liquid cooling system is only as reliable as its weakest component. If hoses, fittings, or seals fail due to poor material compatibility, the result can be leaks, contamination, or shortened system life, leading to downtime and costly remediation. Rubber and rubber-like materials are critical in hose assemblies, as they must balance flexibility for installation and serviceability with long-term resistance to temperature, pressure, permeation, and the coolant and its additives.

The challenge lies in the fact that not all rubber and rubber-like materials are created equal. Each formulation is a complex recipe of polymers, plasticizers, fillers, and curing agents designed to deliver specific performance characteristics. The wrong selection can lead to issues such as fluid permeation, premature aging, or contamination of the coolant. In mission-critical environments like data centers, where even minor disruptions are unacceptable, this risk is magnified.

Temperature and Chemical Compatibility

Data center cooling systems typically operate at temperatures between 45°C (113°F) and 65°C (149°F), sometimes reaching 100°C (212°F), and even those ranges can stress certain materials. Nitrile rubber, for example, performs well in oil-based environments but ages quickly in water-glycol systems, especially at higher temperatures. This can cause hardening, cracking, or coolant contamination.

By contrast, ethylene propylene diene monomer (EPDM) rubber has excellent resistance to water, glycols, and the additives commonly used in data center coolants, such as corrosion inhibitors and biocides. EPDM maintains its properties across the required operating range, making it a proven choice for direct-to-chip applications.

However, not all EPDM is the same. Developing the right EPDM for the application demands a deep understanding of polymer chemistry, filler interactions, and process control to precisely balance flexibility, heat resistance, and long-term stability.

Additionally, two curing processes, sulfur-cured and peroxide-cured, produce different performance outcomes. Sulfur-cured EPDM, while widely used, introduces zinc ions during the curing process. When exposed to deionized water, these ions can leach into the coolant, causing contamination and potentially degrading system performance. Peroxide-cured EPDM avoids this issue, offering higher temperature resistance, lower permeation rates, and greater chemical stability, making it the superior choice for modern liquid cooling.

Even among peroxide-cured EPDM compounds, long term performance is not uniform. While the cure system defines the crosslink chemistry, other formulation choices, particularly filler selection and dispersion, can influence how the material performs over time.

The use of fillers and additives is common in rubber compounding. These ingredients are often selected to control cost, improve processability, or achieve certain performance characteristics such as flame resistance, strength, or flexibility.

The challenge is that some filler systems perform well during initial qualification but are not optimized for long-term exposures faced in the operating environment. Certain fillers or processing aids can slowly migrate over time, introducing extractables into the coolant or subtly altering elastomer properties. For data center applications, EPDM compounds must therefore be engineered with a focus on long term stability, reinforcing why EPDM should not be treated as a commodity material in critical cooling systems.

Risks of Non-Compatible Materials

Material incompatibility can have several cascading effects:

  • Contamination – Non-compatible materials can leach extractables into the coolant, leading to discoloration, chemical imbalance, and reduced thermal efficiency.
  • Permeation – Some rubbers allow fluid to slowly migrate through the hose walls, causing coolant loss or altering the fluid mixture over time.
  • Premature Failure – Elevated temperatures can accelerate aging, leading to cracking, swelling, or loss of mechanical strength.
  • Leakage – Rubber under compression may deform over time, jeopardizing seal integrity if not properly formulated for resistance to compression set and tear.

In a recent two-week aging test at 80°C using a water-glycol coolant, hoses made of nitrile and sulfur-cured EPDM showed visible discoloration of the coolant, indicating leaching and breakdown of the material. Peroxide-cured EPDM, on the other hand, maintained stability, demonstrating its compatibility and reliability in long-term data center applications.

The Gates Approach

Drawing on lessons from mission critical industries that have managed thermal challenges for decades, Gates engineers apply material science rigor to the design of liquid cooling hoses for data center applications.

Rather than relying solely on initial material ratings or short-term qualification criteria, Gates begins by tailoring compound design to the operating environment. This includes deliberate control of polymer selection, filler systems, and cure chemistry to manage long term aging behavior, extractables, permeation, and retention of mechanical properties over time in high purity coolant systems.

Compounds are validated through extended aging and immersion testing that reflects real operating conditions, including exposure to heat, deionized water, and water-glycol coolants. This allows potential material changes to be identified and addressed during development, before installation in the field.

This material science driven process is applied across Gates liquid cooling platforms, including the Data Master, Data Master MegaFlex, and newly released Data Master Eco product lines. By engineering for long term stability rather than only initial compliance, Gates designs hose solutions intended to support reliable operation, predictable maintenance intervals, and extended service life in direct-to-chip liquid cooled data center environments.

Looking Ahead

As data centers continue to scale, thermal management solutions must adapt in parallel. Advanced architectures, higher rack densities, and growing environmental regulations all point to a future where liquid cooling is standard. In this environment, material selection is no longer a secondary consideration; it is foundational to system reliability.

Operators who prioritize material compatibility in fluid conveyance lines will benefit from longer service intervals, improved coolant stability, and reduced risk of downtime. In other words, the proper rubber formulation doesn’t just move fluid, it moves the industry forward.

At Gates, sustainable, high-performance cooling begins with the details. By focusing on the science of materials, we help ensure that data center operators can confidently deploy liquid cooling systems designed for the challenges of today and the innovations of tomorrow.

# # #

About the Author

Chad Chapman is a Mechanical Engineer with over 20 years of experience in the fluid power industry. He currently serves as a Product Application Engineering Manager at Gates, where he leads a team that provides technical guidance, recommendations, and innovative solutions to customers utilizing Gates products and services.

Driven by a passion for problem-solving, Chad thrives on collaborating with customers to understand their unique challenges and deliver solutions that optimize performance. He is energized by learning about new applications and technologies, especially where insights can be shared across industries. At Gates, he has been exploring the emerging field of direct-to-chip liquid cooling, an exciting extension of his deep expertise in thermal management. The rapid advancements in IT technology and AI have made his journey an inspiring and rewarding learning experience.

The post The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling appeared first on Data Center POST.

The Rising Risk Profile of CDUs in High-Density AI Data Centers

10 December 2025 at 17:00

AI has pushed data center thermal loads to levels the industry has never encountered. Racks that once operated comfortably at 8-15 kW are now climbing past 50-100 kW, driving an accelerated shift toward liquid cooling. This transition is happening so quickly that many organizations are deploying new technologies faster than they can fully understand the operational risks.

In my recent five-part LinkedIn series:

  • 2025 U.S. Data Center Incident Trends & Lessons Learned (9-15-2025)
  • Building Safer Data Centers: How Technology is Changing Construction Safety (10-1-2025)
  • The Future of Zero-Incident Data Centers (10-15-2025)
  • Measuring What Matters: The New Safety Metrics in Data Centers (11-1-2025)
  • Beyond Safety: Building Resilient Data Centers Through Integrated Risk Management (11-15-2025)

— a central theme emerged: as systems become more interconnected, risks become more systemic.

That same dynamic influenced the Direct-to-Chip Cooling: A Technical Primer article that Steve Barberi and I published in Data Center POST (10-29-2025). Today, we are observing this systemic-risk framework emerging specifically in the growing role of Cooling Distribution Units (CDUs).

CDUs have evolved from peripheral equipment to a true point of convergence for engineering design, controls logic, chemistry, operational discipline, and human performance. As AI rack densities accelerate, understanding these risks is becoming essential.

CDUs: From Peripheral Equipment to Critical Infrastructure

Historically, CDUs were treated as supplemental mechanical devices. Today, they sit at the center of the liquid-cooling ecosystem, governing flow, pressure, temperature stability, fluid quality, isolation, and redundancy. In practice, the CDU now operates as the boundary between stable thermal control and cascading instability.

Yet, unlike well-established electrical systems such as UPSs, switchgear, and feeders, CDUs lack decades of operational history. Operators, technicians, commissioning agents, and even design teams have limited real-world reference points. That blind spot is where a new class of risk is emerging, and three patterns are showing up most frequently.

A New Risk Landscape for CDUs

  • Controls-Layer Fragility
    • Controls-related instability remains one of the most underestimated issues in liquid cooling. Many CDUs still rely on single-path PLC architectures, limited sensor redundancy, and firmware not designed for the thermal volatility of AI workloads. A single inaccurate pressure, flow, or temperature reading can trigger incorrect system responses that affect multiple racks before anyone realizes something is wrong (a minimal cross-validation sketch follows this list).
  • Pressure and Flow Instability
    • AI workloads surge and cycle, producing heat patterns that stress pumps, valves, gaskets, seals, and manifolds in ways traditional IT never did. These fluctuations are accelerating wear modes that many operators are just beginning to recognize. Illustrative Open Compute Project (OCP) design examples (e.g., 7–10 psi operating ranges at relevant flow rates) are helpful reference points, but they are not universal design criteria.
  • Human-Performance Gaps
    • CDU-related high-potential near misses (HiPo NMs) frequently arise during commissioning and maintenance, when technicians are still learning new workflows. For teams accustomed to legacy air-cooled systems, tasks such as valve sequencing, alarm interpretation, isolation procedures, fluid handling, and leak response are unfamiliar. Unfortunately, as noted in my Building Safer Data Centers post, when technology advances faster than training, people become the first point of vulnerability.
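
To make the controls-layer point above concrete, here is a minimal sketch, in Python, of cross-validated telemetry handling: a voted reading from redundant sensors plus a plausibility check, so one drifting probe does not trigger a loop-wide response. The limits, sensor counts, and function name are illustrative assumptions, not any vendor's firmware logic.

```python
from statistics import median

# Illustrative plausibility limits for a secondary-loop supply temperature;
# real limits depend on the CDU design envelope and the coolant in use.
PLAUSIBLE_RANGE_C = (15.0, 45.0)
MAX_SPREAD_C = 2.0  # sensor disagreement beyond this suggests a faulty probe

def vote_supply_temp(readings_c):
    """Cross-validate redundant supply-temperature readings.

    The median rejects a single drifting sensor, so one bad probe does not
    drive a loop-wide control action. A large spread or an implausible value
    raises a maintenance flag rather than an automatic system response.
    """
    voted = median(readings_c)
    spread = max(readings_c) - min(readings_c)
    plausible = PLAUSIBLE_RANGE_C[0] <= voted <= PLAUSIBLE_RANGE_C[1]
    sensor_fault = spread > MAX_SPREAD_C or not plausible
    return voted, sensor_fault

# One drifting probe out of three: the voted value stays sane and the
# controller flags the sensor for service instead of tripping the loop.
voted, fault = vote_supply_temp([30.1, 30.3, 41.8])
print(f"voted={voted} C, sensor_fault={fault}")  # voted=30.3 C, sensor_fault=True
```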

Photo: Borealis CDU (photo by AGT)

Additional Risks Emerging in 2025 Liquid-Cooled Environments

Beyond the three most frequent patterns noted above, several quieter but equally impactful vulnerabilities are also surfacing across 2025 deployments:

  • System Architecture Gaps
    • Some first-generation CDUs and loops lack robust isolation, bypass capability, or multi-path routing. A single point of failure, such as a valve, pump, or PLC, can drive a full-loop shutdown, mirroring the cascading-risk behaviors highlighted in my earlier work on resilience.
  • Maintenance & Operational Variability
    • SOPs for liquid cooling vary widely across sites and vendors. Fluid handling, startup/shutdown sequences, and leak-response steps remain inconsistent, creating conditions for preventable HiPo NMs.
  • Chemistry & Fluid Integrity Risks
    • As highlighted in the DTC article Steve Barberi and I co-authored, corrosion, additive depletion, cross-contamination, and stagnant zones can quietly degrade system health. ICP-MS analysis and other advanced techniques are recommended in OCP-aligned coolant programs for PG-25-class fluids, though not universally required.
  • Leak Detection & Nuisance Alarms
    • False positives and false negatives, especially across BMS/DCIM integrations, remain common. Predictive analytics are becoming essential despite not yet being formalized in standards (a generic alarm-filtering sketch follows this list).
  • Facility-Side Dynamics
    • Upstream conditions such as temperature swings, ΔP fluctuations, water hammer, cooling tower chemistry, and biofouling often drive CDU instability. CDUs are frequently blamed for behavior originating in facility water systems.
  • Interoperability & Telemetry Semantics
    • Inconsistent Modbus, BACnet, and Redfish mappings, naming conventions, and telemetry schemas create confusion and delay troubleshooting.
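
On the leak-detection item above, one common way to cut nuisance alarms without masking real events is to require persistence or corroboration before escalating. The sketch below shows that pattern in generic form; the poll count, corroboration rule, and class name are illustrative assumptions, not the behavior of any particular BMS or DCIM product.

```python
from collections import deque

class LeakAlarmFilter:
    """Escalate a leak alarm only when it persists or is corroborated.

    A single wet-contact blip is logged; an alarm that stays active for
    several consecutive polls, or is also seen by a second sensor in the
    same zone, is escalated. Poll count and corroboration rules here are
    illustrative assumptions, not values from any standard.
    """
    def __init__(self, persist_polls=3):
        self.persist_polls = persist_polls
        self.history = {}  # sensor_id -> recent boolean readings

    def update(self, sensor_id, wet, zone_peers_wet=0):
        readings = self.history.setdefault(sensor_id, deque(maxlen=self.persist_polls))
        readings.append(wet)
        persistent = len(readings) == self.persist_polls and all(readings)
        corroborated = wet and zone_peers_wet > 0
        if persistent or corroborated:
            return "ESCALATE"  # page the on-call team, start the isolation SOP
        return "LOG" if wet else "CLEAR"

f = LeakAlarmFilter()
print(f.update("cdu01-drip-tray", wet=True))                    # LOG (single blip)
print(f.update("cdu01-drip-tray", wet=True, zone_peers_wet=1))  # ESCALATE
```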

Best Practices: Designing CDUs for Resilience, Not Just Cooling Capacity

If CDUs are going to serve as the cornerstone of liquid cooling in AI environments, they must be engineered around resilience, not simply performance. Several emerging best practices are gaining traction:

  1. Controls Redundancy
    • Dual PLCs, dual sensors, and cross-validated telemetry signals reduce single-point failure exposure. These features do not have prescriptive standards today but are rapidly emerging as best practices for high-density AI environments.
  2. Real-Time Telemetry & Predictive Insight
    • Detecting drift, seal degradation, valve lag, and chemistry shift early is becoming essential. Predictive analytics and deeper telemetry integration are increasingly expected.
  3. Meaningful Isolation
    • Operators should be able to isolate racks, lines, or nodes without shutting down entire loops. In high-density AI environments, isolation becomes uptime.
  4. Failure-Mode Commissioning
    • CDUs should be tested not only for performance but also for failure behavior such as PLC loss, sensor failures, false alarms, and pressure transients. These simulations reveal early-life risk patterns that standard commissioning often misses (a simple fault-injection test matrix follows this list).
  5. Reliability Expectations
    • CDU design should align with OCP’s system-level reliability expectations, such as MTBF targets on the order of >300,000 hours for OAI Level 10 assemblies, while recognizing that CDU-specific requirements vary by vendor and application.
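
To illustrate the failure-mode commissioning idea in point 4, the sketch below treats fault-injection cases as data, so each scenario carries an explicit expected response that can be checked during integrated system testing. The scenarios and pass criteria are illustrative assumptions; real acceptance criteria should come from the CDU vendor and the site's design basis.

```python
from dataclasses import dataclass

@dataclass
class FaultCase:
    name: str
    injection: str          # what the commissioning agent does
    expected_response: str  # what the CDU must do to pass

# Illustrative failure-mode commissioning matrix; extend per the design basis.
FAULT_CASES = [
    FaultCase("Primary PLC loss",
              "Power down the lead controller mid-operation",
              "Standby PLC takes over; flow and supply temperature stay in band"),
    FaultCase("Supply-temperature sensor failure",
              "Disconnect one of the redundant supply-temperature probes",
              "Controller votes the remaining sensors; maintenance alarm only"),
    FaultCase("False leak signal",
              "Short a leak-detection contact with the loop dry",
              "Alarm logged and corroboration requested; no automatic loop trip"),
    FaultCase("Pressure transient",
              "Rapidly close and reopen a rack isolation valve",
              "Pumps ride through; no cavitation alarm or loop shutdown"),
]

for i, case in enumerate(FAULT_CASES, 1):
    print(f"{i}. {case.name}\n   Inject: {case.injection}\n   Expect: {case.expected_response}")
```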

Standards Alignment

The risks and mitigation strategies outlined above align with emerging guidance from ASHRAE TC 9.9 and the OCP’s liquid-cooling workstreams, including:

  • OAI System Liquid Cooling Guidelines
  • Liquid-to-Liquid CDU Test Methodology
  • ASTM D8040 & D1384 for coolant chemistry durability
  • IEC/UL 62368-1 for hazard-based safety
  • ASHRAE 90.4, PUE/WUE/CUE metrics, and
  • ANSI/BICSI 002, ISO/IEC 22237, and Uptime’s Tier Standards emphasizing concurrently maintainable infrastructure.

These collectively reinforce a shift: CDUs must be treated as availability-critical systems, not auxiliary mechanical devices.

Looking Ahead

The rise of CDUs represents a moment the data center industry has seen before. As soon as a new technology becomes mission-critical, its risk profile expands until safety, engineering, and operations converge around it. Twenty years ago, that moment belonged to UPS systems. Ten years ago, it was batteries. Now, in AI-driven environments, it is the CDU.

Organizations that embrace resilient CDU design, deep visibility, and operator readiness will be the ones that scale AI safely and sustainably.

# # #

About the Author

Walter Leclerc is an independent consultant and recognized industry thought leader in Environmental Health & Safety, Risk Management, and Sustainability, with deep experience across data center construction and operations, technology, and industrial sectors. He has written extensively on emerging risk, liquid cooling, safety leadership, predictive analytics, incident trends, and the integration of culture, technology, and resilience in next-generation mission-critical environments. Walter led the initiatives that earned Digital Realty the Environment+Energy Leader’s Top Project of the Year Award for its Global Water Strategy and recognition on EHS Today’s America’s Safest Companies List. A frequent global speaker on the future of safety, sustainability, and resilience in data centers, Walter holds a B.S. in Chemistry from UC Berkeley and an M.S. in Environmental Management from the University of San Francisco.

The post The Rising Risk Profile of CDUs in High-Density AI Data Centers appeared first on Data Center POST.

Europe’s Digital Infrastructure Enters the Green Era: A Conversation with Nabeel Mahmood at Capacity Europe

13 November 2025 at 16:00

Interview: Jayne Mansfield, ZincFive, with Nabeel Mahmood, Mahmood

At this year’s Capacity Europe conference in London – the epicenter for conversations shaping the digital infrastructure landscape – one theme cut through every panel and hallway exchange: Europe’s data future must be both powerful and sustainable.

To unpack what that really means for investors, operators, and policymakers, we sat down with technology executive and Top 10 Global Influencer Nabeel Mahmood, who spoke at the event about the region’s evolving data-center ecosystem.

“Demand is exploding across the UK and Europe,” Mahmood told us. “AI, edge compute, high-density GPU workloads, and hyperscale cloud deployments are all converging – and they’re forcing a rethink of what infrastructure looks like.” 

The Shift from Scale to Strategy

Mahmood’s central message was that the market’s priorities are shifting from ‘how much’ capacity to ‘how and where’ it’s built. Across the region, sustainability and energy resilience are no longer nice-to-have checkboxes; they’re becoming the foundation of investment decisions.

“Infrastructure used to be a race for megawatts,” he explained. “Now it’s a race for smarter, greener, and more sustainable megawatts.”

That shift is already visible in the UK, where annual data-center investment is projected to soar from roughly £1.75 billion in 2024 to £10 billion by 2029. While London remains dominant, new projects are spreading beyond the M25 as developers chase available power and faster permitting timelines.

Mahmood pointed out that “the UK’s declaration of data centers as critical national infrastructure is a step in the right direction – it signals recognition that digital infrastructure underpins everything from jobs to national competitiveness.”

Europe’s Tightrope: Power, Land, and Policy

Across continental Europe, the picture is similar but more constrained. The so-called FLAP-D markets – Frankfurt, London, Amsterdam, Paris, and Dublin – are nearing record-low vacancy rates, with take-up expected to hit 855 MW in 2025, up 22% year-on-year.

“Grid capacity and land availability have become the new bottlenecks,” Mahmood said. “Those constraints are pushing investors to look at secondary markets – Milan, Nordic hubs, even parts of Southern Europe – where renewable energy integration and policy agility are improving.”

That migration is reshaping the map of European data infrastructure, with sustainability as the common denominator. Operators are incorporating liquid cooling, renewable sourcing, and battery-microgrid systems into new designs to support increasingly power-hungry AI clusters.

Why Power Chemistry Now Matters

In that context, Mahmood emphasized the critical role of next-generation battery technology – particularly nickel-zinc (Ni-Zn) – as a cornerstone of the sustainable data-center model.

“Battery systems are no longer just backup,” he said. “They’re becoming part of the strategic infrastructure footprint.”

Ni-Zn chemistry, he explained, offers a combination of high power density, safety, and circularity that aligns with Europe’s sustainability mandates. Unlike lithium-ion or lead-acid systems, Ni-Zn avoids thermal-runaway risks, reduces cooling needs, and offers recyclability benefits that fit the EU’s evolving battery-regulation framework.

“For operators, it’s not just an ESG checkbox,” Mahmood added. “It’s about freeing up space, cutting long-term costs, and demonstrating a credible pathway to low-carbon operations.”

A New Definition of Digital Infrastructure

Perhaps Mahmood’s most resonant message at Capacity Europe was philosophical: the way the industry defines “infrastructure” itself must evolve.

“Data centers aren’t just cost centers or tech assets,” he said. “They’re critical national infrastructure – pillars of the modern economy that touch climate policy, energy strategy, and digital sovereignty.”

That redefinition brings a new level of accountability. It means that as Europe scales for AI, cloud, and edge computing, the choices around power, cooling, materials, and footprint will determine not just commercial success but environmental integrity.

The Takeaway

Mahmood closed our conversation with a clear challenge to the industry:

“The digital-infrastructure boom sweeping through Europe must be anchored in responsible, resilient, and sustainable design. Adopting technologies like Ni-Zn isn’t just a technical upgrade – it’s a strategic differentiator. Those who embrace that mindset now will lead the next wave of growth.”

At Capacity Europe, optimism for digital expansion was everywhere – but so was a recognition that the future will belong to those who innovate responsibly. Mahmood’s vision distilled that reality perfectly: the next frontier of infrastructure isn’t just bigger. It’s smarter, greener, and built for permanence.

The post Europe’s Digital Infrastructure Enters the Green Era: A Conversation with Nabeel Mahmood at Capacity Europe appeared first on Data Center POST.

Self-contained liquid cooling: the low-friction option

Each new generation of server silicon is pushing traditional data center air cooling closer to its operational limits. In 2025, the thermal design power (TDP) of top-bin CPUs reached 500 W, and server chip product roadmaps indicate further escalation in pursuit of higher performance. To handle these high-powered chips, more IT organizations are considering direct liquid cooling (DLC) for their servers. However, large-scale deployment of DLC with supporting facility water infrastructure can be costly and complex to operate, and is still hindered by a lack of standards (see DLC shows promise, but challenges persist).

In these circumstances, an alternative approach has emerged: air-cooled servers with internal DLC systems. Referred to by vendors as either air-assisted liquid cooling (AALC) or liquid-assisted air cooling (LAAC), these systems do not require coolant distribution units or facility water infrastructure for heat rejection. This means that they can be deployed in smaller, piecemeal installations.

Uptime Intelligence considers AALC a broad subset of DLC (defined by the use of coolant to remove heat from components within the IT chassis) that also includes multi-server options. This report discusses designs whose coolant loop (typically water in commercially available products) fits entirely within a single server chassis.

Such systems enable IT system engineers and operators to cool top-bin processor silicon in dense form factors — such as 1U rack-mount servers or blades — without relying on extreme-performance heat sinks or elaborate airflow designs. Given enough total air cooling capacity, self-contained AALC requires no disruptive changes to the data hall or new maintenance tasks for facility personnel.

Deploying these systems in existing space will not expand cooling capacity the way full DLC installations with supporting infrastructure can. However, selecting individual 1U or 2U servers with AALC can either reduce IT fan power consumption or enable operators to support roughly 20% greater TDP than they otherwise could — with minimal operational overhead. According to the server makers offering this type of cooling solution, such as Dell and HPE, the premium for self-contained AALC can pay for itself in as little as two years when used to improve power efficiency.
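
That payback claim is straightforward to sanity-check against local assumptions. The sketch below shows the arithmetic with placeholder inputs (price premium, fan share, electricity price, PUE); none of the figures are vendor pricing, and the result is highly sensitive to the assumptions chosen.

```python
# Rough payback check for a self-contained AALC option on a 1U server.
# Every input is an illustrative placeholder; substitute quoted prices
# and measured savings for a real evaluation.
aalc_premium_usd = 250.0       # extra cost over a standard heat-sink configuration
server_power_kw = 1.0          # average server draw under load
fan_share = 0.15               # fraction of server power consumed by fans
fan_power_saving = 0.40        # roughly 40% lower fan power with AALC
electricity_usd_per_kwh = 0.15
pue = 1.4                      # facility overhead scales the saving

saved_kw = server_power_kw * fan_share * fan_power_saving
annual_saving_usd = saved_kw * 8760 * electricity_usd_per_kwh * pue
payback_years = aalc_premium_usd / annual_saving_usd
print(f"saved {saved_kw * 1000:.0f} W, payback in {payback_years:.1f} years")
# With these inputs: saved 60 W, payback in about 2.3 years.
```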

Does simplicity matter?

Many of today’s commercial cold plate and immersion cooling systems originated and matured in high-performance computing facilities for research and academic institutions. However, another group has been experimenting with liquid cooling for more than a decade: video game enthusiasts. Some have equipped their PCs with self-contained AALC systems to improve CPU and GPU performance, as well as reduce fan noise. More recently, to manage the rising heat output of modern server CPUs, IT vendors have started to offer similar systems.

The engineering is simple: fluid tubing connects one or more cold plates to a radiator and pump. The pumps circulate warmed coolant from the cold plates through the radiator, while server fans draw cooling air through the chassis and across the radiator (see Figure 1). Because water is a more efficient heat transfer medium than air, it can remove heat from the processor at a greater rate — even at a lower case temperature.
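
That last point follows from the sensible-heat relation Q = ρ · V · cp · ΔT (heat rate equals density times volumetric flow times specific heat times temperature rise): for the same heat load and coolant temperature rise, the required volumetric flow scales inversely with the fluid's volumetric heat capacity. A minimal sketch, assuming a 500 W load and a 10 K temperature rise purely for illustration:

```python
# Compare the volumetric flow needed to remove the same heat load with air
# versus water, at the same coolant temperature rise (sensible heat only).
# Q = rho * V_dot * cp * dT  ->  V_dot = Q / (rho * cp * dT)

heat_load_w = 500.0   # illustrative processor package power
delta_t_k = 10.0      # allowed coolant temperature rise

media = {
    # name: (density kg/m3, specific heat J/(kg*K)), roughly at room temperature
    "air":   (1.2, 1005.0),
    "water": (997.0, 4180.0),
}

flows = {}
for name, (rho, cp) in media.items():
    flows[name] = heat_load_w / (rho * cp * delta_t_k)   # m3/s
    print(f"{name:>5}: {flows[name] * 60_000:8.2f} L/min")

print(f"air needs ~{flows['air'] / flows['water']:.0f}x the volumetric flow of water")
```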

Figure 1 Closed-loop liquid cooling within the server


The coolant used in commercially shipping products is usually PG25, a mixture of 75% water and 25% propylene glycol. This formulation has been widely adopted in both DLC and facility water systems for decades, so its chemistry and material compatibility are well understood.

As with larger DLC systems, alternative cooling approaches can use a phase change to remove IT heat. Some designs use commercial two-phase dielectric coolants, and an experimental alternative uses a sealed system containing a small volume of pure water under partial vacuum. This lowers the boiling point of water, effectively turning it into a two-phase coolant.
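
The effect of that partial vacuum can be estimated with the Antoine correlation for water, which relates saturation pressure to temperature. The sketch below inverts it to give the boiling point at a chosen absolute pressure; the constants are the commonly tabulated values for water between roughly 1 °C and 100 °C, and the example pressures are arbitrary illustrations, not figures from the experimental system mentioned above.

```python
import math

def water_boiling_point_c(pressure_mmhg):
    """Estimate the boiling point of water (deg C) at a given absolute pressure.

    Uses the Antoine correlation for water, valid roughly 1-100 deg C:
        log10(P_mmHg) = 8.07131 - 1730.63 / (233.426 + T_C)
    rearranged to solve for T_C.
    """
    return 1730.63 / (8.07131 - math.log10(pressure_mmhg)) - 233.426

# At atmospheric pressure the estimate recovers ~100 C; at a partial vacuum
# of ~0.1 atm water boils near 46 C, low enough to act as a two-phase
# coolant against modern processor case temperatures.
for p_mmhg in (760, 200, 76):
    print(f"{p_mmhg:4d} mmHg -> boils at ~{water_boiling_point_c(p_mmhg):.0f} C")
```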

Self-contained AALC designs with multiple cold plates usually have redundant pumps — one on each cold plate in the same loop — and can continue operating if one pump fails. Because AALC systems for a single server chassis contain a smaller volume of coolant than larger liquid cooling systems, any leak is less likely to spill into systems below. Cold plates are typically equipped with leak detection sensors.

Closed-loop liquid cooling is best applied in 1U servers, where space constraints prevent the use of sufficiently large heat sinks. In internal testing by HPE, the pumps and fans of an AALC system in a 1U server consumed around 40% less power than the server fans in an air-cooled equivalent. This may amount to as much as a 5% to 8% reduction in total server power consumption under full load. The benefits of switching to AALC are smaller for 2U servers, which can mount larger heat sinks and use bigger, more efficient fan motors.
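
Relating those two figures is a one-line calculation: a 40% cut in fan power reduces total server power by 40% of whatever share the fans represent, which implies fans account for very roughly 12% to 20% of a fully loaded air-cooled 1U server's draw. The fan-share values below are assumptions used only to show that arithmetic.

```python
# Relate a ~40% reduction in fan power to the reported 5-8% reduction in
# total server power. The fan-share values are assumptions for illustration.
fan_power_reduction = 0.40

for fan_share in (0.125, 0.15, 0.20):  # fan power as a fraction of total server power
    total_saving = fan_power_reduction * fan_share
    print(f"fans at {fan_share:.1%} of server power -> {total_saving:.1%} total saving")
# Prints 5.0%, 6.0% and 8.0%, bracketing the 5% to 8% range cited above.
```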

However, radiator size, airflow limitations and temperature-sensitive components mean that self-contained AALC is not on par with larger DLC systems, making it better suited as a transitional measure. Additionally, these systems are not currently available for GPU servers.

Advantages of AALC within the server:

  • Higher cooling capacity (up to 20% more) than air cooling in the same form factor and for the same energy input, with more even heat distribution and faster thermal response than heat sinks.
  • Requires no changes to white space or gray space.
  • Components are widely available.
  • Can operate without maintenance for the lifetime of the server, with low risk of failure.
  • Does not require space outside the rack, unlike “sidecars” or rear-mounted radiators.

Drawbacks of AALC within the server:

  • Closed-loop server cooling systems use several complex components that cost more than a heat sink.
  • Offers less IT cooling capacity than other liquid cooling approaches: systems available outside of high-performance computing and AI-specific deployments will typically support up to 1.2 kW of load per 1U server.
  • Self-contained systems generally consume more server fan power, a parasitic component of IT energy consumption, than larger DLC systems.
  • No control of coolant loop temperatures; control of flow rate through pumps may be available in some designs.
  • Radiator and pumps limit space savings within the server chassis.

Outlook

For some organizations, AALC offers the opportunity to maximize the value of existing investments in air cooling infrastructure. For others, it may serve as a measured step on the path toward DLC adoption.

This form of cooling is likely to be especially valuable for operators of legacy facilities that have sufficient air cooling infrastructure to support some high-powered servers but would otherwise suffer from hot spots. Selecting AALC over air cooling may also reduce server fan power enough to allow operators to squeeze another server into a rack.

Much of AALC’s appeal is its potential for efficient use of fan power and its compatibility with existing facility cooling capabilities. Expanding beyond this to increase a facility’s cooling capacity is a different matter, requiring larger, more expensive DLC systems supported by additional heat transport and rejection equipment. In comparison, server-sized AALC systems represent a much smaller cost increase over heat sinks.

Future technical development may address some of AALC’s limitations, although progress and funding will largely depend on the commercial interest in servers with self-contained AALC. In conversations with Uptime Intelligence, IT vendors have diverging views of the role of self-contained AALC in their server portfolios, suggesting that the market’s direction remains uncertain. Nonetheless, there is some interesting investment in the field. For example, Belgian startup Calyos has developed passive closed-loop cooling systems that operate without pumps, instead moving coolant via capillary action. The company is working on a rack-scale prototype that could eventually see deployment in data centers.


The Uptime Intelligence View

AALC within the server may only deliver a fraction of the improvements associated with DLC, but it does so at a fraction of the cost and with minimal disruption to the facility. For many, the benefits may seem negligible. However, for a small group of air-cooled facilities, AALC can deliver either cooling capacity benefits or energy savings.

The post Self-contained liquid cooling: the low-friction option appeared first on Uptime Institute Blog.
