
31 January 2026

AI and cooling: toward more automation

AI is increasingly steering the data center industry toward new operational practices, where automation, analytics and adaptive control are paving the way for “dark” — or lights-out, unstaffed — facilities. Cooling systems, in particular, are leading this shift. Yet despite AI’s positive track record in facility operations, one persistent challenge remains: trust.

In some ways, AI faces a similar challenge to that of commercial aviation several decades ago. Even after airlines had significantly improved reliability and safety performance, making air travel not only faster but also safer than other forms of transportation, it still took time for public perceptions to shift.

That same tension between capability and confidence lies at the heart of the next evolution in data center cooling controls. As AI models — and there are several distinct types in use — improve in performance and become better understood, more transparent and more explainable, the question is no longer whether AI can manage operations autonomously, but whether the industry is ready to trust it enough to turn off the lights.

AI’s place in cooling controls

Thermal management systems, such as CRAHs, CRACs and airflow management, represent the front line of AI deployment in cooling optimization. Their modular nature enables the incremental adoption of AI controls, providing immediate visibility and measurable efficiency gains in day-to-day operations.

AI can now be applied across four core cooling functions:

  • Dynamic setpoint management. Continuously recalibrates temperature, humidity and fan speeds to match load conditions.
  • Thermal load forecasting. Predicts shifts in demand and makes adjustments in advance to prevent overcooling or instability.
  • Airflow distribution and containment. Uses machine learning to balance hot and cold aisles and stage CRAH/CRAC operations efficiently.
  • Fault detection, predictive and prescriptive diagnostics. Identifies coil fouling, fan oscillation, or valve hunting before they degrade performance.
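
The first two functions are essentially forecast-and-feedback loops. The sketch below is a minimal illustration of that pattern, not any vendor's implementation; the setpoint bands, step size and naive trend forecast are all invented for the example.

```python
# Minimal illustrative sketch of dynamic setpoint management with a simple
# load forecast. All values, names and thresholds here are hypothetical.

SLA_MAX_C = 27.0   # upper bound of the allowed cold-aisle temperature band
SLA_MIN_C = 18.0   # lower bound (cooling below this wastes energy)
STEP_C = 0.5       # how far the setpoint may move per control interval

def forecast_next_load(recent_loads_kw):
    """Naive trend forecast: last reading plus the average recent change."""
    if len(recent_loads_kw) < 2:
        return recent_loads_kw[-1]
    deltas = [b - a for a, b in zip(recent_loads_kw, recent_loads_kw[1:])]
    return recent_loads_kw[-1] + sum(deltas) / len(deltas)

def next_setpoint(current_setpoint_c, cold_aisle_c, recent_loads_kw):
    """Cool harder when near the SLA ceiling or when load is forecast to
    rise; relax the setpoint to save energy when there is headroom."""
    expected = forecast_next_load(recent_loads_kw)
    rising_load = expected > recent_loads_kw[-1] * 1.05  # >5% expected rise

    if cold_aisle_c > SLA_MAX_C - 1.0 or rising_load:
        return max(current_setpoint_c - STEP_C, SLA_MIN_C)
    if cold_aisle_c < SLA_MIN_C + 2.0:
        return min(current_setpoint_c + STEP_C, SLA_MAX_C - 1.0)
    return current_setpoint_c  # hold steady

if __name__ == "__main__":
    print(next_setpoint(22.0, 24.5, [310, 325, 345]))  # rising load -> 21.5
```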

A growing ecosystem of vendors is advancing AI-driven cooling optimization across both air- and water-side applications. Companies such as Vigilent, Siemens, Schneider Electric, Phaidra and Etalytics offer machine learning platforms that integrate with existing building management systems (BMS) or data center infrastructure management (DCIM) systems to enhance thermal management and efficiency.

Siemens’ White Space Cooling Optimization (WSCO) platform applies AI to match CRAH operation with IT load and thermal conditions, while Schneider Electric, through its Motivair acquisition, has expanded into liquid cooling and AI-ready thermal systems for high-density environments. In parallel, hyperscale operators, such as Google and Microsoft, have built proprietary AI engines to fine-tune chiller and CRAH performance in real time. These solutions range from supervisory logic to adaptive, closed-loop control. However, all share a common aim: improve efficiency without compromising compliance with service level agreements (SLAs) or operator oversight.

The scope of AI adoption

While IT cooling optimization has become the most visible frontier, conversations with AI control vendors reveal that most mature deployments still begin at the facility water loop rather than in the computer room. Vendors often start with the mechanical plant and facility water system because these areas involve a limited set of variables, primarily temperature differentials, flow rates and pressure setpoints, and can be treated as closed, well-bounded systems.

This makes the water loop a safer proving ground for training and validating algorithms before extending them to computer room air cooling systems, where thermal dynamics are more complex and influenced by containment design, workload variability and external conditions.

Predictive versus prescriptive: the maturity divide

AI in cooling is evolving along a maturity spectrum — from predictive insight to prescriptive guidance and, increasingly, to autonomous control. Table 1 summarizes the functional and operational distinctions among these three stages of AI maturity in data center cooling.

Table 1 Predictive, prescriptive, and autonomous AI in data center cooling


Most deployments today stop at the predictive stage, where AI enhances situational awareness but leaves action to the operator. Achieving full prescriptive control will require not only a deeper technical sophistication but also a shift in mindset.

Technically, it is more difficult to engineer because the system must not only forecast outcomes but also choose and execute safe corrective actions within operational limits. Operationally, it is harder to trust because it challenges long-held norms about accountability and human oversight.

The divide, therefore, is not only technical but also cultural. The shift from informed supervision to algorithmic control is redefining the boundary between automation and authority.

AI’s value and its risks

No matter how advanced the technology becomes, cooling exists for one reason: maintaining environmental stability and meeting SLAs. AI-enhanced monitoring and control systems support operating staff by:

  • Predicting and preventing temperature excursions before they affect uptime.
  • Detecting system degradation early and enabling timely corrective action.
  • Optimizing energy performance under varying load profiles without violating SLA thresholds.

Yet efficiency gains mean little without confidence in system reliability. It is also important to clarify that AI in data center cooling is not a single technology. Control-oriented machine learning models, such as those used to optimize CRAHs, CRACs and chiller plants, operate within physical limits and rely on deterministic sensor data. These differ fundamentally from language-based AI models such as GPT, where “hallucinations” refer to fabricated or contextually inaccurate responses.

At the Uptime Network Fall Americas Conference 2025, several operators raised concerns about AI hallucinations — instances where optimization models generate inaccurate or confusing recommendations from event logs. In control systems, such errors often arise from model drift, sensor faults, or incomplete training data, not from the reasoning failures seen in language-based AI. When a model’s understanding of system behavior falls out of sync with reality, it can misinterpret anomalies as trends, eroding operator confidence faster than it delivers efficiency gains.
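
A common safeguard against this kind of drift is to keep scoring the model's own predictions against what the sensors subsequently report, and to escalate to a human (or trigger retraining) once the rolling error grows. The snippet below is a generic illustration of that idea rather than a description of any product; the window size and error tolerance are arbitrary.

```python
from collections import deque

# Illustrative drift monitor: compares a cooling model's temperature
# predictions against later sensor readings. Thresholds are arbitrary.

class DriftMonitor:
    def __init__(self, window=48, max_mean_abs_error_c=1.5):
        self.errors = deque(maxlen=window)    # rolling prediction errors
        self.max_mae = max_mean_abs_error_c   # tolerated mean error (deg C)

    def record(self, predicted_c, observed_c):
        self.errors.append(abs(predicted_c - observed_c))

    def drifted(self):
        """True once a full window of errors exceeds the tolerance on average."""
        if len(self.errors) < self.errors.maxlen:
            return False                      # not enough evidence yet
        return sum(self.errors) / len(self.errors) > self.max_mae

monitor = DriftMonitor(window=4, max_mean_abs_error_c=1.0)
for pred, obs in [(23.0, 23.2), (23.1, 24.4), (22.9, 24.6), (23.0, 24.9)]:
    monitor.record(pred, obs)
print(monitor.drifted())  # True: the model consistently underestimates heat
```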

The discomfort is not purely technical; it is also human. Many data center operators remain uneasy about letting AI take the controls entirely, even as they acknowledge its potential. In AI’s ascent toward autonomy, trust remains the runway still under construction.

Critically, modern AI control frameworks are being designed with built-in safety, transparency and human oversight. For example, Vigilent, a provider of AI-based optimization controls for data center cooling, reports that its optimizing control switches to “guard mode” whenever it is unable to maintain the data center environment within tolerances. Guard mode brings on additional cooling capacity (at the expense of power consumption) to restore SLA-compliant conditions. Typical triggers include rapid drift or temperature hot spots. In addition, a manual override option enables the operator to take control, supported by monitoring and event logs.

This layered logic provides operational resiliency by enabling systems to fail safely: guard mode ensures stability, manual override guarantees operator authority, and explainability, via decision-tree logic, keeps every AI action transparent. Even in dark-mode operation, alarms and reasoning remain accessible to operators.
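
Vigilent has not published its control logic, so the following is only a schematic sketch of the layered pattern described above: act on the optimizer's recommendation while the environment is within tolerance, fall back to a conservative guard mode when it is not, and let a manual override trump both. The function name, limits and fan-speed bounds are invented for illustration.

```python
# Schematic sketch of layered fail-safe cooling control. Not vendor code;
# all names, limits and the guard-mode behavior here are illustrative.

SLA_MAX_C = 27.0

def choose_fan_speed(ai_recommendation_pct, hottest_inlet_c,
                     manual_override_pct=None):
    if manual_override_pct is not None:
        # Operator authority always wins; the action is still logged.
        return manual_override_pct, "manual override"

    if hottest_inlet_c > SLA_MAX_C:
        # Guard mode: forgo efficiency and bring on full cooling capacity
        # until the environment is back within SLA tolerances.
        return 100.0, "guard mode"

    # Normal operation: trust the optimizer, but keep it within sane bounds.
    return min(max(ai_recommendation_pct, 20.0), 100.0), "ai optimized"

print(choose_fan_speed(55.0, 24.8))        # (55.0, 'ai optimized')
print(choose_fan_speed(55.0, 28.3))        # (100.0, 'guard mode')
print(choose_fan_speed(55.0, 28.3, 70.0))  # (70.0, 'manual override')
```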

These frameworks directly address one of the primary fears among data center operators: losing visibility into what the system is doing.

Outlook

Gradually, the concept of a dark data center, one operated remotely with minimal on-site staff, has shifted from being an interesting theory to a desirable strategy. In recent years, many infrastructure operators have increased their use of automation and remote-management tools to enhance resiliency and operational flexibility, while also compensating for low staffing levels. Cooling systems, particularly those governed by AI-assisted control, are now central to this operational transformation.

Operational autonomy does not mean abandoning human control; it means achieving reliable operation without the need for constant supervision. Ultimately, a dark data center is not about turning off the lights; it is about turning on trust.


The Uptime Intelligence View

AI in thermal management has evolved from an experimental concept into an essential tool, improving efficiency and reliability across data centers. The next step — coordinating facility water, air and IT cooling liquid systems — will define the evolution toward greater operational autonomy. However, the transition to “dark” operation will be as much cultural as it is technical. As explainability, fail-safe modes and manual overrides build operator confidence, AI will gradually shift from copilot to autopilot. The technology is advancing rapidly; the question is how quickly operators will adopt it.

The post AI and cooling: toward more automation appeared first on Uptime Institute Blog.


AI Data Center Market to Surpass USD 1.98 Trillion by 2034

21 January 2026 at 15:00

The global AI data center market was valued at USD 98.2 billion in 2024 and is estimated to grow at a CAGR of 35.5% to reach USD 1.98 trillion by 2034, according to a recent report by Global Market Insights Inc.
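
As a quick sanity check on the headline numbers, compounding the 2024 base at the stated growth rate for ten years lands in the same neighborhood as the 2034 projection; the small difference comes down to the report's own base period and rounding. A one-line check:

```python
# Back-of-the-envelope check: USD 98.2 billion in 2024 compounded at a
# 35.5% CAGR over ten years.
projected_2034_bn = 98.2 * (1 + 0.355) ** 10
print(f"~USD {projected_2034_bn / 1000:.2f} trillion")  # ~USD 2.05 trillion
```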

Growing adoption of generative AI and machine learning tools requires extraordinary processing power and storage capabilities, increasing reliance on data centers specifically optimized for AI workloads. These environments depend on advanced GPUs, scalable system architecture, and ultra-low-latency networking to support complex model training and inference across industries such as finance, healthcare, and retail. Big data analytics is also accelerating demand, as organizations handle massive streams of structured and unstructured information that must be processed rapidly.

AI-focused facilities enable high-performance computing for real-time workloads, strengthening their role as essential infrastructure for global digital transformation. The rapid expansion of cloud computing, along with the rising number of hyperscale facilities, continues to amplify the need for AI-ready infrastructures. Providers are investing in advanced AI data platforms that offer scalable services to enterprises and developers, further increasing market momentum.

The hardware segment of the AI data center market accounted for USD 61.1 billion in 2024. Growth is driven by expanding use of AI chips, GPU accelerators, advanced cooling technologies, high-density server systems, and optical networking solutions. Rising GPU energy requirements, the shift toward rack densities of 30 kW to 120 kW, and large-scale deployment strategies introduced by leading technology companies are shaping long-term capital allocation in the sector.

The cloud-based category held a 58% share in 2024 and is projected to grow at a CAGR of 35.2% from 2025 through 2034. This segment leads due to its unmatched scalability, flexible consumption options, and access to the latest AI-accelerated computing hardware without upfront investment. Hyperscale providers are making multi-billion-dollar commitments to strengthen global AI infrastructures, propelling adoption of AI-driven services and increasing demand for GPUs, TPUs, and specialized processors.

The U.S. AI data center market generated USD 33.2 billion in 2024. The country maintains a leading position supported by prominent hyperscale operators and substantial investments in GPU clusters, liquid cooling, and large-scale AI-aligned builds. Federal incentives, regional tax advantages, and infrastructure funding have further solidified the United States as the most capacity-rich market for AI computing.

Key participants in the AI data center market include Huawei, AWS, NVIDIA, HPE, Digital Realty, Google, Lenovo, Microsoft, Equinix, and Dell Technologies. Companies expanding their foothold in the market are focusing on infrastructure modernization, large-scale GPU deployments, and energy-efficient system design.

Many firms are investing in high-density racks, integrated liquid cooling, and next-generation networking to support advanced AI workloads. Strategic partnerships with chipmakers, cloud providers, and colocation operators help accelerate capacity expansion and ensure access to cutting-edge AI hardware. Providers are also scaling global data center footprints, enhancing automation capabilities, and optimizing power utilization through renewable-energy integration. Long-term contracts with enterprises, AI-as-a-service offerings, and the buildout of specialized AI clusters further reinforce competitive positioning and market dominance.

The post AI Data Center Market to Surpass USD 1.98 Trillion by 2034 appeared first on Data Center POST.

Issues Data Centers Face and How to Overcome Them: A Guide for Managers

20 January 2026 at 14:30

Data centers are the backbone of modern digital infrastructure. They power cloud services, financial systems, healthcare platforms, and nearly every technology-driven business. As demand for data storage and processing grows, so do the operational, financial, and risk-related challenges data centers face. For managers, understanding these issues and knowing how to address them proactively is critical to maintaining uptime, security, and long-term viability. Here is what you need to know to keep up with demand.

Rising Energy Costs and Efficiency Demands

One of the most persistent challenges for data centers is energy consumption. Powering servers, cooling systems, and redundancy infrastructure requires enormous amounts of electricity, and energy costs continue to rise globally. Beyond cost, there is increasing pressure from regulators and customers to reduce environmental impact.

Managers can address this by investing in energy-efficient hardware, optimizing airflow and cooling layouts, and adopting real-time monitoring tools to identify waste. Long-term strategies may include transitioning to renewable energy sources or partnering with utility providers for more favorable pricing structures.

Cooling and Thermal Management Challenges

Heat is an unavoidable byproduct of high-density computing. Inefficient cooling not only increases costs but also raises the risk of equipment failure and downtime. As server densities increase, traditional cooling methods often struggle to keep up.

Modern solutions include hot-aisle/cold-aisle containment, liquid cooling, and AI-driven thermal monitoring systems. For managers, the key is treating cooling as a dynamic system rather than fixed infrastructure: one that evolves alongside hardware demands.

Financial Risk and Insurance Considerations

Data centers face significant financial exposure from equipment damage, downtime, liability claims, and unforeseen events. Even with strong operational controls, risk cannot be eliminated entirely.

This is where insurance becomes a critical part of risk management. Evaluating coverage that aligns with the unique needs of data center operations can help protect against losses that would otherwise threaten business continuity. BOP insurance by Next Insurance can help managers think more holistically about protecting assets, operations, and revenue streams as part of an overall risk strategy.

Downtime and Business Continuity Risks

Even brief outages can result in significant financial losses, reputational damage, and contractual penalties. This downtime may be caused by:

  • Power failures
  • Human error
  • Equipment malfunction
  • External events

To mitigate this risk, managers should prioritize redundancy at every critical point, including power supplies, network connections, and backup systems. Regular testing of disaster recovery plans is just as important as documenting them. A plan that hasn’t been tested is often unreliable in real-world conditions and puts your business and your customers at risk.

Cybersecurity and Physical Security Threats

Data centers face a dual security challenge: digital threats and physical risks. Cyberattacks continue to grow in sophistication, while physical threats such as unauthorized access, theft, or vandalism remain real concerns.

Addressing this requires layered security. On the digital side, this includes continuous patching, network segmentation, and monitoring for unusual activity. Physically, access controls, surveillance systems, and strict visitor protocols are essential. Managers should also ensure that staff training keeps pace with evolving threats, as human error remains a major vulnerability.

Compliance and Regulatory Pressure

Data centers often operate under complex regulatory requirements related to data privacy, industry standards, and regional laws. Compliance failures can result in fines, legal exposure, and loss of customer trust.

Managers can stay ahead by maintaining clear documentation, conducting regular audits, and working closely with legal and compliance teams. Building compliance into operational processes, rather than treating it as an afterthought, reduces risk and simplifies reporting.

Turning Challenges Into Strategic Advantage

While data centers face a wide range of operational and strategic challenges, these issues also present opportunities for improvement and differentiation. Managers who address all of the above proactively are better positioned to deliver reliability and value in a competitive market. Don’t let your facility be the one that falls victim to these issues; take action now.

# # #

About the Author:

James Daniels is a freelance writer, business enthusiast, a bit of a tech buff, and an overall geek. He is also an avid reader, who can spend hours reading and learning about the latest gadgets and tech, whilst offering views and opinions on these topics.

The post Issues Data Centers Face and How to Overcome Them: A Guide for Managers appeared first on Data Center POST.

Duos Edge AI Brings Another Edge Data Center to Rural Texas

14 January 2026 at 14:00

Duos Technologies Group Inc. (Nasdaq: DUOT), through its operating subsidiary Duos Edge AI, Inc., has deployed another patented modular Edge Data Center (EDC) in Hereford, Texas. The facility was deployed in partnership with Hereford Independent School District (Hereford ISD) and marks another milestone in Duos Edge AI’s mission to deliver localized, low-latency compute infrastructure that supports education and community technology growth across rural and underserved markets.

“We are thrilled to partner with Duos Edge AI to bring a state-of-the-art Edge Data Center directly to our Administration location in Hereford ISD,” said Dr. Ralph Carter, Superintendent of Hereford Independent School District. “This innovative deployment will dramatically enhance our digital infrastructure, providing low-latency access to advanced computing resources that will empower our teachers with cutting-edge tools, enable real-time AI applications in the classroom, and ensure faster, more reliable connectivity for our students and staff.”

Each modular facility is designed for rapid 90-day deployment and delivers scalable, high-performance computing power with enterprise-grade security controls, including third-party SOC 2 Type II certification under AICPA standards.

Duos Edge AI’s patented modular infrastructure is covered by a U.S. patent for an Entryway for a Modular Data Center (Patent No. US 12,404,690 B1), providing customers with secure, compliant, and differentiated edge infrastructure that operates exclusively on grid power and requires no water for cooling. Duos Edge AI continues to expand nationwide, capitalizing on growing demand for localized compute, AI enablement, and resilient digital infrastructure across underserved and high-growth markets.

“Each deployment strengthens our ability to scale a repeatable, capital-efficient edge infrastructure platform,” said Doug Recker, President of Duos and Founder of Duos Edge AI. “Our patented, SOC 2 Type II-audited EDCs are purpose-built to meet real customer demand for secure, low-latency computing while supporting long-term revenue growth and disciplined execution across our targeted markets.”

To learn more about Duos Edge AI, visit www.duosedge.ai.

The post Duos Edge AI Brings Another Edge Data Center to Rural Texas appeared first on Data Center POST.

Duos Edge AI Deploys Edge Data Center in Abilene, Texas

9 January 2026 at 17:00

Duos Technologies Group Inc. (Nasdaq: DUOT), through its operating subsidiary Duos Edge AI, Inc., has deployed a new Edge Data Center (EDC) in Abilene, Texas, in collaboration with Region 14 Education Service Center (ESC). This deployment expands Duos Edge AI’s presence in Texas while bringing advanced digital infrastructure to support K-12 education, healthcare, workforce development, and local businesses across West Texas.

This installation builds on Duos Edge AI’s recent Texas deployments in Amarillo (Region 16), Waco (Region 12), and Victoria (Region 3), supporting a broader strategy to deploy edge computing solutions tailored to education, healthcare, and enterprise needs.​

“We are excited to partner with Region 14 ESC to bring cutting-edge technology to Abilene and West Texas, bringing a carrier neutral colocation facility to the market while empowering educators and communities with the tools they need to thrive in a digital world,” said Doug Recker, President of Duos and Founder of Duos Edge AI.​ “This EDC represents our commitment to fostering innovation and economic growth in regions that have historically faced connectivity challenges.”

The Abilene EDC will serve as a local carrier-neutral colocation facility and computing hub, delivering enhanced bandwidth, secure data processing, and low-latency AI capabilities to more than 40 school districts and charter schools across an 11-county region spanning over 13,000 square miles.​

Chris Wigington, Executive Director for Region 14 ESC, added, “Collaborating with Duos Edge AI allows us to elevate the technological capabilities of our schools and partners, ensuring equitable access to high-speed computing and AI resources. This data center will be a game-changer for student learning, teacher development, and regional collaboration.”

By locating the data center at Region 14 ESC, the partnership aims to help bridge digital divides in rural and underserved communities by enabling faster access to educational tools, cloud services, and AI-driven applications, while reducing reliance on distant centralized data centers.​

The EDC is expected to be fully operational in early 2026, with plans for a launch event at Region 14 ESC’s headquarters in Abilene.​

To learn more about Duos Edge AI, visit www.duosedge.ai.

The post Duos Edge AI Deploys Edge Data Center in Abilene, Texas appeared first on Data Center POST.

The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling

30 December 2025 at 15:00

As data centers evolve to support AI, edge computing, and high-density cloud architectures, the challenge is no longer just maintaining an optimal power usage effectiveness (PUE), it is about achieving thermal reliability with unprecedented compute loads. Direct-to-chip liquid cooling continues to push the envelope on heat transfer performance, but one underestimated element in overall system reliability is the material composition of the fluid conveyance network itself. The hoses and fittings that transport coolant through these systems operate in extreme thermal and chemical environments, and their design directly influences uptime, maintenance intervals, and total cost of ownership.

Why Material Selection Matters

At its core, a liquid cooling system is only as reliable as its weakest component. If hoses, fittings, or seals fail due to poor material compatibility, the result could be leaks, contamination, or shortened system life, leading to downtime and costly remediation. Rubber, and rubber-like, materials are critical in hose assemblies, as they must balance flexibility for installation and serviceability with long-term resistance to temperature, pressure, permeation, coolant and coolant additives.

The challenge lies in the fact that not all rubbers, or rubber-like materials, are created equal. Each formulation is a complex recipe of polymers, plasticizers, fillers, and curing agents designed to deliver specific performance characteristics. The wrong selection can lead to issues such as fluid permeation, premature aging, or contamination of the coolant. In mission-critical environments like data centers, where even minor disruptions are unacceptable, this risk is magnified.

Temperature and Chemical Compatibility

Although data center cooling systems typically operate at temperatures between 45°C (113°F) and 65°C (149°F), sometimes reaching 100°C (212°F), those ranges can stress certain materials. Nitrile rubber, for example, performs well in oil-based environments but ages quickly in water-glycol systems, especially at higher temperatures. This can cause hardening, cracking, or coolant contamination.

By contrast, ethylene propylene diene monomer (EPDM) rubber has excellent resistance to water, glycols, and the additives commonly used in data center coolants, such as corrosion inhibitors and biocides. EPDM maintains its properties across the required operating range, making it a proven choice for direct-to-chip applications.

However, not all EPDM is the same. Developing the right EPDM for the application demands a deep understanding of polymer chemistry, filler interactions, and process control to precisely balance flexibility, heat resistance, and long-term stability.

Additionally, two curing processes, sulfur-cured and peroxide-cured, produce different performance outcomes. Sulfur-cured EPDM, while widely used, introduces zinc ions during the curing process. When exposed to deionized water, these ions can leach into the coolant, causing contamination and potentially degrading system performance. Peroxide-cured EPDM avoids this issue, offering higher temperature resistance, lower permeation rates, and greater chemical stability, making it the superior choice for modern liquid cooling.

Even among peroxide-cured EPDM compounds, long term performance is not uniform. While the cure system defines the crosslink chemistry, other formulation choices, particularly filler selection and dispersion, can influence how the material performs over time.

The use of fillers and additives is common in rubber compounding. These ingredients are often selected to control cost, improve processability, or achieve certain performance characteristics such as flame resistance, strength, or flexibility.

The challenge is that some filler systems perform well during initial qualification but are not optimized for long-term exposures faced in the operating environment. Certain fillers or processing aids can slowly migrate over time, introducing extractables into the coolant or subtly altering elastomer properties. For data center applications, EPDM compounds must therefore be engineered with a focus on long term stability, reinforcing why EPDM should not be treated as a commodity material in critical cooling systems.

Risks of Non-Compatible Materials

Material incompatibility can have several cascading effects:

  • Contamination – Non-compatible materials can leach extractables into the coolant, leading to discoloration, chemical imbalance, and reduced thermal efficiency.
  • Permeation – Some rubbers allow fluid to slowly migrate through the hose walls, causing coolant loss or altering the fluid mixture over time.
  • Premature Failure – Elevated temperatures can accelerate aging, leading to cracking, swelling, or loss of mechanical strength.
  • Leakage – Rubber under compression may deform over time, jeopardizing seal integrity if not properly formulated for resistance to compression set and tear.

In a recent two-week aging test at 80°C using a water-glycol coolant, hoses made of nitrile and sulfur-cured EPDM showed visible discoloration of the coolant, indicating leaching and breakdown of the material. Peroxide-cured EPDM, on the other hand, maintained stability, demonstrating its compatibility and reliability in long-term data center applications.

The Gates Approach

Drawing on lessons from mission critical industries that have managed thermal challenges for decades, Gates engineers apply material science rigor to the design of liquid cooling hoses for data center applications.

Rather than relying solely on initial material ratings or short-term qualification criteria, Gates begins by tailoring compound design to the operating environment. This includes deliberate control of polymer selection, filler systems, and cure chemistry to manage long term aging behavior, extractables, permeation, and retention of mechanical properties over time in high purity coolant systems.

Compounds are validated through extended aging and immersion testing that reflects real operating conditions, including exposure to heat, deionized water, and water-glycol coolants. This allows potential material changes to be identified and addressed during development, before installation in the field.

This material science driven process is applied across Gates liquid cooling platforms, including the Data Master, Data Master MegaFlex, and newly released Data Master Eco product lines. By engineering for long term stability rather than only initial compliance, Gates designs hose solutions intended to support reliable operation, predictable maintenance intervals, and extended service life in direct-to-chip liquid cooled data center environments.

Looking Ahead

As data centers continue to scale, thermal management solutions must adapt in parallel. Advanced architectures, higher rack densities, and growing environmental regulations all point to a future where liquid cooling is standard. In this environment, material selection is no longer a secondary consideration; it is foundational to system reliability.

Operators who prioritize material compatibility in fluid conveyance lines will benefit from longer service intervals, improved coolant stability, and reduced risk of downtime. In other words, the proper rubber formulation doesn’t just move fluid, it moves the industry forward.

At Gates, sustainable, high-performance cooling begins with the details. By focusing on the science of materials, we help ensure that data center operators can confidently deploy liquid cooling systems designed for the challenges of today and the innovations of tomorrow.

# # #

About the Author

Chad Chapman is a Mechanical Engineer with over 20 years of experience in the fluid power industry. He currently serves as a Product Application Engineering Manager at Gates, where he leads a team that provides technical guidance, recommendations, and innovative solutions to customers utilizing Gates products and services.

Driven by a passion for problem-solving, Chad thrives on collaborating with customers to understand their unique challenges and deliver solutions that optimize performance. He is energized by learning about new applications and technologies, especially where insights can be shared across industries. At Gates, he has been exploring the emerging field of direct-to-chip liquid cooling, an exciting extension of his deep expertise in thermal management. The rapid advancements in IT technology and AI have made his journey an inspiring and rewarding learning experience.

The post The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling appeared first on Data Center POST.

Beyond Copper and Optics: How e‑Tube Powers the Terabit Era

24 December 2025 at 14:00

As data centers push toward terabit-scale bandwidth, legacy copper interconnects are hitting their limit, or as the industry calls it, the “copper cliff.” Traditional copper cabling, once the workhorse of short-reach connectivity, has become too thick, too inflexible, and too short to keep pace with the scale of xPU bandwidth growth in the data center. Optical solutions, on the other hand, will work but are saddled with the “optical penalty,” which includes power-hungry and expensive electrical and optical components, manufacturing design complexity, latency challenges and, more importantly, reliability issues after deployment.

With performance, cost, and operational downsides increasing for both copper and optical interconnects, network operators are looking beyond the old interconnect paradigm toward options that can scale at the pace of next-generation AI clusters in data centers.

Enter e-Tube: the industry’s third interconnect option.

e-Tube Technology is a scalable multi-terabit interconnect platform that uses RF data transmission through a plastic dielectric waveguide. Designed to meet coming 1.6T and 3.2T bandwidth requirements, e-Tube leverages cables made from common plastic material such as low-density polyethylene (LDPE), which avoids the high-frequency loss and physical constraints inherent to copper. The result is a flexible, power-efficient, and highly reliable link that delivers the reach and performance required to scale up AI clusters in next-generation data center designs.

Figure 1. Patented e-Tube platform

The industry is taking notice of the impact e-Tube will make, with results showing up to 10x the reach of copper while being 5x lighter and 2x thinner. Compared with optical cables, e-Tube consumes 3x less power, achieves 1,000x lower latency, and costs roughly 3x less. Its scalable design architecture delivers consistent bandwidth at future data speeds of 448 Gbps and beyond across networks, extending existing use cases and creating new applications that copper and optical interconnects cannot support.

With the copper cliff and the optical penalty looming on the horizon, the time is now for data center operators to consider a third interconnect option. e-Tube RF transmission over a plastic dielectric delivers measurable impact with longer reach, best-in-class energy efficiency, near-zero latency, and lower cost. As AI workloads explode and terabit-scale fabrics become the norm, e‑Tube is poised to be a foundational cable interconnect for scaling up AI clusters in the next generation of data centers.

# # #

About the Author

Sean Park is a seasoned executive with over 25 years of experience in the semiconductors, wireless, and networking market. Throughout his career, Sean has held several leadership positions at prominent technology companies, including IDT, TeraSquare, and Marvell Semiconductor. As the CEO, CTO, and Founder at Point2 Technology, Sean was responsible for leading the company’s strategic direction and overseeing its day-to-day operations. He also served as a Director at Marvell, where he provided invaluable guidance and expertise to help the company achieve its goals. He holds a Ph.D. in Electrical Engineering from the University of Washington and also attended Seoul National University.

The post Beyond Copper and Optics: How e‑Tube Powers the Terabit Era appeared first on Data Center POST.

Creating Critical Facilities Manpower Pipelines for Data Centers

23 December 2025 at 15:00

The digital technology ecosystem and virtual spaces are powered by data – its storage, processing, and computation – and data centers are the mitochondria on which this ecosystem depends. From online gaming and video streaming (including live events) to e-commerce transactions, credit and debit card payments, and the complex algorithms that drive artificial intelligence (AI), machine learning (ML), cloud services, and enterprise applications, data centers support nearly every aspect of modern life. Yet the professionals who operate and maintain these facilities – the facilities engineers, technicians, and operators – remain largely unsung heroes of the information age.

Most end users, particularly consumers, rarely consider the backend infrastructure that enables their digital experiences. The continuous operation of data centers depends on the availability of adequate and reliable power and cooling for critical IT loads, robust fire protection systems, and tightly managed operational processes that together ensure uptime and system reliability. For users, however, the expectation is simple and unambiguous: online services must work seamlessly and be available whenever they are needed.

According to the Data Center Map, there are 668 data centers in Virginia, more than 4,000 in the United States, and over 11,000 globally. Despite this rapid growth, the industry faces a significant challenge: it is not producing enough qualified technicians, engineers, and operators to keep pace with the growth of data center infrastructure in the United States, despite an average total compensation of $70,000, which may go as high as $109,000 in Northern Virginia, as estimated by Glassdoor.

Data center professionals require highly specialized electrical and mechanical maintenance skills and knowledge of network/server operations gained through robust training and hands-on experience. Sadly, the industry risks falling short of its workforce needs due to the unprecedented scale and speed of data center construction. This growth is being fueled by the global race for AI dominance, increasing demand for digital connectivity, and the continued expansion of cloud computing services.

Industry projections highlight the magnitude of the challenge. Omdia (as reported by Data Center Dynamics) suggests data center investment will likely hit $1.6 trillion by 2030, while BloombergNEF forecasts data center demand of 106 gigawatts by 2035. All of these projects and projections demand skilled individuals the industry does not currently have, and the vacuum could create problems in the future if it is not filled with the right people. According to the Uptime Institute’s 2023 survey, 58% of operators are finding it difficult to get qualified candidates, and 55% say they are having difficulty retaining staff. The Uptime Institute’s 2024 data center staffing and recruitment survey shows turnover rates of 26% and 21% for electrical and mechanical trades, respectively. The Birmingham Group estimates that AI facilities will create about 45,000 data center technician and engineer jobs, and that employment is projected to reach 780,000 by 2030.

Meeting current and future workforce demands requires both leveraging existing talent pipelines and creating new ones. Technology is evolving rapidly, and filling critical data center positions increasingly demands professionals who are not only technically skilled but also continuously trained to keep up with changing industry standards and technologies.

Organizational Apprenticeship and Training Programs

Organizations should invest in internal training and apprenticeship programs for individuals with technical training from community colleges, creating pipelines of technically skilled people to fill critical positions. This will help secure the future of critical roles within the data center industry.

Trade Program Expansion in Community Colleges

Community colleges should expand their technical trade programs because these programs create life-sustaining careers with the possibility of earning high incomes. Northern Virginia Community College has spearheaded data center operations programs to train individuals who can comfortably fill entry-level data center critical facilities positions in Northern Virginia and beyond.

Veterans Re-entry Programs 

Many military veterans possess the transferable skills needed within data center critical facilities, and organizations should leverage this opportunity. They can harness the programs offered by Disabled American Veterans, the DOD’s Transition Assistance Program, and other military and DOD initiatives.

# # #

About the Author

Rafiu Sunmonu is the Supervisor of Critical Facilities Operations at NTT Global Data Centers Americas, Inc.

The post Creating Critical Facilities Manpower Pipelines for Data Centers appeared first on Data Center POST.

How Sabey Data Centers’ Manhattan Site Is Powering the Next Wave of AI Innovation

12 December 2025 at 15:00

Sabey Data Centers’ Manhattan facility is emerging as a key hub for AI inference, giving enterprises a way to run real-time, AI-powered services in the heart of New York City. Located at 375 Pearl Street, the site combines immediate high-density capacity with proximity to Wall Street, media companies and other major business hubs, positioning AI infrastructure closer to users, data and critical partners.​

The facility offers nearly one megawatt of turnkey capacity today, with an additional seven megawatts available across powered shells, allowing organizations to scale from pilots to production without relocating workloads. Engineered for high-density, GPU-driven environments, SDC Manhattan supports modern AI architectures while maintaining the resiliency and operational excellence that define Sabey’s portfolio.​

Carrier-neutral connectivity and direct, low-latency access to New York’s network and cloud ecosystems make the facility an ideal interconnection point for latency-sensitive AI applications such as trading, risk analysis and real-time personalization. This is increasingly important for the industry as AI becomes embedded in core business processes, where milliseconds directly affect revenue, user experience and competitive differentiation. Locating inference closer to these ecosystems helps operators overcome limitations of distant, centralized infrastructure and unlock more responsive, data-rich services.​

“The future of AI isn’t just about training, it’s about delivering intelligence at scale,” said Tim Mirick, President of Sabey Data Centers. “Our Manhattan facility places that capability at the edge of one of the world’s largest and most connected markets. That’s an enormous advantage for inference models powering everything from financial services to media to healthcare.”​

By positioning its Manhattan site as an AI inference hub, Sabey Data Centers helps enterprises place their most advanced workloads where connectivity, capacity and proximity converge, aligning AI-optimized infrastructure with trusted, mission-critical operations. For the wider digital infrastructure landscape, this approach signals how urban data centers can evolve to meet the demands of AI at scale. This will bring intelligence closer to the markets it serves and set a direction for how facilities in other global metros will need to adapt as AI adoption accelerates.​

To learn more about Sabey’s Manhattan data center, visit sabeydatacenters.com.

The post How Sabey Data Centers’ Manhattan Site Is Powering the Next Wave of AI Innovation appeared first on Data Center POST.

The Rising Risk Profile of CDUs in High-Density AI Data Centers

10 December 2025 at 17:00

AI has pushed data center thermal loads to levels the industry has never encountered. Racks that once operated comfortably at 8-15 kW are now climbing past 50-100 kW, driving an accelerated shift toward liquid cooling. This transition is happening so quickly that many organizations are deploying new technologies faster than they can fully understand the operational risks.

In my recent five-part LinkedIn series:

  • 2025 U.S. Data Center Incident Trends & Lessons Learned (9-15-2025)
  • Building Safer Data Centers: How Technology is Changing Construction Safety (10-1-2025)
  • The Future of Zero-Incident Data Centers (10-15-2025)
  • Measuring What Matters: The New Safety Metrics in Data Centers (11-1-2025)
  • Beyond Safety: Building Resilient Data Centers Through Integrated Risk Management (11-15-2025)

— a central theme emerged: as systems become more interconnected, risks become more systemic.

That same dynamic influenced the Direct-to-Chip Cooling: A Technical Primer article that Steve Barberi and I published in Data Center POST (10-29-2025). Today, we are observing this systemic-risk framework emerging specifically in the growing role of Cooling Distribution Units (CDUs).

CDUs have evolved from peripheral equipment to a true point of convergence for engineering design, controls logic, chemistry, operational discipline, and human performance. As AI rack densities accelerate, understanding these risks is becoming essential.

CDUs: From Peripheral Equipment to Critical Infrastructure

Historically, CDUs were treated as supplemental mechanical devices. Today, they sit at the center of the liquid-cooling ecosystem, governing flow, pressure, temperature stability, fluid quality, isolation, and redundancy. In practice, the CDU now operates as the boundary between stable thermal control and cascading instability.

Yet, unlike well-established electrical systems such as UPSs, switchgear, and feeders, CDUs lack decades of operational history. Operators, technicians, commissioning agents, and even design teams have limited real-world reference points. That blind spot is where a new class of risk is emerging, and three patterns are showing up most frequently.

A New Risk Landscape for CDUs

  • Controls-Layer Fragility
    • Controls-related instability remains one of the most underestimated issues in liquid cooling. Many CDUs still rely on single-path PLC architectures, limited sensor redundancy, and firmware not designed for the thermal volatility of AI workloads. A single inaccurate pressure, flow, or temperature reading can trigger inappropriate or incorrect system responses affecting multiple racks before anyone realizes something is wrong.
  • Pressure and Flow Instability
    • AI workloads surge and cycle, producing heat patterns that stress pumps, valves, gaskets, seals, and manifolds in ways traditional IT never did. These fluctuations are accelerating wear modes that many operators are just beginning to recognize. Illustrative Open Compute Project (OCP) design examples (e.g., 7–10 psi operating ranges at relevant flow rates) are helpful reference points, but they are not universal design criteria.
  • Human-Performance Gaps
    • CDU-related high-potential near misses (HiPo NMs) frequently arise during commissioning and maintenance, when technicians are still learning new workflows. For teams accustomed to legacy air-cooled systems, tasks such as valve sequencing, alarm interpretation, isolation procedures, fluid handling, and leak response are unfamiliar. Unfortunately, as noted in my Building Safer Data Centers post, when technology advances faster than training, people become the first point of vulnerability.

Photo: Borealis CDU (photo by AGT)

Additional Risks Emerging in 2025 Liquid-Cooled Environments

Beyond the three most frequent patterns noted above, several quieter but equally impactful vulnerabilities are also surfacing across 2025 deployments:

  • System Architecture Gaps
    • Some first-generation CDUs and loops lack robust isolation, bypass capability, or multi-path routing. Single points of failure, such as a valve, pump, or PLC, drive full-loop shutdowns, mirroring the cascading-risk behaviors highlighted in my earlier work on resilience.
  • Maintenance & Operational Variability
    • SOPs for liquid-cooling vary widely across sites and vendors. Fluid handling, startup/shutdown sequences, and leak-response steps remain inconsistent and/or create conditions for preventable HiPo NMs.
  • Chemistry & Fluid Integrity Risks
    • As highlighted in the DTC article Steve Barberi and I co-authored, corrosion, additive depletion, cross-contamination, and stagnant zones can quietly degrade system health. ICP-MS analysis and other advanced techniques are recommended in OCP-aligned coolant programs for PG-25-class fluids, though not universally required.
  • Leak Detection & Nuisance Alarms
    • False positives and false negatives, especially across BMS/DCIM integrations, remain common. Predictive analytics are becoming essential despite not yet being formalized in standards.
  • Facility-Side Dynamics
    • Upstream conditions such as temperature swings, ΔP fluctuations, water hammer, cooling tower chemistry, and biofouling often drive CDU instability. CDUs are frequently blamed for behavior originating in facility water systems.
  • Interoperability & Telemetry Semantics
    • Inconsistent Modbus, BACnet, and Redfish mappings, naming conventions, and telemetry schemas create confusion and delay troubleshooting.
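
One practical mitigation for the telemetry-semantics problem is to normalize vendor-specific point names and units into a single canonical schema before the data reaches the BMS/DCIM layer. The mapping below is purely illustrative; the vendors, point names, and scaling factors are invented and are not drawn from any standard.

```python
# Illustrative normalization layer for CDU telemetry. Vendor names, point
# names, units and scaling factors are invented for demonstration only.

CANONICAL_MAP = {
    "vendor_a": {
        "SupFluidTemp": ("supply_temp_c", 1.0),      # already in deg C
        "PriLoopDP":    ("loop_dp_kpa", 6.894757),   # reported in psi
    },
    "vendor_b": {
        "T_supply_x10": ("supply_temp_c", 0.1),      # tenths of a degree C
        "dp_primary":   ("loop_dp_kpa", 1.0),
    },
}

def normalize(vendor, raw_points):
    """Translate a vendor-specific point dict into canonical names/units."""
    out = {}
    for name, value in raw_points.items():
        if name in CANONICAL_MAP.get(vendor, {}):
            canonical, scale = CANONICAL_MAP[vendor][name]
            out[canonical] = value * scale
    return out

print(normalize("vendor_a", {"SupFluidTemp": 30.5, "PriLoopDP": 8.0}))
print(normalize("vendor_b", {"T_supply_x10": 305, "dp_primary": 55.2}))
# Both CDUs now report supply_temp_c and loop_dp_kpa, whatever the source.
```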

Best Practices: Designing CDUs for Resilience, Not Just Cooling Capacity

If CDUs are going to serve as the cornerstone of liquid cooling in AI environments, they must be engineered around resilience, not simply performance. Several emerging best practices are gaining traction:

  1. Controls Redundancy
    • Dual PLCs, dual sensors, and cross-validated telemetry signals reduce single-point failure exposure. These features do not have prescriptive standards today but are rapidly emerging as best practices for high-density AI environments.
  2. Real-Time Telemetry & Predictive Insight
    • Detecting drift, seal degradation, valve lag, and chemistry shift early is becoming essential. Predictive analytics and deeper telemetry integration are increasingly expected.
  3. Meaningful Isolation
    • Operators should be able to isolate racks, lines, or nodes without shutting down entire loops. In high-density AI environments, isolation becomes uptime.
  4. Failure-Mode Commissioning
    • CDUs should be tested not only for performance but also for failure behavior such as PLC loss, sensor failures, false alarms, and pressure transients. These simulations reveal early-life risk patterns that standard commissioning often misses.
  5. Reliability Expectations
    • CDU design should align with OCP’s system-level reliability expectations, such as MTBF targets on the order of >300,000 hours for OAI Level 10 assemblies, while recognizing that CDU-specific requirements vary by vendor and application.
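
A minimal sketch of the cross-validation idea in item 1 above: read the same quantity from two independent sensors, act only on an agreed value, and refuse to act (escalating to a safe state and an alarm instead) when the sensors disagree beyond a tolerance. The tolerance and fallback behavior here are illustrative, not taken from any standard or product.

```python
# Illustrative cross-validated sensor read for a CDU control loop.
# The disagreement limit and fallback behavior are hypothetical.

DISAGREEMENT_LIMIT_KPA = 5.0   # max allowed spread between redundant sensors

def validated_pressure(sensor_a_kpa, sensor_b_kpa):
    """Return (value, trusted). An untrusted reading should put the loop in
    a safe state (hold last known-good output, raise an alarm) rather than
    drive an automatic corrective action."""
    spread = abs(sensor_a_kpa - sensor_b_kpa)
    if spread > DISAGREEMENT_LIMIT_KPA:
        return None, False                   # sensors disagree: do not act
    return (sensor_a_kpa + sensor_b_kpa) / 2.0, True

print(validated_pressure(61.0, 62.4))  # (61.7, True)  -> safe to act on
print(validated_pressure(61.0, 75.3))  # (None, False) -> alarm, hold state
```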

Standards Alignment

The risks and mitigation strategies outlined above align with emerging guidance from ASHRAE TC 9.9 and the OCP’s liquid-cooling workstreams, including:

  • OAI System Liquid Cooling Guidelines
  • Liquid-to-Liquid CDU Test Methodology
  • ASTM D8040 & D1384 for coolant chemistry durability
  • IEC/UL 62368-1 for hazard-based safety
  • ASHRAE 90.4, PUE/WUE/CUE metrics, and
  • ANSI/BICSI 002, ISO/IEC 22237, and Uptime’s Tier Standards emphasizing concurrently maintainable infrastructure.

These collectively reinforce a shift: CDUs must be treated as availability-critical systems, not auxiliary mechanical devices.

Looking Ahead

The rise of CDUs represents a moment the data center industry has seen before. As soon as a new technology becomes mission-critical, its risk profile expands until safety, engineering, and operations converge around it. Twenty years ago, that moment belonged to UPS systems. Ten years ago, it was batteries. Now, in AI-driven environments, it is the CDU.

Organizations that embrace resilient CDU design, deep visibility, and operator readiness will be the ones that scale AI safely and sustainably.

# # #

About the Author

Walter Leclerc is an independent consultant and recognized industry thought leader in Environmental Health & Safety, Risk Management, and Sustainability, with deep experience across data center construction and operations, technology, and industrial sectors. He has written extensively on emerging risk, liquid cooling, safety leadership, predictive analytics, incident trends, and the integration of culture, technology, and resilience in next-generation mission-critical environments. Walter led the initiatives that earned Digital Realty the Environment+Energy Leader’s Top Project of the Year Award for its Global Water Strategy and recognition on EHS Today’s America’s Safest Companies List. A frequent global speaker on the future of safety, sustainability, and resilience in data centers, Walter holds a B.S. in Chemistry from UC Berkeley and an M.S. in Environmental Management from the University of San Francisco.

The post The Rising Risk Profile of CDUs in High-Density AI Data Centers appeared first on Data Center POST.

Data Center Rack and Enclosure Market to Surpass USD 10.5 Billion by 2034

9 December 2025 at 18:00

The global data center rack and enclosure market was valued at USD 4.6 billion in 2024 and is projected to grow at a CAGR of 8.4% from 2025 to 2034, according to a recent report by Global Market Insights Inc.

The increasing adoption of edge computing, spurred by the proliferation of Internet of Things (IoT) devices, is a significant driver of market growth. The surge in modular data centers, known for their portability and scalability, boosts demand for adaptable racks and enclosures. These systems enable businesses to expand data center capacity incrementally without committing to large-scale infrastructure. Modular designs often require specialized racks and enclosures that are quick to deploy and flexible enough to meet evolving operational demands.

By component, the data center rack and enclosure market is segmented into solutions and services. In 2024, the solutions segment captured 75% of the market share and is expected to reach USD 7 billion by 2034. The increasing complexity of tasks like artificial intelligence (AI), machine learning (ML), and big data processing drives demand for high-density rack solutions. These racks optimize space utilization, a critical factor in environments with constrained power, cooling, and availability. Advanced cooling mechanisms, such as liquid cooling and airflow optimization, are essential features supporting these dense configurations.

In terms of application, the market is categorized into manufacturing, BFSI, colocation, government, healthcare, IT & telecom, energy, and others. The IT & telecom segment accounted for 32% of the market share in 2024. The shift towards cloud computing is revolutionizing IT and telecom industries, increasing the demand for robust data center infrastructure. Scalable and efficient racks and enclosures are essential to handle growing data volumes while ensuring optimal performance in cloud-based operations.

North America dominated the global data center rack and enclosure market in 2024, holding a 40% market share, with the U.S. leading the region. The presence of major cloud service providers like Amazon Web Services (AWS), Google Cloud, and Microsoft Azure has driven significant data center expansion in the region. This growth necessitates modular, flexible, scalable rack and enclosure solutions to support dynamic storage needs. Additionally, substantial investments by government entities and private enterprises in digital transformation and IT infrastructure upgrades further fuel market expansion.

The demand for innovative, high-performance data center racks and enclosures continues to rise as industries embrace digital transformation and advanced technologies. This trend ensures a positive outlook for the market through the forecast period.

The post Data Center Rack and Enclosure Market to Surpass USD 10.5 Billion by 2034 appeared first on Data Center POST.

Building Data Centers Faster and Smarter: Visual, Collaborative Scheduling Isn’t Just an Option—It’s a Business Mandate.

8 December 2025 at 15:00

Data centers are the backbone of today’s digital economy. Every second of uptime, every day of project delivery, directly impacts a client’s bottom line and a contractor’s reputation. The financial stakes are undeniable: a 60MW data center project delayed by just one day can incur an opportunity cost of $500,000. Extend that to a week, and you’re looking at millions in lost revenue or competitive ground. For general contractors, such delays aren’t just bad for business; they can severely damage trust and future opportunities.

In this environment, crafting accurate, realistic, and truly achievable construction schedules isn’t merely a best practice; it’s a strategic imperative.

The Inherent Flaws of Legacy Scheduling

For years, tools like Oracle’s Primavera P6 have been the industry standard for large-scale construction scheduling. They offer power and precision, no doubt. But they are also inherently rigid and complex. Building or modifying a schedule in these traditional systems demands specialized training, effectively centralizing control with a small, specialized group of schedulers or planners. They become the gatekeepers of the entire process.

This siloed approach creates significant blind spots. Data center projects are incredibly complex, requiring seamless integration of mechanical, electrical, structural, and IT infrastructure. Coordination challenges are guaranteed. When only a handful of individuals can genuinely contribute to the master schedule, critical insights from superintendents, subcontractors, or field engineers are inevitably missed.

The outcome? Schedules that appear solid on paper but often fail to align with jobsite realities. Overly optimistic sequencing, misjudged dependencies, and underestimated risk factors invariably lead to costly schedule slippage.

Unlocking Efficiency: The Power of Visual and Collaborative Scheduling

Enter the next generation of scheduling tools: visual, cloud-based, and inherently collaborative platforms designed to make the scheduling process faster, more transparent, and, crucially, more inclusive.

Unlike traditional tools confined to desktop software and static Gantt charts, these modern solutions empower teams to dynamically build and iterate schedules. Tasks, dependencies, and milestones are visually mapped, immediately highlighting potential bottlenecks or opportunities to safely accelerate timelines through parallel work.

More critically, their collaborative nature opens the scheduling process to the entire project team, including field engineers, trade partners, project managers, and even clients. Everyone can review and comment on the schedule in real time, identify potential conflicts proactively, and propose alternatives that genuinely improve efficiency. The result is a plan that is not only more accurate but truly optimized.

A Broader Brain Trust for a Superior Schedule

The principle is straightforward and powerful: collective intelligence builds a better plan.

In a data center project, every discipline brings unique, invaluable expertise. The MEP contractor might see chances for concurrent work in electrical and cooling systems. The structural team could pinpoint a sequencing issue impacting crane utilization. The commissioning manager might realize tasks can start earlier based on equipment readiness.

When these diverse perspectives are integrated into the schedule, the plan becomes far more resilient and efficient. Collaborative scheduling tools make this input practical and structured, without sacrificing control. The visual aspect makes it easier for non-schedulers to engage meaningfully. They can visually grasp how their proposed changes impact the overall timeline.

This democratization of the scheduling process cultivates a culture of ownership and accountability. When every team member understands the plan and has contributed to its formation, project alignment improves dramatically. Miscommunications decline, coordination excels, and the risk of costly delays diminishes significantly.

Intelligent Schedule Compression Through Collaboration

Beyond accuracy, visual scheduling enables intelligent schedule compression. In conventional environments, shortening a project’s duration often happens reactively, after a delay or missed milestone. Collaborative planning, however, identifies optimization opportunities from day one.

Consider overlapping workstreams previously assumed to be sequential, a move that can shave days or weeks off a schedule. Adjusting resource allocations or reordering tasks based on real-world input can yield similar gains. The key difference: these are proactive decisions, informed by those on the ground, rather than reactive ones made under duress.
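
To make that kind of proactive compression concrete, here is a minimal critical-path sketch in Python on a hypothetical task graph; the task names, durations, and dependencies are invented for illustration, and this is not any vendor's scheduling engine.

```python
# Minimal critical-path (CPM) forward pass on a hypothetical task graph.
# Durations (days) and dependencies are invented for illustration only.

def project_length(tasks: dict[str, tuple[int, list[str]]]) -> int:
    """tasks maps name -> (duration_days, [predecessor names])."""
    earliest_finish: dict[str, int] = {}

    def finish(name: str) -> int:
        if name not in earliest_finish:
            duration, preds = tasks[name]
            start = max((finish(p) for p in preds), default=0)
            earliest_finish[name] = start + duration
        return earliest_finish[name]

    return max(finish(t) for t in tasks)

# Baseline plan: electrical rough-in assumed to wait for all mechanical work.
sequential = {
    "structure":     (30, []),
    "mechanical":    (25, ["structure"]),
    "electrical":    (25, ["mechanical"]),
    "commissioning": (15, ["mechanical", "electrical"]),
}

# After field input: electrical can overlap mechanical once structure is done.
overlapped = dict(sequential, electrical=(25, ["structure"]))

print(project_length(sequential))  # 95 days
print(project_length(overlapped))  # 70 days: overlapping saves over three weeks
```

Real schedules carry far more tasks and constraints, but the mechanism is the same: making a dependency explicit, or removing one that was only assumed, changes the critical path and therefore the delivery date.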

For data center clients, keen to bring capacity online as quickly as possible, these efficiencies translate directly into competitive advantage. Delivering a project even two weeks early can represent millions of dollars in added value.

Transparency Drives Trust

Transparency is another critical, often underestimated, benefit. With all stakeholders working from the same live schedule, there’s no confusion over versions, no endless email threads with outdated attachments, and no surprises when changes occur. Updates are real-time and visible to everyone with access, including the owner.

This level of openness fosters trust, both internally within the contractor’s organization and externally with the client. Owners appreciate clear visibility into progress and potential risks. Project teams benefit from streamlined communication and reduced rework. In the high-stakes, competitive data center market, trust is a powerful differentiator.

Data-Driven, Continuous Improvement

Modern scheduling platforms also generate rich data on project planning and execution. Over time, this data becomes an indispensable tool for benchmarking performance, identifying recurring bottlenecks, and continuously refining future schedules.

For instance, analytics can reveal typical durations for specific data center construction phases, common deviation points, and which sequencing strategies consistently deliver faster results. Armed with this intelligence, contractors can hone their planning models, becoming far more predictive.
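
As a sketch of what such benchmarking might look like, the snippet below aggregates per-phase durations across a handful of past projects; the project names, phases, and durations are all invented for illustration.

```python
# Hypothetical benchmarking of construction-phase durations across past
# projects; every project name, phase, and figure below is invented.

from statistics import mean, pstdev

# (project, phase) -> actual duration in days
history = {
    ("DC-A", "shell"): 120, ("DC-A", "MEP fit-out"): 150, ("DC-A", "commissioning"): 45,
    ("DC-B", "shell"): 110, ("DC-B", "MEP fit-out"): 170, ("DC-B", "commissioning"): 60,
    ("DC-C", "shell"): 125, ("DC-C", "MEP fit-out"): 160, ("DC-C", "commissioning"): 50,
}

for phase in sorted({p for (_, p) in history}):
    durations = [d for (_, p), d in history.items() if p == phase]
    print(f"{phase:<14} mean {mean(durations):5.1f} days   spread {pstdev(durations):4.1f} days")

# Phases with the widest spread are the likeliest to slip, making them good
# candidates for extra sequencing attention on the next project.
```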

In an industry that prioritizes precision and repeatability, the ability to learn from past projects and apply those lessons to new ones is invaluable.

A New Standard for a Digital Era

The data center market is expanding rapidly, fueled by AI, cloud computing, and insatiable data demands. The pressure on contractors to deliver complex projects quickly and reliably will only intensify. Those adopting modern, collaborative scheduling approaches will gain a decisive edge.

By moving beyond static, specialist-driven scheduling to dynamic, inclusive planning, general contractors can achieve:

  • Significantly greater accuracy in project forecasts.
  • Shorter construction durations through optimized sequencing.
  • Higher team engagement and accountability.
  • Enhanced transparency and trust with clients.
  • Systematic continuous improvement across all projects.

Legacy scheduling tools have served their purpose, but their limitations are increasingly mismatched with the speed and complexity of today’s data center projects. The future of data center delivery lies in processes and tools that are as connected and intelligent as the facilities themselves.

Conclusion

The message for data center builders is unambiguous: planning differently is no longer optional. Visual, collaborative scheduling isn’t just a technological upgrade; it’s a fundamental mindset shift that transforms scheduling into a shared, strategic advantage. When the entire project team can see, understand, and shape the plan, they can build faster, smarter, and with far greater confidence. And in a world where every day of delay can cost half a million dollars, that’s not just progress, it’s significant profit.

# # #

About the Author:

Phil Carpenter is the Chief Marketing Officer of Planera, a construction tech startup revolutionizing project management with its visual, Critical Path Method (CPM)-based scheduling and planning software. For more information, visit www.planera.io.


Compu Dynamics Drives Record Support for NOVA’s IET Programs at 6th Annual Charity Golf Tournament

3 December 2025 at 15:30

Compu Dynamics is once again turning a day on the course into long-term impact for the next generation of data center professionals. At its 6th Annual Charity Golf Tournament, the company raised a record-breaking $55,000 in support of Northern Virginia Community College’s (NOVA) Information and Engineering Technologies (IET) programs, pushing its cumulative contributions to the college to more than $200,000 to date.

Hosted at Bull Run Golf Club in northern Virginia, this year’s sold-out tournament drew more than 150 participants from more than 40 companies across the data center and mission-critical infrastructure ecosystem, including colocation providers, cloud and AI infrastructure companies, developers, manufacturers, and service partners. What has quickly become one of the region’s premier networking events for the data center community is equally a catalyst for local education and workforce development.

For Compu Dynamics, the tournament is about far more than friendly competition. It is one of the company’s most visible expressions of its commitment to building the future workforce that will build and power data centers.

“Our annual golf tournament has become this incredible convergence of generosity, partnership, and purpose,” said Steve Altizer, president and CEO of Compu Dynamics. “Beyond the golf and the camaraderie, we are helping equip the next generation of leaders with the skills and education they need. That’s what this is really about.”

Funds raised from the event support NOVA’s IET programs, which provide hands-on training in areas such as engineering technology and data center operations. These programs give students exposure to real-world infrastructure environments and career pathways, preparing them for high-demand roles across the digital infrastructure and critical facilities landscape.

The long-standing partnership between Compu Dynamics and NOVA demonstrates how deeply aligned industry and education can be when it comes to workforce readiness.

“We applaud Compu Dynamics for leading by example in its commitment to community colleges and the success of NOVA students,” said Kelly Persons, executive director of the NOVA Educational Foundation. “Their generosity and ongoing partnership demonstrate the power of collaboration in shaping the future workforce.”

By investing in programs that connect students directly with data center operations, engineering, and technology careers, Compu Dynamics is helping ensure that northern Virginia’s talent pipeline keeps pace with the region’s rapid growth in digital infrastructure.

Now in its sixth year, the charity tournament continues to scale in both participation and impact, underscoring how a single annual event can create sustained benefit for students, employers, and the broader community. What started as a way to bring partners together has evolved into a platform for advancing education, supporting economic mobility, and strengthening an industry that increasingly underpins every facet of the digital economy.

As data center demand accelerates – driven by cloud, AI, and the ever-expanding digital ecosystem – initiatives like Compu Dynamics’ charity tournament help ensure that opportunity in this sector is accessible to the local students who will become its future engineers, operators, and leaders.

In northern Virginia and beyond, Compu Dynamics is proving that when the industry invests in education, everyone wins.

To learn more, visit Compu Dynamics.


DC Investors Are Choosing a New Metric for the AI Era

2 December 2025 at 16:00

The conversation around data center performance is changing. Investors, analysts, and several global operators have begun asking a question that PUE cannot answer: how much compute do we produce for every unit of power we consume? This shift is not theoretical; it is already influencing how facilities are evaluated and compared.

Investors are beginning to favor data center operators who can demonstrate not only energy efficiency but also compute productivity per megawatt. Capital is moving toward facilities that understand and quantify this relationship. Several Asian data center groups have already started benchmarking facilities in this way, particularly in high density and liquid cooled environments.

Industry organizations are paying attention to these developments. The Open Compute Project has expressed interest in reviewing a white paper on Power Compute Effectiveness (PCE) and Return on Invested Power (ROIP) to understand how these measures could inform future guidance and standards. These signals point in a consistent direction. PUE remains valuable, but it can no longer serve as the primary lens for evaluating performance in modern facilities.

PUE is simple and recognizable.

PUE = Total Facility Power ÷ IT Power

It shows how much supporting infrastructure is required to deliver power to the IT load. What it does not show is how effectively that power becomes meaningful compute.

As AI workloads accelerate, data centers need visibility into output as well as efficiency. This is the role of PCE.

PCE = Compute Output ÷ Total Power

PCE reframes performance around the work produced. It answers a question that is increasingly relevant: how much intelligence or computational value do we create for every unit of power consumed?

Alongside PCE is ROIP, the operational companion metric that reflects real-time performance. ROIP provides a view of how effectively power is being converted into useful compute at any moment. While PCE shows long-term capability, ROIP reflects the health of the system under live conditions and exposes the impact of cooling performance, density changes, and power constraints.
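
A minimal sketch of how these metrics could be computed from facility telemetry is shown below; the unit chosen for "compute output" and all of the sample numbers are assumptions, since the white paper's exact definitions are not given here.

```python
# Minimal sketch of PUE, PCE, and a live ROIP reading, following the
# definitions above. The unit chosen for "compute output" (normalized
# useful work units) and every sample number are assumptions.

def pue(total_facility_kw: float, it_kw: float) -> float:
    """PUE = total facility power / IT power."""
    return total_facility_kw / it_kw

def pce(compute_output_units: float, total_energy_kwh: float) -> float:
    """PCE = compute output / total power consumed over a reporting period."""
    return compute_output_units / total_energy_kwh

def roip(useful_output_rate: float, power_draw_kw: float) -> float:
    """ROIP as a live reading: useful output per kW drawn right now."""
    return useful_output_rate / power_draw_kw

# Example telemetry for one 24-hour window (assumed figures).
print(f"PUE : {pue(total_facility_kw=12_500, it_kw=10_000):.2f}")                  # 1.25
print(f"PCE : {pce(compute_output_units=8.4e6, total_energy_kwh=300_000):.1f} units/kWh")
print(f"ROIP: {roip(useful_output_rate=350.0, power_draw_kw=12_500):.3f} units/s per kW")
```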

This shift in measurement mirrors what has taken place in other sectors. Manufacturing moved from uptime to throughput. Transportation moved from mileage to performance and reliability. Data centers, especially those supporting AI and accelerated computing, are now moving from efficiency to productivity.

Cooling has become a direct enabler of compute and not just a supporting subsystem. When cooling performance changes, compute output changes with it. This interdependence means that understanding the relationship between power, cooling capability, and computational output is essential for real world performance, not just engineering design.

PUE still matters. It reflects operational discipline, mechanical efficiency, and the overall metabolism of the facility. What it cannot reveal is how much useful work the data center is actually producing or how effectively it can scale under load. PCE and ROIP fill that gap. They provide a more accurate view of capability, consistency, and return on power, especially as the industry moves from traditional air cooled environments to liquid ready, high density architectures.

The next phase of data center optimization will not be defined by how little power we waste, but by how much value we create with the power we have. As demand increases and the grid becomes more constrained, organizations that understand their true compute per megawatt performance will have a strategic and economic advantage. The move from energy scarcity to energy stewardship begins with measuring what matters.

The industry has spent years improving efficiency. The AI era requires us to improve output.

# # #

About the Author

Paul Quigley is the former President and current Chief Strategic Partnership Officer of Airsys Cooling Technologies, and a global advocate for high density, energy efficient data center design. With more than three decades in HVAC and mission critical cooling, he focuses on practical solutions that connect energy stewardship with real world compute performance. Paul writes and speaks internationally about PCE, ROIP, and the future of data center health in the age of AI.


Cloud Outages Cost You Big: Here’s How to Stay Online No Matter What

25 November 2025 at 15:00

When IT goes down, the hit is immediate: revenue walks out the door, employees grind to a halt, and customers start questioning your credibility. Cloud services are built to prevent that spiral, with redundancy, automatic failover, and cross-region replication baked right in. Try matching that with your own data center and you are signing up for massive hardware bills, nonstop maintenance, and the joy of keeping everything powered and patched around the clock. Cloud resilience is not just better. It is on a completely different level.

The High Stakes of Downtime

Whether you’re running an eCommerce platform, a financial services firm, or a healthcare system, your business depends on fast, reliable access to data. Downtime isn’t just an inconvenience; it’s a financial disaster. Every minute of outage costs businesses an average of $9,000. That’s why companies demand high-availability (HA) and disaster recovery (DR) solutions that won’t fail when they need them most. HA and DR are essential components of any organization’s business continuity plan (BCP).

Dependable access to business data is essential for operational efficiency and accurate decision-making. More organizations rely on high-availability data to automate business processes, and ready access to stored data and databases is critical for e-commerce, financial services, healthcare systems, CRM, inventory management, and more. This growing need for reliable data access drives more organizations to embrace cloud computing.

According to Gartner, “Seventy percent of organizations are poorly positioned in terms of disaster recovery (DR) capabilities, with 54% likely suffering from ‘mirages of overconfidence.’” To minimize the risk of costly downtime, cloud customers must shop for reliable data access services.

The Uptime Institute offers four tiers of data center classification for resiliency and redundancy.

  • Tier I – The lowest uptime ranking with basic data infrastructure, limited redundancy, and the highest risk of downtime.
  • Tier II – Offers additional physical infrastructure redundancy for power and cooling with downtime during maintenance.
  • Tier III – Supports concurrent maintenance and provides multiple distribution paths so components can be removed or replaced without interruption.
  • Tier IV – A fault-tolerant infrastructure that guarantees uninterrupted cooling and power, with redundant systems so that no single event will create a failure.

Most on-premises data centers strive for Tier II or III designs. Most customers that demand high uptime shop for Tier III and Tier IV services, and the cost and complexity of achieving a Tier IV design are usually left to cloud service providers.

To provide high availability, many cloud computing providers segment their infrastructure into Availability Zones (AZs), with each zone set up to operate independently. Each AZ has one or more data centers with self-contained power, cooling, and networking capabilities to minimize failures. AZs are typically situated close to one another to minimize latency for data replication. They also have redundant infrastructure, so there is no single point of failure within an AZ. Distributing workloads across AZs promotes high availability and aligns with Tier III and Tier IV capabilities to minimize downtime.

To calculate uptime, you multiply the availability of every layer of the application infrastructure, starting with the underlying AZ, through the operating system, and finally the application layer. To achieve the highest availability, architectures allow applications to fail over, patch or upgrade, and then fail back, whether across AZs or during operating system and database patching and upgrades.
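
As a small worked sketch of that multiplication, the snippet below combines per-layer availability figures into a composite uptime and the corresponding expected downtime; all of the individual percentages are illustrative assumptions.

```python
# Composite uptime is the product of per-layer availabilities.
# Every figure below is an assumed, illustrative value.

from math import prod

layers = {
    "availability zone": 0.9999,
    "operating system":  0.9995,
    "database":          0.9995,
    "application":       0.9990,
}

composite = prod(layers.values())
downtime_hours_per_year = (1 - composite) * 365 * 24

print(f"composite availability: {composite:.4%}")                            # ~99.79%
print(f"expected downtime:      {downtime_hours_per_year:.1f} hours/year")   # ~18.4

# Running the same stack independently in a second AZ and failing over
# improves the AZ layer to roughly 1 - (1 - 0.9999) ** 2, lifting the
# composite figure accordingly.
```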

The ability to replicate data across regions is the ideal solution for disaster recovery. Regions can be separated by geographic areas or even continents, but having access to redundant data ensures that applications and workloads remain operational, even in the event of a natural disaster or widespread network failure.

Providing cross-region DR in the cloud is central to the BCP and ensures data availability, with data being asynchronously or synchronously replicated across regions. Cloud DR also includes managed failover, switching traffic to a secondary region if the primary cloud infrastructure fails.

Maintaining cross-region failover does involve tradeoffs: latency and cost are higher than with multi-AZ replication, but the benefit of maintaining continuous operations offsets those drawbacks.

Comparing On-premises Versus Cloud HA and DR

Deciding whether to adopt on-premises or cloud computing data centers is largely a matter of comparing costs and capabilities.

On-premises environments are ideal for users who require absolute control and customization. For example, healthcare and financial services organizations may need full control over hardware and software configurations and data because of security and unique compliance requirements. On-premises data centers also offer greater control over system performance.

Scaling on-premises data centers to support high availability and disaster recovery is expensive, requiring redundant hardware and software, generators, backup cooling capacity, etc. Maintaining a high-performance data infrastructure requires substantial expertise and maintenance, including regular failover testing.

While cloud-based data centers offer less control over configurations, they tend to provide greater reliability. The cloud service providers manage the physical infrastructure, scaling computing power and data storage as needed without installing additional hardware. Service-level agreements ensure data availability and system uptime.

Cloud data centers also offer high availability using strategies such as Availability Zones and disaster recovery using cross-region replication. Most of these services are included with cloud services contracts and are easier to set up than provisioning multiple sites on-premises. Most cloud computing providers also use a pay-as-you-go model, simplifying budgeting and cutting costs.

Many organizations adopt a hybrid strategy, using on-premises data center services for critical computing applications and leveraging cloud computing services for DR and scalability. This approach mitigates risk by replicating critical workloads to cloud-based systems, providing DR without duplicating hardware and software, and it cuts costs for redundant services that are seldom used. It also allows companies to migrate data services to the cloud over time.

In the end, high availability and disaster recovery are not optional; they are the backbone of every modern enterprise. And while hybrid strategies can balance security, compliance, and cost, the cloud remains unmatched when it comes to delivering true resilience at scale. Its built-in redundancy, automatic failover, and cross-region replication provide a level of protection that on-premises systems simply cannot match without astronomical investment. For organizations that want continuity they can trust, the cloud is not just a viable option. It is the strategic choice.

# # #

About the Author

Bakul Banthia is co-founder of Tessell. Tessell is a cloud-native Database-as-a-Service (DBaaS) platform that simplifies the setup, management, security, and scaling of transactional and analytic databases in the cloud.


Telescent Appoints Veteran Financial Leader Trevor Roots as Chief Financial Officer

20 November 2025 at 15:00

Telescent has strengthened its leadership team with the appointment of Trevor Roots as Chief Financial Officer. With more than 25 years of financial experience across the semiconductor, optical networking, and technology sectors, Roots joins the company at a pivotal moment as demand for automated fiber management accelerates across hyperscale data centers and AI infrastructure.

A seasoned executive, Roots has guided multiple venture-backed technology companies through rapid expansion. Most recently, he served as CFO of Jariet Technologies, a developer of high-speed data converters. His earlier roles include leading financial operations at Sierra Monolithics, where he supported revenue growth from $15 million to more than $70 million while maintaining strong operating margins. He also helped drive E-Tek Dynamics’ growth from $40 million to a $500 million annual run rate prior to its acquisition by JDS Uniphase.

“Trevor’s proven ability to scale technology companies and enhance operational performance makes him an ideal fit for Telescent,” said Anthony Kewitsch, CEO and Co-founder. “His background in semiconductor and optical networking environments aligns perfectly with our continued growth.”

Roots’ arrival follows a period of major momentum for Telescent, including the company’s largest order to date from a top hyperscale data center operator and expanding global partnerships for its G5 robotic patch-panel system.

“I’m excited to join Telescent at such a significant stage in its growth,” Roots said. “The company’s automated fiber management solutions address critical needs for data center and AI operators, and I look forward to supporting its long-term strategy.”

As Telescent scales to meet surging industry demand, Roots’ financial leadership will help guide the company’s next phase of expansion.

Learn more at telescent.com.


Insuring the Cloud: How Nuclear Policies Could Power the Next Generation of Data Centers

19 November 2025 at 16:00

The rapid growth of data centers has made them one of the most energy-intensive sectors of the industrial economy. Providing power to support artificial intelligence, cloud computing, and cryptocurrency mining requires an uninterrupted supply of electricity. To ensure reliability, some data center developers are considering the deployment of small modular reactors (SMRs). These reactors would provide a steady, carbon-free energy source. However, as nuclear energy enters the data center space, the question of insurance, and of how to protect operators and the public, becomes critical to progress toward commercial viability.

Understanding Nuclear Insurance Basics

The foundation of nuclear liability insurance in the United States lies in the Price-Anderson Nuclear Industries Indemnity Act (1957), which created a unique liability system for nuclear operators. The Act mandates that reactor owners maintain the maximum amount of insurance coverage available in the market to cover potential nuclear liability damages. Currently, each reactor above 100 MW is required to carry $500 million in primary coverage, supported by an additional layer of retrospective payments from other licensed operators if needed. Reactors that generate more than 10 MW but do not generate electrical power, and reactors that generate less than 100 MW, are required to carry liability insurance of between $4.5 million and $74 million. The precise amount is governed by a formula based on thermal power and on the local population density.

Nuclear liability insurance is fundamentally distinct from conventional insurance because it addresses specialized, high-consequence, low-probability risks that other markets cannot efficiently underwrite. Commercial insurance disperses risk among individual insurers, while nuclear insurance relies on insurance pools. Pools such as American Nuclear Insurers (ANI) in the US and Nuclear Risk Insurers (NRI) in the UK combine the capacity of multiple insurers to jointly cover nuclear risks.

These pooling arrangements are necessary because the nuclear risk profile does not adhere to normal actuarial assumptions. Insurers lack adequate historical loss data on nuclear accidents, and maximum loss scenarios are so extreme that no single company could absorb them. The pooling structure allows for a broader distribution of potentially catastrophic losses across multiple insurers.

Underwriting and Risk Assessment

For nuclear property insurance, underwriters focus on plant design, regulatory compliance, and operational culture rather than the statistical loss experience that dominates conventional property insurance underwriting. Specialized insurance mutuals such as Nuclear Electric Insurance Limited (NEIL) and the European Mutual Association for Nuclear Insurance (EMANI) provide coverage for damage to physical plant property. This coverage includes nuclear-specific risks that are typically excluded in commercial markets, such as on-site decontamination, radiation cleanup, and extended outage losses.

Conventional property insurance underwriters evaluate frequent, well-understood risks based on probabilistic models using large datasets. For nuclear installations, the low number of severe historical accidents, combined with potentially enormous losses like those sustained at Fukushima and Chernobyl, precludes traditional risk-based rating; underwriters instead rely on specialized engineering assessments.

Early Engagement with Markets Is Essential

SMR projects are no different from traditional capital projects with respect to builder’s risk insurance coverage during construction; however, when fuel arrives on site, the requirements for coverage and the availability of insurance capacity change drastically. It is important for project managers to engage with underwriters early in the conceptual design phase to ensure adequate coverage is available. Since underwriters will likely view SMRs as first-of-a-kind technology with safety and operational profiles that differ from traditional nuclear reactors, they will want to understand the design, construction, and operational nuances before deciding whether to insure the risk. Early collaboration allows insurers to identify specific risk exposures at each stage of development, from off-site manufacturing to on-site assembly and nuclear fuel commissioning, avoiding gaps in coverage, particularly during the transition from construction to full operation. Failure to involve insurers early may lead to coverage fragmentation or exclusions, impacting financing and project timelines.

Nuclear property and liability insurance diverges from conventional insurance primarily through collective risk-sharing and the absence of market-based underwriting models. Its unique nature reflects the complexity of managing nuclear risks. As companies explore the deployment of SMRs to power data centers, understanding these distinctions is crucial to designing viable insurance programs and avoiding bottlenecks that could delay operation.

# # #

About the Author:

Ron Rispoli is a Senior Vice President in the Energy Group at Stephens Insurance. His primary focus is assisting clients in navigating the complex landscape of nuclear property and liability coverage for both existing and future nuclear facilities. He also provides risk management consultation services to utility clients, with an emphasis on emerging risks in nuclear construction. He works to help clients understand the critical role insurance plays in managing risks associated with nuclear operations and activities. He has over forty years of experience in the commercial nuclear arena.


The Speed of Burn

17 November 2025 at 16:00

It takes the Earth hundreds of millions of years to create usable energy.

It takes us milliseconds to burn it.

That imbalance between nature’s patience and our speed has quietly become one of the defining forces of our time.

All the power that moves our civilization began as light. Every joule traces back to the Big Bang, carried forward by the sun, stored in plants, pressed into fuels, and now released again as electricity. The current that runs through a data center today began its journey billions of years ago…ancient energy returning to motion through modern machines.

And what do we do with it? We turn it into data.

Data has become the fastest-growing form of energy use in human history. We are creating it faster than we can process, understand, or store it. The speed of data now rivals the speed of light itself, and it far exceeds our ability to assign meaning to it.

The result is a civilization burning geological time to produce digital noise.

The Asymmetry of Time

A hyperscale data center can take three to five years to design, permit, and build. The GPUs inside it process information measured in trillionths of a second. That mismatch, years to construct and microseconds to consume, defines the modern paradox of progress. We are building slower than we burn.

Energy creation is slow. Data consumption is instantaneous. And between those two speeds lies a widening moral and physical gap.

When we run a model, render an image, or stream a video, we aren’t just using electricity. We’re releasing sunlight that’s been waiting since the dawn of life to be freed. The electrons are real, finite, and irreplaceable in any human timeframe — yet we treat data as limitless because its cost is invisible.

Less than two percent of all new data is retained after a year. Ninety-eight percent disappears — deleted, overwritten, or simply forgotten. Still, we build ever-larger servers to hold it. We cool them, power them, and replicate them endlessly. It’s as if we’ve confused movement with meaning.

The Age of the Cat-Video Factory

We’ve built cat-video factories on the same grid that could power breakthroughs in medicine, energy, and climate.

There’s nothing wrong with joy or humor. Those things are a beautiful part of being human. But we’ve industrialized the trivial. We’re spending ancient energy to create data that doesn’t last the length of a memory. The cost isn’t measured in dollars; it’s measured in sunlight.

Every byte carries a birth certificate of energy. It may have traveled billions of years to arrive in your device, only to vanish in seconds. We are burning time itself — and we’re getting faster at it every year.

When Compute Outruns Creation

AI’s rise has made this imbalance impossible to ignore. A one-gigawatt data campus, drawing power on a scale once reserved for a national power plant, can now belong to a single company. Each facility may cost tens of billions of dollars and consume electricity on par with small nations. We’ve reached a world where the scarcity of electrons defines the frontier of innovation.

It’s no longer the code that limits us; it’s the current.

The technology sector celebrates speed: faster training, faster inference, faster deployment. But nature doesn’t share that sense of urgency. Energy obeys the laws of thermodynamics, not the ambitions of quarterly growth. What took the universe nearly 14 billion years to refine (the conversion of matter into usable light) we now exhaust at a pace that makes geological patience seem quaint.

This isn’t an argument against technology. It’s a reminder that progress without proportion becomes entropy. Efficiency without stewardship turns intelligence into heat.

The Stewardship of Light

There’s a better lens for understanding this moment. One that blends physics with purpose.

If all usable power began in the Big Bang and continues as sunlight, then every act of computation is a continuation of that ancient light’s journey. To waste data is to interrupt that journey; to use it well is to extend it. Stewardship, then, isn’t just environmental — it’s existential.

In finance, CFOs use Return on Invested Power (ROIP) to judge whether the energy they buy translates into profitable compute and operational output. But there’s a deeper layer worth considering: a moral ROIP. Beyond the dollars, what kind of intelligence are we generating from the power we consume? Are we creating breakthroughs in medicine, energy, and climate, or simply building larger cat-video factories?

Both forms of ROIP matter. One measures financial return on electrons; the other measures human return on enlightenment. Together, they remind us that every watt carries two ledgers: one economic, one ethical.

We can’t slow AI’s acceleration. But we can bring its metabolism back into proportion. That begins with awareness… the humility to see that our data has ancestry, that our machines are burning the oldest relics of the cosmos. Once you see that, every click, every model, every watt takes on new weight.

The Pause Before Progress

Perhaps our next revolution isn’t speed at all. Perhaps it’s stillness, the mere ability to pause and ask whether the next byte we create honors the journey of the photons that power it.

The call isn’t to stop. It’s to think proportionally.

To remember that while energy cannot be created or destroyed, meaning can.

And that the true measure of progress may not be how much faster we can turn power into data, but how much more wisely we can turn data into light again.

Sunlight is the power. Data is the shadow.

The question is whether our shadows are getting longer… or wiser.

# # #

About the Author

Paul Quigley is President of Airsys Cooling Technologies. He writes about the intersection of power, data, and stewardship. Airsys focuses on groundbreaking technology with a conscience.

