
Received yesterday — 31 January 2026

AI and cooling: toward more automation

AI is increasingly steering the data center industry toward new operational practices, where automation, analytics and adaptive control are paving the way for “dark” — or lights-out, unstaffed — facilities. Cooling systems, in particular, are leading this shift. Yet despite AI’s positive track record in facility operations, one persistent challenge remains: trust.

In some ways, AI faces a similar challenge to that of commercial aviation several decades ago. Even after airlines had significantly improved reliability and safety performance, making air travel not only faster but also safer than other forms of transportation, it still took time for public perceptions to shift.

That same tension between capability and confidence lies at the heart of the next evolution in data center cooling controls. As AI models improve in performance and become better understood, more transparent and explainable, the question is no longer whether AI can manage operations autonomously, but whether the industry is ready to trust it enough to turn off the lights.

AI’s place in cooling controls

Thermal management systems, such as CRAHs, CRACs and airflow management, represent the front line of AI deployment in cooling optimization. Their modular nature enables the incremental adoption of AI controls, providing immediate visibility and measurable efficiency gains in day-to-day operations.

AI can now be applied across four core cooling functions:

  • Dynamic setpoint management. Continuously recalibrates temperature, humidity and fan speeds to match load conditions.
  • Thermal load forecasting. Predicts shifts in demand and makes adjustments in advance to prevent overcooling or instability.
  • Airflow distribution and containment. Uses machine learning to balance hot and cold aisles and stage CRAH/CRAC operations efficiently.
  • Fault detection, predictive and prescriptive diagnostics. Identifies coil fouling, fan oscillation, or valve hunting before they degrade performance.
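To make the first of these functions concrete, dynamic setpoint management can be reduced to a feedback loop that nudges cooling output toward a target condition. The sketch below is a deliberately minimal, hypothetical Python illustration; the function name, gain and clamping band are assumptions, and production platforms use far richer models:

```python
# Minimal sketch of dynamic setpoint management: nudge CRAH fan speed
# toward the value that holds supply-air temperature at target.
# All names, gains and thresholds here are illustrative, not from any vendor.

def adjust_fan_speed(current_temp_c: float,
                     target_temp_c: float,
                     fan_speed_pct: float,
                     gain: float = 2.0) -> float:
    """Proportional adjustment: raise fan speed when the room runs hot,
    lower it when the room runs cold, clamped to a safe operating band."""
    error = current_temp_c - target_temp_c
    new_speed = fan_speed_pct + gain * error
    return max(30.0, min(100.0, new_speed))  # stay within 30-100% of rated speed

# Room is 1.5 C above target: fan speed rises from 60% to 63%.
print(adjust_fan_speed(25.5, 24.0, 60.0))  # 63.0
```

A real controller would add integral and derivative terms (or a learned model), rate-limit changes to avoid fan oscillation, and coordinate multiple units rather than acting per-CRAH.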

A growing ecosystem of vendors is advancing AI-driven cooling optimization across both air- and water-side applications. Companies such as Vigilent, Siemens, Schneider Electric, Phaidra and Etalytics offer machine learning platforms that integrate with existing building management systems (BMS) or data center infrastructure management (DCIM) systems to enhance thermal management and efficiency.

Siemens’ White Space Cooling Optimization (WSCO) platform applies AI to match CRAH operation with IT load and thermal conditions, while Schneider Electric, through its Motivair acquisition, has expanded into liquid cooling and AI-ready thermal systems for high-density environments. In parallel, hyperscale operators, such as Google and Microsoft, have built proprietary AI engines to fine-tune chiller and CRAH performance in real time. These solutions range from supervisory logic to adaptive, closed-loop control. However, all share a common aim: improve efficiency without compromising compliance with service level agreements (SLAs) or operator oversight.

The scope of AI adoption

While IT cooling optimization has become the most visible frontier, conversations with AI control vendors reveal that most mature deployments still begin at the facility water loop rather than in the computer room. Vendors often start with the mechanical plant and facility water system because these areas involve a smaller set of variables (temperature differentials, flow rates and pressure setpoints) and can be treated as closed, well-bounded systems.

This makes the water loop a safer proving ground for training and validating algorithms before extending them to computer room air cooling systems, where thermal dynamics are more complex and influenced by containment design, workload variability and external conditions.

Predictive versus prescriptive: the maturity divide

AI in cooling is evolving along a maturity spectrum — from predictive insight to prescriptive guidance and, increasingly, to autonomous control. Table 1 summarizes the functional and operational distinctions among these three stages of AI maturity in data center cooling.

Table 1 Predictive, prescriptive, and autonomous AI in data center cooling


Most deployments today stop at the predictive stage, where AI enhances situational awareness but leaves action to the operator. Achieving full prescriptive control will require not only deeper technical sophistication but also a shift in mindset.

Technically, it is more difficult to engineer because the system must not only forecast outcomes but also choose and execute safe corrective actions within operational limits. Operationally, it is harder to trust because it challenges long-held norms about accountability and human oversight.

The divide, therefore, is not only technical but also cultural. The shift from informed supervision to algorithmic control is redefining the boundary between automation and authority.

AI’s value and its risks

No matter how advanced the technology becomes, cooling exists for one reason: maintaining environmental stability and meeting SLAs. AI-enhanced monitoring and control systems support operating staff by:

  • Predicting and preventing temperature excursions before they affect uptime.
  • Detecting system degradation early and enabling timely corrective action.
  • Optimizing energy performance under varying load profiles without violating SLA thresholds.

Yet efficiency gains mean little without confidence in system reliability. It is also important to clarify that AI in data center cooling is not a single technology. Control-oriented machine learning models, such as those used to optimize CRAHs, CRACs and chiller plants, operate within physical limits and rely on deterministic sensor data. These differ fundamentally from language-based AI models such as GPT, where “hallucinations” refer to fabricated or contextually inaccurate responses.

At the Uptime Network Americas Fall Conference 2025, several operators raised concerns about AI hallucinations — instances where optimization models generate inaccurate or confusing recommendations from event logs. In control systems, such errors often arise from model drift, sensor faults, or incomplete training data, not from the reasoning failures seen in language-based AI. When a model’s understanding of system behavior falls out of sync with reality, it can misinterpret anomalies as trends, eroding operator confidence faster than it delivers efficiency gains.
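The model-drift failure mode described above can be illustrated with a simple check that compares a model's predicted temperatures against sensor readings over a window; sustained error suggests the model's view of the system has fallen out of sync with reality. This is a hypothetical sketch (the threshold and window are assumed values), not any vendor's actual method:

```python
# Illustrative model-drift check: flag drift when the mean absolute error
# between predicted and observed temperatures exceeds a tolerance.
# Threshold, window size and data are assumptions for illustration only.

from statistics import mean

def drift_detected(predicted: list[float],
                   observed: list[float],
                   threshold_c: float = 1.0) -> bool:
    """True when mean absolute prediction error exceeds the threshold."""
    errors = [abs(p - o) for p, o in zip(predicted, observed)]
    return mean(errors) > threshold_c

predicted = [24.0, 24.1, 24.0, 24.2]
observed  = [25.6, 25.8, 25.5, 25.9]   # consistently ~1.6 C hotter than modeled
print(drift_detected(predicted, observed))  # True
```

In practice such a check would feed a retraining or fallback decision rather than a simple boolean, but the principle — continuously validating the model against ground-truth sensors — is the same.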

The discomfort is not purely technical; it is also human. Many data center operators remain uneasy about letting AI take the controls entirely, even as they acknowledge its potential. In AI’s ascent toward autonomy, trust remains the runway still under construction.

Critically, modern AI control frameworks are being designed with built-in safety, transparency and human oversight. For example, Vigilent, a provider of AI-based optimization controls for data center cooling, reports that its optimizing control switches to “guard mode” whenever it is unable to maintain the data center environment within tolerances. Guard mode brings on additional cooling capacity (at the expense of power consumption) to restore SLA-compliant conditions. Typical triggers include rapid drift or temperature hot spots. A manual override option also enables the operator to take control, informed by monitoring data and event logs.

This layered logic provides operational resiliency by enabling systems to fail safely: guard mode ensures stability, manual override guarantees operator authority, and explainability, via decision-tree logic, keeps every AI action transparent. Even in dark-mode operation, alarms and reasoning remain accessible to operators.
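The layered logic described above can be sketched as a simple mode-selection function: manual override takes precedence, guard mode engages when conditions leave tolerance, and AI optimization runs otherwise. The mode names and tolerances here are illustrative assumptions, not Vigilent's implementation:

```python
# Sketch of layered fail-safe control: operator authority first, then a
# guard mode that trades power for stability, then normal optimization.
# Modes, names and tolerance values are hypothetical illustrations.

def select_mode(temp_c: float,
                tolerance_low_c: float,
                tolerance_high_c: float,
                manual_override: bool) -> str:
    if manual_override:
        return "manual"          # operator authority always wins
    if not (tolerance_low_c <= temp_c <= tolerance_high_c):
        return "guard"           # add cooling capacity to restore SLA conditions
    return "optimize"            # normal AI-driven efficiency optimization

print(select_mode(24.0, 22.0, 27.0, manual_override=False))  # optimize
print(select_mode(29.5, 22.0, 27.0, manual_override=False))  # guard
print(select_mode(29.5, 22.0, 27.0, manual_override=True))   # manual
```

The ordering of the checks is the point: each safety layer short-circuits the one below it, so the optimizer can only act when the environment is in tolerance and no human has taken control.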

These frameworks directly address one of the primary fears among data center operators: losing visibility into what the system is doing.

Outlook

Gradually, the concept of a dark data center, one operated remotely with minimal on-site staff, has shifted from interesting theory to desirable strategy. In recent years, many infrastructure operators have increased their use of automation and remote-management tools to enhance resiliency and operational flexibility, while also compensating for low staffing levels. Cooling systems, particularly those governed by AI-assisted control, are now central to this operational transformation.

Operational autonomy does not mean abandoning human control; it means achieving reliable operation without the need for constant supervision. Ultimately, a dark data center is not about turning off the lights; it is about turning on trust.


The Uptime Intelligence View

AI in thermal management has evolved from an experimental concept into an essential tool, improving efficiency and reliability across data centers. The next step — coordinating facility water, air and IT cooling liquid systems — will define the evolution toward greater operational autonomy. However, the transition to “dark” operation will be as much cultural as it is technical. As explainability, fail-safe modes and manual overrides build operator confidence, AI will gradually shift from being a copilot to autopilot. The technology is advancing rapidly; the question is how quickly operators will adopt it.

The post AI and cooling: toward more automation appeared first on Uptime Institute Blog.

Why Data Sovereignty Is Becoming a Strategic Imperative for AI Infrastructure

29 January 2026 at 13:30

As artificial intelligence reshapes how organizations generate value from data, a quieter shift is happening beneath the surface. The question is no longer just how data is protected, but where it is processed, who governs it, and how infrastructure decisions intersect with national regulation and digital policy.

Datalec Precision Installations (DPI) is seeing this shift play out across global markets as enterprises and public sector organizations reassess how their data center strategies support both AI performance and regulatory alignment. What was once treated primarily as a compliance issue is increasingly viewed as a foundational design consideration.

Sovereignty moves upstream

Data sovereignty has traditionally been addressed after systems were deployed, often resulting in fragmented architectures or operational workarounds. That approach is becoming less viable as regulations tighten and AI workloads demand closer proximity to sensitive data.

Organizations are now factoring sovereignty into infrastructure planning from the start, ensuring data remains within national borders and is governed by local legal frameworks. For many, this shift reduces regulatory risk while creating clearer operational boundaries for advanced workloads.

AI raises the complexity

AI intensifies data governance challenges by extending them beyond storage into compute and model execution. Training and inference processes frequently involve regulated or sensitive datasets, increasing exposure when data or workloads cross borders.

This has driven growing interest in sovereign AI environments, where data, compute, and models remain within a defined jurisdiction. Beyond compliance, these environments offer greater control over digital capabilities and reduced dependence on external platforms.

Balancing performance and governance 

Supporting sovereign AI requires infrastructure that can deliver high-density compute and low-latency performance without compromising physical security or regulatory alignment. DPI addresses this by delivering AI-ready data center environments designed to support GPU-intensive workloads while meeting regional compliance requirements.

The objective is to enable organizations to deploy advanced AI systems locally without sacrificing scalability or operational efficiency.

Regional execution at global scale

Demand for localized, compliant infrastructure is growing across regions where digital policy and economic strategy intersect. DPI’s expansion across the Middle East, APAC, and other international markets reflects this trend, combining regional delivery with standardized operational practices across 21 global entities.

According to Michael Aldridge, DPI’s Group Information Security Officer, organizations increasingly view localized infrastructure as a way to future-proof their digital strategies rather than constrain them.

Compliance as differentiation

As AI adoption accelerates, infrastructure and governance decisions are becoming inseparable. Organizations that can control where data lives and how AI systems operate are better positioned to manage risk, meet regulatory expectations, and move faster in regulated markets.

DPI’s approach reflects a broader industry shift: compliance is no longer just about meeting requirements, but about enabling innovation in an AI-driven environment.

To read DPI’s full perspective on data sovereignty and AI readiness, visit the company’s website.

The post Why Data Sovereignty Is Becoming a Strategic Imperative for AI Infrastructure appeared first on Data Center POST.

2025 in Review: Sabey’s Biggest Milestones and What They Mean

26 January 2026 at 18:00

Originally posted on Sabey Data Centers.

At Sabey Data Centers, progress is more than a series of headlines. It’s a blueprint for what’s possible when infrastructure, efficiency and stewardship go hand in hand. From our award-winning sustainability initiatives to bold new campus designs and record-setting expansions, each milestone this year has demonstrated our commitment to powering tomorrow’s workloads with conscience and confidence.

With 2026 already well underway and promising to be a banner year, we wanted to pause and reflect on the path we forged in 2025.

Capacity expansion: built for growth 

In 2025, we announced strategic power expansions across our Pacific Northwest, Ashburn, Austin and Quincy locations. In Seattle and Columbia, 30MW of new power is now anticipated to come online by 2027, enabling tenants to scale quickly while leveraging ultra-efficient energy (carbon-free in Seattle) and a regional average PUE as low as 1.2.

On the East Coast, Ashburn’s third and final building broke ground; it will introduce 54MW of additional capacity, with the first tranches set to come online in 2026. This will be the first three-story facility in Sabey’s portfolio, purpose-built for air-cooled, liquid-cooled and hybrid deployments, with rack densities over 100kW and an average PUE of 1.35. In both regions, Sabey’s expansion balances hyperscale demand with customization, modular scale and resilient connectivity.
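For readers less familiar with the metric, PUE (power usage effectiveness) is total facility power divided by the power delivered to IT equipment, so lower is better. A quick illustrative calculation; the IT-load figures below are hypothetical, chosen only to reproduce the PUE values quoted above:

```python
# PUE = total facility power / IT equipment power.
# At PUE 1.2, a 30 MW facility delivers 25 MW to IT equipment and spends
# 5 MW on cooling, power distribution and other overhead.
# The load splits here are illustrative, not Sabey's actual breakdown.

def pue(total_facility_mw: float, it_load_mw: float) -> float:
    return total_facility_mw / it_load_mw

print(round(pue(30.0, 25.0), 2))   # 1.2
print(round(pue(30.0, 22.2), 2))   # 1.35
```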

The launch of construction for Austin Building B was another major milestone, expanding our presence in the dynamic Round Rock tech corridor. This three-story, liquid-cooling-ready facility is designed to deliver 54 megawatts of total power capacity. Building B continues our commitment to scalable, energy-efficient digital infrastructure, tailored for enterprise and hyperscale workloads.

To continue reading, please click here.

The post 2025 in Review: Sabey’s Biggest Milestones and What They Mean appeared first on Data Center POST.

Received before yesterday

Power Should Serve Compute: Airsys’ Energy-First Approach to Data Center Cooling

21 January 2026 at 17:00

Data Center POST had the opportunity to connect with Paul Quigley, President of Airsys Cooling Technologies, Inc., ahead of PTC’26. With more than 36 years of experience spanning HVAC contracting, distribution, and executive leadership in manufacturing, Quigley is widely recognized for his work in large-scale, high-demand cooling environments. His career includes co-designing one of the largest VRF projects in North America and leading complex cooling initiatives across the United States. Since joining Airsys in 2021, Quigley has focused on advancing precision and liquid-assisted cooling technologies to help data center operators address power constraints driven by AI, high-density compute, and rapid digital infrastructure growth. In the Q&A below, Quigley shares his perspective on the challenges facing the global digital infrastructure industry and how Airsys is helping operators turn cooling efficiency into usable compute power.

Data Center Post (DCP) Question: What does your company do?  

Paul Quigley (PQ) Answer: Airsys is an energy-focused cooling solutions company founded on a simple idea: power should serve computation, not be lost protecting it. Every system we build returns power back to the data center where it belongs. Our innovative, award-winning technologies give operators more usable power for compute, improving capacity, resilience, and overall profitability.

DCP Q: What problems does your company solve in the market?

PQ A: Airsys helps data centers recover power lost to cooling. As AI clusters push energy systems past their limits, operators are forced to choose between compute, capacity, and cost. Our cooling technologies solve that problem by returning power back to the data center, reducing stranded capacity, and giving operators more usable power for compute.

DCP Q: What are your company’s core products or services?

PQ A: We design and build the world’s most advanced cooling solutions for data centers, including:

  • LiquidRack spray-cooling and liquid-assisted rack systems
  • High-efficiency DX and chilled-water cooling systems
  • Flooded-evaporator chillers (CritiCool-X)
  • Indoor and outdoor precision cooling systems
  • Edge, modular, and containerized data-center cooling
  • Control systems, energy-optimization tools, and PCE/ROIP performance frameworks

DCP Q: What markets do you serve?

PQ A:

  • Hyperscale and AI compute environments
  • Colocation and enterprise data centers
  • Modular and prefabricated data centers
  • Edge and telecom infrastructure
  • Education, industrial, government, and defense applications requiring mission-critical cooling

DCP Q: What challenges does the global digital infrastructure industry face today?

PQ A: The industry now operates in an environment of rapid compute expansion and structural power scarcity. Grid limitations, long construction timelines, inefficient legacy cooling, land constraints, and the explosive rise of AI workloads have created an energy bottleneck. Data centers no longer struggle with space… they struggle with power.

DCP Q: How is your company adapting to these challenges?

PQ A: Airsys has always shifted the conversation from cooling to energy stewardship. Every system we design focuses on reducing cooling overhead and returning power to computation. Our LiquidRack spray-cooling platform, low-lift CritiCool-X chillers, and high-efficiency DX systems enable operators to deploy more compute with less infrastructure. We’re also advancing new metrics – PCE and ROIP – to help operators quantify the financial value of returned power.

DCP Q: What are your company’s key differentiators?

PQ A:

  • Energy-first design philosophy — our systems return power to compute
  • Rapid delivery and global manufacturing — critical in today’s supply-strained market
  • LiquidRack spray cooling — enabling high-density AI clusters without stranded power
  • Flooded-evaporator chiller technology — high efficiency, low lift, faster deployment
  • End-to-end portfolio — DX, chilled water, liquid-assisted cooling, modular systems
  • Practical engineering — simple, reliable, maintainable designs
  • PCE / ROIP frameworks — financial and operational tools that change how operators evaluate cooling impact

DCP Q: What can we expect to see/hear from your company in the future?  

PQ A: You will see a major expansion of our LiquidRack platform, new ultra-efficient chiller technologies, deeper integration of PCE/ROIP metrics, and broader support for modular and edge deployments. We are continuing to push innovation toward one goal: giving operators more usable power for compute.

DCP Q: What upcoming industry events will you be attending? 

PQ A: Events Airsys is considering for 2026:

PTC’26 Hawaii; Escape the Cold Aisle Phoenix; Advancing DC Construction West; DCD>Connect New York; Data Center World DC; World of Modular Las Vegas; NSPMA; 7×24 Exchange Spring Orlando; Data Center Nation Toronto; Datacloud USA Austin; AI Infra Summit Santa Clara; DCW Power Dallas; Yotta Las Vegas; 7×24 Exchange Fall San Antonio; PTC DC; DCD>Connect Virginia; DCAC Austin; Supercomputing; Gartner IOCS; NVIDIA GTC; OCP Global Summit.

DCP Q: Do you have any recent news you would like us to highlight?

PQ A: Yes, Airsys has expanded its high-density cooling portfolio with several major advancements:

More announcements are planned for early 2026 as Airsys continues to expand its advanced cooling portfolio for high-density compute environments.

DCP Q: Is there anything else you would like our readers to know about your company and capabilities?

PQ A: Airsys is built on a simple idea: power should serve computation, not be lost protecting it. Our mission is to help operators turn cooling losses into compute gains and to make energy stewardship a competitive advantage in the AI era.

DCP Q: Where can our readers learn more about your company?  

PQ A: www.airsysnorthamerica.com

DCP Q: How can our readers contact your company? 

PQ A: www.airsysnorthamerica.com/contact

To learn more about PTC’26, please visit www.ptc.org/ptc26. The event takes place January 18-21, 2026 in Honolulu, HI.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com.

Stay in the know! Subscribe to Data Center POST today.

# # #

About Data Center POST

Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed and online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.

Data Center POST works hard to get the most current information and thought-provoking ideas most apt to add relevance to the success of the data center industry. Stay informed, visit www.datacenterpost.com.

ADVERTISE | CONTRIBUTE | SUBSCRIBE

The post Power Should Serve Compute: Airsys’ Energy-First Approach to Data Center Cooling appeared first on Data Center POST.

Issues Data Centers Face and How to Overcome Them: A Guide for Managers

20 January 2026 at 14:30

Data centers are the backbone of modern digital infrastructure. They power cloud services, financial systems, healthcare platforms, and nearly every technology-driven business. As demand for data storage and processing grows, so do the operational, financial, and risk-related challenges data centers face. For managers, understanding these issues and knowing how to address them proactively is critical to maintaining uptime, security, and long-term viability. This is what you need to know to ensure you can meet demand.

Rising Energy Costs and Efficiency Demands

One of the most persistent challenges for data centers is energy consumption. Powering servers, cooling systems, and redundancy infrastructure requires enormous amounts of electricity, and energy costs continue to rise globally. Beyond cost, there is increasing pressure from regulators and customers to reduce environmental impact.

Managers can address this by investing in energy-efficient hardware, optimizing airflow and cooling layouts, and adopting real-time monitoring tools to identify waste. Long-term strategies may include transitioning to renewable energy sources or partnering with utility providers for more favorable pricing structures.

Cooling and Thermal Management Challenges

Heat is an unavoidable byproduct of high-density computing. Inefficient cooling not only increases costs but also raises the risk of equipment failure and downtime. As server densities increase, traditional cooling methods often struggle to keep up.

Modern solutions include hot-aisle/cold-aisle containment, liquid cooling, and AI-driven thermal monitoring systems. For managers, the key is treating cooling as a dynamic system rather than fixed infrastructure, one that evolves alongside hardware demands.

Financial Risk and Insurance Considerations

Data centers face significant financial exposure from equipment damage, downtime, liability claims, and unforeseen events. Even with strong operational controls, risk cannot be eliminated entirely.

This is where insurance becomes a critical part of risk management. Evaluating coverage that aligns with the unique needs of data center operations can help protect against losses that would otherwise threaten business continuity. BOP insurance by Next Insurance can help managers think more holistically about protecting assets, operations, and revenue streams as part of an overall risk strategy.

Downtime and Business Continuity Risks

Even brief outages can result in significant financial losses, reputational damage, and contractual penalties. This downtime may be caused by:

  • Power failures
  • Human error
  • Equipment malfunction
  • External events

To mitigate this risk, managers should prioritize redundancy at every critical point, including power supplies, network connections, and backup systems. Regular testing of disaster recovery plans is just as important as having them documented: a plan that hasn’t been tested is often unreliable in real-world conditions, putting your business and the customers you serve at risk.

Cybersecurity and Physical Security Threats

Data centers face a dual security challenge: digital threats and physical risks. Cyberattacks continue to grow in sophistication, while physical threats such as unauthorized access, theft, or vandalism remain real concerns.

Addressing this requires layered security. On the digital side, this includes continuous patching, network segmentation, and monitoring for unusual activity. Physically, access controls, surveillance systems, and strict visitor protocols are essential. Managers should also ensure that staff training keeps pace with evolving threats, as human error remains a major vulnerability.

Compliance and Regulatory Pressure

Data centers often operate under complex regulatory requirements related to data privacy, industry standards, and regional laws. Compliance failures can result in fines, legal exposure, and loss of customer trust.

Managers can stay ahead by maintaining clear documentation, conducting regular audits, and working closely with legal and compliance teams. Building compliance into operational processes, rather than treating it as an afterthought, reduces risk and simplifies reporting.

Turning Challenges Into Strategic Advantage

While data centers face a wide range of operational and strategic challenges, these issues also present opportunities for improvement and differentiation. Managers who address them proactively are better positioned to deliver reliability and value in a competitive market. Don’t let your facility be the one that falls victim to these issues; take action now.

# # #

About the Author:

James Daniels is a freelance writer, business enthusiast, a bit of a tech buff, and an overall geek. He is also an avid reader, who can spend hours reading and knowing about the latest gadgets and tech, whilst offering views and opinions on these topics.

The post Issues Data Centers Face and How to Overcome Them: A Guide for Managers appeared first on Data Center POST.

DC BLOX Secures $240 Million to Accelerate Hyperscale Data Center Growth Across the Southeast

15 January 2026 at 16:00

DC BLOX has taken a significant step in accelerating digital transformation across the Southeast with the closing of a new $240 million HoldCo financing facility from Global Infrastructure Partners (GIP), a part of BlackRock. This strategic funding positions the company to rapidly advance its hyperscale data center expansion while reinforcing its role as a critical enabler of AI and cloud growth in the region.​

Strategic growth financing

The $240 million facility from GIP provides fresh growth capital dedicated to DC BLOX’s hyperscale data center strategy, building on the company’s recently announced $1.15 billion and $265 million Senior Secured Green Loans. Together, these financings support the development and construction of an expanding portfolio of digital infrastructure projects designed to meet surging demand from hyperscalers and carriers.​

Powering AI and cloud innovation

DC BLOX has emerged as a leader in connected data center and fiber network solutions, with a vertically integrated platform that includes hyperscale data centers, subsea cable landing stations, colocation, and fiber services. This model allows the company to offer end-to-end solutions for hyperscalers and communications providers seeking capacity, connectivity, and resiliency in high-growth Southeastern markets.​

Community and economic impact

The new financing is about more than infrastructure; it is also about regional economic development. DC BLOX’s investments help bring cutting-edge AI and cloud technology into local communities, while driving construction jobs, tax revenues, and power grid enhancements that benefit both customers and ratepayers.

“We are excited to partner with GIP, a part of BlackRock, to fuel our ambitious growth goals,” said Melih Ileri, Chief Investment Officer at DC BLOX. “This financing underscores our commitment to serving communities in the Southeast by bringing cutting-edge AI and cloud technology investments with leading hyperscalers into the region, and creating economic development activity through construction jobs, taxes paid, and making investments into the power grid for the benefit of our customers and local ratepayers alike.”​

Backing from leading investors

Michael Bogdan, Chairman of DC BLOX and Head of the Digital Infrastructure Group at Future Standard, highlighted that this milestone showcases the strength of the company’s vision and execution. Future Standard, a global alternative asset manager based in Philadelphia with over $86.0 billion in assets under management, leads DC BLOX’s sponsorship and recently launched its Future Standard Digital Infrastructure platform with more than $2 billion in assets. GIP, now a part of BlackRock and overseeing over $189 billion in assets, brings deep sector experience across energy, transport, and digital infrastructure, further validating DC BLOX’s role in shaping the Southeast as a global hub for AI-driven innovation.​

Read the full release here.

The post DC BLOX Secures $240 Million to Accelerate Hyperscale Data Center Growth Across the Southeast appeared first on Data Center POST.

Registration Opens for Yotta 2026 as AI Infrastructure Demand Accelerates

15 January 2026 at 15:00

Yotta 2026 is officially open for registration, returning Sept. 28–30 to Las Vegas with a larger footprint, a new venue and an expanded platform designed to meet the accelerating demands of AI-driven infrastructure. Now hosted at Caesars Forum, Yotta 2026 reflects both the rapid growth of the event and the unprecedented pace at which AI is reshaping compute, power and digital infrastructure worldwide.

As AI workloads scale faster than existing systems were designed to handle, infrastructure leaders are facing mounting challenges around power availability, capital deployment, resilience and integration across traditionally siloed industries. Yotta 2026 is built to convene the full ecosystem grappling with these realities, bringing together operators, hyperscalers, enterprise leaders, energy executives, investors, builders, policymakers and technology partners in one place.

Rebecca Sausner, CEO of Yotta, emphasizes that the event is designed for practical progress, not theoretical discussion. From chips and racks to networks, cooling, power and community engagement, AI is transforming every layer of digital infrastructure. Yotta 2026 aims to move conversations beyond vision and into real-world solutions that address scale, reliability and investment risk in an AI-first era.

A defining feature of Yotta 2026 is its advisory board-led approach to programming. The conference agenda is being developed in collaboration with the newly announced Yotta Advisory Board, which includes senior leaders from organizations spanning AI, cloud, energy, finance and infrastructure, including OpenAI, Oracle, Schneider Electric, KKR, Xcel Energy, GEICO and the Electric Power Research Institute (EPRI). This cross-sector guidance ensures the program reflects how the industry actually operates, as an interconnected system where decisions around power, compute, capital, design and policy are inseparable.

The 2026 agenda will focus on the most urgent challenges shaping the AI infrastructure era. Key themes include AI infrastructure and compute density, power generation and grid interconnection, capital formation and investment risk, design and operational resilience, and policy and public-private alignment. Together, these topics offer a market-driven view of how digital infrastructure must be designed, financed and operated to support AI at scale.

With an anticipated 6,000+ AI and digital infrastructure leaders in attendance, Yotta 2026 will feature a significantly expanded indoor and outdoor expo hall, curated conference programming and immersive networking experiences. Hosted at Caesars Forum, the event is designed to support both strategic planning and hands-on execution, creating space for collaboration across the entire infrastructure value chain.

Early registration is now open, with passes starting at $795 and discounted rates available for early registrants. As AI continues to drive unprecedented infrastructure demand, Yotta 2026 positions itself as a critical forum for the conversations and decisions shaping the future of compute, power and digital infrastructure.

To learn more or register, visit yotta-event.com.

The post Registration Opens for Yotta 2026 as AI Infrastructure Demand Accelerates appeared first on Data Center POST.

PowerBridge Appoints Debra L. Raggio as EVP and General Counsel

13 January 2026 at 15:30

PowerBridge’s mission has always centered on developing powered, gigawatt-scale data center campuses that combine energy infrastructure and digital infrastructure. As demand for gigawatt-scale campuses accelerates across the U.S., the company continues to build a team designed to meet that movement. The appointment of Debra L. Raggio as Executive Vice President and General Counsel marks an important milestone in that journey.

Debra joins PowerBridge at a time of significant growth, as the convergence of energy, power, and digital infrastructure continues to reshape how large-scale data center campuses are developed. With more than 40 years of experience in the energy industry, as well as digital infrastructure experience, specializing in natural gas, electricity, and data center markets, she brings deep regulatory and commercial expertise to the role. At PowerBridge, she will oversee legal, regulatory, environmental, government affairs, and communications, while serving as a strategic advisor to Founder and CEO Alex Hernandez and the Board.

Throughout her career, Debra has been a leading national voice in shaping regulatory frameworks across energy and digital infrastructure sectors in the United States, with experience spanning power markets such as PJM and ERCOT. Her background includes private practice at Baker Botts and executive leadership roles at major energy companies, including Talen Energy Corp.

Debra was also a founding management team member of Cumulus Data LLC, a multi-gigawatt data center campus co-located with the Susquehanna Nuclear generation station in Pennsylvania. Her regulatory, commercial, and legal leadership helped enable the development and execution of the project, culminating in its sale to Amazon in 2024. Today, that campus is the foundation for an approximately $20 billion investment supporting the continued expansion of Amazon Web Services.

That experience directly aligns with PowerBridge’s approach to combining power generation, electric campus infrastructure, pad-ready data center sites, and fiber infrastructure to serve growing data center demand while adding needed power supply to regional electric grids. Reflecting on her decision to join the company, Debra shared, “I am honored to be joining CEO Alex Hernandez and the team of executives I worked with in the formation and execution of the Cumulus Data Center Campus. I look forward to helping PowerBridge become the country’s premier powered-campus development company at multi-gigawatt scale, combining power generation, electric campus infrastructure, pad-ready data center sites, and fiber infrastructure to serve the growing need for data centers, while adding needed power supply to the electric grids, including PJM and ERCOT.”

Debra’s appointment reinforces PowerBridge’s focus on regulatory leadership, strategic execution, and disciplined growth as the company advances powered, gigawatt-scale data center campuses across the United States.

Click here to read the full press release.

The post PowerBridge Appoints Debra L. Raggio as EVP and General Counsel appeared first on Data Center POST.

Nostrum Data Centers Taps JLL Expertise, Powering Spain’s Rise as a Connectivity Hub

12 January 2026 at 13:00

Spain’s digital infrastructure landscape is entering a pivotal new phase, and Nostrum Data Centers is positioning itself at the center of that transformation. By engaging global real estate and data center advisory firm JLL, Nostrum is accelerating the development of a next-generation, AI-ready data center platform designed to support Spain’s emergence as a major connectivity hub for Europe and beyond.

Building the Foundation for an AI-Driven Future

Nostrum Data Centers, the digital infrastructure division of Nostrum Group, is developing a portfolio of sustainable, high-performance data centers purpose-built for artificial intelligence, cloud computing, and high-density workloads. In December 2025, the company announced that its data center assets will be available in 2027, with land and power already secured across all sites, an increasingly rare advantage in today’s constrained infrastructure markets.

The platform includes 500 MW of secured IT capacity, with an additional 300 MW planned for future expansion, bringing total planned development to 800 MW across Spain. This scale positions Nostrum as one of the country’s most ambitious digital infrastructure developers at a time when demand for compute capacity is accelerating across Europe.

Strategic Locations, Connected by Design

Nostrum’s six data center developments are strategically distributed throughout Spain to capitalize on existing power availability, fiber routes, internet exchanges, and subsea connectivity. This geographic diversity allows customers to deploy capacity where it best supports latency-sensitive workloads, redundancy requirements, and long-term growth strategies.

Equally central to Nostrum’s approach is sustainability. Each facility is designed in alignment with the United Nations Sustainable Development Goals (SDGs), delivering industry-leading efficiency metrics, including a Power Usage Effectiveness (PUE) of 1.1 and a Water Usage Effectiveness (WUE) of zero, eliminating water consumption for cooling.
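For reference, both metrics are simple ratios; here is a minimal sketch of how they are computed, using hypothetical annual load figures (only the 1.1 PUE and zero-WUE targets come from Nostrum’s announcement):

```python
# Illustrative PUE/WUE calculation with hypothetical load figures.
# PUE = total facility energy / IT equipment energy (ideal = 1.0).
# WUE = annual site water use (liters) / IT energy (kWh); zero means
# no water is consumed for cooling.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    return total_facility_kwh / it_kwh

def wue(water_liters: float, it_kwh: float) -> float:
    return water_liters / it_kwh

it_load_kwh = 100_000_000   # hypothetical annual IT energy
overhead_kwh = 10_000_000   # cooling, power distribution, lighting

print(pue(it_load_kwh + overhead_kwh, it_load_kwh))  # 1.1
print(wue(0.0, it_load_kwh))                         # 0.0
```

A PUE of 1.1 means only 10% of the facility’s energy goes to anything other than the IT load itself.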

Why JLL? And Why Now?

To support this next phase of growth, Nostrum has engaged JLL to strengthen its go-to-market strategy and customer engagement efforts. JLL brings deep global experience in data center advisory, site selection, and market positioning, helping operators translate technical infrastructure into compelling value for hyperscalers, enterprises, and AI-driven tenants.

“Nostrum Data Centers has a long-term vision for balancing innovation and sustainability. We offer our customers speed to market and scalability throughout our various locations in Spain, all while leading a green revolution to ensure development is done the right way as we position Spain as the next connectivity hub,” says Gabriel Nebreda, Chief Executive Officer at Nostrum Group. “We are confident that our engagement with JLL will be able to help us bolster our efforts and achieve our long-term vision.”

From JLL’s perspective, Spain presents a unique convergence of advantages.

“Spain has a unique market position with its access to robust power infrastructure, its proximity to Points of Presence (PoPs), internet exchanges, subsea connectivity, and being one of the lowest total cost of ownership (TCO) markets,” says Jason Bell, JLL Senior Vice President of Data Center and Technology Services in North America. “JLL is excited to be working with Nostrum Data Centers, providing our expertise and guidance to support their quest to be a leading data center platform in Spain, as well as position Spain as the next connectivity hub in Europe and beyond.”

Advancing Spain’s Role in the Global Digital Economy

With JLL’s support, Nostrum Data Centers is further refining its strategy to meet the technical and operational demands of AI and high-density computing without compromising on efficiency or sustainability. The result is a platform designed not just to meet today’s requirements, but to anticipate what the next decade of digital infrastructure will demand.

As hyperscalers, AI developers, and global enterprises look for scalable, energy-efficient alternatives to traditional European hubs, Spain, and Nostrum Data Centers, are increasingly part of the conversation.

Learn more about Nostrum Data Centers: www.thenostrumgroup.com/nostrum-data-centres

Click here to read the press release!

The post Nostrum Data Centers Taps JLL Expertise, Powering Spain’s Rise as a Connectivity Hub appeared first on Data Center POST.

Duos Edge AI Deploys Edge Data Center in Abilene, Texas

9 January 2026 at 17:00

Duos Technologies Group Inc. (Nasdaq: DUOT), through its operating subsidiary Duos Edge AI, Inc., has deployed a new Edge Data Center (EDC) in Abilene, Texas, in collaboration with Region 14 Education Service Center (ESC). This deployment expands Duos Edge AI’s presence in Texas while bringing advanced digital infrastructure to support K-12 education, healthcare, workforce development, and local businesses across West Texas.

This installation builds on Duos Edge AI’s recent Texas deployments in Amarillo (Region 16), Waco (Region 12), and Victoria (Region 3), supporting a broader strategy to deploy edge computing solutions tailored to education, healthcare, and enterprise needs.

“We are excited to partner with Region 14 ESC to bring cutting-edge technology to Abilene and West Texas, bringing a carrier neutral colocation facility to the market while empowering educators and communities with the tools they need to thrive in a digital world,” said Doug Recker, President of Duos and Founder of Duos Edge AI. “This EDC represents our commitment to fostering innovation and economic growth in regions that have historically faced connectivity challenges.”

The Abilene EDC will serve as a local carrier-neutral colocation facility and computing hub, delivering enhanced bandwidth, secure data processing, and low-latency AI capabilities to more than 40 school districts and charter schools across an 11-county region spanning over 13,000 square miles.

Chris Wigington, Executive Director for Region 14 ESC, added, “Collaborating with Duos Edge AI allows us to elevate the technological capabilities of our schools and partners, ensuring equitable access to high-speed computing and AI resources. This data center will be a game-changer for student learning, teacher development, and regional collaboration.”

By locating the data center at Region 14 ESC, the partnership aims to help bridge digital divides in rural and underserved communities by enabling faster access to educational tools, cloud services, and AI-driven applications, while reducing reliance on distant centralized data centers.

The EDC is expected to be fully operational in early 2026, with plans for a launch event at Region 14 ESC’s headquarters in Abilene.

To learn more about Duos Edge AI, visit www.duosedge.ai.

The post Duos Edge AI Deploys Edge Data Center in Abilene, Texas appeared first on Data Center POST.

The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling

30 December 2025 at 15:00

As data centers evolve to support AI, edge computing, and high-density cloud architectures, the challenge is no longer just maintaining an optimal power usage effectiveness (PUE); it is achieving thermal reliability under unprecedented compute loads. Direct-to-chip liquid cooling continues to push the envelope on heat transfer performance, but one underestimated element of overall system reliability is the material composition of the fluid conveyance network itself. The hoses and fittings that transport coolant through these systems operate in extreme thermal and chemical environments, and their design directly influences uptime, maintenance intervals, and total cost of ownership.

Why Material Selection Matters

At its core, a liquid cooling system is only as reliable as its weakest component. If hoses, fittings, or seals fail due to poor material compatibility, the result can be leaks, contamination, or shortened system life, leading to downtime and costly remediation. Rubber and rubber-like materials are critical in hose assemblies: they must balance flexibility for installation and serviceability with long-term resistance to temperature, pressure, permeation, and the coolant and its additives.

The challenge is that not all rubbers and rubber-like materials are created equal. Each formulation is a complex recipe of polymers, plasticizers, fillers, and curing agents designed to deliver specific performance characteristics. The wrong selection can lead to issues such as fluid permeation, premature aging, or contamination of the coolant. In mission-critical environments like data centers, where even minor disruptions are unacceptable, this risk is magnified.

Temperature and Chemical Compatibility

Although data center cooling systems typically operate at temperatures between 45°C (113°F) and 65°C (149°F), sometimes reaching 100°C (212°F), those ranges can stress certain materials. Nitrile rubber, for example, performs well in oil-based environments but ages quickly in water-glycol systems, especially at higher temperatures. This can cause hardening, cracking, or coolant contamination.

By contrast, ethylene propylene diene monomer (EPDM) rubber has excellent resistance to water, glycols, and the additives commonly used in data center coolants, such as corrosion inhibitors and biocides. EPDM maintains its properties across the required operating range, making it a proven choice for direct-to-chip applications.

However, not all EPDM is the same. Developing the right EPDM for the application demands a deep understanding of polymer chemistry, filler interactions, and process control to precisely balance flexibility, heat resistance, and long-term stability.

Additionally, the two common curing processes, sulfur curing and peroxide curing, produce different performance outcomes. Sulfur-cured EPDM, while widely used, introduces zinc ions during the curing process. When exposed to deionized water, these ions can leach into the coolant, causing contamination and potentially degrading system performance. Peroxide-cured EPDM avoids this issue, offering higher temperature resistance, lower permeation rates, and greater chemical stability, making it the superior choice for modern liquid cooling.

Even among peroxide-cured EPDM compounds, long-term performance is not uniform. While the cure system defines the crosslink chemistry, other formulation choices, particularly filler selection and dispersion, can influence how the material performs over time.

The use of fillers and additives is common in rubber compounding. These ingredients are often selected to control cost, improve processability, or achieve certain performance characteristics such as flame resistance, strength, or flexibility.

The challenge is that some filler systems perform well during initial qualification but are not optimized for the long-term exposures of the operating environment. Certain fillers or processing aids can slowly migrate over time, introducing extractables into the coolant or subtly altering elastomer properties. For data center applications, EPDM compounds must therefore be engineered with a focus on long-term stability, reinforcing why EPDM should not be treated as a commodity material in critical cooling systems.
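The selection guidelines above can be sketched as a simple screening check. The ratings and temperature limits below are illustrative assumptions distilled from this discussion, not vendor or Gates specifications:

```python
# Hypothetical elastomer-screening sketch for a water-glycol
# direct-to-chip loop. All ratings here are illustrative.

MATERIALS = {
    "nitrile":             {"max_temp_c": 100, "water_glycol_ok": False, "zinc_leach": False},
    "sulfur-cured EPDM":   {"max_temp_c": 120, "water_glycol_ok": True,  "zinc_leach": True},
    "peroxide-cured EPDM": {"max_temp_c": 150, "water_glycol_ok": True,  "zinc_leach": False},
}

def suitable(material: str, coolant_temp_c: float, deionized: bool) -> bool:
    """Screen a hose material against temperature and coolant chemistry."""
    m = MATERIALS[material]
    if coolant_temp_c > m["max_temp_c"]:
        return False
    if not m["water_glycol_ok"]:
        # e.g. nitrile ages quickly in water-glycol systems
        return False
    if deionized and m["zinc_leach"]:
        # zinc ions from sulfur curing can leach into deionized coolant
        return False
    return True

for name in MATERIALS:
    print(name, suitable(name, coolant_temp_c=65, deionized=True))
# nitrile False
# sulfur-cured EPDM False
# peroxide-cured EPDM True
```

Note that such a static check only captures initial compatibility; the long-term aging and filler-migration behavior discussed above still has to be established through extended testing.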

Risks of Non-Compatible Materials

Material incompatibility can have several cascading effects:

  • Contamination – Non-compatible materials can leach extractables into the coolant, leading to discoloration, chemical imbalance, and reduced thermal efficiency.
  • Permeation – Some rubbers allow fluid to slowly migrate through the hose walls, causing coolant loss or altering the fluid mixture over time.
  • Premature Failure – Elevated temperatures can accelerate aging, leading to cracking, swelling, or loss of mechanical strength.
  • Leakage – Rubber under compression may deform over time, jeopardizing seal integrity if not properly formulated for resistance to compression set and tear.

In a recent two-week aging test at 80°C using a water-glycol coolant, hoses made of nitrile and sulfur-cured EPDM showed visible discoloration of the coolant, indicating leaching and breakdown of the material. Peroxide-cured EPDM, on the other hand, maintained stability, demonstrating its compatibility and reliability in long-term data center applications.

The Gates Approach

Drawing on lessons from mission critical industries that have managed thermal challenges for decades, Gates engineers apply material science rigor to the design of liquid cooling hoses for data center applications.

Rather than relying solely on initial material ratings or short-term qualification criteria, Gates begins by tailoring compound design to the operating environment. This includes deliberate control of polymer selection, filler systems, and cure chemistry to manage long-term aging behavior, extractables, permeation, and retention of mechanical properties over time in high-purity coolant systems.

Compounds are validated through extended aging and immersion testing that reflects real operating conditions, including exposure to heat, deionized water, and water-glycol coolants. This allows potential material changes to be identified and addressed during development, before installation in the field.

This material-science-driven process is applied across Gates liquid cooling platforms, including the Data Master, Data Master MegaFlex, and newly released Data Master Eco product lines. By engineering for long-term stability rather than only initial compliance, Gates designs hose solutions intended to support reliable operation, predictable maintenance intervals, and extended service life in direct-to-chip liquid-cooled data center environments.

Looking Ahead

As data centers continue to scale, thermal management solutions must adapt in parallel. Advanced architectures, higher rack densities, and growing environmental regulations all point to a future where liquid cooling is standard. In this environment, material selection is no longer a secondary consideration; it is foundational to system reliability.

Operators who prioritize material compatibility in fluid conveyance lines will benefit from longer service intervals, improved coolant stability, and reduced risk of downtime. In other words, the proper rubber formulation doesn’t just move fluid; it moves the industry forward.

At Gates, sustainable, high-performance cooling begins with the details. By focusing on the science of materials, we help ensure that data center operators can confidently deploy liquid cooling systems designed for the challenges of today and the innovations of tomorrow.

# # #

About the Author

Chad Chapman is a Mechanical Engineer with over 20 years of experience in the fluid power industry. He currently serves as a Product Application Engineering Manager at Gates, where he leads a team that provides technical guidance, recommendations, and innovative solutions to customers utilizing Gates products and services.

Driven by a passion for problem-solving, Chad thrives on collaborating with customers to understand their unique challenges and deliver solutions that optimize performance. He is energized by learning about new applications and technologies, especially where insights can be shared across industries. At Gates, he has been exploring the emerging field of direct-to-chip liquid cooling, an exciting extension of his deep expertise in thermal management. The rapid advancements in IT technology and AI have made his journey an inspiring and rewarding learning experience.

The post The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling appeared first on Data Center POST.

The Factors That Are Actually Determining the Whereabouts of Hyperscale Data Center Growth

24 December 2025 at 16:00

Individual data centers are now being planned with power loads that exceed those of America’s largest nuclear plants. This is happening for the first time in U.S. history, and it’s reshaping how and where AI infrastructure can realistically be built.

The scale of demand is like nothing we’ve seen before, and developers who once focused on site access, tax incentives, and interconnection speed are now having to evaluate something far more fundamental. They’re assessing whether the grid can deliver the gigawatts of reliable capacity that AI truly requires. This shift is redrawing the map of U.S. data center development, simply because traditional markets are straining under this new, unprecedented load. As a result, new regions are emerging as contenders.

There’s no shortage of opinions on which states are best positioned to capitalize on this moment. Environmentally, states like Texas, Montana, South Dakota, and Nebraska offer low-carbon power, minimal water stress, and long-term sustainability advantages that could help hyperscalers move toward net-zero goals. But development doesn’t always follow ideals. It follows infrastructure. And today, project velocity is rapidly accelerating in states with existing fiber, redundant grid access, fast permitting, and an experienced labor pool, not just green credentials.

The Power Constraint Reality

Transmission and generation limitations have become the number-one barrier to new development. Utilities that once welcomed projects are now warning developers about decade-long delays. In San Antonio, for example, utility officials are telling data center companies that additional capacity may not be available until 2032.

Five years ago, a typical large data center might have drawn 20 to 50 megawatts. Today, AI-focused facilities are routinely planned at 300 megawatts or more; in theory, that’s enough power to sustain a small city. The U.S. currently has 40 gigawatts of operational data center capacity, up from just 26 gigawatts at the end of 2023, and another 24 gigawatts is under construction. In other words, total capacity is set to double in roughly two years.
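The arithmetic behind those figures is easy to check; a quick sketch using only the numbers cited above:

```python
# Back-of-envelope check of the U.S. capacity figures cited above (in GW).
end_2023 = 26.0           # operational capacity at end of 2023
operational = 40.0        # operational capacity today
under_construction = 24.0

projected = operational + under_construction  # total once construction completes
print(projected)                              # 64.0 GW
print(round(projected / operational, 2))      # 1.6x today's operational base
print(round(projected / end_2023, 2))         # 2.46x the end-2023 base
```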

At Industrial Info Resources, we’re tracking $3.3 trillion in global data center infrastructure investments, including $1 trillion announced in the U.S. in just the past nine months. Yet much of the transmission system these projects rely on is more than 40 years old, strained, and unable to accommodate this surge without major upgrades.

Every data center project is now, ultimately, a power project.

Why Traditional Hotspots Are Reaching Capacity

States like Texas, Ohio, Georgia, and Illinois rose to prominence because of low-cost electricity, abundant natural gas, deep labor pools, and cooperative regulatory environments. But even these markets are showing signs of saturation. Interconnection queues are backed up, delivery timelines are slipping, and developers who once viewed these areas as infinitely scalable are reassessing their options.

The PJM Interconnection, a regional transmission organization covering 13 states, recorded an 800% spike in wholesale capacity prices in its latest auction. The surge was driven by tightening reserve margins and insufficient baseload additions.

Still, not all leading markets are losing momentum; the reality is much more nuanced. Let me explain:

1) Virginia Still Dominates: Here’s Why

Roughly 70% of the world’s internet traffic flows through Loudoun County. That alone keeps Northern Virginia at the top of the list. Add unmatched fiber density, low latency access to East Coast population centers, $50 billion in Dominion Energy grid upgrades, and powerful tax incentives, and one could argue that Virginia remains the most competitive and strategically essential data center market in the world.

2) Despite the Circumstances, Texas Remains a Growth Engine

Texas offers massive wind, solar, and battery energy storage growth, abundant natural gas, and a regulatory environment that supports rapid load growth. ERCOT’s structure allows for faster interconnections, and the state is fast-tracking permits for behind-the-meter natural gas plants to bridge developers until zero-carbon grid supply scales. Again, one could argue that this combination keeps Texas squarely in the number-one or number-two position for AI capacity additions.

3) And the Next Tier of High-Velocity Markets

Georgia continues to attract hyperscale interest with low power prices, tax credits, and significant fiber expansion across the Southeast. This draws development into Alabama and Florida as well. Other hot markets remain in Pennsylvania, Utah, Arizona, Illinois, and Ohio due to their mix of low-cost power, fiber proximity, and access to skilled labor.

The Behind-the-Meter Power Trend

With grid availability tightening, more operators are turning to behind-the-meter solutions such as natural gas fuel cells, turbines, and reciprocating engines. These systems provide bridge power while utilities work through multi-year transmission expansions, and states that permit these assets quickly have a clear advantage.

Even operators committed to 100% renewable energy now recognize the need for reliable natural gas backup to maintain uptime and support local grid stability. A Duke University study found that 40 demand-curtailment events could enable 75 gigawatts of additional data center capacity without requiring new transmission.

The study illustrates a clear reality: flexible load management and temporary curtailment will become essential tools for enabling the next wave of AI-driven growth.

What Emerging States Need to Win

For states seeking to break into the market, the fundamentals matter most: lower-cost electricity, abundant natural gas, rural land with favorable tax structures, and large parcels suitable for multi-hundred-megawatt campuses.

Arizona is a great example. The state is developing a new natural gas pipeline from the Permian Basin, specifically to support its expanding data center ecosystem. But not all announced projects move forward. Constraints in interconnection studies, available power, or realistic timelines cause many proposals to stall. Tracking approved projects versus announced projects is essential to understanding true market momentum, especially in an industry where nondisclosure agreements tend to limit visibility.

The Reliability Crisis and the Path Forward

From roughly 2014 to 2024, U.S. electricity demand was essentially flat. Today, it is growing about 2% annually, driven largely by AI. The challenge is that renewable energy has expanded far faster than the dispatchable baseload required to support it, widening the reliability gap.
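Compounding makes that 2% figure significant; a quick projection (the growth rate comes from this article, while the ten-year horizon and baseline index are illustrative):

```python
# Compound-growth projection of U.S. electricity demand at ~2%/year.
rate = 0.02
demand = 100.0  # arbitrary index representing today's demand

for year in range(1, 11):
    demand *= 1 + rate

print(round(demand, 1))  # ~121.9 after 10 years (~22% cumulative growth)
```

After a decade of flat demand, roughly a fifth more generation and transmission capacity would be needed in the next decade just to keep pace.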

Meeting future AI needs will require a multi-pronged strategy that entails new natural gas plants, delayed coal retirements over the next five to six years, expanded battery storage, and eventually next-generation nuclear. Several major technology companies are already investing in small modular reactors as part of their long-term portfolios.

Data centers themselves can help stabilize the grid through battery storage deployments, demand-response participation, and flexible load practices. But long-term success ultimately depends on whether states can modernize transmission infrastructure, streamline interconnection processes, and commit to realistic baseload planning.

The states that move decisively to address these constraints, and align their infrastructure with the power-first reality of AI, will capture an outsized share of the coming investment. AI-driven demand is not slowing, and the next chapter of America’s energy and technology landscape will be written by those preparing for this moment now.

# # #

About the Author

Shane Mullins brings over 30 years of experience in energy market intelligence and database management to his role at Industrial Info Resources. He specializes in product development for energy equipment and service providers, leveraging decades of industry insight to help clients make informed, data-driven decisions in a rapidly evolving energy landscape.

The post The Factors That Are Actually Determining the Whereabouts of Hyperscale Data Center Growth appeared first on Data Center POST.

Reflecting on a Year of Global Growth at Datalec Precision Installations

19 December 2025 at 13:30

As 2025 comes to a close, Tim Hickinbottom, Head of Strategic Accounts at Datalec Precision Installations (DPI), is reflecting on a milestone year both personally and professionally. With nearly four decades in the digital infrastructure and technology sector, Hickinbottom’s perspective offers insight into how experience, adaptability, and long-term vision continue to shape growth in an evolving industry.

A Career Built on Experience and Adaptability

Hickinbottom’s career began in 1986 at Compucorp and includes formative years in the Royal Navy and with British Aerospace in Saudi Arabia. These early experiences helped shape a leadership approach grounded in resilience, discipline, and adaptability, qualities that remain critical as data center and mission-critical services grow more complex and globally connected.

A Defining Year 

In 2025, DPI sustained its year-on-year growth while expanding into new regions. The launch of operations in APAC, continued momentum in the Middle East, and steady growth across Europe marked one of the company’s busiest periods to date. By year-end, DPI expects to operate 23 entities worldwide, with further expansion already underway.

According to Hickinbottom, this progress reflects both strong market demand and a deliberate strategy focused on operational discipline and long-term stability.

Strategy, Engagement, and Sustainability

Behind the visible growth is a leadership team focused on reinvestment and sustainable expansion. While much of this work occurs behind the scenes, evolving strategies and internal alignment are shaping DPI’s direction.

Throughout the year, DPI reinforced its global presence at major industry events including Datacentre World and GITEX conferences across multiple regions. At the same time, the company advanced its sustainability efforts, earning recognition from CDP and EcoVadis and preparing to share its Science Based Targets.

“These initiatives matter deeply to our clients and partners,” Hickinbottom notes, emphasizing accountability and environmental stewardship as core elements of industry leadership.

Looking Ahead to 2026

As DPI looks toward 2026, Hickinbottom remains optimistic about the challenges and opportunities ahead. With hard work embedded in the company’s culture and a clear focus on innovation, DPI is positioned to continue supporting data center operators and digital infrastructure stakeholders worldwide.

“Work should be enjoyable,” Hickinbottom reflects. “It’s been an incredible journey so far, and I’m excited for what’s next.”

To explore Hickinbottom’s full reflections on 2025 and his perspective on the year ahead, read the complete blog on Datalec Precision Installations’ website here.

The post Reflecting on a Year of Global Growth at Datalec Precision Installations appeared first on Data Center POST.

Evocative Advances Data Center Growth Strategy With New Financing

17 December 2025 at 17:00

Evocative, a global provider of Internet infrastructure, has announced that it has raised debt financing from a large global investment firm, complementing continued equity support from its long-term investment partner, Crestline Investors. The financing reflects the next phase of a multi-year growth plan focused on scaling capacity in step with customer demand.

The investment will enable targeted infrastructure initiatives, including capacity upgrades, strategic metro expansions, and continued enhancements across Evocative’s data center, network, bare metal, and cloud platforms as the company responds to rising requirements for power, space, and network density to support enterprise and service provider customers.

“Crestline has worked closely with Evocative as the company continues to execute its strategic business plan,” said Will Palmer, Executive Managing Director and Co-Head of US Corporate Credit. “We believe Evocative is well positioned to meet the increasing demands of the digital infrastructure industry, and we are pleased to support their ongoing expansion and long-term vision.”

Crestline Investors has partnered with Evocative for several years, supporting the company through multiple phases of its strategic growth plan and backing its efforts to scale a global digital infrastructure platform focused on high-density colocation and connectivity. This latest financing builds on that foundation, providing capital to expand where demand is already taking shape.

“This financing marks a significant milestone in Evocative’s continued journey to expand capacity and deliver on our long-term vision of high density colocation and a robust global network to support next generation AI applications,” said Derek Garnier, CEO at Evocative. “We remain committed to building with discipline, scale, and customer focus. Our aim is to continue delivering the space, power, and connectivity required for AI development, hybrid cloud environments, and infrastructure diversification.”

As demand for AI-driven and hybrid infrastructure continues to grow, Evocative remains focused on expanding capacity with discipline across its digital infrastructure platform, aligning investment with real-world deployment needs rather than speculative buildout.

To learn more about Evocative’s digital infrastructure solutions, read the full press release here.

The post Evocative Advances Data Center Growth Strategy With New Financing appeared first on Data Center POST.

Nostrum Data Centers to Deliver 500 MW of AI-Ready, Sustainable Capacity in Spain by 2027

17 December 2025 at 14:00

As demand for AI, cloud, and hyperscale infrastructure accelerates across Europe, Nostrum Data Centers is advancing a new generation of sustainable, high-performance data center assets in Spain, with availability beginning in 2027.

The Spain-based developer is delivering more than 500 MW of IT capacity, supported by secured land and power, enabling customers to move quickly from planning to deployment. With 300 MW of power already secured and scalable to 500 MW, Nostrum is addressing Europe’s growing need for resilient, efficient digital infrastructure.

Earlier this month, Nostrum Data Centers, part of Nostrum Group, announced that AECOM will design and manage its $2.1 billion data center campus in Badajoz, one of six strategically located developments across the country. These sites leverage Spain’s strong subsea connectivity, competitive energy costs, and robust power availability to support scalable growth.

“Our Spain-based data centers combine strategic site selection, secured power connections, and AI-ready infrastructure to meet the demands of the next-generation digital economy,” said Gabriel Nebreda, Chief Executive Officer at Nostrum Group. “Our team of industry leaders with over 25 years of experience is developing facilities that are not only highly efficient and scalable but also fully sustainable, supporting both our customers’ growth and global climate goals.”

Engineered for high-density AI and cloud workloads, Nostrum’s facilities are designed to achieve a PUE of 1.1 and a WUE of zero, eliminating water usage for cooling. Collectively, the developments are expected to prevent 10 million metric tonnes of CO2 emissions, aligning with the United Nations Sustainable Development Goals.

Nostrum’s 2027 delivery timeline reinforces its commitment to providing efficient, future-ready infrastructure across Spain for AI, cloud, and hyperscale customers.

Learn more about Nostrum Data Centers: www.thenostrumgroup.com/nostrum-data-centres

Click here to read the press release!

The post Nostrum Data Centers to Deliver 500 MW of AI-Ready, Sustainable Capacity in Spain by 2027 appeared first on Data Center POST.

infra/CAPITAL Summit 2026 Heads to Paris as Europe’s Hyperscale AI Forum

11 December 2025 at 16:00

The European digital infrastructure community will convene in Paris next spring for the launch of the inaugural infra/CAPITAL Summit, taking place 15–16 April 2026 at the Kimpton St Honoré. Hosted in partnership by Structure Research and The Tech Capital, infra/CAPITAL is designed as a vendor‑neutral, executive‑level gathering dedicated to the intersection of hyperscale AI infrastructure and institutional capital.

Positioned as the European Summit for Hyperscale AI Strategy & Execution, infra/CAPITAL will focus on the capital, infrastructure and policy decisions reshaping how AI and cloud platforms scale across the region. The summit will bring together data centre operators, cloud and hyperscale leaders, infrastructure investors, lenders, advisors and policymakers for two days of focused discussions, market intelligence and high‑value networking.

“We established infra/CAPITAL to create a European platform where the future of hyperscale and AI infrastructure can be designed – not just discussed,” said Philbert Shih, Managing Director at Structure Research. “This summit brings together operators, investors and decision‑makers so we can use data – not hype – to chart a sustainable and scalable path for the next generation of digital infrastructure.”

A Program Built Around Capital, Power and Policy

infra/CAPITAL’s agenda is centred on the realities of building and financing AI‑ready infrastructure in Europe. Sessions will explore topics such as power and site strategy, structuring and pricing risk in hyperscale developments, cross‑border expansion, ESG and regulatory requirements, and the evolving role of neocloud and edge in AI architectures. The program will blend independent research, fireside chats and panel discussions with perspectives from across the ecosystem.

“infra/CAPITAL fills a crucial gap in Europe’s data centre and AI infrastructure ecosystem,” added João Marques Lima, Managing Director at The Tech Capital. “By convening cloud and hyperscale leaders alongside capital allocators and industry analysts, we’re building a vital marketplace of ideas and connections – one that will help drive the investments and partnerships shaping tomorrow’s data economy.”

Networking, Partnerships and a Shared Mission

With curated programming, invite‑driven networking and opportunities for structured and informal conversations, infra/CAPITAL Summit 2026 is designed to help participants forge meaningful relationships and unlock new deal pathways. For Structure Research and The Tech Capital, the event extends a shared mission: to support the global digital infrastructure community with independent insight and to convene the decision‑makers who translate strategy into execution.

To learn more or register for infra/CAPITAL Summit 2026, visit: www.infracapitalsummit.com

For more information about Structure Research visit: www.structureresearch.net

For more information about The Tech Capital visit: www.thetechcapital.com

The post infra/CAPITAL Summit 2026 Heads to Paris as Europe’s Hyperscale AI Forum appeared first on Data Center POST.

The Rising Risk Profile of CDUs in High-Density AI Data Centers

10 December 2025 at 17:00

AI has pushed data center thermal loads to levels the industry has never encountered. Racks that once operated comfortably at 8-15 kW are now climbing past 50-100 kW, driving an accelerated shift toward liquid cooling. This transition is happening so quickly that many organizations are deploying new technologies faster than they can fully understand the operational risks.

In my recent five-part LinkedIn series:

  • 2025 U.S. Data Center Incident Trends & Lessons Learned (9-15-2025)
  • Building Safer Data Centers: How Technology is Changing Construction Safety (10-1-2025)
  • The Future of Zero-Incident Data Centers (10-15-2025)
  • Measuring What Matters: The New Safety Metrics in Data Centers (11-1-2025)
  • Beyond Safety: Building Resilient Data Centers Through Integrated Risk Management (11-15-2025)

a central theme emerged: as systems become more interconnected, risks become more systemic.

That same dynamic influenced the Direct-to-Chip Cooling: A Technical Primer article that Steve Barberi and I published in Data Center POST (10-29-2025). Today, we are observing this systemic-risk framework emerging specifically in the growing role of Cooling Distribution Units (CDUs).

CDUs have evolved from peripheral equipment to a true point of convergence for engineering design, controls logic, chemistry, operational discipline, and human performance. As AI rack densities accelerate, understanding these risks is becoming essential.

CDUs: From Peripheral Equipment to Critical Infrastructure

Historically, CDUs were treated as supplemental mechanical devices. Today, they sit at the center of the liquid-cooling ecosystem governing flow, pressure, temperature stability, fluid quality, isolation, and redundancy. In practice, the CDU now operates as the boundary between stable thermal control and cascading instability.

Yet, unlike well-established electrical systems such as UPSs, switchgear, and feeders, CDUs lack decades of operational history. Operators, technicians, commissioning agents, and even design teams have limited real-world reference points. That blind spot is where a new class of risk is emerging, and three patterns are showing up most frequently.

A New Risk Landscape for CDUs

  • Controls-Layer Fragility
    • Controls-related instability remains one of the most underestimated issues in liquid cooling. Many CDUs still rely on single-path PLC architectures, limited sensor redundancy, and firmware not designed for the thermal volatility of AI workloads. A single inaccurate pressure, flow, or temperature reading can trigger inappropriate or incorrect system responses affecting multiple racks before anyone realizes something is wrong.
  • Pressure and Flow Instability
    • AI workloads surge and cycle, producing heat patterns that stress pumps, valves, gaskets, seals, and manifolds in ways traditional IT never did. These fluctuations are accelerating wear modes that many operators are just beginning to recognize. Illustrative Open Compute Project (OCP) design examples (e.g., 7–10 psi operating ranges at relevant flow rates) are helpful reference points, but they are not universal design criteria.
  • Human-Performance Gaps
    • CDU-related high-potential near misses (HiPo NMs) frequently arise during commissioning and maintenance, when technicians are still learning new workflows. For teams accustomed to legacy air-cooled systems, tasks such as valve sequencing, alarm interpretation, isolation procedures, fluid handling, and leak response are unfamiliar. Unfortunately, as noted in my Building Safer Data Centers post, when technology advances faster than training, people become the first point of vulnerability.

Photo: Borealis CDU (courtesy of AGT)

Additional Risks Emerging in 2025 Liquid-Cooled Environments

Beyond the three most frequent patterns noted above, several quieter but equally impactful vulnerabilities are also surfacing across 2025 deployments:

  • System Architecture Gaps
    • Some first-generation CDUs and loops lack robust isolation, bypass capability, or multi-path routing. A single point of failure, such as a valve, pump, or PLC, can drive a full-loop shutdown, mirroring the cascading-risk behaviors highlighted in my earlier work on resilience.
  • Maintenance & Operational Variability
    • SOPs for liquid cooling vary widely across sites and vendors. Fluid handling, startup/shutdown sequences, and leak-response steps remain inconsistent, creating conditions for preventable HiPo NMs.
  • Chemistry & Fluid Integrity Risks
    • As highlighted in the DTC article Steve Barberi and I co-authored, corrosion, additive depletion, cross-contamination, and stagnant zones can quietly degrade system health. ICP-MS analysis and other advanced techniques are recommended in OCP-aligned coolant programs for PG-25-class fluids, though not universally required.
  • Leak Detection & Nuisance Alarms
    • False positives and false negatives, especially across BMS/DCIM integrations, remain common. Predictive analytics are becoming essential despite not yet being formalized in standards.
  • Facility-Side Dynamics
    • Upstream conditions such as temperature swings, ΔP fluctuations, water hammer, cooling tower chemistry, and biofouling often drive CDU instability. CDUs are frequently blamed for behavior originating in facility water systems.
  • Interoperability & Telemetry Semantics
    • Inconsistent Modbus, BACnet, and Redfish mappings, naming conventions, and telemetry schemas create confusion and delay troubleshooting.
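One common mitigation for the nuisance-alarm problem noted above is a persistence (N-of-M) filter at the BMS/DCIM integration layer, so a single wet sample cannot assert a leak alarm. The sketch below is illustrative, assuming a polled boolean input; the window size and threshold are made-up values, not from any standard:

```python
# Illustrative debounce filter for a polled leak-detection point.
from collections import deque

class LeakAlarmFilter:
    """Assert a leak alarm only when `required` of the last `window`
    polls read wet, suppressing single-sample false positives."""
    def __init__(self, window: int = 5, required: int = 3):
        self.samples = deque(maxlen=window)
        self.required = required

    def update(self, wet: bool) -> bool:
        self.samples.append(wet)
        return sum(self.samples) >= self.required

f = LeakAlarmFilter()
readings = [False, True, False, True, True]  # intermittent, then persistent
states = [f.update(r) for r in readings]
print(states)  # → [False, False, False, False, True]
```

The trade-off is detection latency: a larger window suppresses more chatter but delays a genuine alarm by additional poll intervals, which is why persistence thresholds should be tuned per point rather than set globally.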

Best Practices: Designing CDUs for Resilience, Not Just Cooling Capacity

If CDUs are going to serve as the cornerstone of liquid cooling in AI environments, they must be engineered around resilience, not simply performance. Several emerging best practices are gaining traction:

  1. Controls Redundancy
    • Dual PLCs, dual sensors, and cross-validated telemetry signals reduce single-point failure exposure. These features do not have prescriptive standards today but are rapidly emerging as best practices for high-density AI environments.
  2. Real-Time Telemetry & Predictive Insight
    • Detecting drift, seal degradation, valve lag, and chemistry shift early is becoming essential. Predictive analytics and deeper telemetry integration are increasingly expected.
  3. Meaningful Isolation
    • Operators should be able to isolate racks, lines, or nodes without shutting down entire loops. In high-density AI environments, isolation becomes uptime.
  4. Failure-Mode Commissioning
    • CDUs should be tested not only for performance but also for failure behavior such as PLC loss, sensor failures, false alarms, and pressure transients. These simulations reveal early-life risk patterns that standard commissioning often misses.
  5. Reliability Expectations
    • CDU design should align with OCP’s system-level reliability expectations, such as MTBF targets on the order of >300,000 hours for OAI Level 10 assemblies, while recognizing that CDU-specific requirements vary by vendor and application.
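To put the MTBF expectation in operational terms, a minimal steady-state availability calculation helps; the 300,000-hour MTBF is the OCP-cited figure from the text, while the 8-hour mean time to repair is my assumption for the sketch:

```python
# Back-of-the-envelope availability from MTBF and an assumed MTTR.
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

a = availability(300_000, 8.0)          # 8 h MTTR is an assumption
print(f"{a:.6f}")                        # → 0.999973
print(f"{(1 - a) * 8760:.2f} expected downtime hours/year")  # → 0.23
```

Even at that MTBF, roughly a quarter-hour of expected annual downtime per unit is why the isolation and redundancy practices above matter: in a loop serving dozens of racks, unit-level unavailability compounds unless failures can be contained.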

Standards Alignment

The risks and mitigation strategies outlined above align with emerging guidance from ASHRAE TC 9.9 and the OCP’s liquid-cooling workstreams, including:

  • OAI System Liquid Cooling Guidelines
  • Liquid-to-Liquid CDU Test Methodology
  • ASTM D8040 & D1384 for coolant chemistry durability
  • IEC/UL 62368-1 for hazard-based safety
  • ASHRAE 90.4, PUE/WUE/CUE metrics, and
  • ANSI/BICSI 002, ISO/IEC 22237, and Uptime’s Tier Standards emphasizing concurrently maintainable infrastructure.

These collectively reinforce a shift: CDUs must be treated as availability-critical systems, not auxiliary mechanical devices.

Looking Ahead

The rise of CDUs represents a moment the data center industry has seen before. As soon as a new technology becomes mission-critical, its risk profile expands until safety, engineering, and operations converge around it. Twenty years ago, that moment belonged to UPS systems. Ten years ago, it was batteries. Now, in AI-driven environments, it is the CDU.

Organizations that embrace resilient CDU design, deep visibility, and operator readiness will be the ones that scale AI safely and sustainably.

# # #

About the Author

Walter Leclerc is an independent consultant and recognized industry thought leader in Environmental Health & Safety, Risk Management, and Sustainability, with deep experience across data center construction and operations, technology, and industrial sectors. He has written extensively on emerging risk, liquid cooling, safety leadership, predictive analytics, incident trends, and the integration of culture, technology, and resilience in next-generation mission-critical environments. Walter led the initiatives that earned Digital Realty the Environment+Energy Leader’s Top Project of the Year Award for its Global Water Strategy and recognition on EHS Today’s America’s Safest Companies List. A frequent global speaker on the future of safety, sustainability, and resilience in data centers, Walter holds a B.S. in Chemistry from UC Berkeley and an M.S. in Environmental Management from the University of San Francisco.

The post The Rising Risk Profile of CDUs in High-Density AI Data Centers appeared first on Data Center POST.

Where Is AI Taking Data Centers?

10 December 2025 at 16:00

A Vision for the Next Era of Compute from Structure Research’s Jabez Tan

Framing the Future of AI Infrastructure

At the infra/STRUCTURE Summit 2025, held October 15–16 at the Wynn Las Vegas, Jabez Tan, Head of Research at Structure Research, opened the event with a forward-looking keynote titled “Where Is AI Taking Data Centers?” His presentation provided a data-driven perspective on how artificial intelligence (AI) is reshaping digital infrastructure, redefining scale, design, and economics across the global data center ecosystem.

Tan’s session served as both a retrospective on how far the industry has come and a roadmap for where it’s heading. With AI accelerating demand beyond traditional cloud models, his insights set the tone for two days of deep discussion among the sector’s leading operators, investors, and technology providers.

From the Edge to the Core – A Redefinition of Scale

Tan began by looking back just a few years to what he called “the 2022 era of edge obsession.” At that time, much of the industry believed the future of cloud would depend on thousands of small, distributed edge data centers. “We thought the next iteration of cloud would be hundreds of sites at the base of cell towers,” Tan recalled. “But that didn’t really happen.”

Instead, the reality has inverted. “The edge has become the new core,” he said. “Rather than hundreds of small facilities, we’re now building gigawatts of capacity in centralized regions where power and land are available.”

That pivot, Tan emphasized, is fundamentally tied to economics, where cost, energy, and accessibility converge. It reflects how hyperscalers and AI developers are chasing efficiency and scale over proximity, redefining where and how the industry grows.

The AI Acceleration – Demand Without Precedent

Tan then unpacked the explosive demand for compute since late 2022, when AI adoption began its steep ascent following the launch of ChatGPT. He described the industry’s trajectory as a “roller coaster” marked by alternating waves of panic and optimism—but one with undeniable momentum.

The numbers he shared were striking. NVIDIA’s GPU shipments, for instance, have skyrocketed: from 1.3 million H100 Hopper GPUs in 2024 to 3.6 million Blackwell GPUs sold in just the first three months of 2025, nearly a threefold increase in unit volume. “That translates to an increase from under one gigawatt of GPU-driven demand to over four gigawatts in a single year,” Tan noted.

Tan linked this trend to a broader shift: “AI isn’t just consuming capacity, it’s generating revenue.” Large language model (LLM) providers like OpenAI, Anthropic, and xAI are now producing billions in annual income directly tied to compute access, signaling a business model where infrastructure equals monetization.

Measuring in Compute, Not Megawatts

One of the most notable insights from Tan’s session was his argument that power is no longer the most accurate measure of data center capacity. “Historically, we measured in square footage, then in megawatts,” he said. “But with AI, the true metric is compute, the amount of processing power per facility.”

This evolution is forcing analysts and operators alike to rethink capacity modeling and investment forecasting. Structure Research, Tan explained, is now tracking data centers by compute density, a more precise reflection of AI-era workloads. “The way we define market share and value creation will increasingly depend on how much compute each facility delivers,” he said.

From Training to Inference – The Next Compute Shift

Tan projected that as AI matures, the balance between training and inference workloads will shift dramatically. “Today, roughly 60% of demand is tied to training,” he explained. “Within five years, 80% will be inference.”

That shift will reshape infrastructure needs, pushing more compute toward distributed yet interconnected environments optimized for real-time processing. Tan described a future where inference happens continuously across global networks, increasing utilization, efficiency, and energy demands simultaneously.

The Coming Capacity Crunch

Perhaps the most sobering takeaway from Tan’s talk was his projection of a looming data center capacity shortfall. Based on Structure Research’s modeling, global AI-related demand could grow from 13 gigawatts in 2025 to more than 120 gigawatts by 2030, far outpacing current build rates.

“If development doesn’t accelerate, we could face a 100-gigawatt gap by the end of the decade,” Tan cautioned. He noted that 81% of capacity under development in the U.S. today comes from credible, established providers, but even that won’t be enough to meet demand. “The solution,” he said, “requires the entire ecosystem (utilities, regulators, financiers, and developers) to work in sync.”
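The scale of that projection is easier to grasp as a growth rate. Using the figures from the talk (the calculation itself is mine, not Structure Research's), the implied compounding works out as follows:

```python
# Implied compound annual growth rate from Tan's 2025 and 2030 figures.
def implied_cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

cagr = implied_cagr(13, 120, 5)  # 13 GW in 2025 -> 120 GW in 2030
print(f"{cagr:.1%}")             # → 56.0%
```

A sustained ~56% annual growth rate in delivered capacity is far above historical data center build rates, which is what makes the projected shortfall plausible.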

Fungibility, Flexibility, and the AI Architecture of the Future

Tan also emphasized that AI architecture must become fungible, able to handle both inference and training workloads interchangeably. He explained how hyperscalers are now demanding that facilities support variable cooling and compute configurations, often shifting between air and liquid systems based on real-time needs.

“This isn’t just about designing for GPUs,” he said. “It’s about designing for fluidity, so workloads can move and scale without constraint.”

Tan illustrated this with real-world examples of AI inference deployments requiring hundreds of cross-connects for data exchange and instant access to multiple cloud platforms. “Operators are realizing that connectivity, not just capacity, is the new value driver,” he said.

Agentic AI – A Telescope for the Mind

To close, Tan explored the concept of agentic AI, systems that not only process human inputs but act autonomously across interconnected platforms. He compared its potential to the invention of the telescope.

“When Galileo introduced the telescope, it challenged humanity’s view of its place in the universe,” Tan said. “Large language models are doing something similar for intelligence. They make us feel small today, but they also open an entirely new frontier for discovery.”

He concluded with a powerful metaphor: “If traditional technologies were tools humans used, AI is the first technology that uses tools itself. It’s a telescope for the mind.”

A Market Transformed by Compute

Tan’s session underscored that AI is redefining not only how data centers are built but also how they are measured, financed, and valued. The industry is entering an era where compute density is the new currency, where inference will dominate workloads, and where collaboration across the entire ecosystem is essential to keep pace with demand.

infra/STRUCTURE 2026: Save the Date

Want to tune in live, receive all presentations, and gain access to C-level executives, investors, and industry-leading research? Then save the date for infra/STRUCTURE 2026, set for October 7-8, 2026 at The Wynn Las Vegas. Pre-registration for the 2026 event is now open; visit www.infrastructuresummit.io to learn more.

The post Where Is AI Taking Data Centers? appeared first on Data Center POST.

Data Center Rack and Enclosure Market to Surpass USD 10.5 Billion by 2034

9 December 2025 at 18:00

The global data center rack and enclosure market was valued at USD 4.6 billion in 2024 and is projected to grow at a CAGR of 8.4% from 2025 to 2034, according to a recent report by Global Market Insights Inc.
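As a quick sanity check on the headline, compounding the 2024 base at the reported CAGR over the ten-year forecast window lands in the same neighborhood (the report's own figure likely compounds from a slightly different base year):

```python
# Project a market size forward at a constant CAGR.
def project(base_usd_bn: float, cagr: float, years: int) -> float:
    return base_usd_bn * (1 + cagr) ** years

# USD 4.6 bn (2024) at 8.4% over the 2025-2034 window:
print(f"USD {project(4.6, 0.084, 10):.1f} billion")  # → USD 10.3 billion
```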

The increasing adoption of edge computing, spurred by the proliferation of Internet of Things (IoT) devices, is a significant driver of market growth. The surge in modular data centers, known for their portability and scalability, boosts demand for adaptable racks and enclosures. These systems enable businesses to expand data center capacity incrementally without committing to large-scale infrastructure. Modular designs often require specialized racks and enclosures that are quick to deploy and flexible enough to meet evolving operational demands.

By component, the data center rack and enclosure market is segmented into solutions and services. In 2024, the solutions segment captured 75% of the market share and is expected to reach USD 7 billion by 2034. The increasing complexity of tasks like artificial intelligence (AI), machine learning (ML), and big data processing drives demand for high-density rack solutions. These racks optimize space utilization, a critical factor in environments with constrained power, cooling, and space availability. Advanced cooling mechanisms, such as liquid cooling and airflow optimization, are essential features supporting these dense configurations.

In terms of application, the market is categorized into manufacturing, BFSI, colocation, government, healthcare, IT & telecom, energy, and others. The IT & telecom segment accounted for 32% of the market share in 2024. The shift towards cloud computing is revolutionizing IT and telecom industries, increasing the demand for robust data center infrastructure. Scalable and efficient racks and enclosures are essential to handle growing data volumes while ensuring optimal performance in cloud-based operations.

North America dominated the global data center rack and enclosure market in 2024, holding a 40% market share, with the U.S. leading the region. The presence of major cloud service providers like Amazon Web Services (AWS), Google Cloud, and Microsoft Azure has driven significant data center expansion in the region. This growth necessitates modular, flexible, scalable rack and enclosure solutions to support dynamic storage needs. Additionally, substantial investments by government entities and private enterprises in digital transformation and IT infrastructure upgrades further fuel market expansion.

The demand for innovative, high-performance data center racks and enclosures continues to rise as industries embrace digital transformation and advanced technologies. This trend ensures a positive outlook for the market through the forecast period.

The post Data Center Rack and Enclosure Market to Surpass USD 10.5 Billion by 2034 appeared first on Data Center POST.

Redefining Investment and Innovation in Digital Infrastructure

9 December 2025 at 14:00

How new entrants are reshaping data center operations, capital models, and sustainable development

At the infra/STRUCTURE Summit 2025, held October 15–16 at The Wynn Las Vegas, one of the most engaging conversations explored how a new generation of operators is reshaping the data center landscape.

The session, “New Operating Platforms,” moderated by Philbert Shih, Managing Director of Structure Research, brought together executives leading some of the most innovative digital infrastructure ventures: Ernest Popescu, CEO of Metrobloks Data Centers; Eanna Murphy, Founder and CEO of Montera Infrastructure; and Chuck McBride, CEO of Atmosphere Data Centers.

Together, they discussed how new business models, evolving capital structures, and sustainability commitments are redefining what it means to operate in the fast-changing world of digital infrastructure.

Identifying Gaps in a Rapidly Evolving Market

Shih opened the discussion by noting that the surge in investment across digital infrastructure has created room for new operating platforms to emerge.

“The industry has arguably over-indexed on hyperscale and colocation,” Shih said. “But the opportunity now lies in the gaps, in the diverse mix of services, geographies, and market segments that remain underserved.”

He challenged the panelists to explore how their platforms are addressing those gaps, and what kinds of efficiencies or innovations are shaping their approach.

Building for Speed and Efficiency

Murphy described his company’s focus on secondary and emerging markets, areas where demand is strong but infrastructure capacity has lagged.

“We wanted to look at regions where enterprise customers were underserved,” Murphy said. “Our model focuses on connecting Tier 2 cities and surrounding areas, delivering capacity closer to users and creating new connectivity ecosystems.”

Murphy emphasized that Montera’s approach is designed for speed and scale, combining pre-engineered designs and local partnerships to accelerate delivery.

“Even in smaller markets,” Murphy said, “you can build meaningful density if you plan it right and align with community needs.”

Balancing Capital, Capacity, and Time-to-Market

Popescu noted that access to capital remains one of the biggest hurdles for new operators, especially those outside traditional hyperscale markets.

“There’s plenty of opportunity in the market, but capital deployment still comes down to risk tolerance and timing,” Popescu said. “You can’t shortcut power availability, but you can manage time-to-market with flexible models and smart partnerships.”

Metrobloks focuses on developing scalable, self-performable campuses in underserved markets, combining modular design with utility partnerships to bring new capacity online faster.

“It might not be massive by hyperscale standards,” Popescu said. “But for our customers, being able to access distribution power in 12 to 18 months can make all the difference.”

Sustainability and the Next Generation of Infrastructure

For McBride, sustainability and long-term adaptability are at the heart of his company’s strategy.

“We made a conscious choice not to inherit legacy assets,” McBride said. “Instead, we’re building brand-new AI-ready campuses in underserved markets, what we call next-generation training centers.”

Atmosphere’s developments prioritize renewable energy integration and community revitalization. McBride described projects that convert industrial land, such as former power plant sites, into modern digital campuses.

“We’re taking coal-fired sites and turning them into green campuses,” McBride said. “It’s about giving these sites a second life while meeting the demands of AI and high-performance computing.”

Adapting to Changing Technology Cycles

The conversation turned to how operators are preparing for rapid changes in compute and chip technology, particularly as AI drives unprecedented density and cooling requirements.

Murphy noted the growing challenge of aligning long-term infrastructure planning with short hardware cycles.

“Every six months we’re seeing new chip architectures from NVIDIA, AMD, and others,” Murphy said. “But the data center development cycle is still three to five years. The challenge is designing for what’s next without overcommitting to what’s current.”

Panelists agreed that future-proofing is now a key differentiator, with flexibility, modularity, and liquid cooling readiness built into early designs.

Smarter Capital and Better Collaboration

Reflecting on the evolution of the investment landscape, Popescu shared that today’s capital partners are far more informed about the digital infrastructure asset class than even a few years ago.

“Institutional investors have become much more educated,” Popescu said. “The conversations are smarter, and there’s a better understanding of the balance between cost, speed, and sustainability.”

McBride added that hyperscalers, too, have shown greater willingness to adapt pricing and partnership structures in response to development challenges.

“Three years ago, I had never seen the major cloud players react so quickly,” McBride said. “They know developers are essential to getting capacity online, and that alignment benefits everyone.”

The Opportunity Ahead

In closing, Shih reflected on how the emergence of these new operating platforms is reshaping the broader ecosystem.

“We’re watching the rise of operators who are not just building capacity but reimagining how the industry functions,” Shih said. “They’re bridging the gap between capital, sustainability, and innovation, and that’s what will define the next phase of growth.”

As the digital infrastructure industry continues to evolve, these leaders are demonstrating that success now depends as much on creativity and collaboration as it does on capital and construction.

Infra/STRUCTURE 2026: Save the Date

Want to tune in live, receive all presentations, and gain access to C-level executives, investors, and industry-leading research? Then save the date for infra/STRUCTURE 2026, set for October 7-8, 2026, at The Wynn Las Vegas. Pre-registration for the 2026 event is now open; visit www.infrastructuresummit.io to learn more.

The post Redefining Investment and Innovation in Digital Infrastructure appeared first on Data Center POST.