
2025 in Review: Sabey’s Biggest Milestones and What They Mean

Originally posted on Sabey Data Centers.

At Sabey Data Centers, progress is more than a series of headlines. It’s a blueprint for what’s possible when infrastructure, efficiency and stewardship go hand in hand. From our award-winning sustainability initiatives to bold new campus designs and record-setting expansions, each milestone this year has demonstrated our commitment to powering tomorrow’s workloads with conscience and confidence.

With 2026 already well underway and promising to be a banner year, we wanted to pause and reflect on the path we forged in 2025.

Capacity expansion: built for growth 

In 2025, we announced strategic power expansions across our Pacific Northwest, Ashburn, Austin and Quincy locations. In Seattle and Columbia, 30MW of new power is now anticipated to come online by 2027, enabling tenants to scale quickly while leveraging ultra-efficient energy (carbon-free in Seattle) and a regional average PUE as low as 1.2.

On the East Coast, Ashburn’s third and final building broke ground, set to introduce 54MW of additional capacity, with the first tranches coming online in 2026. This will be the first three-story facility in Sabey’s portfolio, purpose-built for air-cooled, liquid-cooled and hybrid deployments, with rack densities over 100kW and an average PUE of 1.35. In both regions, Sabey’s expansion balances hyperscale demand with customization, modular scale and resilient connectivity.
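PUE (power usage effectiveness) is total facility power divided by IT power, so the figures above translate directly into cooling and distribution overhead. A quick sketch of the arithmetic (my own illustration, not Sabey’s published methodology):

```python
def cooling_overhead(pue: float, it_load_mw: float) -> float:
    """PUE = total facility power / IT power, so non-IT overhead = IT * (PUE - 1)."""
    return it_load_mw * (pue - 1.0)

# At a PUE of 1.2, a 30 MW IT load implies roughly 6 MW of non-IT draw
# (cooling, power distribution); at a PUE of 1.35 the same load implies ~10.5 MW.
print(round(cooling_overhead(1.2, 30), 1))   # 6.0
print(round(cooling_overhead(1.35, 30), 1))  # 10.5
```

The gap between those two overhead numbers is why tenants treat even a tenth of a point of PUE as a meaningful cost difference at scale.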

The launch of construction for Austin Building B was another major milestone, expanding our presence in the dynamic Round Rock tech corridor. This three-story, liquid-cooling-ready facility is designed to deliver 54 megawatts of total power capacity. Building B continues our commitment to scalable, energy-efficient digital infrastructure, tailored for enterprise and hyperscale workloads.

To continue reading, please click here.

The post 2025 in Review: Sabey’s Biggest Milestones and What They Mean appeared first on Data Center POST.

  •  

Duos Edge AI Earns PTC’26 Innovation Honor

Duos Technologies Group Inc. (Nasdaq: DUOT), through its operating subsidiary Duos Edge AI, Inc., received the Outstanding Innovation Award at Pacific Telecommunication Conference 2026 (PTC’26). This honor recognizes Duos Edge AI’s leadership in modular Edge Data Center (EDC) solutions that boost efficiency, scalability, security, and customer experience.

Duos Edge AI’s capital-efficient model supports rapid 90-day installations and scalable growth tailored to regional needs like education, healthcare, and municipal services. High-availability designs deliver up to 100 kW+ per cabinet with resilient, 24/7 operations positioned within 12 miles of end users for minimal latency.
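The 12-mile figure matters because fiber propagation delay scales with distance. A back-of-the-envelope estimate (my own illustration, assuming light in fiber travels at roughly two-thirds of c, about 200,000 km/s):

```python
KM_PER_MILE = 1.609
FIBER_KM_PER_S = 200_000  # light in fiber travels at roughly two-thirds of c

def one_way_delay_us(miles: float) -> float:
    """Approximate one-way fiber propagation delay, in microseconds."""
    return miles * KM_PER_MILE / FIBER_KM_PER_S * 1e6

# A 12-mile radius keeps raw propagation delay under ~0.1 ms one way;
# real-world latency adds routing hops and equipment delay on top.
print(f"{one_way_delay_us(12):.0f} us")
```

At that radius, the physics of the fiber path is effectively negligible, which is the point of placing edge capacity so close to end users.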

“This recognition from Pacific Telecommunications Council (PTC) is a meaningful validation of our strategy and execution,” said Doug Recker, President of Duos and Founder of Duos Edge AI. “Our mission has been to bring secure, low-latency digital infrastructure directly to communities that need it most. By deploying edge data centers where people live, learn, and work, we’re helping close the digital divide while building a scalable platform aligned with long-term growth and shareholder value.”

The award spotlights Duos Edge AI’s patented modular EDCs deployed in underserved communities for low-latency, enterprise-grade infrastructure. These centers enable real-time AI processing, telemedicine, digital learning, and carrier-neutral connectivity without distant cloud reliance.

Duos Edge AI thanks partners like Texas Regions 16 and 3 Education Service Centers, Dumas ISD, and local leaders embracing localized tech for equity.

To learn more about Duos Edge AI, visit www.duosedge.ai.

The post Duos Edge AI Earns PTC’26 Innovation Honor appeared first on Data Center POST.

  •  

Sabey and JetCool Push Liquid Cooling from Pilot to Standard Practice

As AI and high‑performance computing (HPC) workloads strain traditional air‑cooled data centers, Sabey Data Centers is expanding its partnership with JetCool Technologies to make direct‑to‑chip liquid cooling a standard option across its U.S. portfolio. The move signals how multi‑tenant operators are shifting from experimental deployments to programmatic strategies for high‑density, energy‑efficient infrastructure.

Sabey, one of the largest privately held multi‑tenant data center providers in the United States, first teamed with JetCool in 2023 to test direct‑to‑chip cooling in production environments. Those early deployments reported 13.5% server power savings compared with air‑cooled alternatives, while supporting dense AI and HPC racks without heavy reliance on traditional mechanical systems.

The new phase of the collaboration is less about proving the technology and more about scale. Sabey and JetCool are now working to simplify how customers adopt liquid cooling by turning what had been bespoke engineering work into repeatable designs that can be deployed across multiple sites. The goal is to give enterprises and cloud platforms a predictable path to high‑density infrastructure that balances performance, efficiency and operational risk.

A core element of that approach is a set of modular cooling architectures developed with Dell Technologies for select PowerEdge GPU‑based servers. By closely integrating server hardware and direct‑to‑chip liquid cooling, the partners aim to deliver pre‑validated building blocks for AI and HPC clusters, rather than starting from scratch with each project. The design includes unified warranty coverage for both the servers and the cooling system, an assurance that Sabey says is key for customers wary of fragmented support models.

The expanded alliance sits inside Sabey’s broader liquid cooling partnership program, an initiative that aggregates multiple thermal management providers under one framework. Instead of backing a single technology, Sabey is positioning itself as a curator of proven, ready‑to‑integrate cooling options that map to varying density targets and sustainability goals. For IT and facilities teams under pressure to scale GPU‑rich deployments, that structure promises clearer design patterns and faster time to production.

Executives at both companies frame the partnership as a response to converging pressures: soaring compute demand, tightening efficiency requirements and growing scrutiny of data center energy use. Direct‑to‑chip liquid cooling has emerged as one of the more practical levers for improving thermal performance at the rack level, particularly in environments where power and floor space are limited but performance expectations are not.

For Sabey, formalizing JetCool’s technology as a standard, warranty‑backed option is part of a broader message to customers: liquid cooling is no longer a niche or one‑off feature, but an embedded part of the company’s roadmap for AI‑era infrastructure. Organizations evaluating their own cooling strategies can find the full announcement here.

The post Sabey and JetCool Push Liquid Cooling from Pilot to Standard Practice appeared first on Data Center POST.

  •  

Building the Digital Foundation for the AI Era: DataBank’s Vision for Scalable Infrastructure

Data Center POST connected with Raul K. Martynek, Chief Executive Officer of DataBank Holdings, Ltd., ahead of PTC’26. Martynek joined DataBank in 2017 and brings more than three decades of leadership experience across telecommunications, Internet infrastructure, and data center operations. His background includes senior executive roles at Net Access, Voxel dot Net, Smart Telecom, and advisory positions with DigitalBridge and Plainfield Asset Management. Under his leadership, DataBank has expanded its national footprint, strengthened its interconnection ecosystems, and positioned its platform to support AI-ready, high-density workloads across enterprise, cloud, and edge environments. In the Q&A below, Martynek shares his perspective on the challenges shaping global digital infrastructure and how DataBank is preparing customers for the next phase of AI-driven growth.

Data Center Post (DCP) Question: What does your company do?  

Raul Martynek (RM) Answer: DataBank helps the world’s largest enterprises, technology, and content providers ensure their data and applications are always on, always secure, always compliant, and ready to scale to meet the needs of the artificial intelligence era.

DCP Q: What problems does your company solve in the market?

RM A: DataBank addresses a broad set of challenges enterprises face when managing critical infrastructure. Reliability and uptime are foundational, as downtime can severely impact revenue and customer trust. We also help organizations meet security and compliance requirements without having to build costly internal expertise. Our platform allows customers to scale infrastructure without large capital expenditures by shifting to an operating expense model. In addition, we provide managed expertise that frees internal teams to focus on strategic priorities, simplify hybrid IT and cloud integration, improve latency for distributed and edge workloads, strengthen cybersecurity posture, and mitigate talent and resource constraints.

DCP Q: What are your company’s core products or services?

RM A: Data center colocation, interconnection, enterprise cloud, compliance enablement, and data protection, all powered by expert, human support.

DCP Q: What markets do you serve?

RM A: DataBank serves customers across a broad geographic footprint in the United States and Europe. In the western United States, the company operates in key markets including Irvine, Los Angeles, and Silicon Valley in California, as well as Las Vegas, Salt Lake City, and Seattle. Its central U.S. presence includes Chicago, Denver, Indianapolis, and Kansas City. In the southern region, DataBank supports customers in Atlanta, Austin, Dallas, Houston, Miami, and Waco. Along the East Coast and Midwest, the company operates in markets such as Boston, Cleveland, New Jersey, New York City, Philadelphia, and Pittsburgh. Internationally, DataBank also serves customers in the United Kingdom.

DCP Q: What challenges does the global digital infrastructure industry face today?

RM A: The industry is facing a convergence of challenges, including power availability and grid constraints, sustainability and carbon reduction requirements, cooling demands for high-density AI and HPC workloads, supply chain pressures, land acquisition and zoning issues, and increasing interconnection complexity. At the same time, organizations must contend with talent shortages and rising cybersecurity risks, all while supporting rapidly expanding digital workloads.

DCP Q: How is your company adapting to these challenges?

RM A: We are building in markets with available power headroom and designing scalable power blocks to support future growth. Our facilities are being prepared for AI-era density with liquid-ready designs and more efficient cooling strategies. Sustainability remains a priority, with a focus on lowering energy and water usage. We are standardizing construction to improve efficiency and flexibility while expanding interconnection ecosystems such as DE-CIX. Additionally, our managed services help fill enterprise talent gaps, and we continue to invest in operational excellence, security, and company culture.

DCP Q: What are your company’s key differentiators?

RM A: DataBank differentiates itself through strong engineering and operational management, future-ready platforms, and deep compliance expertise. Our geographic focus allows us to serve customers where they need infrastructure most, while our managed services provide visibility and control across complex environments. We are also supported by patient, long-term investors, enabling disciplined growth and sustained investment.

DCP Q: What can we expect to see/hear from your company in the future?  

RM A: Customers can expect continued commitment to enterprise IT infrastructure alongside expanded AI-ready platforms. We are growing our interconnection ecosystems, advancing sustainability initiatives, modernizing key campuses, and expanding managed and hybrid IT services. Enhancing security, compliance, and customer success will remain central, as will our focus on talent and culture.

DCP Q: What upcoming industry events will you be attending? 

RM A: AI Tinkers; Metro Connect; ATC CEO Summit; MIMSS 26; DCD>Connect 2026; ITW 2026; 7×24 Cloud Run Community Festival; CBRE Digital Infrastructure Summit 2026; AI Infra Conference; TMT M&A Forum; MegaPort Connect; TAG Data Center Summit; Supercomputing 2026; Incompany; DE-CIX Dallas Olde World Holiday Market

DCP Q: Do you have any recent news you would like us to highlight?

RM A: DataBank has recently announced several milestones that underscore its continued growth and long-term strategy. The company expanded its financing vehicle to $1.6 billion to support the next phase of platform expansion and infrastructure investment. DataBank also released new research showing that 60 percent of enterprises are already seeing a return on investment from AI initiatives or expect to within the next 12 months, highlighting the accelerating business impact of AI adoption. In addition, DataBank introduced a company-wide employee ownership program, reinforcing its commitment to culture, alignment, and long-term value creation across the organization.

DCP Q: Is there anything else you would like our readers to know about your company and capabilities?

RM A: DataBank is building the digital foundation for the AI, cloud, and connected-device era. Its national footprint of data centers delivers secure, high-density colocation, interconnection, and managed services that help enterprises deploy mission-critical workloads with confidence.

We are designing for the future with liquid-cooling capabilities, campus modernization, and expanded interconnection ecosystems. We are equally committed to responsible digital infrastructure: improving efficiency, reducing water use, strengthening security, and advancing compliance.

Above all, DataBank is a trusted infrastructure partner, providing the expertise and operational support organizations need to scale reliably and securely.

DCP Q: Where can our readers learn more about your company?  

RM A: www.databank.com

DCP Q: How can our readers contact your company? 

RM A: www.databank.com/contact-us

To learn more about PTC’26, please visit www.ptc.org/ptc26. The event takes place January 18-21, 2026 in Honolulu, HI.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com.

Stay in the know! Subscribe to Data Center POST today.

# # #

About Data Center POST

Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.

Data Center POST works to deliver the most current information and thought-provoking ideas relevant to the success of the data center industry. Stay informed, visit www.datacenterpost.com.

ADVERTISE | CONTRIBUTE | SUBSCRIBE

The post Building the Digital Foundation for the AI Era: DataBank’s Vision for Scalable Infrastructure appeared first on Data Center POST.

  •  

Power Should Serve Compute: Airsys’ Energy-First Approach to Data Center Cooling

Data Center POST had the opportunity to connect with Paul Quigley, President of Airsys Cooling Technologies, Inc., ahead of PTC’26. With more than 36 years of experience spanning HVAC contracting, distribution, and executive leadership in manufacturing, Quigley is widely recognized for his work in large-scale, high-demand cooling environments. His career includes co-designing one of the largest VRF projects in North America and leading complex cooling initiatives across the United States. Since joining Airsys in 2021, Quigley has focused on advancing precision and liquid-assisted cooling technologies to help data center operators address power constraints driven by AI, high-density compute, and rapid digital infrastructure growth. In the Q&A below, Quigley shares his perspective on the challenges facing the global digital infrastructure industry and how Airsys is helping operators turn cooling efficiency into usable compute power.

Data Center Post (DCP) Question: What does your company do?  

Paul Quigley (PQ) Answer: Airsys is an energy-focused cooling solutions company founded on a simple idea: power should serve computation, not be lost protecting it. Every system we build returns power back to the data center where it belongs. Our innovative, award-winning technologies give operators more usable power for compute, improving capacity, resilience, and overall profitability.

DCP Q: What problems does your company solve in the market?

PQ A: Airsys helps data centers recover power lost to cooling. As AI clusters push energy systems past their limits, operators are forced to choose between compute, capacity, and cost. Our cooling technologies solve that problem by returning power back to the data center, reducing stranded capacity, and giving operators more usable power for compute.

DCP Q: What are your company’s core products or services?

PQ A: We design and build the world’s most advanced cooling solutions for data centers, including:

  • LiquidRack spray-cooling and liquid-assisted rack systems
  • High-efficiency DX and chilled-water cooling systems
  • Flooded-evaporator chillers (CritiCool-X)
  • Indoor and outdoor precision cooling systems
  • Edge, modular, and containerized data-center cooling
  • Control systems, energy-optimization tools, and PCE/ROIP performance frameworks

DCP Q: What markets do you serve?

PQ A:

  • Hyperscale and AI compute environments
  • Colocation and enterprise data centers
  • Modular and prefabricated data centers
  • Edge and telecom infrastructure
  • Education, industrial, government, and defense applications requiring mission-critical cooling

DCP Q: What challenges does the global digital infrastructure industry face today?

PQ A: The industry now operates in an environment of rapid compute expansion and structural power scarcity. Grid limitations, long construction timelines, inefficient legacy cooling, land constraints, and the explosive rise of AI workloads have created an energy bottleneck. Data centers no longer struggle with space; they struggle with power.

DCP Q: How is your company adapting to these challenges?

PQ A: Airsys has always shifted the conversation from cooling to energy stewardship. Every system we design focuses on reducing cooling overhead and returning power to computation. Our LiquidRack spray-cooling platform, low-lift CritiCool-X chillers, and high-efficiency DX systems enable operators to deploy more compute with less infrastructure. We’re also advancing new metrics – PCE and ROIP – to help operators quantify the financial value of returned power.

DCP Q: What are your company’s key differentiators?

PQ A:

  • Energy-first design philosophy — our systems return power to compute
  • Rapid delivery and global manufacturing — critical in today’s supply-strained market
  • LiquidRack spray cooling — enabling high-density AI clusters without stranded power
  • Flooded-evaporator chiller technology — high efficiency, low lift, faster deployment
  • End-to-end portfolio — DX, chilled water, liquid-assisted cooling, modular systems
  • Practical engineering — simple, reliable, maintainable designs
  • PCE / ROIP frameworks — financial and operational tools that change how operators evaluate cooling impact

DCP Q: What can we expect to see/hear from your company in the future?  

PQ A: You will see a major expansion of our LiquidRack platform, new ultra-efficient chiller technologies, deeper integration of PCE/ROIP metrics, and broader support for modular and edge deployments. We are continuing to push innovation toward one goal: giving operators more usable power for compute.

DCP Q: What upcoming industry events will you be attending? 

PQ A: Events Airsys is considering for 2026:

PTC’26 Hawaii; Escape the Cold Aisle Phoenix; Advancing DC Construction West; DCD>Connect New York; Data Center World DC; World of Modular Las Vegas; NSPMA; 7×24 Exchange Spring Orlando; Data Center Nation Toronto; Datacloud USA Austin; AI Infra Summit Santa Clara; DCW Power Dallas; Yotta Las Vegas; 7×24 Exchange Fall San Antonio; PTC DC; DCD>Connect Virginia; DCAC Austin; Supercomputing; Gartner IOCS; NVIDIA GTC; OCP Global Summit.

DCP Q: Do you have any recent news you would like us to highlight?

PQ A: Yes, Airsys has recently expanded its high-density cooling portfolio with several major advancements.

More announcements are planned for early 2026 as Airsys continues to grow its advanced cooling portfolio for high-density compute environments.

DCP Q: Is there anything else you would like our readers to know about your company and capabilities?

PQ A: Airsys is built on a simple idea: power should serve computation, not be lost protecting it. Our mission is to help operators turn cooling losses into compute gains and to make energy stewardship a competitive advantage in the AI era.

DCP Q: Where can our readers learn more about your company?  

PQ A: www.airsysnorthamerica.com

DCP Q: How can our readers contact your company? 

PQ A: www.airsysnorthamerica.com/contact


The post Power Should Serve Compute: Airsys’ Energy-First Approach to Data Center Cooling appeared first on Data Center POST.

  •  

AI Data Center Market to Surpass USD 1.98 Trillion by 2034

The global AI data center market was valued at USD 98.2 billion in 2024 and is estimated to grow at a CAGR of 35.5% to reach USD 1.98 trillion by 2034, according to a recent report by Global Market Insights Inc.
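Those endpoints can be sanity-checked with simple compound-growth arithmetic (illustrative only; the report’s own model will differ):

```python
start_usd_bn = 98.2   # reported 2024 market size, USD billions
end_usd_bn = 1980.0   # reported 2034 projection, USD billions
years = 10

# CAGR implied by the two endpoints: (end / start) ** (1 / years) - 1
implied_cagr = (end_usd_bn / start_usd_bn) ** (1 / years) - 1
print(f"implied CAGR: {implied_cagr:.1%}")  # ~35%, in line with the reported 35.5%
```

The small gap between the implied ~35% and the headline 35.5% simply reflects rounding in the published figures.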

Growing adoption of generative AI and machine learning tools requires extraordinary processing power and storage capabilities, increasing reliance on data centers specifically optimized for AI workloads. These environments depend on advanced GPUs, scalable system architecture, and ultra-low-latency networking to support complex model training and inference across industries such as finance, healthcare, and retail. Big data analytics is also accelerating demand, as organizations handle massive streams of structured and unstructured information that must be processed rapidly.

AI-focused facilities enable high-performance computing for real-time workloads, strengthening their role as essential infrastructure for global digital transformation. The rapid expansion of cloud computing, along with the rising number of hyperscale facilities, continues to amplify the need for AI-ready infrastructures. Providers are investing in advanced AI data platforms that offer scalable services to enterprises and developers, further increasing market momentum.

The AI data center market from the hardware segment accounted for USD 61.1 billion in 2024. Growth is driven by expanding use of AI chips, GPU accelerators, advanced cooling technologies, high-density server systems, and optical networking solutions. Rising GPU energy requirements, the shift toward rack densities between 30 and 120 kW, and large-scale deployment strategies introduced by leading technology companies are shaping long-term capital allocation in the sector.

The cloud-based category held a 58% share in 2024 and is projected to grow at a CAGR of 35.2% from 2025 through 2034. This segment leads due to its unmatched scalability, flexible consumption options, and access to the latest AI-accelerated computing hardware without upfront investment. Hyperscale providers are making multi-billion-dollar commitments to strengthen global AI infrastructures, propelling adoption of AI-driven services and increasing demand for GPUs, TPUs, and specialized processors.

The U.S. AI data center market generated USD 33.2 billion in 2024. The country maintains a leading position supported by prominent hyperscale operators and substantial investments in GPU clusters, liquid cooling, and large-scale AI-aligned builds. Federal incentives, regional tax advantages, and infrastructure funding have further solidified the United States as the most capacity-rich region for AI computing.

Key participants in the AI data center market include Huawei, AWS, NVIDIA, HPE, Digital Realty, Google, Lenovo, Microsoft, Equinix, and Dell Technologies. Companies expanding their foothold in the market are focusing on infrastructure modernization, large-scale GPU deployments, and energy-efficient system design.

Many firms are investing in high-density racks, integrated liquid cooling, and next-generation networking to support advanced AI workloads. Strategic partnerships with chipmakers, cloud providers, and colocation operators help accelerate capacity expansion and ensure access to cutting-edge AI hardware. Providers are also scaling global data center footprints, enhancing automation capabilities, and optimizing power utilization through renewable-energy integration. Long-term contracts with enterprises, AI-as-a-service offerings, and the buildout of specialized AI clusters further reinforce competitive positioning and market dominance.

The post AI Data Center Market to Surpass USD 1.98 Trillion by 2034 appeared first on Data Center POST.

  •  

Issues Data Centers Face and How to Overcome Them: A Guide for Managers

Data centers are the backbone of modern digital infrastructure. They power cloud services, financial systems, healthcare platforms, and nearly every technology-driven business. As demand for data storage and processing grows, so do the operational, financial, and risk-related challenges data centers face. For managers, understanding these issues and knowing how to address them proactively is critical to maintaining uptime, security, and long-term viability. Here is what you need to know to keep up with that demand.

Rising Energy Costs and Efficiency Demands

One of the most persistent challenges for data centers is energy consumption. Powering servers, cooling systems, and redundancy infrastructure requires enormous amounts of electricity, and energy costs continue to rise globally. Beyond cost, there is increasing pressure from regulators and customers to reduce environmental impact.

Managers can address this by investing in energy-efficient hardware, optimizing airflow and cooling layouts, and adopting real-time monitoring tools to identify waste. Long-term strategies may include transitioning to renewable energy sources or partnering with utility providers for more favorable pricing structures.

Cooling and Thermal Management Challenges

Heat is an unavoidable byproduct of high-density computing. Inefficient cooling not only increases costs but also raises the risk of equipment failure and downtime. As server densities increase, traditional cooling methods often struggle to keep up.

Modern solutions include hot-aisle/cold-aisle containment, liquid cooling, and AI-driven thermal monitoring systems. For managers, the key is treating cooling as a dynamic system rather than fixed infrastructure, one that evolves alongside hardware demands.

Financial Risk and Insurance Considerations

Data centers face significant financial exposure from equipment damage, downtime, liability claims, and unforeseen events. Even with strong operational controls, risk cannot be eliminated entirely.

This is where insurance becomes a critical part of risk management. Evaluating coverage that aligns with the unique needs of data center operations can help protect against losses that would otherwise threaten business continuity. BOP insurance by Next Insurance can help managers think more holistically about protecting assets, operations, and revenue streams as part of an overall risk strategy.

Downtime and Business Continuity Risks

Even brief outages can result in significant financial losses, reputational damage, and contractual penalties. This downtime may be caused by:

  • Power failures
  • Human error
  • Equipment malfunction
  • External events

To mitigate this risk, managers should prioritize redundancy at every critical point, including power supplies, network connections, and backup systems. Regular testing of disaster recovery plans is just as important as documenting them: a plan that hasn’t been tested is often unreliable in real-world conditions, putting both your business and the customers you serve at risk.

Cybersecurity and Physical Security Threats

Data centers face a dual security challenge: digital threats and physical risks. Cyberattacks continue to grow in sophistication, while physical threats such as unauthorized access, theft, or vandalism remain real concerns.

Addressing this requires layered security. On the digital side, this includes continuous patching, network segmentation, and monitoring for unusual activity. Physically, access controls, surveillance systems, and strict visitor protocols are essential. Managers should also ensure that staff training keeps pace with evolving threats, as human error remains a major vulnerability.

Compliance and Regulatory Pressure

Data centers often operate under complex regulatory requirements related to data privacy, industry standards, and regional laws. Compliance failures can result in fines, legal exposure, and loss of customer trust.

Managers can stay ahead by maintaining clear documentation, conducting regular audits, and working closely with legal and compliance teams. Building compliance into operational processes, rather than treating it as an afterthought, reduces risk and simplifies reporting.

Turning Challenges Into Strategic Advantage

While data centers face a wide range of operational and strategic challenges, these issues also present opportunities for improvement and differentiation. Managers who address them proactively are better positioned to deliver reliability and value in a competitive market. Don’t let your facility be the one that falls victim to these issues; take action now.

# # #

About the Author:

James Daniels is a freelance writer, business enthusiast, a bit of a tech buff, and an overall geek. He is also an avid reader, who can spend hours reading and knowing about the latest gadgets and tech, whilst offering views and opinions on these topics.

The post Issues Data Centers Face and How to Overcome Them: A Guide for Managers appeared first on Data Center POST.

  •  

Human Error in Cybersecurity and the Growing Threat to Data Centers

Cyber incidents continued to escalate throughout 2025 and are making their presence felt ever more forcefully as we move into 2026. The rapid evolution of novel cyber threats leaves data centers increasingly exposed to disruptions that extend beyond traditional IT boundaries.

The Uptime Institute’s annual outage analysis shows that in 2024, cyber-related disruptions occurred at roughly twice the average rate seen over the previous four years. This trend aligns with findings from Honeywell’s 2025 Cyber Threat Report, which identified a sharp increase in ransomware and extortion activity targeting operational technology environments based on large-scale system data.

There are many discussions today around infrastructure complexity and attack sophistication, but it’s a lesser-known reality that human error in cybersecurity remains a central factor behind many of these incidents. Routine configuration changes, access decisions, and choices made under stress can create conditions that allow errors to creep in. In high-availability environments, human error is often the point at which otherwise contained threats begin to escalate into bigger problems.

As cyberattacks on data centers continue to grow in number, downtime is carrying heavier and heavier financial and reputational consequences. Addressing human error in cybersecurity means recognizing that human behavior plays a direct role in how a security architecture performs in practice. Let’s take a closer look.

How Attackers Take Advantage of Human Error in Cybersecurity

Cyberattacks often exploit vulnerabilities that stem both from superficial, even preventable, mistakes and from deeper, systemic issues. Human error in cybersecurity often arises when established procedures are not followed through consistently, creating gaps that attackers are more than eager to exploit. A delayed firmware update or an incomplete maintenance task can leave infrastructure exposed, even when the risks are already known. And even when organizations have defined policies to reduce these exposures, noncompliance or insufficient follow-through often weakens their effectiveness.

In many environments, operators are aware that parts of their IT and operational technology infrastructure carry known weaknesses, but due to a lack of time or oversight, they fail to address them consistently. Limited training also adds to the problem, especially when employees are expected to recognize and respond to social engineering techniques. Phishing, impersonation, and ransomware attacks are increasingly targeting organizations with complex supply chains and third-party dependencies, and in these situations, human error often enables the initial breach, after which attackers move laterally through systems, using minor mistakes to trigger disruptions.

Why Following Procedures is Crucial

Having policies in place doesn’t always guarantee consistent follow-through. In everyday operations, teams often juggle updates, alerts, and routine maintenance all at once, and small steps can be missed unintentionally. Even experienced staff make these kinds of mistakes, especially when managing large or complex environments over an extended period. Gradually, these small oversights add up and leave systems exposed.

Account management works similarly. Password rules and policies for handling inactive accounts are usually well defined; however, they are not always applied uniformly. Dormant accounts may go unnoticed, and teams can fall behind on updates or let regular reviews lapse. Human error in cybersecurity often develops step by step through workload, familiarity, and everyday stress, not because of a lack of skill or awareness.

The Danger of Interacting With Social Engineering Without Even Knowing

Social engineering is a method of attack that uses deception and impersonation to influence people into revealing information or providing access. It relies on trust and context to make people perform actions that appear harmless and legitimate in the moment.

The trick of modern impersonation is that it mirrors everyday communication very accurately. Attackers today have all the tools to pose as colleagues, service providers, or internal support staff. A phone call from someone claiming to be part of the IT help desk can easily seem routine, especially when framed as a quick fix or standard check. The same approach appears in emails and on messaging platforms, and the pattern is identical: urgency overrides caution.

With the various new tools available, visual deception has become very common. Employees may be directed to login pages that closely resemble internal systems and enter credentials without hesitation. Emerging techniques like AI-assisted voice or video impersonation further blur the line between legitimate requests and malicious activity, making social engineering interactions very difficult to recognize in real time.

Ignoring Security Policies and Best Practices

Security policies are of little use if they exist only as formal documentation and are not followed consistently on the floor. Even when access procedures are defined, employees under time pressure can make undocumented exceptions. Change management rules, for example, may require peer review and approval, but urgent maintenance or capacity pressures often lead to decisions that bypass those steps.

These small deviations create gaps between how systems are supposed to be protected and how they are actually handled. When policies become situational or optional, security controls lose their purpose and reliability, leaving the infrastructure exposed, even though there’s a mature security framework in place.

When Policies Leave Room for Interpretation

Policies that lack precision introduce variability into how security controls are applied across teams and shifts. When procedures don’t explicitly define how credentials should be managed on shared systems, retained login sessions or administrative access can remain in place beyond their intended scope. Similarly, if requirements for password rotation or periodic access reviews are loosely framed or undocumented, they are more likely to be deferred during routine operations.

These conditions rarely trigger immediate alerts or audit findings. Over time, however, they accumulate into systemic weaknesses that expand the attack surface and increase the likelihood of a successful attack.

Best Practices That Erode in Daily Operations

Security issues often emerge through slow, incremental changes. When operational pressure increases, teams may rely on informal workarounds to keep everything running. Routine best practices like updates, access reviews, and configuration standards can slip down the priority list or be applied sloppily. Individually, each of these decisions can seem reasonable in the moment; over time, however, they add up and dilute the established safeguards, leaving the organization exposed even without a single clearly identifiable incident.

Overlooking Access and Offboarding Control

Ignoring best practices around access management introduces a further set of risks. Employees and third-party contractors often retain privileges beyond their active role when offboarding steps are not followed through. In the absence of clear deprovisioning rules, such as promptly disabling accounts, dormant access can linger unnoticed. These inactive accounts are rarely monitored closely enough to detect misuse or compromise when it happens.
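
This kind of offboarding gap can be narrowed with simple automation. The sketch below is illustrative only, not any specific vendor’s tooling: it assumes a directory export where each account carries an active flag and a last-login timestamp, and it flags accounts idle beyond a threshold as well as accounts marked inactive that still appear in the access list.

```python
# Minimal sketch (assumed data shape, not a real directory API): surface
# dormant or improperly deprovisioned accounts for regular access review.
from datetime import datetime, timedelta

def find_dormant(accounts, now, max_idle_days=90):
    """Return names of accounts idle past the threshold, or accounts
    marked inactive that still hold access entries."""
    cutoff = now - timedelta(days=max_idle_days)
    flagged = []
    for acct in accounts:
        if not acct["active"] or acct["last_login"] < cutoff:
            flagged.append(acct["name"])
    return flagged

now = datetime(2026, 1, 1)
accounts = [
    {"name": "ops-contractor", "active": True, "last_login": datetime(2025, 6, 1)},
    {"name": "jsmith", "active": True, "last_login": datetime(2025, 12, 20)},
    {"name": "former-vendor", "active": False, "last_login": datetime(2025, 11, 1)},
]
print(find_dormant(accounts, now))  # → ['ops-contractor', 'former-vendor']
```

Feeding a report like this into a scheduled access review turns deprovisioning from a memory-dependent task into a routine check.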

Policy Gaps During Incident Response

The consequences of ignoring procedures become most visible when an actual cybersecurity incident occurs. When teams are forced to act quickly without clear guidance, errors start to surface. Procedures that are outdated, untested, or difficult to locate offer little support during an emergency. No policy can eliminate risk completely; however, organizations that treat procedures as living, enforceable tools are better positioned to respond effectively when an incident occurs.

A Weak Approach to Security Governance

Weak security governance often allows risks to persist unnoticed, especially when oversight from management is limited or unclear. Without clear ownership and accountability, routine tasks like applying security patches or reviewing alerts can be delayed or overlooked, leaving systems exposed. These seemingly insignificant gaps create an environment over time in which vulnerabilities are known but not actively addressed.

Training plays a very important role in closing this gap, but only when it is treated as part of governance and not as an isolated activity. Regular, structured training helps employees develop a habit of verification and reinforces the checks and balances defined by organizational policies. To remain effective, training has to evolve in tandem with the threat landscape. Employees need ongoing exposure to emerging attack techniques and practical guidance on how to recognize and respond to them within their daily workflows. When governance and training are aligned, organizations are better positioned to reduce risk driven by human factors.

Understanding the Stakes

Human error in cybersecurity is often discussed as a collection of isolated missteps, but in reality, it reflects how people operate within complex systems under constant pressure.

In data center environments, these errors rarely occur as isolated events; they are shaped by interconnected processes, tight timelines, and attackers who deliberately exploit trust, familiarity, and routine behavior. Viewed from this angle, human error reveals not only individual mistakes but also how risks develop across an organization over time.

Recognizing the role of human error in cybersecurity is essential for reducing future incidents, but awareness alone is not enough. Training also plays an important role, but it cannot compensate for unclear processes, weak governance, or a culture that prioritizes speed over safety.

Data center operators have to continuously adapt their security practices and reinforce expectations through daily operations instead of treating security best practices as rigid formalities. Building a culture where employees understand how their actions influence security outcomes helps organizations respond more effectively to evolving threats and limits the conditions that allow small errors to turn into major, devastating incidents.

# # #

About the Author

Michael Zrihen is the Senior Director of Marketing & Internal Operations Manager at Volico Data Centers.

The post Human Error in Cybersecurity and the Growing Threat to Data Centers appeared first on Data Center POST.

  •  

Sabey Data Centers Taps OptiCool to Tackle High-Density Cooling

Sabey Data Centers is adding OptiCool Technologies to its growing ecosystem of advanced cooling partners, aiming to make high-density and AI-driven compute deployments more practical and energy efficient across its U.S. footprint. This move highlights how colocation providers are turning to specialized liquid and refrigerant-based solutions as rack densities outstrip the capabilities of traditional air cooling.

OptiCool is known for two-phase refrigerant pumped systems that use a non-conductive refrigerant to absorb heat through phase change at the rack level. This approach enables efficient heat removal without chilled water loops or extensive mechanical plant build-outs, which can simplify facility design and cut both capital and operating costs for data centers pushing into higher power densities. Sabey is positioning the OptiCool alliance as part of its integrated cooling technologies partnership program, which is designed to lower barriers to liquid and alternative cooling adoption for customers. Instead of forcing enterprises to engineer bespoke solutions for each deployment, Sabey is curating pre-vetted architectures and partners that align cooling technology, facility infrastructure and operational responsibility. For operators planning AI and HPC rollouts, that can translate into clearer deployment paths and reduced integration risk.

The appeal of two-phase refrigerant cooling lies in its combination of density, efficiency and retrofit friendliness. Because the systems move heat directly from the rack to localized condensers using a pumped refrigerant, they can often be deployed with minimal disruption to existing white space. That makes them attractive for operators that need to increase rack power without rebuilding entire data halls or adding large amounts of chilled water infrastructure.
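
To see why phase-change heat removal is attractive at high densities, a back-of-envelope comparison helps. The numbers below are illustrative assumptions (a generic refrigerant latent heat of roughly 200 kJ/kg and a 10 K water-loop temperature rise), not OptiCool specifications:

```python
# Back-of-envelope comparison (illustrative assumptions, not vendor data):
# mass flow needed to remove 100 kW of rack heat via two-phase refrigerant
# versus a single-phase water loop.

RACK_HEAT_KW = 100.0

# Two-phase: heat is absorbed as latent heat of vaporization.
LATENT_HEAT_KJ_PER_KG = 200.0  # assumed generic refrigerant latent heat
refrigerant_flow = RACK_HEAT_KW / LATENT_HEAT_KJ_PER_KG  # kg/s

# Single-phase water loop: heat is absorbed as sensible heat over a 10 K rise.
WATER_CP = 4.18   # specific heat of water, kJ/(kg*K)
DELTA_T = 10.0    # assumed temperature rise across the loop, K
water_flow = RACK_HEAT_KW / (WATER_CP * DELTA_T)  # kg/s

print(f"refrigerant: {refrigerant_flow:.2f} kg/s, water: {water_flow:.2f} kg/s")
```

Under these assumptions the pumped refrigerant moves several times less fluid mass for the same heat load, which is one reason such systems can be retrofitted with smaller piping and less mechanical plant.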

Sabey executives frame the partnership as a response to customer demand for flexible, future-ready cooling options. As more organizations standardize on GPU-rich architectures and high-density configurations, cooling strategy has become a primary constraint on capacity planning. By incorporating OptiCool’s technology into its program, Sabey is signaling to customers that they will have multiple, validated pathways to support emerging workload profiles while staying within power and sustainability envelopes.

As liquid and refrigerant-based cooling rapidly move into the mainstream, customers evaluating their own AI and high-density strategies may benefit from understanding how Sabey is standardizing these technologies across its portfolio. To explore how this partnership and Sabey’s broader integrated cooling program could support specific deployment plans, readers can visit Sabey’s website for more information at www.sabeydatacenters.com.

The post Sabey Data Centers Taps OptiCool to Tackle High-Density Cooling appeared first on Data Center POST.

  •  

It’s Not An AI Bubble — We’re Witnessing the Next “Cloud” Revolution

Alphabet, Amazon, and Microsoft: these tech giants’ cloud services, Google Cloud, AWS, and Azure, respectively, are considered the driving force behind all current business computing, data, and mobile services. But back in the mid-2000s, they weren’t immediately seen as best bets on Wall Street. When Amazon launched AWS, analysts and investors were skeptical. They dismissed AWS as a distraction from Amazon’s core retail business. The Wall Street wizards did not understand the potential of cloud computing services. Many critics believed enterprises would never move their mission-critical workloads off-premises and into remote data centers.

As we all know, the naysayers were wrong, and cloud computing took off, redefining global business. It turbo-charged the economy, creating trillions in enterprise value while reducing IT costs, increasing application agility, and enabling new business models. In addition, the advent of cloud services lowered barriers to entry for startups and enabled rapid service scaling. Improving efficiency, collaboration, and innovation through scalable, pay-as-you-go access to computing resources was part of the formula for astounding success. The cloud pushed innovation to every corner of society, and those wise financiers misunderstood it. They could not see how this capital-intensive, long-horizon bet would ever pay off.

Now, we are at that moment again. This time with artificial intelligence.

Headlines appear every day saying that we’re in an “AI bubble.” But AI has gone beyond mere speculation as companies (hyperscalers) are in early-stage infrastructure buildout mode. Hyperscalers understand this momentum. They have seen this movie before with a different protagonist, and they know the story ends with transformation, not collapse. The need for transformative compute, power, and connectivity is the catalyst driving a new generation of data center buildouts. The applications, the productivity, and the tools are there. And unlike in the early cloud era, sustainable AI-related revenue is already a predictable line item in quarterly earnings.

The Data

Consider these most recent quarterly earnings:

  • Microsoft Q3 2025: Revenue: $70.1B, up 13%. Net income: $25.8B, up 18%. Intelligent Cloud grew 21% led by Azure, with 16 points of growth from AI services.
  • Amazon Q3 2025: Revenue: $180.2B, up 13%. AWS grew 20% to $33B. Trainium2, its second-gen AI chip, is a multi-billion-dollar line. AWS added 3.8 GW of power capacity in 12 months due to high demand.
  • Alphabet (Google Parent) Q3 2025: Revenue: $102.35B, up 16%. Cloud revenue grew 33% to $15.2B. Operating income: up nearly 85%, backed by $155B cloud backlog.
  • Meta Q3 2025: Revenue: $51.2B, up 26%. Increased infrastructure spend focused on expanding AI compute capacity. (4)

These are not the signs of a bubble. These are the signatures of a platform shift, and the companies leading it are already realizing returns while businesses weave AI into operations.

Bubble or Bottleneck

However, let’s be clear about this analogy: AI is not simply the next chapter of the cloud. Instead, it builds on and accelerates the cloud’s original mission: making extraordinary computing capabilities accessible and scalable. While the cloud democratized computing, AI is now democratizing intelligence and autonomy. This evolution will transform how we work, secure systems, travel, heal, build, educate, and solve problems.

Just as there were cloud critics, we now have AI critics. They say that aggressive capital spending, rising energy demand, and grid strain are signs that the market is already overextended. The pundits are correct about the spending:

  • Alphabet (Google) Q3 2025: ~US $24B on infrastructure oriented toward AI/data centers.
  • Amazon (AWS) Q3 2025: ~US $34.2B, largely on infrastructure/AI-related efforts.
  • Meta Q3 2025: US $19.4B directed at servers/data centers/network infrastructure for AI.
  • Microsoft Q3 2025: Roughly US $34.9B, of which perhaps US $17-18B or more is directly AI/data-center infrastructure (based on “half” of capex).

However, the pundits’ underlying argument is predicated on the same misunderstandings seen in the run-up to the cloud era: it confuses infrastructure investment with excess spending. The challenge with AI is not too much capacity; it is not enough. Demand is already exceeding grid capacity, land availability, power transmission expansion, and specialized equipment supply.

Bubbles do not behave that way; they generate idle capacity. For example, consider the collapse of Global Crossing. The company created the first transcontinental internet backbone by laying 100,000 route-miles of undersea fiber linking 27 countries.

Unfortunately, Global Crossing did not survive the bursting of the dot-com bubble and filed for bankruptcy in 2002. However, Level 3 (acquired by CenturyLink in 2017, now Lumen Technologies) knew better than to listen to Wall Street and acquired Global Crossing’s cables. Today, Lumen reports total 2024 revenue of $13.1 billion. Although the company doesn’t break out submarine cable revenue, it’s reasonable to infer that these cables still generate revenue in the low billions, a nice perpetual paycheck for not listening to the penny pinchers.

The AI economy is moving the value chain down the same path of sustainable profitability. But first, we must address factors such as data center proximity to grid strength, access to substation expansion, transformer supply, water access, cooling capacity, and land for modern power-intensive compute loads.

Power, Land, and the New Workforce

The cloud era prioritized fiber; the AI era is prioritizing power. Transmission corridors, utility partnerships, renewable integration, cooling systems, and purpose-built digital land strategies are essential for AI expansion. And with all that come the “pick and shovel” jobs building data centers, which Wall Street does not factor into the AI economy. You need look no further than Caterpillar’s Q3 2025 sales and revenue of $16.1 billion, up 10 percent.

Often overlooked in the tech hype are the industrial, real estate, and power grid requirements for data center builds, which require skilled workers such as electricians, steelworkers, construction crews, civil engineers, equipment manufacturers, utility operators, grid modernizers, and renewable developers. And once they’re up and running, data centers need cloud and network architects, cybersecurity analysts, and AI professionals.

As AI scales, it will lift industrial landowners, renewable power developers, utilities, semiconductor manufacturers, equipment suppliers, telecom networks, and thousands of local trades and service ecosystems, just as it’s lifting Caterpillar. It will accelerate infrastructure revitalization and strengthen rural and suburban economies. It will create new industries, just like the cloud did with Software as a Service (SaaS), e-commerce logistics, digital banking, streaming media, and remote-work platforms.

Conclusion

We’ve seen Wall Street mislabel some of the most significant tech expansions, from the telecom-hotel buildout of the 1990s to the co-location wave, global fiber expansion, hyperscale cloud, and now AI. Like all revolutionary ideas, these expansions were met with skepticism even though there was an inevitability to them. But stay focused: infrastructure comes before revenue, and revenue tends to arrive sooner than predicted, which brings home the point that AI is not inflating; it is expanding.

Smartphones reshaped consumer behavior within a decade; AI will reshape the industry in less than half that time. This is not a bubble. It is an infrastructure super-cycle predicated on electricity, land, silicon, and ingenuity. Now is the time to act: those who build power-first digital infrastructure are not in the hype business; they’re laying the foundation for the next century of economic growth.

# # #

About the Author

Ryne Friedman is an Associate at hi-tequity, where he leverages his commercial real estate expertise to guide strategic site selection and location analysis for data center development. A U.S. Coast Guard veteran and licensed Florida real estate professional, he previously supported national brands such as Dairy Queen, Crunch Fitness, Jimmy John’s, and 7-Eleven with market research and site acquisition. His background spans roles at SLC Commercial, Lambert Commercial Real Estate, DSA Encore, and DataCenterAndColocation. Ryne studied Business Administration and Management at Central Connecticut State University.

The post It’s Not An AI Bubble — We’re Witnessing the Next “Cloud” Revolution appeared first on Data Center POST.

  •  

Resolute CS and Equinix Close the Last-Mile Gap with Automated Connectivity Platform

Equinix customers can now order last-mile connectivity from enterprise edge locations to any of Equinix’s 270+ data centers globally, eliminating weeks of manual sourcing and the margin stacking that has long plagued enterprise network procurement.

The collaboration integrates Resolute CS’s NEXUS platform directly into the Equinix Customer Portal, giving enterprises transparent access to 3,200+ carriers across 180 countries. Rather than navigating opaque pricing through multiple intermediaries, customers can design, price, and order last-mile access with full visibility into costs and carrier options.

The Last-Mile Problem

While interconnection platforms like Equinix Fabric have transformed data center connectivity, the edge connectivity gap has remained a persistent friction point. Enterprises connecting branch offices or remote facilities to data centers typically face weeks-long sourcing cycles, opaque pricing structures with 2-4 layers of margin stacking (25-30% each), and inconsistent delivery across geographies.
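
The cost impact of margin stacking compounds with each intermediary layer. As a rough illustration (the $1,000 wholesale circuit price below is a hypothetical figure; the 25-30% per-layer margins and 2-4 layers are the ranges cited above):

```python
# Illustrative only: compound effect of margin stacking in last-mile
# procurement. The $1,000/month wholesale cost is a hypothetical figure.

def stacked_price(base_cost: float, layers: int, margin: float) -> float:
    """Price after each intermediary layer adds the same percentage margin."""
    return base_cost * (1 + margin) ** layers

base = 1000.0  # hypothetical wholesale monthly circuit cost, in dollars
for layers in (2, 3, 4):
    low = stacked_price(base, layers, 0.25)
    high = stacked_price(base, layers, 0.30)
    print(f"{layers} layers: ${low:,.0f}-${high:,.0f}")
```

Four layers at 30% nearly triple the wholesale price, which is the inefficiency a transparent, direct-to-carrier marketplace aims to remove.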

This inefficiency becomes particularly acute as AI workloads shift toward distributed architectures. Unlike centralized applications, AI infrastructure increasingly requires connectivity across edge locations, multiple data centers, and cloud platforms, creating exponentially more last-mile requirements that manual sourcing processes cannot efficiently handle.

How It Works

Resolute NEXUS automates route design, identifies diversity and resiliency options, simplifies cloud access paths, and coordinates direct ordering with carriers. The result: enterprises can manage connectivity from branch office to data center to cloud through a single portal, with transparent pricing and no hidden margin layers.

“We are empowering customers to design their network architecture without access constraints,” said Patrick C. Shutt, CEO and co-founder of Resolute CS. “With Equinix and Resolute NEXUS, customers can design, price, and order global last-mile access with full transparency, removing complexity and lowering costs.”

Benefits for Carriers Too

The platform also creates opportunities for network providers. By operating as a carrier-neutral marketplace, Resolute NEXUS gives providers direct visibility into qualified enterprise demand, improved infrastructure utilization, and lower customer acquisition costs, all without the traditional intermediary layers.

AI and Distributed Infrastructure

With Equinix operating 270+ AI-optimized data centers across 77 markets, automated last-mile sourcing directly addresses the connectivity requirements for distributed AI deployments. Enterprises can now provision edge-to-cloud connectivity with the speed and transparency expected from modern cloud services.

Equinix Fabric customers can access the platform immediately through the Equinix Customer Portal by navigating to “Find Service Providers” and searching for Resolute NEXUS – Last Mile Access.

To learn more, read the full press release here.

The post Resolute CS and Equinix Close the Last-Mile Gap with Automated Connectivity Platform appeared first on Data Center POST.

  •  

DC BLOX Secures $240 Million to Accelerate Hyperscale Data Center Growth Across the Southeast

DC BLOX has taken a significant step in accelerating digital transformation across the Southeast with the closing of a new $240 million HoldCo financing facility from Global Infrastructure Partners (GIP), a part of BlackRock. This strategic funding positions the company to rapidly advance its hyperscale data center expansion while reinforcing its role as a critical enabler of AI and cloud growth in the region.

Strategic growth financing

The $240 million facility from GIP provides fresh growth capital dedicated to DC BLOX’s hyperscale data center strategy, building on the company’s recently announced $1.15 billion and $265 million Senior Secured Green Loans. Together, these financings support the development and construction of an expanding portfolio of digital infrastructure projects designed to meet surging demand from hyperscalers and carriers.

Powering AI and cloud innovation

DC BLOX has emerged as a leader in connected data center and fiber network solutions, with a vertically integrated platform that includes hyperscale data centers, subsea cable landing stations, colocation, and fiber services. This model allows the company to offer end-to-end solutions for hyperscalers and communications providers seeking capacity, connectivity, and resiliency in high-growth Southeastern markets.

Community and economic impact

The new financing is about more than infrastructure; it is also about regional economic development. DC BLOX’s investments help bring cutting-edge AI and cloud technology into local communities, while driving construction jobs, tax revenues, and power grid enhancements that benefit both customers and ratepayers.

“We are excited to partner with GIP, a part of BlackRock, to fuel our ambitious growth goals,” said Melih Ileri, Chief Investment Officer at DC BLOX. “This financing underscores our commitment to serving communities in the Southeast by bringing cutting-edge AI and cloud technology investments with leading hyperscalers into the region, and creating economic development activity through construction jobs, taxes paid, and making investments into the power grid for the benefit of our customers and local ratepayers alike.”

Backing from leading investors

Michael Bogdan, Chairman of DC BLOX and Head of the Digital Infrastructure Group at Future Standard, highlighted that this milestone showcases the strength of the company’s vision and execution. Future Standard, a global alternative asset manager based in Philadelphia with over $86.0 billion in assets under management, leads DC BLOX’s sponsorship and recently launched its Future Standard Digital Infrastructure platform with more than $2 billion in assets. GIP, now a part of BlackRock and overseeing over $189 billion in assets, brings deep sector experience across energy, transport, and digital infrastructure, further validating DC BLOX’s role in shaping the Southeast as a global hub for AI-driven innovation.

Read the full release here.

The post DC BLOX Secures $240 Million to Accelerate Hyperscale Data Center Growth Across the Southeast appeared first on Data Center POST.

  •  

Registration Opens for Yotta 2026 as AI Infrastructure Demand Accelerates

Yotta 2026 is officially open for registration, returning Sept. 28–30 to Las Vegas with a larger footprint, a new venue and an expanded platform designed to meet the accelerating demands of AI-driven infrastructure. Now hosted at Caesars Forum, Yotta 2026 reflects both the rapid growth of the event and the unprecedented pace at which AI is reshaping compute, power and digital infrastructure worldwide.

As AI workloads scale faster than existing systems were designed to handle, infrastructure leaders are facing mounting challenges around power availability, capital deployment, resilience and integration across traditionally siloed industries. Yotta 2026 is built to convene the full ecosystem grappling with these realities, bringing together operators, hyperscalers, enterprise leaders, energy executives, investors, builders, policymakers and technology partners in one place.

Rebecca Sausner, CEO of Yotta, emphasizes that the event is designed for practical progress, not theoretical discussion. From chips and racks to networks, cooling, power and community engagement, AI is transforming every layer of digital infrastructure. Yotta 2026 aims to move conversations beyond vision and into real-world solutions that address scale, reliability and investment risk in an AI-first era.

A defining feature of Yotta 2026 is its advisory board-led approach to programming. The conference agenda is being developed in collaboration with the newly announced Yotta Advisory Board, which includes senior leaders from organizations spanning AI, cloud, energy, finance and infrastructure, including OpenAI, Oracle, Schneider Electric, KKR, Xcel Energy, GEICO and the Electric Power Research Institute (EPRI). This cross-sector guidance ensures the program reflects how the industry actually operates, as an interconnected system where decisions around power, compute, capital, design and policy are inseparable.

The 2026 agenda will focus on the most urgent challenges shaping the AI infrastructure era. Key themes include AI infrastructure and compute density, power generation and grid interconnection, capital formation and investment risk, design and operational resilience, and policy and public-private alignment. Together, these topics offer a market-driven view of how digital infrastructure must be designed, financed and operated to support AI at scale.

With an anticipated 6,000+ AI and digital infrastructure leaders in attendance, Yotta 2026 will feature a significantly expanded indoor and outdoor expo hall, curated conference programming and immersive networking experiences. Hosted at Caesars Forum, the event is designed to support both strategic planning and hands-on execution, creating space for collaboration across the entire infrastructure value chain.

Early registration is now open, with passes starting at $795 and discounted rates available for early registrants. As AI continues to drive unprecedented infrastructure demand, Yotta 2026 positions itself as a critical forum for the conversations and decisions shaping the future of compute, power and digital infrastructure.

To learn more or register, visit yotta-event.com.

The post Registration Opens for Yotta 2026 as AI Infrastructure Demand Accelerates appeared first on Data Center POST.

  •  

ESI Expands HVO Fuel Services to Power Data Center Sustainability and Net‑Zero 2030 Ambitions

ESI Total Fuel Management is expanding its Hydrotreated Vegetable Oil (HVO/R99) services to help data centers and other mission-critical facilities advance their sustainability strategies without sacrificing reliability. With this move, the company is deepening its role as a long-term partner for operators pursuing Net-Zero 2030 goals in an increasingly demanding digital infrastructure landscape.​

Advancing data center sustainability

Across the data center industry, operators are under growing pressure to reduce the environmental impact of standby power systems while maintaining assured uptime. ESI draws on decades of experience in fuel lifecycle management, having previously championed ultra-low sulfur diesel adoption, to guide customers through the transition to renewable diesel.​

To support practical and scalable adoption, ESI has established the first secure HVO/R99 supply chain on the East Coast, giving operators dependable access to renewable diesel as part of a long-term fuel strategy. This infrastructure enables data center and mission-critical operators to integrate HVO into their operations as a realistic step toward emissions reduction and operational continuity.​

Renewable diesel performance benefits

HVO/R99 can reduce carbon emissions by up to 90 percent compared with conventional diesel, while maintaining strong cold-weather performance and long-term fuel stability suited to standby generator storage cycles. As a drop-in fuel, it requires no modifications to existing infrastructure and directly supports Scope 1 emissions reduction initiatives.​

Integrated lifecycle approach

Within ESI’s broader portfolio, HVO is one component of a comprehensive approach encompassing fuel quality, monitoring, compliance, and system resiliency.

“Sustainability goals do not replace the need for resiliency, and they can be complementary,” said Alex Marcus, CEO and president of ESI Total Fuel Management. “Our focus is helping customers implement solutions that are technically sound and operationally proven. By managing the entire fuel lifecycle, from supply and storage to monitoring, consumption, and pollution control, we help customers reduce environmental impact while maintaining resilient, mission-critical systems.”​

Supporting Net-Zero 2030 objectives

For data center operators pursuing Net-Zero 2030, ESI provides the engineering expertise, infrastructure, and operational support needed to move beyond isolated initiatives toward coordinated, data-driven fuel strategies. This combination of renewable fuel options and full lifecycle management helps strengthen both sustainability and resiliency for mission-critical environments.​

Read the full release here.

The post ESI Expands HVO Fuel Services to Power Data Center Sustainability and Net‑Zero 2030 Ambitions appeared first on Data Center POST.

  •  

PowerBridge Appoints Debra L. Raggio as EVP and General Counsel

PowerBridge’s mission has always centered on developing powered, gigawatt-scale data center campuses that combine energy infrastructure and digital infrastructure. As demand for gigawatt-scale campuses accelerates across the U.S., the company continues to build a team designed to meet that movement. The appointment of Debra L. Raggio as Executive Vice President and General Counsel marks an important milestone in that journey.

Debra joins PowerBridge at a time of significant growth, as the convergence of energy, power, and digital infrastructure continues to reshape how large-scale data center campuses are developed. With more than 40 years of experience across the energy and digital infrastructure industries, specializing in natural gas, electricity, and data center markets, she brings deep regulatory and commercial expertise to the role. At PowerBridge, she will oversee legal, regulatory, environmental, government affairs, and communications, while serving as a strategic advisor to Founder and CEO Alex Hernandez and the Board.

Throughout her career, Debra has been a leading national voice in shaping regulatory frameworks across energy and digital infrastructure sectors in the United States, with experience spanning power markets such as PJM and ERCOT. Her background includes private practice at Baker Botts and executive leadership roles at major energy companies, including Talen Energy Corp.

Debra was also a founding management team member of Cumulus Data LLC, a multi-gigawatt data center campus co-located with the Susquehanna Nuclear generation station in Pennsylvania. Her regulatory, commercial, and legal leadership helped enable the development and execution of the project, culminating in its sale to Amazon in 2024. Today, that campus is the foundation for an approximately $20 billion investment supporting the continued expansion of Amazon Web Services.

That experience directly aligns with PowerBridge’s approach to combining power generation, electric campus infrastructure, pad-ready data center sites, and fiber infrastructure to serve growing data center demand while adding needed power supply to regional electric grids. Reflecting on her decision to join the company, Debra shared, “I am honored to be joining CEO Alex Hernandez and the team of executives I worked with in the formation and execution of the Cumulus Data Center Campus. I look forward to helping PowerBridge become the country’s premier powered-campus development company at multi-gigawatt scale, combining power generation, electric campus infrastructure, pad-ready data center sites, and fiber infrastructure to serve the growing need for data centers, while adding needed power supply to the electric grids, including PJM and ERCOT.”

Debra’s appointment reinforces PowerBridge’s focus on regulatory leadership, strategic execution, and disciplined growth as the company advances powered, gigawatt-scale data center campuses across the United States.

Click here to read the full press release.

The post PowerBridge Appoints Debra L. Raggio as EVP and General Counsel appeared first on Data Center POST.

  •  

Nostrum Data Centers Taps JLL Expertise, Powering Spain’s Rise as a Connectivity Hub

Spain’s digital infrastructure landscape is entering a pivotal new phase, and Nostrum Data Centers is positioning itself at the center of that transformation. By engaging global real estate and data center advisory firm JLL, Nostrum is accelerating the development of a next-generation, AI-ready data center platform designed to support Spain’s emergence as a major connectivity hub for Europe and beyond.

Building the Foundation for an AI-Driven Future

Nostrum Data Centers, the digital infrastructure division of Nostrum Group, is developing a portfolio of sustainable, high-performance data centers purpose-built for artificial intelligence, cloud computing, and high-density workloads. In December 2025, the company announced that its data center assets will be available in 2027, with land and power already secured across all sites, an increasingly rare advantage in today’s constrained infrastructure markets.

The platform includes 500 MW of secured IT capacity, with an additional 300 MW planned for future expansion, bringing total planned development to 800 MW across Spain. This scale positions Nostrum as one of the country’s most ambitious digital infrastructure developers at a time when demand for compute capacity is accelerating across Europe.

Strategic Locations, Connected by Design

Nostrum’s six data center developments are strategically distributed throughout Spain to capitalize on existing power availability, fiber routes, internet exchanges, and subsea connectivity. This geographic diversity allows customers to deploy capacity where it best supports latency-sensitive workloads, redundancy requirements, and long-term growth strategies.

Equally central to Nostrum’s approach is sustainability. Each facility is designed in alignment with the United Nations Sustainable Development Goals (SDGs), delivering industry-leading efficiency metrics, including a Power Usage Effectiveness (PUE) of 1.1 and zero Water Usage Effectiveness (WUE), eliminating water consumption for cooling.
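These two efficiency metrics are straightforward ratios, and the figures above can be sanity-checked with a minimal sketch. The helper functions and example values below are illustrative assumptions for demonstration, not Nostrum's actual measurements:

```python
# PUE = total facility energy / IT equipment energy, so a PUE of 1.1
# means only 10% overhead beyond the IT load itself.
# WUE = liters of water consumed / kWh of IT energy; a WUE of zero
# means the cooling design consumes no water.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_kwh

def wue(water_liters: float, it_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water per kWh of IT energy."""
    return water_liters / it_kwh

# Illustrative numbers: a 100 MWh IT load in a PUE-1.1 facility
# draws 110 MWh in total.
assert pue(110_000, 100_000) == 1.1
# Waterless cooling gives a WUE of zero regardless of IT load.
assert wue(0, 100_000) == 0.0
```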

Why JLL? And Why Now?

To support this next phase of growth, Nostrum has engaged JLL to strengthen its go-to-market strategy and customer engagement efforts. JLL brings deep global experience in data center advisory, site selection, and market positioning, helping operators translate technical infrastructure into compelling value for hyperscalers, enterprises, and AI-driven tenants.

“Nostrum Data Centers has a long-term vision for balancing innovation and sustainability. We offer our customers speed to market and scalability throughout our various locations in Spain, all while leading a green revolution to ensure development is done the right way as we position Spain as the next connectivity hub,” says Gabriel Nebreda, Chief Executive Officer at Nostrum Group. “We are confident that our engagement with JLL will be able to help us bolster our efforts and achieve our long-term vision.”

From JLL’s perspective, Spain presents a unique convergence of advantages.

“Spain has a unique market position with its access to robust power infrastructure, its proximity to Points of Presence (PoPs), internet exchanges, subsea connectivity, and being one of the lowest total cost of ownership (TCO) markets,” says Jason Bell, JLL Senior Vice President of Data Center and Technology Services in North America. “JLL is excited to be working with Nostrum Data Centers, providing our expertise and guidance to support their quest to be a leading data center platform in Spain, as well as position Spain as the next connectivity hub in Europe and beyond.”

Advancing Spain’s Role in the Global Digital Economy

With JLL’s support, Nostrum Data Centers is further refining its strategy to meet the technical and operational demands of AI and high-density computing without compromising on efficiency or sustainability. The result is a platform designed not just to meet today’s requirements, but to anticipate what the next decade of digital infrastructure will demand.

As hyperscalers, AI developers, and global enterprises look for scalable, energy-efficient alternatives to traditional European hubs, Spain and Nostrum Data Centers are increasingly part of the conversation.

Learn more about Nostrum Data Centers: www.thenostrumgroup.com/nostrum-data-centres

Click here to read the press release!

The post Nostrum Data Centers Taps JLL Expertise, Powering Spain’s Rise as a Connectivity Hub appeared first on Data Center POST.

  •  

Interconnection and Colocation: The Backbone of AI-Ready Infrastructure

Originally posted on 1547Realty.

AI is changing what infrastructure needs to do. It is no longer enough to provide power, cooling and a basic network connection. Modern AI and high-performance computing workloads depend on constant access to large data sets and fast communication between systems. That makes interconnection an essential part of the environment that supports them.

Traditional cloud environments were not built for dense GPU clusters or latency-sensitive applications. This has helped drive the rise of neocloud providers, which focus on specialized compute and rely on data centers for the physical setting in which that compute operates.

Industry reporting from RCR Wireless notes that many neocloud providers choose to colocate in established facilities instead of building new data centers. This gives them faster speed to market and direct access to network ecosystems that would take years to recreate on their own. In this context, data centers with strong connectivity play a central role.

1547 operates facilities that combine space and power with the network access needed for AI and neocloud deployments. These environments allow operators to place infrastructure where it can perform as intended.

The Shift from Cloud First to Cloud Right

For many years, the default approach for new applications was simple: put it in the cloud. That cloud-first mindset is now giving way to a cloud-right strategy. The question is no longer only whether something can run in the cloud, but whether it should.

AI and high-performance workloads often need to run close to users, to data sources, or along specific network routes. They require predictable latency and steady throughput. When model training or inference spans many GPUs across different clusters, even small delays can affect performance and cost.

Analysts have observed that organizations are matching each workload to the environment that fits it best. As RTInsights highlights, not every workload performs well in a single centralized cloud. Some applications remain in hyperscale environments. Others move to edge sites, private clouds or colocation facilities that offer greater control over performance. Neocloud operators support this shift by offering GPU-focused infrastructure from locations chosen for both efficiency and access to network routes.

To do that, they need more than space. They need carriers, cloud on-ramps, internet exchanges and private connection options. They need a fabric that lets them move data efficiently between customers, partners, and providers. Connectivity within the facility brings these elements together and supports cloud-right placement.

1547 facilities support this shift by giving operators access to diverse networks in key markets. These environments allow AI workloads to sit where they perform best while staying connected to the wider ecosystem.

To continue reading, please click here.

The post Interconnection and Colocation: The Backbone of AI-Ready Infrastructure appeared first on Data Center POST.

  •  

The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling

As data centers evolve to support AI, edge computing, and high-density cloud architectures, the challenge is no longer just maintaining an optimal power usage effectiveness (PUE); it is achieving thermal reliability under unprecedented compute loads. Direct-to-chip liquid cooling continues to push the envelope on heat transfer performance, but one underestimated element in overall system reliability is the material composition of the fluid conveyance network itself. The hoses and fittings that transport coolant through these systems operate in extreme thermal and chemical environments, and their design directly influences uptime, maintenance intervals, and total cost of ownership.

Why Material Selection Matters

At its core, a liquid cooling system is only as reliable as its weakest component. If hoses, fittings, or seals fail due to poor material compatibility, the result could be leaks, contamination, or shortened system life, leading to downtime and costly remediation. Rubber and rubber-like materials are critical in hose assemblies, as they must balance flexibility for installation and serviceability with long-term resistance to temperature, pressure, permeation, and coolants and their additives.

The challenge lies in the fact that not all rubbers and rubber-like materials are created equal. Each formulation is a complex recipe of polymers, plasticizers, fillers, and curing agents designed to deliver specific performance characteristics. The wrong selection can lead to issues such as fluid permeation, premature aging, or contamination of the coolant. In mission-critical environments like data centers, where even minor disruptions are unacceptable, this risk is magnified.

Temperature and Chemical Compatibility

Although data center cooling systems typically operate at temperatures between 45°C (113°F) and 65°C (149°F), sometimes reaching 100°C (212°F), those ranges can stress certain materials. Nitrile rubber, for example, performs well in oil-based environments but ages quickly in water-glycol systems, especially at higher temperatures. This can cause hardening, cracking, or coolant contamination.

By contrast, ethylene propylene diene monomer (EPDM) rubber has excellent resistance to water, glycols, and the additives commonly used in data center coolants, such as corrosion inhibitors and biocides. EPDM maintains its properties across the required operating range, making it a proven choice for direct-to-chip applications.

However, not all EPDM is the same. Developing the right EPDM for the application demands a deep understanding of polymer chemistry, filler interactions, and process control to precisely balance flexibility, heat resistance, and long-term stability.

Additionally, two curing processes, sulfur-cured and peroxide-cured, produce different performance outcomes. Sulfur-cured EPDM, while widely used, introduces zinc ions during the curing process. When exposed to deionized water, these ions can leach into the coolant, causing contamination and potentially degrading system performance. Peroxide-cured EPDM avoids this issue, offering higher temperature resistance, lower permeation rates, and greater chemical stability, making it the superior choice for modern liquid cooling.

Even among peroxide-cured EPDM compounds, long-term performance is not uniform. While the cure system defines the crosslink chemistry, other formulation choices, particularly filler selection and dispersion, can influence how the material performs over time.

The use of fillers and additives is common in rubber compounding. These ingredients are often selected to control cost, improve processability, or achieve certain performance characteristics such as flame resistance, strength, or flexibility.

The challenge is that some filler systems perform well during initial qualification but are not optimized for the long-term exposures faced in the operating environment. Certain fillers or processing aids can slowly migrate over time, introducing extractables into the coolant or subtly altering elastomer properties. For data center applications, EPDM compounds must therefore be engineered with a focus on long-term stability, reinforcing why EPDM should not be treated as a commodity material in critical cooling systems.

Risks of Non-Compatible Materials

Material incompatibility can have several cascading effects:

  • Contamination – Non-compatible materials can leach extractables into the coolant, leading to discoloration, chemical imbalance, and reduced thermal efficiency.
  • Permeation – Some rubbers allow fluid to slowly migrate through the hose walls, causing coolant loss or altering the fluid mixture over time.
  • Premature Failure – Elevated temperatures can accelerate aging, leading to cracking, swelling, or loss of mechanical strength.
  • Leakage – Rubber under compression may deform over time, jeopardizing seal integrity if not properly formulated for resistance to compression set and tear.
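The compatibility logic described above can be expressed as a first-pass screen. The sketch below is a hypothetical lookup distilled from this article's guidance; the material names track the text, but the temperature limits and pass/fail rules are illustrative assumptions, and real qualification requires the extended aging and immersion testing described later, not a table:

```python
# Hypothetical screening table: material -> (assumed max continuous
# temperature in °C, compatible with water-glycol, leaches zinc ions).
# Values are illustrative, not published ratings.
MATERIALS = {
    "nitrile":             (100, False, False),
    "sulfur-cured EPDM":   (120, True,  True),
    "peroxide-cured EPDM": (150, True,  False),
}

def screen(material: str, operating_temp_c: float, coolant: str) -> bool:
    """Return True if the material passes this first-pass screen."""
    max_temp, glycol_ok, leaches_zinc = MATERIALS[material]
    if operating_temp_c > max_temp:
        return False  # premature aging: hardening, cracking, swelling
    if coolant == "water-glycol" and not glycol_ok:
        return False  # base polymer ages quickly in water-glycol
    if coolant in ("water-glycol", "deionized water") and leaches_zinc:
        return False  # zinc-ion leaching contaminates the coolant
    return True

# At a typical 65°C water-glycol loop, only peroxide-cured EPDM passes:
assert not screen("nitrile", 65, "water-glycol")
assert not screen("sulfur-cured EPDM", 65, "water-glycol")
assert screen("peroxide-cured EPDM", 65, "water-glycol")
```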

In a recent two-week aging test at 80°C using a water-glycol coolant, hoses made of nitrile and sulfur-cured EPDM showed visible discoloration of the coolant, indicating leaching and breakdown of the material. Peroxide-cured EPDM, on the other hand, maintained stability, demonstrating its compatibility and reliability in long-term data center applications.

The Gates Approach

Drawing on lessons from mission-critical industries that have managed thermal challenges for decades, Gates engineers apply material science rigor to the design of liquid cooling hoses for data center applications.

Rather than relying solely on initial material ratings or short-term qualification criteria, Gates begins by tailoring compound design to the operating environment. This includes deliberate control of polymer selection, filler systems, and cure chemistry to manage long-term aging behavior, extractables, permeation, and retention of mechanical properties over time in high-purity coolant systems.

Compounds are validated through extended aging and immersion testing that reflects real operating conditions, including exposure to heat, deionized water, and water-glycol coolants. This allows potential material changes to be identified and addressed during development, before installation in the field.

This material-science-driven process is applied across Gates liquid cooling platforms, including the Data Master, Data Master MegaFlex, and newly released Data Master Eco product lines. By engineering for long-term stability rather than only initial compliance, Gates designs hose solutions intended to support reliable operation, predictable maintenance intervals, and extended service life in direct-to-chip liquid-cooled data center environments.

Looking Ahead

As data centers continue to scale, thermal management solutions must adapt in parallel. Advanced architectures, higher rack densities, and growing environmental regulations all point to a future where liquid cooling is standard. In this environment, material selection is no longer a secondary consideration; it is foundational to system reliability.

Operators who prioritize material compatibility in fluid conveyance lines will benefit from longer service intervals, improved coolant stability, and reduced risk of downtime. In other words, the proper rubber formulation doesn’t just move fluid, it moves the industry forward.

At Gates, sustainable, high-performance cooling begins with the details. By focusing on the science of materials, we help ensure that data center operators can confidently deploy liquid cooling systems designed for the challenges of today and the innovations of tomorrow.

# # #

About the Author

Chad Chapman is a Mechanical Engineer with over 20 years of experience in the fluid power industry. He currently serves as a Product Application Engineering Manager at Gates, where he leads a team that provides technical guidance, recommendations, and innovative solutions to customers utilizing Gates products and services.

Driven by a passion for problem-solving, Chad thrives on collaborating with customers to understand their unique challenges and deliver solutions that optimize performance. He is energized by learning about new applications and technologies, especially where insights can be shared across industries. At Gates, he has been exploring the emerging field of direct-to-chip liquid cooling, an exciting extension of his deep expertise in thermal management. The rapid advancements in IT technology and AI have made his journey an inspiring and rewarding learning experience.

The post The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling appeared first on Data Center POST.

  •