In Short: Oswal Greenzo Energies has secured a 5 MW green hydrogen project at Deendayal Port, marking a significant step in India’s hydrogen journey. The project reflects growing momentum in clean hydrogen adoption and highlights the role of ports as emerging hubs for green energy production, industrial decarbonization, and future-ready sustainable infrastructure.
In Detail: Oswal Greenzo Energies has achieved a major milestone by winning a 5 MW green hydrogen project at Deendayal Port, strengthening its position in India’s emerging hydrogen economy. This project represents a strategic move toward developing large-scale green hydrogen infrastructure and demonstrates the increasing role of private players in supporting national clean energy ambitions.
The project involves setting up a green hydrogen production facility powered by renewable energy sources, ensuring that hydrogen is generated without carbon emissions. This aligns with India’s broader objective of promoting green hydrogen as a key solution for decarbonizing hard-to-abate sectors such as shipping, refining, fertilizers, and heavy industries.
Deendayal Port, one of India’s major ports, is positioning itself as a green energy hub by integrating clean technologies into its operations. By hosting a green hydrogen facility, the port aims to reduce its carbon footprint, improve energy efficiency, and explore alternative fuels for port equipment, logistics operations, and maritime activities.
For Oswal Greenzo Energies, this project strengthens its presence in the green hydrogen segment and expands its clean energy portfolio beyond conventional renewables. The company is expected to leverage its technical expertise to develop efficient hydrogen production systems that can be scaled in the future as demand increases.
The 5 MW capacity of the project, while modest in scale, is significant as a pilot and demonstration model for larger hydrogen initiatives. Such projects help validate technical feasibility, assess cost structures, and build operational experience that is essential for accelerating commercial adoption across different sectors.
Green hydrogen is increasingly being viewed as a cornerstone of India’s energy transition strategy. It offers a pathway to reduce dependence on fossil fuels, enhance energy security, and create new industrial value chains around electrolyzers, storage systems, transport infrastructure, and downstream hydrogen applications.
Ports are emerging as ideal locations for green hydrogen projects due to their access to land, renewable power connectivity, and proximity to industrial consumers. Hydrogen produced at ports can be used for bunkering, export, industrial fuel, or conversion into green ammonia and other derivatives for global markets.
The project is also expected to contribute to the development of India’s hydrogen ecosystem by encouraging investments, fostering innovation, and creating employment opportunities. As more ports and industrial clusters adopt green hydrogen, it can accelerate the creation of a nationwide hydrogen supply network.
Overall, the green hydrogen project at Deendayal Port marks a meaningful step in India’s clean energy journey. It highlights the growing confidence in hydrogen technologies, the role of infrastructure assets like ports in driving sustainability, and the potential of green hydrogen to reshape the future of energy, industry, and transportation.
Truck stop operator Pilot Travel Centers has entered into an agreement with Tesla to install charging stations for Tesla’s Semi heavy-duty electric trucks.
The Tesla charging stations will be built at select Pilot locations in California, Georgia, Nevada, New Mexico and Texas, along I-5, I-10 and “several major corridors where the need for heavy-duty charging is highest.” The first sites are expected to open in Summer 2026.
Each location will host four to eight charging stalls featuring Tesla’s V4 cabinet charging technology, which can deliver up to 1.2 megawatts of power at each stall.
Pilot says that in the future, the sites may be expanded to be compatible with heavy-duty electric vehicles from other manufacturers.
“Heavy-duty charging is yet another extension of our exploration into alternative fuel offerings, and we’re happy to partner with a leader in the space that provides turnkey solutions and deploys them quickly,” said Shannon Sturgil, Senior VP, Alternative Fuels at Pilot.
The WEX Fleet card now combines gasoline and public EV charging transactions into one card, one account, and one invoice. WEX says it is the first fuel card provider to unify fueling and public EV charging payments across its proprietary closed-loop fuel network, targeting mixed-energy fleets that operate internal combustion engine vehicles and EVs.
The card works at more than 175,000 WEX-accepting public charging ports and at more than 90% of US gas stations that accept WEX cards. The upgraded card embeds RFID technology directly into the standard WEX Fleet card, which WEX says removes the need for a separate EV charging card or mobile app to activate and pay for a charging session. WEX says using its closed-loop fleet network, rather than open-loop general-purpose card networks, enables end-to-end transaction control, richer data, stronger security, and fleet-specific purchase controls while maintaining existing fueling workflows.
For operations teams, WEX offers unified reporting, purchase controls via the DriverDash app, and a single credit line spanning charging and fueling transactions. EV charging can be enabled immediately or added during the next scheduled renewal, and existing EV-enabled customers can request updated cards in the WEX online customer portal.
The Electrify America EV charging network, which boasts more than 5,000 chargers, is now accessible for GM EV customers via the automaker’s branded smartphone apps. Drivers can use their myChevrolet, myGMC or myCadillac apps to find nearby charging stations with real-time availability information, plan routes, monitor charging session status, and pay for charging directly in the app.
Multiple charging apps are the bane of road-tripping EV drivers. Adding support for charging networks within branded apps is one way OEMs can improve the charging experience for their customers. GM aims to connect EV drivers to an expanding ecosystem of public charging infrastructure. GM owners can navigate to the Public Charging page in their myBrand apps and look for supported networks.
“We’re collaborating across the industry to deliver not just more chargers, but better public charging experiences,” said Wade Sheffer, Vice President, GM Energy. “Our work with Electrify America helps make public charging easier to access for GM EV drivers.”
“As EV travel continues to grow, so does the need for convenient charging experiences,” said Robert Barrosa, CEO and President of Electrify America. “Through this integration, GM EV drivers have more Hyper-Fast chargers to choose from and a seamless experience they can count on nationwide.”
AI is increasingly steering the data center industry toward new operational practices, where automation, analytics and adaptive control are paving the way for “dark” — or lights-out, unstaffed — facilities. Cooling systems, in particular, are leading this shift. Yet despite AI’s positive track record in facility operations, one persistent challenge remains: trust.
In some ways, AI faces a similar challenge to that of commercial aviation several decades ago. Even after airlines had significantly improved reliability and safety performance, making air travel not only faster but also safer than other forms of transportation, it still took time for public perceptions to shift.
That same tension between capability and confidence lies at the heart of the next evolution in data center cooling controls. As AI models improve in performance and become better understood, more transparent and explainable, the question is no longer whether AI can manage operations autonomously, but whether the industry is ready to trust it enough to turn off the lights.
AI’s place in cooling controls
Thermal management systems, such as computer room air handlers (CRAHs), computer room air conditioners (CRACs) and airflow management equipment, represent the front line of AI deployment in cooling optimization. Their modular nature enables the incremental adoption of AI controls, providing immediate visibility and measurable efficiency gains in day-to-day operations.
AI can now be applied across four core cooling functions:
Dynamic setpoint management. Continuously recalibrates temperature, humidity and fan speeds to match load conditions.
Thermal load forecasting. Predicts shifts in demand and makes adjustments in advance to prevent overcooling or instability.
Airflow distribution and containment. Uses machine learning to balance hot and cold aisles and stage CRAH/CRAC operations efficiently.
Fault detection, predictive and prescriptive diagnostics. Identifies coil fouling, fan oscillation, or valve hunting before they degrade performance.
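As an illustration of the first function above, dynamic setpoint management can be sketched as a simple proportional feedback adjustment. This is a minimal sketch with hypothetical temperatures, gain, and limits, not any vendor's actual control logic:

```python
# Illustrative sketch of dynamic setpoint management (hypothetical values,
# not vendor code): nudge a CRAH supply-air setpoint toward a target rack
# inlet temperature while staying inside an assumed safe envelope.

SETPOINT_MIN_C = 18.0   # assumed lower safety bound
SETPOINT_MAX_C = 27.0   # assumed upper bound
GAIN = 0.5              # assumed proportional gain per adjustment step

def adjust_setpoint(current_setpoint: float,
                    rack_inlet_temp: float,
                    target_inlet_temp: float) -> float:
    """Move the setpoint proportionally to the inlet-temperature error."""
    error = target_inlet_temp - rack_inlet_temp
    new_setpoint = current_setpoint + GAIN * error
    # Clamp to the safe envelope so the controller can never command an
    # out-of-range condition, mirroring the "operational limits" idea above.
    return max(SETPOINT_MIN_C, min(SETPOINT_MAX_C, new_setpoint))

# Example: the inlet is running 1.5 C colder than target, so the setpoint
# is raised (less cooling), recovering energy spent on overcooling.
print(adjust_setpoint(21.0, 22.5, 24.0))  # 21.75
```

A production controller would of course fold in forecasting, multiple sensors and staging logic, but the clamp-and-adjust loop captures the basic continuous recalibration the list describes.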
A growing ecosystem of vendors is advancing AI-driven cooling optimization across both air- and water-side applications. Companies such as Vigilent, Siemens, Schneider Electric, Phaidra and Etalytics offer machine learning platforms that integrate with existing building management systems (BMS) or data center infrastructure management (DCIM) systems to enhance thermal management and efficiency.
Siemens’ White Space Cooling Optimization (WSCO) platform applies AI to match CRAH operation with IT load and thermal conditions, while Schneider Electric, through its Motivair acquisition, has expanded into liquid cooling and AI-ready thermal systems for high-density environments. In parallel, hyperscale operators, such as Google and Microsoft, have built proprietary AI engines to fine-tune chiller and CRAH performance in real time. These solutions range from supervisory logic to adaptive, closed-loop control. However, all share a common aim: improve efficiency without compromising compliance with service level agreements (SLAs) or operator oversight.
The scope of AI adoption
While IT cooling optimization has become the most visible frontier, conversations with AI control vendors reveal that most mature deployments still begin at the facility water loop rather than in the computer room. Vendors often start with the mechanical plant and facility water system because these areas are governed by a limited set of variables (temperature differentials, flow rates and pressure setpoints) and can be treated as closed, well-bounded systems.
This makes the water loop a safer proving ground for training and validating algorithms before extending them to computer room air cooling systems, where thermal dynamics are more complex and influenced by containment design, workload variability and external conditions.
Predictive versus prescriptive: the maturity divide
AI in cooling is evolving along a maturity spectrum — from predictive insight to prescriptive guidance and, increasingly, to autonomous control. Table 1 summarizes the functional and operational distinctions among these three stages of AI maturity in data center cooling.
Table 1. Predictive, prescriptive, and autonomous AI in data center cooling
Most deployments today stop at the predictive stage, where AI enhances situational awareness but leaves action to the operator. Achieving full prescriptive control will require not only deeper technical sophistication but also a shift in mindset.
Technically, it is more difficult to engineer because the system must not only forecast outcomes but also choose and execute safe corrective actions within operational limits. Operationally, it is harder to trust because it challenges long-held norms about accountability and human oversight.
The divide, therefore, is not only technical but also cultural. The shift from informed supervision to algorithmic control is redefining the boundary between automation and authority.
AI’s value and its risks
No matter how advanced the technology becomes, cooling exists for one reason: maintaining environmental stability and meeting SLAs. AI-enhanced monitoring and control systems support operating staff by:
Predicting and preventing temperature excursions before they affect uptime.
Detecting system degradation early and enabling timely corrective action.
Optimizing energy performance under varying load profiles without violating SLA thresholds.
Yet efficiency gains mean little without confidence in system reliability. It is also important to clarify that AI in data center cooling is not a single technology. Control-oriented machine learning models, such as those used to optimize CRAHs, CRACs and chiller plants, operate within physical limits and rely on deterministic sensor data. These differ fundamentally from language-based AI models such as GPT, where “hallucinations” refer to fabricated or contextually inaccurate responses.
At the Uptime Network Americas Fall Conference 2025, several operators raised concerns about AI hallucinations — instances where optimization models generate inaccurate or confusing recommendations from event logs. In control systems, such errors often arise from model drift, sensor faults, or incomplete training data, not from the reasoning failures seen in language-based AI. When a model’s understanding of system behavior falls out of sync with reality, it can misinterpret anomalies as trends, eroding operator confidence faster than it delivers efficiency gains.
The discomfort is not purely technical; it is also human. Many data center operators remain uneasy about letting AI take the controls entirely, even as they acknowledge its potential. In AI’s ascent toward autonomy, trust remains the runway still under construction.
Critically, modern AI control frameworks are being designed with built-in safety, transparency and human oversight. For example, Vigilent, a provider of AI-based optimization controls for data center cooling, reports that its optimizing control switches to “guard mode” whenever it is unable to maintain the data center environment within tolerances. Guard mode brings on additional cooling capacity (at the expense of power consumption) to restore SLA-compliant conditions; typical triggers include rapid drift or temperature hot spots. A manual override option also enables the operator to take control, informed by monitoring and event logs.
This layered logic provides operational resiliency by enabling systems to fail safely: guard mode ensures stability, manual override guarantees operator authority, and explainability, via decision-tree logic, keeps every AI action transparent. Even in dark-mode operation, alarms and reasoning remain accessible to operators.
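The layered fail-safe behavior described above can be approximated as a small mode-selection routine. The mode names, tolerance value, and structure here are illustrative assumptions for the sketch, not Vigilent's (or any vendor's) implementation:

```python
# Illustrative state machine for layered fail-safe cooling control.
# Mode names and the tolerance threshold are assumptions for this sketch,
# not any vendor's actual logic.

OPTIMIZE, GUARD, MANUAL = "optimize", "guard", "manual"
TOLERANCE_C = 2.0  # assumed allowable deviation from the target temperature

def next_mode(mode: str, temp_error_c: float, operator_override: bool) -> str:
    """Pick the control mode for the next cycle.

    - Manual override always wins: operator authority is preserved.
    - Guard mode engages when the environment drifts outside tolerance,
      trading power consumption for stability.
    - Otherwise the optimizer keeps running normally.
    """
    if operator_override:
        return MANUAL
    if abs(temp_error_c) > TOLERANCE_C:
        return GUARD
    return OPTIMIZE

print(next_mode(OPTIMIZE, 0.5, False))  # optimize
print(next_mode(OPTIMIZE, 3.2, False))  # guard: drift beyond tolerance
print(next_mode(GUARD, 3.2, True))      # manual: operator takes control
```

The ordering of the checks encodes the hierarchy in the text: human authority sits above the guard rail, which in turn sits above the optimizer.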
These frameworks directly address one of the primary fears among data center operators: losing visibility into what the system is doing.
Outlook
Gradually, the concept of a dark data center, one operated remotely with minimal on-site staff, has shifted from being an interesting theory to a desirable strategy. In recent years, many infrastructure operators have increased their use of automation and remote-management tools to enhance resiliency and operational flexibility, while also compensating for low staffing levels. Cooling systems, particularly those governed by AI-assisted control, are now central to this operational transformation.
Operational autonomy does not mean abandoning human control; it means achieving reliable operation without the need for constant supervision. Ultimately, a dark data center is not about turning off the lights; it is about turning on trust.
The Uptime Intelligence View
AI in thermal management has evolved from an experimental concept into an essential tool, improving efficiency and reliability across data centers. The next step — coordinating facility water, air and IT cooling liquid systems — will define the evolution toward greater operational autonomy. However, the transition to “dark” operation will be as much cultural as it is technical. As explainability, fail-safe modes and manual overrides build operator confidence, AI will gradually shift from being a copilot to autopilot. The technology is advancing rapidly; the question is how quickly operators will adopt it.
Datalec Precision Installations (DPI) and PODTECH have announced a global technology partnership focused on delivering pre-staged, deployment-ready AI infrastructure solutions as hyperscaler demand drives data center vacancy rates to historic lows. With capacity tightening to 6.5% in Europe and 5.9% in the U.K., the partnership addresses a critical bottleneck in AI data center commissioning, where deployment timelines and technical complexity have become major constraints for enterprises and cloud platforms scaling GPU-intensive workloads.
The AI Infrastructure Commissioning Challenge
As hyperscalers deploy more than $600 billion in AI data center infrastructure this year, representing 75% of total capital expenditure, the focus has shifted from simply securing capacity to ensuring infrastructure is fully validated and production-ready at deployment. AI workloads demand far more than traditional data center services. NVIDIA-based AI racks require specialized expertise in NVLink fabric configuration, GPU testing, compute node initialization, dead-on-arrival (DOA) testing, site and factory acceptance testing (SAT/FAT), and network validation. These technical requirements, combined with increasingly tight deployment windows, have created demand for integrated commissioning providers capable of delivering turnkey solutions.
Integrated Capabilities Across the AI Lifecycle
The DPI-PODTECH partnership brings together complementary capabilities across the full AI infrastructure stack. DPI contributes expertise in infrastructure connectivity and mechanical systems. PODTECH adds software development, commissioning protocols, and systems integration delivered through more than 60 technical specialists across the U.K., Asia, and the Middle East. Together, the companies offer end-to-end services from pre-deployment validation through network bootstrapping, ensuring AI environments are fully operational before customer handoff.
The partnership builds on successful NVIDIA AI rack deployments for international hyperscaler programs, where both companies demonstrated the ability to manage complex, multi-site rollouts. By formalizing their collaboration, DPI and PODTECH are positioning to scale these capabilities across regions where data center capacity is most constrained and AI infrastructure demand is accelerating fastest.
Strategic Focus on High-Growth Markets
The partnership specifically targets Europe, Asia, and the Middle East, markets experiencing acute capacity constraints and surging AI investment. PODTECH’s existing presence across these regions gives the partnership immediate on-the-ground capacity to support hyperscaler and enterprise deployments. The company’s ISO 27001, ISO 9001, and ISO 20000-1 certifications provide the compliance foundation required for clients in regulated industries and public sector engagements.
Industry Perspective
“As organizations accelerate their AI adoption, the reliability and performance of the underlying infrastructure have never been more critical,” said James Bangs, technology and services director at DPI. “Building on our partnership with PODTECH, we have already delivered multiple successful deployments together, and this formal collaboration enables us to scale our capabilities globally.”
Harry Pod, founder at PODTECH, emphasized the operational benefits of the integrated model: “Following our successful collaborations with Datalec on major NVIDIA AI rack deployments, we are very proud to officially combine our capabilities. By working as one integrated delivery team, we can provide clients with packaged, pre-staged, and deployment-ready AI infrastructure solutions grounded in quality, precision, and engineering excellence.”
Looking Ahead
For enterprises and hyperscalers navigating AI infrastructure decisions in 2026, the partnership signals a shift toward specialized commissioning providers capable of managing the entire deployment lifecycle. With hyperscaler capital expenditure forecast to remain elevated through 2027 and vacancy rates showing no signs of easing, demand for integrated commissioning services is likely to intensify across DPI and PODTECH’s target markets.
Organizations evaluating AI infrastructure commissioning strategies can learn more at datalecltd.com.
As artificial intelligence reshapes how organizations generate value from data, a quieter shift is happening beneath the surface. The question is no longer just how data is protected, but where it is processed, who governs it, and how infrastructure decisions intersect with national regulation and digital policy.
Datalec Precision Installations (DPI) is seeing this shift play out across global markets as enterprises and public sector organizations reassess how their data center strategies support both AI performance and regulatory alignment. What was once treated primarily as a compliance issue is increasingly viewed as a foundational design consideration.
Sovereignty moves upstream
Data sovereignty has traditionally been addressed after systems were deployed, often resulting in fragmented architectures or operational workarounds. That approach is becoming less viable as regulations tighten and AI workloads demand closer proximity to sensitive data.
Organizations are now factoring sovereignty into infrastructure planning from the start, ensuring data remains within national borders and is governed by local legal frameworks. For many, this shift reduces regulatory risk while creating clearer operational boundaries for advanced workloads.
AI raises the complexity
AI intensifies data governance challenges by extending them beyond storage into compute and model execution. Training and inference processes frequently involve regulated or sensitive datasets, increasing exposure when data or workloads cross borders.
This has driven growing interest in sovereign AI environments, where data, compute, and models remain within a defined jurisdiction. Beyond compliance, these environments offer greater control over digital capabilities and reduced dependence on external platforms.
Balancing performance and governance
Supporting sovereign AI requires infrastructure that can deliver high-density compute and low-latency performance without compromising physical security or regulatory alignment. DPI addresses this by delivering AI-ready data center environments designed to support GPU-intensive workloads while meeting regional compliance requirements.
The objective is to enable organizations to deploy advanced AI systems locally without sacrificing scalability or operational efficiency.
Regional execution at global scale
Demand for localized, compliant infrastructure is growing across regions where digital policy and economic strategy intersect. DPI’s expansion across the Middle East, APAC, and other international markets reflects this trend, combining regional delivery with standardized operational practices across 21 global entities.
According to Michael Aldridge, DPI’s Group Information Security Officer, organizations increasingly view localized infrastructure as a way to future-proof their digital strategies rather than constrain them.
Compliance as differentiation
As AI adoption accelerates, infrastructure and governance decisions are becoming inseparable. Organizations that can control where data lives and how AI systems operate are better positioned to manage risk, meet regulatory expectations, and move faster in regulated markets.
DPI’s approach reflects a broader industry shift: compliance is no longer just about meeting requirements, but about enabling innovation in an AI-driven environment.
To read DPI’s full perspective on data sovereignty and AI readiness, visit the company’s website.
At Sabey Data Centers, progress is more than a series of headlines. It’s a blueprint for what’s possible when infrastructure, efficiency and stewardship go hand in hand. From our award-winning sustainability initiatives to bold new campus designs and record-setting expansions, each milestone this year has demonstrated our commitment to powering tomorrow’s workloads with conscience and confidence.
With 2026 well underway and already promising to be a banner year, we wanted to pause and reflect on the path we forged in 2025.
Capacity expansion: built for growth
In 2025, we announced strategic power expansions across our Pacific Northwest, Ashburn, Austin and Quincy locations. In Seattle and Columbia, 30MW of new power is now anticipated to come online by 2027, enabling tenants to scale quickly while leveraging ultra-efficient energy (carbon-free in Seattle) and a regional average PUE as low as 1.2.
On the East Coast, Ashburn’s third and final building broke ground, set to introduce 54MW of additional capacity with the first tranches set to come online in 2026. This will be the first three-story facility in Sabey’s portfolio, purpose-built for air-cooled, liquid-cooled and hybrid deployments, with rack densities over 100kW and an average PUE of 1.35. In both regions, Sabey’s expansion balances hyperscale demand with customization, modular scale and resilient connectivity.
The launch of construction for Austin Building B was another major milestone, expanding our presence in the dynamic Round Rock tech corridor. This three-story, liquid-cooling-ready facility is designed to deliver 54 megawatts of total power capacity. Building B continues our commitment to scalable, energy-efficient digital infrastructure, tailored for enterprise and hyperscale workloads.
NEW YORK, Jan. 29, 2026 — Cockroach Labs, maker of the cloud-agnostic distributed SQL database CockroachDB, today announced findings from its second annual survey, “The State of AI Infrastructure 2026: Can Systems Withstand AI Scale?” The report reveals a growing concern that AI use is starting to overwhelm the traditional IT systems meant to support it. As […]
The Colombian government has announced a grid expansion plan which it says will facilitate up to 6GW of new clean energy capacity in the country’s Caribbean region.
Duos Technologies Group Inc. (Nasdaq: DUOT), through its operating subsidiary Duos Edge AI, Inc., received the Outstanding Innovation Award at the Pacific Telecommunications Conference 2026 (PTC’26). This honor recognizes Duos Edge AI’s leadership in modular Edge Data Center (EDC) solutions that boost efficiency, scalability, security, and customer experience.
Duos Edge AI’s capital-efficient model supports rapid 90-day installations and scalable growth tailored to regional needs like education, healthcare, and municipal services. High-availability designs deliver up to 100 kW+ per cabinet with resilient, 24/7 operations positioned within 12 miles of end users for minimal latency.
“This recognition from Pacific Telecommunications Council (PTC) is a meaningful validation of our strategy and execution,” said Doug Recker, President of Duos and Founder of Duos Edge AI. “Our mission has been to bring secure, low-latency digital infrastructure directly to communities that need it most. By deploying edge data centers where people live, learn, and work, we’re helping close the digital divide while building a scalable platform aligned with long-term growth and shareholder value.”
The award spotlights Duos Edge AI’s patented modular EDCs deployed in underserved communities for low-latency, enterprise-grade infrastructure. These centers enable real-time AI processing, telemedicine, digital learning, and carrier-neutral connectivity without distant cloud reliance.
Data Center POST had the opportunity to connect with Paul Quigley, President of Airsys Cooling Technologies, Inc., ahead of PTC’26. With more than 36 years of experience spanning HVAC contracting, distribution, and executive leadership in manufacturing, Quigley is widely recognized for his work in large-scale, high-demand cooling environments. His career includes co-designing one of the largest VRF projects in North America and leading complex cooling initiatives across the United States. Since joining Airsys in 2021, Quigley has focused on advancing precision and liquid-assisted cooling technologies to help data center operators address power constraints driven by AI, high-density compute, and rapid digital infrastructure growth. In the Q&A below, Quigley shares his perspective on the challenges facing the global digital infrastructure industry and how Airsys is helping operators turn cooling efficiency into usable compute power.
Data Center Post (DCP) Question: What does your company do?
Paul Quigley (PQ) Answer: Airsys is an energy-focused cooling solutions company founded on a simple idea: power should serve computation, not be lost protecting it. Every system we build returns power back to the data center where it belongs. Our innovative, award-winning technologies give operators more usable power for compute, improving capacity, resilience, and overall profitability.
DCP Q: What problems does your company solve in the market?
PQ A: Airsys helps data centers recover power lost to cooling. As AI clusters push energy systems past their limits, operators are forced to choose between compute, capacity, and cost. Our cooling technologies solve that problem by returning power back to the data center, reducing stranded capacity, and giving operators more usable power for compute.
DCP Q: What are your company’s core products or services?
PQ A: We design and build the world’s most advanced cooling solutions for data centers, including:
LiquidRack spray-cooling and liquid-assisted rack systems
High-efficiency DX and chilled-water cooling systems
Flooded-evaporator chillers (CritiCool-X)
Indoor and outdoor precision cooling systems
Edge, modular, and containerized data-center cooling
Control systems, energy-optimization tools, and PCE/ROIP performance frameworks
DCP Q: What markets do you serve?
PQ A:
Hyperscale and AI compute environments
Colocation and enterprise data centers
Modular and prefabricated data centers
Edge and telecom infrastructure
Education, industrial, government, and defense applications requiring mission-critical cooling
DCP Q: What challenges does the global digital infrastructure industry face today?
PQ A: The industry now operates in an environment of rapid compute expansion and structural power scarcity. Grid limitations, long construction timelines, inefficient legacy cooling, land constraints, and the explosive rise of AI workloads have created an energy bottleneck. Data centers no longer struggle with space; they struggle with power.
DCP Q: How is your company adapting to these challenges?
PQ A: Airsys has always shifted the conversation from cooling to energy stewardship. Every system we design focuses on reducing cooling overhead and returning power to computation. Our LiquidRack spray-cooling platform, low-lift CritiCool-X chillers, and high-efficiency DX systems enable operators to deploy more compute with less infrastructure. We’re also advancing new metrics – PCE and ROIP – to help operators quantify the financial value of returned power.
DCP Q: What are your company’s key differentiators?
PQ A:
Energy-first design philosophy — our systems return power to compute
Rapid delivery and global manufacturing — critical in today’s supply-strained market
LiquidRack spray cooling — enabling high-density AI clusters without stranded power
Flooded-evaporator chiller technology — high efficiency, low lift, faster deployment
End-to-end portfolio — DX, chilled water, liquid-assisted cooling, modular systems
PCE / ROIP frameworks — financial and operational tools that change how operators evaluate cooling impact
DCP Q: What can we expect to see/hear from your company in the future?
PQ A: You will see a major expansion of our LiquidRack platform, new ultra-efficient chiller technologies, deeper integration of PCE/ROIP metrics, and broader support for modular and edge deployments. We are continuing to push innovation toward one goal: giving operators more usable power for compute.
DCP Q: What upcoming industry events will you be attending?
PQ A: More announcements are planned for early 2026 as Airsys continues to expand its advanced cooling portfolio for high-density compute environments.
DCP Q: Is there anything else you would like our readers to know about your company and capabilities?
PQ A: Airsys is built on a simple idea: power should serve computation, not be lost protecting it. Our mission is to help operators turn cooling losses into compute gains and to make energy stewardship a competitive advantage in the AI era.
DCP Q: Where can our readers learn more about your company?
Stay in the know! Subscribe to Data Center POST today.
# # #
About Data Center POST
Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed and online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.
Data Center POST works hard to deliver the most current information and thought-provoking ideas relevant to the success of the data center industry. Stay informed, visit www.datacenterpost.com.
Data centers are the backbone of modern digital infrastructure. They power cloud services, financial systems, healthcare platforms, and nearly every technology-driven business. As demand for data storage and processing grows, so do the operational, financial, and risk-related challenges data centers face. For managers, understanding these issues and knowing how to address them proactively is critical to maintaining uptime, security, and long-term viability. Here is what you need to know to meet that demand.
Rising Energy Costs and Efficiency Demands
One of the most persistent challenges for data centers is energy consumption. Powering servers, cooling systems, and redundancy infrastructure requires enormous amounts of electricity, and energy costs continue to rise globally. Beyond cost, there is increasing pressure from regulators and customers to reduce environmental impact.
Managers can address this by investing in energy-efficient hardware, optimizing airflow and cooling layouts, and adopting real-time monitoring tools to identify waste. Long-term strategies may include transitioning to renewable energy sources or partnering with utility providers for more favorable pricing structures.
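One way real-time monitoring makes waste visible is the Power Usage Effectiveness (PUE) ratio: total facility power divided by IT load. The sketch below is illustrative only; the meter readings are hypothetical and it is not tied to any particular monitoring product.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.

    1.0 is the theoretical ideal; values near 2.0 mean roughly as much
    power goes to cooling and overhead as to computation.
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical (total_kW, IT_kW) meter samples over a day
readings = [(1200, 800), (1150, 790), (1400, 810)]
worst = max(pue(total, it) for total, it in readings)
print(f"Worst-case PUE: {worst:.2f}")
```

Tracking the worst interval rather than the average helps pinpoint when cooling overhead spikes, which is often where the quickest savings hide.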
Cooling and Thermal Management Challenges
Heat is an unavoidable byproduct of high-density computing. Inefficient cooling not only increases costs but also raises the risk of equipment failure and downtime. As server densities increase, traditional cooling methods often struggle to keep up.
Modern solutions include hot-aisle/cold-aisle containment, liquid cooling, and AI-driven thermal monitoring systems. For managers, the key is treating cooling as a dynamic system rather than fixed infrastructure, one that evolves alongside hardware demands.
Financial Risk and Insurance Considerations
Data centers face significant financial exposure from equipment damage, downtime, liability claims, and unforeseen events. Even with strong operational controls, risk cannot be eliminated entirely.
This is where insurance becomes a critical part of risk management. Evaluating coverage that aligns with the unique needs of data center operations can help protect against losses that would otherwise threaten business continuity. BOP insurance by Next Insurance can help managers think more holistically about protecting assets, operations, and revenue streams as part of an overall risk strategy.
Downtime and Business Continuity Risks
Even brief outages can result in significant financial losses, reputational damage, and contractual penalties. This downtime may be caused by:
Power failures
Human error
Equipment malfunction
External events
To mitigate this risk, managers should prioritize redundancy at every critical point, including power supplies, network connections, and backup systems. Regular testing of disaster recovery plans is just as important as documenting them: a plan that hasn't been tested is often unreliable in real-world conditions and puts both your business and your customers at risk.
Cybersecurity and Physical Security Threats
Data centers face a dual security challenge: digital threats and physical risks. Cyberattacks continue to grow in sophistication, while physical threats such as unauthorized access, theft, or vandalism remain real concerns.
Addressing this requires layered security. On the digital side, this includes continuous patching, network segmentation, and monitoring for unusual activity. Physically, access controls, surveillance systems, and strict visitor protocols are essential. Managers should also ensure that staff training keeps pace with evolving threats, as human error remains a major vulnerability.
Compliance and Regulatory Pressure
Data centers often operate under complex regulatory requirements related to data privacy, industry standards, and regional laws. Compliance failures can result in fines, legal exposure, and loss of customer trust.
Managers can stay ahead by maintaining clear documentation, conducting regular audits, and working closely with legal and compliance teams. Building compliance into operational processes, rather than treating it as an afterthought, reduces risk and simplifies reporting.
Turning Challenges Into Strategic Advantage
While data centers face a wide range of operational and strategic challenges, these issues also present opportunities for improvement and differentiation. Managers who address them proactively are better positioned to deliver reliability and value in a competitive market. Don't let your facility be the one that falls victim to these issues; take action now.
# # #
About the Author:
James Daniels is a freelance writer, business enthusiast, a bit of a tech buff, and an overall geek. He is also an avid reader, who can spend hours reading and knowing about the latest gadgets and tech, whilst offering views and opinions on these topics.
Equinix customers can now order last-mile connectivity from enterprise edge locations to any of Equinix’s 270+ data centers globally, eliminating weeks of manual sourcing and the margin stacking that has long plagued enterprise network procurement.
The collaboration integrates Resolute CS’s NEXUS platform directly into the Equinix Customer Portal, giving enterprises transparent access to 3,200+ carriers across 180 countries. Rather than navigating opaque pricing through multiple intermediaries, customers can design, price, and order last-mile access with full visibility into costs and carrier options.
The Last-Mile Problem
While interconnection platforms like Equinix Fabric have transformed data center connectivity, the edge connectivity gap has remained a persistent friction point. Enterprises connecting branch offices or remote facilities to data centers typically face weeks-long sourcing cycles, opaque pricing structures with 2-4 layers of margin stacking (25-30% each), and inconsistent delivery across geographies.
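The cost impact of that margin stacking compounds multiplicatively, since each intermediary marks up the previous layer's price. As a rough illustration (the $1,000 base circuit cost is hypothetical; the layer counts and 30% margin come from the figures above):

```python
def stacked_price(base_cost: float, layers: int, margin: float) -> float:
    """Price after each intermediary applies its margin on top of the last."""
    return base_cost * (1 + margin) ** layers

base = 1000.0  # hypothetical monthly circuit cost, USD
for layers in (2, 3, 4):
    print(layers, "layers:", round(stacked_price(base, layers, 0.30), 2))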
This inefficiency becomes particularly acute as AI workloads shift toward distributed architectures. Unlike centralized applications, AI infrastructure increasingly requires connectivity across edge locations, multiple data centers, and cloud platforms, creating exponentially more last-mile requirements that manual sourcing processes cannot efficiently handle.
How It Works
Resolute NEXUS automates route design, identifies diversity and resiliency options, simplifies cloud access paths, and coordinates direct ordering with carriers. The result: enterprises can manage connectivity from branch office to data center to cloud through a single portal, with transparent pricing and no hidden margin layers.
“We are empowering customers to design their network architecture without access constraints,” said Patrick C. Shutt, CEO and co-founder of Resolute CS. “With Equinix and Resolute NEXUS, customers can design, price, and order global last-mile access with full transparency, removing complexity and lowering costs.”
Benefits for Carriers Too
The platform also creates opportunities for network providers. By operating as a carrier-neutral marketplace, Resolute NEXUS gives providers direct visibility into qualified enterprise demand, improved infrastructure utilization, and lower customer acquisition costs, all without the traditional intermediary layers.
AI and Distributed Infrastructure
With Equinix operating 270+ AI-optimized data centers across 77 markets, automated last-mile sourcing directly addresses the connectivity requirements for distributed AI deployments. Enterprises can now provision edge-to-cloud connectivity with the speed and transparency expected from modern cloud services.
Equinix Fabric customers can access the platform immediately through the Equinix Customer Portal by navigating to “Find Service Providers” and searching for Resolute NEXUS – Last Mile Access.
DC BLOX has taken a significant step in accelerating digital transformation across the Southeast with the closing of a new $240 million HoldCo financing facility from Global Infrastructure Partners (GIP), a part of BlackRock. This strategic funding positions the company to rapidly advance its hyperscale data center expansion while reinforcing its role as a critical enabler of AI and cloud growth in the region.
Strategic growth financing
The $240 million facility from GIP provides fresh growth capital dedicated to DC BLOX’s hyperscale data center strategy, building on the company’s recently announced $1.15 billion and $265 million Senior Secured Green Loans. Together, these financings support the development and construction of an expanding portfolio of digital infrastructure projects designed to meet surging demand from hyperscalers and carriers.
Powering AI and cloud innovation
DC BLOX has emerged as a leader in connected data center and fiber network solutions, with a vertically integrated platform that includes hyperscale data centers, subsea cable landing stations,colocation, and fiber services. This model allows the company to offer end-to-end solutions for hyperscalers and communications providers seeking capacity, connectivity, and resiliency in high-growth Southeastern markets.
Community and economic impact
The new financing is about more than infrastructure; it is also about regional economic development. DC BLOX’s investments help bring cutting-edge AI and cloud technology into local communities, while driving construction jobs, tax revenues, and power grid enhancements that benefit both customers and ratepayers.
“We are excited to partner with GIP, a part of BlackRock, to fuel our ambitious growth goals,” said Melih Ileri, Chief Investment Officer at DC BLOX. “This financing underscores our commitment to serving communities in the Southeast by bringing cutting-edge AI and cloud technology investments with leading hyperscalers into the region, and creating economic development activity through construction jobs, taxes paid, and making investments into the power grid for the benefit of our customers and local ratepayers alike.”
Backing from leading investors
Michael Bogdan, Chairman of DC BLOX and Head of the Digital Infrastructure Group at Future Standard, highlighted that this milestone showcases the strength of the company’s vision and execution. Future Standard, a global alternative asset manager based in Philadelphia with over 86.0 billion in assets under management, leads DC BLOX’s sponsorship and recently launched its Future Standard Digital Infrastructure platform with more than 2 billion in assets. GIP, now a part of BlackRock and overseeing over 189 billion in assets, brings deep sector experience across energy, transport, and digital infrastructure, further validating DC BLOX’s role in shaping the Southeast as a global hub for AI-driven innovation.
Yotta 2026 is officially open for registration, returning Sept. 28–30 to Las Vegas with a larger footprint, a new venue and an expanded platform designed to meet the accelerating demands of AI-driven infrastructure. Now hosted at Caesars Forum, Yotta 2026 reflects both the rapid growth of the event and the unprecedented pace at which AI is reshaping compute, power and digital infrastructure worldwide.
As AI workloads scale faster than existing systems were designed to handle, infrastructure leaders are facing mounting challenges around power availability, capital deployment, resilience and integration across traditionally siloed industries. Yotta 2026 is built to convene the full ecosystem grappling with these realities, bringing together operators, hyperscalers, enterprise leaders, energy executives, investors, builders, policymakers and technology partners in one place.
Rebecca Sausner, CEO of Yotta, emphasizes that the event is designed for practical progress, not theoretical discussion. From chips and racks to networks, cooling, power and community engagement, AI is transforming every layer of digital infrastructure. Yotta 2026 aims to move conversations beyond vision and into real-world solutions that address scale, reliability and investment risk in an AI-first era.
A defining feature of Yotta 2026 is its advisory board-led approach to programming. The conference agenda is being developed in collaboration with the newly announced Yotta Advisory Board, which includes senior leaders from organizations spanning AI, cloud, energy, finance and infrastructure, including OpenAI, Oracle, Schneider Electric, KKR, Xcel Energy, GEICO and the Electric Power Research Institute (EPRI). This cross-sector guidance ensures the program reflects how the industry actually operates, as an interconnected system where decisions around power, compute, capital, design and policy are inseparable.
The 2026 agenda will focus on the most urgent challenges shaping the AI infrastructure era. Key themes include AI infrastructure and compute density, power generation and grid interconnection, capital formation and investment risk, design and operational resilience, and policy and public-private alignment. Together, these topics offer a market-driven view of how digital infrastructure must be designed, financed and operated to support AI at scale.
With an anticipated 6,000+ AI and digital infrastructure leaders in attendance, Yotta 2026 will feature a significantly expanded indoor and outdoor expo hall, curated conference programming and immersive networking experiences. Hosted at Caesars Forum, the event is designed to support both strategic planning and hands-on execution, creating space for collaboration across the entire infrastructure value chain.
Early registration is now open, with passes starting at $795 and discounted rates available for early registrants. As AI continues to drive unprecedented infrastructure demand, Yotta 2026 positions itself as a critical forum for the conversations and decisions shaping the future of compute, power and digital infrastructure.
Duos Technologies Group Inc. (Nasdaq: DUOT), through its operating subsidiary Duos Edge AI, Inc., has deployed another patented modular Edge Data Center (EDC) in Hereford, Texas. The facility was deployed in partnership with Hereford Independent School District (Hereford ISD) and marks another milestone in Duos Edge AI’s mission to deliver localized, low-latency compute infrastructure that supports education and community technology growth across rural and underserved markets.
”We are thrilled to partner with Duos Edge AI to bring a state-of-the-art Edge Data Center directly to our Administration location in Hereford ISD,” said Dr. Ralph Carter, Superintendent of Hereford Independent School District. “This innovative deployment will dramatically enhance our digital infrastructure, providing low-latency access to advanced computing resources that will empower our teachers with cutting-edge tools, enable real-time AI applications in the classroom, and ensure faster, more reliable connectivity for our students and staff.
Each modular facility is designed for rapid 90-day deployment and delivers scalable, high-performance computing power with enterprise-grade security controls, including third-party SOC 2 Type II certification under AICPA standards.
Duos Edge AI’s patented modular infrastructure incorporates a U.S. patent for an Entryway for a Modular Data Center (Patent No. US 12,404,690 B1), providing customers with secure, compliant, and differentiated Edge infrastructure that operates exclusively on on-grid power and requires no water for cooling. Duos Edge AI continues to expand nationwide, capitalizing on growing demand for localized compute, AI enablement, and resilient digital infrastructure across underserved and high-growth markets.
“Each deployment strengthens our ability to scale a repeatable, capital-efficient edge infrastructure platform,” said Doug Recker, President of Duos and Founder of Duos Edge AI. “Our patented, SOC 2 Type II-audited EDCs are purpose-built to meet real customer demand for secure, low-latency computing while supporting long-term revenue growth and disciplined execution across our targeted markets.”
PowerBridge’s mission has always centered on developing powered, gigawatt-scale data center campuses that combine energy infrastructure and digital infrastructure. As demand for gigawatt-scale campuses accelerates across the U.S., the company continues to build a team designed to meet that movement. The appointment of Debra L. Raggio as Executive Vice President and General Counsel marks an important milestone in that journey.
Debra joins PowerBridge at a time of significant growth, as the convergence of energy, power, and digital infrastructure continues to reshape how large-scale data center campuses are developed. With more than 40 years of experience in the energy industry, as well as digital infrastructure experience, specializing in natural gas, electricity, and data center markets, she brings deep regulatory and commercial expertise to the role. At PowerBridge, she will oversee legal, regulatory, environmental, government affairs, and communications, while serving as a strategic advisor to Founder and CEO Alex Hernandez and the Board.
Throughout her career, Debra has been a leading national voice in shaping regulatory frameworks across energy and digital infrastructure sectors in the United States, with experience spanning power markets such as PJM and ERCOT. Her background includes private practice at Baker Botts and executive leadership roles at major energy companies, including Talen Energy Corp.
Debra was also a founding management team member of Cumulus Data LLC, a multi-gigawatt data center campus co-located with the Susquehanna Nuclear generation station in Pennsylvania. Her regulatory, commercial, and legal leadership helped enable the development and execution of the project, culminating in its sale to Amazon in 2024. Today, that campus is the foundation for an approximately $20 billion investment supporting the continued expansion of Amazon Web Services.
That experience directly aligns with PowerBridge’s approach to combining power generation, electric campus infrastructure, pad-ready data center sites, and fiber infrastructure to serve growing data center demand while adding needed power supply to regional electric grids. Reflecting on her decision to join the company, Debra shared, “I am honored to be joining CEO Alex Hernandez and the team of executives I worked with in the formation and execution of the Cumulus Data Center Campus. I look forward to helping PowerBridge become the country’s premier powered-campus development company at multi-gigawatt scale, combining power generation, electric campus infrastructure, pad-ready data center sites, and fiber infrastructure to serve the growing need for data centers, while adding needed power supply to the electric grids, including PJM and ERCOT.”
Debra’s appointment reinforces PowerBridge’s focus on regulatory leadership, strategic execution, and disciplined growth as the company advances powered, gigawatt-scale data center campuses across the United States.
Spain’s digital infrastructure landscape is entering a pivotal new phase, and Nostrum Data Centers is positioning itself at the center of that transformation. By engaging global real estate and data center advisory firm JLL, Nostrum is accelerating the development of a next-generation, AI-ready data center platform designed to support Spain’s emergence as a major connectivity hub for Europe and beyond.
Building the Foundation for an AI-Driven Future
Nostrum Data Centers, the digital infrastructure division of Nostrum Group, is developing a portfolio of sustainable, high-performance data centers purpose-built for artificial intelligence, cloud computing, and high-density workloads. In December 2025, the company announced that its data center assets will be available in 2027, with land and power already secured across all sites, an increasingly rare advantage in today’s constrained infrastructure markets.
The platform includes 500 MW of secured IT capacity, with an additional 300 MW planned for future expansion, bringing total planned development to 800 MW across Spain. This scale positions Nostrum as one of the country’s most ambitious digital infrastructure developers at a time when demand for compute capacity is accelerating across Europe.
Strategic Locations, Connected by Design
Nostrum’s six data center developments are strategically distributed throughout Spain to capitalize on existing power availability, fiber routes, internet exchanges, and subsea connectivity. This geographic diversity allows customers to deploy capacity where it best supports latency-sensitive workloads, redundancy requirements, and long-term growth strategies.
Equally central to Nostrum’s approach is sustainability. Each facility is designed in alignment with the United Nations Sustainable Development Goals (SDGs), delivering industry-leading efficiency metrics, including a Power Usage Effectiveness (PUE) of 1.1 and zero Water Usage Effectiveness (WUE), eliminating water consumption for cooling.
Why JLL? And Why Now?
To support this next phase of growth, Nostrum has engaged JLL to strengthen its go-to-market strategy and customer engagement efforts. JLL brings deep global experience in data center advisory, site selection, and market positioning, helping operators translate technical infrastructure into compelling value for hyperscalers, enterprises, and AI-driven tenants.
“Nostrum Data Centers has a long-term vision for balancing innovation and sustainability. We offer our customers speed to market and scalability throughout our various locations in Spain, all while leading a green revolution to ensure development is done the right way as we position Spain as the next connectivity hub,” says Gabriel Nebreda, Chief Executive Officer at Nostrum Group. “We are confident that our engagement with JLL will be able to help us bolster our efforts and achieve our long-term vision.”
From JLL’s perspective, Spain presents a unique convergence of advantages.
“Spain has a unique market position with its access to robust power infrastructure, its proximity to Points of Presence (PoPs), internet exchanges, subsea connectivity, and being one of the lowest total cost of ownership (TCO) markets,” says Jason Bell, JLL Senior Vice President of Data Center and Technology Services in North America. “JLL is excited to be working with Nostrum Data Centers, providing our expertise and guidance to support their quest to be a leading data center platform in Spain, as well as position Spain as the next connectivity hub in Europe and beyond.”
Advancing Spain’s Role in the Global Digital Economy
With JLL’s support, Nostrum Data Centers is further refining its strategy to meet the technical and operational demands of AI and high-density computing without compromising on efficiency or sustainability. The result is a platform designed not just to meet today’s requirements, but to anticipate what the next decade of digital infrastructure will demand.
As hyperscalers, AI developers, and global enterprises look for scalable, energy-efficient alternatives to traditional European hubs, Spain, and Nostrum Data Centers, are increasingly part of the conversation.
Duos Technologies Group Inc. (Nasdaq: DUOT), through its operating subsidiary Duos Edge AI, Inc, has deployed a new Edge Data Center (EDC) in Abilene, Texas, in collaboration with Region 14 Education Service Center (ESC). This deployment expands Duos Edge AI’s presence in Texas while bringing advanced digital infrastructure to support K-12 education, healthcare, workforce development, and local businesses across West Texas.
This installation builds on Duos Edge AI’s recent Texas deployments in Amarillo (Region 16), Waco (Region 12), and Victoria (Region 3), supporting a broader strategy to deploy edge computing solutions tailored to education, healthcare, and enterprise needs.
“We are excited to partner with Region 14 ESC to bring cutting-edge technology to Abilene and West Texas, bringing a carrier neutral colocation facility to the market while empowering educators and communities with the tools they need to thrive in a digital world,” said Doug Recker, President of Duos and Founder of Duos Edge AI. “This EDC represents our commitment to fostering innovation and economic growth in regions that have historically faced connectivity challenges.”
The Abilene EDC will serve as a local carrier-neutral colocation facility and computing hub, delivering enhanced bandwidth, secure data processing, and low-latency AI capabilities to more than 40 school districts and charter schools across an 11-county region spanning over 13,000 square miles.
Chris Wigington, Executive Director for Region 14 ESC, added, “Collaborating with Duos Edge AI allows us to elevate the technological capabilities of our schools and partners, ensuring equitable access to high-speed computing and AI resources. This data center will be a game-changer for student learning, teacher development, and regional collaboration.”
By locating the data center at Region 14 ESC, the partnership aims to help bridge digital divides in rural and underserved communities by enabling faster access to educational tools, cloud services, and AI-driven applications, while reducing reliance on distant centralized data centers.
The EDC is expected to be fully operational in early 2026, with plans for a launch event at Region 14 ESC’s headquarters in Abilene.