
DPI and PODTECH Partner to Scale AI Infrastructure Commissioning Across Europe, Asia, and the Middle East

29 January 2026 at 20:30

Datalec Precision Installations (DPI) and PODTECH have announced a global technology partnership focused on delivering pre-staged, deployment-ready AI infrastructure solutions as hyperscaler demand drives data center vacancy rates to historic lows. With capacity tightening to 6.5% in Europe and 5.9% in the U.K., the partnership addresses a critical bottleneck in AI data center commissioning, where deployment timelines and technical complexity have become major constraints for enterprises and cloud platforms scaling GPU-intensive workloads.

The AI Infrastructure Commissioning Challenge

As hyperscalers deploy more than $600 billion in AI data center infrastructure this year, representing 75% of total capital expenditure, the focus has shifted from simply securing capacity to ensuring infrastructure is fully validated and production-ready at deployment. AI workloads demand far more than traditional data center services. NVIDIA-based AI racks require specialized expertise in NVLink fabric configuration, GPU testing, compute node initialization, dead-on-arrival (DOA) testing, site and factory acceptance testing (SAT/FAT), and network validation. These technical requirements, combined with increasingly tight deployment windows, have created demand for integrated commissioning providers capable of delivering turnkey solutions.

Integrated Capabilities Across the AI Lifecycle

The DPI-PODTECH partnership brings together complementary capabilities across the full AI infrastructure stack. DPI contributes expertise in infrastructure connectivity and mechanical systems. PODTECH adds software development, commissioning protocols, and systems integration delivered through more than 60 technical specialists across the U.K., Asia, and the Middle East. Together, the companies offer end-to-end services from pre-deployment validation through network bootstrapping, ensuring AI environments are fully operational before customer handoff.

The partnership builds on successful NVIDIA AI rack deployments for international hyperscaler programs, where both companies demonstrated the ability to manage complex, multi-site rollouts. By formalizing their collaboration, DPI and PODTECH are positioning to scale these capabilities across regions where data center capacity is most constrained and AI infrastructure demand is accelerating fastest.

Strategic Focus on High-Growth Markets

The partnership specifically targets Europe, Asia, and the Middle East, markets experiencing acute capacity constraints and surging AI investment. PODTECH’s existing presence across these regions gives the partnership immediate on-the-ground capacity to support hyperscaler and enterprise deployments. The company’s ISO 27001, ISO 9001, and ISO 20000-1 certifications provide the compliance foundation required for clients in regulated industries and public sector engagements.

Industry Perspective

“As organizations accelerate their AI adoption, the reliability and performance of the underlying infrastructure have never been more critical,” said James Bangs, technology and services director at DPI. “Building on our partnership with PODTECH, we have already delivered multiple successful deployments together, and this formal collaboration enables us to scale our capabilities globally.”

Harry Pod, founder at PODTECH, emphasized the operational benefits of the integrated model: “Following our successful collaborations with Datalec on major NVIDIA AI rack deployments, we are very proud to officially combine our capabilities. By working as one integrated delivery team, we can provide clients with packaged, pre-staged, and deployment-ready AI infrastructure solutions grounded in quality, precision, and engineering excellence.”

Looking Ahead

For enterprises and hyperscalers navigating AI infrastructure decisions in 2026, the partnership signals a shift toward specialized commissioning providers capable of managing the entire deployment lifecycle. With hyperscaler capital expenditure forecast to remain elevated through 2027 and vacancy rates showing no signs of easing, demand for integrated commissioning services is likely to intensify across DPI and PODTECH’s target markets.

Organizations evaluating AI infrastructure commissioning strategies can learn more at datalecltd.com.

The post DPI and PODTECH Partner to Scale AI Infrastructure Commissioning Across Europe, Asia, and the Middle East appeared first on Data Center POST.

Empire Fiber Internet Lights Up Downtown Cortland with Free Wi-Fi

29 January 2026 at 17:30

Empire Fiber Internet, a leading fiber optic internet service provider serving communities across New York and Pennsylvania, has officially lit up the Downtown Cortland Wi-Fi Project in partnership with the City of Cortland. Residents, visitors, and local businesses can now tap into free, fast, and reliable public Wi-Fi throughout the downtown district.

Empire Fiber Internet first brought its high-speed fiber network to Cortland in 2024, delivering symmetrical, gig-ready connectivity to more than 5,500 homes and businesses. The Downtown Wi-Fi Project extends powerful, reliable internet access into the city’s most active public spaces.

“Our partnership with the City of Cortland puts connection within everyone’s reach: residents, visitors, students, and families,” said Kevin Dickens, Empire Fiber Internet CEO. “When fast, free Wi-Fi is available in the places people gather, it strengthens community, expands access, and enhances everyday life.”

Empire Fiber Internet completed new community Wi-Fi installations at Beaudry Park, Dexter Park, and Suggett Park in Fall 2025, expanding fast, free public connectivity across some of Cortland’s most popular gathering spaces.

“As we put the finishing touches on Main Street, it’s exciting to expand free Wi-Fi access not only downtown, but into our public parks as well,” said Cortland Mayor Scott Steve. “This project strengthens accessibility, supports local businesses, and improves connectivity for residents and visitors alike.”

These projects deliver free, reliable public Wi-Fi across key downtown and park locations; increased foot traffic and visibility for local businesses; stronger community events supported by dependable connectivity; and modern digital infrastructure that fuels innovation, engagement, and economic growth.

To learn more about Empire Fiber Internet, visit www.empireaccess.com.

Why Data Sovereignty Is Becoming a Strategic Imperative for AI Infrastructure

29 January 2026 at 13:30

As artificial intelligence reshapes how organizations generate value from data, a quieter shift is happening beneath the surface. The question is no longer just how data is protected, but where it is processed, who governs it, and how infrastructure decisions intersect with national regulation and digital policy.

Datalec Precision Installations (DPI) is seeing this shift play out across global markets as enterprises and public sector organizations reassess how their data center strategies support both AI performance and regulatory alignment. What was once treated primarily as a compliance issue is increasingly viewed as a foundational design consideration.

Sovereignty moves upstream

Data sovereignty has traditionally been addressed after systems were deployed, often resulting in fragmented architectures or operational workarounds. That approach is becoming less viable as regulations tighten and AI workloads demand closer proximity to sensitive data.

Organizations are now factoring sovereignty into infrastructure planning from the start, ensuring data remains within national borders and is governed by local legal frameworks. For many, this shift reduces regulatory risk while creating clearer operational boundaries for advanced workloads.

AI raises the complexity

AI intensifies data governance challenges by extending them beyond storage into compute and model execution. Training and inference processes frequently involve regulated or sensitive datasets, increasing exposure when data or workloads cross borders.

This has driven growing interest in sovereign AI environments, where data, compute, and models remain within a defined jurisdiction. Beyond compliance, these environments offer greater control over digital capabilities and reduced dependence on external platforms.

Balancing performance and governance 

Supporting sovereign AI requires infrastructure that can deliver high-density compute and low-latency performance without compromising physical security or regulatory alignment. DPI addresses this by delivering AI-ready data center environments designed to support GPU-intensive workloads while meeting regional compliance requirements.

The objective is to enable organizations to deploy advanced AI systems locally without sacrificing scalability or operational efficiency.

Regional execution at global scale

Demand for localized, compliant infrastructure is growing across regions where digital policy and economic strategy intersect. DPI’s expansion across the Middle East, APAC, and other international markets reflects this trend, combining regional delivery with standardized operational practices across 21 global entities.

According to Michael Aldridge, DPI’s Group Information Security Officer, organizations increasingly view localized infrastructure as a way to future-proof their digital strategies rather than constrain them.

Compliance as differentiation

As AI adoption accelerates, infrastructure and governance decisions are becoming inseparable. Organizations that can control where data lives and how AI systems operate are better positioned to manage risk, meet regulatory expectations, and move faster in regulated markets.

DPI’s approach reflects a broader industry shift: compliance is no longer just about meeting requirements, but about enabling innovation in an AI-driven environment.

To read DPI’s full perspective on data sovereignty and AI readiness, visit the company’s website.

2025 in Review: Sabey’s Biggest Milestones and What They Mean

26 January 2026 at 18:00

Originally posted on Sabey Data Centers.

At Sabey Data Centers, progress is more than a series of headlines. It’s a blueprint for what’s possible when infrastructure, efficiency and stewardship go hand in hand. From our award-winning sustainability initiatives to bold new campus designs and record-setting expansions, each milestone this year has demonstrated our commitment to powering tomorrow’s workloads with conscience and confidence.

With 2026 already well underway and promising to be a banner year, we wanted to pause and reflect on the path we forged in 2025.

Capacity expansion: built for growth 

In 2025, we announced strategic power expansions across our Pacific Northwest, Ashburn, Austin and Quincy locations. In Seattle and Columbia, 30MW of new power is now anticipated to come online by 2027, enabling tenants to scale quickly while leveraging ultra-efficient energy (carbon-free in Seattle) and a regional average PUE as low as 1.2.

On the East Coast, Ashburn’s third and final building broke ground, set to introduce 54MW of additional capacity with the first tranches set to come online in 2026. This will be the first three-story facility in Sabey’s portfolio, purpose-built for air-cooled, liquid-cooled and hybrid deployments, with rack densities over 100kW and an average PUE of 1.35. In both regions, Sabey’s expansion balances hyperscale demand with customization, modular scale and resilient connectivity.
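The PUE figures above are doing real work: Power Usage Effectiveness is total facility power divided by IT power, so a lower PUE means less energy spent on cooling and distribution. A minimal sketch of the arithmetic (the helper names and the 10 MW IT load are our own illustration; the 1.2 and 1.35 values come from this article):

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_kw

def overhead_kw(it_kw: float, pue_value: float) -> float:
    """Non-IT load (cooling, power distribution) implied by a given PUE."""
    return it_kw * (pue_value - 1.0)

# For an illustrative 10 MW IT load: PUE 1.2 implies ~2 MW of overhead,
# while PUE 1.35 implies ~3.5 MW for the same compute.
print(overhead_kw(10_000, 1.2), overhead_kw(10_000, 1.35))
```

Seen this way, the gap between a 1.2 and a 1.35 campus is roughly 1.5 MW of non-compute load per 10 MW of IT capacity.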

The launch of construction for Austin Building B was another major milestone, expanding our presence in the dynamic Round Rock tech corridor. This three-story, liquid-cooling-ready facility is designed to deliver 54 megawatts of total power capacity. Building B continues our commitment to scalable, energy-efficient digital infrastructure, tailored for enterprise and hyperscale workloads.

To continue reading, please click here.

Duos Edge AI Earns PTC’26 Innovation Honor

22 January 2026 at 19:30

Duos Technologies Group Inc. (Nasdaq: DUOT), through its operating subsidiary Duos Edge AI, Inc., received the Outstanding Innovation Award at the 2026 Pacific Telecommunications Conference (PTC’26). This honor recognizes Duos Edge AI’s leadership in modular Edge Data Center (EDC) solutions that boost efficiency, scalability, security, and customer experience.

Duos Edge AI’s capital-efficient model supports rapid 90-day installations and scalable growth tailored to regional needs in education, healthcare, and municipal services. High-availability designs deliver up to 100 kW+ per cabinet with resilient, 24/7 operations positioned within 12 miles of end users for minimal latency.
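The 12-mile figure can be sanity-checked with simple propagation math: light in single-mode fiber travels at roughly c/1.47, so short last-mile distances contribute almost nothing to latency. A rough sketch (the constants and function name are our own, not from Duos):

```python
SPEED_OF_LIGHT_M_S = 299_792_458   # speed of light in vacuum
FIBER_REFRACTIVE_INDEX = 1.47      # typical single-mode fiber

def fiber_rtt_ms(distance_miles: float) -> float:
    """Round-trip propagation delay, in milliseconds, over fiber of this length."""
    meters = distance_miles * 1609.344
    speed = SPEED_OF_LIGHT_M_S / FIBER_REFRACTIVE_INDEX
    return 2 * meters / speed * 1_000

# 12 miles of fiber contributes under 0.2 ms of round-trip delay,
# compared with tens of milliseconds to a distant cloud region.
print(f"{fiber_rtt_ms(12):.3f} ms")
```

Real-world latency adds switching and queuing on top of propagation, but the point stands: at 12 miles the fiber itself is effectively negligible.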

“This recognition from Pacific Telecommunications Council (PTC) is a meaningful validation of our strategy and execution,” said Doug Recker, President of Duos and Founder of Duos Edge AI. “Our mission has been to bring secure, low-latency digital infrastructure directly to communities that need it most. By deploying edge data centers where people live, learn, and work, we’re helping close the digital divide while building a scalable platform aligned with long-term growth and shareholder value.”

The award spotlights Duos Edge AI’s patented modular EDCs deployed in underserved communities for low-latency, enterprise-grade infrastructure. These centers enable real-time AI processing, telemedicine, digital learning, and carrier-neutral connectivity without distant cloud reliance.

Duos Edge AI credits partners including the Texas Region 16 and Region 3 Education Service Centers, Dumas ISD, and the local leaders embracing localized technology to advance digital equity.

To learn more about Duos Edge AI, visit www.duosedge.ai.

Sabey and JetCool Push Liquid Cooling from Pilot to Standard Practice

22 January 2026 at 17:00

As AI and high‑performance computing (HPC) workloads strain traditional air‑cooled data centers, Sabey Data Centers is expanding its partnership with JetCool Technologies to make direct‑to‑chip liquid cooling a standard option across its U.S. portfolio. The move signals how multi‑tenant operators are shifting from experimental deployments to programmatic strategies for high‑density, energy‑efficient infrastructure.

Sabey, one of the largest privately held multi‑tenant data center providers in the United States, first teamed with JetCool in 2023 to test direct‑to‑chip cooling in production environments. Those early deployments reported 13.5% server power savings compared with air‑cooled alternatives, while supporting dense AI and HPC racks without heavy reliance on traditional mechanical systems.
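To put that 13.5% in operating-cost terms, a quick sketch of the annual energy avoided (the 100 kW rack size and $0.08/kWh tariff are our own illustrative assumptions, not figures from Sabey or JetCool):

```python
def annual_savings_usd(rack_kw: float, savings_fraction: float,
                       price_usd_per_kwh: float, hours_per_year: int = 8760) -> float:
    """Annual cost of the server power avoided at a given savings fraction."""
    return rack_kw * savings_fraction * hours_per_year * price_usd_per_kwh

# A hypothetical 100 kW AI rack running year-round at $0.08/kWh:
# 13.5% savings avoids ~118 MWh, or roughly $9,500 per rack per year.
print(f"${annual_savings_usd(100, 0.135, 0.08):,.0f}")
```

Multiplied across a hall of dense racks, savings of this order explain why operators are moving liquid cooling from pilot to standard offering.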

The new phase of the collaboration is less about proving the technology and more about scale. Sabey and JetCool are now working to simplify how customers adopt liquid cooling by turning what had been bespoke engineering work into repeatable designs that can be deployed across multiple sites. The goal is to give enterprises and cloud platforms a predictable path to high‑density infrastructure that balances performance, efficiency and operational risk.

A core element of that approach is a set of modular cooling architectures developed with Dell Technologies for select PowerEdge GPU‑based servers. By closely integrating server hardware and direct‑to‑chip liquid cooling, the partners aim to deliver pre‑validated building blocks for AI and HPC clusters, rather than starting from scratch with each project. The design includes unified warranty coverage for both the servers and the cooling system, an assurance that Sabey says is key for customers wary of fragmented support models.

The expanded alliance sits inside Sabey’s broader liquid cooling partnership program, an initiative that aggregates multiple thermal management providers under one framework. Instead of backing a single technology, Sabey is positioning itself as a curator of proven, ready‑to‑integrate cooling options that map to varying density targets and sustainability goals. For IT and facilities teams under pressure to scale GPU‑rich deployments, that structure promises clearer design patterns and faster time to production.

Executives at both companies frame the partnership as a response to converging pressures: soaring compute demand, tightening efficiency requirements and growing scrutiny of data center energy use. Direct‑to‑chip liquid cooling has emerged as one of the more practical levers for improving thermal performance at the rack level, particularly in environments where power and floor space are limited but performance expectations are not.

For Sabey, formalizing JetCool’s technology as a standard, warranty‑backed option is part of a broader message to customers: liquid cooling is no longer a niche or one‑off feature, but an embedded part of the company’s roadmap for AI‑era infrastructure. Organizations evaluating their own cooling strategies can find the full announcement here.

Empire Fiber Internet Brings High-Speed Fiber To Irondequoit

22 January 2026 at 15:00

Empire Fiber Internet, a leading fiber optic internet service provider serving communities across New York and Pennsylvania, continues its expansion in the Greater Rochester area with the completion of its buildout in Irondequoit. This follows the company’s successful launch in Greece in August 2025 and underscores Empire Fiber Internet’s commitment to bringing high-speed, reliable fiber internet to Rochester-area communities.

This expansion delivers 100% high-speed fiber internet with symmetrical upload and download speeds to over 3,500 homes, with transparent rates, no hidden fees, no long-term contracts, and locally based 24/7 customer support.

“Irondequoit is a natural fit for our expansion: it’s a community that values choice and reliability, and our local team is already lighting up neighborhoods with 100% fiber, transparent pricing (no hidden fees or contracts), and 24/7 local support for residents and businesses,” said Kevin Dickens, CEO of Empire Fiber Internet. “High-speed internet fuels work, learning, innovation, and growth, and we’re proud to bring that kind of game-changing connectivity to the Irondequoit community.”

Empire Fiber Internet’s network is designed to meet the growing needs of today’s households and businesses, supporting streaming, gaming, remote work, cloud applications, and secure operations. Plans in serviceable areas start at $55 per month with symmetrical speeds up to 2 Gig, free installation, no hidden fees, and 24/7 local customer support.

To learn more about Empire Fiber Internet, visit www.empireaccess.com.

Three-quarters of AI Subscribers in the US Want Generative AI Included With Their Mobile Contract

22 January 2026 at 14:00

Originally posted on TelecomNewsroom.

Telcos are the missing link in AI adoption, say paying AI subscribers

Nearly three-quarters (74%) of US consumers who pay for generative AI services want those tools included directly with their mobile phone plan, according to new research from subscription bundling platform Bango.

The survey of 1,400 ChatGPT subscribers in the US also reveals that demand for AI-inclusive telco bundles extends beyond mobile. A further 72% of AI subscribers want AI included as part of their home broadband or TV package, while more than three-quarters (77%) want generative AI tools paired with streaming services such as Netflix or Spotify, offering a bundling opportunity for telcos.

The findings signal a major opportunity for telcos to become the primary distributors of AI services. AI subscribers already spend over $65 per month on these tools, representing a high-value audience.

To read the full press release, please click here.

Building the Digital Foundation for the AI Era: DataBank’s Vision for Scalable Infrastructure

21 January 2026 at 20:00

Data Center POST connected with Raul K. Martynek, Chief Executive Officer of DataBank Holdings, Ltd., ahead of PTC’26. Martynek joined DataBank in 2017 and brings more than three decades of leadership experience across telecommunications, Internet infrastructure, and data center operations. His background includes senior executive roles at Net Access, Voxel dot Net, Smart Telecom, and advisory positions with DigitalBridge and Plainfield Asset Management. Under his leadership, DataBank has expanded its national footprint, strengthened its interconnection ecosystems, and positioned its platform to support AI-ready, high-density workloads across enterprise, cloud, and edge environments. In the Q&A below, Martynek shares his perspective on the challenges shaping global digital infrastructure and how DataBank is preparing customers for the next phase of AI-driven growth.

Data Center Post (DCP) Question: What does your company do?  

Raul Martynek (RM) Answer: DataBank helps the world’s largest enterprises, technology, and content providers ensure their data and applications are always on, always secure, always compliant, and ready to scale to meet the needs of the artificial intelligence era.

DCP Q: What problems does your company solve in the market?

RM A: DataBank addresses a broad set of challenges enterprises face when managing critical infrastructure. Reliability and uptime are foundational, as downtime can severely impact revenue and customer trust. We also help organizations meet security and compliance requirements without having to build costly internal expertise. Our platform allows customers to scale infrastructure without large capital expenditures by shifting to an operating expense model. In addition, we provide managed expertise that frees internal teams to focus on strategic priorities, simplify hybrid IT and cloud integration, improve latency for distributed and edge workloads, strengthen cybersecurity posture, and mitigate talent and resource constraints.

DCP Q: What are your company’s core products or services?

RM A: Data center colocation, interconnection, enterprise cloud, compliance enablement, and data protection, all powered by expert, human support.

DCP Q: What markets do you serve?

RM A: DataBank serves customers across a broad geographic footprint in the United States and Europe. In the western United States, the company operates in key markets including Irvine, Los Angeles, and Silicon Valley in California, as well as Las Vegas, Salt Lake City, and Seattle. Its central U.S. presence includes Chicago, Denver, Indianapolis, and Kansas City. In the southern region, DataBank supports customers in Atlanta, Austin, Dallas, Houston, Miami, and Waco. Along the East Coast and Midwest, the company operates in markets such as Boston, Cleveland, New Jersey, New York City, Philadelphia, and Pittsburgh. Internationally, DataBank also serves customers in the United Kingdom.

DCP Q: What challenges does the global digital infrastructure industry face today?

RM A: The industry is facing a convergence of challenges, including power availability and grid constraints, sustainability and carbon reduction requirements, cooling demands for high-density AI and HPC workloads, supply chain pressures, land acquisition and zoning issues, and increasing interconnection complexity. At the same time, organizations must contend with talent shortages and rising cybersecurity risks, all while supporting rapidly expanding digital workloads.

DCP Q: How is your company adapting to these challenges?

RM A: We are building in markets with available power headroom and designing scalable power blocks to support future growth. Our facilities are being prepared for AI-era density with liquid-ready designs and more efficient cooling strategies. Sustainability remains a priority, with a focus on lowering energy and water usage. We are standardizing construction to improve efficiency and flexibility while expanding interconnection ecosystems such as DE-CIX. Additionally, our managed services help fill enterprise talent gaps, and we continue to invest in operational excellence, security, and company culture.

DCP Q: What are your company’s key differentiators?

RM A: DataBank differentiates itself through strong engineering and operational management, future-ready platforms, and deep compliance expertise. Our geographic focus allows us to serve customers where they need infrastructure most, while our managed services provide visibility and control across complex environments. We are also supported by patient, long-term investors, enabling disciplined growth and sustained investment.

DCP Q: What can we expect to see/hear from your company in the future?  

RM A: Customers can expect continued commitment to enterprise IT infrastructure alongside expanded AI-ready platforms. We are growing our interconnection ecosystems, advancing sustainability initiatives, modernizing key campuses, and expanding managed and hybrid IT services. Enhancing security, compliance, and customer success will remain central, as will our focus on talent and culture.

DCP Q: What upcoming industry events will you be attending? 

RM A: AI Tinkers; Metro Connect; ATC CEO Summit; MIMSS 26; DCD>Connect 2026; ITW 2026; 7×24 Cloud Run Community Festival; CBRE Digital Infrastructure Summit 2026; AI Infra Conference; TMT M&A Forum; MegaPort Connect; TAG Data Center Summit; Supercomputing 2026; Incompany; DE-CIX Dallas Olde World Holiday Market

DCP Q: Do you have any recent news you would like us to highlight?

RM A: DataBank has recently announced several milestones that underscore its continued growth and long-term strategy. The company expanded its financing vehicle to $1.6 billion to support the next phase of platform expansion and infrastructure investment. DataBank also released new research showing that 60 percent of enterprises are already seeing a return on investment from AI initiatives or expect to within the next 12 months, highlighting the accelerating business impact of AI adoption. In addition, DataBank introduced a company-wide employee ownership program, reinforcing its commitment to culture, alignment, and long-term value creation across the organization.

DCP Q: Is there anything else you would like our readers to know about your company and capabilities?

RM A: DataBank is building the digital foundation for the AI, cloud, and connected-device era. Its national footprint of data centers delivers secure, high-density colocation, interconnection, and managed services that help enterprises deploy mission-critical workloads with confidence.

We are designing for the future with liquid-cooling capabilities, campus modernization, and expanded interconnection ecosystems. We are equally committed to responsible digital infrastructure: improving efficiency, reducing water use, strengthening security, and advancing compliance.

Above all, we are a trusted infrastructure partner, providing the expertise and operational support organizations need to scale reliably and securely.

DCP Q: Where can our readers learn more about your company?  

RM A: www.databank.com

DCP Q: How can our readers contact your company? 

RM A: www.databank.com/contact-us

To learn more about PTC’26, please visit www.ptc.org/ptc26. The event takes place January 18-21, 2026 in Honolulu, HI.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com.

Stay in the know! Subscribe to Data Center POST today.

# # #

About Data Center POST

Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.

Data Center POST works hard to surface the most current information and thought-provoking ideas relevant to the success of the data center industry. Stay informed: visit www.datacenterpost.com.

ADVERTISE | CONTRIBUTE | SUBSCRIBE

Power Should Serve Compute: Airsys’ Energy-First Approach to Data Center Cooling

21 January 2026 at 17:00

Data Center POST had the opportunity to connect with Paul Quigley, President of Airsys Cooling Technologies, Inc., ahead of PTC’26. With more than 36 years of experience spanning HVAC contracting, distribution, and executive leadership in manufacturing, Quigley is widely recognized for his work in large-scale, high-demand cooling environments. His career includes co-designing one of the largest VRF projects in North America and leading complex cooling initiatives across the United States. Since joining Airsys in 2021, Quigley has focused on advancing precision and liquid-assisted cooling technologies to help data center operators address power constraints driven by AI, high-density compute, and rapid digital infrastructure growth. In the Q&A below, Quigley shares his perspective on the challenges facing the global digital infrastructure industry and how Airsys is helping operators turn cooling efficiency into usable compute power.

Data Center Post (DCP) Question: What does your company do?  

Paul Quigley (PQ) Answer: Airsys is an energy-focused cooling solutions company founded on a simple idea: power should serve computation, not be lost protecting it. Every system we build returns power back to the data center where it belongs. Our innovative, award-winning technologies give operators more usable power for compute, improving capacity, resilience, and overall profitability.

DCP Q: What problems does your company solve in the market?

PQ A: Airsys helps data centers recover power lost to cooling. As AI clusters push energy systems past their limits, operators are forced to choose between compute, capacity, and cost. Our cooling technologies solve that problem by returning power back to the data center, reducing stranded capacity, and giving operators more usable power for compute.

DCP Q: What are your company’s core products or services?

PQ A: We design and build the world’s most advanced cooling solutions for data centers, including:

  • LiquidRack spray-cooling and liquid-assisted rack systems
  • High-efficiency DX and chilled-water cooling systems
  • Flooded-evaporator chillers (CritiCool-X)
  • Indoor and outdoor precision cooling systems
  • Edge, modular, and containerized data-center cooling
  • Control systems, energy-optimization tools, and PCE/ROIP performance frameworks

DCP Q: What markets do you serve?

PQ A:

  • Hyperscale and AI compute environments
  • Colocation and enterprise data centers
  • Modular and prefabricated data centers
  • Edge and telecom infrastructure
  • Education, industrial, government, and defense applications requiring mission-critical cooling

DCP Q: What challenges does the global digital infrastructure industry face today?

PQ A: The industry now operates in an environment of rapid compute expansion and structural power scarcity. Grid limitations, long construction timelines, inefficient legacy cooling, land constraints, and the explosive rise of AI workloads have created an energy bottleneck. Data centers no longer struggle with space… they struggle with power.

DCP Q: How is your company adapting to these challenges?

PQ A: Airsys has always shifted the conversation from cooling to energy stewardship. Every system we design focuses on reducing cooling overhead and returning power to computation. Our LiquidRack spray-cooling platform, low-lift CritiCool-X chillers, and high-efficiency DX systems enable operators to deploy more compute with less infrastructure. We’re also advancing new metrics – PCE and ROIP – to help operators quantify the financial value of returned power.

DCP Q: What are your company’s key differentiators?

PQ A:

  • Energy-first design philosophy — our systems return power to compute
  • Rapid delivery and global manufacturing — critical in today’s supply-strained market
  • LiquidRack spray cooling — enabling high-density AI clusters without stranded power
  • Flooded-evaporator chiller technology — high efficiency, low lift, faster deployment
  • End-to-end portfolio — DX, chilled water, liquid-assisted cooling, modular systems
  • Practical engineering — simple, reliable, maintainable designs
  • PCE / ROIP frameworks — financial and operational tools that change how operators evaluate cooling impact

DCP Q: What can we expect to see/hear from your company in the future?  

PQ A: You will see a major expansion of our LiquidRack platform, new ultra-efficient chiller technologies, deeper integration of PCE/ROIP metrics, and broader support for modular and edge deployments. We are continuing to push innovation toward one goal: giving operators more usable power for compute.

DCP Q: What upcoming industry events will you be attending? 

PQ A: Events Airsys is considering for 2026:

PTC’26 Hawaii; Escape the Cold Aisle Phoenix; Advancing DC Construction West; DCD>Connect New York; Data Center World DC; World of Modular Las Vegas; NSPMA; 7×24 Exchange Spring Orlando; Data Center Nation Toronto; Datacloud USA Austin; AI Infra Summit Santa Clara; DCW Power Dallas; Yotta Las Vegas; 7×24 Exchange Fall San Antonio; PTC DC; DCD>Connect Virginia; DCAC Austin; Supercomputing; Gartner IOCS; NVIDIA GTC; OCP Global Summit.

DCP Q: Do you have any recent news you would like us to highlight?

PQ A: Yes, Airsys has expanded its high-density cooling portfolio with several major advancements, and more announcements are planned for early 2026 as the company continues to broaden its advanced cooling offerings for high-density compute environments.

DCP Q: Is there anything else you would like our readers to know about your company and capabilities?

PQ A: Airsys is built on a simple idea: power should serve computation, not be lost protecting it. Our mission is to help operators turn cooling losses into compute gains and to make energy stewardship a competitive advantage in the AI era.

DCP Q: Where can our readers learn more about your company?  

PQ A: www.airsysnorthamerica.com

DCP Q: How can our readers contact your company? 

PQ A: www.airsysnorthamerica.com/contact

To learn more about PTC’26, please visit www.ptc.org/ptc26. The event takes place January 18-21, 2026 in Honolulu, HI.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com.

Stay in the know! Subscribe to Data Center POST today.

# # #

About Data Center POST

Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed and online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.

Data Center POST works hard to get the most current information and thought-provoking ideas most apt to add relevance to the success of the data center industry. Stay informed, visit www.datacenterpost.com.

ADVERTISE | CONTRIBUTE | SUBSCRIBE

The post Power Should Serve Compute: Airsys’ Energy-First Approach to Data Center Cooling appeared first on Data Center POST.

AI Data Center Market to Surpass USD 1.98 Trillion by 2034

21 January 2026 at 15:00

The global AI data center market was valued at USD 98.2 billion in 2024 and is estimated to grow at a CAGR of 35.5% to reach USD 1.98 trillion by 2034, according to a recent report by Global Market Insights Inc.
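As a quick sanity check of the report's arithmetic, the 2024 base can be compounded at the stated CAGR (figures taken from the report above; the computation itself is illustrative):

```python
# Sanity-check the report's figures: USD 98.2B (2024) compounded at a
# 35.5% CAGR for 10 years, versus the projected USD 1.98T by 2034.
base_2024 = 98.2e9
cagr = 0.355
years = 10

projected_2034 = base_2024 * (1 + cagr) ** years
print(f"Projected 2034 size: ${projected_2034 / 1e12:.2f}T")

# CAGR implied by the stated endpoints:
implied_cagr = (1.98e12 / base_2024) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")
```

The stated 35.5% CAGR slightly overshoots the USD 1.98 trillion target (it lands near USD 2.05 trillion); the two endpoints imply a CAGR closer to 35.0%, a rounding gap typical of market reports.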

Growing adoption of generative AI and machine learning tools requires extraordinary processing power and storage capabilities, increasing reliance on data centers specifically optimized for AI workloads. These environments depend on advanced GPUs, scalable system architecture, and ultra-low-latency networking to support complex model training and inference across industries such as finance, healthcare, and retail. Big data analytics is also accelerating demand, as organizations handle massive streams of structured and unstructured information that must be processed rapidly.

AI-focused facilities enable high-performance computing for real-time workloads, strengthening their role as essential infrastructure for global digital transformation. The rapid expansion of cloud computing, along with the rising number of hyperscale facilities, continues to amplify the need for AI-ready infrastructures. Providers are investing in advanced AI data platforms that offer scalable services to enterprises and developers, further increasing market momentum.

The hardware segment of the AI data center market accounted for USD 61.1 billion in 2024. Growth is driven by expanding use of AI chips, GPU accelerators, advanced cooling technologies, high-density server systems, and optical networking solutions. Rising GPU energy requirements, the shift toward rack densities between 30 and 120 kW, and large-scale deployment strategies introduced by leading technology companies are shaping long-term capital allocation in the sector.

The cloud-based category held a 58% share in 2024 and is projected to grow at a CAGR of 35.2% from 2025 through 2034. This segment leads due to its unmatched scalability, flexible consumption options, and access to the latest AI-accelerated computing hardware without upfront investment. Hyperscale providers are making multi-billion-dollar commitments to strengthen global AI infrastructures, propelling adoption of AI-driven services and increasing demand for GPUs, TPUs, and specialized processors.

The U.S. AI data center market generated USD 33.2 billion in 2024. The country maintains a leading position, supported by prominent hyperscale operators and substantial investments in GPU clusters, liquid cooling, and large-scale AI-aligned builds. Federal incentives, regional tax advantages, and infrastructure funding have further solidified the United States as the most capacity-rich region for AI computing.

Key participants in the AI data center market include Huawei, AWS, NVIDIA, HPE, Digital Realty, Google, Lenovo, Microsoft, Equinix, and Dell Technologies. Companies expanding their foothold in the market are focusing on infrastructure modernization, large-scale GPU deployments, and energy-efficient system design.

Many firms are investing in high-density racks, integrated liquid cooling, and next-generation networking to support advanced AI workloads. Strategic partnerships with chipmakers, cloud providers, and colocation operators help accelerate capacity expansion and ensure access to cutting-edge AI hardware. Providers are also scaling global data center footprints, enhancing automation capabilities, and optimizing power utilization through renewable-energy integration. Long-term contracts with enterprises, AI-as-a-service offerings, and the buildout of specialized AI clusters further reinforce competitive positioning and market dominance.

The post AI Data Center Market to Surpass USD 1.98 Trillion by 2034 appeared first on Data Center POST.

Issues Data Centers Face and How to Overcome Them: A Guide for Managers

20 January 2026 at 14:30

Data centers are the backbone of modern digital infrastructure. They power cloud services, financial systems, healthcare platforms, and nearly every technology-driven business. As demand for data storage and processing grows, so do the operational, financial, and risk-related challenges data centers face. For managers, understanding these issues and knowing how to address them proactively is critical to maintaining uptime, security, and long-term viability. Here is what managers need to know to keep pace with demand.

Rising Energy Costs and Efficiency Demands

One of the most persistent challenges for data centers is energy consumption. Powering servers, cooling systems, and redundancy infrastructure requires enormous amounts of electricity, and energy costs continue to rise globally. Beyond cost, there is increasing pressure from regulators and customers to reduce environmental impact.

Managers can address this by investing in energy-efficient hardware, optimizing airflow and cooling layouts, and adopting real-time monitoring tools to identify waste. Long-term strategies may include transitioning to renewable energy sources or partnering with utility providers for more favorable pricing structures.
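One concrete starting point for the monitoring advice above is tracking PUE (Power Usage Effectiveness), a standard industry efficiency metric; the figures below are illustrative, not from the article:

```python
# Illustrative PUE calculation: total facility power divided by IT load.
# A PUE of 1.0 would mean every watt serves compute; real facilities
# spend extra power on cooling, power conversion, and lighting.
total_facility_kw = 1500.0  # hypothetical meter reading: whole facility
it_load_kw = 1000.0         # hypothetical meter reading: IT equipment only

pue = total_facility_kw / it_load_kw
overhead_kw = total_facility_kw - it_load_kw
print(f"PUE: {pue:.2f} ({overhead_kw:.0f} kW of overhead not serving compute)")
```

Trending this ratio in real time, rather than computing it once a quarter, is what turns it into the waste-identification tool described above.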

Cooling and Thermal Management Challenges

Heat is an unavoidable byproduct of high-density computing. Inefficient cooling not only increases costs but also raises the risk of equipment failure and downtime. As server densities increase, traditional cooling methods often struggle to keep up.

Modern solutions include hot-aisle/cold-aisle containment, liquid cooling, and AI-driven thermal monitoring systems. For managers, the key is treating cooling as a dynamic system rather than fixed infrastructure, one that evolves alongside hardware demands.

Financial Risk and Insurance Considerations

Data centers face significant financial exposure from equipment damage, downtime, liability claims, and unforeseen events. Even with strong operational controls, risk cannot be eliminated entirely.

This is where insurance becomes a critical part of risk management. Evaluating coverage that aligns with the unique needs of data center operations can help protect against losses that would otherwise threaten business continuity. BOP insurance by Next Insurance can help managers think more holistically about protecting assets, operations, and revenue streams as part of an overall risk strategy.

Downtime and Business Continuity Risks

Even brief outages can result in significant financial losses, reputational damage, and contractual penalties. This downtime may be caused by:

  • Power failures
  • Human error
  • Equipment malfunction
  • External events

To mitigate this risk, managers should prioritize redundancy at every critical point, including power supplies, network connections, and backup systems. Regular testing of disaster recovery plans is just as important as having them documented: a plan that hasn't been tested is often unreliable in real-world conditions, putting the business and the customers it serves at risk.
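The value of redundancy can be made concrete with basic availability arithmetic (the availability figures are illustrative, not from the article): redundant paths fail only when every path fails at once.

```python
# Availability of one power path vs. two independent redundant paths.
HOURS_PER_YEAR = 8760

single = 0.999                # one path: 99.9% available
pair = 1 - (1 - single) ** 2  # both paths must fail together

def downtime_hours(availability):
    """Expected downtime per year for a given availability."""
    return (1 - availability) * HOURS_PER_YEAR

print(f"Single path:    {single:.4%} -> {downtime_hours(single):.2f} h/yr")
print(f"Redundant pair: {pair:.6%} -> {downtime_hours(pair) * 3600:.0f} s/yr")
```

The math assumes the two paths fail independently; a shared single point of failure breaks that assumption, which is exactly why untested recovery plans are unreliable.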

Cybersecurity and Physical Security Threats

Data centers face a dual security challenge: digital threats and physical risks. Cyberattacks continue to grow in sophistication, while physical threats such as unauthorized access, theft, or vandalism remain real concerns.

Addressing this requires layered security. On the digital side, this includes continuous patching, network segmentation, and monitoring for unusual activity. Physically, access controls, surveillance systems, and strict visitor protocols are essential. Managers should also ensure that staff training keeps pace with evolving threats, as human error remains a major vulnerability.

Compliance and Regulatory Pressure

Data centers often operate under complex regulatory requirements related to data privacy, industry standards, and regional laws. Compliance failures can result in fines, legal exposure, and loss of customer trust.

Managers can stay ahead by maintaining clear documentation, conducting regular audits, and working closely with legal and compliance teams. Building compliance into operational processes, rather than treating it as an afterthought, reduces risk and simplifies reporting.

Turning Challenges Into Strategic Advantage

While data centers face a wide range of operational and strategic challenges, these issues also present opportunities for improvement and differentiation. Managers who address them proactively are better positioned to deliver reliability and value in a competitive market, so take action before these issues take hold.

# # #

About the Author:

James Daniels is a freelance writer, business enthusiast, a bit of a tech buff, and an overall geek. He is also an avid reader, who can spend hours reading and knowing about the latest gadgets and tech, whilst offering views and opinions on these topics.

The post Issues Data Centers Face and How to Overcome Them: A Guide for Managers appeared first on Data Center POST.

Human Error in Cybersecurity and the Growing Threat to Data Centers

19 January 2026 at 17:00

Cyber incidents continued to escalate throughout 2025, and their impact is only growing as we move into 2026. The rapid evolution of novel cyber threats leaves data centers increasingly exposed to disruptions that extend beyond traditional IT boundaries.

The Uptime Institute’s annual outage analysis shows that in 2024, cyber-related disruptions occurred at roughly twice the average rate seen over the previous four years. This trend aligns with findings from Honeywell’s 2025 Cyber Threat Report, which identified a sharp increase in ransomware and extortion activity targeting operational technology environments based on large-scale system data.

There are many discussions today around infrastructure complexity and attack sophistication, but it is a lesser-known reality that human error in cybersecurity remains a central factor behind many of these incidents. Routine configuration changes, access decisions, or actions taken under stress can create conditions that allow errors to sneak in. In high-availability environments, human error is often the point at which otherwise contained threats begin to escalate into bigger problems.

As cyberattacks on data centers continue to grow in number, downtime is carrying heavier and heavier financial and reputational consequences. Addressing human error in cybersecurity means recognizing that human behavior plays a direct role in how a security architecture performs in practice. Let’s take a closer look.

How Attackers Take Advantage of Human Error in Cybersecurity

Cyberattacks often exploit vulnerabilities that stem from both superficial, even preventable, mistakes and deeper, systemic issues. Human error in cybersecurity often arises when established procedures are not followed consistently, creating gaps that attackers are eager to exploit. A delayed firmware update or an incomplete maintenance task can leave infrastructure exposed, even when the risks are already known. And even where organizations have defined policies to reduce these exposures, noncompliance or insufficient follow-through often weakens their effectiveness.

In many environments, operators are aware that parts of their IT and operational technology infrastructure carry known weaknesses, but due to a lack of time or oversight, they fail to address them consistently. Limited training also adds to the problem, especially when employees are expected to recognize and respond to social engineering techniques. Phishing, impersonation, and ransomware attacks are increasingly targeting organizations with complex supply chains and third-party dependencies, and in these situations, human error often enables the initial breach, after which attackers move laterally through systems, using minor mistakes to trigger disruptions.

Why Following Procedures is Crucial

Having policies in place doesn't always guarantee consistent follow-through. In everyday operations, teams often juggle many things at once (updates, alerts, routine maintenance), and small steps can be missed unintentionally. Even experienced staff make these kinds of mistakes, especially when managing large or complex environments over an extended period. Gradually, these small oversights add up and leave systems exposed.

Account management works similarly. Password rules or policies for handling inactive accounts are usually well-defined; however, they are not always applied uniformly. Dormant accounts may go unnoticed, and systems can fall behind on updates or escape regular review. Human error in cybersecurity often develops step by step, through workload, familiarity, and everyday stress, not because of a lack of skill or awareness.

The Danger of Interacting With Social Engineering Without Even Knowing

Social engineering is a method of attack that uses deception and impersonation to influence people into revealing information or granting access. It relies on trust and context to make targets perform actions that appear harmless and legitimate in the moment.

The trick of these impersonation attacks is that they mirror everyday communication very accurately. Attackers today have all the tools to pose as colleagues, service providers, or internal support staff. A phone call from someone claiming to be part of the IT help desk can easily seem routine, especially when framed as a quick fix or standard check. Similar approaches appear in emails and messaging platforms, and the pattern is the same: urgency overrides safety.

With the various new tools available, visual deception has become very common. Employees may be directed to login pages that closely resemble internal systems and enter credentials without hesitation. Emerging techniques like AI-assisted voice or video impersonation further blur the line between legitimate requests and malicious activity, making social engineering interactions very difficult to recognize in real time.

Ignoring Security Policies and Best Practices

Security policies offer little protection if they exist only as formal documentation and are not followed consistently on the floor. Even when access procedures are defined, employees under time pressure can make undocumented exceptions. Change management rules, for example, may require peer review and approval, but urgent maintenance or capacity pressures often lead to decisions that bypass those steps.

These small deviations create gaps between how systems are supposed to be protected and how they are actually handled. When policies become situational or optional, security controls lose their purpose and reliability, leaving the infrastructure exposed, even though there’s a mature security framework in place.

When Policies Leave Room for Interpretation

Policies that lack precision introduce variability into how security controls are applied across teams and shifts. When procedures don't explicitly define how credentials should be managed on shared systems, retained login sessions or administrative access can remain in place beyond their intended scope. Similarly, if requirements for password rotation or periodic access reviews are loosely framed or undocumented, they are more likely to be deferred during routine operations.

These conditions rarely trigger immediate alerts or audit findings. However, over time, they accumulate into systemic weaknesses that expand the attack surface and increase the likelihood of attacks.

Best Practices That Erode in Daily Operations

Security issues often emerge through slow, incremental changes. When operational pressure increases, teams may rely on informal workarounds to keep everything running. Routine best practices like updates, access reviews, and configuration standards can slip down the priority list or become sloppy in their application. Individually, each of these decisions can seem reasonable in the moment; over time, however, they add up and dilute the established safeguards, leaving the organization exposed even without a single clearly identifiable incident.

Overlooking Access and Offboarding Control

Ignoring best practices around access management introduces another layer of risk. Employees and third-party contractors often retain privileges beyond their active role if offboarding steps are not followed through. In the absence of clear deprovisioning rules, such as promptly disabling accounts, dormant access can linger unnoticed. These inactive accounts are rarely monitored closely enough to detect misuse or compromise when it happens.
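As a minimal sketch of the kind of routine review that catches these gaps (the account records, field names, and 90-day threshold are hypothetical, not drawn from any specific directory product):

```python
from datetime import datetime, timedelta

DORMANCY_THRESHOLD = timedelta(days=90)  # assumed review policy

# Hypothetical account records; a real review would pull these from a
# directory service or IAM audit export.
accounts = [
    {"user": "jsmith",     "last_login": datetime(2025, 12, 20), "enabled": True},
    {"user": "old_vendor", "last_login": datetime(2025, 6, 1),   "enabled": True},
    {"user": "contractor", "last_login": datetime(2025, 3, 15),  "enabled": False},
]

def dormant_accounts(accounts, now):
    """Enabled accounts whose last login exceeds the dormancy threshold."""
    return [a["user"] for a in accounts
            if a["enabled"] and now - a["last_login"] > DORMANCY_THRESHOLD]

print(dormant_accounts(accounts, now=datetime(2026, 1, 19)))
```

Running a check like this on a schedule turns dormant-access cleanup from an ad hoc offboarding step into a recurring control.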

Policy Gaps During Incident Response

The consequences of ignoring procedures become most visible when an actual cybersecurity incident occurs. When teams are forced to act quickly without clear guidance, errors start to surface. Procedures that are outdated, untested, or difficult to locate offer little support during an emergency. No policy can eliminate risk completely; however, organizations that treat procedures as living, enforceable tools are better positioned to respond effectively when an incident occurs.

A Weak Approach to Security Governance

Weak security governance often allows risks to persist unnoticed, especially when oversight from management is limited or unclear. Without clear ownership and accountability, routine tasks like applying security patches or reviewing alerts can be delayed or overlooked, leaving systems exposed. Over time, these seemingly insignificant gaps create an environment in which vulnerabilities are known but not actively addressed.

Training plays a very important role in closing this gap, but only when it is treated as part of governance, and not as an isolated activity. Regular, structured training helps employees develop a habit of verification and reinforces the checks and balances defined by organizational policies. To remain effective, training has to evolve in tandem with the threat landscape. Employees need ongoing exposure to emerging attack techniques and practical guidance on how to recognize and respond to them within their daily workflows. Aligned governance and training put organizations in a better position to reduce risk driven by human factors.

Understanding the Stakes

Human error in cybersecurity is often discussed as a collection of isolated missteps, but in reality, it reflects how people operate within complex systems under constant pressure.

In data center environments, these errors rarely occur as isolated events; they are shaped by interconnected processes, tight timelines, and attackers who deliberately exploit trust, familiarity, and routine behavior. Seen from this angle, human error is not just a record of individual mistakes but a window into how risk develops across an organization over time.

Recognizing the role of human error in cybersecurity is essential for reducing future incidents, but awareness alone is not enough. Training also plays an important role, but it cannot compensate for unclear processes, weak governance, or a culture that prioritizes speed over safety.

Data center operators have to continuously adapt their security practices and reinforce expectations through daily operations instead of treating security best practices as rigid formalities. Building a culture where employees understand how their actions influence security outcomes helps organizations respond more effectively to evolving threats and limits the conditions that allow small errors to turn into major, devastating incidents.

# # #

About the Author

Michael Zrihen is the Senior Director of Marketing & Internal Operations Manager at Volico Data Centers.

The post Human Error in Cybersecurity and the Growing Threat to Data Centers appeared first on Data Center POST.

Sabey Data Centers Taps OptiCool to Tackle High-Density Cooling

19 January 2026 at 15:30

Sabey Data Centers is adding OptiCool Technologies to its growing ecosystem of advanced cooling partners, aiming to make high-density and AI-driven compute deployments more practical and energy efficient across its U.S. footprint. This move highlights how colocation providers are turning to specialized liquid and refrigerant-based solutions as rack densities outstrip the capabilities of traditional air cooling.

OptiCool is known for two-phase refrigerant pumped systems that use a non-conductive refrigerant to absorb heat through phase change at the rack level. This approach enables efficient heat removal without chilled water loops or extensive mechanical plant build-outs, which can simplify facility design and cut both capital and operating costs for data centers pushing into higher power densities. Sabey is positioning the OptiCool alliance as part of its integrated cooling technologies partnership program, which is designed to lower barriers to liquid and alternative cooling adoption for customers. Instead of forcing enterprises to engineer bespoke solutions for each deployment, Sabey is curating pre-vetted architectures and partners that align cooling technology, facility infrastructure and operational responsibility. For operators planning AI and HPC rollouts, that can translate into clearer deployment paths and reduced integration risk.

The appeal of two-phase refrigerant cooling lies in its combination of density, efficiency and retrofit friendliness. Because the systems move heat directly from the rack to localized condensers using a pumped refrigerant, they can often be deployed with minimal disruption to existing white space. That makes them attractive for operators that need to increase rack power without rebuilding entire data halls or adding large amounts of chilled water infrastructure.

Sabey executives frame the partnership as a response to customer demand for flexible, future-ready cooling options. As more organizations standardize on GPU-rich architectures and high-density configurations, cooling strategy has become a primary constraint on capacity planning. By incorporating OptiCool’s technology into its program, Sabey is signaling to customers that they will have multiple, validated pathways to support emerging workload profiles while staying within power and sustainability envelopes.

As liquid and refrigerant-based cooling rapidly move into the mainstream, customers evaluating their own AI and high-density strategies may benefit from understanding how Sabey is standardizing these technologies across its portfolio. To explore how this partnership and Sabey’s broader integrated cooling program could support specific deployment plans, readers can visit Sabey’s website for more information at www.sabeydatacenters.com.

The post Sabey Data Centers Taps OptiCool to Tackle High-Density Cooling appeared first on Data Center POST.

It’s Not An AI Bubble — We’re Witnessing the Next “Cloud” Revolution

19 January 2026 at 14:30

Alphabet, Amazon, and Microsoft: these tech giants’ cloud services (Google Cloud, AWS, and Azure, respectively) are considered the driving force behind all current business computing, data, and mobile services. But back in the mid-2000s, they weren’t immediately seen as best bets on Wall Street. When Amazon launched AWS, analysts and investors were skeptical, dismissing it as a distraction from Amazon’s core retail business. The Wall Street wizards did not understand the potential of cloud computing. Many critics believed enterprises would never move their mission-critical workloads off-premises and into remote data centers.

As we all know, the naysayers were wrong, and cloud computing took off, redefining global business. It turbo-charged the economy, creating trillions in enterprise value while reducing IT costs, increasing application agility, and enabling new business models. In addition, the advent of cloud services lowered barriers to entry for startups and enabled rapid service scaling. Improving efficiency, collaboration, and innovation through scalable, pay-as-you-go access to computing resources was part of the formula for astounding success. The cloud pushed innovation to every corner of society, and those wise financiers misunderstood it. They could not see how this capital-intensive, long-horizon bet would ever pay off.

Now, we are at that moment again. This time with artificial intelligence.

Headlines appear every day saying that we’re in an “AI bubble.” But AI has gone beyond mere speculation; the hyperscalers are in early-stage infrastructure buildout mode. They understand this momentum: they have seen this movie before with a different protagonist, and they know the story ends with transformation, not collapse. The need for transformative compute, power, and connectivity is the catalyst driving a new generation of data center buildouts. The applications, the productivity, and the tools are there. And unlike the early cloud era, sustainable AI-related revenue is already a predictable line item.

The Data

Consider these most recent quarterly earnings:

  • Microsoft Q3 2025: Revenue: $70.1B, up 13%. Net income: $25.8B, up 18%. Intelligent Cloud grew 21% led by Azure, with 16 points of growth from AI services.
  • Amazon Q3 2025: Revenue: $180.2B, up 13%. AWS grew 20% to $33B. Trainium2, its second-gen AI chip, is a multi-billion-dollar line. AWS added 3.8 GW of power capacity in 12 months due to high demand.
  • Alphabet (Google Parent) Q3 2025: Revenue: $102.35B, up 16%. Cloud revenue grew 33% to $15.2B. Operating income: up nearly 85%, backed by $155B cloud backlog.
  • Meta Q3 2025: Revenue: $51.2B, up 26%. Increased infrastructure spend focused on expanding AI compute capacity.

These are not the signs of a bubble. These are the signatures of a platform shift, and the companies leading it are already realizing returns while businesses weave AI into operations.

Bubble or Bottleneck

However, let’s be clear about this analogy: AI is not simply the next chapter of the cloud. Instead, it builds on and accelerates the cloud’s original mission: making extraordinary computing capabilities accessible and scalable. While the cloud democratized computing, AI is now democratizing intelligence and autonomy. This evolution will transform how we work, secure systems, travel, heal, build, educate, and solve problems.

Just as there were cloud critics, we now have AI critics. They say that aggressive capital spending, rising energy demand, and grid strain are signs that the market is already overextended. The pundits are correct about the spending:

  • Alphabet (Google) Q3 2025: ~US $24B on infrastructure oriented toward AI/data centers.
  • Amazon (AWS) Q3 2025: ~US $34.2B, largely on infrastructure/AI-related efforts.
  • Meta Q3 2025: US $19.4B directed at servers/data centers/network infrastructure for AI.
  • Microsoft Q3 2025: Roughly US $34.9B, of which perhaps US $17-18B or more is directly AI/data-center infrastructure (based on “half” of capex).

However, the pundits’ underlying argument is predicated on the same misunderstandings seen in the run-up to the cloud era: it confuses infrastructure investment with excess spending. The challenge with AI is not too much capacity; it is not enough. Demand is already exceeding grid capacity, land availability, power transmission expansion, and specialized equipment supply.

Bubbles do not behave that way; they generate idle capacity. For example, consider the collapse of Global Crossing. The company created the first transcontinental internet backbone by laying 100,000 route-miles of undersea fiber linking 27 countries.

Unfortunately, Global Crossing did not survive the bursting of the dot-com bubble and filed for bankruptcy in 2002. However, Level 3 (later acquired by CenturyLink in 2017, now Lumen Technologies) knew better than to listen to Wall Street and acquired Global Crossing’s cables. Today, Lumen reports total 2024 revenue of $13.1 billion. Although the company does not break out submarine cable revenue, it is reasonable to infer that these cables still generate revenue in the low billions of dollars, a nice perpetual paycheck for not listening to the penny pinchers.

The AI economy is moving the value chain down the same path toward sustainable profitability. But first, we must address factors such as data center proximity to strong grid capacity, access to substation expansion, transformer supply, water access, cooling capacity, and land suited to modern power-intensive compute loads.

Power, Land, and the New Workforce

The cloud era prioritized fiber; the AI era is prioritizing power. Transmission corridors, utility partnerships, renewable integration, cooling systems, and purpose-built digital land strategies are essential for AI expansion. And with all that come the “pick and shovel” jobs of building data centers, which Wall Street does not factor into the AI economy. You need look no further than Caterpillar’s Q3 2025 sales and revenue of $16.1 billion, up 10 percent.

Often overlooked in the tech hype are the industrial, real estate, and power grid requirements for data center builds, which require skilled workers such as electricians, steelworkers, construction crews, civil engineers, equipment manufacturers, utility operators, grid modernizers, and renewable developers. And once they’re up and running, data centers need cloud and network architects, cybersecurity analysts, and AI professionals.

As AI scales, it will lift industrial landowners, renewable power developers, utilities, semiconductor manufacturers, equipment suppliers, telecom networks, and thousands of local trades and service ecosystems, just as it’s lifting Caterpillar. It will accelerate infrastructure revitalization and strengthen rural and suburban economies. It will create new industries, just like the cloud did with Software as a Service (SaaS), e-commerce logistics, digital banking, streaming media, and remote-work platforms.

Conclusion

We’ve seen Wall Street mislabel some of the most significant tech expansions, from the telecom-hotel buildout of the 1990s to the co-location wave, global fiber expansion, hyperscale cloud, and now AI. As with all revolutionary ideas, skepticism tends to precede them, even when there is an inevitability to them. But stay focused: infrastructure comes before revenue, and revenue tends to arrive sooner than predicted, which drives home the point that AI is not inflating; it is expanding.

Smartphones reshaped consumer behavior within a decade; AI will reshape the industry in less than half that time. This is not a bubble. It is an infrastructure super-cycle predicated on electricity, land, silicon, and ingenuity. Now is the time to act: those who build power-first digital infrastructure are not in the hype business; they’re laying the foundation for the next century of economic growth.

# # #

About the Author

Ryne Friedman is an Associate at hi-tequity, where he leverages his commercial real estate expertise to guide strategic site selection and location analysis for data center development. A U.S. Coast Guard veteran and licensed Florida real estate professional, he previously supported national brands such as Dairy Queen, Crunch Fitness, Jimmy John’s, and 7-Eleven with market research and site acquisition. His background spans roles at SLC Commercial, Lambert Commercial Real Estate, DSA Encore, and DataCenterAndColocation. Ryne studied Business Administration and Management at Central Connecticut State University.

The post It’s Not An AI Bubble — We’re Witnessing the Next “Cloud” Revolution appeared first on Data Center POST.

Resolute CS and Equinix Close the Last-Mile Gap with Automated Connectivity Platform

15 January 2026 at 21:00

Equinix customers can now order last-mile connectivity from enterprise edge locations to any of Equinix’s 270+ data centers globally, eliminating weeks of manual sourcing and the margin stacking that has long plagued enterprise network procurement.

The collaboration integrates Resolute CS’s NEXUS platform directly into the Equinix Customer Portal, giving enterprises transparent access to 3,200+ carriers across 180 countries. Rather than navigating opaque pricing through multiple intermediaries, customers can design, price, and order last-mile access with full visibility into costs and carrier options.

The Last-Mile Problem

While interconnection platforms like Equinix Fabric have transformed data center connectivity, the edge connectivity gap has remained a persistent friction point. Enterprises connecting branch offices or remote facilities to data centers typically face weeks-long sourcing cycles, opaque pricing structures with 2-4 layers of margin stacking (25-30% each), and inconsistent delivery across geographies.
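To see why those margin layers matter, note that markups compound: with three intermediaries each adding 25–30%, the enterprise can end up paying roughly double the underlying carrier cost. A minimal sketch, using hypothetical dollar figures (only the layer counts and markup rates come from the article):

```python
# Illustrative: how 2-4 layers of 25-30% margin stacking compound.
# The base circuit cost below is hypothetical; only the markup rates
# and layer counts are taken from the article.
def stacked_price(base_cost: float, layers: int, markup: float) -> float:
    """Price after `layers` intermediaries each apply `markup`."""
    return base_cost * (1 + markup) ** layers

base = 1000.0  # hypothetical monthly circuit cost from the carrier
for layers in (2, 3, 4):
    for markup in (0.25, 0.30):
        price = stacked_price(base, layers, markup)
        print(f"{layers} layers @ {markup:.0%}: ${price:,.2f} "
              f"({price / base:.2f}x the carrier cost)")
```

At the high end (four layers at 30%), the delivered price is nearly 2.9x the carrier's underlying cost, which is the inefficiency transparent marketplace pricing aims to strip out.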

This inefficiency becomes particularly acute as AI workloads shift toward distributed architectures. Unlike centralized applications, AI infrastructure increasingly requires connectivity across edge locations, multiple data centers, and cloud platforms, creating exponentially more last-mile requirements that manual sourcing processes cannot efficiently handle.

How It Works

Resolute NEXUS automates route design, identifies diversity and resiliency options, simplifies cloud access paths, and coordinates direct ordering with carriers. The result: enterprises can manage connectivity from branch office to data center to cloud through a single portal, with transparent pricing and no hidden margin layers.

“We are empowering customers to design their network architecture without access constraints,” said Patrick C. Shutt, CEO and co-founder of Resolute CS. “With Equinix and Resolute NEXUS, customers can design, price, and order global last-mile access with full transparency, removing complexity and lowering costs.”

Benefits for Carriers Too

The platform also creates opportunities for network providers. By operating as a carrier-neutral marketplace, Resolute NEXUS gives providers direct visibility into qualified enterprise demand, improved infrastructure utilization, and lower customer acquisition costs, all without the traditional intermediary layers.

AI and Distributed Infrastructure

With Equinix operating 270+ AI-optimized data centers across 77 markets, automated last-mile sourcing directly addresses the connectivity requirements for distributed AI deployments. Enterprises can now provision edge-to-cloud connectivity with the speed and transparency expected from modern cloud services.

Equinix Fabric customers can access the platform immediately through the Equinix Customer Portal by navigating to “Find Service Providers” and searching for Resolute NEXUS – Last Mile Access.

To learn more, read the full press release here.

The post Resolute CS and Equinix Close the Last-Mile Gap with Automated Connectivity Platform appeared first on Data Center POST.

DC BLOX Secures $240 Million to Accelerate Hyperscale Data Center Growth Across the Southeast

15 January 2026 at 16:00

DC BLOX has taken a significant step in accelerating digital transformation across the Southeast with the closing of a new $240 million HoldCo financing facility from Global Infrastructure Partners (GIP), a part of BlackRock. This strategic funding positions the company to rapidly advance its hyperscale data center expansion while reinforcing its role as a critical enabler of AI and cloud growth in the region.​

Strategic growth financing

The $240 million facility from GIP provides fresh growth capital dedicated to DC BLOX’s hyperscale data center strategy, building on the company’s recently announced $1.15 billion and $265 million Senior Secured Green Loans. Together, these financings support the development and construction of an expanding portfolio of digital infrastructure projects designed to meet surging demand from hyperscalers and carriers.​

Powering AI and cloud innovation

DC BLOX has emerged as a leader in connected data center and fiber network solutions, with a vertically integrated platform that includes hyperscale data centers, subsea cable landing stations, colocation, and fiber services. This model allows the company to offer end-to-end solutions for hyperscalers and communications providers seeking capacity, connectivity, and resiliency in high-growth Southeastern markets.​

Community and economic impact

The new financing is about more than infrastructure; it is also about regional economic development. DC BLOX’s investments help bring cutting-edge AI and cloud technology into local communities, while driving construction jobs, tax revenues, and power grid enhancements that benefit both customers and ratepayers.

“We are excited to partner with GIP, a part of BlackRock, to fuel our ambitious growth goals,” said Melih Ileri, Chief Investment Officer at DC BLOX. “This financing underscores our commitment to serving communities in the Southeast by bringing cutting-edge AI and cloud technology investments with leading hyperscalers into the region, and creating economic development activity through construction jobs, taxes paid, and making investments into the power grid for the benefit of our customers and local ratepayers alike.”​

Backing from leading investors

Michael Bogdan, Chairman of DC BLOX and Head of the Digital Infrastructure Group at Future Standard, highlighted that this milestone showcases the strength of the company’s vision and execution. Future Standard, a global alternative asset manager based in Philadelphia with over $86.0 billion in assets under management, leads DC BLOX’s sponsorship and recently launched its Future Standard Digital Infrastructure platform with more than $2 billion in assets. GIP, now a part of BlackRock and overseeing over $189 billion in assets, brings deep sector experience across energy, transport, and digital infrastructure, further validating DC BLOX’s role in shaping the Southeast as a global hub for AI-driven innovation.

Read the full release here.

The post DC BLOX Secures $240 Million to Accelerate Hyperscale Data Center Growth Across the Southeast appeared first on Data Center POST.

Registration Opens for Yotta 2026 as AI Infrastructure Demand Accelerates

15 January 2026 at 15:00

Yotta 2026 is officially open for registration, returning Sept. 28–30 to Las Vegas with a larger footprint, a new venue and an expanded platform designed to meet the accelerating demands of AI-driven infrastructure. Now hosted at Caesars Forum, Yotta 2026 reflects both the rapid growth of the event and the unprecedented pace at which AI is reshaping compute, power and digital infrastructure worldwide.

As AI workloads scale faster than existing systems were designed to handle, infrastructure leaders are facing mounting challenges around power availability, capital deployment, resilience and integration across traditionally siloed industries. Yotta 2026 is built to convene the full ecosystem grappling with these realities, bringing together operators, hyperscalers, enterprise leaders, energy executives, investors, builders, policymakers and technology partners in one place.

Rebecca Sausner, CEO of Yotta, emphasizes that the event is designed for practical progress, not theoretical discussion. From chips and racks to networks, cooling, power and community engagement, AI is transforming every layer of digital infrastructure. Yotta 2026 aims to move conversations beyond vision and into real-world solutions that address scale, reliability and investment risk in an AI-first era.

A defining feature of Yotta 2026 is its advisory board-led approach to programming. The conference agenda is being developed in collaboration with the newly announced Yotta Advisory Board, which includes senior leaders from organizations spanning AI, cloud, energy, finance and infrastructure, including OpenAI, Oracle, Schneider Electric, KKR, Xcel Energy, GEICO and the Electric Power Research Institute (EPRI). This cross-sector guidance ensures the program reflects how the industry actually operates, as an interconnected system where decisions around power, compute, capital, design and policy are inseparable.

The 2026 agenda will focus on the most urgent challenges shaping the AI infrastructure era. Key themes include AI infrastructure and compute density, power generation and grid interconnection, capital formation and investment risk, design and operational resilience, and policy and public-private alignment. Together, these topics offer a market-driven view of how digital infrastructure must be designed, financed and operated to support AI at scale.

With an anticipated 6,000+ AI and digital infrastructure leaders in attendance, Yotta 2026 will feature a significantly expanded indoor and outdoor expo hall, curated conference programming and immersive networking experiences. Hosted at Caesars Forum, the event is designed to support both strategic planning and hands-on execution, creating space for collaboration across the entire infrastructure value chain.

Early registration is now open, with passes starting at $795 and discounted rates available for early registrants. As AI continues to drive unprecedented infrastructure demand, Yotta 2026 positions itself as a critical forum for the conversations and decisions shaping the future of compute, power and digital infrastructure.

To learn more or register, visit yotta-event.com.

The post Registration Opens for Yotta 2026 as AI Infrastructure Demand Accelerates appeared first on Data Center POST.

ESI Expands HVO Fuel Services to Power Data Center Sustainability and Net‑Zero 2030 Ambitions

15 January 2026 at 14:00

ESI Total Fuel Management is expanding its Hydrotreated Vegetable Oil (HVO/R99) services to help data centers and other mission-critical facilities advance their sustainability strategies without sacrificing reliability. With this move, the company is deepening its role as a long-term partner for operators pursuing Net-Zero 2030 goals in an increasingly demanding digital infrastructure landscape.​

Advancing data center sustainability

Across the data center industry, operators are under growing pressure to reduce the environmental impact of standby power systems while maintaining assured uptime. ESI draws on decades of experience in fuel lifecycle management, having previously championed ultra-low sulfur diesel adoption, to guide customers through the transition to renewable diesel.​

To support practical and scalable adoption, ESI has established the first secure HVO/R99 supply chain on the East Coast, giving operators dependable access to renewable diesel as part of a long-term fuel strategy. This infrastructure enables data center and mission-critical operators to integrate HVO into their operations as a realistic step toward emissions reduction and operational continuity.​

Renewable diesel performance benefits

HVO/R99 can reduce carbon emissions by up to 90 percent compared with conventional diesel, while maintaining strong cold-weather performance and long-term fuel stability suited to standby generator storage cycles. As a drop-in fuel, it requires no modifications to existing infrastructure and directly supports Scope 1 emissions reduction initiatives.​

Integrated lifecycle approach

Within ESI’s broader portfolio, HVO is one component of a comprehensive approach encompassing fuel quality, monitoring, compliance, and system resiliency.

“Sustainability goals do not replace the need for resiliency, and they can be complementary,” said Alex Marcus, CEO and president of ESI Total Fuel Management. “Our focus is helping customers implement solutions that are technically sound and operationally proven. By managing the entire fuel lifecycle, from supply and storage to monitoring, consumption, and pollution control, we help customers reduce environmental impact while maintaining resilient, mission-critical systems.”​

Supporting Net-Zero 2030 objectives

For data center operators pursuing Net-Zero 2030, ESI provides the engineering expertise, infrastructure, and operational support needed to move beyond isolated initiatives toward coordinated, data-driven fuel strategies. This combination of renewable fuel options and full lifecycle management helps strengthen both sustainability and resiliency for mission-critical environments.​

Read the full release here.

The post ESI Expands HVO Fuel Services to Power Data Center Sustainability and Net‑Zero 2030 Ambitions appeared first on Data Center POST.

Duos Edge AI Brings Another Edge Data Center to Rural Texas

14 January 2026 at 14:00

Duos Technologies Group Inc. (Nasdaq: DUOT), through its operating subsidiary Duos Edge AI, Inc., has deployed another patented modular Edge Data Center (EDC) in Hereford, Texas. The facility was deployed in partnership with Hereford Independent School District (Hereford ISD) and marks another milestone in Duos Edge AI’s mission to deliver localized, low-latency compute infrastructure that supports education and community technology growth across rural and underserved markets.

“We are thrilled to partner with Duos Edge AI to bring a state-of-the-art Edge Data Center directly to our Administration location in Hereford ISD,” said Dr. Ralph Carter, Superintendent of Hereford Independent School District. “This innovative deployment will dramatically enhance our digital infrastructure, providing low-latency access to advanced computing resources that will empower our teachers with cutting-edge tools, enable real-time AI applications in the classroom, and ensure faster, more reliable connectivity for our students and staff.”

Each modular facility is designed for rapid 90-day deployment and delivers scalable, high-performance computing power with enterprise-grade security controls, including third-party SOC 2 Type II certification under AICPA standards.

Duos Edge AI’s patented modular infrastructure incorporates a U.S. patent for an Entryway for a Modular Data Center (Patent No. US 12,404,690 B1), providing customers with secure, compliant, and differentiated Edge infrastructure that operates exclusively on on-grid power and requires no water for cooling. Duos Edge AI continues to expand nationwide, capitalizing on growing demand for localized compute, AI enablement, and resilient digital infrastructure across underserved and high-growth markets.

“Each deployment strengthens our ability to scale a repeatable, capital-efficient edge infrastructure platform,” said Doug Recker, President of Duos and Founder of Duos Edge AI. “Our patented, SOC 2 Type II-audited EDCs are purpose-built to meet real customer demand for secure, low-latency computing while supporting long-term revenue growth and disciplined execution across our targeted markets.”

To learn more about Duos Edge AI, visit www.duosedge.ai.

The post Duos Edge AI Brings Another Edge Data Center to Rural Texas appeared first on Data Center POST.
