India’s Energy Transition Enters Complex Execution Phase Amid Carbon Pricing and Grid Reforms – EQ

In Short: India’s energy transition is moving from planning to a complex execution phase, as carbon pricing and grid reforms begin to reshape investment signals. Policymakers and investors are navigating evolving regulatory, financial, and operational frameworks to optimize renewable integration, modernize transmission systems, and balance growth with decarbonization objectives, signaling a critical shift in the country’s clean energy strategy.

In Detail: India’s energy transition is entering a new and complex phase, where implementation challenges are taking center stage. The focus is shifting from setting renewable capacity targets to executing large-scale projects, modernizing grids, and integrating low-carbon solutions into a system historically dominated by fossil fuels. This phase demands careful coordination across technology, policy, and financial domains.

Carbon pricing mechanisms are beginning to influence energy investment decisions. By assigning a cost to carbon emissions, policymakers aim to create financial incentives for low-carbon technologies, encouraging utilities, industrial players, and investors to prioritize renewable generation, energy efficiency, and decarbonization in long-term planning.
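
The mechanism described above reduces to simple arithmetic: a carbon price adds a per-MWh cost proportional to a generator’s emissions intensity, which can flip the cost ranking between fossil and renewable generation. The sketch below is illustrative only; the variable costs, emissions factors, and carbon prices are assumed round numbers, not Indian market figures.

```python
# Illustrative only: how a carbon price shifts generation economics.
# All numbers are assumed placeholders, not actual market data.

def effective_cost(variable_cost_per_mwh, tco2_per_mwh, carbon_price_per_tco2):
    """Variable cost of a generator per MWh, including its carbon cost."""
    return variable_cost_per_mwh + tco2_per_mwh * carbon_price_per_tco2

# Assumed round numbers (USD/MWh variable cost, tCO2 emitted per MWh):
coal = dict(variable_cost_per_mwh=40.0, tco2_per_mwh=1.0)
solar = dict(variable_cost_per_mwh=45.0, tco2_per_mwh=0.0)

for price in (0, 10, 25):
    c = effective_cost(**coal, carbon_price_per_tco2=price)
    s = effective_cost(**solar, carbon_price_per_tco2=price)
    print(f"carbon price ${price}/tCO2: coal ${c:.0f}/MWh, solar ${s:.0f}/MWh")
```

With these assumed inputs, coal is cheaper at a zero carbon price, but the ranking reverses somewhere below $10/tCO2, which is the investment-signal effect the article describes.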

Grid reforms are another critical factor reshaping India’s energy landscape. Enhancing transmission infrastructure, introducing digital monitoring, and implementing flexible market mechanisms are essential to handle the variability of renewable power, ensure system stability, and facilitate efficient power flows between regions with uneven generation and demand profiles.

The interplay of carbon pricing and grid reforms is also influencing private sector investment. Investors are increasingly evaluating projects not just on capacity or returns but on carbon impact, regulatory certainty, and the ability to integrate with modernized grid systems, resulting in a more nuanced and sophisticated decision-making process.

Renewable energy integration is now accompanied by operational complexities. High shares of solar, wind, and distributed generation require balancing mechanisms, storage solutions, and responsive market designs. This demands both technical upgrades and strategic planning to maintain reliability while meeting ambitious decarbonization targets.

Financial structures are evolving to match this complexity. New instruments such as green bonds, sustainability-linked loans, and hybrid financing are becoming key enablers for large renewable and storage projects, helping investors manage risk while aligning capital allocation with environmental and policy objectives.

Policy coherence is critical in this execution phase. Consistent regulations around tariffs, grid access, carbon pricing, and renewable incentives are essential to provide clarity for developers, reduce delays, and ensure that capital flows into projects that advance India’s energy transition efficiently and effectively.

The execution phase also highlights the importance of skill development and innovation. Grid modernization, storage deployment, and integration of emerging technologies require trained personnel, R&D investment, and operational expertise to implement complex projects safely and sustainably across the country.

Overall, India’s energy transition has moved into a phase where capacity addition is no longer sufficient. The combination of carbon pricing, grid reforms, and evolving investment frameworks is reshaping the sector. Successfully navigating this complex execution environment is critical for achieving energy security, reducing emissions, and building a sustainable and resilient energy ecosystem for the future.

  •  

Pilot Travel Centers to deploy heavy-duty EV charging stations for Tesla Semis


Truck stop operator Pilot Travel Centers has entered into an agreement with Tesla to install charging stations for Tesla’s Semi heavy-duty electric trucks.

The Tesla charging stations will be built at select Pilot locations in California, Georgia, Nevada, New Mexico and Texas, along I-5, I-10 and “several major corridors where the need for heavy-duty charging is highest.” The first sites are expected to open in Summer 2026.

Each location will host four to eight charging stalls featuring Tesla’s V4 cabinet charging technology, which can deliver up to 1.2 megawatts of power at each stall.
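
As a rough sanity check on what 1.2 megawatts per stall means in practice: Tesla has not published the Semi’s pack capacity, so the ~900 kWh figure below is an assumed, commonly cited estimate, and the charging-efficiency factor is likewise an assumption. Real sessions also taper power at high state of charge, which this ignores.

```python
# Back-of-envelope charging time at a 1.2 MW stall.
# Pack size (~900 kWh) and efficiency are assumptions, not Tesla specs.

def charge_hours(pack_kwh, charger_kw, efficiency=0.95):
    """Hours to fill an empty pack at constant power (ignores charge taper)."""
    return pack_kwh / (charger_kw * efficiency)

hours = charge_hours(pack_kwh=900, charger_kw=1200)
print(f"~{hours * 60:.0f} minutes for a full charge, assuming no taper")
```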

Pilot says that in the future, the sites may be expanded to be compatible with heavy-duty electric vehicles from other manufacturers.

“Heavy-duty charging is yet another extension of our exploration into alternative fuel offerings, and we’re happy to partner with a leader in the space that provides turnkey solutions and deploys them quickly,” said Shannon Sturgil, Senior VP, Alternative Fuels at Pilot.

Source: Pilot Travel Centers

  •  

DPI and PODTECH Partner to Scale AI Infrastructure Commissioning Across Europe, Asia, and the Middle East

Datalec Precision Installations (DPI) and PODTECH have announced a global technology partnership focused on delivering pre-staged, deployment-ready AI infrastructure solutions as hyperscaler demand drives data center vacancy rates to historic lows. With capacity tightening to 6.5% in Europe and 5.9% in the U.K., the partnership addresses a critical bottleneck in AI data center commissioning, where deployment timelines and technical complexity have become major constraints for enterprises and cloud platforms scaling GPU-intensive workloads.

The AI Infrastructure Commissioning Challenge

As hyperscalers deploy more than $600 billion in AI data center infrastructure this year, representing 75% of total capital expenditure, the focus has shifted from simply securing capacity to ensuring infrastructure is fully validated and production-ready at deployment. AI workloads demand far more than traditional data center services. NVIDIA-based AI racks require specialized expertise in NVLink fabric configuration, GPU testing, compute node initialization, dead-on-arrival (DOA) testing, site and factory acceptance testing (SAT/FAT), and network validation. These technical requirements, combined with increasingly tight deployment windows, have created demand for integrated commissioning providers capable of delivering turnkey solutions.

Integrated Capabilities Across the AI Lifecycle

The DPI-PODTECH partnership brings together complementary capabilities across the full AI infrastructure stack. DPI contributes expertise in infrastructure connectivity and mechanical systems. PODTECH adds software development, commissioning protocols, and systems integration delivered through more than 60 technical specialists across the U.K., Asia, and the Middle East. Together, the companies offer end-to-end services from pre-deployment validation through network bootstrapping, ensuring AI environments are fully operational before customer handoff.

The partnership builds on successful NVIDIA AI rack deployments for international hyperscaler programs, where both companies demonstrated the ability to manage complex, multi-site rollouts. By formalizing their collaboration, DPI and PODTECH are positioning to scale these capabilities across regions where data center capacity is most constrained and AI infrastructure demand is accelerating fastest.

Strategic Focus on High-Growth Markets

The partnership specifically targets Europe, Asia, and the Middle East, markets experiencing acute capacity constraints and surging AI investment. PODTECH’s existing presence across these regions gives the partnership immediate on-the-ground capacity to support hyperscaler and enterprise deployments. The company’s ISO 27001, ISO 9001, and ISO 20000-1 certifications provide the compliance foundation required for clients in regulated industries and public sector engagements.

Industry Perspective

“As organizations accelerate their AI adoption, the reliability and performance of the underlying infrastructure have never been more critical,” said James Bangs, technology and services director at DPI. “Building on our partnership with PODTECH, we have already delivered multiple successful deployments together, and this formal collaboration enables us to scale our capabilities globally.”

Harry Pod, founder at PODTECH, emphasized the operational benefits of the integrated model: “Following our successful collaborations with Datalec on major NVIDIA AI rack deployments, we are very proud to officially combine our capabilities. By working as one integrated delivery team, we can provide clients with packaged, pre-staged, and deployment-ready AI infrastructure solutions grounded in quality, precision, and engineering excellence.”

Looking Ahead

For enterprises and hyperscalers navigating AI infrastructure decisions in 2026, the partnership signals a shift toward specialized commissioning providers capable of managing the entire deployment lifecycle. With hyperscaler capital expenditure forecast to remain elevated through 2027 and vacancy rates showing no signs of easing, demand for integrated commissioning services is likely to intensify across DPI and PODTECH’s target markets.

Organizations evaluating AI infrastructure commissioning strategies can learn more at datalecltd.com.

The post DPI and PODTECH Partner to Scale AI Infrastructure Commissioning Across Europe, Asia, and the Middle East appeared first on Data Center POST.

  •  

2025 in Review: Sabey’s Biggest Milestones and What They Mean

Originally posted on Sabey Data Centers.

At Sabey Data Centers, progress is more than a series of headlines. It’s a blueprint for what’s possible when infrastructure, efficiency and stewardship go hand in hand. From our award-winning sustainability initiatives to bold new campus designs and record-setting expansions, each milestone this year has demonstrated our commitment to powering tomorrow’s workloads with conscience and confidence.

With 2026 already well underway and promising to be a banner year, we wanted to pause and reflect on the path we forged in 2025.

Capacity expansion: built for growth 

In 2025, we announced strategic power expansions across our Pacific Northwest, Ashburn, Austin and Quincy locations. In Seattle and Columbia, 30MW of new power is now anticipated to come online by 2027, enabling tenants to scale quickly while leveraging ultra-efficient energy (carbon-free in Seattle) and a regional average PUE as low as 1.2.

On the East Coast, Ashburn’s third and final building broke ground, set to introduce 54MW of additional capacity, with the first tranches coming online in 2026. This will be the first three-story facility in Sabey’s portfolio, purpose-built for air-cooled, liquid-cooled and hybrid deployments, with rack densities over 100kW and an average PUE of 1.35. In both regions, Sabey’s expansion balances hyperscale demand with customization, modular scale and resilient connectivity.
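
PUE (power usage effectiveness) is the ratio of total facility energy to IT equipment energy, so the quoted 1.2 and 1.35 figures directly imply how much power goes to cooling, distribution, and other overhead. The sketch below uses the PUE values from this article; the 10 MW IT load is an assumed example, not a Sabey figure.

```python
# PUE = total facility power / IT equipment power.
# The 10 MW IT load is an assumed example value.

def overhead_kw(it_load_kw, pue):
    """Non-IT load (cooling, power distribution, lighting) implied by a PUE."""
    return it_load_kw * pue - it_load_kw

it_load = 10_000  # assumed 10 MW of IT load, in kW
for pue in (1.2, 1.35):
    print(f"PUE {pue}: {overhead_kw(it_load, pue):,.0f} kW of overhead")
```

On the same assumed IT load, the difference between PUE 1.2 and 1.35 is roughly 1.5 MW of continuous overhead, which is why small PUE differences matter at campus scale.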

The launch of construction for Austin Building B was another major milestone, expanding our presence in the dynamic Round Rock tech corridor. This three-story, liquid-cooling-ready facility is designed to deliver 54 megawatts of total power capacity. Building B continues our commitment to scalable, energy-efficient digital infrastructure, tailored for enterprise and hyperscale workloads.

To continue reading, please click here.

  •  

The Data Center Boom Is Concentrated in the U.S.



If a data center is moving in next door, you probably live in the United States. More than half of all upcoming global data centers—as indicated by land purchased for data centers not yet announced, those under construction, and those whose plans are public—will be developed in the United States.

And these figures are likely underselling the near-term data-center dominance of the United States. Power usage varies widely among data centers, depending on land availability and whether the facility will provide dedicated AI or mixed-use services, says Tom Wilson, who studies energy systems at the Electric Power Research Institute. Because of these factors, “data centers in the U.S. are much larger on average than data centers in other countries,” he says.


Wilson adds that the dataset you see here—which comes from the analysis firm Data Center Map—may undercount new Chinese data centers because they are often not announced publicly. Chinese data-center plans are “just not in the repository of information used to collect data on other parts of the world,” he says. If information about China were up-to-date, he would still expect to see “the U.S. ahead, China somewhat behind, and then the rest of the world trailing.”

One thing that worries Wilson is whether the U.S. power grid can meet the rising energy demands of these data centers. “We’ve had flat demand for basically two decades, and now we want to grow. It’s a big system to grow,” he notes.

He thinks the best solution is asking data centers to be more flexible in their power use, maybe by scheduling complex computation for off-peak times or maintaining on-site batteries, removing part of the burden from the power grid. Whether such measures will be enough to keep up with demand remains an open question.

  •  

Data Centers Look to Old Airplane Engines for Power



Data-center developers are running into a severe power bottleneck as they rush to build bigger facilities to capitalize on generative AI’s potential. Normally, they would power these centers by connecting to the grid or building a power plant onsite. However, they face major delays in either securing gas turbines or in obtaining energy from the grid.

At the Data Center World Power show in San Antonio in October, natural-gas power provider ProEnergy revealed an alternative—repurposed aviation engines. According to Landon Tessmer, vice president of commercial operations at ProEnergy, some data centers are using his company’s PE6000 gas turbines to provide the power needed during the data center’s construction and during its first few years of operation. When grid power is available, these machines either revert to a backup role, supplement the grid, or are sold to the local utility.

“We have sold 21 gas turbines for two data-center projects amounting to more than 1 gigawatt,” says Tessmer. “Both projects are expected to provide bridging power for five to seven years, which is when they expect to have grid interconnection and no longer need permanent behind-the-meter generation.”

Bridging Power Gaps With a New Kind of Aeroderivative Turbine

It is a common and long-established practice for gas-turbine original equipment manufacturers (OEMs) like GE Vernova and Siemens Energy to convert a successful aircraft engine for stationary electric-power generation applications. Known as aeroderivative gas turbines, these machines have carved out a niche for themselves because they’re lighter, smaller, and more easily maintained than traditional heavy-frame gas turbines.

“It takes a lot to industrialize an aviation engine and make it generate power,” says Mark Axford, President of Axford Turbine Consultants and a valuation expert for used turbines.

For example, GE Vernova’s LM6000 gas turbine was derived from GE’s successful CF6-80C2 turbofan engine, which was widely used on commercial jets. The CF6-80C2 was first released in 1985, and the LM6000 appeared on the market five years later. To make it suitable for power generation, it needed an expanded turbine section to convert engine thrust into shaft power, a series of struts and supports to mount it on a concrete deck or steel frame, and new controls. Further modifications typically include the development of fuel nozzles that let the machine run on natural gas rather than aviation fuel, and a combustor that minimizes the emission of nitrogen oxides, a major pollutant.

“There just aren’t enough gas turbines to go around and the problem is probably going to get worse,” says Paul Browning, CEO of Generative Power Solutions, formerly the head of GE Power & Water (now GE Vernova) and Mitsubishi Power. Contact GE Vernova to order an LM6000 today and you might be told the waiting list is anywhere from three to five years. You’d hear the same from Siemens Energy for its SGT-A35 aeroderivative gas turbine. Some large, popular models have even longer waiting lists.

For contrast, “a PE6000 from ProEnergy can be delivered in 2027,” Tessmer says.

Landon Tessmer, ProEnergy’s vice president of commercial operations, spoke at the Data Center World Power conference in October 2025. (Photo: Data Center World Power)

Converted Turbofan Aircraft Engine Can Provide 48 Megawatts

ProEnergy buys and overhauls used CF6-80C2 engine cores—the central part of the engine where combustion occurs—and matches them with newly manufactured aeroderivative parts made either by ProEnergy or its partners. After assembly and testing, these refurbished engines are ready for a second life in electric-power generation, where they provide 48 megawatts, enough to power a small-to-medium data center (or a town of perhaps 20,000 to 40,000 households). According to Tessmer, approximately 1,000 of these aircraft engines are expected to be retired over the next decade, so there’s no shortage of them. A large data center may have demand that exceeds 100 MW, and some of the latest data centers being designed for AI are more than 1 GW.
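
The households comparison above is easy to verify: dividing the 48 MW output by the article’s 20,000-to-40,000-household range gives the implied average load per household. Both figures come from the article; nothing else is assumed.

```python
# Sanity check on "48 MW powers roughly 20,000-40,000 households":
# the implied average load per household.

def avg_household_kw(plant_mw, households):
    """Average load per household, in kW, if one plant serves them all."""
    return plant_mw * 1000 / households

low = avg_household_kw(48, 40_000)
high = avg_household_kw(48, 20_000)
print(f"{low:.1f}-{high:.1f} kW average load per household")
```

The result, 1.2 to 2.4 kW of average load per household, is in the plausible range for many regions, so the comparison holds up.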

An overhaul returns an engine and its components to as-new condition. Each of its thousands of parts is disassembled, cleaned, inspected, and then repaired or replaced as needed. In this way, the engine is renewed for another long cycle of run time. Apart from the engine core, every part inside the PE6000 turbine is manufactured to ProEnergy’s specifications. “We can overhaul the high-pressure core of any CF6-80C2 and fabricate all the low-pressure components,” Tessmer adds.

ProEnergy sells two-turbine blocks in a standard configuration consisting of gas turbines, generators, and a host of other gear: systems that cool the air entering the turbine on hot days to boost performance, selective catalytic reduction systems to cut emissions, and various electrical systems. The company focuses solely on one engine, the CF6-80C2, to streamline and simplify engineering and maintenance.

The PE6000 was originally intended for utilities that needed more capacity during peak hours. The data-center boom has turned that expectation on its head—data-center operators want these engines to power the entire facility. They run on natural gas and can go from startup to full output in 5 minutes. If one needs maintenance, it can be swapped out for a spare within 72 hours. Emissions average 2.5 parts per million of nitrogen oxides, well below EPA-regulated levels (generally 10 to 25 parts per million, depending on the use case). Since 2020, ProEnergy has fabricated 75 PE6000 packages and has another 52 being assembled or on order.

Lengthy Grid-Connection Delays Mean More Business

Multiple factors contribute to this popularity. Besides the surge in data centers, there’s often a lengthy wait for transmission lines, which may face local opposition and require permits from multiple municipalities or states. “Aeroderivative gas turbines are gaining ground as a bridging technology that runs behind the meter until the utility is able to supply grid power,” says Tessmer.

Tessmer has seen examples of eight-to-ten-year delays on permitting alone. If connecting to the grid continues to take years, at least in some areas, and if gas turbine manufacturers don’t dramatically boost output, bridging power could become an indispensable enabler of the buildout of AI infrastructure.

  •  

Duos Edge AI Earns PTC’26 Innovation Honor

Duos Technologies Group Inc. (Nasdaq: DUOT), through its operating subsidiary Duos Edge AI, Inc., received the Outstanding Innovation Award at the Pacific Telecommunications Conference 2026 (PTC’26). This honor recognizes Duos Edge AI’s leadership in modular Edge Data Center (EDC) solutions that boost efficiency, scalability, security, and customer experience.

Duos Edge AI’s capital-efficient model supports rapid 90-day installations and scalable growth tailored to regional sectors such as education, healthcare, and municipal services. High-availability designs deliver up to 100 kW+ per cabinet with resilient, 24/7 operations positioned within 12 miles of end users for minimal latency.
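
The 12-mile radius translates directly into propagation delay. The sketch below uses the standard rule of thumb that light in optical fiber covers about 1 km every 5 microseconds (an assumption; switching, queuing, and processing delays are ignored), which puts the fiber round trip well under a millisecond.

```python
# Round-trip fiber propagation delay for the article's 12-mile radius.
# The ~5 us/km figure is a standard rule of thumb (light at ~2/3 c in fiber);
# real end-to-end latency adds switching and processing time on top.

US_PER_KM = 5.0          # one-way propagation, microseconds per km (assumed)
KM_PER_MILE = 1.60934

def round_trip_ms(miles):
    """Round-trip fiber propagation delay in milliseconds, propagation only."""
    return miles * KM_PER_MILE * US_PER_KM * 2 / 1000

print(f"12 mi round trip: ~{round_trip_ms(12):.2f} ms of propagation delay")
```

By contrast, a cloud region several hundred miles away adds multiple milliseconds of propagation alone, which is the gap edge placement is meant to close.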

“This recognition from Pacific Telecommunications Council (PTC) is a meaningful validation of our strategy and execution,” said Doug Recker, President of Duos and Founder of Duos Edge AI. “Our mission has been to bring secure, low-latency digital infrastructure directly to communities that need it most. By deploying edge data centers where people live, learn, and work, we’re helping close the digital divide while building a scalable platform aligned with long-term growth and shareholder value.”

The award spotlights Duos Edge AI’s patented modular EDCs deployed in underserved communities for low-latency, enterprise-grade infrastructure. These centers enable real-time AI processing, telemedicine, digital learning, and carrier-neutral connectivity without reliance on distant cloud facilities.

Duos Edge AI credits partners such as the Texas Region 16 and Region 3 Education Service Centers, Dumas ISD, and local leaders embracing localized technology for digital equity.

To learn more about Duos Edge AI, visit www.duosedge.ai.

  •  

Sabey and JetCool Push Liquid Cooling from Pilot to Standard Practice

As AI and high‑performance computing (HPC) workloads strain traditional air‑cooled data centers, Sabey Data Centers is expanding its partnership with JetCool Technologies to make direct‑to‑chip liquid cooling a standard option across its U.S. portfolio. The move signals how multi‑tenant operators are shifting from experimental deployments to programmatic strategies for high‑density, energy‑efficient infrastructure.

Sabey, one of the largest privately held multi‑tenant data center providers in the United States, first teamed with JetCool in 2023 to test direct‑to‑chip cooling in production environments. Those early deployments reported 13.5% server power savings compared with air‑cooled alternatives, while supporting dense AI and HPC racks without heavy reliance on traditional mechanical systems.
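
The reported 13.5% server power saving compounds over a year of continuous operation. In the sketch below, the 13.5% figure is from the article, while the 50 kW rack load and $0.08/kWh electricity price are illustrative assumptions.

```python
# What a 13.5% server power saving could mean in annual cost terms.
# The 13.5% is the article's reported figure; the rack power and
# electricity price are assumed for illustration.

def annual_savings_usd(server_kw, savings_frac, usd_per_kwh, hours=8760):
    """Annual cost avoided by a fractional power reduction on one load."""
    return server_kw * savings_frac * hours * usd_per_kwh

per_rack = annual_savings_usd(server_kw=50, savings_frac=0.135, usd_per_kwh=0.08)
print(f"assumed 50 kW rack: ~${per_rack:,.0f} saved per year")
```

Across hundreds of racks, savings at this assumed scale reach into the millions of dollars per year, before counting the reduced cooling overhead.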

The new phase of the collaboration is less about proving the technology and more about scale. Sabey and JetCool are now working to simplify how customers adopt liquid cooling by turning what had been bespoke engineering work into repeatable designs that can be deployed across multiple sites. The goal is to give enterprises and cloud platforms a predictable path to high‑density infrastructure that balances performance, efficiency and operational risk.

A core element of that approach is a set of modular cooling architectures developed with Dell Technologies for select PowerEdge GPU‑based servers. By closely integrating server hardware and direct‑to‑chip liquid cooling, the partners aim to deliver pre‑validated building blocks for AI and HPC clusters, rather than starting from scratch with each project. The design includes unified warranty coverage for both the servers and the cooling system, an assurance that Sabey says is key for customers wary of fragmented support models.

The expanded alliance sits inside Sabey’s broader liquid cooling partnership program, an initiative that aggregates multiple thermal management providers under one framework. Instead of backing a single technology, Sabey is positioning itself as a curator of proven, ready‑to‑integrate cooling options that map to varying density targets and sustainability goals. For IT and facilities teams under pressure to scale GPU‑rich deployments, that structure promises clearer design patterns and faster time to production.

Executives at both companies frame the partnership as a response to converging pressures: soaring compute demand, tightening efficiency requirements and growing scrutiny of data center energy use. Direct‑to‑chip liquid cooling has emerged as one of the more practical levers for improving thermal performance at the rack level, particularly in environments where power and floor space are limited but performance expectations are not.

For Sabey, formalizing JetCool’s technology as a standard, warranty‑backed option is part of a broader message to customers: liquid cooling is no longer a niche or one‑off feature, but an embedded part of the company’s roadmap for AI‑era infrastructure. Organizations evaluating their own cooling strategies can find the full announcement here.

  •  

Building the Digital Foundation for the AI Era: DataBank’s Vision for Scalable Infrastructure

Data Center POST connected with Raul K. Martynek, Chief Executive Officer of DataBank Holdings, Ltd., ahead of PTC’26. Martynek joined DataBank in 2017 and brings more than three decades of leadership experience across telecommunications, Internet infrastructure, and data center operations. His background includes senior executive roles at Net Access, Voxel dot Net, Smart Telecom, and advisory positions with DigitalBridge and Plainfield Asset Management. Under his leadership, DataBank has expanded its national footprint, strengthened its interconnection ecosystems, and positioned its platform to support AI-ready, high-density workloads across enterprise, cloud, and edge environments. In the Q&A below, Martynek shares his perspective on the challenges shaping global digital infrastructure and how DataBank is preparing customers for the next phase of AI-driven growth.

Data Center Post (DCP) Question: What does your company do?  

Raul Martynek (RM) Answer: DataBank helps the world’s largest enterprises, technology, and content providers ensure their data and applications are always on, always secure, always compliant, and ready to scale to meet the needs of the artificial intelligence era.

DCP Q: What problems does your company solve in the market?

RM A: DataBank addresses a broad set of challenges enterprises face when managing critical infrastructure. Reliability and uptime are foundational, as downtime can severely impact revenue and customer trust. We also help organizations meet security and compliance requirements without having to build costly internal expertise. Our platform allows customers to scale infrastructure without large capital expenditures by shifting to an operating expense model. In addition, we provide managed expertise that frees internal teams to focus on strategic priorities, simplify hybrid IT and cloud integration, improve latency for distributed and edge workloads, strengthen cybersecurity posture, and mitigate talent and resource constraints.

DCP Q: What are your company’s core products or services?

RM A: Data center colocation, interconnection, enterprise cloud, compliance enablement, and data protection, all powered by expert, human support.

DCP Q: What markets do you serve?

RM A: DataBank serves customers across a broad geographic footprint in the United States and Europe. In the western United States, the company operates in key markets including Irvine, Los Angeles, and Silicon Valley in California, as well as Las Vegas, Salt Lake City, and Seattle. Its central U.S. presence includes Chicago, Denver, Indianapolis, and Kansas City. In the southern region, DataBank supports customers in Atlanta, Austin, Dallas, Houston, Miami, and Waco. Along the East Coast and Midwest, the company operates in markets such as Boston, Cleveland, New Jersey, New York City, Philadelphia, and Pittsburgh. Internationally, DataBank also serves customers in the United Kingdom.

DCP Q: What challenges does the global digital infrastructure industry face today?

RM A: The industry is facing a convergence of challenges, including power availability and grid constraints, sustainability and carbon reduction requirements, cooling demands for high-density AI and HPC workloads, supply chain pressures, land acquisition and zoning issues, and increasing interconnection complexity. At the same time, organizations must contend with talent shortages and rising cybersecurity risks, all while supporting rapidly expanding digital workloads.

DCP Q: How is your company adapting to these challenges?

RM A: We are building in markets with available power headroom and designing scalable power blocks to support future growth. Our facilities are being prepared for AI-era density with liquid-ready designs and more efficient cooling strategies. Sustainability remains a priority, with a focus on lowering energy and water usage. We are standardizing construction to improve efficiency and flexibility while expanding interconnection ecosystems such as DE-CIX. Additionally, our managed services help fill enterprise talent gaps, and we continue to invest in operational excellence, security, and company culture.

DCP Q: What are your company’s key differentiators?

RM A: DataBank differentiates itself through strong engineering and operational management, future-ready platforms, and deep compliance expertise. Our geographic focus allows us to serve customers where they need infrastructure most, while our managed services provide visibility and control across complex environments. We are also supported by patient, long-term investors, enabling disciplined growth and sustained investment.

DCP Q: What can we expect to see/hear from your company in the future?  

RM A: Customers can expect continued commitment to enterprise IT infrastructure alongside expanded AI-ready platforms. We are growing our interconnection ecosystems, advancing sustainability initiatives, modernizing key campuses, and expanding managed and hybrid IT services. Enhancing security, compliance, and customer success will remain central, as will our focus on talent and culture.

DCP Q: What upcoming industry events will you be attending? 

RM A: AI Tinkers; Metro Connect; ATC CEO Summit; MIMSS 26; DCD>Connect 2026; ITW 2026; 7×24 Cloud Run Community Festival; CBRE Digital Infrastructure Summit 2026; AI Infra Conference; TMT M&A Forum; MegaPort Connect; TAG Data Center Summit; Supercomputing 2026; Incompany; DE-CIX Dallas Olde World Holiday Market

DCP Q: Do you have any recent news you would like us to highlight?

RM A: DataBank has recently announced several milestones that underscore its continued growth and long-term strategy. The company expanded its financing vehicle to $1.6 billion to support the next phase of platform expansion and infrastructure investment. DataBank also released new research showing that 60 percent of enterprises are already seeing a return on investment from AI initiatives or expect to within the next 12 months, highlighting the accelerating business impact of AI adoption. In addition, DataBank introduced a company-wide employee ownership program, reinforcing its commitment to culture, alignment, and long-term value creation across the organization.

DCP Q: Is there anything else you would like our readers to know about your company and capabilities?

RM A: DataBank is building the digital foundation for the AI, cloud, and connected-device era. Its national footprint of data centers delivers secure, high-density colocation, interconnection, and managed services that help enterprises deploy mission-critical workloads with confidence.

We are designing for the future with liquid-cooling capabilities, campus modernization, and expanded interconnection ecosystems. We are equally committed to responsible digital infrastructure: improving efficiency, reducing water use, strengthening security, and advancing compliance.

Above all, DataBank is a trusted infrastructure partner, providing the expertise and operational support organizations need to scale reliably and securely.

DCP Q: Where can our readers learn more about your company?  

RM A: www.databank.com

DCP Q: How can our readers contact your company? 

RM A: www.databank.com/contact-us

To learn more about PTC’26, please visit www.ptc.org/ptc26. The event takes place January 18-21, 2026 in Honolulu, HI.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com.

Stay in the know! Subscribe to Data Center POST today.

# # #

About Data Center POST

Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed and online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.

Data Center POST works hard to get the most current information and thought-provoking ideas most apt to add relevance to the success of the data center industry. Stay informed, visit www.datacenterpost.com.

ADVERTISE | CONTRIBUTE | SUBSCRIBE

The post Building the Digital Foundation for the AI Era: DataBank’s Vision for Scalable Infrastructure appeared first on Data Center POST.


Power Should Serve Compute: Airsys’ Energy-First Approach to Data Center Cooling

Data Center POST had the opportunity to connect with Paul Quigley, President of Airsys Cooling Technologies, Inc., ahead of PTC’26. With more than 36 years of experience spanning HVAC contracting, distribution, and executive leadership in manufacturing, Quigley is widely recognized for his work in large-scale, high-demand cooling environments. His career includes co-designing one of the largest VRF projects in North America and leading complex cooling initiatives across the United States. Since joining Airsys in 2021, Quigley has focused on advancing precision and liquid-assisted cooling technologies to help data center operators address power constraints driven by AI, high-density compute, and rapid digital infrastructure growth. In the Q&A below, Quigley shares his perspective on the challenges facing the global digital infrastructure industry and how Airsys is helping operators turn cooling efficiency into usable compute power.

Data Center Post (DCP) Question: What does your company do?  

Paul Quigley (PQ) Answer: Airsys is an energy-focused cooling solutions company founded on a simple idea: power should serve computation, not be lost protecting it. Every system we build returns power back to the data center where it belongs. Our innovative, award-winning technologies give operators more usable power for compute, improving capacity, resilience, and overall profitability.

DCP Q: What problems does your company solve in the market?

PQ A: Airsys helps data centers recover power lost to cooling. As AI clusters push energy systems past their limits, operators are forced to choose between compute, capacity, and cost. Our cooling technologies solve that problem by returning power back to the data center, reducing stranded capacity, and giving operators more usable power for compute.

DCP Q: What are your company’s core products or services?

PQ A: We design and build the world’s most advanced cooling solutions for data centers, including:

  • LiquidRack spray-cooling and liquid-assisted rack systems
  • High-efficiency DX and chilled-water cooling systems
  • Flooded-evaporator chillers (CritiCool-X)
  • Indoor and outdoor precision cooling systems
  • Edge, modular, and containerized data-center cooling
  • Control systems, energy-optimization tools, and PCE/ROIP performance frameworks

DCP Q: What markets do you serve?

PQ A:

  • Hyperscale and AI compute environments
  • Colocation and enterprise data centers
  • Modular and prefabricated data centers
  • Edge and telecom infrastructure
  • Education, industrial, government, and defense applications requiring mission-critical cooling

DCP Q: What challenges does the global digital infrastructure industry face today?

PQ A: The industry now operates in an environment of rapid compute expansion and structural power scarcity. Grid limitations, long construction timelines, inefficient legacy cooling, land constraints, and the explosive rise of AI workloads have created an energy bottleneck. Data centers no longer struggle with space; they struggle with power.

DCP Q: How is your company adapting to these challenges?

PQ A: Airsys has always shifted the conversation from cooling to energy stewardship. Every system we design focuses on reducing cooling overhead and returning power to computation. Our LiquidRack spray-cooling platform, low-lift CritiCool-X chillers, and high-efficiency DX systems enable operators to deploy more compute with less infrastructure. We’re also advancing new metrics – PCE and ROIP – to help operators quantify the financial value of returned power.

DCP Q: What are your company’s key differentiators?

PQ A:

  • Energy-first design philosophy — our systems return power to compute
  • Rapid delivery and global manufacturing — critical in today’s supply-strained market
  • LiquidRack spray cooling — enabling high-density AI clusters without stranded power
  • Flooded-evaporator chiller technology — high efficiency, low lift, faster deployment
  • End-to-end portfolio — DX, chilled water, liquid-assisted cooling, modular systems
  • Practical engineering — simple, reliable, maintainable designs
  • PCE / ROIP frameworks — financial and operational tools that change how operators evaluate cooling impact

DCP Q: What can we expect to see/hear from your company in the future?  

PQ A: You will see a major expansion of our LiquidRack platform, new ultra-efficient chiller technologies, deeper integration of PCE/ROIP metrics, and broader support for modular and edge deployments. We are continuing to push innovation toward one goal: giving operators more usable power for compute.

DCP Q: What upcoming industry events will you be attending? 

PQ A: Events Airsys is considering for 2026:

PTC’26 Hawaii; Escape the Cold Aisle Phoenix; Advancing DC Construction West; DCD>Connect New York; Data Center World DC; World of Modular Las Vegas; NSPMA; 7×24 Exchange Spring Orlando; Data Center Nation Toronto; Datacloud USA Austin; AI Infra Summit Santa Clara; DCW Power Dallas; Yotta Las Vegas; 7×24 Exchange Fall San Antonio; PTC DC; DCD>Connect Virginia; DCAC Austin; Supercomputing; Gartner IOCS; NVIDIA GTC; OCP Global Summit.

DCP Q: Do you have any recent news you would like us to highlight?

PQ A: Yes, Airsys has recently expanded its high-density cooling portfolio with several major advancements.

More announcements are planned for early 2026 as Airsys continues to expand its advanced cooling portfolio for high-density compute environments.

DCP Q: Is there anything else you would like our readers to know about your company and capabilities?

PQ A: Airsys is built on a simple idea: power should serve computation, not be lost protecting it. Our mission is to help operators turn cooling losses into compute gains and to make energy stewardship a competitive advantage in the AI era.

DCP Q: Where can our readers learn more about your company?  

PQ A: www.airsysnorthamerica.com

DCP Q: How can our readers contact your company? 

PQ A: www.airsysnorthamerica.com/contact


The post Power Should Serve Compute: Airsys’ Energy-First Approach to Data Center Cooling appeared first on Data Center POST.


AI Data Center Market to Surpass USD 1.98 Trillion by 2034

The global AI data center market was valued at USD 98.2 billion in 2024 and is estimated to grow at a CAGR of 35.5% to reach USD 1.98 trillion by 2034, according to a recent report by Global Market Insights Inc.
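The headline projection follows directly from compound annual growth: the 2034 value equals the 2024 value multiplied by (1 + CAGR) raised to the number of years. A minimal Python sketch using the report's rounded figures shows the math; compounding the stated 35.5% rate lands near USD 2.0 trillion, with the small gap from the published USD 1.98 trillion attributable to rounding in the reported CAGR.

```python
# Compound-annual-growth projection using the report's rounded figures.
base_2024_bn = 98.2        # 2024 market size, USD billion (per the report)
cagr = 0.355               # 35.5% compound annual growth rate
years = 2034 - 2024        # ten compounding periods

projected_bn = base_2024_bn * (1 + cagr) ** years
print(f"Projected 2034 size: USD {projected_bn / 1000:.2f} trillion")
```

Running this prints roughly USD 2.05 trillion, confirming the order of magnitude of the report's estimate.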

Growing adoption of generative AI and machine learning tools requires extraordinary processing power and storage capabilities, increasing reliance on data centers specifically optimized for AI workloads. These environments depend on advanced GPUs, scalable system architecture, and ultra-low-latency networking to support complex model training and inference across industries such as finance, healthcare, and retail. Big data analytics is also accelerating demand, as organizations handle massive streams of structured and unstructured information that must be processed rapidly.

AI-focused facilities enable high-performance computing for real-time workloads, strengthening their role as essential infrastructure for global digital transformation. The rapid expansion of cloud computing, along with the rising number of hyperscale facilities, continues to amplify the need for AI-ready infrastructures. Providers are investing in advanced AI data platforms that offer scalable services to enterprises and developers, further increasing market momentum.

The hardware segment of the AI data center market accounted for USD 61.1 billion in 2024. Growth is driven by expanding use of AI chips, GPU accelerators, advanced cooling technologies, high-density server systems, and optical networking solutions. Rising GPU energy requirements, the shift toward rack densities between 30 kW and 120 kW, and large-scale deployment strategies introduced by leading technology companies are shaping long-term capital allocation in the sector.

The cloud-based category held a 58% share in 2024 and is projected to grow at a CAGR of 35.2% from 2025 through 2034. This segment leads due to its unmatched scalability, flexible consumption options, and access to the latest AI-accelerated computing hardware without upfront investment. Hyperscale providers are making multi-billion-dollar commitments to strengthen global AI infrastructures, propelling adoption of AI-driven services and increasing demand for GPUs, TPUs, and specialized processors.

The U.S. AI data center market generated USD 33.2 billion in 2024. The country maintains a leading position supported by prominent hyperscale operators and substantial investments in GPU clusters, liquid cooling, and large-scale AI-aligned builds. Federal incentives, regional tax advantages, and infrastructure funding have further solidified the United States as the most capacity-rich region for AI computing.

Key participants in the AI data center market include Huawei, AWS, NVIDIA, HPE, Digital Realty, Google, Lenovo, Microsoft, Equinix, and Dell Technologies. Companies expanding their foothold in the market are focusing on infrastructure modernization, large-scale GPU deployments, and energy-efficient system design.

Many firms are investing in high-density racks, integrated liquid cooling, and next-generation networking to support advanced AI workloads. Strategic partnerships with chipmakers, cloud providers, and colocation operators help accelerate capacity expansion and ensure access to cutting-edge AI hardware. Providers are also scaling global data center footprints, enhancing automation capabilities, and optimizing power utilization through renewable-energy integration. Long-term contracts with enterprises, AI-as-a-service offerings, and the buildout of specialized AI clusters further reinforce competitive positioning and market dominance.

The post AI Data Center Market to Surpass USD 1.98 Trillion by 2034 appeared first on Data Center POST.


Issues Data Centers Face and How to Overcome Them: A Guide for Managers

Data centers are the backbone of modern digital infrastructure. They power cloud services, financial systems, healthcare platforms, and nearly every technology-driven business. As demand for data storage and processing grows, so do the operational, financial, and risk-related challenges data centers face. For managers, understanding these issues and knowing how to address them proactively is critical to maintaining uptime, security, and long-term viability. Here is what you need to know to keep pace with demand.

Rising Energy Costs and Efficiency Demands

One of the most persistent challenges for data centers is energy consumption. Powering servers, cooling systems, and redundancy infrastructure requires enormous amounts of electricity, and energy costs continue to rise globally. Beyond cost, there is increasing pressure from regulators and customers to reduce environmental impact.

Managers can address this by investing in energy-efficient hardware, optimizing airflow and cooling layouts, and adopting real-time monitoring tools to identify waste. Long-term strategies may include transitioning to renewable energy sources or partnering with utility providers for more favorable pricing structures.

Cooling and Thermal Management Challenges

Heat is an unavoidable byproduct of high-density computing. Inefficient cooling not only increases costs but also raises the risk of equipment failure and downtime. As server densities increase, traditional cooling methods often struggle to keep up.

Modern solutions include hot-aisle/cold-aisle containment, liquid cooling, and AI-driven thermal monitoring systems. For managers, the key is treating cooling as a dynamic system that evolves alongside hardware demands, rather than as fixed infrastructure.

Financial Risk and Insurance Considerations

Data centers face significant financial exposure from equipment damage, downtime, liability claims, and unforeseen events. Even with strong operational controls, risk cannot be eliminated entirely.

This is where insurance becomes a critical part of risk management. Evaluating coverage that aligns with the unique needs of data center operations can help protect against losses that would otherwise threaten business continuity. BOP insurance by Next Insurance can help managers think more holistically about protecting assets, operations, and revenue streams as part of an overall risk strategy.

Downtime and Business Continuity Risks

Even brief outages can result in significant financial losses, reputational damage, and contractual penalties. This downtime may be caused by:

  • Power failures
  • Human error
  • Equipment malfunction
  • External events

To mitigate this risk, managers should prioritize redundancy at every critical point, including power supplies, network connections, and backup systems. Regular testing of disaster recovery plans is just as important as documenting them: a plan that hasn't been tested is often unreliable in real-world conditions, putting both the business and the customers it serves at risk.

Cybersecurity and Physical Security Threats

Data centers face a dual security challenge: digital threats and physical risks. Cyberattacks continue to grow in sophistication, while physical threats such as unauthorized access, theft, or vandalism remain real concerns.

Addressing this requires layered security. On the digital side, this includes continuous patching, network segmentation, and monitoring for unusual activity. Physically, access controls, surveillance systems, and strict visitor protocols are essential. Managers should also ensure that staff training keeps pace with evolving threats, as human error remains a major vulnerability.

Compliance and Regulatory Pressure

Data centers often operate under complex regulatory requirements related to data privacy, industry standards, and regional laws. Compliance failures can result in fines, legal exposure, and loss of customer trust.

Managers can stay ahead by maintaining clear documentation, conducting regular audits, and working closely with legal and compliance teams. Building compliance into operational processes, rather than treating it as an afterthought, reduces risk and simplifies reporting.

Turning Challenges Into Strategic Advantage

While data centers face a wide range of operational and strategic challenges, these issues also present opportunities for improvement and differentiation. Managers who address them proactively are better positioned to deliver reliability and value in a competitive market. Don't wait for these issues to catch up with your facility; take action now.

# # #

About the Author:

James Daniels is a freelance writer, business enthusiast, a bit of a tech buff, and an overall geek. He is also an avid reader, who can spend hours reading and knowing about the latest gadgets and tech, whilst offering views and opinions on these topics.

The post Issues Data Centers Face and How to Overcome Them: A Guide for Managers appeared first on Data Center POST.


Human Error in Cybersecurity and the Growing Threat to Data Centers

Cyber incidents continued to escalate throughout 2025, and they are making their presence felt ever more forcefully as we transition into 2026. The rapid evolution of novel cyber threats leaves data centers increasingly exposed to disruptions extending beyond traditional IT boundaries.

The Uptime Institute’s annual outage analysis shows that in 2024, cyber-related disruptions occurred at roughly twice the average rate seen over the previous four years. This trend aligns with findings from Honeywell’s 2025 Cyber Threat Report, which identified a sharp increase in ransomware and extortion activity targeting operational technology environments based on large-scale system data.

There are many discussions today around infrastructure complexity and attack sophistication, but it is a lesser-known reality that human error in cybersecurity remains a central factor behind many of these incidents. Routine configuration changes, access decisions, and actions taken under stress can all create conditions that allow errors to sneak in. In high-availability environments, human error often becomes the point at which otherwise contained threats begin to escalate into bigger problems.

As cyberattacks on data centers continue to grow in number, downtime carries increasingly heavy financial and reputational consequences. Addressing human error in cybersecurity means recognizing that human behavior plays a direct role in how a security architecture performs in practice. Let's take a closer look.

How Attackers Take Advantage of Human Error in Cybersecurity

Cyberattacks often exploit vulnerabilities that stem from superficial, often preventable mistakes as well as deeper, systemic issues. Human error in cybersecurity frequently arises when established procedures are not followed through consistently, creating gaps that attackers are more than eager to exploit. A delayed firmware update or an incomplete maintenance task can leave infrastructure exposed, even when the risks are already known. And even when organizations have defined policies to reduce these exposures, noncompliance or insufficient follow-through often weakens their effectiveness.

In many environments, operators are aware that parts of their IT and operational technology infrastructure carry known weaknesses, but due to a lack of time or oversight, they fail to address them consistently. Limited training also adds to the problem, especially when employees are expected to recognize and respond to social engineering techniques. Phishing, impersonation, and ransomware attacks are increasingly targeting organizations with complex supply chains and third-party dependencies, and in these situations, human error often enables the initial breach, after which attackers move laterally through systems, using minor mistakes to trigger disruptions.

Why Following Procedures is Crucial

Having policies in place doesn't always guarantee consistent follow-through. In everyday operations, teams often juggle many things at once (updates, alerts, routine maintenance), and small steps can be missed unintentionally. Even experienced staff can make these kinds of mistakes, especially when managing large or complex environments over an extended period. Gradually, these small oversights add up and leave systems exposed.

Account management works similarly. Password rules and policies for handling inactive accounts are usually well defined, but they are not always applied uniformly. Dormant accounts may go unnoticed, escape regular review, or fall behind on updates. Human error in cybersecurity often develops step by step through workload, familiarity, and everyday stress, not because of a lack of skill or awareness.

The Danger of Unknowingly Interacting With Social Engineering

Social engineering is a method of attack that uses deception and impersonation to influence people into revealing information or providing access. It relies on trust and context to make people perform actions that appear harmless and legitimate at the moment.

Impersonation attacks succeed because they mirror everyday communication very accurately. Attackers today have all the tools to impersonate colleagues, service providers, or internal support staff. A phone call from someone claiming to be part of the IT help desk can easily seem routine, especially when framed as a quick fix or standard check. Similar approaches appear in emails and messaging platforms, and the pattern is the same: urgency overrides safety.

With the various new tools available, visual deception has become very common. Employees may be directed to login pages that closely resemble internal systems and enter credentials without hesitation. Emerging techniques like AI-assisted voice or video impersonation further blur the line between legitimate requests and malicious activity, making social engineering interactions very difficult to recognize in real time.

Ignoring Security Policies and Best Practices

Security policies offer little protection if they exist only as formal documentation and are not followed consistently on the floor. Even when access procedures are defined, employees under time pressure can make undocumented exceptions. Change management rules, for example, may require peer review and approval, but urgent maintenance or capacity pressures often lead to decisions that bypass those steps.

These small deviations create gaps between how systems are supposed to be protected and how they are actually handled. When policies become situational or optional, security controls lose their purpose and reliability, leaving the infrastructure exposed, even though there’s a mature security framework in place.

When Policies Leave Room for Interpretation

Policies that lack precision introduce variability into how security controls are applied across teams and shifts. When procedures don't explicitly define how credentials should be managed on shared systems, retained login sessions or administrative access can remain in place beyond their intended scope. Similarly, if requirements for password rotation or periodic access reviews are loosely framed or undocumented, they are more likely to be deferred during routine operations.

These conditions rarely trigger immediate alerts or audit findings. However, over time, they accumulate into systemic weaknesses that expand the attack surface and increase the likelihood of successful attacks.

Best Practices That Erode in Daily Operations

Security issues often emerge through slow, incremental changes. When operational pressure increases, teams might want to rely on more informal workarounds to keep everything running. Routine best practices like updates, access reviews, and configuration standards can slip down the priority list or become sloppy in their application. Individually, all of these decisions can seem reasonable at the moment; over time, however, they do add up and dilute the established safeguards, which leaves the organization exposed even without a single clearly identifiable incident.

Overlooking Access and Offboarding Control

Ignoring best practices around access management introduces another layer of risk. Employees and third-party contractors often retain privileges beyond their active role if offboarding steps are not followed through. In the absence of clear deprovisioning rules, such as promptly disabling accounts, dormant access can linger unnoticed. These inactive accounts are rarely monitored closely enough to detect misuse or compromise.

Policy Gaps During Incident Response

The consequences of ignoring procedures become most visible when an actual cybersecurity incident occurs. When teams are forced to act quickly without clear guidance, errors start to surface. Procedures that are outdated, untested, or difficult to locate offer little support during an emergency. No policy can eliminate risk completely; however, organizations that treat procedures as living, enforceable tools are better positioned to respond effectively when an incident occurs.

A Weak Approach to Security Governance

Weak security governance often allows risks to persist unnoticed, especially when oversight from management is limited or unclear. Without clear ownership and accountability, routine tasks like applying security patches or reviewing alerts can be delayed or overlooked, leaving systems exposed. These seemingly insignificant gaps create an environment over time in which vulnerabilities are known but not actively addressed.

Training plays a very important role in closing this gap, but only when it is treated as part of governance, not as an isolated activity. Regular, structured training helps employees develop a habit of verification and reinforces the checks and balances defined by organizational policies. To remain effective, training has to evolve in tandem with the threat landscape. Employees need ongoing exposure to emerging attack techniques and practical guidance on how to recognize and respond to them within their daily workflows. Aligned governance and training position organizations to reduce risk driven by human factors.

Understanding the Stakes

Human error in cybersecurity is often discussed as a collection of isolated missteps, but in reality, it reflects how people operate within complex systems under constant pressure.

In data center environments, these errors rarely occur as isolated events; they are influenced by interconnected processes, tight timelines, and attackers who deliberately exploit trust, familiarity, and routine behavior. Viewed from this angle, human error reveals not just individual mistakes but how risks develop across an organization over time.

Recognizing the role of human error in cybersecurity is essential for reducing future incidents, but awareness alone is not enough. Training also plays an important role, but it cannot compensate for unclear processes, weak governance, or a culture that prioritizes speed over safety.

Data center operators have to continuously adapt their security practices and reinforce expectations through daily operations instead of treating security best practices as rigid formalities. Building a culture where employees understand how their actions influence security outcomes helps organizations respond more effectively to evolving threats and limits the conditions that allow small errors to turn into major, devastating incidents.

# # #

About the Author

Michael Zrihen is the Senior Director of Marketing & Internal Operations Manager at Volico Data Centers.

The post Human Error in Cybersecurity and the Growing Threat to Data Centers appeared first on Data Center POST.


Sabey Data Centers Taps OptiCool to Tackle High-Density Cooling

Sabey Data Centers is adding OptiCool Technologies to its growing ecosystem of advanced cooling partners, aiming to make high-density and AI-driven compute deployments more practical and energy efficient across its U.S. footprint. This move highlights how colocation providers are turning to specialized liquid and refrigerant-based solutions as rack densities outstrip the capabilities of traditional air cooling.

OptiCool is known for two-phase pumped refrigerant systems that use a non-conductive refrigerant to absorb heat through phase change at the rack level. This approach enables efficient heat removal without chilled water loops or extensive mechanical plant build-outs, which can simplify facility design and cut both capital and operating costs for data centers pushing into higher power densities.

Sabey is positioning the OptiCool alliance as part of its integrated cooling technologies partnership program, which is designed to lower barriers to liquid and alternative cooling adoption for customers. Instead of forcing enterprises to engineer bespoke solutions for each deployment, Sabey is curating pre-vetted architectures and partners that align cooling technology, facility infrastructure, and operational responsibility. For operators planning AI and HPC rollouts, that can translate into clearer deployment paths and reduced integration risk.

The appeal of two-phase refrigerant cooling lies in its combination of density, efficiency and retrofit friendliness. Because the systems move heat directly from the rack to localized condensers using a pumped refrigerant, they can often be deployed with minimal disruption to existing white space. That makes them attractive for operators that need to increase rack power without rebuilding entire data halls or adding large amounts of chilled water infrastructure.

Sabey executives frame the partnership as a response to customer demand for flexible, future-ready cooling options. As more organizations standardize on GPU-rich architectures and high-density configurations, cooling strategy has become a primary constraint on capacity planning. By incorporating OptiCool’s technology into its program, Sabey is signaling to customers that they will have multiple, validated pathways to support emerging workload profiles while staying within power and sustainability envelopes.

As liquid and refrigerant-based cooling rapidly move into the mainstream, customers evaluating their own AI and high-density strategies may benefit from understanding how Sabey is standardizing these technologies across its portfolio. To explore how this partnership and Sabey’s broader integrated cooling program could support specific deployment plans, readers can visit Sabey’s website for more information at www.sabeydatacenters.com.

The post Sabey Data Centers Taps OptiCool to Tackle High-Density Cooling appeared first on Data Center POST.


It’s Not An AI Bubble — We’re Witnessing the Next “Cloud” Revolution

Alphabet, Amazon, and Microsoft: these tech giants’ cloud services (Google Cloud, AWS, and Azure, respectively) are considered the driving force behind all current business computing, data, and mobile services. But back in the mid-2000s, they weren’t immediately seen as best bets on Wall Street. When Amazon launched AWS, analysts and investors were skeptical, dismissing AWS as a distraction from Amazon’s core retail business. The Wall Street wizards did not understand the potential of cloud computing services. Many critics believed enterprises would never move their mission-critical workloads off-premises and into remote data centers.

As we all know, the naysayers were wrong, and cloud computing took off, redefining global business. It turbo-charged the economy, creating trillions in enterprise value while reducing IT costs, increasing application agility, and enabling new business models. In addition, the advent of cloud services lowered barriers to entry for startups and enabled rapid service scaling. Improving efficiency, collaboration, and innovation through scalable, pay-as-you-go access to computing resources was part of the formula for astounding success. The cloud pushed innovation to every corner of society, and those wise financiers misunderstood it. They could not see how this capital-intensive, long-horizon bet would ever pay off.

Now, we are at that moment again. This time with artificial intelligence.

Headlines appear every day saying that we’re in an “AI bubble.” But AI has gone beyond mere speculation as companies (hyperscalers) are in early-stage infrastructure buildout mode. Hyperscalers understand this momentum. They have seen this movie before with a different protagonist, and they know the story ends with transformation, not collapse. The need for transformative compute, power, and connectivity is the catalyst driving a new generation of data center buildouts. The applications, the productivity, and the tools are there. And unlike the early cloud era, sustainable AI-related revenue is a predictable balance sheet line item.

The Data

Consider these most recent quarterly earnings:

  • Microsoft Q3 2025: Revenue: $70.1B, up 13%. Net income: $25.8B, up 18%. Intelligent Cloud grew 21% led by Azure, with 16 points of growth from AI services.
  • Amazon Q3 2025: Revenue: $180.2B, up 13%. AWS grew 20% to $33B. Trainium2, its second-gen AI chip, is a multi-billion-dollar line. AWS added 3.8 GW of power capacity in 12 months due to high demand.
  • Alphabet (Google Parent) Q3 2025: Revenue: $102.35B, up 16%. Cloud revenue grew 33% to $15.2B. Operating income: up nearly 85%, backed by $155B cloud backlog.
  • Meta Q3 2025: Revenue: $51.2B, up 26%. Increased infrastructure spend focused on expanding AI compute capacity. (4)

These are not the signs of a bubble. These are the signatures of a platform shift, and the companies leading it are already realizing returns while businesses weave AI into operations.

Bubble or Bottleneck

However, let’s be clear about this analogy: AI is not simply the next chapter of the cloud. Instead, it builds on and accelerates the cloud’s original mission: making extraordinary computing capabilities accessible and scalable. While the cloud democratized computing, AI is now democratizing intelligence and autonomy. This evolution will transform how we work, secure systems, travel, heal, build, educate, and solve problems.

Just as there were cloud critics, we now have AI critics. They say that aggressive capital spending, rising energy demand, and grid strain are signs that the market is already overextended. The pundits are correct about the spending:

  • Alphabet (Google) Q3 2025: ~US $24B on infrastructure oriented toward AI/data centers.
  • Amazon (AWS) Q3 2025: ~US $34.2B, largely on infrastructure/AI-related efforts.
  • Meta Q3 2025: US $19.4B directed at servers/data centers/network infrastructure for AI.
  • Microsoft Q3 2025: Roughly US $34.9B, of which perhaps US $17-18B or more is directly AI/data-center infrastructure (based on “half” of capex).

However, the pundits’ underlying argument is predicated on the same misunderstandings seen in the run-up to the cloud era: it confuses infrastructure investment with excess spending. The challenge with AI is not too much capacity; it is not enough. Demand is already exceeding grid capacity, land availability, power transmission expansion, and specialized equipment supply.

Bubbles do not behave that way; they generate idle capacity. For example, consider the collapse of Global Crossing. The company created the first transcontinental internet backbone by laying 100,000 route-miles of undersea fiber linking 27 countries.

Unfortunately, Global Crossing did not survive the dot-com bubble burst (1990-2000) and filed for bankruptcy. However, Level 3, then CenturyLink (2017), and Lumen Technologies knew better than to listen to Wall Street and acquired Global Crossing’s cables. Today, Lumen has reported total 2024 revenue of $13.1 billion. Although they don’t specifically list submarine cable business revenue, it’s reasonable to infer that these cables are still generating in the low billion-dollar revenue figures—a nice perpetual paycheck for not listening to the penny pinchers.

The AI economy is moving the value chain down the same path of sustainable profitability. But first, we must address factors such as data center proximity to grid strength, access to substation expansion, transformer supply, water access, cooling capacity, and land for modern power-intensive compute loads.

Power, Land, and the New Workforce

The cloud era prioritized fiber; the AI era is prioritizing power. Transmission corridors, utility partnerships, renewable integration, cooling systems, and purpose-built digital land strategies are essential for AI expansion. And with all that comes the “pick and shovel” jobs building data centers, which Wall Street does not factor into the AI economy. You need to look no further than Caterpillar’s Q3 2025 sales and revenue of $16.1 billion, up 10 percent.

Often overlooked in the tech hype are the industrial, real estate, and power grid requirements for data center builds, which require skilled workers such as electricians, steelworkers, construction crews, civil engineers, equipment manufacturers, utility operators, grid modernizers, and renewable developers. And once they’re up and running, data centers need cloud and network architects, cybersecurity analysts, and AI professionals.

As AI scales, it will lift industrial landowners, renewable power developers, utilities, semiconductor manufacturers, equipment suppliers, telecom networks, and thousands of local trades and service ecosystems, just as it’s lifting Caterpillar. It will accelerate infrastructure revitalization and strengthen rural and suburban economies. It will create new industries, just like the cloud did with Software as a Service (SaaS), e-commerce logistics, digital banking, streaming media, and remote-work platforms.

Conclusion

We’ve seen Wall Street mislabel some of the most significant tech expansions, from the telecom-hotel buildout of the 1990s to the co-location wave, global fiber expansion, hyperscale cloud, and now, with AI. Just like all revolutionary ideas, skepticism tends to precede them, even though there’s an inevitability to them. But stay focused: infrastructure comes before revenue, and revenue tends to arrive sooner than predicted, which brings home the point that AI is not inflating; it is expanding.

Smartphones reshaped consumer behavior within a decade; AI will reshape the industry in less than half that time. This is not a bubble. It is an infrastructure super-cycle predicated on electricity, land, silicon, and ingenuity. Now is the time to act: those who build power-first digital infrastructure are not in the hype business; they’re laying the foundation for the next century of economic growth.

# # #

About the Author

Ryne Friedman is an Associate at hi-tequity, where he leverages his commercial real estate expertise to guide strategic site selection and location analysis for data center development. A U.S. Coast Guard veteran and licensed Florida real estate professional, he previously supported national brands such as Dairy Queen, Crunch Fitness, Jimmy John’s, and 7-Eleven with market research and site acquisition. His background spans roles at SLC Commercial, Lambert Commercial Real Estate, DSA Encore, and DataCenterAndColocation. Ryne studied Business Administration and Management at Central Connecticut State University.

The post It’s Not An AI Bubble — We’re Witnessing the Next “Cloud” Revolution appeared first on Data Center POST.

  •  

Resolute CS and Equinix Close the Last-Mile Gap with Automated Connectivity Platform

Equinix customers can now order last-mile connectivity from enterprise edge locations to any of Equinix’s 270+ data centers globally, eliminating weeks of manual sourcing and the margin stacking that has long plagued enterprise network procurement.

The collaboration integrates Resolute CS’s NEXUS platform directly into the Equinix Customer Portal, giving enterprises transparent access to 3,200+ carriers across 180 countries. Rather than navigating opaque pricing through multiple intermediaries, customers can design, price, and order last-mile access with full visibility into costs and carrier options.

The Last-Mile Problem

While interconnection platforms like Equinix Fabric have transformed data center connectivity, the edge connectivity gap has remained a persistent friction point. Enterprises connecting branch offices or remote facilities to data centers typically face weeks-long sourcing cycles, opaque pricing structures with 2-4 layers of margin stacking (25-30% each), and inconsistent delivery across geographies.

This inefficiency becomes particularly acute as AI workloads shift toward distributed architectures. Unlike centralized applications, AI infrastructure increasingly requires connectivity across edge locations, multiple data centers, and cloud platforms, creating exponentially more last-mile requirements that manual sourcing processes cannot efficiently handle.

How It Works

Resolute NEXUS automates route design, identifies diversity and resiliency options, simplifies cloud access paths, and coordinates direct ordering with carriers. The result: enterprises can manage connectivity from branch office to data center to cloud through a single portal, with transparent pricing and no hidden margin layers.

“We are empowering customers to design their network architecture without access constraints,” said Patrick C. Shutt, CEO and co-founder of Resolute CS. “With Equinix and Resolute NEXUS, customers can design, price, and order global last-mile access with full transparency, removing complexity and lowering costs.”

Benefits for Carriers Too

The platform also creates opportunities for network providers. By operating as a carrier-neutral marketplace, Resolute NEXUS gives providers direct visibility into qualified enterprise demand, improved infrastructure utilization, and lower customer acquisition costs, all without the traditional intermediary layers.

AI and Distributed Infrastructure

With Equinix operating 270+ AI-optimized data centers across 77 markets, automated last-mile sourcing directly addresses the connectivity requirements for distributed AI deployments. Enterprises can now provision edge-to-cloud connectivity with the speed and transparency expected from modern cloud services.

Equinix Fabric customers can access the platform immediately through the Equinix Customer Portal by navigating to “Find Service Providers” and searching for Resolute NEXUS – Last Mile Access.

To learn more, read the full press release here.

The post Resolute CS and Equinix Close the Last-Mile Gap with Automated Connectivity Platform appeared first on Data Center POST.

  •  

DC BLOX Secures $240 Million to Accelerate Hyperscale Data Center Growth Across the Southeast

DC BLOX has taken a significant step in accelerating digital transformation across the Southeast with the closing of a new $240 million HoldCo financing facility from Global Infrastructure Partners (GIP), a part of BlackRock. This strategic funding positions the company to rapidly advance its hyperscale data center expansion while reinforcing its role as a critical enabler of AI and cloud growth in the region.​

Strategic growth financing

The $240 million facility from GIP provides fresh growth capital dedicated to DC BLOX’s hyperscale data center strategy, building on the company’s recently announced $1.15 billion and $265 million Senior Secured Green Loans. Together, these financings support the development and construction of an expanding portfolio of digital infrastructure projects designed to meet surging demand from hyperscalers and carriers.​

Powering AI and cloud innovation

DC BLOX has emerged as a leader in connected data center and fiber network solutions, with a vertically integrated platform that includes hyperscale data centers, subsea cable landing stations, colocation, and fiber services. This model allows the company to offer end-to-end solutions for hyperscalers and communications providers seeking capacity, connectivity, and resiliency in high-growth Southeastern markets.​

Community and economic impact

The new financing is about more than infrastructure; it is also about regional economic development. DC BLOX’s investments help bring cutting-edge AI and cloud technology into local communities, while driving construction jobs, tax revenues, and power grid enhancements that benefit both customers and ratepayers.

“We are excited to partner with GIP, a part of BlackRock, to fuel our ambitious growth goals,” said Melih Ileri, Chief Investment Officer at DC BLOX. “This financing underscores our commitment to serving communities in the Southeast by bringing cutting-edge AI and cloud technology investments with leading hyperscalers into the region, and creating economic development activity through construction jobs, taxes paid, and making investments into the power grid for the benefit of our customers and local ratepayers alike.”​

Backing from leading investors

Michael Bogdan, Chairman of DC BLOX and Head of the Digital Infrastructure Group at Future Standard, highlighted that this milestone showcases the strength of the company’s vision and execution. Future Standard, a global alternative asset manager based in Philadelphia with over 86.0 billion in assets under management, leads DC BLOX’s sponsorship and recently launched its Future Standard Digital Infrastructure platform with more than 2 billion in assets. GIP, now a part of BlackRock and overseeing over 189 billion in assets, brings deep sector experience across energy, transport, and digital infrastructure, further validating DC BLOX’s role in shaping the Southeast as a global hub for AI-driven innovation.​

Read the full release here.

The post DC BLOX Secures $240 Million to Accelerate Hyperscale Data Center Growth Across the Southeast appeared first on Data Center POST.

  •  

Registration Opens for Yotta 2026 as AI Infrastructure Demand Accelerates

Yotta 2026 is officially open for registration, returning Sept. 28–30 to Las Vegas with a larger footprint, a new venue and an expanded platform designed to meet the accelerating demands of AI-driven infrastructure. Now hosted at Caesars Forum, Yotta 2026 reflects both the rapid growth of the event and the unprecedented pace at which AI is reshaping compute, power and digital infrastructure worldwide.

As AI workloads scale faster than existing systems were designed to handle, infrastructure leaders are facing mounting challenges around power availability, capital deployment, resilience and integration across traditionally siloed industries. Yotta 2026 is built to convene the full ecosystem grappling with these realities, bringing together operators, hyperscalers, enterprise leaders, energy executives, investors, builders, policymakers and technology partners in one place.

Rebecca Sausner, CEO of Yotta, emphasizes that the event is designed for practical progress, not theoretical discussion. From chips and racks to networks, cooling, power and community engagement, AI is transforming every layer of digital infrastructure. Yotta 2026 aims to move conversations beyond vision and into real-world solutions that address scale, reliability and investment risk in an AI-first era.

A defining feature of Yotta 2026 is its advisory board-led approach to programming. The conference agenda is being developed in collaboration with the newly announced Yotta Advisory Board, which includes senior leaders from organizations spanning AI, cloud, energy, finance and infrastructure, including OpenAI, Oracle, Schneider Electric, KKR, Xcel Energy, GEICO and the Electric Power Research Institute (EPRI). This cross-sector guidance ensures the program reflects how the industry actually operates, as an interconnected system where decisions around power, compute, capital, design and policy are inseparable.

The 2026 agenda will focus on the most urgent challenges shaping the AI infrastructure era. Key themes include AI infrastructure and compute density, power generation and grid interconnection, capital formation and investment risk, design and operational resilience, and policy and public-private alignment. Together, these topics offer a market-driven view of how digital infrastructure must be designed, financed and operated to support AI at scale.

With an anticipated 6,000+ AI and digital infrastructure leaders in attendance, Yotta 2026 will feature a significantly expanded indoor and outdoor expo hall, curated conference programming and immersive networking experiences. Hosted at Caesars Forum, the event is designed to support both strategic planning and hands-on execution, creating space for collaboration across the entire infrastructure value chain.

Early registration is now open, with passes starting at $795 and discounted rates available for early registrants. As AI continues to drive unprecedented infrastructure demand, Yotta 2026 positions itself as a critical forum for the conversations and decisions shaping the future of compute, power and digital infrastructure.

To learn more or register, visit yotta-event.com.

The post Registration Opens for Yotta 2026 as AI Infrastructure Demand Accelerates appeared first on Data Center POST.

  •