
2025 in Review: Sabey’s Biggest Milestones and What They Mean

26 January 2026 at 18:00

Originally posted on Sabey Data Centers.

At Sabey Data Centers, progress is more than a series of headlines. It’s a blueprint for what’s possible when infrastructure, efficiency and stewardship go hand in hand. From our award-winning sustainability initiatives to bold new campus designs and record-setting expansions, each milestone this year has demonstrated our commitment to powering tomorrow’s workloads with conscience and confidence.

With 2026 already well underway and promising to be a banner year, we wanted to pause and reflect on the path we forged in 2025.

Capacity expansion: built for growth 

In 2025, we announced strategic power expansions across our Pacific Northwest, Ashburn, Austin and Quincy locations. In Seattle and Columbia, 30MW of new power is now anticipated to come online by 2027, enabling tenants to scale quickly while leveraging ultra-efficient energy (carbon-free in Seattle) and a regional average PUE as low as 1.2.

On the East Coast, Ashburn’s third and final building broke ground, set to introduce 54MW of additional capacity, with the first tranches coming online in 2026. This will be the first three-story facility in Sabey’s portfolio, purpose-built for air-cooled, liquid-cooled and hybrid deployments, with rack densities over 100kW and an average PUE of 1.35. In both regions, Sabey’s expansion balances hyperscale demand with customization, modular scale and resilient connectivity.

The launch of construction for Austin Building B was another major milestone, expanding our presence in the dynamic Round Rock tech corridor. This three-story, liquid-cooling-ready facility is designed to deliver 54 megawatts of total power capacity. Building B continues our commitment to scalable, energy-efficient digital infrastructure, tailored for enterprise and hyperscale workloads.

To continue reading, please click here.

The post 2025 in Review: Sabey’s Biggest Milestones and What They Mean appeared first on Data Center POST.

Power Should Serve Compute: Airsys’ Energy-First Approach to Data Center Cooling

21 January 2026 at 17:00

Data Center POST had the opportunity to connect with Paul Quigley, President of Airsys Cooling Technologies, Inc., ahead of PTC’26. With more than 36 years of experience spanning HVAC contracting, distribution, and executive leadership in manufacturing, Quigley is widely recognized for his work in large-scale, high-demand cooling environments. His career includes co-designing one of the largest VRF projects in North America and leading complex cooling initiatives across the United States. Since joining Airsys in 2021, Quigley has focused on advancing precision and liquid-assisted cooling technologies to help data center operators address power constraints driven by AI, high-density compute, and rapid digital infrastructure growth. In the Q&A below, Quigley shares his perspective on the challenges facing the global digital infrastructure industry and how Airsys is helping operators turn cooling efficiency into usable compute power.

Data Center POST (DCP) Question: What does your company do?

Paul Quigley (PQ) Answer: Airsys is an energy-focused cooling solutions company founded on a simple idea: power should serve computation, not be lost protecting it. Every system we build returns power back to the data center where it belongs. Our innovative, award-winning technologies give operators more usable power for compute, improving capacity, resilience, and overall profitability.

DCP Q: What problems does your company solve in the market?

PQ A: Airsys helps data centers recover power lost to cooling. As AI clusters push energy systems past their limits, operators are forced to choose between compute, capacity, and cost. Our cooling technologies solve that problem by returning power back to the data center, reducing stranded capacity, and giving operators more usable power for compute.

DCP Q: What are your company’s core products or services?

PQ A: We design and build the world’s most advanced cooling solutions for data centers, including:

  • LiquidRack spray-cooling and liquid-assisted rack systems
  • High-efficiency DX and chilled-water cooling systems
  • Flooded-evaporator chillers (CritiCool-X)
  • Indoor and outdoor precision cooling systems
  • Edge, modular, and containerized data-center cooling
  • Control systems, energy-optimization tools, and PCE/ROIP performance frameworks

DCP Q: What markets do you serve?

PQ A:

  • Hyperscale and AI compute environments
  • Colocation and enterprise data centers
  • Modular and prefabricated data centers
  • Edge and telecom infrastructure
  • Education, industrial, government, and defense applications requiring mission-critical cooling

DCP Q: What challenges does the global digital infrastructure industry face today?

PQ A: The industry now operates in an environment of rapid compute expansion and structural power scarcity. Grid limitations, long construction timelines, inefficient legacy cooling, land constraints, and the explosive rise of AI workloads have created an energy bottleneck. Data centers no longer struggle with space; they struggle with power.

DCP Q: How is your company adapting to these challenges?

PQ A: Airsys has always worked to shift the conversation from cooling to energy stewardship. Every system we design focuses on reducing cooling overhead and returning power to computation. Our LiquidRack spray-cooling platform, low-lift CritiCool-X chillers, and high-efficiency DX systems enable operators to deploy more compute with less infrastructure. We’re also advancing new metrics – PCE and ROIP – to help operators quantify the financial value of returned power.

DCP Q: What are your company’s key differentiators?

PQ A:

  • Energy-first design philosophy — our systems return power to compute
  • Rapid delivery and global manufacturing — critical in today’s supply-strained market
  • LiquidRack spray cooling — enabling high-density AI clusters without stranded power
  • Flooded-evaporator chiller technology — high efficiency, low lift, faster deployment
  • End-to-end portfolio — DX, chilled water, liquid-assisted cooling, modular systems
  • Practical engineering — simple, reliable, maintainable designs
  • PCE / ROIP frameworks — financial and operational tools that change how operators evaluate cooling impact

DCP Q: What can we expect to see/hear from your company in the future?  

PQ A: You will see a major expansion of our LiquidRack platform, new ultra-efficient chiller technologies, deeper integration of PCE/ROIP metrics, and broader support for modular and edge deployments. We are continuing to push innovation toward one goal: giving operators more usable power for compute.

DCP Q: What upcoming industry events will you be attending? 

PQ A: Events Airsys is considering for 2026:

  • PTC’26 Hawaii
  • Escape the Cold Aisle Phoenix
  • Advancing DC Construction West
  • DCD>Connect New York
  • Data Center World DC
  • World of Modular Las Vegas
  • NSPMA
  • 7×24 Exchange Spring Orlando
  • Data Center Nation Toronto
  • Datacloud USA Austin
  • AI Infra Summit Santa Clara
  • DCW Power Dallas
  • Yotta Las Vegas
  • 7×24 Exchange Fall San Antonio
  • PTC DC
  • DCD>Connect Virginia
  • DCAC Austin
  • Supercomputing
  • Gartner IOCS
  • NVIDIA GTC
  • OCP Global Summit

DCP Q: Do you have any recent news you would like us to highlight?

PQ A: Yes, Airsys has expanded its high-density cooling portfolio with several major advancements.

More announcements are planned for early 2026 as Airsys continues to expand its advanced cooling portfolio for high-density compute environments.

DCP Q: Is there anything else you would like our readers to know about your company and capabilities?

PQ A: Airsys is built on a simple idea: power should serve computation, not be lost protecting it. Our mission is to help operators turn cooling losses into compute gains and to make energy stewardship a competitive advantage in the AI era.

DCP Q: Where can our readers learn more about your company?  

PQ A: www.airsysnorthamerica.com

DCP Q: How can our readers contact your company? 

PQ A: www.airsysnorthamerica.com/contact

To learn more about PTC’26, please visit www.ptc.org/ptc26. The event takes place January 18-21, 2026 in Honolulu, HI.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com.

Stay in the know! Subscribe to Data Center POST today.

# # #

About Data Center POST

Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.

Data Center POST works hard to deliver the most current information and thought-provoking ideas relevant to the success of the data center industry. Stay informed: visit www.datacenterpost.com.


The post Power Should Serve Compute: Airsys’ Energy-First Approach to Data Center Cooling appeared first on Data Center POST.

Interconnection and Colocation: The Backbone of AI-Ready Infrastructure

6 January 2026 at 14:00

Originally posted on 1547Realty.

AI is changing what infrastructure needs to do. It is no longer enough to provide power, cooling and a basic network connection. Modern AI and high-performance computing workloads depend on constant access to large data sets and fast communication between systems. That makes interconnection an essential part of the environment that supports them.

Traditional cloud environments were not built for dense GPU clusters or latency-sensitive applications. This has helped drive the rise of neocloud providers, which focus on specialized compute and rely on data centers for the physical setting in which that compute operates.

Industry reporting from RCR Wireless notes that many neocloud providers choose to colocate in established facilities instead of building new data centers. This gives them faster time to market and direct access to network ecosystems that would take years to recreate on their own. In this context, data centers with strong connectivity play a central role.

1547 operates facilities that combine space and power with the network access needed for AI and neocloud deployments. These environments allow operators to place infrastructure where it can perform as intended.

The Shift from Cloud First to Cloud Right

For many years, the default approach for new applications was simple. Put it in the cloud. That cloud-first mindset is now giving way to a cloud-right strategy. The question is no longer only whether something can run in the cloud, but whether it should.

AI and high-performance workloads often need to run close to users, to data sources, or along specific network routes. They require predictable latency and steady throughput. When model training or inference spans many GPUs across different clusters, even small delays can affect performance and cost.

Analysts have observed that organizations are matching each workload to the environment that fits it best. As RTInsights highlights, not every workload performs well in a single centralized cloud. Some applications remain in hyperscale environments. Others move to edge sites, private clouds or colocation facilities that offer greater control over performance. Neocloud operators support this shift by offering GPU focused infrastructure from locations chosen for both efficiency and access to network routes.

To do that, they need more than space. They need carriers, cloud on-ramps, internet exchanges and private connection options. They need a fabric that lets them move data efficiently between customers, partners, and providers. Connectivity within the facility brings these elements together and supports cloud-right placement.

1547 facilities support this shift by giving operators access to diverse networks in key markets. These environments allow AI workloads to sit where they perform best while staying connected to the wider ecosystem.

To continue reading, please click here.

The post Interconnection and Colocation: The Backbone of AI-Ready Infrastructure appeared first on Data Center POST.

infra/CAPITAL Summit 2026 Heads to Paris as Europe’s Hyperscale AI Forum

11 December 2025 at 16:00

The European digital infrastructure community will convene in Paris next spring for the launch of the inaugural infra/CAPITAL Summit, taking place 15–16 April 2026 at the Kimpton St Honoré. Hosted in partnership by Structure Research and The Tech Capital, infra/CAPITAL is designed as a vendor‑neutral, executive‑level gathering dedicated to the intersection of hyperscale AI infrastructure and institutional capital.

Positioned as the European Summit for Hyperscale AI Strategy & Execution, infra/CAPITAL will focus on the capital, infrastructure and policy decisions reshaping how AI and cloud platforms scale across the region. The summit will bring together data centre operators, cloud and hyperscale leaders, infrastructure investors, lenders, advisors and policymakers for two days of focused discussions, market intelligence and high‑value networking.

“We established infra/CAPITAL to create a European platform where the future of hyperscale and AI infrastructure can be designed – not just discussed,” said Philbert Shih, Managing Director at Structure Research. “This summit brings together operators, investors and decision‑makers so we can use data – not hype – to chart a sustainable and scalable path for the next generation of digital infrastructure.”

A Program Built Around Capital, Power and Policy

infra/CAPITAL’s agenda is centred on the realities of building and financing AI‑ready infrastructure in Europe. Sessions will explore topics such as power and site strategy, structuring and pricing risk in hyperscale developments, cross‑border expansion, ESG and regulatory requirements, and the evolving role of neocloud and edge in AI architectures. The program will blend independent research, fireside chats and panel discussions with perspectives from across the ecosystem.

“infra/CAPITAL fills a crucial gap in Europe’s data centre and AI infrastructure ecosystem,” added João Marques Lima, Managing Director at The Tech Capital. “By convening cloud and hyperscale leaders alongside capital allocators and industry analysts, we’re building a vital marketplace of ideas and connections – one that will help drive the investments and partnerships shaping tomorrow’s data economy.”

Networking, Partnerships and a Shared Mission

With curated programming, invite‑driven networking and opportunities for structured and informal conversations, infra/CAPITAL Summit 2026 is designed to help participants forge meaningful relationships and unlock new deal pathways. For Structure Research and The Tech Capital, the event extends a shared mission: to support the global digital infrastructure community with independent insight and to convene the decision‑makers who translate strategy into execution.

To learn more or register for infra/CAPITAL Summit 2026, visit: www.infracapitalsummit.com

For more information about Structure Research visit: www.structureresearch.net

For more information about The Tech Capital visit: www.thetechcapital.com

The post infra/CAPITAL Summit 2026 Heads to Paris as Europe’s Hyperscale AI Forum appeared first on Data Center POST.

AI and the Next Frontier of Digital Infrastructure

8 December 2025 at 16:00

Insights from Structure Research, Applied Digital, PowerHouse Data Centers, and DataBank

A New Era of Infrastructure Growth

At the infra/STRUCTURE Summit 2025, held October 15–16 at the Wynn Las Vegas, the session titled “AI: The Next Frontier” brought together data center leaders to discuss how artificial intelligence is reshaping infrastructure demand, investment, and development strategy.

Moderated by Jabez Tan, Head of Research at Structure Research, the conversation featured Wes Cummins, CEO of Applied Digital; Luke Kipfer, Managing Director at PowerHouse Data Centers; and Raul Martynek, CEO of DataBank. Each offered unique perspectives on how their organizations are adapting to the acceleration of AI workloads and what that means for power, scale, and capital in the years ahead.

Industry Transformation – From Hyperscale to AI-Scale

Jabez Tan opened the discussion by reflecting on how quickly the market has shifted. Just one year ago, many were questioning the durability of AI-related infrastructure investments. Now, as Tan observed, “The speed of change has outpaced even the most optimistic expectations.”

Wes Cummins of Applied Digital illustrated this evolution through his company’s own transformation. “We started building Bitcoin data centers,” Cummins said. “We were never a miner, we just built the facilities. Then, when AI took off, we realized our designs could scale. We pivoted early, and when ChatGPT hit, the entire world changed.”

That pivot positioned Applied Digital to become a key player in the new era of high-performance computing (HPC) and GPU-intensive workloads, with facilities like its large-scale campus in North Dakota exemplifying how traditional models have been re-engineered for AI.

Building for Scale – Meeting the Demand Wave

Raul Martynek of DataBank and Luke Kipfer of PowerHouse Data Centers both emphasized how scale and speed have become the defining factors of success. “As an executive developer, you have to have the conviction to bring inventory to market,” Martynek said. “If you’re building in good markets and with the right customers, there’s enormous room for growth.”

Cummins agreed, stressing that the conversation has shifted beyond simply securing power. “We’re moving past the question of who has power,” Cummins said. “Now it’s about who can build at scale, deliver reliably, and operate efficiently. Construction timelines, supply chain access, and delivery speed are the new gating factors.”

The panelists noted that hyperscalers are no longer alone in this race. New AI-focused firms, GPU-as-a-service providers, and cloud entrants are competing for capacity at unprecedented levels, pushing the industry to think and build faster.

Site Strategy and Market Evolution – Staying Close to the Core

On the question of site selection, Martynek explained that traditional Tier 1 markets remain critical, though the boundaries are expanding. “Proximity to major availability zones is still a sound long-term strategy,” Martynek said. “We’re buying land in emerging submarkets of Virginia, for example, close enough to the core, but flexible enough to support scale.”

Kipfer added that hyperscalers’ preferences vary by workload type. “For AI and machine learning, some customers can be farther from peering points,” Kipfer said. “But for commercial cloud and enterprise use cases, Tier 1 and Tier 1-adjacent locations still offer the lowest risk and greatest performance.”

Together, their remarks reflected a balanced market dynamic, one where new geographies are gaining traction, but traditional hubs remain foundational to large-scale AI deployments.

Is This a Bubble? – Understanding the AI Surge

As AI investment accelerates, Tan posed the question many in the audience were thinking: Are we in another tech bubble?

Cummins was direct in his response. “I lived through the dot-com bubble,” he said. “This is different. The rate of adoption and real-world application is unlike anything we’ve ever seen.” He pointed out that ChatGPT reached a billion daily queries in just over two years—compared to Google’s eleven-year journey to the same milestone. “The computing power behind that is staggering,” he added.

Martynek agreed, noting that despite the hype, constraints in power, supply chain, and construction capacity make overbuilding virtually impossible. “It’s actually very hard to build too much right now,” he said. “The market is self-regulating through those bottlenecks.”

Capital Strategy and Long-Term Partnerships

A major theme throughout the discussion was the evolving capital stack supporting AI infrastructure. Martynek shared that DataBank has attracted strong investment from institutional partners seeking stable, long-term returns. “We’ve created investment-grade structures backed by 15-year commitments from top-tier customers,” Martynek said. “That gives our investors confidence and gives us visibility into future growth.”

Cummins added that Applied Digital’s focus is on securing long-term offtake agreements with the right clients, those building sustainable businesses rather than speculative projects. “These are 15-year-plus commitments from high-quality counterparties,” Cummins said. “That’s what allows us to build aggressively but responsibly.”

The panel agreed that long-term alignment between capital providers, developers, and customers will define the next phase of industry maturity.

The Future of AI Infrastructure – Speed, Scale, and Cooperation

Looking ahead, all three panelists emphasized the need for ongoing collaboration across the ecosystem. From developers to operators to hyperscalers, success will depend on shared innovation and operational agility.

Cummins summed up the moment: “The world isn’t going back. We’ve unlocked a new era of computing, and our challenge is to keep up with it. Speed is everything.”

Martynek added, “We’re not overbuilding, we’re underprepared. The companies that can execute with discipline and partnership will define the next decade of infrastructure growth.”

A Market Fueled by Real Demand

The discussion underscored that the AI-driven infrastructure boom is not speculative; it is structural. Adoption is accelerating faster than any previous technology wave, supply is constrained, and capital is flowing toward long-term, revenue-backed projects. The result is a market with strong fundamentals, focused execution, and unprecedented potential for innovation.

Infra/STRUCTURE 2026: Save the Date

Want to tune in live, receive all presentations, and gain access to C-level executives, investors and industry-leading research? Then save the date for infra/STRUCTURE 2026, set for October 7-8, 2026 at The Wynn Las Vegas. Pre-Registration for the 2026 event is now open, and you can visit www.infrastructuresummit.io to learn more.

The post AI and the Next Frontier of Digital Infrastructure appeared first on Data Center POST.

Hyperscale Data Center Procurement: Scaling Smarter in the Age of AI

1 December 2025 at 16:00

Insights from Structure Research, Cloudflare, Decillion, Groq, and Lambda

Why Hyperscale Procurement Matters Now

At the infra/STRUCTURE Summit 2025, held October 15–16 at the Wynn Las Vegas, the session on Hyperscale Data Center Procurement explored how hyperscalers, cloud platforms, and AI companies are redefining site selection, capacity planning, and power procurement.

With the explosion of artificial intelligence (AI) and high-performance workloads, the panel examined how data center operators are adapting to meet new demands for speed, density, and collaboration. The discussion brought together leading experts who sit at the intersection of technology, infrastructure, and strategy: Jabez Tan, Head of Research at Structure Research; Sarah Kurtz, Data Center Selection Manager at Cloudflare; Whitney Switzer, CEO of Decillion; Anik Nagpal, Principal Strategic Advisor for Global Data Center Development at Groq; and Ken Patchett, VP of Data Center Infrastructure at Lambda.

Together, they offered a grounded yet forward-looking view of how hyperscale infrastructure is evolving and why collective problem-solving across the ecosystem has never been more urgent.

Understanding the New Procurement Reality – From Megawatts to Gigawatts

Moderator Jabez Tan opened by noting how quickly the scale of data center procurement has transformed. Just a few years ago, hyperscale planning revolved around megawatts. Today, as Ken Patchett of Lambda explained, “We used to talk in megawatts; now we’re talking in gigawatts or even hundreds of megawatts. The world has changed.”

Patchett emphasized that this growth is not theoretical; vacancy rates are at record lows, and facilities are being leased before construction even begins. “Seventy-three percent of buildings being built in the U.S. today are pre-leased before completion,” he said. “In some cases, they’re 100 percent committed before a shovel hits the ground.”

This surge underscores both the opportunity and the strain on today’s hyperscale procurement models. The traditional development timelines, often five years from land acquisition to delivery, are being tested by the speed at which AI and GPU-driven workloads are scaling.

Site Selection and Power – Seeing Through the Noise

Whitney Switzer, CEO of Decillion, offered insights into the increasingly complex process of site selection, especially in an environment filled with speculation and limited power capacity. “There’s a lot of land and a lot of promises,” Switzer said, “but not all sites can actually deliver what hyperscalers need. The challenge is cutting through the noise to identify real, deliverable power and real infrastructure.”

Anik Nagpal from Groq added that power availability has become the defining factor in any site’s viability. “We’re facing long waiting lists with utilities,” Nagpal explained. “It’s not enough to have a site, you need documented substation agreements, confirmed transformer orders, and clear delivery dates.” Without that level of verification, even well-positioned properties can fall short of hyperscale timelines.

Switzer reinforced that the industry must move toward deeper collaboration between developers, power providers, and end users to accelerate readiness. “You have to build trust,” she said. “That’s what ensures creativity and alignment between the business and technical sides of a deal.”

Market Challenges and Evolving Partner Strategies

Sarah Kurtz of Cloudflare described a rapidly tightening capacity market, where competition for space and power is fierce. “Prices have moved dramatically,” Kurtz said. “We might go out for one megawatt and come back to find that the same capacity now costs four times as much.” Despite those pressures, Kurtz highlighted that the key is adaptability, knowing when to secure smaller, strategic sites that can deliver sooner rather than waiting years for larger campuses.

Ken Patchett echoed this sentiment, pointing out that the demand wave is forcing new forms of partnership. “We’re all asking, ‘Do you have space? Do you have power?’ Conversations that didn’t happen ten years ago are now everyday,” Patchett said. “We have to work together, utilities, operators, AI companies, to actually build the infrastructure that matches the pace of technology.”

Nagpal added that power immediacy and transparency are now central to deal-making. “People want to believe the power’s there,” he said, “but you only know it when you see the agreements in writing. That’s the new due diligence.”

Designing for Density and Agility – Building for the Next Cycle

A recurring theme throughout the session was that data center design itself must evolve as hardware cycles shorten. Patchett underscored that density and adaptability are now fundamental requirements. “The buildings we designed 20 years ago won’t support what we’re running today,” Patchett said. “We’re moving from 50-kilowatt racks to 600-kilowatt racks, and we have to build in a way that can pivot every six to nine months as hardware changes.”

Patchett added that despite fears of overbuilding, the industry isn’t facing a bubble. “We’re still using what we built ten or twenty years ago,” he said. “This is about addition, not replacement. Our challenge is to keep up with demand, not question it.”

The panelists agreed that modular design, flexible financing, and shared innovation will define the next phase of data center evolution. As Switzer summarized, “It’s all about partnership, aligning resources and expertise to deliver creative solutions at scale.”

Collaboration as the New Competitive Edge

The session made clear that hyperscale procurement is no longer about simply buying power and land. It’s about integrating supply chains, synchronizing with utilities, and designing for continuous evolution. Across every perspective (developer, operator, and end user), the message was the same: collaboration is the only way to scale sustainably.

The leaders on stage shared a unified view that as AI reshapes data center demand, the industry’s success will depend not on who builds fastest, but on who builds smartest—with transparency, trust, and long-term partnership at the core.

Infra/STRUCTURE 2026: Save the Date

Want to tune in live, receive all presentations, and gain access to C-level executives, investors and industry-leading research? Then save the date for infra/STRUCTURE 2026, set for October 7-8, 2026 at The Wynn Las Vegas. Pre-Registration for the 2026 event is now open, and you can visit www.infrastructuresummit.io to learn more.

The post Hyperscale Data Center Procurement: Scaling Smarter in the Age of AI appeared first on Data Center POST.

Alternative Cloud Providers Redefine Scale, Sovereignty, and AI Performance

26 November 2025 at 16:00

At this year’s infra/STRUCTURE Summit 2025, held at the Wynn Las Vegas, one of the most forward-looking conversations came from the session “From Cloud to Edge to AI Inferencing.” Moderated by Philbert Shih, Managing Director at Structure Research, the discussion brought together a diverse panel of innovators shaping the future of cloud and AI infrastructure: Kevin Cochrane, Chief Marketing Officer at Vultr; Jeffrey Gregor, General Manager at OVHcloud; and Darrick Horton, CEO at TensorWave.

Together, they explored the emergence of new platforms bridging the gap between hyperscale cloud providers and the next wave of AI-driven, distributed workloads.

The Rise of Alternatives: Choice Beyond the Hyperscalers

Philbert Shih opened the session by emphasizing the growing diversity in the cloud ecosystem, from legacy hyperscalers to specialized, regionally focused providers. The conversation quickly turned to how these companies are filling critical gaps in the market as enterprises look for more flexible, sovereign, and performance-tuned infrastructure for AI workloads.

Cochrane shared insights from a recent survey of over 2,000 CIOs, revealing a striking shift: while just a few years ago nearly all enterprises defaulted to hyperscalers for AI development, only 18% plan to rely on them exclusively today. “We’re witnessing a dramatic change,” Cochrane said. “Organizations are seeking new partners who can deliver performance and expertise without the lock-in or limitations of traditional cloud models.”

Data Sovereignty and Global Reach

Data sovereignty remains a key differentiator, particularly in Europe. “Being European-born gives us a unique advantage,” Gregor noted. “Our customers care deeply about where their data resides, and we’ve built our infrastructure to reflect those values.”

He also highlighted OVHcloud’s focus on sustainability and self-sufficiency, from designing and operating its own servers to pioneering water-cooling technologies across its data centers. “Our mission is to bring the power of the cloud to everyone,” Gregor said. “From startups to the largest public institutions, we’re enabling a wider range of customers to build, train, and deploy AI workloads responsibly.”

AI Infrastructure at Scale

Horton described how next-generation cloud providers are building infrastructure purpose-built for AI, especially large-scale training and inferencing workloads. “We design for the most demanding use cases, foundational model training, and that requires reliability, flexibility, and power optimization at the cluster scale.”

Horton noted that customers are increasingly choosing data center locations based on power availability and sustainability, underscoring how energy strategy is becoming as critical as network performance. TensorWave’s approach, Horton added, is to make that scale accessible without the hyperscale overhead.

Democratizing Access to AI Compute

Across the panel, a common theme emerged: accessibility. Whether through Vultr’s push to simplify AI infrastructure deployment via API-based services, OVHcloud’s distributed “local zone” strategy, or TensorWave’s focus on purpose-built GPU clusters, each company is working to make advanced compute resources more open and flexible for developers, enterprises, and AI innovators.

These alternative cloud providers are not just filling gaps — they’re redefining what cloud infrastructure can look like in an AI-driven era. From sovereign data control to decentralized AI processing, the cloud is evolving into a more diverse, resilient, and performance-oriented ecosystem.

Looking Ahead

As AI reshapes industries, the demand for specialized infrastructure continues to accelerate. Sessions like this one underscored how innovation is no longer confined to the hyperscalers. It’s emerging from agile providers who combine scale with locality, sustainability, and purpose-built design.

Infra/STRUCTURE 2026: Save the Date

Want to tune in live, receive all presentations, and gain access to C-level executives, investors and industry-leading research? Then save the date for infra/STRUCTURE 2026, set for October 7-8, 2026 at The Wynn Las Vegas. Pre-Registration for the 2026 event is now open, and you can visit www.infrastructuresummit.io to learn more.

The post Alternative Cloud Providers Redefine Scale, Sovereignty, and AI Performance appeared first on Data Center POST.

How Capacity Europe 2025 Framed the Future of Cloud, Connectivity, and AI

20 November 2025 at 14:00

Capacity Europe 2025: Collaboration, Connectivity, and the Cloud-AI Convergence

The 24th edition of Capacity Europe, held October 21–23, 2025 at the InterContinental London – The O2, brought together more than 3,000 global leaders across the connectivity, cloud, data center, and digital infrastructure ecosystem. Over three impactful days, the event served as Europe’s largest platform for carriers, hyperscalers, data center operators, investors, and technology providers to exchange ideas and forge partnerships shaping the future of global connectivity.

Discussions throughout the event centered on the convergence of network and cloud infrastructure, the accelerating impact of artificial intelligence (AI), and the race to deliver sustainable, scalable capacity. The agenda featured over 150 speakers from organizations including Google Cloud, DE-CIX, Colt Technology, Equinix, NTT DATA, DigitalBridge, and EXA Infrastructure.

Highlighted speakers included Lex Coors, President, EUDCA & Chief Datacenter Technology & Engineering Officer at Digital Realty, and Vladimir Prodanovic, Principal Program Manager at NVIDIA, who both shared insights during the panel “Build Today or Buy Forever: The Role of European Data Centres in Facilitating the AI Explosion.” Tony Rossabi, Founder & Managing Member of Ocolo, and Phillip Marangella, Chief Marketing & Product Officer at EdgeConneX, joined the session “Chasing Power: How to Meet Future Requirements,” addressing one of the industry’s most urgent challenges: access to reliable, sustainable energy for large-scale deployments.

AI at the Core of Infrastructure Growth

AI dominated the conversation from the main stage to private meeting rooms. From hyperscalers to regional fiber providers, leaders agreed that the next era of infrastructure growth will be defined by low-latency, high-capacity ecosystems designed for AI inference and training workloads. European operators are expanding into new metros, while investors are targeting secondary markets where power and land remain available. The consensus: AI is redefining the scale, speed, and sophistication of digital infrastructure build-outs.

Power, Sustainability, and Policy

With demand rising faster than grid capacity, power availability emerged as one of Europe’s most pressing constraints. Experts noted that securing new grid connections can take up to ten years in mature markets such as London, Frankfurt, and Amsterdam, prompting developers to pursue renewable PPAs, grid-adjacent campuses, and nuclear partnerships. Sustainability was another focal point, with increasing expectations around embodied-carbon reduction, operational efficiency, and regulatory transparency under the EU’s evolving sustainability framework.

From Competition to Collaboration

Another key theme was the shift from competition to collaboration. The once-distinct worlds of carrier, cloud, and colocation are converging as customers seek end-to-end solutions spanning connectivity, compute, and storage. Panelists and participants emphasized that the future of connectivity depends on strategic partnerships among vendors, technology providers, and investors to create the resilient ecosystems required for AI-era infrastructure.

Capacity Europe 2025 reaffirmed that connectivity, cloud, and colocation are no longer parallel industries; they are interdependent pillars of a unified digital ecosystem. The conversations in London underscored that collaboration, scalability, and sustainability will define Europe’s ability to remain competitive in the global digital economy.

The next opportunity to continue this dialogue will be Capacity Middle East co-located with Datacloud Middle East, taking place March 3–6, 2026 in Dubai, followed by International Telecoms Week (ITW) in Washington, D.C., May 10–13, 2026.

To learn more about upcoming events in the Capacity Media portfolio, visit www.capacitymedia.com/events.

The post How Capacity Europe 2025 Framed the Future of Cloud, Connectivity, and AI appeared first on Data Center POST.

Strategic Evolution of Data Center Infrastructure for the Age of AI

7 November 2025 at 15:30

Originally posted on Compu Dynamics.

Artificial intelligence is transforming how digital infrastructure is conceived, designed, and deployed. While the world’s largest cloud providers continue to build massive hyperscale campuses, a new layer of demand is emerging — AI training clusters, high-performance compute environments, and inference nodes that require speed, density, and adaptability more than sheer scale.

For these applications, modular design is playing a strategic role. It isn’t a replacement for traditional builds. It’s an evolutionary complement — enabling rapid, precise deployment wherever high-density compute is needed.

Purpose-Built for AI, Not the Cloud of Yesterday

Traditional colocation and hyperscale data center facilities were engineered for predictable, virtualized workloads. AI environments behave differently: they run hotter and denser, and they evolve faster. Training clusters may exceed 200 kW per rack and require liquid-cooling integration from day one. Inference workloads demand proximity to the user to minimize latency.

Modular data center solutions provide a practical way to meet those demands. Prefabricated, fully engineered modules can be built in parallel with site work, tested in controlled conditions, and commissioned in days rather than months. Each enclosure can be tailored to its purpose — an AI training pod, an inference edge node, or a compact expansion of existing capacity.

To continue reading, please click here.

The post Strategic Evolution of Data Center Infrastructure for the Age of AI appeared first on Data Center POST.

Retail vs wholesale: finding the right colo pricing model

Colocation providers may offer two pricing and packaging models to sell similar products and capabilities. In both models, customers purchase space, power and services. However, the method of purchase differs.

In a retail model, customers purchase a small quantity of space and power, usually by the rack or a fraction of a rack. The colocation provider standardizes contracts, pricing and capabilities — the cost and complexity of delivering to a customer’s precise requirements are not justified, considering the relatively small contract value.

In a wholesale model, customers purchase a significantly larger quantity of space and power, typically at least a dedicated, enclosed suite of white space. Due to the size of these contracts, colocation providers need to be flexible in meeting customer needs, even potentially building new facilities to accommodate their requirements. The colocation provider negotiates price and terms, and customers often prefer to pay for actual power consumption rather than be billed on maximum capacity. A metered model allows the customer to scale power usage in response to changing demands.

A colocation provider may focus on a particular market by offering only a retail or wholesale model, or the provider may offer both to broaden its appeal. The terms “wholesale” and “retail” colocation more accurately describe the pricing and packaging models used by colocation providers rather than the type of customer.

Table 1: Key differences between retail and wholesale colocation providers

Retail colocation deals typically carry higher gross margins in percentage terms, but their sales volumes are lower. Most colocation providers would rather sell wholesale contracts because larger sales volumes deliver higher revenues, despite the lower gross margins. Because wholesale customers are the better prospects, retail customers are more likely to see price increases at renewal than wholesale customers.
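To see why providers favor the thinner-margin wholesale deals, consider a toy comparison; every figure below is invented for illustration.

```python
# Invented figures: retail wins on margin percentage, wholesale wins on
# absolute gross profit per deal, which is what providers optimize for.
deals = {
    "retail (per-rack)": {"monthly_revenue": 5_000, "gross_margin": 0.60},
    "wholesale (suite)": {"monthly_revenue": 400_000, "gross_margin": 0.30},
}
for name, deal in deals.items():
    profit = deal["monthly_revenue"] * deal["gross_margin"]
    print(f"{name}: {deal['gross_margin']:.0%} margin -> "
          f"${profit:,.0f}/month gross profit")
```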

Retail colocation pricing model

Retail terms are designed to be simple and predictable. Customers are typically charged a fixed fee based on the maximum power capacity supplied to equipment and the space used. This fee covers both the repayment of fixed costs and the variable costs associated with IT power and cooling. The fixed fee bundles all these elements together, so customers have no visibility into the individual components — but they benefit from predictable pricing.

In retail colocation, the facilities are already available, so capital costs are recovered across all retail customers through standard pricing. If a customer exceeds their allotted maximum power capacity, they risk triggering a breaker and potentially powering down their IT equipment. Some colocation providers monitor for overages and warn customers that they need to increase their capacity before an outage occurs.

Customers are likely to purchase more power capacity than they need in order to prevent these outages. As a result, some colocation providers may deliberately oversubscribe power to reduce their power costs and increase their profit margins. There are operational and reputational risks if oversubscription causes service degradation or outages.

Some colocation providers also meter power, charging a fee based on IT usage, which factors in the repayment of capital, IT and cooling costs, as well as a profit margin. Those with metering enabled may charge customers for usage exceeding maximum capacity, typically at a higher rate.
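To illustrate the retail mechanics described above, here is a minimal sketch that combines the bundled fixed fee with a metered overage billed at a premium. The per-kW rate and the 1.5x overage multiplier are invented assumptions, not quoted prices.

```python
# Hypothetical retail colocation bill: a bundled fixed fee sized on the
# contracted maximum power capacity, plus metered overage at a premium rate.
# Both rates below are invented assumptions, not quoted prices.

RATE_PER_KW_MONTH = 250.0   # $/kW/month bundled fee (space, power, cooling)
OVERAGE_MULTIPLIER = 1.5    # usage beyond contracted capacity costs more

def retail_bill(contracted_kw: float, metered_peak_kw: float) -> float:
    """Fixed fee on contracted capacity, plus any metered overage."""
    fixed = contracted_kw * RATE_PER_KW_MONTH
    overage_kw = max(0.0, metered_peak_kw - contracted_kw)
    return fixed + overage_kw * RATE_PER_KW_MONTH * OVERAGE_MULTIPLIER

# A customer contracted for 10 kW whose metered peak hits 12 kW:
print(retail_bill(contracted_kw=10.0, metered_peak_kw=12.0))  # 3250.0
```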

Can a colocation provider increase prices during a contract term? Occasionally, but only as a last resort — such as if power costs increase significantly. This possibility will be stipulated in the contract as an emergency or force majeure measure.

Usually, an internet connection is included. However, data transfer over that connection may be metered or bundled into a fixed cost package. Customers have the option to purchase cross-connects linking their infrastructure to third-party communications providers, including on-ramps to cloud providers.

Wholesale colocation pricing model

Wholesale colocation pricing is designed to offer customers the flexibility to utilize their capacity as they choose. Because terms are customized, pricing models will vary from customer to customer.

Some customers may prefer to pay for a fixed capacity of total power, regardless of whether the power is used or not. In this model, both IT power and cooling costs are factored into the price.

Other customers may prefer a more granular approach, with multiple charging components:

  • A fixed fee per unit of space/rack, based on maximum power capacity and designed to cover the colocation provider’s fixed costs while including a profit margin.
  • Variable IT power costs, passed directly from the electricity supplier to the customer and metered in kilowatt-hours (kWh). Customers bear the full cost of price fluctuations, which can change rapidly depending on grid conditions.
  • An “additional power” fee to account for variable cooling costs, calculated by multiplying actual power usage by an agreed design PUE. This figure may also be multiplied by a “utilization factor” to reflect cases where a customer is using only a small fraction of the data hall (and therefore impacting overall efficiency). A worked sketch of this arithmetic follows the list.
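To make the arithmetic concrete, here is a minimal sketch of how these three components might combine into a monthly invoice. Every rate, the suite size and the utilization factor are invented for illustration, and the sketch assumes one plausible reading of the “additional power” calculation: metered IT energy scaled by the PUE overhead (design PUE minus 1) rather than by the full PUE, so the IT power pass-through is not double-counted.

```python
# Hypothetical monthly invoice under the granular wholesale model above.
# Every rate and quantity here is an invented assumption, not a real price.

RACKS = 200                    # size of the dedicated suite
FIXED_FEE_PER_RACK = 1_500.0   # $/rack/month, sized on maximum power capacity
ELECTRICITY_RATE = 0.09        # $/kWh, passed through from the supplier
DESIGN_PUE = 1.35              # agreed design PUE for the facility
UTILIZATION_FACTOR = 0.8       # scales cooling overhead to share of hall used

def wholesale_invoice(it_energy_kwh: float) -> dict:
    """Split one month's charges into the three components described above."""
    fixed = RACKS * FIXED_FEE_PER_RACK
    # Variable IT power: metered consumption at the supplier's rate.
    it_power = it_energy_kwh * ELECTRICITY_RATE
    # "Additional power" fee for cooling: metered usage scaled by the PUE
    # overhead (DESIGN_PUE - 1), then by the agreed utilization factor.
    additional = (it_energy_kwh * (DESIGN_PUE - 1)
                  * UTILIZATION_FACTOR * ELECTRICITY_RATE)
    return {"fixed": fixed, "it_power": it_power, "additional": additional,
            "total": fixed + it_power + additional}

# Example: a suite averaging 1 MW of IT load over a 730-hour month.
print(wholesale_invoice(it_energy_kwh=1_000 * 730))
```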

Some customers may prefer a blended model: a fixed element for baseline capacity plus a variable charge for consumption above the baseline. Redundant feeds are also likely to impact cost. If new data halls need to be constructed, these costs may be passed on to the customers directly, or some capital may be recovered through a higher fixed rack fee.

Alternatively, for long-term deployments, customers may opt for either a “build-to-suit” or “powered shell” arrangement. In a build-to-suit model, the colocation provider designs and constructs the facility — including power, cooling and layout — to the customer’s exact specifications. The space is then leased to the customer, typically under a long-term agreement exceeding a decade.

In a powered shell setup, the provider delivers a completed exterior building with core infrastructure, such as utility power and network access. The customer is then responsible for outfitting the interior (racks, cooling, electrical systems) to suit their operational needs.

Most customers using wholesale colocation providers will need to implement cross-connects to third-party connectivity and network providers hosted in meet-me rooms. They may also need to arrange the construction of new capacity into the facility with the colocation provider and suppliers.

Hyperscalers are an excellent prospect for wholesale colocation, given their significant scale. However, their limited numbers and strong market power enable them to negotiate lower margins from colocation providers.

Table 2: Pricing models used in retail and wholesale colocation

In a retail colocation engagement, the customer has limited negotiating power — with little scale, they generally have minimal flexibility on pricing, terms and customization. In a wholesale engagement, the opposite is true, and the arrangement favors the customer. Colocation providers want the scale and sales volume, so are willing to cut prices and accommodate additional requirements. They are also willing to offer flexible pricing in response to customers’ rapidly changing requirements.


The Uptime Intelligence View

Hyperscalers have the strongest market power to dictate contracts and prices. With so few players, it is unlikely that many hyperscalers will bid for the same space, a dynamic that would otherwise push prices up. However, colocation providers still want their business because of the volume it brings. They would prefer to reduce gross margins to ensure a win, rather than risk losing a customer with such unmatched scale.

The post Retail vs wholesale: finding the right colo pricing model appeared first on Uptime Institute Blog.
