
Data Governance and Clinical Innovation

31 March 2026 at 13:00

Artificial intelligence is a tool designed to power innovation, but it’s important to understand its primary fuel: data. Data feeds not only the outputs of AI algorithms but also their training and operation. As a result, in sectors where innovation is driven by technologies like artificial intelligence, data has essentially become the fuel for that innovation, and ensuring the safety and quality of this data is essential to sustaining it.

Understandably, many critics have expressed concern over the use of artificial intelligence in healthcare settings, considering the private, sensitive nature of the data used in the field. Patient personal information is not only highly sensitive but also protected by law, meaning there are strict regulations and guidelines dictating how entities in healthcare can use artificial intelligence with regard to patient data.

Why strong data governance is essential for AI in healthcare

However, that doesn’t mean artificial intelligence shouldn’t be used in healthcare whatsoever. Instead, it means there is a need for strong data governance, as this is an essential step in enabling safe and ethical AI use in any industry, particularly ones such as healthcare where the stakes are high. In addition to ensuring compliance with any applicable regulations, strong data governance helps create greater transparency and trust that inspires patient confidence.

It’s important to remember the reason why the healthcare sector wants to deploy artificial intelligence technology in the first place: AI can accelerate innovation and lead to improved patient outcomes. For example, innovators in the healthcare industry have used AI to accelerate drug discovery, conduct more accurate diagnostics, and streamline operations in a way that significantly improves efficiency. But to achieve these outcomes, systems must have access to accurate, well-managed data.

The key is creating compliance frameworks that mitigate the risks of artificial intelligence while still supporting scalable healthcare solutions. The core of any compliance framework in healthcare is, of course, data security and privacy, but these guidelines can also help control other risks, such as algorithmic bias and “black box” opacity, ensuring that decisions and recommendations made by an AI system are fair and explainable.

Enabling the responsible deployment of AI in healthcare

Ultimately, data governance isn’t about gatekeeping but about collaboration and enabling the responsible and ethical deployment of artificial intelligence. The mindset with which we approach AI shouldn’t be about limiting how we can use the technology, but instead how we can facilitate its use in a way that does not compromise data integrity or patient privacy.

Right now, the key goal of healthcare practitioners who hope to implement artificial intelligence should be to build trust and reliability in these systems. The steps required to achieve this include ensuring data quality and diversity, maintaining transparent communication, and continuously monitoring and validating these systems.

The best way to look at AI systems in healthcare is as an analog to human employees. In healthcare, not even human employees have unfettered access to patient data. There are access controls based on the level of access an individual needs, with checks and balances and supervisory control.

The same philosophy should apply to autonomous systems. Just as approvals and access controls are required of human employees, so too should AI systems require approvals from human overseers.

Indeed, there is a world in which artificial intelligence can revolutionize the healthcare industry for the better, alleviating some of the burden on healthcare workers and contributing to improved patient outcomes. However, for this to happen, the adoption of AI must be done in a way that is responsible and ethical. With this mindset, prioritizing strong data governance, AI can become a reliable partner in patient care.

# # #

About the Author

Chris Hutchins is the founder and CEO of Hutchins Data Strategy Consulting. He helps healthcare institutions maximize the potential of their data by developing scalable, ethical data and artificial intelligence strategies. His areas of expertise include enterprise data governance, responsible AI adoption, and self-service analytics, and he helps organizations achieve substantial results through technology implementation. By empowering teams, Chris assists healthcare leaders in enhancing care delivery, reducing administrative work, and transforming data into meaningful outcomes.

The post Data Governance and Clinical Innovation appeared first on Data Center POST.

The 1 Gigawatt Data Center Dilemma

26 March 2026 at 15:00

The AI revolution is pushing the data center industry toward gigawatt-scale campuses. But the real question today is not how large a facility can be built. The real question is how quickly power can be converted into revenue.

Consider a 1 gigawatt data center project. One gigawatt equals one thousand megawatts of capacity. In today’s market, typical infrastructure costs for large data centers range between 8 million and 12 million dollars per megawatt for standard facilities. That places the infrastructure cost of a 1 GW campus between 8 billion and 12 billion dollars.

In many U.S. markets, developers are seeing costs closer to 10 to 14 million dollars per megawatt, which would place a 1 GW campus between 10 and 14 billion dollars. AI optimized data centers can be even more expensive due to high density racks, liquid cooling systems, and larger electrical infrastructure. Those facilities can reach 15 to 20 million dollars per megawatt, pushing a 1 GW campus to 15 to 20 billion dollars in infrastructure alone.
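As a quick sanity check, the per-megawatt figures above multiply out as follows (a minimal sketch; the dollar ranges are the article's, the helper function is purely illustrative):

```python
def campus_cost_range(capacity_mw, low_per_mw_m, high_per_mw_m):
    """Return (low, high) infrastructure cost in billions of dollars,
    given capacity in MW and cost bounds in $M per MW."""
    return (capacity_mw * low_per_mw_m / 1000, capacity_mw * high_per_mw_m / 1000)

# 1 GW = 1,000 MW, at the article's cost ranges
print(campus_cost_range(1000, 8, 12))    # standard facilities  -> (8.0, 12.0) $B
print(campus_cost_range(1000, 10, 14))   # typical U.S. markets -> (10.0, 14.0) $B
print(campus_cost_range(1000, 15, 20))   # AI-optimized         -> (15.0, 20.0) $B
```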

Once servers, GPUs, networking equipment, and storage are installed, the total project value can easily exceed 30 billion dollars. But capital cost is no longer the biggest constraint; energy is.

According to the International Energy Agency, global data center electricity consumption reached roughly 415 terawatt hours in 2024, representing about 1.5 percent of global electricity demand. That number is projected to approach 800 terawatt hours by 2030 as AI adoption accelerates. At the same time, power infrastructure is struggling to keep up. The United States interconnection queue alone now exceeds 2 terawatts of generation capacity waiting for approval, and in many regions new grid connections can take three to six years. This creates a major financial challenge for traditional hyperscale development.

Large buildings are often constructed years before sufficient power becomes available. Hundreds of megawatts of capacity can sit idle while developers wait for substations, transmission lines, and utility upgrades. On a one gigawatt campus that could mean billions of dollars tied up in infrastructure waiting for power.

Now compare that with a modular campus strategy.

Instead of constructing massive buildings designed for the full gigawatt from day one, the campus can be deployed incrementally as power becomes available. A one gigawatt campus could begin with a 20 megawatt deployment. Using the same industry pricing ranges, that first deployment would require between 160 and 240 million dollars at eight to twelve million dollars per megawatt, or up to 300 to 400 million dollars if the facility is designed for high density AI workloads. What makes this model powerful is how quickly revenue can begin.

In many markets AI capacity is leasing between 150 thousand and 250 thousand dollars per megawatt per month depending on location and density. A 20 megawatt deployment can therefore generate roughly 3 to 5 million dollars per month, or approximately 36 to 60 million dollars per year, while the rest of the campus continues expanding. Instead of waiting years for a massive hyperscale facility to be completed, the project can begin generating revenue within 12 to 18 months.
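The revenue arithmetic can be spelled out the same way (a minimal sketch; the lease rates are the article's figures, the helper function is illustrative):

```python
def monthly_revenue_range(mw, low_k_per_mw_month, high_k_per_mw_month):
    """Monthly lease revenue range in $M for a deployment of `mw` megawatts,
    given lease rates in $k per MW per month."""
    return (mw * low_k_per_mw_month / 1000, mw * high_k_per_mw_month / 1000)

# First 20 MW tranche at the article's $150k-250k per MW per month
lo, hi = monthly_revenue_range(20, 150, 250)
print(lo, hi)              # 3.0 5.0   ($M per month)
print(lo * 12, hi * 12)    # 36.0 60.0 ($M per year)
```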

As additional power becomes available the campus grows from twenty megawatts to one hundred megawatts, then several hundred megawatts, and eventually the full one gigawatt capacity. By the time the campus reaches full scale, the project may already be generating hundreds of millions of dollars annually.

There is also another strategic advantage that is becoming increasingly important: mobility of infrastructure.

If power availability changes, new energy sources come online, or grid constraints shift to another region, modular facilities can be redeployed where energy exists. Massive fixed hyperscale buildings cannot move.

This dramatically changes the risk profile.

Traditional hyperscale development concentrates 10 to 20 billion dollars into a single permanent structure. Modular campuses distribute capital across infrastructure that scales directly with available power.

In a world where energy has become the limiting factor for digital growth, the future of hyperscale development may not be one giant building. It may be gigawatt scale campuses built from modular infrastructure designed to grow with power.

# # #

About the Author

Kliton Agolli, Co-Founder, Board Member & Director of Global Growth, Northstar Technologies Group | Naples, Florida.

Kliton Agolli is a senior security and international business development executive with more than 35 years of experience operating at the intersection of national security, executive protection, counterintelligence, and global commercial expansion. His career spans military service, law enforcement, VIP and diplomatic protection, healthcare and hospitality security, and cross-border business development in complex and high-risk environments.

At Northstar Technologies Group, Mr. Agolli leads global growth strategy, international partnerships, and strategic market expansion. He plays a key role in aligning advanced security and infrastructure technologies with government, defense, healthcare, and mission-critical commercial clients worldwide. His work focuses on risk-informed growth, regulatory compliance, and building long-term strategic alliances across Europe, the Middle East, and the United States.

The post The 1 Gigawatt Data Center Dilemma appeared first on Data Center POST.

AI’s Overlooked Bottleneck: Why Front-End Networks Are Crucial to AI Data Center Performance

24 March 2026 at 14:00

By Mike Hodge, AI Solutions Lead, Keysight Technologies

We’re in the heart of the AI gold rush, and everyone wants to capitalize on the next big thing. Large language models, multimodal systems, and domain-specific AI workloads are moving from experimentation to production at scale. Across industries, enterprises are building their own proprietary models or integrating pre-trained ones to power applications ranging from video analytics to highly specialized inference services.

This shift has triggered a new wave of infrastructure investment. But while GPUs and accelerators dominate the conversation, scaling AI platforms has produced a less obvious constraint: front-end network performance. In increasingly distributed, multi-tenant AI environments, the ability to move data efficiently into (and across) platforms has become just as critical as raw compute density.

New AI platforms mean new expectations for infrastructure

AI infrastructure is no longer the exclusive domain of a handful of hyperscalers. A growing class of service providers has begun offering end-to-end AI platforms where compute, storage, networking, and orchestration are delivered as a service. Their value proposition is straightforward: customers bring data and models, while the platform handles the complexity of building, operating, and maintaining large-scale data center deployments.

Service models like these, however, place extraordinary demands on networking. Unlike traditional cloud workloads, AI jobs are defined by massive, sustained data movement and tight coupling between data pipelines and compute utilization. GPUs cannot perform at peak efficiency unless data arrives on time, in the right order, and at predictable speeds.

As a result, network performance is now one of the primary determinants of training, inference, and infrastructure efficiency.

The eye of the storm is moving from the fabric to the front end

AI infrastructure discussions often focus on back-end fabrics: high-bandwidth, low-latency interconnects between GPUs, for example. While these fabrics are indeed essential, they are only part of the picture.

Before training or inference ever begins, data must first traverse the front-end network. This occurs in several ways, but some of the most common paths include:

  • From remote object stores or on-premises repositories into the data center
  • From ingress points into virtual machines or containers
  • From storage into GPU-attached hosts

This is where north-south traffic (external to internal) intersects with east-west traffic (host-to-host and service-to-service). And in AI environments, these flows are not occasional spikes. They are sustained, high-throughput, latency-sensitive streams that run continuously throughout the lifecycle of a job.

When front-end networks underperform, the consequences are costly and immediate: idle accelerators, elongated training windows, unpredictable inference latency, and poor multi-tenant isolation.

Why traditional network validation falls short

Most cloud networks were designed around general-purpose workloads. Think about things like web services, databases, and transactional systems with relatively modest bandwidth demands and fluctuating traffic patterns punctuated by the occasional spike.

AI workloads, on the other hand, break these assumptions. On the front end, AI traffic is characterized by:

  • Extremely large data transfers, often using jumbo frames
  • Long-lived connections, sustained over hours or days
  • Millions of concurrent sessions in multi-tenant environments
  • Tight latency and jitter tolerances to avoid starving accelerators

Conventional network testing approaches — such as synthetic benchmarks, isolated link tests, or small-scale simulations — are unable to replicate this behavior. As a result, many issues only surface once customer workloads are already running, which also happens to be when the cost of remediation is highest.
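To make this concrete at toy scale, here is a minimal, hypothetical sketch (loopback only, nothing like a production traffic generator) that measures per-transfer latency and jitter over a single long-lived connection, two of the characteristics listed above:

```python
import socket
import statistics
import threading
import time

def echo_once(server_sock):
    """Accept one connection and echo everything back until it closes."""
    conn, _ = server_sock.accept()
    with conn:
        while data := conn.recv(65536):
            conn.sendall(data)

# Loopback stand-in for a traffic generator: one long-lived connection,
# repeated fixed-size transfers, latency measured per round trip.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=echo_once, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
payload = b"x" * 32768
samples_us = []
for _ in range(200):
    t0 = time.perf_counter()
    cli.sendall(payload)
    received = 0
    while received < len(payload):
        received += len(cli.recv(65536))
    samples_us.append((time.perf_counter() - t0) * 1e6)
cli.close()

print(f"p50 latency: {statistics.median(samples_us):.0f} us")
print(f"jitter (stdev): {statistics.stdev(samples_us):.0f} us")
```

Real workload emulation differs in exactly the dimensions the article names: millions of concurrent sessions, jumbo frames, and hours-long sustained load rather than a single short loop.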

The need for realistic workload emulation

Optimizing front-end AI networks requires the ability to reproduce real workload behavior at scale. That means emulating both north-south and east-west traffic patterns simultaneously, across distributed environments and under sustained load.

For north-south paths, this includes verifying that large datasets can be reliably pulled from diverse external sources into local storage, with consistent throughput, predictable latency, and no silent data loss. Such transfers are essential, as any inefficiency propagates directly into longer training times and underutilized GPUs.

For east-west paths, the challenge shifts to connection density, latency, and scalability. Once workloads are running, virtual machines and services exchange data continuously. Sometimes within the same host, sometimes across racks, and sometimes across geographically separated data centers. Modern AI platforms increasingly rely on SmartNICs and offload technologies to make this feasible, so these components must also be validated under realistic connection rates and protocol behavior.

Without large-scale, workload-accurate testing, subtle bottlenecks — such as rule-processing limits, connection-tracking inefficiencies, or unexpected latency spikes — can remain hidden until production traffic exposes them.

Front-end optimization is a competitive differentiator

In response, the most advanced AI platform operators are shifting left: validating their front-end networks before customers ever deploy workloads. Along the way, their proactive approach is changing the economics of AI infrastructure.

Stress-testing networks under real-world conditions offers a range of benefits for network operators:

  • Identifying performance cliffs at high line rates
  • Understanding how different layers of the stack interact under load
  • Resolving scaling limitations in NICs, virtual networking, or storage paths
  • Delivering predictable performance across tenants and geographies

It’s not just about improving peak throughput. It’s about building confidence that platforms perform as expected under peak pressure. And in a market where AI workloads are expensive, time-sensitive, and strategically important, this confidence becomes a differentiator. Customers may never see the network directly, but they feel its impact in faster training cycles, lower inference latency, and fewer production surprises.

Looking ahead: front-end networks and the next generation of AI

AI workloads continue to evolve. Microservices-based architectures, distributed inference pipelines, and increasingly stateful services are placing even more emphasis on low-latency, high-availability front-end connectivity. At the same time, data is becoming more geographically distributed, pushing platforms to span multiple regions and network domains.

In this environment, front-end networks are no longer a supporting actor. They are a core component of AI system design. That means they must be engineered, validated, and optimized with the same rigor applied to compute and accelerators.

The lesson is clear: operators cannot optimize AI infrastructure by focusing on GPUs alone. The performance, efficiency, and reliability of tomorrow’s AI platforms will be defined just as much by how well they move data as by how fast they process it.

The post AI’s Overlooked Bottleneck: Why Front-End Networks Are Crucial to AI Data Center Performance appeared first on Data Center POST.

The Benefits of Bare Metal for AI Workloads

16 March 2026 at 15:00

Originally posted on Hivelocity.

Artificial intelligence (AI) is driving a new wave of innovation that demands more from infrastructure than ever before. As organizations train larger models, process massive datasets, and deploy AI, performance, scalability, and cost efficiency have become even more critical. In this high-performance landscape, bare metal servers offer a clear advantage over virtualized environments, delivering the raw power and control that AI workloads require.

Bare metal servers provide direct access to dedicated hardware (CPU cores, memory, storage) without the overhead of virtualization. This architecture eliminates the “noisy neighbor” effect that is common in cloud environments, ensuring consistent, predictable performance. For AI tasks such as model training and inferencing, where compute intensity and I/O throughput are key, that consistency can translate into measurable performance gains.

Cost Predictability

While there is a common industry misconception that bare metal is more expensive than cloud alternatives, this is often not the case. In reality, long-term AI operations, especially within predictable or stable workloads, often see significant savings with bare metal infrastructure. Because resources are dedicated, costs are fixed and transparent, cutting down on the unpredictable cloud egress fees and scaling premiums that typically come with consumption-based models.

This predictability of cost allows AI teams to plan budgets more effectively, particularly for ongoing training pipelines and continuous model tuning. Hivelocity’s bare metal solutions allow customers to scale resources strategically, allowing workloads to evolve without the billing complexities that can make cloud deployments difficult to manage.

To continue reading, please click here.

The post The Benefits of Bare Metal for AI Workloads appeared first on Data Center POST.

Company Profile: BRIGHTRAY’s Prefabrication Strategy for the AI Era

12 March 2026 at 15:30

Data Center POST had the opportunity to connect with David Wang, Founder and Chairman of BRIGHTRAY, who is leading a new paradigm in data center delivery—speed without compromise, scale with sustainability. With over 25 years of industry experience, including senior leadership at Schneider Electric and HP managing mission-critical infrastructure, Wang founded BRIGHTRAY to address the explosive AI-driven demand for rapid, high-density infrastructure.

Traditional construction can no longer keep pace. That’s why BRIGHTRAY’s strategy centers on proprietary Prefabrication Data Center Solutions, enabling ultra-high-density deployment at unprecedented speed. This is proven by the company’s Malaysia milestones: MY-01 (20MW) delivered in 8 months, and MY-02 (50MW) completed in just 6 months, setting new benchmarks for speed and scalability.

Looking ahead, Wang is leading BRIGHTRAY’s global expansion from its strong APAC foundation into the U.S. and Middle East markets, with the vision of establishing BRIGHTRAY as “Your Gateway to Excellence in Integrated IDC Services” and building a resilient, sustainable digital backbone for the AI era.

The information below is summarized to provide our readers a deeper dive into who BRIGHTRAY is, what they do and the problems they are solving in the industry.

What does BRIGHTRAY do?  

BRIGHTRAY provides prefabricated data center solutions that are designed and built off-site for faster, more efficient deployment.

What problems does BRIGHTRAY solve in the market?

The company addresses the growing demand for speed and scalability in data center infrastructure. BRIGHTRAY helps clients compress deployment timelines, reduce execution risk, and bring infrastructure online faster, enabling quicker returns and greater adaptability across different environments. The company is capable of delivering a 50MW data center in as little as 6 months, setting a new industry benchmark.

What are BRIGHTRAY’s core products or services?

Prefabrication Data Center Solutions

  • Full Prefabrication DC (FPD): prefab of the whole data center, from building structure to core systems
  • Interior Prefabrication DC (IPD): core modules installed in a pre-built shell
  • Containerized Prefabrication DC (CPD): infrastructure delivered in containers

What markets do you serve?

BRIGHTRAY is deeply rooted in the APAC market and is now expanding into the U.S. and Middle East markets.

What challenges does the global digital infrastructure industry face today?

  • Speed vs. Quality: Traditional construction methods take 2-3 years per project, yet AI and cloud demand deployment in months—not years.
  • Sustainability Pressure: Data centers are energy-intensive, and global net-zero targets require radical efficiency improvements.
  • Scalability Constraints: Supply chain bottlenecks, skilled labor shortages, and site limitations hinder rapid expansion.

How is BRIGHTRAY adapting to these challenges?

  • Prefabrication Innovation: Our proprietary solutions (FPD, IPD, CPD) shift construction from on-site to factory-controlled environments, slashing timelines by up to 70%.
  • Speed Records: We’ve proven our model with MY-01 (20MW in 8 months) and MY-02 (50MW in 6 months) —landmark projects in Malaysia that set new industry speed benchmarks and demonstrate BRIGHTRAY’s leadership in powering Asia Pacific’s rapidly growing digital hubs.
  • Global-Ready Design: Our solutions are engineered for “global adaptability,” enabling rapid deployment across diverse environments with consistent quality.

What are BRIGHTRAY’s key differentiators?

  • Proven Speed: 6-month delivery for 50MW capacity—unprecedented in the industry.
  • End-to-End Expertise: Our team brings over 10 years of experience across the full lifecycle—design, construction, and operations.
  • Sustainability by Design: Prefabrication reduces on-site waste, carbon footprint, and energy consumption.
  • Three Flexible Solutions: FPD (full prefab), IPD (interior prefab), CPD (containerized)—tailored to client needs.
  • Global Vision, Local Roots: Deep APAC expertise, now expanding into U.S. and Middle East markets.

What can we expect to see/hear from BRIGHTRAY in the future?  

  • Global Market Expansion: Following our strong foundation in APAC, we are actively entering the U.S. and Middle East markets. Expect announcements on new partnerships, project deployments, and local operations in these key regions.
  • Next-Generation Prefabrication Solutions: We are continuously evolving our proprietary FPD, IPD, and CPD solutions to support higher densities and greater energy efficiency—purpose-built for the AI era’s demanding workloads.
  • New Project Milestones: Building on our Malaysia success (MY-01: 20MW/8 months; MY-02: 50MW/6 months), we will unveil additional record-breaking deployments that further compress timelines while scaling capacity.

What upcoming industry events will you be attending? 

BRIGHTRAY will be attending Nvidia GTC in San Jose.

Do you have any recent news you would like us to highlight?

BRIGHTRAY breaks record by completing data center in 8 months.

Where can our readers learn more about BRIGHTRAY?  

You can learn more about us on our official website, www.brightraydc.com, or on our LinkedIn.

How can our readers contact BRIGHTRAY? 

You can contact us at marketing@brightraydc.com.

# # #

About BRIGHTRAY

BRIGHTRAY is redefining data center delivery through its pioneering prefabrication solutions. As hyperscale demand surges and speed-to-deployment becomes a decisive competitive edge, BRIGHTRAY empowers its clients to bring high-standard, scalable infrastructure online in just months, dramatically compressing timelines, reducing execution risk, and unlocking faster returns. The BRIGHTRAY team, comprising professionals with over 10 years of data center experience and led by executives with over 20 years of industry leadership, has collectively delivered hundreds of data center projects. The team has built end-to-end capabilities across the full lifecycle—from design and construction to operations—and leverages this deep expertise to pioneer innovative prefabricated data center solutions: Full Prefabrication Data Center (FPD), Interior Prefabrication Data Center (IPD), and Containerized Prefabrication Data Center (CPD). Each solution is engineered around three core principles—speed, resilience, and global adaptability—to enable seamless deployment across diverse environments.

About Data Center POST

Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed and online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.

Data Center POST works hard to get the most current information and thought-provoking ideas most apt to add relevance to the success of the data center industry. Stay informed, visit www.datacenterpost.com.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com or submit your article here.

Want more digital infrastructure news? Stay in the know and subscribe to Data Center POST today!

The post Company Profile: BRIGHTRAY’s Prefabrication Strategy for the AI Era appeared first on Data Center POST.

Desperate to Fund AI? Leasing May Be the Smartest Move IT Leaders Make in 2026

9 March 2026 at 14:00

AI spending is accelerating at a pace most enterprise budgets simply can’t match. While IT leaders are under pressure to deliver transformative AI capabilities, their capital budgets aren’t growing at the same rate as these AI ambitions. This mismatch forces difficult trade-offs: delaying projects, stretching aging infrastructure beyond its intended lifecycle, and diverting funding from other critical initiatives.

But there is another option. Increasingly, IT leaders are turning to technology leasing as a savvy strategy to help expedite AI adoption without sacrificing operational agility or financial liquidity.

AI: Thinking Through the Dollars and Sense

From my vantage point, working closely with IT leaders across industries, I hear the lament. AI infrastructure is expensive and highly concentrated, particularly GPU-based compute power. A single GPU cluster designed to support large-scale AI workloads can cost hundreds of thousands to millions of dollars. For enterprise-wide deployments, total data center investments can easily reach $150 million and as much as $500 million.

For mid-tier enterprises, the challenges are even greater, as many lack the balance-sheet strength to secure traditional credit for such large capital expenditures. Some resort to private equity or high-interest lenders. But even those who can afford to purchase the infrastructure outright are frustrated by the pace of AI innovation and the risk of the technology quickly becoming outdated or obsolete.

For determined IT leaders, the question is not whether to invest in AI infrastructure, but how to fund it without compromising the broader IT roadmap. This is where the financing strategy becomes just as important as the technology strategy.

IT leasing eases these pressures in several critical ways:

  • Minimizing upfront costs. Traditional purchasing requires a massive outlay of capital, sometimes forcing companies to scale back or winnow down the scope of projects despite urgent demand. Leasing converts that one-time expense into predictable monthly payments. Instead of committing $50 million upfront, an organization can structure payments over time, freeing capital for additional initiatives and allowing multiple AI projects to move forward simultaneously.
  • Enhancing flexibility and reducing financial risk. Purchased technology sits on the balance sheet and depreciates over a fixed period. If business needs shift or the organization upgrades early, it can trigger book losses. Leasing – when structured properly – can classify equipment as an operating expense, keeping it off the balance sheet and enabling companies to pivot more easily without the burden of carrying these assets.
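As a purely hypothetical illustration of the cash-flow difference (the 8% rate and 36-month term below are assumptions, not figures from the article), a standard annuity formula converts a one-time outlay into level monthly payments:

```python
def level_monthly_payment(principal, annual_rate, months):
    """Standard level-payment annuity formula; inputs below are
    illustrative assumptions, not terms quoted in the article."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

# Hypothetical: $50M of AI infrastructure over a 36-month lease at 8% APR
pmt = level_monthly_payment(50_000_000, 0.08, 36)
print(round(pmt))  # roughly $1.57M per month instead of $50M upfront
```

Actual lease structures layer in residuals, fees, and tax treatment, so real payments will differ; the point is only that the capital outlay is spread across the term.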

Lease the Entire AI Stack, Not Just the Hardware

IT leaders recognize today’s AI deployments extend far beyond servers. Enterprises are leasing high-performance GPU servers optimized for AI model training and inference, along with high-speed networking equipment, enterprise storage systems, integrated “rack and roll” data center solutions, firewalls, and AI-specific software.

Maintenance contracts, security tools, and embedded applications can all be incorporated into a single lease structure.

This bundling delivers administrative and compliance benefits. Hardware typically carries a residual value, often 10–15% of the purchase cost, which reduces the amount amortized across the lease term. Software licenses and other “soft costs” are included in payments and expire at term end, eliminating resale complications. Clients are responsible only for the hardware at lease completion, simplifying compliance and ensuring security updates, patches, and licenses remain current throughout the lifecycle.
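As a purely illustrative sketch of how a residual reduces the amortized amount (the hardware cost, residual percentage, and term here are assumptions, not figures from the article):

```python
# Illustrative assumptions only: a residual in the article's 10-15% range
hardware_cost = 1_000_000   # assumed hardware purchase cost, $
residual_pct = 0.12         # assumed residual value fraction
months = 36                 # assumed lease term

# The lessee amortizes the purchase cost minus the residual the
# lessor expects to recover at end of term (before financing charges).
amortized = hardware_cost * (1 - residual_pct)
print(amortized)                      # amount amortized over the term
print(round(amortized / months, 2))  # straight-line monthly portion
```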

Combat Obsolescence Before It Becomes a Liability

One of the most common concerns I hear from executives is technology obsolescence. And given the pace of AI, where innovation cycles are measured in months, not years, that concern is justified.

Leasing naturally enforces a rigor and discipline for countering obsolescence. A three- or four-year term creates a defined decision point: extend, buy out or upgrade the technology. This prevents the “set it and forget it” ownership mindset that often leads to aging, unsupported systems and expensive, reactive refresh cycles. In AI environments, delaying upgrades can multiply total costs through inefficiencies and lost competitive advantage.

Leasing is a Budget Multiplier

Looking ahead to 2026 and beyond, IT leaders must think differently about capital allocation. No one can predict what the AI landscape will look like in three years. Owning large volumes of rapidly depreciating infrastructure can limit strategic agility.

Leaders must also factor in the full lifecycle cost of AI infrastructure, which includes equipment refreshes, secure data wiping, asset disposition, and regulatory compliance. These factors carry operational and financial burdens when assets are owned outright.

The most important priority today is building a strategy that enables AI adoption with minimal upfront cost and maximum flexibility. Leasing can act as a budget multiplier. Instead of exhausting capital on one large acquisition, organizations can deploy that same funding across predictable monthly payments, preserving liquidity while expanding total project capacity. In doing so, IT leaders maintain momentum across their complete technology roadmap, ensuring AI transformation doesn’t come at the expense of operational resilience.

# # #

About the Author

Frank Sommers brings 30 years of experience in the IT leasing industry, working closely with global enterprise organizations to help them modernize infrastructure while preserving capital and accelerating technology adoption. Known for consistently exceeding sales targets, Frank has also developed and led numerous successful vendor financing programs in partnership with major resellers, creating flexible acquisition models that support complex IT environments. His deep expertise in IT lifecycle management, financing strategies, and enterprise procurement has made him a trusted advisor across the industry. A former collegiate soccer player at Cal Poly San Luis Obispo, Frank brings the same competitiveness and teamwork to every client relationship.

The post Desperate to Fund AI? Leasing May Be the Smartest Move IT Leaders Make in 2026 appeared first on Data Center POST.


Three-quarters of AI Subscribers in the US Want Generative AI Included With Their Mobile Contract

22 January 2026 at 14:00

Originally posted on TelecomNewsroom.

Telcos are the missing link in AI adoption, say paying AI subscribers

Nearly three-quarters (74%) of US consumers who pay for generative AI services want those tools included directly with their mobile phone plan, according to new research from subscription bundling platform Bango.

The survey of 1,400 ChatGPT subscribers in the US also reveals that demand for AI-inclusive telco bundles extends beyond mobile. A further 72% of AI subscribers want AI included as part of their home broadband or TV package, while more than three-quarters (77%) want generative AI tools paired with streaming services such as Netflix or Spotify, offering a bundling opportunity for telcos.

The findings signal a major opportunity for telcos to become the primary distributors of AI services. AI subscribers already spend over $65 per month on these tools, representing a high-value audience for telcos.

To read the full press release, please click here.

The post Three-quarters of AI Subscribers in the US Want Generative AI Included With Their Mobile Contract appeared first on Data Center POST.

AI Data Center Market to Surpass USD 1.98 Trillion by 2034

21 January 2026 at 15:00

The global AI data center market was valued at USD 98.2 billion in 2024 and is estimated to grow at a CAGR of 35.5% to reach USD 1.98 trillion by 2034, according to a recent report by Global Market Insights Inc.
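
A quick compound-growth check of those headline numbers (the reported CAGR is presumably rounded, which explains the small gap against the quoted USD 1.98 trillion):

```python
# Sanity check: $98.2B in 2024 compounding at 35.5% annually over 10 years (2024–2034).
start_billion = 98.2
cagr = 0.355
years = 10

projected = start_billion * (1 + cagr) ** years
print(f"Projected 2034 market size: ${projected / 1000:.2f}T")  # → $2.05T
```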

Growing adoption of generative AI and machine learning tools requires extraordinary processing power and storage capabilities, increasing reliance on data centers specifically optimized for AI workloads. These environments depend on advanced GPUs, scalable system architecture, and ultra-low-latency networking to support complex model training and inference across industries such as finance, healthcare, and retail. Big data analytics is also accelerating demand, as organizations handle massive streams of structured and unstructured information that must be processed rapidly.

AI-focused facilities enable high-performance computing for real-time workloads, strengthening their role as essential infrastructure for global digital transformation. The rapid expansion of cloud computing, along with the rising number of hyperscale facilities, continues to amplify the need for AI-ready infrastructures. Providers are investing in advanced AI data platforms that offer scalable services to enterprises and developers, further increasing market momentum.

The hardware segment of the AI data center market accounted for USD 61.1 billion in 2024. Growth is driven by expanding use of AI chips, GPU accelerators, advanced cooling technologies, high-density server systems, and optical networking solutions. Rising GPU energy requirements, the shift toward rack densities between 30 and 120 kW, and large-scale deployment strategies introduced by leading technology companies are shaping long-term capital allocation in the sector.

The cloud-based category held a 58% share in 2024 and is projected to grow at a CAGR of 35.2% from 2025 through 2034. This segment leads due to its unmatched scalability, flexible consumption options, and access to the latest AI-accelerated computing hardware without upfront investment. Hyperscale providers are making multi-billion-dollar commitments to strengthen global AI infrastructures, propelling adoption of AI-driven services and increasing demand for GPUs, TPUs, and specialized processors.

The U.S. AI data center market generated USD 33.2 billion in 2024. The country maintains a leading position supported by prominent hyperscale operators and substantial investments in GPU clusters, liquid cooling, and large-scale AI-aligned builds. Federal incentives, regional tax advantages, and infrastructure funding have further solidified the United States as the most capacity-rich region for AI computing.

Key participants in the AI data center market include Huawei, AWS, NVIDIA, HPE, Digital Realty, Google, Lenovo, Microsoft, Equinix, and Dell Technologies. Companies expanding their foothold in the market are focusing on infrastructure modernization, large-scale GPU deployments, and energy-efficient system design.

Many firms are investing in high-density racks, integrated liquid cooling, and next-generation networking to support advanced AI workloads. Strategic partnerships with chipmakers, cloud providers, and colocation operators help accelerate capacity expansion and ensure access to cutting-edge AI hardware. Providers are also scaling global data center footprints, enhancing automation capabilities, and optimizing power utilization through renewable-energy integration. Long-term contracts with enterprises, AI-as-a-service offerings, and the buildout of specialized AI clusters further reinforce competitive positioning and market dominance.

The post AI Data Center Market to Surpass USD 1.98 Trillion by 2034 appeared first on Data Center POST.

It’s Not An AI Bubble — We’re Witnessing the Next “Cloud” Revolution

19 January 2026 at 14:30

The cloud services of Alphabet, Amazon, and Microsoft (Google Cloud, AWS, and Azure, respectively) are considered the driving force behind all current business computing, data, and mobile services. But back in the mid-2000s, they weren’t immediately seen as smart bets on Wall Street. When Amazon launched AWS, analysts and investors were skeptical. They dismissed AWS as a distraction from Amazon’s core retail business. The Wall Street wizards did not understand the potential of cloud computing services. Many critics believed enterprises would never move their mission-critical workloads off-premises and into remote data centers.

As we all know, the naysayers were wrong, and cloud computing took off, redefining global business. It turbo-charged the economy, creating trillions in enterprise value while reducing IT costs, increasing application agility, and enabling new business models. In addition, the advent of cloud services lowered barriers to entry for startups and enabled rapid service scaling. Improving efficiency, collaboration, and innovation through scalable, pay-as-you-go access to computing resources was part of the formula for astounding success. The cloud pushed innovation to every corner of society, and those wise financiers misunderstood it. They could not see how this capital-intensive, long-horizon bet would ever pay off.

Now, we are at that moment again. This time with artificial intelligence.

Headlines appear every day saying that we’re in an “AI bubble.” But AI has gone beyond mere speculation as companies (hyperscalers) are in early-stage infrastructure buildout mode. Hyperscalers understand this momentum. They have seen this movie before with a different protagonist, and they know the story ends with transformation, not collapse. The need for transformative compute, power, and connectivity is the catalyst driving a new generation of data center buildouts. The applications, the productivity, and the tools are there. And unlike the early cloud era, sustainable AI-related revenue is already a predictable line item in quarterly earnings.

The Data

Consider these most recent quarterly earnings:

  • Microsoft Q3 2025: Revenue: $70.1B, up 13%. Net income: $25.8B, up 18%. Intelligent Cloud grew 21% led by Azure, with 16 points of growth from AI services.
  • Amazon Q3 2025: Revenue: $180.2B, up 13%. AWS grew 20% to $33B. Trainium2, its second-gen AI chip, is a multi-billion-dollar line. AWS added 3.8 GW of power capacity in 12 months due to high demand.
  • Alphabet (Google Parent) Q3 2025: Revenue: $102.35B, up 16%. Cloud revenue grew 33% to $15.2B. Operating income: up nearly 85%, backed by $155B cloud backlog.
  • Meta Q3 2025: Revenue: $51.2B, up 26%. Increased infrastructure spend focused on expanding AI compute capacity.

These are not the signs of a bubble. These are the signatures of a platform shift, and the companies leading it are already realizing returns while businesses weave AI into operations.

Bubble or Bottleneck?

However, let’s be clear about this analogy: AI is not simply the next chapter of the cloud. Instead, it builds on and accelerates the cloud’s original mission: making extraordinary computing capabilities accessible and scalable. While the cloud democratized computing, AI is now democratizing intelligence and autonomy. This evolution will transform how we work, secure systems, travel, heal, build, educate, and solve problems.

Just as there were cloud critics, we now have AI critics. They say that aggressive capital spending, rising energy demand, and grid strain are signs that the market is already overextended. The pundits are correct about the spending:

  • Alphabet (Google) Q3 2025: ~US $24B on infrastructure oriented toward AI/data centers.
  • Amazon (AWS) Q3 2025: ~US $34.2B, largely on infrastructure/AI-related efforts.
  • Meta Q3 2025: US $19.4B directed at servers/data centers/network infrastructure for AI.
  • Microsoft Q3 2025: Roughly US $34.9B, of which perhaps US $17-18B or more is directly AI/data-center infrastructure (based on “half” of capex).

However, the pundits’ underlying argument is predicated on the same misunderstandings seen in the run-up to the cloud era: it confuses infrastructure investment with excess spending. The challenge with AI is not too much capacity; it is not enough. Demand is already exceeding grid capacity, land availability, power transmission expansion, and specialized equipment supply.

Bubbles do not behave that way; they generate idle capacity. For example, consider the collapse of Global Crossing. The company created the first transcontinental internet backbone by laying 100,000 route-miles of undersea fiber linking 27 countries.

Unfortunately, Global Crossing did not survive the bursting of the dot-com bubble and filed for bankruptcy in 2002. However, Level 3 (acquired by CenturyLink in 2017, now Lumen Technologies) knew better than to listen to Wall Street and bought Global Crossing’s cables. Today, Lumen reports total 2024 revenue of $13.1 billion. Although the company doesn’t break out submarine cable revenue, it’s reasonable to infer that these cables still generate revenue in the low billions of dollars, a nice perpetual paycheck for not listening to the penny pinchers.

The AI economy is moving the value chain down the same path of sustainable profitability. But first, we must address factors such as data center proximity to grid strength, access to substation expansion, transformer supply, water access, cooling capacity, and land for modern power-intensive compute loads.

Power, Land, and the New Workforce

The cloud era prioritized fiber; the AI era is prioritizing power. Transmission corridors, utility partnerships, renewable integration, cooling systems, and purpose-built digital land strategies are essential for AI expansion. And with all that comes the “pick and shovel” jobs building data centers, which Wall Street does not factor into the AI economy. You need to look no further than Caterpillar’s Q3 2025 sales and revenue of $16.1 billion, up 10 percent.

Often overlooked in the tech hype are the industrial, real estate, and power grid requirements for data center builds, which require skilled workers such as electricians, steelworkers, construction crews, civil engineers, equipment manufacturers, utility operators, grid modernizers, and renewable developers. And once they’re up and running, data centers need cloud and network architects, cybersecurity analysts, and AI professionals.

As AI scales, it will lift industrial landowners, renewable power developers, utilities, semiconductor manufacturers, equipment suppliers, telecom networks, and thousands of local trades and service ecosystems, just as it’s lifting Caterpillar. It will accelerate infrastructure revitalization and strengthen rural and suburban economies. It will create new industries, just like the cloud did with Software as a Service (SaaS), e-commerce logistics, digital banking, streaming media, and remote-work platforms.

Conclusion

We’ve seen Wall Street mislabel some of the most significant tech expansions, from the telecom-hotel buildout of the 1990s to the co-location wave, global fiber expansion, hyperscale cloud, and now AI. As with all revolutionary ideas, skepticism tends to precede them, even though there is an inevitability to them. But stay focused: infrastructure comes before revenue, and revenue tends to arrive sooner than predicted, which brings home the point that AI is not inflating; it is expanding.

Smartphones reshaped consumer behavior within a decade; AI will reshape entire industries in less than half that time. This is not a bubble. It is an infrastructure super-cycle predicated on electricity, land, silicon, and ingenuity. Now is the time to act: those who build power-first digital infrastructure are not in the hype business; they’re laying the foundation for the next century of economic growth.

# # #

About the Author

Ryne Friedman is an Associate at hi-tequity, where he leverages his commercial real estate expertise to guide strategic site selection and location analysis for data center development. A U.S. Coast Guard veteran and licensed Florida real estate professional, he previously supported national brands such as Dairy Queen, Crunch Fitness, Jimmy John’s, and 7-Eleven with market research and site acquisition. His background spans roles at SLC Commercial, Lambert Commercial Real Estate, DSA Encore, and DataCenterAndColocation. Ryne studied Business Administration and Management at Central Connecticut State University.

The post It’s Not An AI Bubble — We’re Witnessing the Next “Cloud” Revolution appeared first on Data Center POST.

Nostrum Data Centers Taps JLL Expertise, Powering Spain’s Rise as a Connectivity Hub

12 January 2026 at 13:00

Spain’s digital infrastructure landscape is entering a pivotal new phase, and Nostrum Data Centers is positioning itself at the center of that transformation. By engaging global real estate and data center advisory firm JLL, Nostrum is accelerating the development of a next-generation, AI-ready data center platform designed to support Spain’s emergence as a major connectivity hub for Europe and beyond.

Building the Foundation for an AI-Driven Future

Nostrum Data Centers, the digital infrastructure division of Nostrum Group, is developing a portfolio of sustainable, high-performance data centers purpose-built for artificial intelligence, cloud computing, and high-density workloads. In December 2025, the company announced that its data center assets will be available in 2027, with land and power already secured across all sites, an increasingly rare advantage in today’s constrained infrastructure markets.

The platform includes 500 MW of secured IT capacity, with an additional 300 MW planned for future expansion, bringing total planned development to 800 MW across Spain. This scale positions Nostrum as one of the country’s most ambitious digital infrastructure developers at a time when demand for compute capacity is accelerating across Europe.

Strategic Locations, Connected by Design

Nostrum’s six data center developments are strategically distributed throughout Spain to capitalize on existing power availability, fiber routes, internet exchanges, and subsea connectivity. This geographic diversity allows customers to deploy capacity where it best supports latency-sensitive workloads, redundancy requirements, and long-term growth strategies.

Equally central to Nostrum’s approach is sustainability. Each facility is designed in alignment with the United Nations Sustainable Development Goals (SDGs), delivering industry-leading efficiency metrics, including a Power Usage Effectiveness (PUE) of 1.1 and a Water Usage Effectiveness (WUE) of zero, eliminating water consumption for cooling.

Why JLL? And Why Now?

To support this next phase of growth, Nostrum has engaged JLL to strengthen its go-to-market strategy and customer engagement efforts. JLL brings deep global experience in data center advisory, site selection, and market positioning, helping operators translate technical infrastructure into compelling value for hyperscalers, enterprises, and AI-driven tenants.

“Nostrum Data Centers has a long-term vision for balancing innovation and sustainability. We offer our customers speed to market and scalability throughout our various locations in Spain, all while leading a green revolution to ensure development is done the right way as we position Spain as the next connectivity hub,” says Gabriel Nebreda, Chief Executive Officer at Nostrum Group. “We are confident that our engagement with JLL will be able to help us bolster our efforts and achieve our long-term vision.”

From JLL’s perspective, Spain presents a unique convergence of advantages.

“Spain has a unique market position with its access to robust power infrastructure, its proximity to Points of Presence (PoPs), internet exchanges, subsea connectivity, and being one of the lowest total cost of ownership (TCO) markets,” says Jason Bell, JLL Senior Vice President of Data Center and Technology Services in North America. “JLL is excited to be working with Nostrum Data Centers, providing our expertise and guidance to support their quest to be a leading data center platform in Spain, as well as position Spain as the next connectivity hub in Europe and beyond.”

Advancing Spain’s Role in the Global Digital Economy

With JLL’s support, Nostrum Data Centers is further refining its strategy to meet the technical and operational demands of AI and high-density computing without compromising on efficiency or sustainability. The result is a platform designed not just to meet today’s requirements, but to anticipate what the next decade of digital infrastructure will demand.

As hyperscalers, AI developers, and global enterprises look for scalable, energy-efficient alternatives to traditional European hubs, Spain, and Nostrum Data Centers, are increasingly part of the conversation.

Learn more about Nostrum Data Centers: www.thenostrumgroup.com/nostrum-data-centres

Click here to read the press release!

The post Nostrum Data Centers Taps JLL Expertise, Powering Spain’s Rise as a Connectivity Hub appeared first on Data Center POST.

AI Is Moving to the Water’s Edge, and It Changes Everything

5 January 2026 at 15:00

A new development on the Jersey Shore is signaling a shift in how and where AI infrastructure will grow. A subsea cable landing station has announced plans for a data hall built specifically for AI, complete with liquid-cooled GPU clusters and an advertised PUE of 1.25. That number reflects a well-designed facility, but it highlights an emerging reality. PUE only tells us how much power reaches the IT load. It tells us nothing about how much work that power actually produces.

As more “AI-ready” landing stations come online, the industry is beginning to move beyond energy efficiency alone and toward compute productivity. The question is no longer just how much power a facility uses, but how much useful compute it generates per megawatt. That is the core of Power Compute Effectiveness, PCE. When high-density AI hardware is placed at the exact point where global traffic enters a continent, PCE becomes far more relevant than PUE.

To understand why this matters, it helps to look at the role subsea landing stations play. These are the locations where the massive internet cables from overseas come ashore. They carry banking records, streaming platforms, enterprise applications, gaming traffic, and government communications. Most people never notice them, yet they are the physical beginning of the global internet.

For years, large data centers moved inland, following cheaper land and more available power. But as AI shifts from training to real-time inference, location again influences performance. Some AI workloads benefit from sitting directly on the network path instead of hundreds of miles away. This is why placing AI hardware at a cable landing station is suddenly becoming not just possible, but strategic.

A familiar example is Netflix. When millions of viewers press Play, the platform makes moment-to-moment decisions about resolution, bitrate, and content delivery paths. These decisions happen faster and more accurately when the intelligence sits closer to the traffic itself. Moving that logic to the cable landing station reduces distance, delays, and potential bottlenecks. The result is a smoother user experience.

Governments have their own motivations. Many countries regulate which types of data can leave their borders. This concept, often called sovereignty, simply means that certain information must stay within the nation’s control. Placing AI infrastructure at the point where international traffic enters the country gives agencies the ability to analyze, enforce, and protect sensitive data without letting it cross a boundary.

This trend also exposes a challenge. High-density AI hardware produces far more heat than traditional servers. Most legacy facilities, especially multi-tenant carrier hotels in large cities, were never built to support liquid cooling, reinforced floors, or the weight of modern GPU racks. Purpose-built coastal sites are beginning to fill this gap.

And here is the real eye-opener. Two facilities can each draw 10 megawatts, yet one may produce twice the compute of the other. PUE will give both of them the same high efficiency score because it cannot see the difference in output. Their actual productivity, and even their revenue potential, could be worlds apart.

PCE and ROIP, Return on Invested Power, expose that difference immediately. PCE reveals how much compute is produced per watt, and ROIP shows the financial return on that power. These metrics are quickly becoming essential in AI-era build strategies, and investors and boards are beginning to incorporate them into their decision frameworks.
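
The distinction can be made concrete with a small sketch. The article gives no formal formulas, so the definitions, units, and figures below are illustrative assumptions, not the author’s metrics.

```python
# Illustrative-only sketch of PCE and ROIP; definitions, values, and units are assumed.

def pce(petaflops_delivered, megawatts_drawn):
    """Power Compute Effectiveness: useful compute delivered per MW of facility power."""
    return petaflops_delivered / megawatts_drawn

def roip(annual_revenue_usd, megawatts_drawn):
    """Return on Invested Power: revenue generated per MW of facility power."""
    return annual_revenue_usd / megawatts_drawn

# Two facilities, each drawing 10 MW, as in the example above:
sites = {"A (liquid-cooled GPU racks)": (400, 80_000_000),
         "B (older air-cooled gear)":   (200, 40_000_000)}

for name, (pf, rev) in sites.items():
    print(f"Site {name}: PCE = {pce(pf, 10):.0f} PF/MW, "
          f"ROIP = ${roip(rev, 10) / 1e6:.1f}M/MW")
```

Both sites would post an identical PUE at the meter, yet these output-based metrics immediately separate them.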

What is happening at these coastal sites is the early sign of a new class of data center. High density. Advanced cooling. Strategic placement at global entry points for digital traffic. Smaller footprints but far higher productivity per square foot.

The industry will increasingly judge facilities not by how much power they receive, but by how effectively they turn that power into intelligence. That shift is already underway, and the emergence of AI-ready landing stations is the clearest signal yet that compute productivity will guide the next generation of infrastructure.

# # #

About the Author

Paul Quigley is the former President and current Chief Strategic Partnership Officer of Airsys Cooling Technologies, and a global advocate for high density, energy efficient data center design. With more than three decades in HVAC and mission critical cooling, he focuses on practical solutions that connect energy stewardship with real world compute performance. Paul writes and speaks internationally about PCE, ROIP, and the future of data center health in the age of AI.

The post AI Is Moving to the Water’s Edge, and It Changes Everything appeared first on Data Center POST.

The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling

30 December 2025 at 15:00

As data centers evolve to support AI, edge computing, and high-density cloud architectures, the challenge is no longer just maintaining an optimal power usage effectiveness (PUE); it is achieving thermal reliability under unprecedented compute loads. Direct-to-chip liquid cooling continues to push the envelope on heat transfer performance, but one underestimated element in overall system reliability is the material composition of the fluid conveyance network itself. The hoses and fittings that transport coolant through these systems operate in extreme thermal and chemical environments, and their design directly influences uptime, maintenance intervals, and total cost of ownership.

Why Material Selection Matters

At its core, a liquid cooling system is only as reliable as its weakest component. If hoses, fittings, or seals fail due to poor material compatibility, the result could be leaks, contamination, or shortened system life, leading to downtime and costly remediation. Rubber and rubber-like materials are critical in hose assemblies, as they must balance flexibility for installation and serviceability with long-term resistance to temperature, pressure, permeation, coolant, and coolant additives.

The challenge lies in the fact that not all rubber or rubber-like materials are created equal. Each formulation is a complex recipe of polymers, plasticizers, fillers, and curing agents designed to deliver specific performance characteristics. The wrong selection can lead to issues such as fluid permeation, premature aging, or contamination of the coolant. In mission-critical environments like data centers, where even minor disruptions are unacceptable, this risk is magnified.

Temperature and Chemical Compatibility

Although data center cooling systems typically operate at temperatures between 45°C (113°F) and 65°C (149°F), sometimes reaching 100°C (212°F), those ranges can stress certain materials. Nitrile rubber, for example, performs well in oil-based environments but ages quickly in water-glycol systems, especially at higher temperatures. This can cause hardening, cracking, or coolant contamination.

By contrast, ethylene propylene diene monomer (EPDM) rubber has excellent resistance to water, glycols, and the additives commonly used in data center coolants, such as corrosion inhibitors and biocides. EPDM maintains its properties across the required operating range, making it a proven choice for direct-to-chip applications.

However, not all EPDM is the same. Developing the right EPDM for the application demands a deep understanding of polymer chemistry, filler interactions, and process control to precisely balance flexibility, heat resistance, and long-term stability.

Additionally, two curing processes, sulfur-cured and peroxide-cured, produce different performance outcomes. Sulfur-cured EPDM, while widely used, introduces zinc ions during the curing process. When exposed to deionized water, these ions can leach into the coolant, causing contamination and potentially degrading system performance. Peroxide-cured EPDM avoids this issue, offering higher temperature resistance, lower permeation rates, and greater chemical stability, making it the superior choice for modern liquid cooling.

Even among peroxide-cured EPDM compounds, long-term performance is not uniform. While the cure system defines the crosslink chemistry, other formulation choices, particularly filler selection and dispersion, can influence how the material performs over time.

The use of fillers and additives is common in rubber compounding. These ingredients are often selected to control cost, improve processability, or achieve certain performance characteristics such as flame resistance, strength, or flexibility.

The challenge is that some filler systems perform well during initial qualification but are not optimized for the long-term exposures faced in the operating environment. Certain fillers or processing aids can slowly migrate over time, introducing extractables into the coolant or subtly altering elastomer properties. For data center applications, EPDM compounds must therefore be engineered with a focus on long-term stability, reinforcing why EPDM should not be treated as a commodity material in critical cooling systems.

Risks of Non-Compatible Materials

Material incompatibility can have several cascading effects:

  • Contamination – Non-compatible materials can leach extractables into the coolant, leading to discoloration, chemical imbalance, and reduced thermal efficiency.
  • Permeation – Some rubbers allow fluid to slowly migrate through the hose walls, causing coolant loss or altering the fluid mixture over time.
  • Premature Failure – Elevated temperatures can accelerate aging, leading to cracking, swelling, or loss of mechanical strength.
  • Leakage – Rubber under compression may deform over time, jeopardizing seal integrity if not properly formulated for resistance to compression set and tear.

In a recent two-week aging test at 80°C using a water-glycol coolant, hoses made of nitrile and sulfur-cured EPDM showed visible discoloration of the coolant, indicating leaching and breakdown of the material. Peroxide-cured EPDM, on the other hand, maintained stability, demonstrating its compatibility and reliability in long-term data center applications.

The Gates Approach

Drawing on lessons from mission critical industries that have managed thermal challenges for decades, Gates engineers apply material science rigor to the design of liquid cooling hoses for data center applications.

Rather than relying solely on initial material ratings or short-term qualification criteria, Gates begins by tailoring compound design to the operating environment. This includes deliberate control of polymer selection, filler systems, and cure chemistry to manage long-term aging behavior, extractables, permeation, and retention of mechanical properties over time in high-purity coolant systems.

Compounds are validated through extended aging and immersion testing that reflects real operating conditions, including exposure to heat, deionized water, and water-glycol coolants. This allows potential material changes to be identified and addressed during development, before installation in the field.

This material-science-driven process is applied across Gates liquid cooling platforms, including the Data Master, Data Master MegaFlex, and newly released Data Master Eco product lines. By engineering for long-term stability rather than only initial compliance, Gates designs hose solutions intended to support reliable operation, predictable maintenance intervals, and extended service life in direct-to-chip liquid-cooled data center environments.

Looking Ahead

As data centers continue to scale, thermal management solutions must adapt in parallel. Advanced architectures, higher rack densities, and growing environmental regulations all point to a future where liquid cooling is standard. In this environment, material selection is no longer a secondary consideration; it is foundational to system reliability.

Operators who prioritize material compatibility in fluid conveyance lines will benefit from longer service intervals, improved coolant stability, and reduced risk of downtime. In other words, the proper rubber formulation doesn’t just move fluid; it moves the industry forward.

At Gates, sustainable, high-performance cooling begins with the details. By focusing on the science of materials, we help ensure that data center operators can confidently deploy liquid cooling systems designed for the challenges of today and the innovations of tomorrow.

# # #

About the Author

Chad Chapman is a Mechanical Engineer with over 20 years of experience in the fluid power industry. He currently serves as a Product Application Engineering Manager at Gates, where he leads a team that provides technical guidance, recommendations, and innovative solutions to customers utilizing Gates products and services.

Driven by a passion for problem-solving, Chad thrives on collaborating with customers to understand their unique challenges and deliver solutions that optimize performance. He is energized by learning about new applications and technologies, especially where insights can be shared across industries. At Gates, he has been exploring the emerging field of direct-to-chip liquid cooling, an exciting extension of his deep expertise in thermal management. The rapid advancements in IT technology and AI have made his journey an inspiring and rewarding learning experience.

The post The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling appeared first on Data Center POST.

Making Sense Out of VDI Chaos

22 December 2025 at 19:00

If you’re an IT executive in a mid-sized business planning your 2026-2027 budget, you’re facing continued pressure to dedicate more of it to AI-related investments. Businesses must now weigh AI spending and its ROI against budget allocations for virtual desktop infrastructure (VDI), digital transformation, and SaaS applications.

With more limited budgets, mid-sized businesses are in a constant struggle to prioritize spending correctly. In the case of VDI, budgeting has gotten more interesting as the market has undergone a major upheaval, with new brands, acquisitions, and some vendors trying to hold on to market share they gained pre-upheaval. As a result, mid-market businesses, somewhat unwillingly, have had to reassess their VDI-related investments and relationships, including their investment in the hardware and software needed to support their hybrid workforce.

VDI market changes have prompted mid-sized businesses to explore new options for their endpoint VDI deployments. They’re looking for improved economics, greater ability to customize, and freedom from legacy-style lock-in agreements.

Moving Past the Chaos

VDI remains a dominant force in enabling digital transformation and hybrid workforce productivity. The global VDI market is estimated to reach $78 billion by 2032, a compound annual growth rate (CAGR) of 22.1% from 2024. While the vendors and providers serving the VDI market may change, the need to deploy VDI will only increase as security concerns, remote work, and cloud computing continue to make virtual desktops a desired choice.
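As a sanity check on that projection, the implied base-year market size can be backed out of the $78 billion figure and the 22.1% CAGR (a quick sketch; the article doesn't state the 2024 value):

```python
# Back out the implied 2024 market size from the 2032 projection.
target_2032 = 78e9   # $78 billion, per the cited forecast
cagr = 0.221         # 22.1% compound annual growth rate
years = 8            # 2024 -> 2032

implied_2024 = target_2032 / (1 + cagr) ** years
# Works out to roughly $15-16 billion for the 2024 base year.
```

That base figure is consistent with the article's framing of VDI as a large, fast-growing market rather than a declining legacy category.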

The VDI industry can look a bit chaotic, but course correction was inevitable as long-term players face a different market, one in which businesses are looking for more flexibility and the ability to change relationships as their business and operational strategy evolves. This shift has opened the door to companies like Omnissa, which offers a menu of subscription term lengths starting at one year. Legacy, multi-year agreements are giving way to these more flexible options.

To move past VDI market changes, it’s best to focus first on what a business needs in endpoint investments over the next several years. Key considerations include:

  • New technology investments to improve workspace productivity and employee engagement.
  • Clarifying AI business strategy to determine what is needed in endpoint device support.
  • Updating anticipated hybrid workforce headcount to avoid purchasing shortfalls.
  • Evaluating needed endpoint security and compliance improvements.

Once this evaluation is done, a business can look at the landscape of VDI choices and fine-tune purchasing.

Where Endpoint Hardware Fits

Businesses’ changing approach to VDI and endpoint investment has spurred new interest in evaluating hardware options, notably thin clients and zero clients. Thin clients, in one form or another, have been in use for decades. However, the adoption of VDI and acceleration of remote work has made modern thin clients an essential element in endpoint computing. They offer time and money savings compared to legacy ‘fat’ PCs, with a smaller form factor. Thin clients display remote desktop sessions, while virtual machines (VMs) host the centralized compute operations. Since data is not stored locally, thin clients offer improved security when a hybrid workforce is accessing files and applications at different locations around the globe.

For mid-sized businesses, with few IT professionals already managing many tasks, a modern thin client offers centralized management of on-premises and off-premises endpoints, saving IT considerable time.

Zero clients connect solely and instantly to a remote desktop and reduce cyber threats even further, since they are a slimmed-down version of a thin client, often connecting to only a single platform. They are built around zero-trust principles and restrict users from saving data locally. When evaluating thin client and zero client choices, some key questions to ask are:

  • Are you supplying thin clients for primarily task workers, power users, or a combination of both? A task worker may only need an Intel Atom x5-E8000 Quad Core Processor, two display ports and four USB ports with an RJ45 connector. A knowledge worker or power user will likely need an Intel N100 Quad Core Processor, two HDMI connectors, 60Hz screen support, six USB ports and an RJ45 connector.
  • Will a thin client need to integrate with a number of VDI and application providers? A flexible thin client will be able to connect to Azure Virtual Desktop (AVD), Citrix, Omnissa and Windows 365 Cloud PC, among others, to satisfy the needs of different workers and use cases.
  • Does your business involve protecting highly sensitive data subject to stringent compliance regulations? If so, thin and zero clients with the features needed to comply with strict data protection protocols will be a necessity.
  • Do you have separate licensing agreements for endpoint management software and hardware? In many cases integration of licensing agreements can help save budgets and streamline management.
  • Are you looking to move to different subscription and payment models? Mid-sized businesses will find more competitive options in the market that offer flexible term agreements. Businesses also want to avoid being locked into pricier agreements due to vendor mergers, and to avoid ‘tag-on’ fees that can multiply when a vendor adds technology features. They will be critically evaluating options to avoid any unnecessary budget increases.
  • What level of technical support will your IT staff require, from initial installations to firmware updates? Providers vary in pricing for ongoing tech support and what’s covered in the purchasing agreement.

Creating the 2026 Strategy

Going into 2026, it is more of a buyer’s market as companies want to customize their VDI and related investments to better support overall business and endpoint computing goals. Flexible, finely curated agreements will win in the marketplace. To be most effective, a business will benefit from first examining 2026’s larger goals in workspace improvements, security and compliance, and technology investments. This analysis will help it more precisely evaluate thin client and zero client purchasing. The VDI market is still recovering from its chaotic period, but mid-sized businesses can avoid the chaos with well-thought-out strategies and informed decision making.

# # #

About the Author

Kevin Greenway joined 10ZiG in 2012 and became CTO in 2015. He leads the company’s overall technology and product strategy, collaborating with global teams to ensure continuous innovation in a fast-paced, disruptive market. Under his leadership, 10ZiG delivers modern, managed, and secure endpoints through a unified hardware and software approach.

A computer science graduate with numerous IT certifications, Kevin has more than 25 years of experience in the IT sector, including remote connectivity, terminal emulation, VoIP, unified communications, and VDI remoting protocols. Since joining 10ZiG, he has focused exclusively on VDI and End User Computing (EUC) and oversees strategic technology alliances with leading partners such as Citrix, Microsoft, and Omnissa.

Outside of work, Kevin is a devoted family man who enjoys spending time with his wife, two children, and their dog. He enjoys running, cycling and watching sports such as Motorsport & Football/Soccer, especially his son’s team and Leicester City FC.

The post Making Sense Out of VDI Chaos appeared first on Data Center POST.

Where Is AI Taking Data Centers?

10 December 2025 at 16:00

A Vision for the Next Era of Compute from Structure Research’s Jabez Tan

Framing the Future of AI Infrastructure

At the infra/STRUCTURE Summit 2025, held October 15–16 at the Wynn Las Vegas, Jabez Tan, Head of Research at Structure Research, opened the event with a forward-looking keynote titled “Where Is AI Taking Data Centers?” His presentation provided a data-driven perspective on how artificial intelligence (AI) is reshaping digital infrastructure, redefining scale, design, and economics across the global data center ecosystem.

Tan’s session served as both a retrospective on how far the industry has come and a roadmap for where it’s heading. With AI accelerating demand beyond traditional cloud models, his insights set the tone for two days of deep discussion among the sector’s leading operators, investors, and technology providers.

From the Edge to the Core – A Redefinition of Scale

Tan began by looking back just a few years to what he called “the 2022 era of edge obsession.” At that time, much of the industry believed the future of cloud would depend on thousands of small, distributed edge data centers. “We thought the next iteration of cloud would be hundreds of sites at the base of cell towers,” Tan recalled. “But that didn’t really happen.”

Instead, the reality has inverted. “The edge has become the new core,” he said. “Rather than hundreds of small facilities, we’re now building gigawatts of capacity in centralized regions where power and land are available.”

That pivot, Tan emphasized, is fundamentally tied to economics, where cost, energy, and accessibility converge. It reflects how hyperscalers and AI developers are chasing efficiency and scale over proximity, redefining where and how the industry grows.

The AI Acceleration – Demand Without Precedent

Tan then unpacked the explosive demand for compute since late 2022, when AI adoption began its steep ascent following the launch of ChatGPT. He described the industry’s trajectory as a “roller coaster” marked by alternating waves of panic and optimism—but one with undeniable momentum.

The numbers he shared were striking. NVIDIA’s GPU shipments, for instance, have skyrocketed: from 1.3 million H100 Hopper GPUs in 2024 to 3.6 million Blackwell GPUs sold in just the first three months of 2025, a nearly threefold increase in unit volume. “That translates to an increase from under one gigawatt of GPU-driven demand to over four gigawatts in a single year,” Tan noted.
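Tan's gigawatt figures can be roughly reproduced from the shipment numbers using assumed per-GPU power draws (the wattages below are common chip-level estimates, not numbers from the presentation):

```python
def fleet_power_gw(gpu_count: int, watts_per_gpu: float) -> float:
    """Aggregate electrical load of a GPU fleet, in gigawatts.
    watts_per_gpu is an ASSUMED chip-level draw; facility-level
    demand would be higher once cooling overhead is included."""
    return gpu_count * watts_per_gpu / 1e9

hopper_gw = fleet_power_gw(1_300_000, 700)       # H100-class, ~700 W each
blackwell_gw = fleet_power_gw(3_600_000, 1_200)  # Blackwell-class, ~1.2 kW each
```

Under these assumptions the Hopper fleet lands just under one gigawatt and the Blackwell volume lands above four, matching the "under one gigawatt... to over four gigawatts" framing in Tan's remarks.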

Tan linked this trend to a broader shift: “AI isn’t just consuming capacity, it’s generating revenue.” Large language model (LLM) providers like OpenAI, Anthropic, and xAI are now producing billions in annual income directly tied to compute access, signaling a business model where infrastructure equals monetization.

Measuring in Compute, Not Megawatts

One of the most notable insights from Tan’s session was his argument that power is no longer the most accurate measure of data center capacity. “Historically, we measured in square footage, then in megawatts,” he said. “But with AI, the true metric is compute, the amount of processing power per facility.”

This evolution is forcing analysts and operators alike to rethink capacity modeling and investment forecasting. Structure Research, Tan explained, is now tracking data centers by compute density, a more precise reflection of AI-era workloads. “The way we define market share and value creation will increasingly depend on how much compute each facility delivers,” he said.

From Training to Inference – The Next Compute Shift

Tan projected that as AI matures, the balance between training and inference workloads will shift dramatically. “Today, roughly 60% of demand is tied to training,” he explained. “Within five years, 80% will be inference.”

That shift will reshape infrastructure needs, pushing more compute toward distributed yet interconnected environments optimized for real-time processing. Tan described a future where inference happens continuously across global networks, increasing utilization, efficiency, and energy demands simultaneously.

The Coming Capacity Crunch

Perhaps the most sobering takeaway from Tan’s talk was his projection of a looming data center capacity shortfall. Based on Structure Research’s modeling, global AI-related demand could grow from 13 gigawatts in 2025 to more than 120 gigawatts by 2030, far outpacing current build rates.

“If development doesn’t accelerate, we could face a 100-gigawatt gap by the end of the decade,” Tan cautioned. He noted that 81% of capacity under development in the U.S. today comes from credible, established providers, but even that won’t be enough to meet demand. “The solution,” he said, “requires the entire ecosystem, utilities, regulators, financiers, and developers to work in sync.”
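A quick calculation shows how aggressive that trajectory is; growing from 13 GW to 120 GW in five years implies a compound growth rate north of 50% per year:

```python
# Structure Research's modeled AI-related demand, in gigawatts.
demand_2025 = 13.0
demand_2030 = 120.0
years = 5  # 2025 -> 2030

# Annual growth rate demand would sustain over the period.
required_cagr = (demand_2030 / demand_2025) ** (1 / years) - 1

# If supply stayed flat at 2025 levels, the shortfall approaches
# the ~100 GW gap Tan warned about.
shortfall_if_flat = demand_2030 - demand_2025
```

Sustaining roughly 56% annual growth in delivered capacity is the scale of the challenge Tan put to utilities, regulators, financiers, and developers.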

Fungibility, Flexibility, and the AI Architecture of the Future

Tan also emphasized that AI architecture must become fungible, able to handle both inference and training workloads interchangeably. He explained how hyperscalers are now demanding that facilities support variable cooling and compute configurations, often shifting between air and liquid systems based on real-time needs.

“This isn’t just about designing for GPUs,” he said. “It’s about designing for fluidity, so workloads can move and scale without constraint.”

Tan illustrated this with real-world examples of AI inference deployments requiring hundreds of cross-connects for data exchange and instant access to multiple cloud platforms. “Operators are realizing that connectivity, not just capacity, is the new value driver,” he said.

Agentic AI – A Telescope for the Mind

To close, Tan explored the concept of agentic AI, systems that not only process human inputs but act autonomously across interconnected platforms. He compared its potential to the invention of the telescope.

“When Galileo introduced the telescope, it challenged humanity’s view of its place in the universe,” Tan said. “Large language models are doing something similar for intelligence. They make us feel small today, but they also open an entirely new frontier for discovery.”

He concluded with a powerful metaphor: “If traditional technologies were tools humans used, AI is the first technology that uses tools itself. It’s a telescope for the mind.”

A Market Transformed by Compute

Tan’s session underscored that AI is redefining not only how data centers are built but also how they are measured, financed, and valued. The industry is entering an era where compute density is the new currency, where inference will dominate workloads, and where collaboration across the entire ecosystem is essential to keep pace with demand.

Infra/STRUCTURE 2026: Save the Date

Want to tune in live, receive all presentations, and gain access to C-level executives, investors, and industry-leading research? Then save the date for infra/STRUCTURE 2026, set for October 7-8, 2026 at The Wynn Las Vegas. Pre-Registration for the 2026 event is now open, and you can visit www.infrastructuresummit.io to learn more.

The post Where Is AI Taking Data Centers? appeared first on Data Center POST.

AI and the Next Frontier of Digital Infrastructure

8 December 2025 at 16:00

Insights from Structure Research, Applied Digital, PowerHouse Data Centers, and DataBank

A New Era of Infrastructure Growth

At the infra/STRUCTURE Summit 2025, held October 15–16 at the Wynn Las Vegas, the session titled “AI: The Next Frontier” brought together data center leaders to discuss how artificial intelligence is reshaping infrastructure demand, investment, and development strategy.

Moderated by Jabez Tan, Head of Research at Structure Research, the conversation featured Wes Cummins, CEO of Applied Digital; Luke Kipfer, Managing Director at PowerHouse Data Centers; and Raul Martynek, CEO of DataBank. Each offered unique perspectives on how their organizations are adapting to the acceleration of AI workloads and what that means for power, scale, and capital in the years ahead.

Industry Transformation – From Hyperscale to AI-Scale

Jabez Tan opened the discussion by reflecting on how quickly the market has shifted. Just one year ago, many were questioning the durability of AI-related infrastructure investments. Now, as Tan observed, “The speed of change has outpaced even the most optimistic expectations.”

Wes Cummins of Applied Digital illustrated this evolution through his company’s own transformation. “We started building Bitcoin data centers,” Cummins said. “We were never a miner, we just built the facilities. Then, when AI took off, we realized our designs could scale. We pivoted early, and when ChatGPT hit, the entire world changed.”

That pivot positioned Applied Digital to become a key player in the new era of high-performance computing (HPC) and GPU-intensive workloads, with facilities like its large-scale campus in North Dakota exemplifying how traditional models have been re-engineered for AI.

Building for Scale – Meeting the Demand Wave

Raul Martynek of DataBank and Luke Kipfer of PowerHouse Data Centers both emphasized how scale and speed have become the defining factors of success. “As an executive developer, you have to have the conviction to bring inventory to market,” Martynek said. “If you’re building in good markets and with the right customers, there’s enormous room for growth.”

Cummins agreed, stressing that the conversation has shifted beyond simply securing power. “We’re moving past the question of who has power,” Cummins said. “Now it’s about who can build at scale, deliver reliably, and operate efficiently. Construction timelines, supply chain access, and delivery speed are the new gating factors.”

The panelists noted that hyperscalers are no longer alone in this race. New AI-focused firms, GPU-as-a-service providers, and cloud entrants are competing for capacity at unprecedented levels, pushing the industry to think and build faster.

Site Strategy and Market Evolution – Staying Close to the Core

On the question of site selection, Martynek explained that traditional Tier 1 markets remain critical, though the boundaries are expanding. “Proximity to major availability zones is still a sound long-term strategy,” Martynek said. “We’re buying land in emerging submarkets of Virginia, for example, close enough to the core, but flexible enough to support scale.”

Kipfer added that hyperscalers’ preferences vary by workload type. “For AI and machine learning, some customers can be farther from peering points,” Kipfer said. “But for commercial cloud and enterprise use cases, Tier 1 and Tier 1-adjacent locations still offer the lowest risk and greatest performance.”

Together, their remarks reflected a balanced market dynamic, one where new geographies are gaining traction, but traditional hubs remain foundational to large-scale AI deployments.

Is This a Bubble? – Understanding the AI Surge

As AI investment accelerates, Tan posed the question many in the audience were thinking: Are we in another tech bubble?

Cummins was direct in his response. “I lived through the dot-com bubble,” he said. “This is different. The rate of adoption and real-world application is unlike anything we’ve ever seen.” He pointed out that ChatGPT reached a billion daily queries in just over two years—compared to Google’s eleven-year journey to the same milestone. “The computing power behind that is staggering,” he added.

Martynek agreed, noting that despite the hype, constraints in power, supply chain, and construction capacity make overbuilding virtually impossible. “It’s actually very hard to build too much right now,” he said. “The market is self-regulating through those bottlenecks.”

Capital Strategy and Long-Term Partnerships

A major theme throughout the discussion was the evolving capital stack supporting AI infrastructure. Martynek shared that DataBank has attracted strong investment from institutional partners seeking stable, long-term returns. “We’ve created investment-grade structures backed by 15-year commitments from top-tier customers,” Martynek said. “That gives our investors confidence and gives us visibility into future growth.”

Cummins added that Applied Digital’s focus is on securing long-term offtake agreements with the right clients, those building sustainable businesses rather than speculative projects. “These are 15-year-plus commitments from high-quality counterparties,” Cummins said. “That’s what allows us to build aggressively but responsibly.”

The panel agreed that long-term alignment between capital providers, developers, and customers will define the next phase of industry maturity.

The Future of AI Infrastructure – Speed, Scale, and Cooperation

Looking ahead, all three panelists emphasized the need for ongoing collaboration across the ecosystem. From developers to operators to hyperscalers, success will depend on shared innovation and operational agility.

Cummins summed up the moment: “The world isn’t going back. We’ve unlocked a new era of computing, and our challenge is to keep up with it. Speed is everything.”

Martynek added, “We’re not overbuilding, we’re underprepared. The companies that can execute with discipline and partnership will define the next decade of infrastructure growth.”

A Market Fueled by Real Demand

The discussion underscored that the AI-driven infrastructure boom is not speculative; it is structural. Adoption is accelerating faster than any previous technology wave, supply is constrained, and capital is flowing toward long-term, revenue-backed projects. The result is a market with strong fundamentals, focused execution, and unprecedented potential for innovation.

Infra/STRUCTURE 2026: Save the Date

Want to tune in live, receive all presentations, and gain access to C-level executives, investors, and industry-leading research? Then save the date for infra/STRUCTURE 2026, set for October 7-8, 2026 at The Wynn Las Vegas. Pre-Registration for the 2026 event is now open, and you can visit www.infrastructuresummit.io to learn more.

The post AI and the Next Frontier of Digital Infrastructure appeared first on Data Center POST.

Investment Perspectives: Navigating the Future of Digital Infrastructure

4 December 2025 at 16:00

Insights from RBC Capital Markets, Compass Datacenters, and TD Securities

Understanding the Investment Landscape in a New Era of AI

The infra/STRUCTURE Summit 2025, held October 15–16 at the Wynn Las Vegas, brought together the world’s leading voices in digital infrastructure to explore the industry’s rapid transformation. Among the standout sessions was Investment Perspectives, where experts discussed how artificial intelligence (AI), energy constraints, and capital strategy are reshaping investment decisions and the future of data center development.

Moderated by Jonathan Atkin, Managing Director at RBC Capital Markets, the panel featured Jonathan Schildkraut, Chief Investment Officer at Compass Datacenters, and Colby Synesael, Managing Director at TD Securities. Together, they provided clear insights into the trends influencing where, why, and how capital is being deployed in the infrastructure sector today.

The Shifting Demand Curve: How AI is Driving Data Center Growth

Jonathan Schildkraut opened the discussion by outlining the four primary workloads fueling infrastructure demand: AI training, AI inference, cloud, and social media. He described these workloads as the engines of growth for the sector, emphasizing that most are revenue-generating. “Three of those four buckets are cash registers,” Schildkraut said. “We’re really seeing those revenue-generating workloads accelerating.”

Colby Synesael added that the balance between AI training and inference is shifting quickly. “A year ago, roughly 75% of AI activity was training and 25% inference,” Synesael explained. “In five years, that ratio could reverse. A lot of inferencing will occur near where applications are used, which changes how we think about data center deployment.” Their remarks highlighted a clear message: AI continues to be the dominant force shaping infrastructure demand, but its evolution is redefining both scale and location.

Market Expansion and Power Constraints 

As Tier 1 data center markets face mounting limitations in available land and energy, both Schildkraut and Atkin noted the increasing strategic importance of Tier 2 and Tier 3 regions. Schildkraut cited examples such as Alabama, Georgia, and Texas, which are emerging as viable alternatives due to improved fiber connectivity and more favorable power economics.

Capital Strategy and Facility Adaptability: Investing for the Long Term

The conversation also delved into how investors are evaluating opportunities in an environment of high demand and rapid technological change. Schildkraut explained that access to capital today depends on two critical factors: tenant quality and facility adaptability. “Investors want to know that the tenant and the workload will be there for the long term,” Schildkraut said. “They also care deeply about whether the facility can evolve with future technologies.”

To illustrate this, Schildkraut described Compass Datacenters’ initiative to upgrade power densities, increasing capacity from 6–7 kilowatts per rack to hybrid systems capable of supporting up to 30 kilowatts. This investment is designed to ensure readiness for the next generation of high-performance computing and AI workloads. These types of forward-looking strategies are helping operators and investors manage both risk and opportunity in an increasingly complex market.
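To see what that density upgrade means in practice, here is a rough sketch of how many racks a megawatt of critical IT load supports at each density (illustrative arithmetic, not Compass figures):

```python
def racks_per_mw(kw_per_rack: float) -> float:
    """Racks supportable per megawatt of critical IT load,
    ignoring cooling and distribution overhead."""
    return 1000 / kw_per_rack

legacy_racks = racks_per_mw(6.5)  # mid-point of the 6-7 kW legacy range
dense_racks = racks_per_mw(30)    # upgraded hybrid-system density
```

The same megawatt that once spread across roughly 150 legacy racks concentrates into about 33 high-density racks, which is why adaptability of power and cooling distribution, not just total capacity, drives investor confidence in a facility.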

Globalization and Policy Influence 

When the conversation turned to global trends, Schildkraut predicted that AI infrastructure deployment will expand worldwide but at uneven rates. “Availability of power and land isn’t uniform,” he said. “Government incentives will play a critical role in determining which markets can scale.”

Synesael agreed, adding that regions lacking modern AI infrastructure could face growing disadvantages. “Over the next several years, not having this infrastructure in your country or region will become a major constraint on innovation,” Synesael said. Their perspectives reinforced that infrastructure development is no longer just a commercial priority; it is also a matter of national competitiveness.

A Market Redefined by Technology and Energy

The discussion revealed that the digital infrastructure market is entering a new phase defined by the convergence of AI-driven workloads, energy constraints, and strategic capital deployment. As inference workloads expand, Tier 2 and Tier 3 markets rise in importance, and investors prioritize long-term flexibility, the industry’s success will depend on adaptability and foresight. The session made it clear that data centers are no longer just real estate; they are foundational assets powering the next wave of global innovation.

Infra/STRUCTURE 2026: Save the Date

Want to tune in live, receive all presentations, and gain access to C-level executives, investors, and industry-leading research? Then save the date for infra/STRUCTURE 2026, set for October 7-8, 2026 at The Wynn Las Vegas. Pre-Registration for the 2026 event is now open, and you can visit www.infrastructuresummit.io to learn more.

The post Investment Perspectives: Navigating the Future of Digital Infrastructure appeared first on Data Center POST.

Beyond the Conference: PTC’s Commitment to Connection, Innovation, and Industry Empowerment with Brian Moon

25 November 2025 at 16:30

Episode 62 of the NEDAS Live! Podcast shines a spotlight on Brian Moon, CEO of Pacific Telecommunications Council (PTC), who joined host Ilissa Miller, CEO of iMiller Public Relations, for an in-depth conversation ahead of PTC’s 2026 Annual Conference. As PTC prepares for its 48th year connecting the digital infrastructure community, Moon shares how the organization is adapting to the age of AI, meeting evolving industry needs, empowering members, and fostering innovation.

Evolving Beyond Tradition: PTC’s Growth in the Age of AI

PTC has long been recognized for its January conference in Honolulu, a staple for global industry leaders from across wireline, wireless, subsea, satellite, and data center sectors. Brian Moon traces PTC’s evolution from its origins as a Pacific-focused membership meeting to its current role as a global convener, now at the convergence of AI, edge, and cloud innovation. “It isn’t siloed anymore. AI is interconnecting and converging all the other industries. Nothing works without each other now,” Moon notes. Recent conference sell-outs reflect the enthusiastic embrace of PTC’s refreshed programming and more diverse, tech-forward offerings.

Member-First Mentality and Year-Round Value

Recognizing that industry professionals want more than a once-a-year event, Moon highlights how PTC reinvests its not-for-profit proceeds to support members. From providing meeting spaces at major industry events to organizing exclusive luncheons and ongoing education programs, PTC prioritizes networking, knowledge-sharing, and tangible benefits. “We want to make sure our members see that their dues are going towards something meaningful,” Moon explains. The upcoming conference’s robust member benefits, accessible pricing, and expanded activities demonstrate a commitment to value and inclusion.

Leadership, Talent, and Next-Gen Empowerment

A major theme this year is leadership, embodied by the debut of the Alaka‘i Stage (meaning “to lead” or “to guide” in Hawaiian), which reimagines thought leadership sessions to foster deeper connections between attendees and top executives. PTC is also addressing industry succession with two leadership development initiatives: the Academy Master Class for mid-career professionals and the Top Talent Leadership program in partnership with Columbia Business School. “These are just a few ways that we’re contributing back to the industry,” explains Moon.

Inclusion Initiatives: Laulima and Industry Diversity

PTC’s new Week of Laulima, Hawaiian for “many hands coming together”, puts a spotlight on women in critical infrastructure. Featuring tracks and safe spaces for networking, coaching, and peer celebration, this program is helping drive strong female representation and engagement at the annual event. “We want all participants to feel they belong and can thrive here,” Moon says, as surging engagement in industry group chats and programming shows the impact.

Looking Ahead: Convening, Educating, and Innovating

As the intersection of AI, data centers, and connectivity accelerates, Moon underscores PTC’s dual role as convener and educator, providing factual context when public perceptions of the digital infrastructure sector are at stake, including environmental and community impacts. The organization aims to support industry growth and keep its members ahead of the curve, whether through connection, education, or advocacy.

With the PTC Annual Conference on the horizon, the organization continues to shape the global conversation, bringing together the leaders, innovators, and future talent driving the digital economy forward.

The PTC’26 event takes place in Honolulu at the Hilton Hawaiian Village from Sunday, January 18 through Wednesday, January 21, 2026. The invite-only members’ soiree kicks off the festivities on Saturday, January 17, 2026.

For more information about the event, membership and to register for a pass, visit ptc.org.

To continue the conversation, listen to the full podcast episode here.

The post Beyond the Conference: PTC’s Commitment to Connection, Innovation, and Industry Empowerment with Brian Moon appeared first on Data Center POST.

AI Data Centers Are Ready to Explode, If the Grid Can Keep Up

24 November 2025 at 16:00

Having spent most of my career at the nexus of power generation and industrial infrastructure, I can safely say that few things have stressed the American electric grid quite like the explosive growth in AI-driven data centers. At Industrial Info Resources, we are currently tracking more than $2.7 trillion in data center projects worldwide, including more than $1 trillion in new US investment in just nine short months.

It is not only technology that faces skyrocketing demand; it is electricity, too. With its voracious appetite for power, artificial intelligence is making plain just how unprepared the aging US grid is for the next major step in technological evolution.

AI’s Appetite for Power

The amount of computational power AI requires is astonishing. More than 700 million new users have gone online in the past year alone, and according to estimates by OpenAI, global compute demand could soon require a gigawatt of new capacity every week. That is roughly one big power station every seven days.

We are already seeing the ramifications in our project data at IIR Energy. A large number of the biggest hyperscale projects are reaching major capacity bottlenecks: utilities in some areas are telling data center operators they won’t be able to provide additional megawatts until as late as 2032. A few years ago, that kind of delay was unthinkable.

Limits like these are forcing developers to think outside the box when choosing data center construction sites. No longer concentrating on central metro areas, they are gravitating toward sites near transmission interconnections, wind or solar parks, or existing industrial areas already served by substations.

The New York Independent System Operator’s Comprehensive Reliability Plan, or CRP, predicts impending power shortages across the state. It identifies three key challenges that are occurring at once: an older generation fleet, fast-rising loads from data centers and chip plants, and new hurdles to building supply. It’s a confluence of threats that are straining reliability planning to its limits.

An Outdated Grid Meets a $40 Trillion Market

With electricity demand stagnant in recent years, improvements to the country’s power grid have not been prioritized. The recent rebound in load is meeting a grid that is already congestion-prone and aging. Some regions face record-breaking congestion pricing and curtailment. Last week, PJM (the largest regional electricity transmission organization in the United States) saw wholesale capacity auction prices jump roughly 800%.

This serves as a powerful reminder that while the digital economy proceeds at light speed, physical infrastructure doesn’t. Transmission upgrades require years to approve and construct, and generation projects may be held back by supply chains or local policy barriers. AI’s future, as grand as it is, now hinges on how fast we will upgrade physical systems that enable it.

Behind the Meter: The New Energy Strategy

Confronted with delayed delivery schedules and lengthy interconnection queues, data center builders are taking control themselves. Increasingly, they are making investments in “behind-the-meter” options to guarantee access to the power they require. They are considering natural gas turbines, high-end fuel cells, as well as extended renewable contracts that come with a direct path to generation independent of having to wait for upgrades from utilities. Technologies for liquid cooling are helping data center operators decrease freshwater consumption as they improve efficiency.

Data centers are no longer simple consumers of power. Increasingly, they are becoming power collaborators, in some instances, power generators. Utilities are adapting by teaming with developers to co-develop generation assets or reassessing baseload integrity. Next-generation designs are on track to reach a megawatt or more per rack by 2029.

Why Reliable Intelligence Matters

In a market changing this rapidly, it’s crucial to have reliable information. And that’s where IIR Energy offers a distinct edge. We follow projects from initial planning to evaluation and refinement, tracking every milestone and closely watching the power fundamentals that influence success.

This transparency allows utilities, investors, and developers to discern actual development from rumors. For example, whereas some reports indicate that big builds for data centers are decreasing, our intelligence indicates just the opposite. The buildout continues to accelerate and spread, transitioning to different areas and different forms of power delivery.

Reliable, corroborated information allows decision-makers to know exactly where expansion is occurring as well as the limitations that will hinder it. This is the basis of business at IIR Energy. We offer insight capable of piercing the din to predict how AI, energy, and infrastructure will continue to develop side by side.

All of this reminds us of a simple yet powerful reality: the AI power race will not just be about smarter algorithms. We’ll need smarter infrastructure to match.

# # #

About the Author

Britt Burt is the Vice President of Power Industry Research at IIR Energy, bringing nearly 40 years of expertise across the power, energy, and data center sectors. He leads IIR’s power research team, overseeing the identification and verification of data on operational and proposed power plants worldwide. Known for his deep industry insight, Britt plays a key role in keeping global energy intelligence accurate and up to date.

The post AI Data Centers Are Ready to Explode, If the Grid Can Keep Up appeared first on Data Center POST.

The Speed of Burn

17 November 2025 at 16:00

It takes the Earth hundreds of millions of years to create usable energy.

It takes us milliseconds to burn it.

That imbalance between nature’s patience and our speed has quietly become one of the defining forces of our time.

All the power that moves our civilization began as light. Every joule traces back to the Big Bang, carried forward by the sun, stored in plants, pressed into fuels, and now released again as electricity. The current that runs through a data center today began its journey billions of years ago…ancient energy returning to motion through modern machines.

And what do we do with it? We turn it into data.

Data has become the fastest-growing form of energy use in human history. We are creating it faster than we can process, understand, or store it. The speed of data now rivals the speed of light itself, and it far exceeds our ability to assign meaning to it.

The result is a civilization burning geological time to produce digital noise.

The Asymmetry of Time

A hyperscale data center can take three to five years to design, permit, and build. The GPUs inside it process information in trillionths of a second. That mismatch, years to construct and microseconds to consume, defines the modern paradox of progress. We are building slower than we burn.

Energy creation is slow. Data consumption is instantaneous. And between those two speeds lies a widening moral and physical gap.

When we run a model, render an image, or stream a video, we aren’t just using electricity. We’re releasing sunlight that’s been waiting since the dawn of life to be freed. The electrons are real, finite, and irreplaceable in any human timeframe — yet we treat data as limitless because its cost is invisible.

Less than two percent of all new data is retained after a year. Ninety-eight percent disappears — deleted, overwritten, or simply forgotten. Still, we build ever-larger servers to hold it. We cool them, power them, and replicate them endlessly. It’s as if we’ve confused movement with meaning.

The Age of the Cat-Video Factory

We’ve built cat-video factories on the same grid that could power breakthroughs in medicine, energy, and climate.

There’s nothing wrong with joy or humor. Those things are a beautiful part of being human. But we’ve industrialized the trivial. We’re spending ancient energy to create data that doesn’t last the length of a memory. The cost isn’t measured in dollars; it’s measured in sunlight.

Every byte carries a birth certificate of energy. It may have traveled billions of years to arrive in your device, only to vanish in seconds. We are burning time itself — and we’re getting faster at it every year.

When Compute Outruns Creation

AI’s rise has made this imbalance impossible to ignore. A one-gigawatt data campus, drawing as much power as was once allocated to an entire national power plant, can now belong to a single company. Each facility may cost tens of billions of dollars and consume electricity on par with small nations. We’ve reached a world where the scarcity of electrons defines the frontier of innovation.

It’s no longer the code that limits us; it’s the current.

The technology sector celebrates speed: faster training, faster inference, faster deployment. But nature doesn’t share that sense of urgency. Energy obeys the laws of thermodynamics, not the ambitions of quarterly growth. What took the universe nearly 14 billion years to refine (the conversion of matter into usable light) we now exhaust at a pace that makes geological patience seem quaint.

This isn’t an argument against technology. It’s a reminder that progress without proportion becomes entropy. Efficiency without stewardship turns intelligence into heat.

The Stewardship of Light

There’s a better lens for understanding this moment. One that blends physics with purpose.

If all usable power began in the Big Bang and continues as sunlight, then every act of computation is a continuation of that ancient light’s journey. To waste data is to interrupt that journey; to use it well is to extend it. Stewardship, then, isn’t just environmental — it’s existential.

In finance, CFOs use Return on Invested Power (ROIP) to judge whether the energy they buy translates into profitable compute and operational output. But there’s a deeper layer worth considering: a moral ROIP. Beyond the dollars, what kind of intelligence are we generating from the power we consume? Are we creating breakthroughs in medicine, energy, and climate, or simply building larger cat-video factories?

Both forms of ROIP matter. One measures financial return on electrons; the other measures human return on enlightenment. Together, they remind us that every watt carries two ledgers: one economic, one ethical.
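The financial side of ROIP can be made concrete with a back-of-the-envelope calculation. The formula below is an illustrative assumption, not a standardized industry definition; any real CFO would fold in depreciation, PUE, and contract terms.

```python
def roip(compute_revenue_usd: float, energy_cost_usd: float) -> float:
    """Illustrative Return on Invested Power: dollars of compute value
    generated per dollar of energy purchased.

    This simple ratio is a hypothetical stand-in for however a given
    organization defines ROIP; it is not a standardized metric.
    """
    if energy_cost_usd <= 0:
        raise ValueError("energy cost must be positive")
    return compute_revenue_usd / energy_cost_usd

# A facility spending $2M on power to produce $5M of billable compute:
print(roip(5_000_000, 2_000_000))  # 2.5 dollars returned per dollar of power
```

The moral ledger the essay describes has no such formula, which is rather the point: only the economic half of the equation fits in a spreadsheet.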

We can’t slow AI’s acceleration. But we can bring its metabolism back into proportion. That begins with awareness… the humility to see that our data has ancestry, that our machines are burning the oldest relics of the cosmos. Once you see that, every click, every model, every watt takes on new weight.

The Pause Before Progress

Perhaps our next revolution isn’t speed at all. Perhaps it’s stillness, the mere ability to pause and ask whether the next byte we create honors the journey of the photons that power it.

The call isn’t to stop. It’s to think proportionally.

To remember that while energy cannot be created or destroyed, meaning can.

And that the true measure of progress may not be how much faster we can turn power into data, but how much more wisely we can turn data into light again.

Sunlight is the power. Data is the shadow.

The question is whether our shadows are getting longer… or wiser.

# # #

About the Author

Paul Quigley is President of Airsys Cooling Technologies. He writes about the intersection of power, data, and stewardship. Airsys focuses on groundbreaking technology with a conscience.

The post The Speed of Burn appeared first on Data Center POST.

Ensuring Equipment Safety and Reliability in Data Centers

13 November 2025 at 15:00

What keeps data center operators up at night? Among other things, worries about the safety and reliability of their assets. Staying competitive, maintaining 24/7 uptime, and meeting customer demand can all seem like overwhelming tasks – especially while operating on a lean budget.

The good news is that safety and reliability are very compatible goals, especially in the data center. An efficient, proactive maintenance strategy will deliver both greater reliability and increased safety, so that your data center can support ever-growing demand while maintaining the trust of its customers.

In this article, I’ll talk about the best practices for maintenance teams tasked with increasing safety and uptime. I’ll explain how choosing the right tools can help your data center thrive and scale, without increasing costs.

Baking In Safety and Efficiency 

Solid maintenance practices start at the commissioning stage.

There’s no getting around the fact that a data center build is labor-intensive and demanding. Every single connection, electrical point, and fiber optic cable needs to be tested and verified. If you’re not careful, the commissioning stage has enormous potential for error and wasted resources, especially in a hyperscale location. Here’s how to solve that problem.

Choose Your Tools Wisely

It’s important to use the right tools and build efficiencies into the commissioning stage. Think of this stage as an opportunity to design a process that makes sense for your crew and your resources.

If you’re working with a lean maintenance crew, make sure to use tools that are purpose-built for ease of use, so that everyone on your team can achieve high-quality results right away. Look for cable testers, Optical Time Domain Reflectometers, and Optical Loss Test Sets that are designed with intuitive interfaces and settings.

Select tools that comply with, or exceed, industry standards for accuracy. Precision results will make a huge difference when it comes to the long-term lifespan of your assets. Getting accurate readings the first time also eliminates the need for re-work.

Opt for Safety and Efficiency

As always, safety and efficiency go hand in hand. When you’re building a large or hyperscale data center, small gains in efficiency add up quickly. If your tools allow you to test each connection point just a few seconds more quickly, you’ll see significant savings by the end of the data center construction.

Once the commissioning stage is complete, it’s a question of consolidating your efficiency gains, and finding new ways to keep your data center resilient without raising costs. Let’s see what that looks like.

Using Non-Contact Tools for Safety and Efficiency

Once your data center is fully built, I recommend implementing non-contact tools as far as possible. Done right, this will drastically improve your uptime and performance, while reducing overall costs.

What does non-contact look like? For some equipment, like the pumps and motors that support your cooling equipment, wireless sensors can monitor asset health in real time, tracking vibration levels and temperature.
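As a simple illustration of how such sensor streams might be screened, the sketch below flags readings that exceed fixed limits. The thresholds and data layout are hypothetical, not drawn from any specific product; real alarm limits come from standards such as ISO 10816 vibration severity zones and the asset's own baseline.

```python
from dataclasses import dataclass

# Hypothetical alert thresholds for a cooling pump or motor.
VIBRATION_LIMIT_MM_S = 7.1   # overall velocity, mm/s RMS
TEMP_LIMIT_C = 85.0          # bearing surface temperature

@dataclass
class Reading:
    asset_id: str
    vibration_mm_s: float
    temperature_c: float

def check_reading(r: Reading) -> list[str]:
    """Return an alert string for each exceeded threshold."""
    alerts = []
    if r.vibration_mm_s > VIBRATION_LIMIT_MM_S:
        alerts.append(f"{r.asset_id}: vibration {r.vibration_mm_s} mm/s exceeds limit")
    if r.temperature_c > TEMP_LIMIT_C:
        alerts.append(f"{r.asset_id}: temperature {r.temperature_c} C exceeds limit")
    return alerts

# Example: a cooling pump running hot but vibrating normally.
print(check_reading(Reading("pump-07", 3.2, 91.0)))
```

In practice the monitoring platform, not a hand-rolled script, applies these rules, but the principle is the same: continuous readings compared against known-good limits, with no one crawling behind the machinery.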

Using Digital and AI Tools

Tools like a CMMS, or an AI-powered diagnostic engine, sift through asset health data to pinpoint early indications of an emerging fault. Today’s AI tools are trained on billions of data points and can recognize faults in assets and component parts. They can even determine the fault severity level and issue detailed reports on the health of every critical asset in the facility.

Once a fault is identified, the CMMS creates a work order and a technician examines the asset, making repairs as needed. For lean maintenance crews, digital tools free up valuable time and labor, so that experienced technicians can focus on carrying out repairs instead of reading machine tests or generating work orders.
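A minimal sketch of that hand-off follows, with hypothetical severity levels and priority names; real CMMS platforms expose this through their own APIs and their own severity taxonomies.

```python
# Map a diagnostic engine's fault severity to a work-order priority.
# Both vocabularies here are illustrative assumptions.
SEVERITY_TO_PRIORITY = {
    "slight": "monitor",
    "moderate": "schedule",
    "serious": "urgent",
    "extreme": "immediate",
}

def create_work_order(asset_id: str, fault: str, severity: str) -> dict:
    """Build a work-order record a CMMS could ingest."""
    return {
        "asset": asset_id,
        "fault": fault,
        "priority": SEVERITY_TO_PRIORITY.get(severity, "schedule"),
        "action": f"Inspect {asset_id} for {fault} and repair as needed",
    }

wo = create_work_order("chiller-pump-03", "bearing wear", "serious")
print(wo["priority"])  # urgent
```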

The bottom line: real-time wireless monitoring keeps your technicians safe, eliminating the need for route-based testing with a handheld device. No more sending workers to squeeze into tight spaces or behind machinery just to get a measurement. By extension, no more risk of human error or inaccurate readings. Digital tools don’t make careless mistakes, no matter how often they perform the same task.

Of course, wireless monitoring isn’t the only non-contact approach out there.

Bringing in the bots

It’s now increasingly common to send robots into the data center to perform basic tests. This accomplishes the crucial function of keeping people out of the data center, where they could potentially hurt themselves or damage something.

I often see robots used to perform thermal imaging tests. Thermal imaging is a key element in many maintenance processes, especially in the data center. It’s the best means of pinpointing electrical faults, wiring issues, faulty connections, and other early indicators of major issues.

Using a robot to conduct the testing (or a mounted, non-contact thermal imager) allows you to monitor frequently, for accurate and precise results. This also protects your team from potential dangers like arc flashes and electrical shocks.

Opening the (infrared) window

Infrared windows, installed directly into power cabinets, make power quality monitoring both safer and more efficient. This is by far the safest approach for operators and technicians. It also guarantees readings will be taken regularly and speeds up the measurement process, by eliminating the time-consuming permitting step. The more frequently your team takes readings, the more effectively they can identify emerging issues and get ahead of the serious faults that could impact your assets and your whole facility.

Successful scaling through automation

Standardizing and automating workflows can enable fast, effective scaling. These processes also extend the reach of lean maintenance teams, so that managers can oversee larger facilities while still delivering high performance.

Automated monitoring and testing – with wireless tools, robots, and non-contact technology – deliver data in near real time. When you pair this with AI or with data analytics software, you’ll be able to identify emerging asset faults long before they become serious enough to cause downtime. This predictive technology enables far greater uptime and productivity, while also extending the lifespan of your assets.
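One common pattern behind such predictions is simple trend extrapolation: fit a line to recent readings and project when the trend will cross an alarm limit. The sketch below assumes equally spaced (for example, daily) vibration readings and a hypothetical limit; production tools use far more sophisticated models, but the idea is the same.

```python
def days_until_limit(readings: list[float], limit: float):
    """Fit a least-squares line to equally spaced readings and estimate
    how many steps remain until the trend crosses `limit`.

    Returns None if the trend is flat or improving.
    """
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    # Ordinary least-squares slope.
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    if slope <= 0:
        return None
    # Fitted value at the most recent reading.
    current = mean_y + slope * (n - 1 - mean_x)
    return (limit - current) / slope

# Vibration creeping up 0.5 mm/s per day from 3.0, with a 7.1 mm/s limit:
print(days_until_limit([3.0, 3.5, 4.0, 4.5, 5.0], 7.1))  # roughly 4.2 days out
```

Catching the crossing days or weeks in advance is what turns a 2 a.m. failure into a scheduled work order.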

Automated AI diagnostic tools, condition monitoring, and robotic testing all enable data centers to scale and to continue to deliver the speed and performance that today’s digitalized economy relies on.

# # #

About the Author

Mike Slevin is a General Manager (Networks, Routine Maintenance & Process Instrument) at Fluke, a company known worldwide for its electronic test and measurement tools. Mike works with data centers and industrial clients to improve energy efficiency, safety, and reliability through better monitoring and maintenance practices.

The post Ensuring Equipment Safety and Reliability in Data Centers appeared first on Data Center POST.
