Three-quarters of AI Subscribers in the US Want Generative AI Included With Their Mobile Contract

22 January 2026 at 14:00

Originally posted on TelecomNewsroom.

Telcos are the missing link in AI adoption, say paying AI subscribers

Nearly three-quarters (74%) of US consumers who pay for generative AI services want those tools included directly with their mobile phone plan, according to new research from subscription bundling platform Bango.

The survey of 1,400 ChatGPT subscribers in the US also reveals that demand for AI-inclusive telco bundles extends beyond mobile. A further 72% of AI subscribers want AI included as part of their home broadband or TV package, while more than three-quarters (77%) want generative AI tools paired with streaming services such as Netflix or Spotify, offering a bundling opportunity for telcos.

The findings signal a major opportunity for telcos to become the primary distributors of AI services. AI subscribers already spend over $65 per month on these tools, representing a high-value audience for telcos.

To read the full press release, please click here.

The post Three-quarters of AI Subscribers in the US Want Generative AI Included With Their Mobile Contract appeared first on Data Center POST.

AI Data Center Market to Surpass USD 1.98 Trillion by 2034

21 January 2026 at 15:00

The global AI data center market was valued at USD 98.2 billion in 2024 and is estimated to grow at a CAGR of 35.5% to reach USD 1.98 trillion by 2034, according to a recent report by Global Market Insights Inc.
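For readers who want to sanity-check the headline figure, here is a minimal Python sketch of the compound-growth arithmetic. It simply applies the quoted 35.5% CAGR to the quoted 2024 base; the small difference versus USD 1.98 trillion comes from rounding in the published growth rate.

```python
# Back-of-the-envelope check of the projection above (illustrative only;
# the report's own model and rounding will differ).
base_2024_usd_bn = 98.2   # reported 2024 market size, USD billions
cagr = 0.355              # reported compound annual growth rate
years = 2034 - 2024

projected_2034_usd_bn = base_2024_usd_bn * (1 + cagr) ** years
print(f"Projected 2034 market size: ~USD {projected_2034_usd_bn / 1000:.2f} trillion")
# Prints roughly USD 2.05 trillion, consistent with the ~USD 1.98 trillion
# figure once rounding of the published CAGR is taken into account.
```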

Growing adoption of generative AI and machine learning tools requires extraordinary processing power and storage capabilities, increasing reliance on data centers specifically optimized for AI workloads. These environments depend on advanced GPUs, scalable system architecture, and ultra-low-latency networking to support complex model training and inference across industries such as finance, healthcare, and retail. Big data analytics is also accelerating demand, as organizations handle massive streams of structured and unstructured information that must be processed rapidly.

AI-focused facilities enable high-performance computing for real-time workloads, strengthening their role as essential infrastructure for global digital transformation. The rapid expansion of cloud computing, along with the rising number of hyperscale facilities, continues to amplify the need for AI-ready infrastructures. Providers are investing in advanced AI data platforms that offer scalable services to enterprises and developers, further increasing market momentum.

The hardware segment of the AI data center market accounted for USD 61.1 billion in 2024. Growth is driven by expanding use of AI chips, GPU accelerators, advanced cooling technologies, high-density server systems, and optical networking solutions. Rising GPU energy requirements, the shift toward rack densities of 30–120 kW, and large-scale deployment strategies introduced by leading technology companies are shaping long-term capital allocation in the sector.

The cloud-based category held a 58% share in 2024 and is projected to grow at a CAGR of 35.2% from 2025 through 2034. This segment leads due to its unmatched scalability, flexible consumption options, and access to the latest AI-accelerated computing hardware without upfront investment. Hyperscale providers are making multi-billion-dollar commitments to strengthen global AI infrastructures, propelling adoption of AI-driven services and increasing demand for GPUs, TPUs, and specialized processors.

The U.S. AI data center market generated USD 33.2 billion in 2024. The country maintains a leading position supported by prominent hyperscale operators and substantial investments in GPU clusters, liquid cooling, and large-scale AI-aligned builds. Federal incentives, regional tax advantages, and infrastructure funding have further solidified the United States as the most capacity-rich region for AI computing.

Key participants in the AI data center market include Huawei, AWS, NVIDIA, HPE, Digital Realty, Google, Lenovo, Microsoft, Equinix, and Dell Technologies. Companies expanding their foothold in the market are focusing on infrastructure modernization, large-scale GPU deployments, and energy-efficient system design.

Many firms are investing in high-density racks, integrated liquid cooling, and next-generation networking to support advanced AI workloads. Strategic partnerships with chipmakers, cloud providers, and colocation operators help accelerate capacity expansion and ensure access to cutting-edge AI hardware. Providers are also scaling global data center footprints, enhancing automation capabilities, and optimizing power utilization through renewable-energy integration. Long-term contracts with enterprises, AI-as-a-service offerings, and the buildout of specialized AI clusters further reinforce competitive positioning and market dominance.

The post AI Data Center Market to Surpass USD 1.98 Trillion by 2034 appeared first on Data Center POST.

It’s Not An AI Bubble — We’re Witnessing the Next “Cloud” Revolution

19 January 2026 at 14:30

Alphabet, Amazon, and Microsoft: these tech giants’ cloud services (Google Cloud, AWS, and Azure, respectively) are considered the driving force behind all current business computing, data, and mobile services. But back in the mid-2000s, they weren’t immediately seen as best bets on Wall Street. When Amazon launched AWS, analysts and investors were skeptical, dismissing it as a distraction from Amazon’s core retail business. The Wall Street wizards did not understand the potential of cloud computing services. Many critics believed enterprises would never move their mission-critical workloads off-premises and into remote data centers.

As we all know, the naysayers were wrong, and cloud computing took off, redefining global business. It turbo-charged the economy, creating trillions in enterprise value while reducing IT costs, increasing application agility, and enabling new business models. In addition, the advent of cloud services lowered barriers to entry for startups and enabled rapid service scaling. Improving efficiency, collaboration, and innovation through scalable, pay-as-you-go access to computing resources was part of the formula for astounding success. The cloud pushed innovation to every corner of society, and those wise financiers misunderstood it. They could not see how this capital-intensive, long-horizon bet would ever pay off.

Now, we are at that moment again. This time with artificial intelligence.

Headlines appear every day saying that we’re in an “AI bubble.” But AI has gone beyond mere speculation: the hyperscalers are in early-stage infrastructure buildout mode. They understand this momentum. They have seen this movie before with a different protagonist, and they know the story ends with transformation, not collapse. The need for transformative compute, power, and connectivity is the catalyst driving a new generation of data center buildouts. The applications, the productivity, and the tools are there. And unlike the early cloud era, sustainable AI-related revenue is a predictable balance sheet line item.

The Data

Consider these most recent quarterly earnings:

  • Microsoft Q3 2025: Revenue: $70.1B, up 13%. Net income: $25.8B, up 18%. Intelligent Cloud grew 21% led by Azure, with 16 points of growth from AI services.
  • Amazon Q3 2025: Revenue: $180.2B, up 13%. AWS grew 20% to $33B. Trainium2, its second-gen AI chip, is a multi-billion-dollar line. AWS added 3.8 GW of power capacity in 12 months due to high demand.
  • Alphabet (Google Parent) Q3 2025: Revenue: $102.35B, up 16%. Cloud revenue grew 33% to $15.2B. Operating income: up nearly 85%, backed by $155B cloud backlog.
  • Meta Q3 2025: Revenue: $51.2B, up 26%. Increased infrastructure spend focused on expanding AI compute capacity. (4)

These are not the signs of a bubble. These are the signatures of a platform shift, and the companies leading it are already realizing returns while businesses weave AI into operations.

Bubble or Bottleneck

However, let’s be clear about this analogy: AI is not simply the next chapter of the cloud. Instead, it builds on and accelerates the cloud’s original mission: making extraordinary computing capabilities accessible and scalable. While the cloud democratized computing, AI is now democratizing intelligence and autonomy. This evolution will transform how we work, secure systems, travel, heal, build, educate, and solve problems.

Just as there were cloud critics, we now have AI critics. They say that aggressive capital spending, rising energy demand, and grid strain are signs that the market is already overextended. The pundits are correct about the spending:

  • Alphabet (Google) Q3 2025: ~US $24B on infrastructure oriented toward AI/data centers.
  • Amazon (AWS) Q3 2025: ~US $34.2B, largely on infrastructure/AI-related efforts.
  • Meta Q3 2025: US $19.4B directed at servers/data centers/network infrastructure for AI.
  • Microsoft Q3 2025: Roughly US $34.9B, of which perhaps US $17-18B or more is directly AI/data-center infrastructure (based on “half” of capex).

However, the pundits’ underlying argument is predicated on the same misunderstandings seen in the run-up to the cloud era: it confuses infrastructure investment with excess spending. The challenge with AI is not too much capacity; it is not enough. Demand is already exceeding grid capacity, land availability, power transmission expansion, and specialized equipment supply.

Bubbles do not behave that way; they generate idle capacity. For example, consider the collapse of Global Crossing. The company created the first transcontinental internet backbone by laying 100,000 route-miles of undersea fiber linking 27 countries.

Unfortunately, Global Crossing did not survive the dot-com bust and filed for bankruptcy in 2002. However, Level 3 knew better than to listen to Wall Street and acquired Global Crossing’s network in 2011; Level 3 was itself acquired by CenturyLink in 2017, which later rebranded as Lumen Technologies. Today, Lumen has reported total 2024 revenue of $13.1 billion. Although the company does not break out submarine cable revenue, it is reasonable to infer that these cables still generate revenue in the low billions of dollars, a nice perpetual paycheck for not listening to the penny pinchers.

The AI economy is moving the value chain down the same path of sustainable profitability. But first, we must address factors such as data center proximity to grid strength, access to substation expansion, transformer supply, water access, cooling capacity, and land for modern power-intensive compute loads.

Power, Land, and the New Workforce

The cloud era prioritized fiber; the AI era is prioritizing power. Transmission corridors, utility partnerships, renewable integration, cooling systems, and purpose-built digital land strategies are essential for AI expansion. And with all that come the “pick and shovel” jobs of building data centers, which Wall Street does not factor into the AI economy. You need look no further than Caterpillar’s Q3 2025 sales and revenue of $16.1 billion, up 10 percent.

Often overlooked in the tech hype are the industrial, real estate, and power grid requirements for data center builds, which require skilled workers such as electricians, steelworkers, construction crews, civil engineers, equipment manufacturers, utility operators, grid modernizers, and renewable developers. And once they’re up and running, data centers need cloud and network architects, cybersecurity analysts, and AI professionals.

As AI scales, it will lift industrial landowners, renewable power developers, utilities, semiconductor manufacturers, equipment suppliers, telecom networks, and thousands of local trades and service ecosystems, just as it’s lifting Caterpillar. It will accelerate infrastructure revitalization and strengthen rural and suburban economies. It will create new industries, just like the cloud did with Software as a Service (SaaS), e-commerce logistics, digital banking, streaming media, and remote-work platforms.

Conclusion

We’ve seen Wall Street mislabel some of the most significant tech expansions, from the telecom-hotel buildout of the 1990s to the co-location wave, global fiber expansion, hyperscale cloud, and now AI. As with all revolutionary ideas, skepticism tends to precede them, even when their arrival is inevitable. But stay focused: infrastructure comes before revenue, and revenue tends to arrive sooner than predicted, which brings home the point that AI is not inflating; it is expanding.

Smartphones reshaped consumer behavior within a decade; AI will reshape the industry in less than half that time. This is not a bubble. It is an infrastructure super-cycle predicated on electricity, land, silicon, and ingenuity. Now is the time to act: those who build power-first digital infrastructure are not in the hype business; they’re laying the foundation for the next century of economic growth.

# # #

About the Author

Ryne Friedman is an Associate at hi-tequity, where he leverages his commercial real estate expertise to guide strategic site selection and location analysis for data center development. A U.S. Coast Guard veteran and licensed Florida real estate professional, he previously supported national brands such as Dairy Queen, Crunch Fitness, Jimmy John’s, and 7-Eleven with market research and site acquisition. His background spans roles at SLC Commercial, Lambert Commercial Real Estate, DSA Encore, and DataCenterAndColocation. Ryne studied Business Administration and Management at Central Connecticut State University.

The post It’s Not An AI Bubble — We’re Witnessing the Next “Cloud” Revolution appeared first on Data Center POST.

Nostrum Data Centers Taps JLL Expertise, Powering Spain’s Rise as a Connectivity Hub

12 January 2026 at 13:00

Spain’s digital infrastructure landscape is entering a pivotal new phase, and Nostrum Data Centers is positioning itself at the center of that transformation. By engaging global real estate and data center advisory firm JLL, Nostrum is accelerating the development of a next-generation, AI-ready data center platform designed to support Spain’s emergence as a major connectivity hub for Europe and beyond.

Building the Foundation for an AI-Driven Future

Nostrum Data Centers, the digital infrastructure division of Nostrum Group, is developing a portfolio of sustainable, high-performance data centers purpose-built for artificial intelligence, cloud computing, and high-density workloads. In December 2025, the company announced that its data center assets will be available in 2027, with land and power already secured across all sites, an increasingly rare advantage in today’s constrained infrastructure markets.

The platform includes 500 MW of secured IT capacity, with an additional 300 MW planned for future expansion, bringing total planned development to 800 MW across Spain. This scale positions Nostrum as one of the country’s most ambitious digital infrastructure developers at a time when demand for compute capacity is accelerating across Europe.

Strategic Locations, Connected by Design

Nostrum’s six data center developments are strategically distributed throughout Spain to capitalize on existing power availability, fiber routes, internet exchanges, and subsea connectivity. This geographic diversity allows customers to deploy capacity where it best supports latency-sensitive workloads, redundancy requirements, and long-term growth strategies.

Equally central to Nostrum’s approach is sustainability. Each facility is designed in alignment with the United Nations Sustainable Development Goals (SDGs), delivering industry-leading efficiency metrics, including a Power Usage Effectiveness (PUE) of 1.1 and a Water Usage Effectiveness (WUE) of zero, eliminating water consumption for cooling.

Why JLL? And Why Now?

To support this next phase of growth, Nostrum has engaged JLL to strengthen its go-to-market strategy and customer engagement efforts. JLL brings deep global experience in data center advisory, site selection, and market positioning, helping operators translate technical infrastructure into compelling value for hyperscalers, enterprises, and AI-driven tenants.

“Nostrum Data Centers has a long-term vision for balancing innovation and sustainability. We offer our customers speed to market and scalability throughout our various locations in Spain, all while leading a green revolution to ensure development is done the right way as we position Spain as the next connectivity hub,” says Gabriel Nebreda, Chief Executive Officer at Nostrum Group. “We are confident that our engagement with JLL will be able to help us bolster our efforts and achieve our long-term vision.”

From JLL’s perspective, Spain presents a unique convergence of advantages.

“Spain has a unique market position with its access to robust power infrastructure, its proximity to Points of Presence (PoPs), internet exchanges, subsea connectivity, and being one of the lowest total cost of ownership (TCO) markets,” says Jason Bell, JLL Senior Vice President of Data Center and Technology Services in North America. “JLL is excited to be working with Nostrum Data Centers, providing our expertise and guidance to support their quest to be a leading data center platform in Spain, as well as position Spain as the next connectivity hub in Europe and beyond.”

Advancing Spain’s Role in the Global Digital Economy

With JLL’s support, Nostrum Data Centers is further refining its strategy to meet the technical and operational demands of AI and high-density computing without compromising on efficiency or sustainability. The result is a platform designed not just to meet today’s requirements, but to anticipate what the next decade of digital infrastructure will demand.

As hyperscalers, AI developers, and global enterprises look for scalable, energy-efficient alternatives to traditional European hubs, both Spain and Nostrum Data Centers are increasingly part of the conversation.

Learn more about Nostrum Data Centers: www.thenostrumgroup.com/nostrum-data-centres

Click here to read the press release!

The post Nostrum Data Centers Taps JLL Expertise, Powering Spain’s Rise as a Connectivity Hub appeared first on Data Center POST.

AI Is Moving to the Water’s Edge, and It Changes Everything

5 January 2026 at 15:00

A new development on the Jersey Shore is signaling a shift in how and where AI infrastructure will grow. A subsea cable landing station has announced plans for a data hall built specifically for AI, complete with liquid-cooled GPU clusters and an advertised PUE of 1.25. That number reflects a well-designed facility, but it highlights an emerging reality. PUE only tells us how much power reaches the IT load. It tells us nothing about how much work that power actually produces.

As more “AI-ready” landing stations come online, the industry is beginning to move beyond energy efficiency alone and toward compute productivity. The question is no longer just how much power a facility uses, but how much useful compute it generates per megawatt. That is the core of Power Compute Effectiveness, PCE. When high-density AI hardware is placed at the exact point where global traffic enters a continent, PCE becomes far more relevant than PUE.

To understand why this matters, it helps to look at the role subsea landing stations play. These are the locations where the massive internet cables from overseas come ashore. They carry banking records, streaming platforms, enterprise applications, gaming traffic, and government communications. Most people never notice them, yet they are the physical beginning of the global internet.

For years, large data centers moved inland, following cheaper land and more available power. But as AI shifts from training to real-time inference, location again influences performance. Some AI workloads benefit from sitting directly on the network path instead of hundreds of miles away. This is why placing AI hardware at a cable landing station is suddenly becoming not just possible, but strategic.

A familiar example is Netflix. When millions of viewers press Play, the platform makes moment-to-moment decisions about resolution, bitrate, and content delivery paths. These decisions happen faster and more accurately when the intelligence sits closer to the traffic itself. Moving that logic to the cable landing reduces distance, delays, and potential bottlenecks. The result is a smoother user experience.

Governments have their own motivations. Many countries regulate which types of data can leave their borders. This concept, often called sovereignty, simply means that certain information must stay within the nation’s control. Placing AI infrastructure at the point where international traffic enters the country gives agencies the ability to analyze, enforce, and protect sensitive data without letting it cross a boundary.

This trend also exposes a challenge. High-density AI hardware produces far more heat than traditional servers. Most legacy facilities, especially multi-tenant carrier hotels in large cities, were never built to support liquid cooling, reinforced floors, or the weight of modern GPU racks. Purpose-built coastal sites are beginning to fill this gap.

And here is the real eye-opener. Two facilities can each draw 10 megawatts, yet one may produce twice the compute of the other. PUE will give both of them the same high efficiency score because it cannot see the difference in output. Their actual productivity, and even their revenue potential, could be worlds apart.

PCE and ROIP, Return on Invested Power, expose that difference immediately. PCE reveals how much compute is produced per watt, and ROIP shows the financial return on that power. These metrics are quickly becoming essential in AI-era build strategies, and investors and boards are beginning to incorporate them into their decision frameworks.
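The article does not prescribe exact formulas for PCE or ROIP, so the sketch below assumes simple ratios, compute delivered per megawatt drawn and revenue earned per dollar of power spend, with hypothetical figures for two 10 MW facilities. It is only meant to illustrate how sites that look identical under PUE can diverge sharply under output-based metrics.

```python
# Illustrative only: the article does not define exact formulas for PCE or
# ROIP, so simple ratios are assumed here, and all site figures are hypothetical.

def pce(compute_pflops: float, facility_power_mw: float) -> float:
    """Power Compute Effectiveness, assumed here as useful compute per MW drawn."""
    return compute_pflops / facility_power_mw

def roip(annual_revenue_usd: float, annual_power_cost_usd: float) -> float:
    """Return on Invested Power, assumed here as revenue per dollar of power cost."""
    return annual_revenue_usd / annual_power_cost_usd

# Two sites, each drawing 10 MW at the same PUE, so PUE cannot tell them apart.
sites = {
    "Site A": {"compute_pflops": 400.0, "revenue": 60e6, "power_cost": 9e6},
    "Site B": {"compute_pflops": 200.0, "revenue": 30e6, "power_cost": 9e6},
}

for name, s in sites.items():
    print(f"{name}: PCE = {pce(s['compute_pflops'], 10):.0f} PFLOPS/MW, "
          f"ROIP = {roip(s['revenue'], s['power_cost']):.1f}x")
# Site A delivers twice the compute and revenue for the same power draw,
# which is exactly the gap the author argues PUE alone cannot reveal.
```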

What is happening at these coastal sites is the early sign of a new class of data center. High density. Advanced cooling. Strategic placement at global entry points for digital traffic. Smaller footprints but far higher productivity per square foot.

The industry will increasingly judge facilities not by how much power they receive, but by how effectively they turn that power into intelligence. That shift is already underway, and the emergence of AI-ready landing stations is the clearest signal yet that compute productivity will guide the next generation of infrastructure.

# # #

About the Author

Paul Quigley is the former President and current Chief Strategic Partnership Officer of Airsys Cooling Technologies, and a global advocate for high-density, energy-efficient data center design. With more than three decades in HVAC and mission-critical cooling, he focuses on practical solutions that connect energy stewardship with real-world compute performance. Paul writes and speaks internationally about PCE, ROIP, and the future of data center health in the age of AI.

The post AI Is Moving to the Water’s Edge, and It Changes Everything appeared first on Data Center POST.

The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling

30 December 2025 at 15:00

As data centers evolve to support AI, edge computing, and high-density cloud architectures, the challenge is no longer just maintaining an optimal power usage effectiveness (PUE); it is achieving thermal reliability under unprecedented compute loads. Direct-to-chip liquid cooling continues to push the envelope on heat transfer performance, but one underestimated element of overall system reliability is the material composition of the fluid conveyance network itself. The hoses and fittings that transport coolant through these systems operate in extreme thermal and chemical environments, and their design directly influences uptime, maintenance intervals, and total cost of ownership.

Why Material Selection Matters

At its core, a liquid cooling system is only as reliable as its weakest component. If hoses, fittings, or seals fail due to poor material compatibility, the result could be leaks, contamination, or shortened system life, leading to downtime and costly remediation. Rubber and rubber-like materials are critical in hose assemblies, as they must balance flexibility for installation and serviceability with long-term resistance to temperature, pressure, permeation, and the coolant and its additives.

The challenge lies in the fact that not all rubbers, or rubber-like materials, are created equal. Each formulation is a complex recipe of polymers, plasticizers, fillers, and curing agents designed to deliver specific performance characteristics. The wrong selection can lead to issues such as fluid permeation, premature aging, or contamination of the coolant. In mission-critical environments like data centers, where even minor disruptions are unacceptable, this risk is magnified.

Temperature and Chemical Compatibility

Although data center cooling systems typically operate at temperatures between 45°C (113°F) and 65°C (149°F), sometimes reaching 100°C (212°F), those ranges can stress certain materials. Nitrile rubber, for example, performs well in oil-based environments but ages quickly in water-glycol systems, especially at higher temperatures. This can cause hardening, cracking, or coolant contamination.

By contrast, ethylene propylene diene monomer (EPDM) rubber has excellent resistance to water, glycols, and the additives commonly used in data center coolants, such as corrosion inhibitors and biocides. EPDM maintains its properties across the required operating range, making it a proven choice for direct-to-chip applications.

However, not all EPDM is the same. Developing the right EPDM for the application demands a deep understanding of polymer chemistry, filler interactions, and process control to precisely balance flexibility, heat resistance, and long-term stability.

Additionally, two curing processes, sulfur-cured and peroxide-cured, produce different performance outcomes. Sulfur-cured EPDM, while widely used, introduces zinc ions during the curing process. When exposed to deionized water, these ions can leach into the coolant, causing contamination and potentially degrading system performance. Peroxide-cured EPDM avoids this issue, offering higher temperature resistance, lower permeation rates, and greater chemical stability, making it the superior choice for modern liquid cooling.

Even among peroxide-cured EPDM compounds, long-term performance is not uniform. While the cure system defines the crosslink chemistry, other formulation choices, particularly filler selection and dispersion, can influence how the material performs over time.

The use of fillers and additives is common in rubber compounding. These ingredients are often selected to control cost, improve processability, or achieve certain performance characteristics such as flame resistance, strength, or flexibility.

The challenge is that some filler systems perform well during initial qualification but are not optimized for the long-term exposures faced in the operating environment. Certain fillers or processing aids can slowly migrate over time, introducing extractables into the coolant or subtly altering elastomer properties. For data center applications, EPDM compounds must therefore be engineered with a focus on long-term stability, reinforcing why EPDM should not be treated as a commodity material in critical cooling systems.

Risks of Non-Compatible Materials

Material incompatibility can have several cascading effects:

  • Contamination – Non-compatible materials can leach extractables into the coolant, leading to discoloration, chemical imbalance, and reduced thermal efficiency.
  • Permeation – Some rubbers allow fluid to slowly migrate through the hose walls, causing coolant loss or altering the fluid mixture over time.
  • Premature Failure – Elevated temperatures can accelerate aging, leading to cracking, swelling, or loss of mechanical strength.
  • Leakage – Rubber under compression may deform over time, jeopardizing seal integrity if not properly formulated for resistance to compression set and tear.

In a recent two-week aging test at 80°C using a water-glycol coolant, hoses made of nitrile and sulfur-cured EPDM showed visible discoloration of the coolant, indicating leaching and breakdown of the material. Peroxide-cured EPDM, on the other hand, maintained stability, demonstrating its compatibility and reliability in long-term data center applications.

The Gates Approach

Drawing on lessons from mission-critical industries that have managed thermal challenges for decades, Gates engineers apply material science rigor to the design of liquid cooling hoses for data center applications.

Rather than relying solely on initial material ratings or short-term qualification criteria, Gates begins by tailoring compound design to the operating environment. This includes deliberate control of polymer selection, filler systems, and cure chemistry to manage long-term aging behavior, extractables, permeation, and retention of mechanical properties over time in high-purity coolant systems.

Compounds are validated through extended aging and immersion testing that reflects real operating conditions, including exposure to heat, deionized water, and water-glycol coolants. This allows potential material changes to be identified and addressed during development, before installation in the field.

This material-science-driven process is applied across Gates liquid cooling platforms, including the Data Master, Data Master MegaFlex, and newly released Data Master Eco product lines. By engineering for long-term stability rather than only initial compliance, Gates designs hose solutions intended to support reliable operation, predictable maintenance intervals, and extended service life in direct-to-chip liquid-cooled data center environments.

Looking Ahead

As data centers continue to scale, thermal management solutions must adapt in parallel. Advanced architectures, higher rack densities, and growing environmental regulations all point to a future where liquid cooling is standard. In this environment, material selection is no longer a secondary consideration; it is foundational to system reliability.

Operators who prioritize material compatibility in fluid conveyance lines will benefit from longer service intervals, improved coolant stability, and reduced risk of downtime. In other words, the proper rubber formulation doesn’t just move fluid, it moves the industry forward.

At Gates, sustainable, high-performance cooling begins with the details. By focusing on the science of materials, we help ensure that data center operators can confidently deploy liquid cooling systems designed for the challenges of today and the innovations of tomorrow.

# # #

About the Author

Chad Chapman is a Mechanical Engineer with over 20 years of experience in the fluid power industry. He currently serves as a Product Application Engineering Manager at Gates, where he leads a team that provides technical guidance, recommendations, and innovative solutions to customers utilizing Gates products and services.

Driven by a passion for problem-solving, Chad thrives on collaborating with customers to understand their unique challenges and deliver solutions that optimize performance. He is energized by learning about new applications and technologies, especially where insights can be shared across industries. At Gates, he has been exploring the emerging field of direct-to-chip liquid cooling, an exciting extension of his deep expertise in thermal management. The rapid advancements in IT technology and AI have made his journey an inspiring and rewarding learning experience.

The post The Critical Role of Material Selection in Direct-to-Chip Liquid Cooling appeared first on Data Center POST.

Making Sense Out of VDI Chaos

22 December 2025 at 19:00

If you’re an IT executive at a mid-sized business planning your 2026-2027 budget, you’re seeing continued pressure to dedicate more of that budget to AI-related investments. Businesses must now weigh AI spending, and its ROI, against budget allocations for virtual desktop infrastructure (VDI), digital transformation, and SaaS applications.

With more limited budgets, mid-sized businesses are in a constant struggle to prioritize spending correctly. In the case of VDI, budgeting has become more interesting as the market has undergone a major upheaval, with new brands, acquisitions, and some vendors trying to hold on to the market share they gained pre-upheaval. As a result, mid-market businesses have had to reassess, somewhat unwillingly, their VDI-related investments and relationships, including their investment in the hardware and software needed to support their hybrid workforce.

VDI market changes have prompted mid-sized businesses to explore new options for their endpoint VDI deployments. They’re looking for improved economics, more ability to customize, and ways to avoid legacy-style lock-in agreements.

Moving Past the Chaos

VDI remains a dominant force in enabling digital transformation and hybrid workforce productivity. The global VDI market is estimated to reach $78 billion by 2032, a CAGR of 22.1% from 2024. While the vendors and providers serving the VDI market may change, the reality is that the need to deploy VDI will only increase as security concerns, remote work, and cloud computing continue to make virtual desktops a desired choice.

The VDI industry can look a bit chaotic, but a course correction was inevitable as long-term players face a different market, one in which businesses are looking for more flexibility and the ability to change relationships as their business and operational strategy evolves. It has opened the door to entities like Omnissa, which offers a menu of subscription term lengths starting at one year. The legacy, multi-year agreements are giving way to these more flexible options.

To move past VDI market changes, it’s best to focus first on what a business needs in endpoint investments over the next several years. Key considerations include:

  • New technology investments to improve workspace productivity and employee engagement.
  • Clarifying AI business strategy to determine what is needed in endpoint device support.
  • Updating anticipated hybrid workforce headcount to avoid purchasing shortfalls.
  • Evaluating needed endpoint security and compliance improvements.

Once this evaluation is done, a business can look at the landscape of VDI choices and fine-tune purchasing.

Where Endpoint Hardware Fits

Businesses’ changing approach to VDI and endpoint investment has spurred new interest in evaluating hardware options, notably thin clients and zero clients. Thin clients, in one form or another, have been in use for decades. However, the adoption of VDI and acceleration of remote work has made modern thin clients an essential element in endpoint computing. They offer time and money savings compared to legacy ‘fat’ PCs, with a smaller form factor. Thin clients display remote desktop sessions, while virtual machines (VMs) host the centralized compute operations. Since data is not stored locally, thin clients offer improved security when a hybrid workforce is accessing files and applications at different locations around the globe.

For mid-sized businesses, with few IT professionals already managing many tasks, a modern thin client offers centralized management of on-premises and off-premises endpoints, saving IT considerable time.

Zero clients connect solely and instantly to a remote desktop and reduce cyber threats even further, since they are a pared-down version of a thin client, often connecting to a single platform only. They are built around zero-trust principles and restrict users from saving data locally. When evaluating thin client and zero client choices, some key questions to ask are:

  • Are you supplying thin clients for primarily task workers, power users, or a combination of both? A task worker may only need an Intel Atom x5-E8000 Quad Core Processor, two display ports and four USB ports with an RJ45 connector. A knowledge worker or power user will likely need an Intel N100 Quad Core Processor, two HDMI connectors, 60Hz screen support, six USB ports and an RJ45 connector.
  • Will a thin client need to integrate with a number of VDI and application providers? A flexible thin client will be able to connect to Azure Virtual Desktop (AVD), Citrix, Omnissa and Windows 365 Cloud PC, among others, to satisfy the needs of different workers and use cases.
  • Does your business involve protecting highly sensitive data subject to stringent compliance regulations? If so, thin and zero clients feature-rich enough to comply with strict data protection protocols will be a necessary requirement.
  • Do you have separate licensing agreements for endpoint management software and hardware? In many cases integration of licensing agreements can help save budgets and streamline management.
  • Are you looking to move to different subscription and payment models? Mid-sized businesses will find more competitive options in the market that offer flexible term agreements. Businesses also want to avoid being locked into pricier agreements due to vendor mergers, and to avoid ‘tag-on’ fees that can multiply when a vendor adds technology features. They will be critically evaluating options to avoid any unnecessary budget increases.
  • What level of technical support will your IT staff require, from initial installations to firmware updates? Providers vary in pricing for ongoing tech support and what’s covered in the purchasing agreement.

Creating the 2026 Strategy

Going into 2026, it is more of a buyer’s market as companies want to customize their VDI and related investments to better support overall business and endpoint computing goals. Flexible, finely curated agreements will win in the marketplace. To be most effective, a business will benefit from first examining 2026’s larger goals in workspace improvements, security and compliance, and technology investments. This analysis will help it more precisely evaluate thin client and zero client purchasing. The VDI market is still recovering from its chaotic period, but mid-sized businesses can avoid the chaos with well thought-out strategies and informed decision making.

# # #

About the Author

Kevin Greenway joined 10ZiG in 2012 and became CTO in 2015. He leads the company’s overall technology and product strategy, collaborating with global teams to ensure continuous innovation in a fast-paced, disruptive market. Under his leadership, 10ZiG delivers modern, managed, and secure endpoints through a unified hardware and software approach.

A computer science graduate with numerous IT certifications, Kevin has more than 25 years of experience in the IT sector, including remote connectivity, terminal emulation, VoIP, unified communications, and VDI remoting protocols. Since joining 10ZiG, he has focused exclusively on VDI and End User Computing (EUC) and oversees strategic technology alliances with leading partners such as Citrix, Microsoft, and Omnissa.

Outside of work, Kevin is a devoted family man who enjoys spending time with his wife, two children, and their dog. He enjoys running, cycling and watching sports such as Motorsport & Football/Soccer, especially his son’s team and Leicester City FC.

The post Making Sense Out of VDI Chaos appeared first on Data Center POST.

Where Is AI Taking Data Centers?

10 December 2025 at 16:00

A Vision for the Next Era of Compute from Structure Research’s Jabez Tan

Framing the Future of AI Infrastructure

At the infra/STRUCTURE Summit 2025, held October 15–16 at the Wynn Las Vegas, Jabez Tan, Head of Research at Structure Research, opened the event with a forward-looking keynote titled “Where Is AI Taking Data Centers?” His presentation provided a data-driven perspective on how artificial intelligence (AI) is reshaping digital infrastructure, redefining scale, design, and economics across the global data center ecosystem.

Tan’s session served as both a retrospective on how far the industry has come and a roadmap for where it’s heading. With AI accelerating demand beyond traditional cloud models, his insights set the tone for two days of deep discussion among the sector’s leading operators, investors, and technology providers.

From the Edge to the Core – A Redefinition of Scale

Tan began by looking back just a few years to what he called “the 2022 era of edge obsession.” At that time, much of the industry believed the future of cloud would depend on thousands of small, distributed edge data centers. “We thought the next iteration of cloud would be hundreds of sites at the base of cell towers,” Tan recalled. “But that didn’t really happen.”

Instead, the reality has inverted. “The edge has become the new core,” he said. “Rather than hundreds of small facilities, we’re now building gigawatts of capacity in centralized regions where power and land are available.”

That pivot, Tan emphasized, is fundamentally tied to economics, where cost, energy, and accessibility converge. It reflects how hyperscalers and AI developers are chasing efficiency and scale over proximity, redefining where and how the industry grows.

The AI Acceleration – Demand Without Precedent

Tan then unpacked the explosive demand for compute since late 2022, when AI adoption began its steep ascent following the launch of ChatGPT. He described the industry’s trajectory as a “roller coaster” marked by alternating waves of panic and optimism—but one with undeniable momentum.

The numbers he shared were striking. NVIDIA’s GPU shipments, for instance, have skyrocketed: from 1.3 million H100 Hopper GPUs in 2024 to 3.6 million Blackwell GPUs sold in just the first three months of 2025, a threefold increase in supply and demand. “That translates to an increase from under one gigawatt of GPU-driven demand to over four gigawatts in a single year,” Tan noted.

Tan linked this trend to a broader shift: “AI isn’t just consuming capacity, it’s generating revenue.” Large language model (LLM) providers like OpenAI, Anthropic, and xAI are now producing billions in annual income directly tied to compute access, signaling a business model where infrastructure equals monetization.

Measuring in Compute, Not Megawatts

One of the most notable insights from Tan’s session was his argument that power is no longer the most accurate measure of data center capacity. “Historically, we measured in square footage, then in megawatts,” he said. “But with AI, the true metric is compute, the amount of processing power per facility.”

This evolution is forcing analysts and operators alike to rethink capacity modeling and investment forecasting. Structure Research, Tan explained, is now tracking data centers by compute density, a more precise reflection of AI-era workloads. “The way we define market share and value creation will increasingly depend on how much compute each facility delivers,” he said.

From Training to Inference – The Next Compute Shift

Tan projected that as AI matures, the balance between training and inference workloads will shift dramatically. “Today, roughly 60% of demand is tied to training,” he explained. “Within five years, 80% will be inference.”

That shift will reshape infrastructure needs, pushing more compute toward distributed yet interconnected environments optimized for real-time processing. Tan described a future where inference happens continuously across global networks, increasing utilization, efficiency, and energy demands simultaneously.

The Coming Capacity Crunch

Perhaps the most sobering takeaway from Tan’s talk was his projection of a looming data center capacity shortfall. Based on Structure Research’s modeling, global AI-related demand could grow from 13 gigawatts in 2025 to more than 120 gigawatts by 2030, far outpacing current build rates.

“If development doesn’t accelerate, we could face a 100-gigawatt gap by the end of the decade,” Tan cautioned. He noted that 81% of capacity under development in the U.S. today comes from credible, established providers, but even that won’t be enough to meet demand. “The solution,” he said, “requires the entire ecosystem, utilities, regulators, financiers, and developers to work in sync.”
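As a rough check on those figures, the sketch below computes the compound growth rate implied by moving from 13 GW to 120 GW over five years, and the size of the shortfall if buildout were to stall near today’s level. This is back-of-the-envelope arithmetic, not a reproduction of Structure Research’s model.

```python
# Back-of-the-envelope arithmetic behind the projected shortfall; Structure
# Research's actual model is far more detailed than this.
demand_2025_gw = 13
demand_2030_gw = 120
years = 2030 - 2025

implied_growth = (demand_2030_gw / demand_2025_gw) ** (1 / years) - 1
print(f"Implied annual demand growth: ~{implied_growth:.0%}")   # roughly 56% per year

# If buildout were to stall near 2025 levels, the shortfall would approach the
# full difference, on the order of the 100 GW gap Tan cautions about.
print(f"Demand added over the period: ~{demand_2030_gw - demand_2025_gw} GW")
```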

Fungibility, Flexibility, and the AI Architecture of the Future

Tan also emphasized that AI architecture must become fungible, able to handle both inference and training workloads interchangeably. He explained how hyperscalers are now demanding that facilities support variable cooling and compute configurations, often shifting between air and liquid systems based on real-time needs.

“This isn’t just about designing for GPUs,” he said. “It’s about designing for fluidity, so workloads can move and scale without constraint.”

Tan illustrated this with real-world examples of AI inference deployments requiring hundreds of cross-connects for data exchange and instant access to multiple cloud platforms. “Operators are realizing that connectivity, not just capacity, is the new value driver,” he said.

Agentic AI – A Telescope for the Mind

To close, Tan explored the concept of agentic AI, systems that not only process human inputs but act autonomously across interconnected platforms. He compared its potential to the invention of the telescope.

“When Galileo introduced the telescope, it challenged humanity’s view of its place in the universe,” Tan said. “Large language models are doing something similar for intelligence. They make us feel small today, but they also open an entirely new frontier for discovery.”

He concluded with a powerful metaphor: “If traditional technologies were tools humans used, AI is the first technology that uses tools itself. It’s a telescope for the mind.”

A Market Transformed by Compute

Tan’s session underscored that AI is redefining not only how data centers are built but also how they are measured, financed, and valued. The industry is entering an era where compute density is the new currency, where inference will dominate workloads, and where collaboration across the entire ecosystem is essential to keep pace with demand.

Infra/STRUCTURE 2026: Save the Date

Want to tune in live, receive all presentations, gain access to C-level executives, investors and industry leading research? Then save the date for infra/STRUCTURE 2026 set for October 7-8, 2026 at The Wynn Las Vegas. Pre-Registration for the 2026 event is now open, and you can visit www.infrastructuresummit.io to learn more.

The post Where Is AI Taking Data Centers? appeared first on Data Center POST.

AI and the Next Frontier of Digital Infrastructure

8 December 2025 at 16:00

Insights from Structure Research, Applied Digital, PowerHouse Data Centers, and DataBank

A New Era of Infrastructure Growth

At the infra/STRUCTURE Summit 2025, held October 15–16 at the Wynn Las Vegas, the session titled “AI: The Next Frontier” brought together data center leaders to discuss how artificial intelligence is reshaping infrastructure demand, investment, and development strategy.

Moderated by Jabez Tan, Head of Research at Structure Research, the conversation featured Wes Cummins, CEO of Applied Digital; Luke Kipfer, Managing Director at PowerHouse Data Centers; and Raul Martynek, CEO of DataBank. Each offered unique perspectives on how their organizations are adapting to the acceleration of AI workloads and what that means for power, scale, and capital in the years ahead.

Industry Transformation – From Hyperscale to AI-Scale

Jabez Tan opened the discussion by reflecting on how quickly the market has shifted. Just one year ago, many were questioning the durability of AI-related infrastructure investments. Now, as Tan observed, “The speed of change has outpaced even the most optimistic expectations.”

Wes Cummins of Applied Digital illustrated this evolution through his company’s own transformation. “We started building Bitcoin data centers,” Cummins said. “We were never a miner, we just built the facilities. Then, when AI took off, we realized our designs could scale. We pivoted early, and when ChatGPT hit, the entire world changed.”

That pivot positioned Applied Digital to become a key player in the new era of high-performance computing (HPC) and GPU-intensive workloads, with facilities like its large-scale campus in North Dakota exemplifying how traditional models have been re-engineered for AI.

Building for Scale – Meeting the Demand Wave

Raul Martynek of DataBank and Luke Kipfer of PowerHouse Data Centers both emphasized how scale and speed have become the defining factors of success. “As an executive developer, you have to have the conviction to bring inventory to market,” Martynek said. “If you’re building in good markets and with the right customers, there’s enormous room for growth.”

Cummins agreed, stressing that the conversation has shifted beyond simply securing power. “We’re moving past the question of who has power,” Cummins said. “Now it’s about who can build at scale, deliver reliably, and operate efficiently. Construction timelines, supply chain access, and delivery speed are the new gating factors.”

The panelists noted that hyperscalers are no longer alone in this race. New AI-focused firms, GPU-as-a-service providers, and cloud entrants are competing for capacity at unprecedented levels, pushing the industry to think and build faster.

Site Strategy and Market Evolution – Staying Close to the Core

On the question of site selection, Martynek explained that traditional Tier 1 markets remain critical, though the boundaries are expanding. “Proximity to major availability zones is still a sound long-term strategy,” Martynek said. “We’re buying land in emerging submarkets of Virginia, for example, close enough to the core, but flexible enough to support scale.”

Kipfer added that hyperscalers’ preferences vary by workload type. “For AI and machine learning, some customers can be farther from peering points,” Kipfer said. “But for commercial cloud and enterprise use cases, Tier 1 and Tier 1-adjacent locations still offer the lowest risk and greatest performance.”

Together, their remarks reflected a balanced market dynamic, one where new geographies are gaining traction, but traditional hubs remain foundational to large-scale AI deployments.

Is This a Bubble? – Understanding the AI Surge

As AI investment accelerates, Tan posed the question many in the audience were thinking: Are we in another tech bubble?

Cummins was direct in his response. “I lived through the dot-com bubble,” he said. “This is different. The rate of adoption and real-world application is unlike anything we’ve ever seen.” He pointed out that ChatGPT reached a billion daily queries in just over two years—compared to Google’s eleven-year journey to the same milestone. “The computing power behind that is staggering,” he added.

Martynek agreed, noting that despite the hype, constraints in power, supply chain, and construction capacity make overbuilding virtually impossible. “It’s actually very hard to build too much right now,” he said. “The market is self-regulating through those bottlenecks.”

Capital Strategy and Long-Term Partnerships

A major theme throughout the discussion was the evolving capital stack supporting AI infrastructure. Martynek shared that DataBank has attracted strong investment from institutional partners seeking stable, long-term returns. “We’ve created investment-grade structures backed by 15-year commitments from top-tier customers,” Martynek said. “That gives our investors confidence and gives us visibility into future growth.”

Cummins added that Applied Digital’s focus is on securing long-term offtake agreements with the right clients, those building sustainable businesses rather than speculative projects. “These are 15-year-plus commitments from high-quality counterparties,” Cummins said. “That’s what allows us to build aggressively but responsibly.”

The panel agreed that long-term alignment between capital providers, developers, and customers will define the next phase of industry maturity.

The Future of AI Infrastructure – Speed, Scale, and Cooperation

Looking ahead, all three panelists emphasized the need for ongoing collaboration across the ecosystem. From developers to operators to hyperscalers, success will depend on shared innovation and operational agility.

Cummins summed up the moment: “The world isn’t going back. We’ve unlocked a new era of computing, and our challenge is to keep up with it. Speed is everything.”

Martynek added, “We’re not overbuilding, we’re underprepared. The companies that can execute with discipline and partnership will define the next decade of infrastructure growth.”

A Market Fueled by Real Demand

The discussion underscored that the AI-driven infrastructure boom is not speculative, it’s structural. Adoption is accelerating faster than any previous technology wave, supply is constrained, and capital is flowing toward long-term, revenue-backed projects. The result is a market with strong fundamentals, focused execution, and unprecedented potential for innovation.

Infra/STRUCTURE 2026: Save the Date

Want to tune in live, receive all presentations, gain access to C-level executives, investors and industry leading research? Then save the date for infra/STRUCTURE 2026 set for October 7-8, 2026 at The Wynn Las Vegas. Pre-Registration for the 2026 event is now open, and you can visit www.infrastructuresummit.io to learn more.

The post AI and the Next Frontier of Digital Infrastructure appeared first on Data Center POST.

Investment Perspectives: Navigating the Future of Digital Infrastructure

4 December 2025 at 16:00

Insights from RBC Capital Markets, Compass Datacenters, and TD Securities

Understanding the Investment Landscape in a New Era of AI

The infra/STRUCTURE Summit 2025, held October 15–16 at the Wynn Las Vegas, brought together the world’s leading voices in digital infrastructure to explore the industry’s rapid transformation. Among the standout sessions was Investment Perspectives, where experts discussed how artificial intelligence (AI), energy constraints, and capital strategy are reshaping investment decisions and the future of data center development.

Moderated by Jonathan Atkin, Managing Director at RBC Capital Markets, the panel featured Jonathan Schildkraut, Chief Investment Officer at Compass Datacenters, and Colby Synesael, Managing Director at TD Securities. Together, they provided clear insights into the trends influencing where, why, and how capital is being deployed in the infrastructure sector today.

The Shifting Demand Curve: How AI is Driving Data Center Growth

Jonathan Schildkraut opened the discussion by outlining the four primary workloads fueling infrastructure demand: AI training, AI inference, cloud, and social media. He described these workloads as the engines of growth for the sector, emphasizing that most are revenue-generating. “Three of those four buckets are cash registers,” Schildkraut said. “We’re really seeing those revenue-generating workloads accelerating.”

Colby Synesael added that the balance between AI training and inference is shifting quickly. “A year ago, roughly 75% of AI activity was training and 25% inference,” Synesael explained. “In five years, that ratio could reverse. A lot of inferencing will occur near where applications are used, which changes how we think about data center deployment.” Their remarks highlighted a clear message: AI continues to be the dominant force shaping infrastructure demand, but its evolution is redefining both scale and location.

Market Expansion and Power Constraints 

As Tier 1 data center markets face mounting limitations in available land and energy, both Schildkraut and Atkin noted the increasing strategic importance of Tier 2 and Tier 3 regions. Schildkraut cited examples such as Alabama, Georgia, and Texas, which are emerging as viable alternatives due to improved fiber connectivity and more favorable power economics.

Capital Strategy and Facility Adaptability: Investing for the Long Term

The conversation also delved into how investors are evaluating opportunities in an environment of high demand and rapid technological change. Schildkraut explained that access to capital today depends on two critical factors: tenant quality and facility adaptability. “Investors want to know that the tenant and the workload will be there for the long term,” Schildkraut said. “They also care deeply about whether the facility can evolve with future technologies.”

To illustrate this, Schildkraut described Compass Datacenters’ initiative to upgrade power densities, increasing capacity from 6–7 kilowatts per rack to hybrid systems capable of supporting up to 30 kilowatts. This investment is designed to ensure readiness for the next generation of high-performance computing and AI workloads. These types of forward-looking strategies are helping operators and investors manage both risk and opportunity in an increasingly complex market.

Globalization and Policy Influence 

When the conversation turned to global trends, Schildkraut predicted that AI infrastructure deployment will expand worldwide but at uneven rates. “Availability of power and land isn’t uniform,” he said. “Government incentives will play a critical role in determining which markets can scale.”

Synesael agreed, adding that regions lacking modern AI infrastructure could face growing disadvantages. “Over the next several years, not having this infrastructure in your country or region will become a major constraint on innovation,” Synesael said. Their perspectives reinforced that infrastructure development is no longer just a commercial priority; it is also a matter of national competitiveness.

A Market Redefined by Technology and Energy

The discussion revealed that the digital infrastructure market is entering a new phase defined by the convergence of AI-driven workloads, energy constraints, and strategic capital deployment. As inference workloads expand, Tier 2 and Tier 3 markets rise in importance, and investors prioritize long-term flexibility, the industry’s success will depend on adaptability and foresight. The session made it clear that data centers are no longer just real estate; they are foundational assets powering the next wave of global innovation.

Infra/STRUCTURE 2026: Save the Date

Want to tune in live, receive all presentations, gain access to C-level executives, investors and industry-leading research? Then save the date for infra/STRUCTURE 2026 set for October 7-8, 2026 at The Wynn Las Vegas. Pre-Registration for the 2026 event is now open, and you can visit www.infrastructuresummit.io to learn more.

The post Investment Perspectives: Navigating the Future of Digital Infrastructure appeared first on Data Center POST.

Beyond the Conference: PTC’s Commitment to Connection, Innovation, and Industry Empowerment with Brian Moon

25 November 2025 at 16:30

Episode 62 of the NEDAS Live! Podcast shines a spotlight on Brian Moon, CEO of Pacific Telecommunications Council (PTC), who joined host Ilissa Miller, CEO of iMiller Public Relations, for an in-depth conversation ahead of PTC’s 2026 Annual Conference. As PTC prepares for its 48th year connecting the digital infrastructure community, Moon shares how the organization is adapting to the age of AI, meeting evolving industry needs, empowering members, and fostering innovation.

Evolving Beyond Tradition: PTC’s Growth in the Age of AI

PTC has long been recognized for its January conference in Honolulu, a staple for global industry leaders from across wireline, wireless, subsea, satellite, and data center sectors. Brian Moon traces PTC’s evolution from its origins as a Pacific-focused membership meeting to its current role as a global convener, now at the convergence of AI, edge, and cloud innovation. “It isn’t siloed anymore. AI is interconnecting and converging all the other industries. Nothing works without each other now,” Moon notes. Recent conference sell-outs reflect the enthusiastic embrace of PTC’s refreshed programming and more diverse, tech-forward offerings.

Member-First Mentality and Year-Round Value

Recognizing that industry professionals want more than a once-a-year event, Moon highlights how PTC reinvests its not-for-profit proceeds to support members. From providing meeting spaces at major industry events to organizing exclusive luncheons and ongoing education programs, PTC prioritizes networking, knowledge-sharing, and tangible benefits. “We want to make sure our members see that their dues are going towards something meaningful,” Moon explains. The upcoming conference’s robust member benefits, accessible pricing, and expanded activities demonstrate a commitment to value and inclusion.

Leadership, Talent, and Next-Gen Empowerment

A major theme this year is leadership, embodied by the debut of the Alaka‘i Stage (meaning “to lead” or “to guide” in Hawaiian), which reimagines thought leadership sessions to foster deeper connections between attendees and top executives. PTC is also addressing industry succession with two leadership development initiatives: the Academy Master Class for mid-career professionals and the Top Talent Leadership program in partnership with Columbia Business School. “These are just a few ways that we’re contributing back to the industry,” explains Moon.

Inclusion Initiatives: Laulima and Industry Diversity

PTC’s new Week of Laulima, Hawaiian for “many hands coming together,” puts a spotlight on women in critical infrastructure. Featuring tracks and safe spaces for networking, coaching, and peer celebration, this program is helping drive strong female representation and engagement at the annual event. “We want all participants to feel they belong and can thrive here,” Moon says, as surging engagement in industry group chats and programming shows the impact.

Looking Ahead: Convening, Educating, and Innovating

As the intersection of AI, data centers, and connectivity accelerates, Moon underscores PTC’s dual role as convener and educator, providing factual context when public perceptions of the digital infrastructure sector are at stake, including environmental and community impacts. The organization aims to support industry growth and keep its members ahead of the curve, whether through connection, education, or advocacy.

With the PTC Annual Conference on the horizon, the organization continues to shape the global conversation, bringing together the leaders, innovators, and future talent driving the digital economy forward.

The PTC’26 event takes place in Honolulu at the Hilton Hawaiian Village starting Sunday, January 18 through Wednesday, January 21, 2026. The invite-only member’s soiree kicks off the festivities on Saturday, January 17, 2026.

For more information about the event, membership and to register for a pass, visit ptc.org.

To continue the conversation, listen to the full podcast episode here.

The post Beyond the Conference: PTC’s Commitment to Connection, Innovation, and Industry Empowerment with Brian Moon appeared first on Data Center POST.

AI Data Centers Are Ready to Explode, If the Grid Can Keep Up

24 November 2025 at 16:00

Having spent most of my career at the nexus of power generation and industrial infrastructure, I can safely say that few things have stressed the American electric grid quite like the explosive growth in AI-driven data centers. At Industrial Info Resources, we are currently tracking more than $2.7 trillion in data center projects worldwide, including more than $1 trillion in new US investment in just nine short months.

It is not only technology that faces skyrocketing demand; it’s electricity. With its voracious power appetite, artificial intelligence is making plain just how unprepared the aging US power grid is for the next major step in technological evolution.

AI’s Appetite for Power

The amount of computational power AI requires is astonishing. More than 700 million new users have gone online in the past year alone, and according to estimates by OpenAI, global compute demand could soon require a gigawatt of new capacity every week. That is roughly one big power station every seven days.
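To put that estimate in annual terms, a quick back-of-the-envelope calculation (assuming the one-gigawatt-per-week figure holds and treating a large power plant as roughly one gigawatt, as above) looks like this:

```python
# Back-of-the-envelope only: assumes the cited estimate of 1 GW of new compute
# demand per week and treats a "big power station" as roughly 1 GW of capacity.
gw_per_week = 1
weeks_per_year = 52

annual_new_demand_gw = gw_per_week * weeks_per_year
print(f"~{annual_new_demand_gw} GW of new demand per year, "
      f"about {annual_new_demand_gw} large power plants' worth")
```

At that pace, a single year of AI growth would call for on the order of 50 new large plants, before accounting for any other load growth.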

We are already seeing the ramifications in our project data at IIR Energy. A large number of the biggest hyperscale projects are reaching major capacity bottlenecks: utilities in some areas are telling data center operators they won’t be able to provide additional megawatts until as late as 2032. A few years ago, that kind of delay was unthinkable.

Limits like these are forcing developers to think outside the box when considering data center construction locations. No longer concentrating on central metro areas, they are gravitating toward sites near transmission interconnections, wind or solar parks, or existing industrial areas already served by substations.

The New York Independent System Operator’s Comprehensive Reliability Plan, or CRP, predicts impending power shortages across the state. It identifies three key challenges that are occurring at once: an older generation fleet, fast-rising loads from data centers and chip plants, and new hurdles to building supply. It’s a confluence of threats that is straining reliability planning to its limits.

An Outdated Grid Meets a $40 Trillion Market

With electricity demand stagnant for the past few years, improvements to the country’s collective power grid have not been prioritized. This recent rebound in load is meeting a grid that’s already congestion-prone and aging. Some regions face record-breaking congestion pricing and curtailment. Last week, PJM (the largest regional electricity transmission organization in the United States) saw wholesale capacity auction power prices jump roughly 800%.

This serves as a powerful reminder that while the digital economy proceeds at light speed, physical infrastructure doesn’t. Transmission upgrades require years to approve and construct, and generation projects may be held back by supply chains or local policy barriers. AI’s future, as grand as it is, now hinges on how fast we can upgrade the physical systems that enable it.

Behind the Meter: The New Energy Strategy

Confronted with delayed delivery schedules and lengthy interconnection queues, data center builders are taking matters into their own hands. Increasingly, they are investing in “behind-the-meter” options to guarantee access to the power they require. They are considering natural gas turbines, high-end fuel cells, and extended renewable contracts that provide a direct path to generation without waiting for utility upgrades. Liquid cooling technologies are also helping data center operators decrease freshwater consumption as they improve efficiency.

Data centers are no longer simple consumers of power. Increasingly, they are becoming power collaborators, in some instances, power generators. Utilities are adapting by teaming with developers to co-develop generation assets or reassessing baseload integrity. Next-generation designs are on track to reach a megawatt or more per rack by 2029.

Why Reliable Intelligence Matters

In a market changing this rapidly, it’s crucial to have reliable information. And that’s where IIR Energy offers a distinct edge. We follow projects from initial planning to evaluation and refinement, tracking every milestone and closely watching the power fundamentals that influence success.

This transparency allows utilities, investors, and developers to discern actual development from rumors. For example, whereas some reports indicate that big builds for data centers are decreasing, our intelligence indicates just the opposite. The buildout continues to accelerate and spread, transitioning to different areas and different forms of power delivery.

Reliable, corroborated information allows decision-makers to know exactly where expansion is occurring as well as the limitations that will hinder it. This is the basis of business at IIR Energy. We offer insight capable of piercing the din to predict how AI, energy, and infrastructure will continue to develop side by side.

All in all, this reminds us of a simple yet powerful reality: the AI power race will not just be about smarter algorithms. We’ll need smarter infrastructure to match.

# # #

About the Author

Britt Burt is the Vice President of Power Industry Research at IIR Energy, bringing nearly 40 years of expertise across the power, energy, and data center sectors. He leads IIR’s power research team, overseeing the identification and verification of data on operational and proposed power plants worldwide. Known for his deep industry insight, Britt plays a key role in keeping global energy intelligence accurate and up to date.

The post AI Data Centers Are Ready to Explode, If the Grid Can Keep Up appeared first on Data Center POST.

The Speed of Burn

17 November 2025 at 16:00

It takes the Earth hundreds of millions of years to create usable energy.

It takes us milliseconds to burn it.

That imbalance between nature’s patience and our speed has quietly become one of the defining forces of our time.

All the power that moves our civilization began as light. Every joule traces back to the Big Bang, carried forward by the sun, stored in plants, pressed into fuels, and now released again as electricity. The current that runs through a data center today began its journey billions of years ago…ancient energy returning to motion through modern machines.

And what do we do with it? We turn it into data.

Data has become the fastest-growing form of energy use in human history. We are creating it faster than we can process, understand, or store it. The speed of data now rivals the speed of light itself, and it far exceeds our ability to assign meaning to it.

The result is a civilization burning geological time to produce digital noise.

The Asymmetry of Time

A hyperscale data center can take three to five years to design, permit, and build. The GPUs inside it process information measured in trillionths of a second. That mismatch (years to construct, microseconds to consume) defines the modern paradox of progress. We are building slower than we burn.

Energy creation is slow. Data consumption is instantaneous. And between those two speeds lies a widening moral and physical gap.

When we run a model, render an image, or stream a video, we aren’t just using electricity. We’re releasing sunlight that’s been waiting since the dawn of life to be freed. The electrons are real, finite, and irreplaceable in any human timeframe — yet we treat data as limitless because its cost is invisible.

Less than two percent of all new data is retained after a year. Ninety-eight percent disappears — deleted, overwritten, or simply forgotten. Still, we build ever-larger servers to hold it. We cool them, power them, and replicate them endlessly. It’s as if we’ve confused movement with meaning.

The Age of the Cat-Video Factory

We’ve built cat-video factories on the same grid that could power breakthroughs in medicine, energy, and climate.

There’s nothing wrong with joy or humor. Those things are a beautiful part of being human. But we’ve industrialized the trivial. We’re spending ancient energy to create data that doesn’t last the length of a memory. The cost isn’t measured in dollars; it’s measured in sunlight.

Every byte carries a birth certificate of energy. It may have traveled billions of years to arrive in your device, only to vanish in seconds. We are burning time itself — and we’re getting faster at it every year.

When Compute Outruns Creation

AI’s rise has made this imbalance impossible to ignore. A one-gigawatt data campus, drawing power once associated with a national power plant, can now belong to a single company. Each facility may cost tens of billions of dollars and consume electricity on par with small nations. We’ve reached a world where the scarcity of electrons defines the frontier of innovation.

It’s no longer the code that limits us; it’s the current.

The technology sector celebrates speed: faster training, faster inference, faster deployment. But nature doesn’t share that sense of urgency. Energy obeys the laws of thermodynamics, not the ambitions of quarterly growth. What took the universe nearly 14 billion years to refine (the conversion of matter into usable light) we now exhaust at a pace that makes geological patience seem quaint.

This isn’t an argument against technology. It’s a reminder that progress without proportion becomes entropy. Efficiency without stewardship turns intelligence into heat.

The Stewardship of Light

There’s a better lens for understanding this moment. One that blends physics with purpose.

If all usable power began in the Big Bang and continues as sunlight, then every act of computation is a continuation of that ancient light’s journey. To waste data is to interrupt that journey; to use it well is to extend it. Stewardship, then, isn’t just environmental — it’s existential.

In finance, CFOs use Return on Invested Power (ROIP) to judge whether the energy they buy translates into profitable compute and operational output. But there’s a deeper layer worth considering: a moral ROIP. Beyond the dollars, what kind of intelligence are we generating from the power we consume? Are we creating breakthroughs in medicine, energy, and climate, or simply building larger cat-video factories?

Both forms of ROIP matter. One measures financial return on electrons; the other measures human return on enlightenment. Together, they remind us that every watt carries two ledgers: one economic, one ethical.
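The financial ledger, at least, can be sketched in code. Below is a minimal illustration of how a finance team might compute a financial ROIP figure; the formula (compute-derived revenue divided by the cost of the power consumed) and all the input values are assumptions for illustration, since no standard definition is given here, and the moral ledger has no formula at all.

```python
# Illustrative sketch only: one possible way to express a financial Return on
# Invested Power (ROIP). The formula and the example numbers are assumptions,
# not a standard or published definition.
def financial_roip(compute_revenue_usd: float,
                   energy_used_mwh: float,
                   price_per_mwh_usd: float) -> float:
    """Revenue attributable to compute, divided by the cost of the power behind it."""
    power_cost_usd = energy_used_mwh * price_per_mwh_usd
    return compute_revenue_usd / power_cost_usd

# Example: $12M of compute-driven revenue on 50,000 MWh purchased at $70/MWh.
print(round(financial_roip(12_000_000, 50_000, 70), 2))  # ~3.43
```

A ratio above 1.0 says the electrons earned more than they cost; nothing in that number, however, says whether the output was a medical breakthrough or another cat video.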

We can’t slow AI’s acceleration. But we can bring its metabolism back into proportion. That begins with awareness… the humility to see that our data has ancestry, that our machines are burning the oldest relics of the cosmos. Once you see that, every click, every model, every watt takes on new weight.

The Pause Before Progress

Perhaps our next revolution isn’t speed at all. Perhaps it’s stillness, the mere ability to pause and ask whether the next byte we create honors the journey of the photons that power it.

The call isn’t to stop. It’s to think proportionally.

To remember that while energy cannot be created or destroyed, meaning can.

And that the true measure of progress may not be how much faster we can turn power into data, but how much more wisely we can turn data into light again.

Sunlight is the power. Data is the shadow.

The question is whether our shadows are getting longer… or wiser.

# # #

About the Author

Paul Quigley is President of Airsys Cooling Technologies. He writes about the intersection of power, data, and stewardship. Airsys focuses on groundbreaking technology with a conscience.

The post The Speed of Burn appeared first on Data Center POST.

Ensuring Equipment Safety and Reliability in Data Centers

13 November 2025 at 15:00

What keeps data center operators up at night? Among other things, worries about the safety and reliability of their assets. Staying competitive, maintaining 24/7 uptime, and meeting customer demand can all seem like overwhelming tasks – especially while operating on a lean budget.

The good news is that safety and reliability are very compatible goals, especially in the data center. An efficient, proactive maintenance strategy will deliver both greater reliability and increased security, so that your data center can support ever-growing demand while maintaining the trust of its customers.

In this article, I’ll talk about the best practices for maintenance teams tasked with increasing safety and uptime. I’ll explain how choosing the right tools can help your data center thrive and scale, without increasing costs.

Baking In Safety and Efficiency 

Solid maintenance practices start at the commissioning stage.

There’s no getting around the fact that a data center build is labor-intensive and demanding. Every single connection, electrical point, and fiber optic cable needs to be tested and verified. If you’re not careful, the commissioning stage has enormous potential for error and wasted resources, especially in a hyperscale location. Here’s how to solve that problem.

Choose Your Tools Wisely

It’s important to use the right tools and build efficiencies into the commissioning stage. Think of this stage as an opportunity to design a process that makes sense for your crew and your resources.

If you’re working with a lean maintenance crew, make sure to use tools that are purpose-built for ease of use, so that everyone on your team can achieve high-quality results right away. Look for cable testers, Optical Time Domain Reflectometers, and Optical Loss Test Sets that are designed with intuitive interfaces and settings.

Select tools that comply with, or exceed, industry standards for accuracy. Precision results will make a huge difference when it comes to the long-term lifespan of your assets. Getting accurate readings the first time also eliminates the need for re-work.

Opt for Safety and Efficiency

As always, safety and efficiency go hand in hand. When you’re building a large or hyperscale data center, small gains in efficiency add up quickly. If your tools allow you to test each connection point just a few seconds more quickly, you’ll see significant savings by the end of the data center construction.

Once the commissioning stage is complete, it’s a question of consolidating your efficiency gains, and finding new ways to keep your data center resilient without raising costs. Let’s see what that looks like.

Using Non-Contact Tools for Safety and Efficiency

Once your data center is fully built, I recommend implementing non-contact tools wherever possible. Done right, this will drastically improve your uptime and performance, while reducing overall costs.

What does non-contact look like? For some equipment, like the pumps and motors that support your cooling equipment, wireless sensors can monitor asset health in real time, tracking vibration levels and temperature.

Using Digital and AI Tools

Tools like a computerized maintenance management system (CMMS) or an AI-powered diagnostic engine sift through asset health data to pinpoint early indications of an emerging fault. Today’s AI tools are trained on billions of data points and can recognize faults in assets and component parts. They can even determine the fault severity level and issue detailed reports on the health of every critical asset in the facility.

Once the fault is identified, the CMMS creates a work order and a technician examines the asset, making repairs as needed. For lean maintenance crews, digital tools free up valuable time and labor, so that experienced technicians can focus on carrying out repairs, instead of reading machine tests or generating work orders.
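As a rough illustration of that flow, the sketch below checks a pump’s vibration and temperature readings against alarm limits and, if either is exceeded, generates a work-order record. The field names, thresholds, and create_work_order helper are hypothetical placeholders, not the API of any particular CMMS or diagnostic product.

```python
# Hypothetical sketch of a sensor-to-work-order flow; thresholds, field names,
# and the create_work_order helper are illustrative, not any vendor's API.
from dataclasses import dataclass

@dataclass
class Reading:
    asset_id: str
    vibration_mm_s: float   # vibration velocity, mm/s RMS
    temperature_c: float    # bearing temperature, degrees Celsius

# Example alarm limits; real limits come from standards or vendor specifications.
VIBRATION_LIMIT = 7.1
TEMPERATURE_LIMIT = 80.0

def diagnose(reading: Reading) -> str | None:
    """Return a fault description if any limit is exceeded, otherwise None."""
    faults = []
    if reading.vibration_mm_s > VIBRATION_LIMIT:
        faults.append(f"vibration {reading.vibration_mm_s} mm/s over limit")
    if reading.temperature_c > TEMPERATURE_LIMIT:
        faults.append(f"temperature {reading.temperature_c} C over limit")
    return "; ".join(faults) if faults else None

def create_work_order(asset_id: str, description: str) -> dict:
    """Stand-in for the CMMS call that would open a work order for a technician."""
    return {"asset": asset_id, "priority": "high", "description": description}

reading = Reading("chilled-water-pump-03", vibration_mm_s=9.4, temperature_c=76.0)
fault = diagnose(reading)
if fault:
    print(create_work_order(reading.asset_id, fault))
```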

The bottom line: real-time wireless monitoring keeps your technicians safe, eliminating the need for route-based testing with a handheld device. No more sending workers to squeeze into tight spaces or behind machinery just to get a measurement. By extension, no more risk of human error or inaccurate readings. Digital tools don’t make careless mistakes, no matter how often they perform the same task.

Of course, wireless monitoring isn’t the only non-contact approach out there.

Bringing in the bots

It’s now increasingly common to send robots into the data center to perform basic tests. This accomplishes the crucial function of keeping people out of the data center, where they could potentially hurt themselves or damage something.

I often see robots used to perform thermal imaging tests. Thermal imaging is a key element in many maintenance processes, especially in the data center. It’s the best means of pinpointing electrical faults, wiring issues, faulty connections, and other early indicators of major issues.

Using a robot to conduct the testing (or a mounted, non-contact thermal imager) allows you to monitor frequently, for accurate and precise results. This also protects your team from potential dangers like arc flashes and electrical shocks.

Opening the (infrared) window

Infrared windows, installed directly into power cabinets, make power quality monitoring both safer and more efficient. This is by far the safest approach for operators and technicians. It also guarantees readings will be taken regularly and speeds up the measurement process, by eliminating the time-consuming permitting step. The more frequently your team takes readings, the more effectively they can identify emerging issues and get ahead of the serious faults that could impact your assets and your whole facility.

Successful scaling through automation

Standardizing and automating workflows can enable fast, effective scaling. These processes also extend the reach of lean maintenance teams, so that managers can oversee larger facilities while still delivering high performance.

Automated monitoring and testing (with wireless tools, robots, and non-contact technology) deliver data in near real time. When you pair this with AI, or with data analytics software, you’ll be able to identify emerging asset faults long before they become serious enough to cause downtime. This predictive technology enables far greater uptime and productivity, while also extending the lifespan of your assets.

Automated AI diagnostic tools, condition monitoring, and robotic testing all enable data centers to scale and to continue to deliver the speed and performance that today’s digitalized economy relies on.

# # #

About the Author

Mike Slevin is a General Manager (Networks, Routine Maintenance & Process Instrument) at Fluke, a company known worldwide for its electronic test and measurement tools. Mike works with data centers and industrial clients to improve energy efficiency, safety, and reliability through better monitoring and maintenance practices.

The post Ensuring Equipment Safety and Reliability in Data Centers appeared first on Data Center POST.

Data Center White Space for the AI Era: Emergence of New Architecture

11 November 2025 at 19:00

Originally posted on Compu Dynamics.

The most important change happening in data centers today isn’t simply the rise of AI or the increasing use of GPUs. It’s the structural shift in how data centers are designed, integrated, and operated. The traditional model that separated mechanical systems, electrical distribution, and compute infrastructure into distinct zones is giving way to something far more interconnected.

McKinsey & Company, in a recent study, projects that global data center demand will grow at roughly 22% per year through 2030, reaching approximately 220 gigawatts of capacity — nearly six times the footprint of 2020. And nearly half of all non-IT capital spending in these facilities is now allocated to power and cooling infrastructure, not servers themselves.

That trend reflects a clear reality: in the AI era, performance depends on the environment in which compute operates. It’s not just how many GPUs you deploy; it’s how efficiently power is delivered, how heat is captured and removed, and how seamlessly these systems respond to dynamic workloads.

For decades, data centers were arranged like a campus of independent systems. Mechanical equipment lived in dedicated rooms, electrical systems occupied another area, and compute racks sat in the white space. Each discipline could be planned and operated more or less independently because the demands were steady and predictable.

To continue reading, please click here.

The post Data Center White Space for the AI Era: Emergence of New Architecture appeared first on Data Center POST.

Colocation, Connectivity, and Capacity

11 November 2025 at 15:00

Capacity Europe 2025: An Industry Newcomer’s Overview 

Capacity Europe took place October 21-23, 2025, in London, bringing more than 3,600 industry experts together to discuss the future of the telecommunications industry.

Central themes included the growing demand for capacity driven by the growth of AI and the positioning of data centers in edge or hub locations. Conversations surrounding AI were far more common than in previous years, and discussions about how the industry should best respond ran through all the panels.

The agenda featured many panels such as:

  • The AI conundrum: Establishing ‘hubs’ or edge revival?
  • Build today or buy forever: the role of European data centers in facilitating the AI explosion
  • Chasing power: how to meet future requirements
  • The investment outlook for digital infrastructure
  • Global Connectivity Trends: A European Perspective
  • The Hollow Core Fibre Opportunity: Faster, Further & Deployable Now
  • Testing the waters for quantum communications networks
  • The rise of Eastern European terrestrial corridors

Insights from the “The AI conundrum: Establishing ‘hubs’ or edge revival?” panel included Wes Jensen of Wanaware’s observation that inference happens at the edge while training is done at the hubs, so growing demand will necessitate more infrastructure at both, demanding a strong response from the industry.

The role of European data centers was also a central point of discussion at Capacity Europe 2025. With many panelists believing that Europe has the opportunity to adopt at a level competitive with the US and China, the atmosphere was cautious yet optimistic. Regulatory hurdles and plenty of red tape must first be addressed before data centers in Europe can truly flourish on that scale.

Power was also an important part of the debate. Growing demand has worried nearby communities, and discussion about creating a friendly approach that doesn’t villainize data centers is vital to promoting their adoption across Europe. Panelists concluded that turning that PR around requires a tremendous amount of effort, but remains a possible undertaking.

Power availability is limited because many of these proposed plant projects will take substantial time: while a data center project may take only two or three years to complete, the average power plant takes longer. There is an inevitable gap in power availability as data centers race to meet demand faster than power can be supplied.

The conversation at the conference also addressed what Nabeel Mahmood of ZincFive described as a gray tsunami: a shortfall of young professionals entering the industry while a large portion of older professionals retires. The general conclusion was that the industry should raise awareness and ride the publicity around data centers to appeal to students. One such program, “Talent in Digital Infrastructure,” ran at the event with a range of speakers from various backgrounds and topics. Students from UK universities and sixth forms attended sessions intended to raise awareness that the industry exists, with many speakers emphasizing that they found their way into telecommunications by accident and weren’t aware that it was even an option.

Capacity Europe not only connected the telecommunications industry across continents, but also provided important insight into the rapidly changing state of the industry. Moving forward, the success of European telecommunications innovation rests in the hands of the many experienced and intelligent industry professionals who must tackle the new problems posed by the rapid growth and scaling of artificial intelligence.

If you’re interested in participating in the industry-shaping discussion, you can save the date for Capacity Europe 2026! The event will run from the 13th to the 16th of October at the Intercontinental O2 in London.

# # #

About the Author

Sebastian Cohen is an intern at iMiller Public Relations and student at the University of St. Andrews where he is pursuing a degree in Financial Economics and Management.

The post Colocation, Connectivity, and Capacity appeared first on Data Center POST.

Leadership in the Age of AI

10 November 2025 at 17:00

In the latest episode of NEDAS Live!, host Ilissa Miller, founder and CEO of iMiller Public Relations, sits down with Marci Nigro, Founder and CEO of Purpose Consulting Services. With nearly three decades of experience behind her, Nigro offers a candid, industry-grounded perspective on how artificial intelligence is redefining leadership and talent priorities within digital infrastructure.

Beginning the conversation in Episode 61, Nigro highlights a dramatic shift: organizations and investors are now evaluating executives through a much broader operational and environmental lens. The new era of leadership demands not only technical expertise, but also a capacity to navigate hypergrowth, manage complex environments, and cultivate innovation. On top of this, leaders are also expected to navigate capital markets and investors.

She notes a rising need for cross-sector skills, especially in convergence areas like energy and utilities, which were seldom client requirements even a few years ago. Hybrid leadership roles are increasingly sought, ones that combine multiple mindsets, including strategist, technologist, and philosopher. Emotional intelligence (EQ) and relational skills have become as vital as industry knowledge, particularly as leaders must excel in high-stakes, fast-paced environments.

Despite automation’s growing reach, Nigro insists that true success hinges on human-centered leadership. Empathy, vulnerability, and a purposeful approach to relationships matter more than ever, especially at the executive level. Successful leaders align talent to company culture and strategy, refusing to rely on personal connections alone, which is a major change from past hiring habits.

Culture fit, Nigro stresses, is paramount. “If the culture piece isn’t aligned, it will damage success for both the company and individual,” she observes. Her advice for next-generation executives: invest in self-education, leverage peer knowledge, and remain adaptable as AI reshapes expectations.

To continue the conversation, listen to the full podcast episode here.

The post Leadership in the Age of AI appeared first on Data Center POST.

Addressing the RF Blind Spot in Modern Data Centers

10 November 2025 at 16:00

The rapid adoption of artificial intelligence (AI) and the computing power required to train and deploy advanced models have driven a surge in data center development at a scale not seen before. According to UBS, companies will spend $375 billion globally this year on AI infrastructure and $500 billion next year. It is projected that more than 4,750 data centers will be under construction in primary markets in the United States alone in 2025.

While data center investments often focus on servers, power, and cooling, cellular connectivity is an underrated element in ensuring these facilities operate reliably and safely long term. It’s important for operators to understand how this impacts both commercial operations and public safety.

Supporting Technicians and On-Site Personnel

Reliable cellular connectivity is important in day-to-day operations for technicians, engineers, and contractors. From accessing digital work orders to coordinating with off-site experts, mobile devices are central tools for keeping operations running smoothly.

The challenge is that signal strength often weakens in the very areas where staff spend the most time: data halls, mechanical rooms, and utility spaces. Consistent coverage across the entire facility eliminates those gaps. It allows technicians to complete tasks more efficiently, reduces delays, and ensures that communications remain uninterrupted.

Connectivity also improves worker safety. Personnel must be able to reach colleagues or emergency services at any time, regardless of where they are in the facility. Reliable connectivity helps protect both people and operations.

Cellular Connectivity for Data Center Operations

Data centers are highly complex ecosystems, requiring constant monitoring, rapid coordination, and efficient communication. They are also often built in remote locations with plenty of land and natural resources to help with cooling, but this results in poor cellular connectivity. In addition, they are primarily constructed of steel and concrete for stability and fire resistance, materials that are incredibly challenging for radio frequency (RF) signals to penetrate. Weak signals or dropped calls can delay problem resolution, introduce operational risks, and reduce resiliency.

In the event of an emergency, the stakes are even higher. Cellular service becomes the lifeline for coordinating evacuation procedures, communicating with local authorities, and enabling first responders to perform their duties. Without strong coverage throughout the facility, including in underground or shielded areas, response times can be compromised.

Solutions like distributed antenna systems (DAS) help solve this challenge by connecting base stations to the site, bringing wireless connectivity from the macro network to inside the facility and ensuring operators can maintain real-time contact with vendors, remote support teams, and internal staff.

As new facilities increasingly rise in remote or challenging environments, extending reliable cellular service inside the building ensures operational continuity, no matter the location or construction materials involved.

Unified Cellular Networks for Lower Costs

Even though there is record data center spending, cellular infrastructure can be costly. But there are ways to mitigate the expenses up front. Normally, DAS is implemented in large facilities due to public safety requirements. Building codes enforced by authorities having jurisdiction (AHJs) require in-building coverage for emergency communications, ensuring that first responders can connect reliably in critical situations. These mandates drive the deployment of emergency responder communication enhancement systems (ERCES) designed to meet strict performance standards in adherence with the International Fire Code (IFC) and the National Fire Protection Association (NFPA).

Most operators realize, often too late, that this infrastructure can deliver substantial benefits for their own staff; by that point, it usually requires an entirely separate system in parallel with the public safety system, including new remote units, cables, and passive components. But if operators are forward thinking and install both at the same time, the system can serve both public safety and commercial cellular needs within a unified architecture.

The advantages are significant. A unified cellular network reduces the cost and complexity of building two separate systems in parallel. It also ensures that first responders, facility operators, and everyday users all benefit from consistent connectivity throughout the building. It is also capable of supporting evolving technologies such as 5G and emerging public safety requirements.

Developing Resilience

As AI accelerates the demand for new data centers, operators must look beyond traditional infrastructure requirements. Power and cooling remain fundamental, but so too does the ability to maintain clear and reliable lines of communication. Cellular coverage should not be treated as a secondary concern: it supports remote monitoring, emergency response, technician efficiency, and worker safety. When deployed as a unified cellular solution, it also maximizes investment by serving both public safety and commercial needs.

In a mission-critical environment like data center operations, uninterrupted communication onsite and with outside stakeholders is non-negotiable. As facilities continue to expand in size and complexity, cellular connectivity will be essential to keeping them operational with minimal downtime.

# # #

About the Author:

Mohammed Ali is the manager of DAS Engineering at Advanced RF Technologies, Inc. (ADRF), responsible for leading the DAS engineering division within the company across all global accounts. He has more than 10 years of experience in in-building DAS engineering and wireless network planning. Prior to joining ADRF, Mohammed worked as an RF Engineer at TeleworX and Huawei Technologies Sudan and a Network Management Engineer at ZAIN Sudan. Mohammed holds a Bachelor of Science in Telecommunications Engineering from the University of Khartoum in Sudan and a Master of Science in Telecommunications Engineering from the University of Maryland.

The post Addressing the RF Blind Spot in Modern Data Centers appeared first on Data Center POST.

Strategic Evolution of Data Center Infrastructure for the Age of AI

7 November 2025 at 15:30

Originally posted on Compu Dynamics.

Artificial intelligence is transforming how digital infrastructure is conceived, designed, and deployed. While the world’s largest cloud providers continue to build massive hyperscale campuses, a new layer of demand is emerging — AI training clusters, high-performance compute environments, and inference nodes that require speed, density, and adaptability more than sheer scale.

For these applications, modular design is playing a strategic role. It isn’t a replacement for traditional builds. It’s an evolutionary complement — enabling rapid, precise deployment wherever high-density compute is needed.

Purpose-Built for AI, Not the Cloud of Yesterday

Traditional colocation and hyperscale data center facilities were engineered for predictable, virtualized workloads. AI environments behave differently. They run hotter, denser, and evolve faster. Training clusters may exceed 200 kW per rack and require liquid-cooling integration from day one. Inference workloads demand proximity to the user to minimize latency.

Modular data center solutions provide a practical way to meet those demands. Prefabricated, fully engineered modules can be built in parallel with site work, tested in controlled conditions, and commissioned in days rather than months. Each enclosure can be tailored to its purpose — an AI training pod, an inference edge node, or a compact expansion of existing capacity.

To continue reading, please click here.

The post Strategic Evolution of Data Center Infrastructure for the Age of AI appeared first on Data Center POST.

The Challenges of Building Data Centers in the AI Era

5 November 2025 at 16:00

Amazon’s Chief Executive, Andy Jassy, recently told investors that the company could significantly increase its sales if it had more data centers. Jassy explained that electricity is critical to the company’s success, and that “the single biggest constraint is power.”

It’s Artificial Intelligence (AI) that is driving this need for more power, propelling computing demand to levels not seen since the advent of cloud computing. Training foundation models, deploying inference at scale, and supporting AI-powered applications require compute, storage, and power capacity on a scale that has never been experienced. However, the task of scaling data centers creates a range of structural challenges, including power availability, supply chain fragility, security, and geographic constraints.

Power as the Ultimate Bottleneck

One of the primary challenges for building data centers is power. These data centers require megawatts of power delivered to racks designed for densities exceeding 50 kilowatts per square meter. Securing this kind of power can be difficult, with interconnection queues for new generation and transmission projects often extending over a decade.

Gas power plants may not be the solution. Plants that have not already contracted equipment are unlikely to be available until the 2030s. In addition to the negative environmental impacts that can agitate communities, there’s a fear that investments in gas-fired infrastructure could become “stranded” as the world transitions to cleaner energy sources. And yet, renewable energy build-outs can be constrained by transmission bottlenecks and land availability.

This conundrum posed by gas power and renewable energy sources highlights a mismatch between the current speed of AI workloads, which tend to evolve within six-month time frames, and the multi-year timelines of energy infrastructure. That mismatch is making power availability a defining constraint in the AI era.

Supply Chain Fragility

Supply chains are the next most significant challenge after power. Delays in infrastructure components, such as transformers, uninterruptible power supply (UPS) systems, switchgear, generators, and cooling distribution units, are stalling and complicating projects. According to Deloitte, 65% of companies identified supply chain disruptions as a significant issue for data center build-outs.

Critical equipment now carries 12–18-month lead times, and global logistics remain susceptible to geopolitical instability. Trade restrictions, material shortages, and regional conflicts all impact procurement schedules, creating challenges for developers as they strive to align construction timelines with delivery schedules. With speed to market being the key to competitiveness, a one-year delay in equipment delivery could result in a significant and potentially fatal lag. The ability to pre-plan procurement, diversify suppliers, and stage modular components is quickly becoming a competitive differentiator.
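To see why pre-planning matters, a simple schedule check is enough: working backwards from a target energization date with 12–18 month lead times quickly pushes order deadlines well ahead of construction. The sketch below uses made-up dates and lead times purely for illustration.

```python
# Hypothetical schedule check: all dates and lead times are made up to show
# how long-lead equipment must be ordered well before construction begins.
from datetime import date, timedelta

target_energization = date(2027, 6, 1)   # illustrative project milestone
lead_times_months = {"transformers": 18, "switchgear": 14, "generators": 12}

for item, months in lead_times_months.items():
    latest_order_date = target_energization - timedelta(days=30 * months)
    print(f"{item}: order no later than ~{latest_order_date}")
```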

Security and Reliability Pressures

With AI playing a critical role in economic and national competitiveness, security becomes an all-important concern. Sixty-four percent of data center executives surveyed by Deloitte ranked security as one of the biggest challenges. Vulnerabilities in AI data centers pose not only a threat to business profitability but also impact the healthcare, finance, and national defense sectors.

Modern operators must think about resilience in layered terms: physical hardening, advanced cyber protection, and compliance adherence, all while delivering at hyperscale speed. Building secure, resilient AI centers is no longer just an IT issue; it’s a national infrastructure imperative.

Spatial and Infrastructure Constraints

Geography presents the next biggest hurdle. Appropriate locations with access to load centers, available land, and water for cooling are not easy to find. Reliable power delivery is hindered by space limitations that make colocating data centers next to transmission infrastructure a challenging task. As for legacy infrastructure, it fails to meet the rack densities or dynamic load profiles required by modern AI, forcing operators to weigh the costs of retrofits against the benefits of greenfield development.

The Timeline Paradox

Traditional data center builds typically take 18 to 24 months. However, AI technology is evolving much more quickly. Model architectures and hardware accelerators change every six months. By the time a facility comes online, its design assumptions may already be outdated in relation to the latest AI requirements.

This paradox is forcing developers to reimagine delivery, turning to modular builds, pre-fabricated components, and “power-first” design strategies that bring capacity online in phases. The goal is no longer a perfect data center, but one that can evolve in lockstep with AI’s breakneck pace.

Conclusion

Industry leaders are reimagining procurement to ensure that critical components can be delivered earlier; they’re also diversifying supplier bases to lessen geopolitical risk and adopting modular construction to speed up deployment. Some organizations are partnering with utilities to co-plan grid upgrades, and others are exploring on-site generation and storage to bypass interconnection queues.

Treating supply chain resilience as a competitive differentiator is the ticket to a prosperous future for AI infrastructure. Organizations that can strike a balance between speed and reliability will keep pace with AI innovation.

The AI revolution is redesigning the structure of the digital economy. The challenges, ranging from strained power grids and fragile supply chains to evolving security demands and spatial constraints, are significant. Organizations that successfully navigate these challenges will set the standard for resilient digital infrastructure in the decades to come.

# # #

About the Author

Scott Embley is an Associate at hi-tequity, supporting sales operations, business development, and client relationships to drive company growth. He manages the full sales cycle, identifies new opportunities through market research, and ensures client success through proactive communication. Scott holds a B.S. in Business Administration and Management from Liberty University, where he graduated summa cum laude.

The post The Challenges of Building Data Centers in the AI Era appeared first on Data Center POST.
