Three Questions Every AI‑Driven Operator Should Ask Their Infrastructure Partner

1 April 2026 at 16:00

Originally posted on Datalec LTD.

Data centre leaders left ExCeL London earlier this month with one message ringing loud and clear: AI‑driven growth is accelerating, power is tight, and the choice of infrastructure partner is now business‑critical, not optional.

Against a backdrop of rapid hyperscale and colocation expansion, constrained power availability and rising energy scrutiny, the conversations at Data Centre World London 2026 underscored that operators need partners who can help them plan power‑first, deploy at speed, and operate reliably in high‑density environments.

For Datalec Precision Installations (DPI), DCW London was an opportunity to demonstrate exactly that kind of integrated, global capability, from modular data centre solutions through to facilities management, consultancy and lifecycle services. The questions operators brought to the stand were remarkably consistent, whether they were building in the UK, expanding in the Middle East, or planning their next phase of growth in APAC.

Below, we revisit three of the most important questions AI‑driven operators were asking in London and why they will matter even more as the industry converges on Singapore for DCW Asia later this year.

1. How quickly can you take me from secured power to live, AI‑ready capacity?

If there was one common theme at DCW London, it was that power availability has become the primary constraint on new data centre builds, not demand. Once operators have secured land and grid, the urgent requirement is simple: how fast can we safely turn that capacity into revenue‑generating, AI‑ready infrastructure?

This is where modular, pre‑engineered solutions dominated the conversation. Many visitors to the DPI stand wanted to understand how modular white space, plant and service corridors could compress design and construction timelines without sacrificing resilience or compliance. DPI’s next‑generation Modular Data Centre Solutions attracted strong interest because they are designed precisely for this challenge. They help clients move from planning to live halls at speed, whether that’s a new campus in a European hub, a hyperscale expansion in the Middle East, or an edge or colocation site in a fast‑growing APAC market.

To continue reading, please click here.

The post Three Questions Every AI‑Driven Operator Should Ask Their Infrastructure Partner appeared first on Data Center POST.

Why Effective Facilities Management Is Essential for Today’s Data Centre Operators

27 February 2026 at 16:00

Originally posted on Datalec LTD.

In a digital economy where uptime is non-negotiable, effective critical facilities management (FM) is becoming a primary lever for managing outage risk in high‑density, AI‑driven data centres. As infrastructure grows more complex and AI-driven compute places unprecedented strain on power and cooling systems, operators face escalating risks, making the cost of getting FM wrong higher than ever.

Evolving Pressures, Escalating Risks: The New Reality for Data Centre Operators

Despite steady year-on-year improvements in resilience, the industry continues to operate under significant pressure. According to Uptime Institute’s 2025 Outage Analysis, outages are occurring less frequently but are becoming more complex and more expensive when they do happen. Power-related failures remain the leading cause of impactful incidents, accounting for 54% of major outages, while 53% of operators reported at least one outage in the past three years, even as overall rates decline.

This challenge is amplified by the rapid rise of AI and its high-density compute requirements. AI workloads are now “straining existing infrastructure, especially around power and cooling,” creating categories of risk that simply didn’t exist a decade ago. Staffing shortages across the sector add further pressure, reducing the availability of experienced professionals capable of managing mission-critical environments.

The financial implications are equally significant. More than 54% of organisations reported that their most recent outage exceeded $100,000 in cost, and 20% experienced losses above $1 million. For large enterprises, downtime can reach $540,000 to well over $1 million per hour, depending on sector and workload criticality.
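The cost figures above can be turned into a rough exposure estimate. The sketch below is illustrative only: the hourly rates are the article’s cited large-enterprise range, while the two-hour duration is an assumption for the sake of the example.

```python
# Rough outage-cost exposure, using the hourly-cost range cited above.
# The two-hour outage duration is an illustrative assumption, not a reported figure.

def outage_exposure(hourly_cost_usd: float, outage_hours: float) -> float:
    """Direct cost of a single outage: hourly rate times duration."""
    return hourly_cost_usd * outage_hours

# Large-enterprise range from the article ($540,000 to $1M+ per hour),
# for a hypothetical two-hour incident:
low = outage_exposure(540_000, 2.0)
high = outage_exposure(1_000_000, 2.0)
print(f"Two-hour outage exposure: ${low:,.0f} to ${high:,.0f}")
# Two-hour outage exposure: $1,080,000 to $2,000,000
```

Even at the low end, a single short incident dwarfs the annual cost of a well-run FM programme, which is the article’s underlying point.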

This is the operating landscape that data centre leaders must now navigate, where even small procedural missteps can cascade into business-critical failures.

To continue reading, please click here.

The post Why Effective Facilities Management Is Essential for Today’s Data Centre Operators appeared first on Data Center POST.

Beyond Copper and Optics: How e‑Tube Powers the Terabit Era

24 December 2025 at 14:00

As data centers push toward terabit-scale bandwidth, legacy copper interconnects are hitting their limit, or as the industry calls it, the “copper cliff.” Traditional copper cabling, once the workhorse of short-reach connectivity, has become too thick, too inflexible, and too short to keep pace with the scale of xPU bandwidth growth in the data center. Optical solutions, on the other hand, will work but are saddled with the “optical penalty,” which includes power-hungry and expensive electrical and optical components, manufacturing and design complexity, latency challenges and, more importantly, reliability issues after deployment.

With performance, cost, and operational downsides mounting for both copper and optical interconnects, network operators are looking beyond the old interconnect paradigm toward options that can scale at the pace of next-generation AI clusters in data centers.

Enter e-Tube: the industry’s third interconnect option.

e-Tube Technology is a scalable multi-terabit interconnect platform that uses RF data transmission through a plastic dielectric waveguide. Designed to meet coming 1.6T and 3.2T bandwidth requirements, e-Tube leverages cables made from common plastic materials such as low-density polyethylene (LDPE), which avoids the high-frequency loss and physical constraints inherent to copper. The result is a flexible, power-efficient, and highly reliable link that delivers the reach and performance required to scale up AI clusters in next-generation data center designs.

Figure 1: Patented e-Tube Platform

The industry is taking notice of the impact e-Tube will make, with results showing up to 10x the reach of copper while being 5x lighter and 2x thinner. Compared with optical cables, e-Tube consumes 3x less power, achieves 1,000x lower latency, and costs 3x less. Its scalable architecture delivers consistent bandwidth for data speeds of 448 Gbps and beyond across networks, extending existing use cases and creating new applications that copper and optical interconnects cannot support.

With the copper cliff and the optical penalty looming on the horizon, the time is now for data center operators to consider a third interconnect option. e-Tube RF transmission over a plastic dielectric delivers measurable impact: longer reach, best-in-class energy efficiency, near-zero latency, and a cost-effective price point. As AI workloads explode and terabit-scale fabrics become the norm, e‑Tube is poised to be a foundational cable interconnect for scaling up AI clusters in the next generation of data centers.

# # #

About the Author

Sean Park is a seasoned executive with over 25 years of experience in the semiconductor, wireless, and networking markets. Throughout his career, Sean has held several leadership positions at prominent technology companies, including IDT, TeraSquare, and Marvell Semiconductor. As CEO, CTO, and Founder of Point2 Technology, Sean was responsible for leading the company’s strategic direction and overseeing its day-to-day operations. He also served as a Director at Marvell, where he provided guidance and expertise to help the company achieve its goals. He holds a Ph.D. in Electrical Engineering from the University of Washington and also attended Seoul National University.

The post Beyond Copper and Optics: How e‑Tube Powers the Terabit Era appeared first on Data Center POST.

Redefining Investment and Innovation in Digital Infrastructure

9 December 2025 at 14:00

How new entrants are reshaping data center operations, capital models, and sustainable development

At the infra/STRUCTURE Summit 2025, held October 15–16 at The Wynn Las Vegas, one of the most engaging conversations explored how a new generation of operators is reshaping the data center landscape.

The session, “New Operating Platforms,” moderated by Philbert Shih, Managing Director of Structure Research, brought together executives leading some of the most innovative digital infrastructure ventures: Ernest Popescu, CEO of Metrobloks Data Centers; Eanna Murphy, Founder and CEO of Montera Infrastructure; and Chuck McBride, CEO of Atmosphere Data Centers.

Together, they discussed how new business models, evolving capital structures, and sustainability commitments are redefining what it means to operate in the fast-changing world of digital infrastructure.

Identifying Gaps in a Rapidly Evolving Market

Shih opened the discussion by noting that the surge in investment across digital infrastructure has created room for new operating platforms to emerge.

“The industry has arguably over-indexed on hyperscale and colocation,” Shih said. “But the opportunity now lies in the gaps, in the diverse mix of services, geographies, and market segments that remain underserved.”

He challenged the panelists to explore how their platforms are addressing those gaps, and what kinds of efficiencies or innovations are shaping their approach.

Building for Speed and Efficiency

Murphy described his company’s focus on secondary and emerging markets, areas where demand is strong but infrastructure capacity has lagged.

“We wanted to look at regions where enterprise customers were underserved,” Murphy said. “Our model focuses on connecting Tier 2 cities and surrounding areas, delivering capacity closer to users and creating new connectivity ecosystems.”

Murphy emphasized that Montera’s approach is designed for speed and scale, combining pre-engineered designs and local partnerships to accelerate delivery.

“Even in smaller markets,” Murphy said, “you can build meaningful density if you plan it right and align with community needs.”

Balancing Capital, Capacity, and Time-to-Market

Popescu noted that access to capital remains one of the biggest hurdles for new operators, especially those outside traditional hyperscale markets.

“There’s plenty of opportunity in the market, but capital deployment still comes down to risk tolerance and timing,” Popescu said. “You can’t shortcut power availability, but you can manage time-to-market with flexible models and smart partnerships.”

Metrobloks focuses on developing scalable, self-performable campuses in underserved markets, combining modular design with utility partnerships to bring new capacity online faster.

“It might not be massive by hyperscale standards,” Popescu said. “But for our customers, being able to access distribution power in 12 to 18 months can make all the difference.”

Sustainability and the Next Generation of Infrastructure

For McBride, sustainability and long-term adaptability are at the heart of his company’s strategy.

“We made a conscious choice not to inherit legacy assets,” McBride said. “Instead, we’re building brand-new AI-ready campuses in underserved markets, what we call next-generation training centers.”

Atmosphere’s developments prioritize renewable energy integration and community revitalization. McBride described projects that convert industrial land, such as former power plant sites, into modern digital campuses.

“We’re taking coal-fired sites and turning them into green campuses,” McBride said. “It’s about giving these sites a second life while meeting the demands of AI and high-performance computing.”

Adapting to Changing Technology Cycles

The conversation turned to how operators are preparing for rapid changes in compute and chip technology, particularly as AI drives unprecedented density and cooling requirements.

Murphy noted the growing challenge of aligning long-term infrastructure planning with short hardware cycles.

“Every six months we’re seeing new chip architectures from NVIDIA, AMD, and others,” Murphy said. “But the data center development cycle is still three to five years. The challenge is designing for what’s next without overcommitting to what’s current.”

Panelists agreed that future-proofing is now a key differentiator, with flexibility, modularity, and liquid cooling readiness built into early designs.

Smarter Capital and Better Collaboration

Reflecting on the evolution of the investment landscape, Popescu shared that today’s capital partners are far more informed about the digital infrastructure asset class than even a few years ago.

“Institutional investors have become much more educated,” Popescu said. “The conversations are smarter, and there’s a better understanding of the balance between cost, speed, and sustainability.”

McBride added that hyperscalers, too, have shown greater willingness to adapt pricing and partnership structures in response to development challenges.

“Three years ago, I had never seen the major cloud players react so quickly,” McBride said. “They know developers are essential to getting capacity online, and that alignment benefits everyone.”

The Opportunity Ahead

In closing, Shih reflected on how the emergence of these new operating platforms is reshaping the broader ecosystem.

“We’re watching the rise of operators who are not just building capacity but reimagining how the industry functions,” Shih said. “They’re bridging the gap between capital, sustainability, and innovation, and that’s what will define the next phase of growth.”

As the digital infrastructure industry continues to evolve, these leaders are demonstrating that success now depends as much on creativity and collaboration as it does on capital and construction.

Infra/STRUCTURE 2026: Save the Date

Want to tune in live, receive all presentations, and gain access to C-level executives, investors, and industry-leading research? Then save the date for infra/STRUCTURE 2026, set for October 7–8, 2026 at The Wynn Las Vegas. Pre-registration for the 2026 event is now open; visit www.infrastructuresummit.io to learn more.

The post Redefining Investment and Innovation in Digital Infrastructure appeared first on Data Center POST.

The Challenges of Building Data Centers in the AI Era

5 November 2025 at 16:00

Amazon’s Chief Executive, Andy Jassy, recently told investors that the company could significantly increase its sales if it had more data centers. Jassy explained that electricity is critical to the company’s success, and that “the single biggest constraint is power.”

It’s Artificial Intelligence (AI) that is driving this need for more power, propelling computing demand to levels not seen since the advent of cloud computing. Training foundation models, deploying inference at scale, and supporting AI-powered applications require compute, storage, and power capacity on a scale never before experienced. The task of scaling data centers, however, creates a range of structural challenges, including power availability, supply chain fragility, security, and geographic constraints.

Power as the Ultimate Bottleneck

One of the primary challenges for building data centers is power. These facilities require megawatts of power delivered to racks designed for densities exceeding 50 kilowatts per rack. Securing this kind of power can be difficult, with interconnection queues for new generation and transmission projects often extending over a decade.
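To see why interconnection becomes the bottleneck, the density figures above can be rolled up into a campus-level demand estimate. The rack count and PUE in this sketch are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope campus power demand from rack count and density.
# Rack count (1,000) and PUE (1.3) are illustrative assumptions.

def campus_power_mw(racks: int, kw_per_rack: float, pue: float = 1.3) -> float:
    """Total facility draw in MW: IT load scaled by PUE for cooling and losses."""
    return racks * kw_per_rack * pue / 1000.0

# 1,000 AI racks at 50 kW each, assumed PUE of 1.3:
print(f"{campus_power_mw(1000, 50):.1f} MW")  # 65.0 MW
```

A draw of tens of megawatts is well beyond what most existing grid connections can absorb, which is why new-generation and transmission queues dominate the timeline.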

Gas power plants may not be the solution. New gas-fired plants that have not already contracted equipment are unlikely to come online before the 2030s. Beyond the environmental impacts that can agitate communities, there is a fear that investments in gas-fired infrastructure could become “stranded” as the world transitions to cleaner energy sources. Renewable energy build-outs, meanwhile, can be constrained by transmission bottlenecks and land availability.

This conundrum highlights a mismatch between the pace of AI workloads, which evolve on roughly six-month cycles, and the multi-year timelines of energy infrastructure, and it underscores how power availability is becoming a defining constraint of the AI era.

Supply Chain Fragility

Supply chains are the next most significant challenge after power. Delays in infrastructure components, such as transformers, uninterruptible power supply (UPS) systems, switchgear, generators, and cooling distribution units, are stalling and complicating projects. According to Deloitte, 65% of companies identified supply chain disruptions as a significant issue for data center build-outs.

Critical equipment now carries 12–18-month lead times, and global logistics remain susceptible to geopolitical instability. Trade restrictions, material shortages, and regional conflicts all impact procurement schedules, creating challenges for developers as they strive to align construction timelines with delivery schedules. With speed to market being the key to competitiveness, a one-year delay in equipment delivery could result in a significant and potentially fatal lag. The ability to pre-plan procurement, diversify suppliers, and stage modular components is quickly becoming a competitive differentiator.

Security and Reliability Pressures

With AI playing a critical role in economic and national competitiveness, security becomes an all-important concern. Sixty-four percent of data center executives surveyed by Deloitte ranked security as one of the biggest challenges. Vulnerabilities in AI data centers pose not only a threat to business profitability but also impact the healthcare, finance, and national defense sectors.

Modern operators must think about resilience in layered terms: physical hardening, advanced cyber protection, and compliance adherence, all while delivering at hyperscale speed. Building secure, resilient AI centers is no longer just an IT issue; it’s a national infrastructure imperative.

Spatial and Infrastructure Constraints

Geography presents the next biggest hurdle. Suitable locations with access to load centers, available land, and water for cooling are not easy to find. Space limitations also make it difficult to colocate data centers next to transmission infrastructure, hindering reliable power delivery. Legacy infrastructure, for its part, fails to meet the rack densities or dynamic load profiles required by modern AI, forcing operators to weigh the costs of retrofits against the benefits of greenfield development.

The Timeline Paradox

Traditional data center builds typically take 18 to 24 months. However, AI technology is evolving much more quickly. Model architectures and hardware accelerators change every six months. By the time a facility comes online, its design assumptions may already be outdated in relation to the latest AI requirements.
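The mismatch can be made concrete with a one-line calculation: divide the build time by the hardware cycle. Both cycle lengths below are the figures cited in this section.

```python
# How many accelerator generations elapse during one data center build?
# Uses the article's figures: 18-24 month builds vs. 6-month hardware cycles.

def generations_elapsed(build_months: int, hw_cycle_months: int = 6) -> int:
    """Number of complete hardware cycles that fit inside a build window."""
    return build_months // hw_cycle_months

for build in (18, 24):
    print(f"A {build}-month build spans ~{generations_elapsed(build)} hardware generations")
```

Three to four generations of obsolescence baked into every build is what pushes developers toward the phased, modular delivery described next.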

This paradox is forcing developers to reimagine delivery, turning to modular builds, pre-fabricated components, and “power-first” design strategies that bring capacity online in phases. The goal is no longer a perfect data center, but one that can evolve in lockstep with AI’s breakneck pace.

Conclusion

Industry leaders are reimagining procurement to ensure that critical components can be delivered earlier; they’re also diversifying supplier bases to lessen geopolitical risk and adopting modular construction to speed up deployment. Some organizations are partnering with utilities to co-plan grid upgrades, and others are exploring on-site generation and storage to bypass interconnection queues.

Treating supply chain resilience as a competitive differentiator is the ticket to a prosperous future for AI infrastructure. Organizations that can strike a balance between speed and reliability will keep pace with AI innovation.

The AI revolution is redesigning the structure of the digital economy. The challenges, ranging from strained power grids and fragile supply chains to evolving security demands and spatial constraints, are significant. Organizations that successfully navigate these challenges will set the standard for resilient digital infrastructure in the decades to come.

# # #

About the Author

Scott Embley is an Associate at hi-tequity, supporting sales operations, business development, and client relationships to drive company growth. He manages the full sales cycle, identifies new opportunities through market research, and ensures client success through proactive communication. Scott holds a B.S. in Business Administration and Management from Liberty University, where he graduated summa cum laude.

The post The Challenges of Building Data Centers in the AI Era appeared first on Data Center POST.
