CDM Acquires Majority Stake in R&D Specialties to Power AI Modular Data Center Surge

1 April 2026 at 13:00

In a strategic move underscoring the shift toward modular infrastructure, Compu Dynamics Modular (CDM), a Chantilly, Virginia-based specialist in prefabricated data center solutions, has acquired a majority stake in R&D Specialties, an Odessa, Texas, manufacturer of UL-certified control panels and modular electrical systems. Announced today, the deal expands CDM’s manufacturing footprint to 120,000 square feet, with room for growth on a 15-acre campus, positioning the company to meet skyrocketing demand for AI-ready, high-density deployments from hyperscalers, colocation providers, and enterprises.

This acquisition arrives at a pivotal moment. AI and high-performance computing (HPC) workloads demand unprecedented speed, density, and scalability – challenges traditional builds struggle to match. Modular solutions, once niche, are now the default for rapid, repeatable deployments.

“Modular infrastructure is where efficiency meets innovation,” said Ron Mann, vice president of CDM. “For decades, we’ve delivered solutions that solve real engineering challenges in high-stakes environments. Joining forces with R&D Specialties allows us to bring that expertise to the next generation of AI data centers at scale.”

Steve Altizer, president and CEO of Compu Dynamics, emphasized the market imperative: “This investment is about building the capabilities and capacity the market is demanding right now. AI infrastructure requires a different approach; one that delivers faster, scales smarter, and performs better. R&D Specialties brings the engineering depth and manufacturing precision that align perfectly with where this industry is headed.”

R&D Specialties, founded in 1983, excels in custom-engineered systems for mission-critical settings, complementing CDM’s vendor-neutral, end-to-end services – from design and liquid-cooled IT platforms to commissioning and maintenance. Brad Howell, president of R&D Specialties, noted the synergy: “Through joining forces with CDM, our growth opportunities for the combined teams have expanded even further. Being part of the AI infrastructure revolution and building what’s next is exciting.”

For data center operators, this signals broader ecosystem maturation. CDM’s turnkey modules accelerate time to market while integrating high-density power, low-latency networking, and sustainability features. With an extensive North American partner network, the combined entity can deploy campus-scale solutions anywhere, anytime – critical as AI power needs strain grids and supply chains.

This deal exemplifies how strategic M&A is fueling modular dominance, helping the industry navigate AI’s compute explosion with agility and reliability. Learn more at cd-modular.com.

The post CDM Acquires Majority Stake in R&D Specialties to Power AI Modular Data Center Surge appeared first on Data Center POST.

Not All Data Centers Are Built the Same — Inside MedOne’s Infrastructure Strategy in Israel

30 March 2026 at 14:00

Walk through the sales deck of almost any data center operator and you’ll find the same language: Tier III certified, N+1 redundancy, 99.999% uptime. The terminology is standardized because the underlying assumption is standardized: that infrastructure is, at its core, a commodity.

That assumption is worth examining more carefully. Because what looks identical on paper can behave very differently under pressure. And the gap between a standard data center and a strategic one isn’t visible in a specification sheet. It shows up in an outage.

Engineering for Reality, Not for Ideal Conditions

MedOne, Israel’s largest data center operator with more than 25 years of experience building and managing critical infrastructure, serves some of the country’s most demanding clients: banks, healthcare providers, government agencies, defense-adjacent technology firms and large-scale enterprise platforms. These are mission-critical environments where downtime carries legal, financial and operational consequences that go well beyond a service credit. When a payment system goes dark or a hospital’s records platform becomes unavailable, the impact is measured in far more than lost revenue.

Building for that client base forced a different set of engineering questions from the start. Not “how do we achieve uptime under normal conditions?” but “how do we maintain continuity when normal conditions no longer exist?” That shift in the design brief changes almost every decision that follows.

MedOne’s facilities are built underground, not as a differentiating feature, but as a structural response to the requirement for physical isolation. Underground construction reduces exposure to environmental variables, provides stable ambient temperatures for cooling efficiency and removes a layer of external dependency that surface-level facilities carry by default. For mission-critical clients operating under strict regulatory and continuity requirements, physical hardening is not optional; it’s a baseline expectation.

Redundancy vs. Independence: A Difference That Matters

Most data centers are built around redundancy. Redundant power feeds, redundant cooling circuits, redundant network paths. Redundancy is valuable, but it operates on a specific assumption: that external systems are available, and that a backup path exists when the primary one fails.

Independence operates on a different assumption entirely: that external systems may not be available at all, and that the facility must be capable of sustaining itself regardless.

MedOne’s facilities are designed to operate independently for up to 72 hours without relying on external power or water infrastructure. This means on-site fuel reserves, independent power generation and self-sufficient cooling systems: the entire physical stack, sustained without input from the national grid or municipal utilities.

“Redundancy still assumes external systems are available somewhere in the chain,” says Eli Matara, chief commercial officer at MedOne. “Independence means we can continue operating even when they aren’t. For mission-critical clients, that’s not a philosophical difference. It’s the difference between staying operational and explaining an outage.”

The engineering logic becomes clearer when you think in layers. Modern infrastructure is a dependency chain: power feeds cooling, cooling enables compute, compute supports network, network delivers applications. Each layer inherits the risk of the layer beneath it. Redundant components within a single layer don’t eliminate risk if those components share an upstream dependency: a common substation, a shared conduit, or a single utility provider. Standard infrastructure is designed to recover when a layer fails. Strategic infrastructure is designed so that failure of an external input doesn’t cascade through the layers above it in the first place.
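The difference between redundancy and independence can be made concrete with a toy availability model. The sketch below is purely illustrative (the probabilities are invented, not MedOne figures): two redundant feeds that share one upstream substation can never be more available than that substation, while a truly independent on-site source breaks the shared dependency.

```python
# Toy availability model: redundancy behind a shared upstream dependency
# vs. genuine independence. All probabilities are illustrative assumptions.

def p_available_redundant(p_feed: float, p_substation: float) -> float:
    """Two redundant feeds, both served by one shared substation.
    The pair survives only if the substation is up AND at least one feed is up."""
    at_least_one_feed = 1 - (1 - p_feed) ** 2
    return p_substation * at_least_one_feed

def p_available_independent(p_grid_path: float, p_onsite: float) -> float:
    """One grid path plus fully independent on-site generation.
    No shared upstream element: the site survives if either source is up."""
    return 1 - (1 - p_grid_path) * (1 - p_onsite)

p_feed, p_substation, p_onsite = 0.99, 0.995, 0.99

shared = p_available_redundant(p_feed, p_substation)
independent = p_available_independent(p_feed * p_substation, p_onsite)

print(f"redundant feeds, shared substation: {shared:.5f}")   # capped by the substation
print(f"grid + independent on-site power:   {independent:.5f}")
```

However many feeds are added inside the first model, availability stays below the substation's own 0.995; the second model is the "independence" design the article describes.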

Connectivity Is Infrastructure, Not a Feature

For mission-critical clients, a facility that is running but unreachable is still down. That’s why MedOne treats connectivity not as a managed service sitting above the infrastructure layer, but as a core part of the architecture itself.

MedOne operates as one of Israel’s primary carrier-neutral interconnection hubs. Carrier neutrality means that multiple competing telecommunications providers (global carriers, regional operators and local fiber networks) all terminate directly inside MedOne’s facilities. Clients are not locked into a single provider and can choose, combine or change carriers without physical migration or dependency on a single network operator. In a region where geopolitical conditions can affect routing availability, that freedom is not a commercial convenience; it’s a risk management tool.

The connectivity architecture extends to direct cloud on-ramps, submarine cable landing stations and Israel’s core fiber backbone, all designed to avoid the hidden convergence points where redundant-looking network paths physically meet and paper diversity collapses into a single point of failure.

“A data center that’s operational but unreachable is still down from a customer’s perspective,” Matara says. “Path diversity and true interconnection aren’t add-ons. They’re part of the same design logic as power and cooling independence.”

Starting With Infrastructure, Not With Cloud

The prevailing assumption in enterprise infrastructure planning has been that cloud resilience is sufficient: that hyperscaler uptime guarantees translate into genuine continuity. MedOne’s model challenges that directly. With more than 15 years of experience supporting high-performance computing environments, the company brings a depth of technical understanding that extends well beyond standard enterprise workloads, and that shapes how it thinks about the relationship between physical infrastructure and the services built on top of it.

Cloud services are only as resilient as the physical infrastructure they run on. Starting with hardened, sovereign, physically isolated infrastructure — and building cloud and managed services on top of it — produces a fundamentally more resilient architecture than layering cloud on top of a standard facility and relying on the SLA to cover the gaps.

For mission-critical clients in regulated industries, this distinction carries additional weight. Data sovereignty, regulatory compliance and audit requirements often demand infrastructure that can be physically verified, locally governed and operationally isolated; a carrier-neutral, underground, autonomy-designed facility answers those requirements in a way that a hyperscaler availability zone cannot.

Meeting Sovereign and Regulatory Standards

For banks, insurers and payment providers in Israel, enforceable data sovereignty is now a hard regulatory expectation, not marketing language. MedOne’s underground, carrier-neutral facilities are designed to support Israeli privacy and data-security requirements, including strict controls over physical access, operations and data flows that enable financial institutions to demonstrate compliance and satisfy supervisory scrutiny.

The Real Test

Infrastructure decisions made under stable conditions tend to look similar. The divergence happens when conditions change.

Israel’s ongoing conflict with Iran brought missile alerts, physical security responses and disruptions to civilian utilities, creating operating conditions where continuity could not be assumed and where the difference between facilities designed for stability and those designed for disruption became impossible to ignore. MedOne’s facilities continued operating throughout. Not because the engineering was lucky, but because the architecture was designed from the ground up for exactly that scenario: external disruption as a baseline assumption, not an edge case.

That is the core argument for the strategic model. Resilience built into the architecture from the start performs differently than resilience added as a layer on top of a standard design. For organizations that cannot afford to find out which kind they have at the worst possible moment, the engineering choices made before a facility is ever switched on are the ones that matter most.

# # #

About the Author

Eli Matara is Chief Commercial Officer at MedOne, Israel’s leading provider of underground, carrier-neutral data centers and a central connectivity hub linking Israel to global networks.

With more than 20 years in enterprise sales, Eli leads the company’s commercial strategy across colocation, cloud, and connectivity. He works closely with Israel’s largest enterprises, global S&P 500 companies, and mission-critical organizations, helping them secure long-term infrastructure partnerships built for resilience, scale, and AI-driven workloads.

The post Not All Data Centers Are Built the Same — Inside MedOne’s Infrastructure Strategy in Israel appeared first on Data Center POST.

Celebrating International Data Center Day: Inspiring the Next Generation of Tech Leaders

26 March 2026 at 16:00

Originally posted on Nomad Futurist.

Happy International Data Center Day! Today, we shine a spotlight on an industry that quietly powers our modern world. Behind every video call, online class, cloud application, and AI breakthrough is a network of infrastructure that most people never see — but rely on every single day: the data center industry.

This day is about more than celebrating technology; it’s about celebrating the people who make it all possible. From engineers and technicians to sustainability leaders, network specialists, and innovators, data centers are driven by talented professionals shaping the future of technology and connectivity.

Yet, one of the biggest challenges remains awareness. Many students and educators still don’t know that these careers exist, or the incredible opportunities they offer.

At the Nomad Futurist Foundation, we know that exposure changes everything. When students step inside a data center, meet the people behind the operations, and see the technology up close, curiosity transforms into possibility. Experiencing these environments firsthand opens doors to careers that are not only in high demand but essential to powering our digital future.

To continue reading, please click here.

The post Celebrating International Data Center Day: Inspiring the Next Generation of Tech Leaders appeared first on Data Center POST.

The 1 Gigawatt Data Center Dilemma

26 March 2026 at 15:00

The AI revolution is pushing the data center industry toward gigawatt-scale campuses. But the real question today is not how large a facility can be built. The real question is how quickly power can be converted into revenue.

Consider a 1 gigawatt data center project. One gigawatt equals one thousand megawatts of capacity. In today’s market, typical infrastructure costs for large data centers range between 8 million and 12 million dollars per megawatt for standard facilities. That places the infrastructure cost of a 1 GW campus between 8 billion and 12 billion dollars.

In many U.S. markets, developers are seeing costs closer to 10 to 14 million dollars per megawatt, which would place a 1 GW campus between 10 and 14 billion dollars. AI optimized data centers can be even more expensive due to high density racks, liquid cooling systems, and larger electrical infrastructure. Those facilities can reach 15 to 20 million dollars per megawatt, pushing a 1 GW campus to 15 to 20 billion dollars in infrastructure alone.
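The cost ranges above are straightforward multiplication; a short sketch makes the tiers easy to compare (the dollar-per-megawatt figures are the article's own, and the tier labels are shorthand, not industry terms):

```python
# Infrastructure cost ranges for a 1 GW campus, using the $/MW figures above.

CAMPUS_MW = 1_000  # 1 GW = 1,000 MW

cost_per_mw_usd_m = {
    "standard":     (8, 12),   # $8M - $12M per MW
    "typical US":   (10, 14),  # $10M - $14M per MW
    "AI-optimized": (15, 20),  # $15M - $20M per MW
}

for tier, (low, high) in cost_per_mw_usd_m.items():
    lo_b = low * CAMPUS_MW / 1_000   # convert $M totals to $B
    hi_b = high * CAMPUS_MW / 1_000
    print(f"{tier:>12}: ${lo_b:.0f}B - ${hi_b:.0f}B")
```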

Once servers, GPUs, networking equipment, and storage are installed, the total project value can easily exceed 30 billion dollars. But capital cost is no longer the biggest constraint; energy is.

According to the International Energy Agency, global data center electricity consumption reached roughly 415 terawatt hours in 2024, representing about 1.5 percent of global electricity demand. That number is projected to approach 800 terawatt hours by 2030 as AI adoption accelerates. At the same time, power infrastructure is struggling to keep up. The United States interconnection queue alone now exceeds 2 terawatts of generation capacity waiting for approval, and in many regions new grid connections can take three to six years. This creates a major financial challenge for traditional hyperscale development.

Large buildings are often constructed years before sufficient power becomes available. Hundreds of megawatts of capacity can sit idle while developers wait for substations, transmission lines, and utility upgrades. On a one gigawatt campus that could mean billions of dollars tied up in infrastructure waiting for power.

Now compare that with a modular campus strategy.

Instead of constructing massive buildings designed for the full gigawatt from day one, the campus can be deployed incrementally as power becomes available. A one gigawatt campus could begin with a 20 megawatt deployment. Using the same industry pricing ranges, that first deployment would require between 160 and 240 million dollars at eight to twelve million dollars per megawatt, or up to 300 to 400 million dollars if the facility is designed for high density AI workloads. What makes this model powerful is how quickly revenue can begin.

In many markets AI capacity is leasing between 150 thousand and 250 thousand dollars per megawatt per month depending on location and density. A 20 megawatt deployment can therefore generate roughly 3 to 5 million dollars per month, or approximately 36 to 60 million dollars per year, while the rest of the campus continues expanding. Instead of waiting years for a massive hyperscale facility to be completed, the project can begin generating revenue within 12 to 18 months.
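The revenue math for that first phase can be sketched directly from the figures above. The lease and capex ranges are the article's; the simple-payback line is a deliberately crude illustration that ignores opex, power costs, and financing:

```python
# Revenue from a first 20 MW modular deployment, using the article's ranges.

deploy_mw = 20
lease_per_mw_month = (150_000, 250_000)   # $/MW/month market range cited above
capex_per_mw = (8_000_000, 12_000_000)    # standard build; AI-dense builds run higher

monthly_low = deploy_mw * lease_per_mw_month[0]
monthly_high = deploy_mw * lease_per_mw_month[1]
print(f"monthly revenue: ${monthly_low / 1e6:.0f}M - ${monthly_high / 1e6:.0f}M")
print(f"annual revenue:  ${monthly_low * 12 / 1e6:.0f}M - ${monthly_high * 12 / 1e6:.0f}M")

# Rough simple payback on phase-one infrastructure (capex vs. revenue only,
# ignoring operating costs -- a back-of-envelope sketch, not a pro forma):
payback_years_best = (deploy_mw * capex_per_mw[0]) / (monthly_high * 12)
print(f"best-case simple payback: {payback_years_best:.1f} years")
```

At the article's ranges this works out to $3M-$5M per month, or roughly $36M-$60M per year, while the remainder of the campus is still under construction.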

As additional power becomes available the campus grows from twenty megawatts to one hundred megawatts, then several hundred megawatts, and eventually the full one gigawatt capacity. By the time the campus reaches full scale, the project may already be generating hundreds of millions of dollars annually.

There is also another strategic advantage that is becoming increasingly important: mobility of infrastructure.

If power availability changes, new energy sources come online, or grid constraints shift to another region, modular facilities can be redeployed where energy exists. Massive fixed hyperscale buildings cannot move.

This dramatically changes the risk profile.

Traditional hyperscale development concentrates 10 to 20 billion dollars into a single permanent structure. Modular campuses distribute capital across infrastructure that scales directly with available power.

In a world where energy has become the limiting factor for digital growth, the future of hyperscale development may not be one giant building. It may be gigawatt scale campuses built from modular infrastructure designed to grow with power.

# # #

About the Author

Kliton Agolli, Co-Founder, Board Member & Director of Global Growth, Northstar Technologies Group | Naples, Florida

Kliton Agolli is a senior security and international business development executive with more than 35 years of experience operating at the intersection of national security, executive protection, counterintelligence, and global commercial expansion. His career spans military service, law enforcement, VIP and diplomatic protection, healthcare and hospitality security, and cross-border business development in complex and high-risk environments.

At Northstar Technologies Group, Mr. Agolli leads global growth strategy, international partnerships, and strategic market expansion. He plays a key role in aligning advanced security and infrastructure technologies with government, defense, healthcare, and mission-critical commercial clients worldwide. His work focuses on risk-informed growth, regulatory compliance, and building long-term strategic alliances across Europe, the Middle East, and the United States.

The post The 1 Gigawatt Data Center Dilemma appeared first on Data Center POST.

Purpose-Built for AI: The Shift Toward Modular Data Center Infrastructure

26 March 2026 at 14:00

Originally posted on Compu Dynamics.

Discover how AI is transforming mission‑critical infrastructure: from modular data center design and liquid cooling to extreme power density and purpose‑built AI facilities, Steve Altizer, President and CEO of Compu Dynamics, covers these topics in this recent conversation.

At PTC 2026 in Hawaii, Isabel Paradis of HOT TELECOM sat down with Altizer to discuss how AI is reshaping the way modular data centers are designed now and in the future.

AI Is Rewriting the Rules of Data Center Design

AI is transforming data centers. While many are still trying to shoehorn AI workloads into traditional designs, that approach is only going to last a few more years. Hyperscalers are leading the way into an AI‑centric future, where liquid cooling – once a specialty – is now becoming standard across the industry.

Retrofitting conventional colo or cloud facilities for AI is not ideal. It’s not as cost effective as doing something that’s purpose built, yet building AI‑only facilities also carries risk, because repurposing that heavy investment later is difficult. The industry is therefore moving toward modular infrastructure, which allows for hybrid, purpose‑built AI facilities that remain flexible enough to serve a range of customers.

To continue reading, please click here.

The post Purpose-Built for AI: The Shift Toward Modular Data Center Infrastructure appeared first on Data Center POST.

AI’s Overlooked Bottleneck: Why Front-End Networks Are Crucial to AI Data Center Performance

24 March 2026 at 14:00

By Mike Hodge, AI Solutions Lead, Keysight Technologies

It’s the heart of the AI gold rush, and everyone wants to capitalize on the next big thing. Large language models, multimodal systems, and domain-specific AI workloads are moving from experimentation to production at scale. Across industries, enterprises are building their own proprietary models or integrating pre-trained ones to power applications spanning from video analytics to highly specialized inference services.

This shift has triggered a new wave of infrastructure investment. But while GPUs and accelerators dominate the conversation, scaling AI platforms has produced a less obvious constraint: front-end network performance. In increasingly distributed, multi-tenant AI environments, the ability to move data efficiently into (and across) platforms has become just as critical as raw compute density.

New AI platforms mean new expectations for infrastructure

AI infrastructure is no longer the exclusive domain of a handful of hyperscalers. A growing class of service providers has begun offering end-to-end AI platforms where compute, storage, networking, and orchestration are delivered as a service. Their value proposition is straightforward: customers bring data and models, while the platform handles the complexity of building, operating, and maintaining large-scale data center deployments.

Service models like these, however, place extraordinary demands on networking. Unlike traditional cloud workloads, AI jobs are defined by massive, sustained data movement and tight coupling between data pipelines and compute utilization. GPUs cannot perform at peak efficiency unless data arrives on time, in the right order, and at predictable speeds.

As a result, network performance is now one of the primary determinants of training, inference, and infrastructure efficiency.

The eye of the storm is moving from the fabric to the front end

AI infrastructure discussions often focus on back-end fabrics. Think about things like high-bandwidth, low-latency interconnects between GPUs, for example. However, while these fabrics are indeed essential, they are only part of the picture.

Before training or inference ever begins, data must first traverse the front-end network. This occurs in several ways, but some of the most common paths include:

  • From remote object stores or on-premises repositories into the data center
  • From ingress points into virtual machines or containers
  • From storage into GPU-attached hosts

This is where north-south traffic (external to internal) intersects with east-west traffic (host-to-host and service-to-service). And in AI environments, these flows are not occasional spikes. They are sustained, high-throughput, latency-sensitive streams that run continuously throughout the lifecycle of a job.

When front-end networks underperform, the consequences are costly and immediate: idle accelerators, elongated training windows, unpredictable inference latency, and poor multi-tenant isolation.
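The first of those consequences, idle accelerators, is easy to put a price on with a back-of-envelope sketch. All figures below (cluster size, $/GPU-hour, run length) are illustrative assumptions, not numbers from the article:

```python
# Back-of-envelope cost of data starvation: what a drop in GPU utilization
# caused by front-end network stalls costs over a single training run.
# All figures are illustrative assumptions.

gpus = 1_024                  # assumed cluster size
gpu_hour_cost = 3.00          # assumed $/GPU-hour (rental or amortized)
run_hours = 24 * 14           # a two-week training run

def stall_cost(util_drop: float) -> float:
    """Dollar value of GPU-hours lost to a given utilization drop."""
    return gpus * run_hours * util_drop * gpu_hour_cost

for drop in (0.05, 0.10, 0.20):
    print(f"{drop:.0%} utilization lost to I/O stalls -> ${stall_cost(drop):,.0f}")
```

Even a few percent of utilization lost to slow or jittery data delivery compounds into a meaningful line item at cluster scale, which is why the network is now a first-order efficiency lever.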

Why traditional network validation falls short

Most cloud networks were designed around general-purpose workloads. Think about things like web services, databases, and transactional systems with relatively modest bandwidth demands and fluctuating traffic patterns punctuated by the occasional spike.

AI workloads, on the other hand, break these assumptions. On the front end, AI traffic is characterized by:

  • Extremely large data transfers, often using jumbo frames
  • Long-lived connections, sustained over hours or days
  • Millions of concurrent sessions in multi-tenant environments
  • Tight latency and jitter tolerances to avoid starving accelerators

Conventional network testing approaches — such as synthetic benchmarks, isolated link tests, or small-scale simulations — are unable to replicate this behavior. As a result, many issues only surface once customer workloads are already running, which also happens to be when the cost of remediation is highest.

The need for realistic workload emulation

Optimizing front-end AI networks requires the ability to reproduce real workload behavior at scale. That means emulating both north-south and east-west traffic patterns simultaneously, across distributed environments and under sustained load.

For north-south paths, this includes verifying that large datasets can be reliably pulled from diverse external sources into local storage. Moreover, the network must also be able to do so with consistent throughput, predictable latency, and no silent data loss. Transfers like these are essential, as any inefficiency propagates directly into longer training times and underutilized GPUs.

For east-west paths, the challenge shifts to connection density, latency, and scalability. Once workloads are running, virtual machines and services exchange data continuously. Sometimes within the same host, sometimes across racks, and sometimes across geographically separated data centers. Modern AI platforms increasingly rely on SmartNICs and offload technologies to make this feasible, so these components must also be validated under realistic connection rates and protocol behavior.

Without large-scale, workload-accurate testing, subtle bottlenecks — such as rule-processing limits, connection-tracking inefficiencies, or unexpected latency spikes — can remain hidden until production traffic exposes them.

Front-end optimization is a competitive differentiator

In response, the most advanced AI platform operators are shifting left: validating their front-end networks before customers ever deploy workloads. Along the way, their proactive approach is changing the economics of AI infrastructure.

Stress-testing networks under real-world conditions offers a range of benefits for network operators:

  • Identifying performance cliffs at high line rates
  • Understanding how different layers of the stack interact under load
  • Resolving scaling limitations in NICs, virtual networking, or storage paths
  • Delivering predictable performance across tenants and geographies

It’s not just about improving peak throughput. It’s about building confidence that platforms perform as expected under peak pressure. And in a market where AI workloads are expensive, time-sensitive, and strategically important, this confidence becomes a differentiator. Customers may never see the network directly, but they feel its impact in faster training cycles, lower inference latency, and fewer production surprises.

Looking ahead: front-end networks and the next generation of AI

AI workloads continue to evolve. Microservices-based architectures, distributed inference pipelines, and increasingly stateful services are placing even more emphasis on low-latency, high-availability front-end connectivity. At the same time, data is becoming more geographically distributed, pushing platforms to span multiple regions and network domains.

In this environment, front-end networks are no longer a supporting actor. They are a core component of AI system design. That means they must be engineered, validated, and optimized with the same rigor applied to compute and accelerators.

The lesson is clear: operators cannot optimize AI infrastructure by focusing on GPUs alone. The performance, efficiency, and reliability of tomorrow’s AI platforms will be defined just as much by how well they move data as by how fast they process it.

The post AI’s Overlooked Bottleneck: Why Front-End Networks Are Crucial to AI Data Center Performance appeared first on Data Center POST.

Capacity LATAM 2026 Signals a New Era of AI, Cloud, and Data Center Growth Across Latin America

24 March 2026 at 13:00

Capacity LATAM 2026, held March 17-18 in São Paulo, Brazil, made it clear that Latin America’s digital infrastructure market is no longer defined by potential, but by execution. As demand for cloud, AI, and connectivity accelerates across the region, the conversation has shifted from future opportunity to immediate deployment where power, capital, and collaboration must align to keep pace with growth.

Across the event, the narrative moved well beyond subsea routes and international traffic flows. Instead, speakers focused on how Latin America is becoming a destination for data creation, processing, and storage. With the region’s data center market projected to nearly double by 2030, investment is accelerating across Brazil, Mexico, Chile, and Colombia, while emerging markets are beginning to play a more strategic role in regional infrastructure planning.

Collaboration emerged as a central theme, particularly as infrastructure deployments become more complex and capital-intensive. During the “From Fiber to Facility” keynote, Gabriel del Campo, Data Center Vice President at Cirion Technologies, emphasized that scaling data centers and networks across Latin America requires tighter alignment between operators, fiber providers, and hyperscalers. That coordination is increasingly necessary to navigate supply chain challenges and accelerate time to market in a region where demand is rising quickly.

Investment momentum continues to build, with the “LATAM’s $100B Digital Surge” keynote framing the scale of capital entering the market. Rodolfo Macarrein, Partner at Altman Solon, highlighted how shifting political and regulatory dynamics are influencing where and how capital is deployed, while reinforcing that long-term demand fundamentals remain strong. Key markets such as São Paulo, Santiago, and Querétaro are emerging as focal points for AI-ready capacity, driven by hyperscale expansion and enterprise demand.

AI infrastructure is already beginning to shape the next phase of development. In the AI keynote, Ivo Ivanov, CEO at DE-CIX, pointed to the rise of next-generation digital hubs designed for high-density compute, where power availability, connectivity, and scalability must be considered from day one. José Eduardo Quintella, CEO at Terranova, reinforced this by highlighting how speed to deployment and execution are becoming critical differentiators, particularly as new facilities are being delivered on accelerated timelines to meet demand.

Connectivity remains the backbone of this transformation. The subsea keynote highlighted new systems such as Firmina and Humboldt that are expanding capacity and reducing latency between Latin America and global markets. Peter Wood, Senior Research Analyst at TeleGeography, emphasized the strategic importance of these routes in supporting cloud expansion and future AI workloads, particularly as latency-sensitive applications become more prevalent across the region.

Energy is quickly becoming one of the most important variables in the region’s growth trajectory. As discussed throughout the energy and infrastructure sessions, access to reliable and sustainable power will ultimately determine how quickly Latin America can scale to meet demand. Renewable energy partnerships, evolving grid strategies, and new power procurement models are all playing a role in shaping where future capacity will be built.

What stood out most across Capacity LATAM 2026 was the level of alignment between stakeholders. Operators, investors, and policymakers are increasingly focused on the same challenge: how to scale infrastructure quickly while addressing constraints around power, supply chains, and regulatory complexity. The shift toward AI-ready infrastructure, combined with sustained cloud demand, is accelerating timelines and raising the stakes for execution.

As the event concluded, the broader message was clear. Latin America is no longer simply part of the global network; it is becoming a critical region where infrastructure must be built to support both local demand and international data flows. The next phase of growth will depend on how effectively the region can translate investment into deployable, scalable infrastructure.

Upcoming Capacity events will continue to spotlight the trends shaping digital infrastructure worldwide, from AI-driven demand to evolving connectivity models. Explore the full event calendar at www.capacityglobal.com/events to see where the industry is heading next.

Dates for Capacity LATAM 2027 are not yet available; for information, please visit www.capacityglobal.com/events.

The post Capacity LATAM 2026 Signals a New Era of AI, Cloud, and Data Center Growth Across Latin America appeared first on Data Center POST.

AI Workloads and the Implications for High-Density Data Centre Design

23 March 2026 at 14:00

AI workloads are pushing data centre infrastructure towards higher rack densities, new cooling strategies and greater power demand. Jamie Darragh, Data Centre Director, Europe, at global data centre engineering design consultancy Black & White Engineering, examines the design implications for the next generation of facilities.

AI and high-performance computing are placing new demands on data centre infrastructure. Rack densities are increasing; facilities are being delivered at larger scale and operators are under pressure to support workloads that consume far greater levels of power and generate far higher heat loads than conventional cloud environments.

Independent forecasts underline the pace of expansion. Gartner estimates global data centre electricity consumption will rise from around 448 TWh in 2025 to roughly 980 TWh by 2030, driven largely by AI-optimised computing infrastructure. Within that growth, AI servers alone are expected to account for close to 44% of data centre power consumption by the end of the decade.
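As a quick sanity check on those figures, the implied annual growth rate can be derived from the two endpoints (a simple worked calculation, not part of the Gartner report):

```python
# Implied growth rate behind the cited forecast:
# 448 TWh in 2025 rising to 980 TWh in 2030 (a 5-year span).
start_twh, end_twh, years = 448.0, 980.0, 5

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")          # roughly 17% per year

# AI servers' share of the 2030 total at ~44%:
ai_share_twh = 0.44 * end_twh
print(f"AI servers by 2030: ~{ai_share_twh:.0f} TWh")
```

That implied growth rate of roughly 17% per year underlines why the article describes the pace of expansion as unprecedented.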

For our engineering teams, these workloads are altering the practical limits of traditional infrastructure design. Rack densities exceeding 100–200 kW are now appearing in project specifications, particularly where large AI training clusters are planned. These loads influence every part of the building environment, from electrical distribution and cooling capacity to structural loading and cable management.

Designing for extreme density

Under these conditions, air cooling alone becomes difficult to sustain across entire facilities. Liquid cooling is therefore increasingly included in the baseline design of new data centres rather than introduced later as a specialist solution. This cooling method is becoming increasingly favourable due to its higher specific thermal capacity compared with air, which enables more efficient heat transfer and removal. Direct-to-chip and rack-level systems are being designed alongside air cooling so facilities can accommodate different densities and equipment types across the same site.
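The thermal advantage of liquid can be made concrete with the basic heat-balance relation Q = ṁ·cp·ΔT. The sketch below, using typical property values and an illustrative 100 kW rack (numbers not from this article), compares the flow rates air and water would need to carry the same heat load:

```python
# Coolant flow needed to remove a 100 kW rack's heat at a 10 K
# coolant temperature rise, from Q = m_dot * cp * dT.
Q_W, dT_K = 100_000.0, 10.0          # heat load in watts; temperature rise in kelvin

cp_air, rho_air = 1005.0, 1.2        # specific heat J/(kg*K); density kg/m^3 (typical)
cp_water, rho_water = 4186.0, 998.0

for name, cp, rho in [("air", cp_air, rho_air), ("water", cp_water, rho_water)]:
    m_dot = Q_W / (cp * dT_K)        # required mass flow, kg/s
    v_dot = m_dot / rho              # required volumetric flow, m^3/s
    print(f"{name}: {m_dot:.2f} kg/s  ({v_dot:.4f} m^3/s)")
```

Water's roughly four-times-higher specific heat, combined with its far greater density, reduces the required volumetric flow by several orders of magnitude, which is why direct-to-chip and rack-level liquid loops become attractive at these densities.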

The introduction of liquid systems requires careful coordination between disciplines. Facilities must manage environments where air and liquid cooling operate together, supported by monitoring platforms, safety controls and operational procedures capable of supporting both approaches.

Some IT chips require different liquid cooling temperatures than those used in air-cooling systems, creating technical hurdles for the overall heat rejection system and requiring precise control of the cooling circuit temperature. Another engineering challenge lies in integrating these systems with power distribution, control platforms and maintenance strategies rather than selecting one cooling method over another.

Higher density also narrows operational tolerance. Commissioning becomes more demanding and redundancy strategies require more detailed modelling. Infrastructure must be capable of supporting peak compute demand while maintaining efficiency when loads are lower, placing greater emphasis on flexible electrical and mechanical systems.

The scale of development is also increasing. Buildings that once delivered a few megawatts of capacity are now part of campus-scale developments where multiple data halls contribute to facilities delivering hundreds of megawatts. Data centres are increasingly planned and delivered as long-term infrastructure assets rather than individual projects.

This environment encourages repeatable design and industrialised delivery methods. Developers and investors expect predictable construction schedules and consistent performance across multiple sites. As a result, engineering teams are placing greater emphasis on modular infrastructure systems and digital design methods that allow mechanical and electrical systems to be configured and deployed repeatedly.

Power, control and operational intelligence

Power availability is also becoming a determining factor in project planning. In many regions, grid connection capacity is now one of the main constraints on new development. Gartner has warned that by 2027 as many as 40% of AI data centres could face operational limits because of power availability.

Developers are therefore engaging more closely with utilities during early feasibility stages and exploring complementary infrastructure such as on-site generation and energy storage. In some cases, data centres are also being designed to contribute to wider grid stability through demand response and energy management capability.

Artificial intelligence is also beginning to influence how facilities themselves are operated. Machine-learning systems are already being used in some environments to optimise airflow patterns, cooling plant performance and power distribution using live operational data.

The next stage will see more widespread use of integrated control platforms and digital twins capable of modelling facility behaviour in real time. These systems allow operators to simulate infrastructure performance under different load conditions, test operational changes and identify maintenance requirements before faults occur.

Environmental performance remains another constraint as compute density increases. Higher workloads place additional pressure on energy supply while raising questions around water consumption, construction materials and waste heat recovery. Planning authorities and investors are increasingly looking for measurable improvements in efficiency and carbon reporting before approving new developments. Sustainability therefore sits alongside power and cooling as a central engineering consideration rather than a secondary design feature.

Taken together, these conditions create a more complex design environment for data centre infrastructure. Higher compute densities, power constraints and new operational technologies require mechanical, electrical and digital systems to be considered together from the earliest design stages.

Facilities intended to support AI workloads must accommodate far greater performance requirements than earlier generations of data centres while remaining adaptable as infrastructure technologies and operating practices continue to develop.

# # #

About the Author

Jamie Darragh is Data Centre Director, Europe at Black & White Engineering. He leads the delivery of complex, mission-critical projects across the region, with a focus on technical quality, design coordination and strong client relationships. A Chartered Engineer and member of CIBSE and the IET, Jamie has worked across Europe, the Middle East and the UK since 2005. He brings a clear, practical approach to engineering challenges, combining technical expertise with commercial awareness. He is committed to developing teams that work collaboratively and perform at a high level. Jamie has received several industry awards, recognising both his technical capability and his impact on the built environment including ‘Engineer of the Year’ at leading Middle East industry awards.

The post AI Workloads and the Implications for High-Density Data Centre Design appeared first on Data Center POST.

When Your Data Center Becomes a Liability Overnight

19 March 2026 at 14:00

How Centralized Infrastructure Intelligence Turns Emergency Replacements into Controlled Operations

Most infrastructure professionals spend their careers building for the planned: capacity expansions, technology refreshes, migration cycles that unfold over quarters or years. And then a Monday morning email changes everything.

A government agency bans equipment from a trusted vendor. A threat intelligence report reveals that a state-sponsored actor has been inside your network switches for eighteen months. A manufacturer announces that the platform running your entire campus backbone loses support in nine months. In each case, the same question emerges: how quickly can you identify every affected device across every facility, and how fast can you replace them without breaking what still works?

For a surprising number of organizations, the honest answer is: they don’t know. That gap between confidence in steady-state operations and readiness for unplanned mass replacement is where real risk lives.

The Forces That Turn Infrastructure Upside Down

Emergency hardware replacement at scale is not hypothetical. Recent years have produced real-world triggers across four broad categories, each with distinct operational implications.

Regulatory and geopolitical mandates. The federal effort to remove Chinese-manufactured telecommunications equipment from American networks—driven by the FCC’s Covered List and Section 889 of the National Defense Authorization Act—has forced carriers and federal contractors into wholesale infrastructure replacement on compliance timelines that don’t flex for budget cycles. The FCC has estimated the total program cost at nearly five billion dollars. Any organization touching federal dollars must verify its infrastructure is clean; if it isn’t, replacement is a compliance obligation, not a planning exercise.

Security crises that outpace patching. The Salt Typhoon campaign revealed that Chinese state-sponsored hackers had penetrated multiple major US telecommunications providers, maintaining persistent access for up to two years—exploiting legacy equipment, unpatched router vulnerabilities, and weak credential management. Investigators found routers with patches available for seven years that had never been applied. For affected carriers, the response demanded physical replacement of compromised infrastructure that could no longer be trusted regardless of patch status. When an adversary achieves sufficient persistence, patching becomes insufficient. Replacement is the only reliable remediation.

End-of-life announcements. Vendor lifecycle decisions create quieter but equally urgent pressure. An organization running multiple hardware platforms faces different end-of-support timelines for each, and dependencies between them mean replacing one can cascade into forced changes elsewhere. Without a consolidated view of what is running, where, and when it loses support, these effects are invisible until they cause failures.

Architectural shifts. Zero trust adoption, SASE frameworks, and cloud-delivered security are rendering entire categories of on-premises equipment architecturally obsolete—not because they’ve failed, but because the security model has moved on. The question is not whether legacy VPN appliances and perimeter firewalls will be replaced, but how quickly, and whether the organization has the visibility to execute in a controlled manner.

Why Standard Processes Break Down

Every mature IT organization has IMAC processes: Install, Move, Add, Change. These handle the predictable rhythm of infrastructure life. Emergency replacement programs share almost none of their characteristics.

They are triggered externally. Their scope is massive—hundreds or thousands of devices across multiple sites. They arrive without allocated budgets or pre-positioned inventory, carrying compliance deadlines indifferent to resource constraints.

The organizations that handle these events well recognize them for what they are: standalone programs needing their own governance, funding, and dedicated teams—and their own information infrastructure. That last requirement is where centralized infrastructure management becomes not a convenience but a prerequisite.

What Centralized Infrastructure Intelligence Must Deliver

Four questions—answered immediately.

What is affected, and where is it? When a regulatory notice references a specific manufacturer, or a security advisory identifies a particular hardware model and firmware version, the operations team needs a definitive count within hours, not weeks. Organizations maintaining a continuously updated centralized inventory—capturing hardware models, firmware versions, physical locations, logical roles, and contractual associations—can answer by running a query. Organizations relying on spreadsheets and periodic audits cannot. The difference in response time is typically measured in weeks, and in a compliance-driven scenario, weeks are what you don’t have. Equally important is dependency mapping: understanding that replacing a core switch will affect upstream routers, downstream access switches, and out-of-band management paths. Without it, a replacement that looks straightforward on paper can produce cascading outages in execution.
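As an illustrative sketch of that "answer by running a query" capability (the schema, field names, and device records below are hypothetical, not any specific product's), the definitive-count question reduces to a filter over a centralized inventory:

```python
# Hypothetical centralized inventory: each record captures hardware
# identity, firmware, location, and logical role.
inventory = [
    {"id": "sw-001", "vendor": "VendorX", "model": "X-9500",
     "firmware": "2.1.4", "site": "DC-East", "role": "core-switch"},
    {"id": "sw-002", "vendor": "VendorX", "model": "X-9500",
     "firmware": "3.0.1", "site": "DC-West", "role": "core-switch"},
    {"id": "rt-010", "vendor": "VendorY", "model": "R-200",
     "firmware": "1.9.9", "site": "DC-East", "role": "edge-router"},
]

def affected_devices(inv, vendor, model, max_firmware):
    """Devices matching an advisory: given vendor/model with firmware
    at or below a vulnerable version (dotted-string compare suffices
    here because all versions share the same x.y.z width)."""
    return [d for d in inv
            if d["vendor"] == vendor and d["model"] == model
            and d["firmware"] <= max_firmware]

hits = affected_devices(inventory, "VendorX", "X-9500", "2.9.9")
by_site = {}
for d in hits:
    by_site[d["site"]] = by_site.get(d["site"], 0) + 1
print(len(hits), by_site)
```

A real system would add the dependency mapping the article describes, linking each hit to the upstream and downstream devices its replacement would disturb; the point of the sketch is that with a maintained inventory the scoping question is a query, not an audit.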

What is the replacement path? A legacy switch may need to be replaced by different models depending on port density, power constraints, and compatibility with adjacent equipment. Workflow-driven execution ensures every replacement follows the same approval steps, documentation requirements, and validation procedures—preventing errors that compound in programs spanning hundreds of sites.

Where are we right now? Leadership needs a live view of progress—which sites are lagging, where tasks are stalled, which teams are hitting milestones. This enables resource reallocation, timely escalation of procurement bottlenecks, and an auditable record for regulators. It also surfaces patterns previously invisible: a region that consistently runs behind, or an approval step adding days of unnecessary latency.

What did we learn? Emergency replacements are no longer rare—any organization operating at scale should expect one every few years. Those that conduct structured post-project reviews build a compounding advantage: better scoping templates, more accurate resource models, and pre-validated replacement mappings that make the next response faster.

Building Readiness Before the Next Crisis

Emergency replacements cannot be made painless—they are disruptive, expensive, and stressful regardless of preparation. But the difference between an organization that navigates one in three months and one that takes twelve is almost entirely a function of work done before the trigger.

That preparation has three dimensions: information readiness (a continuously updated inventory with hardware identity, location, firmware status, and dependency relationships), process readiness (defined workflow-driven procedures that activate quickly rather than being reinvented under pressure), and organizational readiness (governance, budget authority, and executive sponsorship that allows an emergency program to stand up as a dedicated initiative).

The organizations best positioned for the next regulatory mandate, zero-day disclosure, or end-of-life cascade are investing in that readiness today—not because they know what the trigger will be, but because they’ve built a discipline prepared for all of them.

# # #

About the Author

Oliver Lindner has over 30 years of experience in IT and the management of IT infrastructures with a focus on data centers. He has worked for many years at FNT Software, a leading provider of integrated software solutions for IT management. In his current position as Director of Product Management, he is responsible for the strategic direction and continuous improvement of the software products for data centers. The aim is to support customers in the efficient and transparent design of their IT infrastructure.

Oliver Lindner attaches great importance to customer focus, innovation and quality. His expertise also includes the development and provision of Software as a Service (SaaS) solutions that offer customers maximum flexibility and efficiency. To this end, he works closely with his own team, partners and customers to create sustainable and innovative software solutions.

The post When Your Data Center Becomes a Liability Overnight appeared first on Data Center POST.

Data Center HVAC Market to Surpass USD 36 Billion by 2035

19 March 2026 at 13:00

The global data center HVAC market was valued at USD 13.7 billion in 2025 and is estimated to grow at a CAGR of 9.8% to reach USD 36 billion by 2035, according to a recent report by Global Market Insights Inc.
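As a quick check on those figures, compounding the 2025 base at the stated CAGR lands close to the 2035 target (the small gap reflects rounding in the reported numbers):

```python
# USD 13.7 B in 2025 compounding at 9.8% annually through 2035 (10 years).
value_2025, cagr, years = 13.7, 0.098, 10
value_2035 = value_2025 * (1 + cagr) ** years
print(f"USD {value_2035:.1f} billion")   # ~34.9, consistent with the rounded $36 B figure
```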

Growth in the global data center HVAC industry is being fueled by rising computing intensity, expanding AI-driven workloads, and the continued development of hyperscale and enterprise facilities. As server densities increase and high-performance computing environments generate greater thermal loads, advanced cooling infrastructure has become essential to maintain operational stability and uptime. Research and development efforts across the HVAC industry are increasingly focused on liquid cooling technologies and next-generation thermal management systems capable of handling elevated power densities.

At the same time, stricter regulatory oversight related to energy consumption and environmental performance is encouraging operators to enhance system efficiency and reduce carbon output. ESG-focused initiatives and net-zero commitments are prompting facility upgrades aimed at optimizing Power Usage Effectiveness and lowering operating expenses. Improvements in airflow engineering, adoption of sustainable refrigerants, and integration of energy-efficient cooling architectures are reshaping infrastructure strategies. As regulatory expectations and energy costs continue to rise, demand for intelligent, high-efficiency HVAC solutions in data centers is expected to accelerate significantly.
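Power Usage Effectiveness, the efficiency metric referenced above, is simply total facility power divided by IT equipment power. A minimal worked example with illustrative numbers (not drawn from the report):

```python
# PUE = total facility power / IT equipment power; an ideal facility
# approaches 1.0, meaning nearly all power reaches the IT load.
it_load_kw = 1000.0          # power delivered to servers, storage, network
cooling_kw = 350.0           # HVAC / thermal management overhead
other_kw = 50.0              # lighting, UPS and distribution losses, etc.

pue = (it_load_kw + cooling_kw + other_kw) / it_load_kw
print(f"PUE = {pue:.2f}")
```

Because cooling is typically the largest non-IT term in the numerator, the airflow engineering and cooling-architecture improvements described above translate directly into a lower PUE.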

Rising load capacities, sustainability targets, and regulatory compliance requirements are creating pressure for compact, scalable, and adaptable HVAC systems. Industry participants are responding by designing modular cooling platforms that can operate effectively across diverse geographies while maximizing space utilization and energy performance.

The solutions segment of the data center HVAC market accounted for a 76% share in 2025 and is forecast to grow at a CAGR of 8.9% from 2026 to 2035. Advanced monitoring tools equipped with artificial intelligence enable predictive maintenance, improve airflow management, and reduce unnecessary power consumption. Increased adoption of liquid-based cooling technologies is supporting high-density server environments while enhancing reliability and extending equipment lifespan through energy-conscious design.

The air-based cooling technologies segment held a 50% share in 2025 and is projected to grow at a CAGR of 8.8% during 2026-2035. Enhanced airflow optimization systems, variable-speed fan configurations, and intelligent environmental controls are improving thermal consistency and minimizing energy waste. Economizer-enabled designs are facilitating greater use of ambient air, while modular cooling units support scalability across both hyperscale and edge environments. Growing server power density is also accelerating interest in direct cooling and immersion-based methods supported by advanced coolant formulations that enhance heat transfer efficiency.

The United States data center HVAC market reached USD 4.7 billion in 2025. Increasing cloud integration and AI-intensive applications are driving demand for more efficient cooling architectures. Investments are being supported by electrification incentives and decarbonization initiatives, encouraging broader adoption of intelligent HVAC controls and energy-optimized systems. Integration with smart building platforms and grid-responsive technologies is enabling facilities to manage peak loads, reduce demand charges, and incorporate renewable energy sources.

Key companies operating in the global data center HVAC market include Vertiv, Schneider Electric, Carrier Global, Daikin Industries, Trane Technologies, Johnson Controls, STULZ, Alfa Laval, Danfoss, and Modine Manufacturing. Companies in the global market are strengthening their competitive position through continuous innovation, strategic partnerships, and geographic expansion. Leading players are investing heavily in research and development to enhance liquid cooling efficiency, improve airflow intelligence, and integrate AI-driven monitoring systems. Collaborations with cloud service providers and data center developers are enabling customized cooling deployments for high-density environments. Firms are also expanding manufacturing capacity and regional service networks to support rapid infrastructure growth. Sustainability-focused product development, including low-global-warming-potential refrigerants and energy-efficient system architectures, is becoming a central competitive differentiator.

The post Data Center HVAC Market to Surpass USD 36 Billion by 2035 appeared first on Data Center POST.

Middle East Conflict Could Put $30 Billion of Digital Infrastructure at Risk

17 March 2026 at 14:00

Iran’s recent drone strikes across the Gulf revealed a new vulnerability in the global digital economy. For the first time, hyperscale cloud infrastructure that powers banks, fintech platforms, and digital services became a direct target of regional conflict.

According to reporting by Reuters, drone strikes during the regional conflict damaged two AWS data center facilities in the United Arab Emirates, while a nearby strike affected another in Bahrain.

The attacks disrupted power systems, triggered fire suppression systems, and forced operators to isolate affected infrastructure. Several availability zones in the AWS Middle East region went offline while engineers restored operations.

The disruption spread quickly through the regional digital ecosystem.

Banks and fintech platforms reported delayed transactions and degraded services. Consumer applications also experienced outages. Companies including Careem, Emirates NBD, Hubpay, Alaan, Snowflake, and Policybazaar UAE reported disruptions during the incident as cloud workloads failed over to backup infrastructure.

The attacks did not completely destroy the facilities, but they exposed how quickly a localized strike can ripple through a cloud-dependent economy.

Analysts say incidents of this scale typically generate tens of millions of dollars in combined operational losses when infrastructure repair, service downtime, and mitigation costs are included. Cloud operators must repair damaged equipment and restore systems, while customers absorb the cost of interrupted digital services.

A Rapidly Expanding Digital Infrastructure Hub

The Gulf has rapidly become one of the fastest-growing digital infrastructure markets in the world.

Today the Gulf Cooperation Council hosts more than 70 data centers with roughly 557–738 megawatts of live IT capacity.

Country         Estimated Data Centers    IT Capacity
UAE             24–34                     240–376 MW
Saudi Arabia    14+                       ~222 MW
Qatar           7–11                      30–50 MW
Bahrain         6–9                       50–60 MW
Oman            13–16                     10–20 MW
Kuwait          5                         5–10 MW
GCC Total       70+                       557–738 MW

Governments and technology companies have already announced more than $30 billion in new data center investments, and analysts expect Gulf computing capacity to exceed 2 gigawatts by 2030.

The region also hosts an expanding hyperscale cloud ecosystem. The Gulf currently includes around ten cloud regions operated by Amazon Web Services, Microsoft Azure, Google Cloud, Oracle, and Alibaba. These regions contain approximately 20-25 hyperscale facilities, also known as availability zones.

Saudi Arabia’s plans to build a 500-megawatt AI data center complex illustrate the scale of future expansion.

Infrastructure Concentrated in a Few Cities

Despite this growth, most computing capacity remains concentrated in a handful of metropolitan clusters.

Metro Area         Estimated Capacity
Dubai              150–200 MW
Abu Dhabi          100–150 MW
Riyadh             ~110 MW
Dammam / Khobar    60–70 MW
Manama             50–60 MW
Doha               30–50 MW

These hubs contain roughly 80–85 percent of the Gulf’s computing capacity. This concentration means disruptions affecting only a few metropolitan areas could impact most of the region’s cloud infrastructure.

Analysts estimate that up to 70 percent of Gulf data center capacity lies within areas exposed to regional conflict escalation, particularly along the Persian Gulf coastline.

A Global Digital Corridor

The strategic importance of the region extends beyond local markets.

Around 90 percent of internet traffic between Europe and Asia travels through Middle Eastern routes, supported by roughly 20 submarine cable systems and 13 active Internet Exchange Points across the Gulf.

Oman plays a particularly important role in this connectivity network. The country hosts five submarine cable landing stations and connections to more than fourteen international cable systems, positioning it as a key gateway linking Asia, Europe, and Africa.

As hyperscale cloud infrastructure and submarine cable networks continue expanding, the Gulf increasingly serves as a digital bridge between continents.

Conflict Risk Meets Digital Infrastructure

Cloud data centers are no longer just technical facilities; they have become critical infrastructure, and Iran’s strikes demonstrated how modern conflicts now intersect with the systems that power the digital economy.

Cloud data centers now sit alongside ports, pipelines, and power plants as strategic assets. The more the Gulf becomes a hub for cloud infrastructure, AI computing, and global internet traffic, the more regional instability can trigger international digital disruptions.

The attacks on AWS facilities therefore represent more than a regional security incident. They highlight a structural risk: a growing share of global digital infrastructure now operates inside one of the world’s most geopolitically volatile regions.

# # #

About the Author

Matvii Diadkov is a technology investor and operator with over a decade of experience building digital infrastructure platforms across logistics, e-commerce, real estate, blockchain technologies, and AI. His work includes ecosystem-level deployments and advisory roles tied to Vision-aligned digital systems in asset-heavy sectors across Oman and the wider region, where he also advises Gulf businesses on digital transformation and infrastructure development.

The post Middle East Conflict Could Put $30 Billion of Digital Infrastructure at Risk appeared first on Data Center POST.

The New Demands on Data Center and Storage Leaders

16 March 2026 at 18:00

Looking back on a career in IT, I wanted to reflect on the 20-plus years I spent working in and running data centers for Fortune 500 companies in the New York and New Jersey area. This was an exciting time leading both large and small teams through some of the most complex transformations in IT infrastructure. That included designing a trading floor infrastructure for a major bank that was implemented globally, overseeing the merger of two banks with very different IT backbones, driving a mainframe-to-open-systems modernization effort, managing a data center consolidation, and establishing global IT standards.

Today, the challenges to the job are even more profound than transitioning from mainframes to the Internet, digital, mobile, and cloud world. With the advent of AI and explosive data growth from so many more devices and applications, IT infrastructure leaders must rewrite their stories to keep pace.

After moving to the vendor side several years ago and working as a Senior Solutions Architect at Komprise, I get to work with IT leaders daily. I see just how much the role of the infrastructure or data center director has changed. Here’s how I see the shift, with some tips for IT infrastructure directors and executives to stay relevant in their organizations while navigating these cataclysmic shifts in technology and work.

A Shift Toward Complexity and Constant Adaptation

The job of managing data centers and infrastructure has become more multi-faceted. It is no longer just about uptime and physical infrastructure. Directors are now expected to understand a rapidly expanding universe of technologies. There is increased separation of duties and new responsibilities that did not exist 10 years ago. Add in constant security threats, cloud optimization demands, and the exponential growth of unstructured data, which must remain accessible where needed yet safe and secure, and the scope of the role expands fast. And while all of this happens, IT budgets are being squeezed. The mandate remains the same: do more with less.

The Unstructured Data Growth Challenge

A resounding pressure point today is storage and the relentless growth of unstructured data. Recent estimates from IDC show that over 80 percent of enterprise data is unstructured, and that volume is expected to reach 291 zettabytes by 2027.

How do you back it all up in a timely way? How do you replicate it for disaster recovery? How do you ensure protection and accessibility? How do you efficiently prepare it for AI ingestion? It comes down to understanding that not all data is the same: you must treat data differently to manage it efficiently. Knowing what data you have, where it lives, and what value it offers is now a core competency for any infrastructure leader.
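A minimal sketch of that "know what data you have" discipline (illustrative only, not Komprise's product): bucketing files by age to see how much capacity is hot, warm, or cold, which is the first step toward treating data differently:

```python
# Tier file capacity by age: hot (<90 days), warm (<1 year), cold (older).
from datetime import datetime, timedelta

def tier_by_age(files, now=None):
    """files: iterable of (path, size_bytes, last_modified: datetime).
    Returns total bytes per tier."""
    now = now or datetime.now()
    tiers = {"hot": 0, "warm": 0, "cold": 0}
    for _path, size, mtime in files:
        age = now - mtime
        if age < timedelta(days=90):
            tiers["hot"] += size
        elif age < timedelta(days=365):
            tiers["warm"] += size
        else:
            tiers["cold"] += size
    return tiers

# Hypothetical scan results for a file share:
now = datetime(2026, 3, 16)
sample = [
    ("/share/report.docx", 2_000_000, datetime(2026, 2, 1)),       # recent -> hot
    ("/share/logs/2025.tgz", 40_000_000, datetime(2025, 6, 1)),    # months old -> warm
    ("/share/archive/2019.tgz", 900_000_000, datetime(2019, 1, 1)),# years old -> cold
]
print(tier_by_age(sample, now))
```

Even in this toy example the cold tier dominates by capacity, which mirrors the common finding that the bulk of unstructured data is rarely accessed and can be backed up, replicated, and stored on different terms than active data.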

Hybrid IT and Simplification as a Strategy

Over the past few years, I have seen storage and infrastructure strategies shift significantly. The old model of managing everything the same way is obsolete. My approach has always been to keep environments as simple and basic as possible to reduce unnecessary complexity. In today’s typical hybrid IT landscape, that means using tools that are vendor-agnostic, that work across on-prem, outsourced, and cloud environments, and that give you a single dashboard to make informed decisions.

AI, Cost Cutting, and Evolving Job Roles

There is a lot of noise about AI taking over roles in IT. I do not believe that infrastructure managers, storage engineers, or data center professionals should fear for their jobs. However, relying on the status quo is not a strategy. The one necessity I have seen for IT personnel is the ability to adjust and evolve as the IT arena changes.

One thing is certain: AI is becoming ingrained across the business, and IT must be able to support it across every function. Nearly 90 percent of enterprises report regular AI use in at least one business function, compared with 78 percent in 2024, according to 2025 research from McKinsey. Learning how to work with AI, understanding its use cases and business applications, and knowing how to prepare the right data for it are key new skills. Equally important is staying current with cloud technologies and security best practices.

Balancing Cost, Security, and AI Readiness

IT leaders are being asked to walk a tightrope. On one side is the need to control cost and ensure security. On the other side is the drive to make data accessible and ready for AI. Yet these demands are interlinked. Cost control and security are critical to ensure that AI ambitions don’t fail or stall. Without security, AI becomes a liability rather than an advantage. The question facing today’s IT directors is: “How do we make data more accessible without increasing risk or cost?” Success will come from integrating these requirements, not prioritizing one at the expense of the other.

Why It Is Still an Exciting Time to Work in IT Infrastructure

The amount of data being generated is growing tremendously, and data has moved from a support function to a true driver of decisions, products, and strategy. Data is now central to every organization, from predicting outcomes and automating decisions to personalizing experiences in real time. Add the fact that both AI and ML have accentuated the value of data, and there is a lot of opportunity in this area for people who want to grow their careers and remain in IT infrastructure.

The ability to efficiently and strategically manage data and build the right environment for cost control along with flexibility and innovation is a huge need for the enterprise. In our recent industry survey (link) we found that AI data management is a top desired skillset, and organizations are prioritizing hiring individuals who can confidently lead the AI infrastructure discipline.

What’s Ahead for 2026 and Beyond

Looking ahead, I expect infrastructure directors to move beyond managing infrastructure to leading transformation. This means aligning technology with business strategy in areas such as AI integration, cybersecurity, cost control, and workforce development. AI is moving beyond the hype; it’s becoming increasingly relevant in production workflows. Security will remain a priority that must be continuously addressed. Lastly, bridging the talent gap and reskilling existing workforces should be a focus.

Five Tips for Adapting as a Modern Infrastructure Leader

  1. Treat data differently
    Stop managing all data the same way. Understand what is valuable, what is redundant, what is creating undue risks, and what needs to be accessible. Prioritize accordingly.
  2. Focus on vendor-agnostic tools
    Choose solutions that work across vendors, technologies, and architectures, reducing lock-in. This simplifies operations, reduces cost, and delivers better agility.
  3. Invest in learning AI concepts
    You do not need to be a data scientist. But you should understand how AI uses data, and how to prepare infrastructure to support it with proper governance.
  4. Stay current with security developments
    Security threats evolve constantly. Keep up with best practices and build security into every aspect of data and infrastructure management. Partner with the CSO.
  5. Use simplicity as a guiding principle
    Complexity creates risk and inefficiency. Whenever possible, simplify tools, processes, and architectures.


Final Thoughts

The infrastructure director’s role is not what it used to be, and that is a good thing. The scope has grown, the influence has deepened, and the strategic value of IT is clearer than ever. While the challenges are many, so are the opportunities. Those who can adapt, simplify, and lead through change will continue to be essential to their organizations.

# # #

About the Author: 

Paul Romano is a Senior Solutions Architect at Komprise. He has 25 years of experience at Fortune 100 companies, with significant expertise in setting IT direction and policies, data center build-outs and migrations, IT architecture, server and endpoint security, penetration testing, establishing production support standards and guidelines, managing large IT projects and budgets, and integrating new technologies and technology practices into existing environments.

The post The New Demands on Data Center and Storage Leaders appeared first on Data Center POST.

Duos Technologies Finalizes Hydra Host Contract for Distributed AI Infrastructure

16 March 2026 at 17:00

Duos Technologies Group, Inc. (Nasdaq: DUOT), through its operating subsidiary Duos Edge AI, Inc., has executed a definitive contract with Hydra Host, advancing the previously announced plan to deploy a high-density NVIDIA GPU cluster for a leading global technology company. The GPU-as-a-Service (GPUaaS) contract is expected to generate approximately $176 million in revenue over a 36-month term, including an initial $18 million customer prepayment, with projected gross margins exceeding 80 percent and expected annual EBITDA of approximately $40 million.

The agreement establishes Duos Edge AI as an emerging provider of distributed AI infrastructure designed for large-scale compute workloads. Fully funded through Duos Technologies Group’s recently completed $65 million public offering and existing hardware financing arrangements, the partnership enables deployment to commence immediately without reliance on additional equity financing.

“The initial deployment will be located at a strategic site and will consist of multiple high-density modular Edge Data Centers (EDCs), which are specifically designed to support large-scale AI workloads,” said Doug Recker, newly appointed CEO effective April 1, 2026. “Manufacturing of the EDCs is currently underway, with critical power modules already ordered to support deployment timelines.”

The first phase of the project includes an initial colocation commitment of more than 4.3 MW from a leading global technology company that will serve as the project’s anchor tenant. This deployment represents the largest Edge Data Center project in Company history, with additional colocation revenue expected as the site scales toward its full power capacity.

This contract provides strong commercial validation for Duos’ High Power EDC business line, purpose-built for AI companies and high-performance compute tenants that require premium rack space, dedicated high-density power, and rapid deployment. As Duos advances its long-term objective of 75MW of distributed capacity, the Company is actively evaluating additional high-density deployment sites to meet accelerating demand from AI hyperscalers, NeoCloud operators, and other AI infrastructure customers.

To learn more about Duos Technologies Group, Inc., visit www.duostechnologies.com.

The post Duos Technologies Finalizes Hydra Host Contract for Distributed AI Infrastructure appeared first on Data Center POST.

Metro Connect USA 2026 Highlights the Future of U.S. Digital Infrastructure

16 March 2026 at 16:00

Metro Connect USA 2026 brought the digital infrastructure community together in Fort Lauderdale, Florida, Feb. 23 to 25, as executives, investors and network operators gathered to discuss the evolving connectivity landscape. Over three days, conversations across keynote sessions, panels and private meetings focused on how the industry is adapting to the rapid growth of artificial intelligence, cloud services and bandwidth demand.

The 2026 event drew more than 3,700 decision-makers representing over 1,200 companies, reflecting the scale of collaboration and investment shaping the next phase of digital infrastructure development in the United States.

Artificial intelligence was a central theme throughout the conference. Industry leaders discussed how AI workloads are driving new requirements for data center capacity, fiber connectivity and power infrastructure. As AI adoption expands beyond hyperscale environments into enterprise applications and edge deployments, operators are facing increasing pressure to scale networks capable of supporting high-volume data movement and compute-intensive workloads.

Fiber infrastructure also remained a key topic. Discussions throughout the event highlighted continued investment in metro fiber expansion, long-haul backbone routes and fiber-to-the-home networks. As cloud platforms, streaming services and AI applications generate greater data traffic, fiber continues to serve as the underlying foundation supporting the digital economy.

Several speakers addressed how infrastructure and investment strategies are evolving alongside these shifts. Marc Ganzi, Chief Executive Officer at DigitalBridge, discussed the continued influx of capital into digital infrastructure and the importance of disciplined investment as the sector scales. Steve Smith, Chief Executive Officer at Zayo Group, highlighted the role of fiber expansion in supporting enterprise connectivity and hyperscale demand. Alex Hernandez, CEO of PowerBridge, participated in discussions focused on the growing power demands associated with AI infrastructure, including how utilities, data center developers and investors are working to expand power capacity and modernize energy delivery to support large-scale computing environments.

From the investment perspective, Santhosh Rao, Managing Director and Head of Digital Infrastructure at MUFG, explored the evolving capital structures supporting infrastructure development, including structured financing and private credit solutions. Anton Moldan, Senior Managing Director at Macquarie Group, shared insights into how institutional investors continue to evaluate digital infrastructure assets as a long-term growth opportunity within global infrastructure portfolios.

Beyond the formal sessions, Metro Connect remains known for its highly productive networking environment. Thousands of meetings took place across the event’s exhibit floor, private meeting rooms and curated networking gatherings, and many participants noted that the conference continues to serve as a gathering point where partnerships are formed, investment opportunities surface and transactions begin.

Looking ahead, the industry will reconvene next year as Metro Connect USA 2027 moves to a new venue. The event will take place February 8–10, 2027 at the Diplomat Beach Resort in Hollywood, Florida.

The post Metro Connect USA 2026 Highlights the Future of U.S. Digital Infrastructure appeared first on Data Center POST.

Telescent Introduces High-Density Optical Circuit Switching for AI GPU Clusters

12 March 2026 at 18:00

As artificial intelligence infrastructure continues to scale, the physical networks connecting large GPU clusters are becoming increasingly complex. Training environments for large language models and advanced machine learning workloads require massive bandwidth between compute nodes, driving a rapid increase in fiber connectivity inside modern data centers.

Telescent’s latest system addresses these operational challenges with a new high-density robotic cross connect system designed for DR4 and DR8 parallel optics interconnects used in large-scale AI training clusters. The system extends the company’s G5 robotic platform to support the extremely high fiber counts now common in AI cluster architectures.

AI Infrastructure Is Driving Massive Fiber Growth

AI workloads are reshaping the internal design of data center networks. As GPU clusters grow larger and more interconnected, operators are increasingly deploying parallel optics technologies such as DR4 transceivers to support the bandwidth required between compute nodes. While these architectures enable faster data movement across GPU fabrics, they also significantly increase the number of fiber connections that must be installed and managed.

In some environments, a single AI training cluster can include an exceptionally high number of fiber links. Managing those connections manually can slow deployment timelines and increase the risk of configuration errors or service interruptions.

Automation at the Physical Layer

Telescent’s robotic cross connect system is designed to automate physical layer management in these high-density environments. By enabling automated fiber path configuration and reconfiguration, the system allows operators to turn up new cluster resources more quickly while minimizing the manual patching work that traditionally accompanies large-scale network changes.

“The bandwidth requirements of AI infrastructure are rewriting the rules of data center fiber management. A single AI cluster can require hundreds of thousands of fiber connections, and the move to parallel optics architectures like DR4 multiplies that count significantly,” said Anthony Kewitsch, CEO and Founder of Telescent. “Our new high density robotic cross connect system gives operators a powerful automated solution to manage this complexity to ensure maximum GPU utilization and operational efficiency while future proofing the physical layer for the next wave of AI innovation.”

Supporting the Next Phase of AI Infrastructure

As hyperscale operators and AI infrastructure providers deploy increasingly dense compute environments, the operational demands of managing fiber connectivity are growing alongside them. Automation platforms that bring intelligence and remote control to the physical network layer are becoming an important tool for maintaining reliability and flexibility.

Telescent’s robotic automation platform enables software-controlled fiber connectivity across large-scale deployments, helping operators reduce manual intervention while allowing network paths to be reconfigured quickly as infrastructure requirements evolve.

Demonstration at OFC 2026

Telescent will showcase a live demonstration of the new system at the Optical Fiber Communication Conference (OFC) 2026 in Los Angeles from March 17 to 19 at Booth #607. The demonstration will highlight how robotic automation can simplify the management of fiber-dense AI clusters and help operators address the growing connectivity demands of next-generation AI infrastructure.

To learn more about Telescent’s optical automation solutions, visit www.telescent.com.

The post Telescent Introduces High-Density Optical Circuit Switching for AI GPU Clusters appeared first on Data Center POST.

Company Profile: BRIGHTRAY’s Prefabrication Strategy for the AI Era

12 March 2026 at 15:30

Data Center POST had the opportunity to connect with David Wang, Founder and Chairman of BRIGHTRAY, who is leading a new paradigm in data center delivery—speed without compromise, scale with sustainability. With over 25 years of industry experience, including senior leadership at Schneider Electric and HP managing mission-critical infrastructure, Wang founded BRIGHTRAY to address the explosive AI-driven demand for rapid, high-density infrastructure.

Traditional construction can no longer keep pace. That’s why BRIGHTRAY centers its strategy on proprietary Prefabrication Data Center Solutions, enabling ultra-high-density deployment at unprecedented speed. This is proven by the company’s Malaysia milestones: MY-01 (20MW) delivered in 8 months and MY-02 (50MW) completed in just 6 months, setting new benchmarks for speed and scalability.

Looking ahead, Wang is leading BRIGHTRAY’s global expansion from its strong APAC foundation into the U.S. and Middle East markets, with the vision of establishing BRIGHTRAY as “Your Gateway to Excellence in Integrated IDC Services” and building a resilient, sustainable digital backbone for the AI era.

The information below is summarized to provide our readers a deeper dive into who BRIGHTRAY is, what they do and the problems they are solving in the industry.

What does BRIGHTRAY do?  

BRIGHTRAY provides prefabricated data center solutions that are designed and built off-site for faster, more efficient deployment.

What problems does BRIGHTRAY solve in the market?

The company addresses the growing demand for speed and scalability in data center infrastructure. BRIGHTRAY helps clients compress deployment timelines, reduce execution risk, and bring infrastructure online faster, enabling quicker returns and greater adaptability across different environments. The company is capable of delivering a 50MW data center in as fast as 6 months, setting a new industry benchmark.

What are BRIGHTRAY’s core products or services?

Prefabrication Data Center Solutions:

  • Full Prefabrication DC (FPD): prefab of the whole data center, from building structure to core systems
  • Interior Prefabrication DC (IPD): core modules installed in a pre-built shell
  • Containerized Prefabrication DC (CPD): infrastructure delivered in containers

What markets do you serve?

BRIGHTRAY is deeply rooted in the APAC market and is now expanding into the U.S. and Middle East markets.

What challenges does the global digital infrastructure industry face today?

  • Speed vs. Quality: Traditional construction methods take 2-3 years per project, yet AI and cloud demand deployment in months—not years.
  • Sustainability Pressure: Data centers are energy-intensive, and global net-zero targets require radical efficiency improvements.
  • Scalability Constraints: Supply chain bottlenecks, skilled labor shortages, and site limitations hinder rapid expansion.

How is BRIGHTRAY adapting to these challenges?

  • Prefabrication Innovation: Our proprietary solutions (FPD, IPD, CPD) shift construction from on-site to factory-controlled environments, slashing timelines by up to 70%.
  • Speed Records: We’ve proven our model with MY-01 (20MW in 8 months) and MY-02 (50MW in 6 months) —landmark projects in Malaysia that set new industry speed benchmarks and demonstrate BRIGHTRAY’s leadership in powering Asia Pacific’s rapidly growing digital hubs.
  • Global-Ready Design: Our solutions are engineered for “global adaptability,” enabling rapid deployment across diverse environments with consistent quality.

What are BRIGHTRAY’s key differentiators?

  • Proven Speed: 6-month delivery for 50MW capacity—unprecedented in the industry.
  • End-to-End Expertise: Our team brings 10 years across the full lifecycle—design, construction, operations.
  • Sustainability by Design: Prefabrication reduces on-site waste, carbon footprint, and energy consumption.
  • Three Flexible Solutions: FPD (full prefab), IPD (interior prefab), CPD (containerized)—tailored to client needs.
  • Global Vision, Local Roots: Deep APAC expertise, now expanding into U.S. and Middle East markets.

What can we expect to see/hear from BRIGHTRAY in the future?  

  • Global Market Expansion: Following our strong foundation in APAC, we are actively entering the U.S. and Middle East markets. Expect announcements on new partnerships, project deployments, and local operations in these key regions.
  • Next-Generation Prefabrication Solutions: We are continuously evolving our proprietary FPD, IPD, and CPD solutions to support higher densities and greater energy efficiency—purpose-built for the AI era’s demanding workloads.
  • New Project Milestones: Building on our Malaysia success (MY-01: 20MW/8 months; MY-02: 50MW/6 months), we will unveil additional record-breaking deployments that further compress timelines while scaling capacity.

What upcoming industry events will you be attending? 

BRIGHTRAY will be attending Nvidia GTC in San Jose.

Do you have any recent news you would like us to highlight?

BRIGHTRAY breaks record by completing data center in 8 months.

Where can our readers learn more about BRIGHTRAY?  

You can learn more about us on our official website, www.brightraydc.com, or on our LinkedIn.

How can our readers contact BRIGHTRAY? 

You can contact us at marketing@brightraydc.com.

# # #

About BRIGHTRAY

BRIGHTRAY is redefining data center delivery through its pioneering prefabrication solutions. As hyperscale demand surges and speed-to-deployment becomes a decisive competitive edge, BRIGHTRAY empowers its clients to bring high-standard, scalable infrastructure online in just months, dramatically compressing timelines, reducing execution risk, and unlocking faster returns. The BRIGHTRAY team, comprising professionals with over 10 years of data center experience and led by executives with over 20 years of industry leadership, has collectively delivered hundreds of data center projects. The team has built end-to-end capabilities across the full lifecycle—from design and construction to operations—and leverages this deep expertise to pioneer innovative prefabricated data center solutions: Full Prefabrication Data Center (FPD), Interior Prefabrication Data Center (IPD), and Containerized Prefabrication Data Center (CPD). Each solution is engineered around three core principles—speed, resilience, and global adaptability—to enable seamless deployment across diverse environments.

About Data Center POST

Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed and online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.

Data Center POST works hard to get the most current information and thought-provoking ideas most apt to add relevance to the success of the data center industry. Stay informed, visit www.datacenterpost.com.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com or submit your article here.

Want more digital infrastructure news? Stay in the know and subscribe to Data Center POST today!

The post Company Profile: BRIGHTRAY’s Prefabrication Strategy for the AI Era appeared first on Data Center POST.

Scaling Power Density in Urban Carrier Hotels

12 March 2026 at 14:00

Originally posted on 1547realty.

AI and accelerated computing are reshaping expectations for data center infrastructure, and that shift is especially visible inside carrier hotel environments. Research from McKinsey notes that average rack power densities have more than doubled in two years, rising from 8 kilowatts to 17 kilowatts, with projections reaching 30 kilowatts per rack by 2027. Carrier hotels have long served as central meeting points for carriers, content providers, and enterprises, delivering dense interconnection in the heart of major metros. As fifteenfortyseven Critical Systems Realty (1547) has outlined in its connectivity hubs blog, these buildings keep communities and businesses online by concentrating networks and cloud on-ramps in a single, neutral location. For 1547, the focus is evolving these hubs to host modern AI workloads without compromising the connectivity advantages that make them essential.

A Shifting Infrastructure Reality

Carrier hotels were not originally built for AI. Historically, these facilities centered on abundant fiber, building-level power resilience, and space for many carriers to interconnect, with typical cabinet deployments remaining within just a few kilowatts. Dgtl Infra describes carrier hotels as highly interconnected urban facilities where carriers, cloud providers, and enterprises converge to exchange traffic and access key services. Modern GPU-based systems have pushed power density requirements into the tens of kilowatts, with infrastructure manufacturers such as Vertiv pointing to configurations exceeding 100 kilowatts per rack in advanced AI and high-performance computing environments. Instead of asking how much floor space is available, customers now want to know how much usable power can be delivered to each rack and how the facility will manage the resulting heat.

Why Increasing Power Density Creates Unique Challenges for Carrier Hotels

The same characteristics that define carrier hotels also introduce constraints that greenfield campuses do not face. Many occupy historic or mixed-use buildings in dense metro cores, where increasing utility capacity requires coordination with local utilities, municipalities, and building ownership. 1547’s Pittock Block in Portland illustrates this directly, with a century-old downtown landmark transformed into a modern carrier hotel and data center. Cooling presents a parallel challenge. Traditional air-cooled systems adequate for network gear and standard compute begin to struggle as rack densities climb, and McKinsey projects a potential supply deficit by 2030, driven by AI-ready capacity requirements that current infrastructure was not designed to meet.

To continue reading, please click here.

The post Scaling Power Density in Urban Carrier Hotels appeared first on Data Center POST.

Digital Infra 3.0: Power, Fiber, and Edge Will Drive the AI Industrial Revolution

10 March 2026 at 16:00

At Metro Connect USA 2026, held February 22-25 in Fort Lauderdale, Marc Ganzi, Chief Executive Officer of DigitalBridge, delivered a keynote outlining how artificial intelligence is reshaping the digital infrastructure industry. In his address, “Digital Infra 3.0: Building the AI Industrial Revolution,” Ganzi described how the sector is evolving from a connectivity-focused market into a broader ecosystem that includes data centers, fiber networks, edge computing, and energy infrastructure.

Ganzi emphasized that AI has moved beyond hype and is beginning to generate measurable outcomes across industries. While much of the public discussion focuses on applications and large language models, he noted that the true monetization of AI will occur through enterprise and industrial use cases. Manufacturing, agriculture, healthcare, and transportation are already integrating AI-driven automation, robotics, and predictive analytics to improve productivity and efficiency.

These developments rely on a layered infrastructure environment. Hyperscale facilities train AI models, while edge data centers support inferencing workloads closer to where data is used. Fiber networks provide the low-latency connectivity required to move massive volumes of data between locations, and wireless systems connect devices and sensors in the physical world. Beneath all of these components sits an increasingly critical factor: power.

Power availability was a central theme of Ganzi’s keynote. As AI workloads grow, electricity demand is rising faster than grid capacity can keep pace. The digital infrastructure industry is now leasing significantly more power than the grid can bring online each year, creating a widening gap between supply and demand. As a result, developers are increasingly operating as energy strategists, exploring diversified energy approaches that may include microgrids, battery storage, solar, wind, and natural gas generation.

The search for reliable power is also influencing where new infrastructure is built. While traditional hubs such as Northern Virginia remain central to the industry, developers are exploring additional markets where grid access and energy availability make large-scale AI deployments possible. In many cases, power availability has become the deciding factor in site selection.

Despite the focus on energy, Ganzi reminded the audience that connectivity remains essential to the AI economy. The ability to move enormous amounts of data across networks continues to depend on high-capacity fiber infrastructure and low-latency connectivity. Even as AI advances in software and hardware, the underlying network infrastructure remains fundamental.

Ganzi also described the evolution of AI infrastructure in phases. The industry has moved through the early stage of training large language models and is now entering a period where inferencing and edge deployments are expanding. The next stage will involve integrating AI directly into physical environments, where intelligent systems control machines, robotics, and automated processes across multiple industries.

As the sector expands, developers face growing challenges that include power constraints, permitting delays, supply chain pressures, water usage concerns, and increased scrutiny from investors. Ganzi stressed that success will depend on operational discipline, strong customer relationships, and the ability to deliver infrastructure projects reliably and on schedule.

Ultimately, he framed the current moment as the beginning of Digital Infra 3.0, a phase in which digital infrastructure converges with traditional infrastructure to support the AI economy. As AI adoption accelerates, the companies that successfully combine power, connectivity, and compute will play a defining role in building the foundation for the next era of global digital infrastructure.

The discussion around digital infrastructure, connectivity, and AI will continue at the next major Capacity event, International Telecoms Week (ITW) in Washington, D.C., May 18-21, 2026.

To learn more about upcoming events in the Capacity Media portfolio, visit www.capacitymedia.com/events.

The post Digital Infra 3.0: Power, Fiber, and Edge Will Drive the AI Industrial Revolution appeared first on Data Center POST.

Capacity Middle East and Datacloud Middle East 2026 Highlight Rapid Growth in AI and Data Center Infrastructure

9 March 2026 at 15:00

The Middle East has long been described as a geographic bridge connecting Europe, Asia, and Africa. Today, however, the region is becoming far more than a transit corridor. At Capacity Middle East 2026 and Datacloud Middle East 2026, held in Dubai, February 10-12, 2026, industry leaders explored how the region is rapidly evolving into a major destination for digital infrastructure investment. Telecom operators, data center developers, investors, and technology providers gathered to discuss the next phase of growth, which includes expanding connectivity routes, scaling AI-ready data centers, and strengthening the interconnection ecosystems needed to support the region’s digital economy.

The Middle East’s Connectivity Role Is Expanding

For many years, global connectivity discussions framed the Middle East primarily as a transit hub linking international markets. Speakers at Capacity Middle East emphasized that this narrative is evolving as regional internet traffic, enterprise workloads, and cloud adoption continue to grow across the Middle East. Infrastructure strategies are increasingly focused on supporting demand generated within the region itself rather than simply facilitating global transit. This shift is encouraging greater investment in fiber interconnection between data center clusters, cross-border terrestrial routes linking neighboring markets, and internet exchange points that allow regional traffic to remain within the region. As the Middle East’s digital economy expands, more data is being generated and consumed locally, reinforcing the need for robust regional infrastructure.

Hybrid Connectivity Routes Are Gaining Momentum

Another major topic throughout Capacity Middle East was the development of hybrid connectivity routes that combine subsea cables with terrestrial fiber infrastructure. While subsea cables remain the backbone of global connectivity, geopolitical risks and congestion along traditional Red Sea routes have highlighted the need for diversified network paths between Asia and Europe. Operators are increasingly exploring alternative corridors that incorporate land-based routes across regional markets. Industry leaders noted that deploying these hybrid routes is not simply an engineering challenge. Subsea and terrestrial networks operate under different economic models and regulatory frameworks, meaning coordination across multiple jurisdictions will be required to ensure these routes remain commercially viable. Despite those complexities, hybrid infrastructure is expected to play an important role in strengthening global connectivity resilience.

Data Center Development Is Accelerating Across the Region

At Datacloud Middle East, much of the conversation centered on the region’s rapidly expanding data center ecosystem. The Middle East offers several structural advantages that are attracting global infrastructure investment, including competitive energy pricing, available land for hyperscale campuses, strong sovereign investment funds, and coordinated national digital strategies. Market insights shared during the event indicated that vacancy rates across regional data center markets remain low while a significant portion of new capacity is already pre-leased before completion. Although most existing capacity remains concentrated in the United Arab Emirates and Saudi Arabia, emerging markets such as Oman and Jordan are also advancing national initiatives designed to attract new digital infrastructure development and diversify the region’s data center footprint.

AI Is Reshaping Data Center Design

Artificial intelligence infrastructure requirements were a central theme at Datacloud Middle East. Traditional enterprise data centers typically operate at densities between 10 and 20 kilowatts per rack, but AI training clusters are already pushing beyond 100 kilowatts per rack, creating new challenges for power delivery, cooling strategies, and facility design. Because large-scale data center projects often require 18 to 24 months to build, developers must make long-term infrastructure decisions with limited visibility into future workload requirements. As a result, many operators are shifting toward flexible data center architectures capable of supporting both traditional enterprise workloads and high-density AI environments. Rather than designing facilities for a single predictable future state, the industry is increasingly prioritizing adaptability.
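The density figures above translate into a stark difference in how many racks a fixed power budget can feed. As a rough illustration (the 10–20 kW and 100 kW per-rack densities are from the discussion above; the 1 MW power budget is an assumed round number for comparison):

```python
# Back-of-envelope: racks supportable per MW of IT load at different densities.
# Per-rack densities (10-20 kW enterprise, 100 kW AI) come from the article;
# the 1 MW IT power budget is an illustrative assumption.
IT_LOAD_KW = 1_000  # 1 MW of deliverable IT power (assumed)

for density_kw in (10, 20, 100):
    racks = IT_LOAD_KW // density_kw
    print(f"{density_kw:>3} kW/rack -> {racks} racks per MW")
# 10 kW/rack supports 100 racks per MW; 100 kW/rack supports only 10
```

The same megawatt that once powered a full enterprise row feeds only a handful of AI training racks, which is why power delivery and cooling, not floor space, dominate the design conversation.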

Industry Leaders Highlight the Region’s Momentum

Several speakers provided important insights into the trends shaping the Middle East’s digital infrastructure ecosystem. Johan Nilerud, Chief Strategy Officer at Khazna Data Centers, discussed how hyperscale demand and national digital initiatives are accelerating the development of large-scale data center campuses across the Gulf. Karim Benkirane, Chief Commercial Officer at du, highlighted the role telecommunications providers play in enabling cloud adoption and expanding regional connectivity capacity. Mehdi Paryavi, Chairman of the International Data Center Authority, explored how national initiatives such as Oman’s Digital Triangle are positioning emerging markets to compete for future AI and cloud infrastructure investment. Tahir Gok, MENA Lead at datacenterHawk, shared market insights showing continued demand for colocation capacity and strong growth across the region’s key digital hubs. Julian Barratt-Due, Managing Director at KKR, also discussed the growing interest from international investors seeking opportunities to participate in the Middle East’s digital infrastructure expansion alongside sovereign wealth funds.

Interconnection Will Define the Next Phase

A consistent theme across both conferences was the critical importance of interconnection. Data centers, cloud platforms, AI infrastructure, and enterprise networks all rely on strong connectivity ecosystems. Without robust interconnection between facilities, internet exchanges, and regional fiber routes, the full value of new infrastructure investments cannot be realized. Industry leaders emphasized that the next phase of digital infrastructure development in the Middle East will require dense fiber ecosystems, carrier-neutral exchanges, and strong regional connectivity frameworks that allow traffic to move efficiently across markets.

A New Era for Middle East Digital Infrastructure

Capacity Middle East and Datacloud Middle East demonstrated how quickly the region’s infrastructure landscape is evolving. Supported by AI demand, sovereign investment, and coordinated national strategies, the Middle East is rapidly expanding its connectivity and data center capacity. The region’s role in the global digital ecosystem is no longer limited to bridging continents. Instead, it is emerging as a strategic hub where infrastructure is being built to support both global traffic flows and a rapidly growing regional digital economy. As investment continues to accelerate, the conversations taking place in Dubai suggest that the Middle East will remain a central focus of digital infrastructure development in the years ahead.

The next Capacity event will be International Telecoms Week (ITW) in Washington, D.C., May 18-21, 2026.

To learn more about upcoming events in the Capacity Media portfolio, visit www.capacitymedia.com/events.

The post Capacity Middle East and Datacloud Middle East 2026 Highlight Rapid Growth in AI and Data Center Infrastructure appeared first on Data Center POST.

SDC Austin Building B Progress Q1 2026

3 March 2026 at 17:00

Originally posted on Sabey Data Centers.

Following the successful full lease-up of our first data center at our Round Rock, TX campus, we wanted to share a brief construction update on SDC Austin Building B.

Construction is now well underway, with the primary concrete structure rising and vertical construction clearly progressing. The project is tracking to schedule, and site activity has ramped up significantly as we move through early structural milestones.

Building B Highlights:

  • 54MW of total capacity, powered by an onsite substation
  • Fully secured utility power for the entire facility
  • Liquid-cooling-optimized design to support next-generation workloads
  • 6 data halls, each offering 30,000 SF of space
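The published figures imply straightforward per-hall numbers. As a quick sketch, assuming the 54 MW is split evenly across the six halls (the split is an assumption; only the totals are stated above):

```python
# Derive per-hall figures from the published Building B totals.
# Assumption: capacity is divided evenly across the six data halls.
total_mw = 54        # total facility capacity (stated)
halls = 6            # number of data halls (stated)
sf_per_hall = 30_000 # square feet per hall (stated)

mw_per_hall = total_mw / halls                   # 9.0 MW per hall
watts_per_sf = mw_per_hall * 1e6 / sf_per_hall   # 300.0 W per square foot
print(f"{mw_per_hall} MW/hall, {watts_per_sf} W/SF")
```

At roughly 300 W per square foot under that even-split assumption, the design sits well above traditional enterprise power densities, consistent with the liquid-cooling-optimized, next-generation workloads the facility targets.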

You can view a short video of our construction progress here.

The post SDC Austin Building B Progress Q1 2026 appeared first on Data Center POST.
