Three Questions Every AI‑Driven Operator Should Ask Their Infrastructure Partner

1 April 2026 at 16:00

Originally posted on Datalec LTD.

Data centre leaders left ExCeL London earlier this month with one message ringing loud and clear: AI‑driven growth is accelerating, power is tight, and the choice of infrastructure partner is now business‑critical, not optional.

Against a backdrop of rapid hyperscale and colocation expansion, constrained power availability and rising energy scrutiny, the conversations at Data Centre World London 2026 underscored that operators need partners who can help them plan power‑first, deploy at speed, and operate reliably in high‑density environments.

For Datalec Precision Installations (DPI), DCW London was an opportunity to demonstrate exactly that kind of integrated, global capability, from modular data centre solutions through to facilities management, consultancy and lifecycle services. The questions operators brought to the stand were remarkably consistent, whether they were building in the UK, expanding in the Middle East, or planning their next phase of growth in APAC.

Below, we revisit three of the most important questions AI‑driven operators were asking in London and why they will matter even more as the industry converges on Singapore for DCW Asia later this year.

1. How quickly can you take me from secured power to live, AI‑ready capacity?

If there was one common theme at DCW London, it was that power availability has become the primary constraint on new data centre builds, not demand. Once operators have secured land and grid, the urgent requirement is simple: how fast can we safely turn that capacity into revenue‑generating, AI‑ready infrastructure?

This is where modular, pre‑engineered solutions dominated the conversation. Many visitors to the DPI stand wanted to understand how modular white space, plant and service corridors could compress design and construction timelines without sacrificing resilience or compliance. DPI’s next‑generation Modular Data Centre Solutions attracted strong interest because they are designed precisely for this challenge. They help clients move from planning to live halls at speed, whether that’s a new campus in a European hub, a hyperscale expansion in the Middle East, or an edge or colocation site in a fast‑growing APAC market.

To continue reading, please click here.

The post Three Questions Every AI‑Driven Operator Should Ask Their Infrastructure Partner appeared first on Data Center POST.

AI Workloads and the Implications for High-Density Data Centre Design

23 March 2026 at 14:00

AI workloads are pushing data centre infrastructure towards higher rack densities, new cooling strategies and greater power demand. Jamie Darragh, Data Centre Director, Europe, at global data centre engineering design consultancy Black & White Engineering, examines the design implications for the next generation of facilities.

AI and high-performance computing are placing new demands on data centre infrastructure. Rack densities are increasing; facilities are being delivered at larger scale and operators are under pressure to support workloads that consume far greater levels of power and generate far higher heat loads than conventional cloud environments.

Independent forecasts underline the pace of expansion. Gartner estimates global data centre electricity consumption will rise from around 448TWh in 2025 to roughly 980TWh by 2030, driven largely by AI-optimised computing infrastructure. Within that growth, AI servers alone are expected to account for close to 44% of data centre power consumption by the end of the decade.
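As a quick sanity check on the Gartner figures quoted above, the implied compound annual growth rate can be computed directly from the two forecast endpoints (448TWh in 2025, 980TWh in 2030):

```python
# Back-of-envelope check on the forecast cited above:
# growth from ~448 TWh (2025) to ~980 TWh (2030) over five years.
start_twh, end_twh, years = 448, 980, 5

# Compound annual growth rate implied by the two endpoints
cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 17% per year
```

That is a doubling of consumption in five years, growing at roughly 17% per year.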

For our engineering teams, these workloads are altering the practical limits of traditional infrastructure design. Rack densities of 100–200kW are now appearing in project specifications, particularly where large AI training clusters are planned. These loads influence every part of the building environment, from electrical distribution and cooling capacity to structural loading and cable management.
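To give a sense of what those densities mean for electrical distribution, the back-of-envelope below estimates the feeder current for a single 200kW rack. The 415V three-phase supply and unity power factor are illustrative assumptions, not figures from the article:

```python
import math

# Illustrative only: feeder current for a single 200 kW rack, assuming a
# 415 V three-phase supply at unity power factor (both values are
# assumptions for this sketch, not figures from the article).
power_w = 200_000
line_voltage = 415  # volts, three-phase line-to-line

# I = P / (sqrt(3) * V_LL * pf), with pf = 1
current_a = power_w / (math.sqrt(3) * line_voltage)
print(f"Line current: {current_a:.0f} A")  # several hundred amps per rack
```

Several hundred amps to a single rack is far beyond conventional busway and whip sizing, which is why these densities ripple through the entire electrical design.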

Designing for extreme density

Under these conditions, air cooling alone becomes difficult to sustain across entire facilities. Liquid cooling is therefore increasingly included in the baseline design of new data centres rather than introduced later as a specialist solution. Liquid coolants have a far higher specific heat capacity than air, which enables more efficient heat transfer and removal. Direct-to-chip and rack-level systems are being designed alongside air cooling so facilities can accommodate different densities and equipment types across the same site.
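The thermal-capacity advantage of liquid over air can be illustrated with a rough calculation of the coolant flow needed to remove 100kW of heat at a 10K temperature rise. The property values are standard textbook figures, not numbers from the article:

```python
# Rough comparison of coolant flow needed to remove 100 kW of heat with a
# 10 K temperature rise. Property values are textbook figures at ~20 C,
# not from the article.
heat_kw = 100.0
delta_t = 10.0  # kelvin

cp_air, rho_air = 1.005, 1.2       # kJ/(kg*K), kg/m^3
cp_water, rho_water = 4.18, 998.0  # kJ/(kg*K), kg/m^3

# mass flow = Q / (cp * dT); volumetric flow = mass flow / density
vol_air = heat_kw / (cp_air * delta_t) / rho_air        # m^3/s
vol_water = heat_kw / (cp_water * delta_t) / rho_water  # m^3/s

print(f"Air:   {vol_air:.2f} m^3/s")
print(f"Water: {vol_water * 1000:.2f} L/s")
print(f"Volume ratio: {vol_air / vol_water:.0f}x")
```

The same heat load needs thousands of times less coolant volume when carried by water rather than air, which is what makes direct-to-chip and rack-level liquid systems practical at these densities.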

The introduction of liquid systems requires careful coordination between disciplines. Facilities must manage environments where air and liquid cooling operate together, supported by monitoring platforms, safety controls and operational procedures capable of supporting both approaches.

Some chips require coolant supply temperatures that differ from those used in air-cooled systems, creating technical hurdles for the overall heat-rejection system and requiring precise control of cooling-circuit temperatures. Another engineering challenge lies in integrating these systems with power distribution, control platforms and maintenance strategies, rather than simply selecting one cooling method over another.

Higher density also narrows operational tolerance. Commissioning becomes more demanding and redundancy strategies require more detailed modelling. Infrastructure must be capable of supporting peak compute demand while maintaining efficiency when loads are lower, placing greater emphasis on flexible electrical and mechanical systems.

The scale of development is also increasing. Buildings that once delivered a few megawatts of capacity are now part of campus-scale developments where multiple data halls contribute to facilities delivering hundreds of megawatts. Data centres are increasingly planned and delivered as long-term infrastructure assets rather than individual projects.

This environment encourages repeatable design and industrialised delivery methods. Developers and investors expect predictable construction schedules and consistent performance across multiple sites. As a result, engineering teams are placing greater emphasis on modular infrastructure systems and digital design methods that allow mechanical and electrical systems to be configured and deployed repeatedly.

Power, control and operational intelligence

Power availability is also becoming a determining factor in project planning. In many regions, grid connection capacity is now one of the main constraints on new development. Gartner has warned that by 2027 as many as 40% of AI data centres could face operational limits because of power availability.

Developers are therefore engaging more closely with utilities during early feasibility stages and exploring complementary infrastructure such as on-site generation and energy storage. In some cases, data centres are also being designed to contribute to wider grid stability through demand response and energy management capability.

Artificial intelligence is also beginning to influence how facilities themselves are operated. Machine-learning systems are already being used in some environments to optimise airflow patterns, cooling plant performance and power distribution using live operational data.

The next stage will see more widespread use of integrated control platforms and digital twins capable of modelling facility behaviour in real time. These systems allow operators to simulate infrastructure performance under different load conditions, test operational changes and identify maintenance requirements before faults occur.
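A real digital twin models far more than this, but the kind of question it answers can be sketched with a deliberately simple facility model: fixed overhead plus load-proportional cooling, evaluated across different IT load levels to see how efficiency degrades at partial load. All numbers here are invented for illustration:

```python
# A toy facility model of the kind a digital twin might evaluate: fixed
# overhead plus load-proportional cooling, used to show how PUE degrades
# at partial IT load. All parameter values are invented for illustration.
def facility_pue(it_load_kw: float,
                 fixed_overhead_kw: float = 500.0,
                 cooling_per_kw: float = 0.25) -> float:
    """Total facility power divided by IT power (PUE)."""
    total = it_load_kw + fixed_overhead_kw + cooling_per_kw * it_load_kw
    return total / it_load_kw

for load in (2000, 5000, 10000):  # kW of IT load
    print(f"{load} kW IT load -> PUE {facility_pue(load):.2f}")
```

Even this toy model shows why flexible electrical and mechanical systems matter: the fixed overhead dominates at low load, so a facility sized for peak compute demand runs noticeably less efficiently when demand drops.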

Environmental performance remains another constraint as compute density increases. Higher workloads place additional pressure on energy supply while raising questions around water consumption, construction materials and waste heat recovery. Planning authorities and investors are increasingly looking for measurable improvements in efficiency and carbon reporting before approving new developments. Sustainability therefore sits alongside power and cooling as a central engineering consideration rather than a secondary design feature.

Taken together, these conditions create a more complex design environment for data centre infrastructure. Higher compute densities, power constraints and new operational technologies require mechanical, electrical and digital systems to be considered together from the earliest design stages.

Facilities intended to support AI workloads must accommodate far greater performance requirements than earlier generations of data centres while remaining adaptable as infrastructure technologies and operating practices continue to develop.

# # #

About the Author

Jamie Darragh is Data Centre Director, Europe at Black & White Engineering. He leads the delivery of complex, mission-critical projects across the region, with a focus on technical quality, design coordination and strong client relationships. A Chartered Engineer and member of CIBSE and the IET, Jamie has worked across Europe, the Middle East and the UK since 2005. He brings a clear, practical approach to engineering challenges, combining technical expertise with commercial awareness. He is committed to developing teams that work collaboratively and perform at a high level. Jamie has received several industry awards recognising both his technical capability and his impact on the built environment, including 'Engineer of the Year' at leading Middle East industry awards.

The post AI Workloads and the Implications for High-Density Data Centre Design appeared first on Data Center POST.

Duos Edge AI Deploys Second Edge Data Center in Amarillo, Texas Market

20 March 2026 at 13:00

Duos Technologies Group Inc. (Nasdaq: DUOT), through Duos Edge AI, Inc., has announced the deployment of its second Edge Data Center in the Amarillo, Texas market. The new carrier-neutral, SOC 2-compliant facility is located on Potter County land adjacent to the largest colocation facility in the Texas Panhandle, further strengthening digital infrastructure for carriers, healthcare organizations, enterprises, and public sector entities across the region.​

Building on the success of its initial Amarillo deployment, this latest installation expands Duos Edge AI’s footprint in the Panhandle and adds high-density, low-latency computing capabilities for real-time AI applications, enhanced bandwidth, and secure data processing.

“We are proud to deepen our commitment to the Amarillo market with this second deployment, building on the foundation established by our initial EDC, which brought high-performance computing directly to the heart of the Panhandle,” said Dave Irek, Chief Operations Officer of Duos Edge AI. “This expansion enhances capacity and capability in the region, and by partnering on Potter County land adjacent to a premier colocation hub, we are creating a robust, carrier-neutral ecosystem designed to support innovation, attract investment, and drive long-term economic growth.”​

The company said the deployment also helps reduce dependence on data centers located in tier one cities while supporting underserved and high-growth markets across Texas. Duos Edge AI’s broader Texas expansion includes recent installations in Lubbock, Waco, Victoria, Abilene, and Corpus Christi.​

Potter County Judge Nancy Tanner added, “This collaboration with Duos Edge AI represents a significant investment in our community’s future. Positioning this advanced, carrier-neutral data center on county land next to the Panhandle’s largest colocation facility will attract new businesses, improve connectivity for our residents and schools, and position Potter County as a leader in digital infrastructure.”​

The new EDC is expected to be fully operational in the coming months.

To learn more about Duos Edge AI, visit www.duosedge.ai.

The post Duos Edge AI Deploys Second Edge Data Center in Amarillo, Texas Market appeared first on Data Center POST.

Foresight Raises $25M to Tackle Infrastructure Execution Risks in the AI Era

18 March 2026 at 17:00

As global investment in AI infrastructure, power, and advanced manufacturing accelerates, a critical constraint is coming into sharper focus—project execution.

A newly announced $25 million Series A funding round for Foresight underscores a broader industry shift: while capital continues to flow into large-scale infrastructure, delivering these projects on time and on budget remains a persistent challenge.

The current wave of infrastructure investment is unprecedented in both scale and complexity. Hyperscale data centers, energy systems, and advanced industrial facilities are being developed simultaneously across global markets, often with overlapping supply chains and tight delivery timelines.

However, execution has emerged as a systemic issue.

Research indicates that nearly 90% of large-scale infrastructure projects are completed late or exceed budget expectations. In the context of AI infrastructure, delays can have cascading effects—impacting capacity availability, increasing financing costs, and delaying revenue generation.

Industry observers note that as demand for compute continues to surge, particularly for AI workloads, the margin for error in delivery timelines is shrinking.

A Shift Toward Predictive Delivery Models

Foresight, which positions itself as a predictive project delivery platform, is part of a growing cohort of technology providers aiming to address these execution challenges through data and automation.

The company’s platform is designed to move beyond traditional project management approaches—often reliant on static schedules and retrospective reporting—by introducing continuous validation of project progress and early identification of risk factors.

According to the company, its system enables infrastructure owners to establish baseline schedules more quickly, integrate data across stakeholders, and forecast potential delays before they materialize. Early adopters report improvements in forecast accuracy and reductions in cost overruns.
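Foresight's platform is proprietary, but the general idea behind probabilistic schedule forecasting can be sketched with a small Monte Carlo simulation: model each work package with optimistic, most-likely and pessimistic durations, sample many possible project outcomes, and report the probability of overrunning the deterministic baseline. The task durations below are entirely made up:

```python
import random

# Generic sketch of probabilistic schedule forecasting (not Foresight's
# actual method). Three sequential work packages with (optimistic,
# most-likely, pessimistic) durations in days -- all values invented.
tasks = [(20, 30, 50), (10, 15, 30), (40, 60, 90)]
baseline = sum(likely for _, likely, _ in tasks)  # deterministic plan

random.seed(42)
trials = 10_000
late = sum(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks) > baseline
    for _ in range(trials)
)
print(f"Baseline: {baseline} days; P(overrun) ~ {late / trials:.0%}")
```

Because the pessimistic tails are longer than the optimistic ones, the simulated project overruns its baseline far more often than not, which is exactly the asymmetry that static schedules hide and predictive platforms aim to surface.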

While such claims reflect a broader trend toward digitization in construction and infrastructure delivery, they also point to a deeper industry need: greater predictability in increasingly complex builds.

Why Execution Matters More in the AI Era

For data center developers and operators, execution risk is becoming more consequential.

Unlike previous infrastructure cycles, AI-driven demand is both immediate and rapidly evolving. Delays in bringing capacity online can result in missed opportunities, strained customer relationships, and competitive disadvantages in key markets.

At the same time, projects are becoming more interdependent. Power availability, equipment procurement, and site development must align precisely—leaving little room for disruption.

This dynamic is prompting a reassessment of how infrastructure projects are planned and managed, with greater emphasis on real-time data, cross-functional visibility, and proactive intervention.

Expanding Beyond Data Centers

Although the initial focus is on sectors such as hyperscale data centers, the challenges associated with project execution are not unique to digital infrastructure.

Foresight plans to expand its platform into adjacent industries, including energy, defense, and advanced manufacturing—areas that share similar characteristics: large capital commitments, complex supply chains, and high sensitivity to delays.

The company’s recent funding, led by Macquarie Capital Venture Capital, reflects investor interest in solutions that address these systemic inefficiencies.

An Industry Inflection Point

The emergence of predictive project delivery tools signals a broader transformation in how infrastructure is built.

For years, innovation in the data center sector has centered on compute performance, cooling technologies, and energy efficiency. Increasingly, attention is shifting toward the process of delivery itself.

As infrastructure programs continue to scale, the ability to execute with precision may become a defining factor in project success.

In an environment where demand is high and timelines are compressed, the question facing the industry is evolving—from whether projects can be financed to whether they can be delivered as planned.

The post Foresight Raises $25M to Tackle Infrastructure Execution Risks in the AI Era appeared first on Data Center POST.

Duos Edge AI and Seimitsu Strengthen Georgia’s Digital Infrastructure

18 March 2026 at 15:00

Duos Technologies Group Inc. (Nasdaq: DUOT), through Duos Edge AI, Inc., has formed a strategic partnership with Seimitsu to revolutionize digital infrastructure across Georgia. By combining Duos Edge AI’s modular, high-performance solutions with Seimitsu’s expansive high-speed fiber network, the collaboration delivers low-latency processing and high-bandwidth connectivity for businesses, municipalities, and healthcare providers statewide.

“Our mission is to bring the power of the cloud to the street corner. Partnering with Seimitsu allows us to integrate our Edge AI nodes into a robust, reliable fiber backbone, ensuring that Georgia’s industries – from the port of Savannah to Atlanta’s technology corridors – have the infrastructure they need to compete globally,” said Dave Irek, Chief Operations Officer of Duos Edge AI.

As demand for real-time data processing grows, driven by AI, IoT, and autonomous systems, infrastructure closer to end users has become critical. This partnership positions Georgia at the forefront of the Edge revolution with ultra-low latency processing, Seimitsu’s 25 terabits of low-latency fiber capacity across the Southeast, and rapid deployment of Duos Edge AI nodes in underserved and high-demand areas.

Sam Cook, CEO of Seimitsu, added, “For more than 40 years, Seimitsu has been committed to connecting our communities. This partnership with Duos Edge AI represents the next step in that journey. By integrating edge computing directly into our network, we are moving beyond simple transit services and delivering true digital transformation for our clients.”

The partnership supports Duos Edge AI’s nationwide expansion of distributed AI infrastructure through strategic fiber, power, and site partnerships.

To learn more about Duos Edge AI, visit www.duosedge.ai.

The post Duos Edge AI and Seimitsu Strengthen Georgia’s Digital Infrastructure appeared first on Data Center POST.

Metro Connect USA 2026 Highlights the Future of U.S. Digital Infrastructure

16 March 2026 at 16:00

Metro Connect USA 2026 brought the digital infrastructure community together in Fort Lauderdale, Florida, Feb. 23 to 25, as executives, investors and network operators gathered to discuss the evolving connectivity landscape. Over three days, conversations across keynote sessions, panels and private meetings focused on how the industry is adapting to the rapid growth of artificial intelligence, cloud services and bandwidth demand.

The 2026 event drew more than 3,700 decision-makers representing over 1,200 companies, reflecting the scale of collaboration and investment shaping the next phase of digital infrastructure development in the United States.

Artificial intelligence was a central theme throughout the conference. Industry leaders discussed how AI workloads are driving new requirements for data center capacity, fiber connectivity and power infrastructure. As AI adoption expands beyond hyperscale environments into enterprise applications and edge deployments, operators are facing increasing pressure to scale networks capable of supporting high-volume data movement and compute-intensive workloads.

Fiber infrastructure also remained a key topic. Discussions throughout the event highlighted continued investment in metro fiber expansion, long-haul backbone routes and fiber-to-the-home networks. As cloud platforms, streaming services and AI applications generate greater data traffic, fiber continues to serve as the underlying foundation supporting the digital economy.

Several speakers addressed how infrastructure and investment strategies are evolving alongside these shifts. Marc Ganzi, Chief Executive Officer at DigitalBridge, discussed the continued influx of capital into digital infrastructure and the importance of disciplined investment as the sector scales. Steve Smith, Chief Executive Officer at Zayo Group, highlighted the role of fiber expansion in supporting enterprise connectivity and hyperscale demand. Alex Hernandez, CEO of PowerBridge, participated in discussions focused on the growing power demands associated with AI infrastructure, including how utilities, data center developers and investors are working to expand power capacity and modernize energy delivery to support large-scale computing environments.

From the investment perspective, Santhosh Rao, Managing Director, Head of Digital Infrastructure at MUFG, explored the evolving capital structures supporting infrastructure development, including structured financing and private credit solutions. Anton Moldan, Senior Managing Director at Macquarie Group, shared insights into how institutional investors continue to evaluate digital infrastructure assets as a long-term growth opportunity within global infrastructure portfolios.

Beyond the formal sessions, Metro Connect remains known for its highly productive networking environment. Thousands of meetings took place across the event's exhibit floor, private meeting rooms and curated networking gatherings, reinforcing the conference's reputation as a place where partnerships are formed and transactions begin. Many participants noted that the event continues to serve as a gathering point for companies exploring partnerships, investment opportunities and infrastructure projects.

Looking ahead, the industry will reconvene next year as Metro Connect USA 2027 moves to a new venue. The event will take place February 8–10, 2027 at the Diplomat Beach Resort in Hollywood, Florida.

The post Metro Connect USA 2026 Highlights the Future of U.S. Digital Infrastructure appeared first on Data Center POST.

Duos Technologies Signs ~$200M LOI and Appoints Doug Recker as CEO

2 March 2026 at 19:00

Duos Technologies Group, Inc. (Nasdaq: DUOT), a leader in intelligent technologies and digital infrastructure, has signed a non-binding letter of intent (LOI) with Hydra Host to deploy a high-density NVIDIA GPU cluster for a leading global technology customer. The project supports a GPU-as-a-Service (GPUaaS) partnership expected to generate approximately $176 million in revenue over a 36-month term, with gross margins exceeding 80% and projected annual EBITDA of more than $40 million.

“We are thrilled to partner with the Duos team on this opportunity,” said Aaron Ginn, CEO and Co-Founder of Hydra Host. “Their ability to deliver immediate access to power combined with an industry-leading deployment speed makes them a standout in the market. We see significant runway ahead as we look to expand our collaboration around colocation and Duos’ High-Power EDC model, which we believe is purpose-built to address a market where demand for AI compute capacity is fundamentally outpacing the speed at which traditional data center supply can be delivered.”

Complementing this milestone, Duos has appointed Doug Recker as Chief Executive Officer, effective April 1, 2026, as the company accelerates its transformation into a focused Edge AI and digital infrastructure platform. Mr. Recker succeeds Chuck Ferry, who will continue to serve on the board of directors.

“This initial customer marks a pivotal step in accelerating the buildout of Duos Edge AI,” said Doug Recker, Chief Executive Officer. “We are now entering an exciting phase of execution, further reinforced by our recently announced LOI with Hydra Host, which underscores growing third-party demand for our distributed AI infrastructure model and validates the scalability of our platform. With secured power, rapid deployment capabilities, and expanding strategic partnerships, we believe Duos is well positioned to pursue high-value infrastructure opportunities. Our focus remains on disciplined expansion, capital-efficient growth, and delivering sustainable long-term value for our shareholders.”

Beyond GPUaaS revenue, the collaboration creates a pathway for approximately $25 million in incremental colocation revenue over the same term, validating Duos’ High-Power Edge Data Center (EDC) business line. The company has also signed a non-binding LOI for a ground lease in Iowa with access to up to 10MW of utility power, advancing its long-term goal of building up to 75MW of distributed capacity.

To learn more about Duos Technologies Group, Inc., visit www.duostechnologies.com.

The post Duos Technologies Signs ~$200M LOI and Appoints Doug Recker as CEO appeared first on Data Center POST.

Company Profile: Clearfield on Simplifying Fiber for AI-Ready Networks

11 February 2026 at 17:30

Data Center POST had the opportunity to connect with Clearfield's Chief Commercial Officer, Anis Khemakhem, who is deeply passionate about technology, particularly in advancing fiber optics and telecommunications solutions. Throughout his career, he has consistently focused on leveraging cutting-edge technology to improve connectivity and enhance digital access across various sectors. His executive experience, including leadership positions at Clearfield, Amphenol and Carlisle Interconnect Technologies, demonstrates his capacity to lead complex, multi-stakeholder projects.

The information below is summarized to provide our readers with a deeper dive into who Clearfield is, what the company does and the problems it is solving in the industry.

What does Clearfield do?  

Clearfield designs and manufactures fiber connectivity solutions that simplify how operators build and scale modern networks. We focus on critical connection points across broadband, data center, edge, and wireless environments.

Since our inception, we’ve helped community broadband providers close the digital divide. Today, we also apply that modular, craft-friendly approach to wireless networks as well as data centers and distributed edge facilities that support AI-driven workloads. Our goal is to help operators deploy high-performance fiber faster, with less complexity and lower long-term operational costs.

What problems does Clearfield solve in the market?

Network operators are facing rising fiber density, limited space and labor constraints – not to mention pressure to scale quickly without disrupting live infrastructure. Clearfield addresses these challenges by simplifying fiber deployment and ongoing management.

Our solutions reduce installation time, streamline maintenance, and enable incremental growth. Whether supporting broadband expansion or high-density data center environments, we help customers reduce operational friction and future-proof their networks as data volumes and performance demands accelerate.

What are Clearfield’s core products or services?

Our core offerings include fiber management, protection, and delivery solutions, such as patch panels, cassettes, passive and edge cabinets, racks, enclosures, and fiber assemblies. A key recent introduction is our NOVA Platform, a modular, high-density fiber ecosystem designed for data centers, modern central offices, and edge environments.

The NOVA Platform features tool-less installation, front-of-rack access, and consistent documentation to simplify scaling. Across our portfolio, we focus on labor lite design and operational consistency to help customers deploy and manage fiber efficiently. NOVA is no exception.

What markets do you serve?

Clearfield serves community broadband providers, regional and national ISPs, incumbent telcos, utilities, municipalities, cooperatives, and enterprise networks. We also support hyperscale and colocation data centers, enterprise campuses, government and military networks, and distributed edge environments.

Increasingly, our solutions are used where fiber connects data centers to AI workloads and local compute resources at the edge. High-bandwidth, low-latency fiber is the only way society will be able to support data-intensive emerging technologies — from autonomous vehicles to precision agriculture. In rural broadband builds and high-density data halls alike, we serve operators that need scalable, reliable fiber infrastructure across diverse environments.

What challenges does the global digital infrastructure industry face today?

The industry is navigating explosive data growth driven by AI, cloud computing, and increasingly distributed architectures. Networks are extending beyond centralized data centers toward edge environments closer to users and applications. So, fiber counts, space, and power requirements are growing while skilled labor remains limited.

Operators must scale capacity quickly without sacrificing reliability or affordability. The challenge is not only bandwidth, but also density, manageability, and the ability to evolve without constant redesign.

How is Clearfield adapting to these challenges?

Clearfield is addressing these challenges by designing platforms that reduce complexity at every stage of deployment. The NOVA Platform exemplifies this approach, offering high-density, modular solutions with tool-less installation and all work performed at the front of the rack.

Across our portfolio, we emphasize consistent installation methods, clean documentation, and incremental scalability. This reduces training requirements, limits downtime, and allows operators to grow capacity without disrupting active networks — whether in a rural head end or a data center supporting AI workloads.

What are Clearfield’s key differentiators?

Our primary differentiator is how intentionally we design for the realities of the field. Clearfield solutions are modular, craft-friendly, and built to minimize labor and operational complexity.

Rather than isolated products, we deliver platform-based ecosystems that scale consistently across environments. This helps customers simplify inventory, standardize training, and deploy fiber with confidence. Our roots in community broadband give us a unique perspective that translates well to today’s data center and edge applications, where efficiency and scalability are critical.

What can we expect to see/hear from Clearfield in the future?  

You can expect Clearfield to continue expanding its footprint in data centers and edge computing while remaining committed to community broadband. We’ll introduce additional high-density, modular solutions that support AI-driven architectures and growing fiber demands. But our focus will remain on platforms that bridge environments.

We want to empower operators to apply a consistent, efficient approach as networks become more distributed. Ultimately, we aim to help customers scale faster, manage complexity more easily, and build infrastructure that supports both current and future workloads.

What upcoming industry events will you be attending? 

Clearfield launched the NOVA Platform at BICSI Winter 2026, where attendees were able to see live demonstrations of our high-density patch panels and cassettes and explore the broader ecosystem. That won’t be the last chance to see NOVA. We will participate in many major industry events this year, engaging with network operators, designers, and partners to share best practices and demonstrate how our solutions simplify fiber deployment.

Do you have any recent news you would like us to highlight?

Clearfield recently launched the NOVA Platform, a modular, high-density fiber ecosystem designed for data centers, enterprise networks, and edge environments. NOVA delivers tool-less installation, higher port density, and improved documentation. This innovative solution suite addresses the growing demands of AI-driven and 100G-plus networks. The platform includes patch panels, cassettes, cabinets, racks, and fiber assemblies that scale consistently across environments and are already generating strong interest across multiple markets.

Is there anything else you would like our readers to know about Clearfield and capabilities?

Clearfield sits at the intersection of broadband and data center infrastructure at a time when AI is reshaping network design. Fiber is the common foundation, but operational simplicity is becoming just as important as speed. Our experience helping operators deploy efficient, scalable networks translates directly to today’s high-density and edge environments. Whether connecting communities or powering AI workloads closer to users, Clearfield delivers fiber infrastructure designed to scale cleanly and perform reliably.

Where can our readers learn more about Clearfield?  

Visit us online at www.seeclearfield.com and follow us on social media.

How can our readers contact Clearfield? 

The contact page on our website has multiple ways to get in touch with our team to learn more about the NOVA Platform and our other solutions.

# # #

About Data Center POST

Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed and online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.

Data Center POST works hard to get the most current information and thought-provoking ideas most apt to add relevance to the success of the data center industry. Stay informed, visit www.datacenterpost.com.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com or submit your article here.

Want more digital infrastructure news? Stay in the know and subscribe to Data Center POST today!

The post Company Profile: Clearfield on Simplifying Fiber for AI-Ready Networks appeared first on Data Center POST.

Pre-Connectorized Fiber for 400G/800G and Beyond: Implications for U.S. DCs

10 February 2026 at 19:00

Author: Paulo Campos, President, R&M USA Inc.

U.S. data centers are moving quickly from 100G/200G to 400G and 800G, while preparing for 1.6T. The main driver is AI: training and inference fabrics generate huge east-west (server-to-server) traffic, and any network bottleneck leaves expensive GPUs/accelerators underutilized. Cisco notes that modern AI workloads are “data-intensive” and generate “massive east-west traffic within data centers”.

This step-change is now viable because switching and NIC silicon can deliver much higher bandwidth density. Broadcom’s Tomahawk 5-class devices, for example, support up to 128×400GbE or 64×800GbE in a single chip, enabling higher-radix leaf/spine designs with fewer boxes and links. Optics are improving cost- and power-efficiency as well; a Cisco Live optics session highlights a representative comparison of one 400G module at ~12W versus four 100G modules at ~17W for the same aggregate bandwidth.

In parallel, multi-site “metro cloud” growth is increasing demand for faster data center interconnect (DCI). Coherent pluggables and emerging standards such as OIF 800ZR are making routed IP-over-DWDM architectures more practical for metro DCI.

What this changes

As data centers move to 400G/800G+, the physical layer shifts toward higher-density fiber with tighter loss budgets and stricter operational discipline:

  • Parallel optics increase multi-fiber connectivity. Many short-reach 400G links (e.g., 400GBASE-DR4) use four parallel single-mode fiber pairs with 100G PAM4 per lane, which increases the use of MPO/MTP trunking, polarity management and breakout harnesses/cassettes over simple duplex patching. Very-small-form-factor (VSFF) connectors (for example MMC and SN-MT) are emerging as an alternative to familiar MTP/MPO connectivity.
  • PAM4 is less forgiving. Operators typically specify lower-loss components, reduce mated pairs, and enforce more rigorous inspection and cleaning to protect link margin.
  • Single-mode (OS2) expands inside the building. New builds often standardize on OS2 for spine/leaf and any run beyond in-row distances, while copper is largely confined to very short in-rack DACs (with AOCs/AECs or fiber used as lengths increase).
  • DCI emphasizes single-mode duplex LC with coherent optics/DWDM, where fiber quality and minimal patching become critical.
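As an illustration of how tight these budgets get, a simple worst-case loss tally can be scripted. All loss values below are illustrative assumptions for the sketch, not figures from any standard or vendor datasheet:

```python
# Illustrative worst-case insertion-loss budget for a short-reach
# single-mode parallel link. Every constant here is an assumed,
# representative value, not a specification.

FIBER_LOSS_DB_PER_KM = 0.4   # assumed cabled OS2 attenuation at 1310 nm
CHANNEL_BUDGET_DB = 3.0      # assumed maximum channel insertion loss

def channel_loss(length_km, connector_losses_db, splice_losses_db=()):
    """Sum fiber, connector, and splice losses for one channel (dB)."""
    return (length_km * FIBER_LOSS_DB_PER_KM
            + sum(connector_losses_db)
            + sum(splice_losses_db))

# Two low-loss MPO mated pairs (trunk ends) plus two LC pairs at breakouts,
# over a 300 m in-building run.
loss = channel_loss(0.3, connector_losses_db=[0.35, 0.35, 0.25, 0.25])
margin = CHANNEL_BUDGET_DB - loss
print(f"loss = {loss:.2f} dB, margin = {margin:.2f} dB")
```

Even with low-loss components, four mated pairs consume most of the assumed budget, which is why operators reduce mated pairs and enforce inspection discipline at 400G/800G.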

The pre-con solution

Pre-connectorized (pre-terminated) cabling systems – including hardened variants – fit current U.S. requirements for speed, performance and repeatability:

  • Faster deployment and predictable performance: factory-terminated “plug-and-play” trunks and panels reduce on-site termination, minimize installer variability, and help teams hit tight loss budgets at 400G/800G and beyond.
  • Higher density and simpler change control: preterm MPO/MTP trunks with modular panels/cassettes pack more fibers into less space and make adds/changes faster with less disruption.
  • Alignment to standards and repeatable architectures: ANSI/TIA-942 defines minimum requirements for data-center infrastructure, while ANSI/BICSI 002-2024 provides widely used best-practice guidance for data-center design and implementation – both encouraging well-defined pathways and modular, repeatable approaches.
  • Resilience for harsh pathways: between buildings, in ducts, and at the edge (modular/outdoor DCs), hardened features such as robust pulling grips and improved protection against water/dirt can reduce rework during construction.

As U.S. data centers push into 400G/800G and prepare for 1.6T, pre-connectorized fiber helps deliver deployment speed, high-density layouts, and repeatable, testable performance – often with less reliance on scarce specialist termination labor.

# # #

References

  1. Cisco. “AI Networking in Data Centers.” Cisco website. (Accessed Jan 2026).
  2. Cisco Live 2025. “400G, 800G, and Terabit Pluggable Optics” (BRKOPT-2699).
  3. OIF. “Implementation Agreement for 800ZR Coherent Interfaces (OIF-800ZR-01.0).” Oct 8, 2024.
  4. Semiconductor Today. “OIF releases 800ZR coherent interface implementation agreement.” Nov 1, 2024.
  5. Ciena. “Standards Update: 200GbE, 400GbE and Beyond.” Jan 29, 2018.
  6. TIA. “ANSI/TIA-942 Standard.” TIA Online.
  7. BICSI. “ANSI/BICSI 002-2024: The Standard for Data Center Design.” BICSI website.

The post Pre-Connectorized Fiber for 400G/800G and Beyond: Implications for U.S. DCs appeared first on Data Center POST.

Why 2026 Will Be a Turning Point for Server Cooling – and What Enterprises Should Know About It

10 February 2026 at 16:00

Direct-source cooling moves from niche to necessity as AI-era thermal limits collide with traditional airflow design

For decades, server and IT device cooling has followed a predictable playbook: move enough air, manage hot and cold aisles, and rely on increasingly sophisticated fans and facility-level HVAC to keep silicon within tolerance. That model is now approaching its limits.

The rise of AI workloads – characterized by dense compute, high-bandwidth memory, and sustained 24/7 utilization – is forcing a rethink of how heat is removed from systems. The industry is shifting from generalized airflow toward direct-source cooling: targeted, device-level technologies designed to eliminate localized hot spots before they degrade performance or reliability.

2026 will mark a notable turning point as OEM roadmaps, AI-driven performance expectations, and the physical limits of traditional fans converge, making new thermal approaches not optional but inevitable. It will be a pivotal year for system design evolution, with a growing number of manufacturers aligning their roadmaps around architectures that must deliver very high compute horsepower and memory bandwidth to support AI workloads.

As a result, advanced thermal management is emerging as a critical enabler of performance, reliability, and product differentiation across the IT sector.

The Problem: AI Is Breaking the Thermal Envelope

AI-era servers and PCs don’t just run hotter — they also run continuously. Unlike bursty enterprise workloads of the past, AI inference and training systems push CPUs, GPUs, and memory at sustained utilization levels. Heat becomes the dominant constraint.

In practice, this manifests as thermal orphans: localized pockets of trapped heat inside a server or rack that traditional airflow simply can’t reach. When those pockets overheat, the system responds the only way it can: by throttling performance. For data center operators, throttling is not just a thermal issue; it’s a business problem. It means paid-for silicon isn’t delivering paid-for performance.

From Airflow to Direct-Source Cooling

The industry needs to supplement, not replace, existing cooling with direct-source airflow applied exactly where heat accumulates. Ventiva’s approach is to add compact ionic modules near problem components, creating just enough directed airflow to clear thermal orphans without redesigning the whole chassis.

Rather than spinning fans faster or redesigning entire racks, system designers can use solid-state, ionic cooling-based solutions that sit close to heat-generating components. Each creates airflow by ionizing air molecules and using the ions’ field-driven motion to drag the surrounding air through a targeted zone.

The result is modest but decisive: 2 to 3 cubic feet per minute (CFM) of airflow, precisely applied, is enough to push trapped hot air out of isolated pockets and back into the main airflow path. That small amount of airflow can be the difference between sustained full performance and permanent throttling.
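A first-order heat-balance sketch shows why even 2 to 3 CFM matters. The air properties are standard values; the 15 W hot-spot load is an illustrative assumption:

```python
# Steady-state air temperature rise across a localized heat source,
# dT = P / (m_dot * cp). The 15 W load is an assumed example value.

AIR_DENSITY = 1.2        # kg/m^3, air near room temperature
AIR_CP = 1005.0          # J/(kg K), specific heat of air
CFM_TO_M3S = 0.000471947 # cubic feet per minute -> cubic meters per second

def air_temp_rise(heat_w, flow_cfm):
    """Temperature rise (K) of an airstream carrying `heat_w` watts."""
    mass_flow = AIR_DENSITY * flow_cfm * CFM_TO_M3S  # kg/s
    return heat_w / (mass_flow * AIR_CP)

for cfm in (2.0, 3.0):
    print(f"{cfm} CFM carrying 15 W -> dT ~ {air_temp_rise(15, cfm):.1f} K")
```

Under these assumptions, a few CFM carries away tens of watts with only a modest temperature rise, enough to flush a trapped pocket back into the main airflow path.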

How Ionic Cooling Works and Why It Matters

With ionic cooling technology, a current is passed through an emitter that ionizes molecules in the surrounding air. Those ions are attracted to an oppositely charged collector, and their movement creates airflow — without any mechanical parts. This has implications enterprises should care about:

  • No moving parts means fewer mechanical failures and longer operational life.
  • Dust-aware sensing allows the system to detect contamination and trigger automated cleaning, addressing a common failure mode in fans.
  • Consistent airflow over time prevents the gradual thermal degradation that shortens component lifespan.

Heat is the fastest way to degrade electronics. By keeping memory and processors within optimal temperature ranges, direct-source cooling doesn’t just improve performance — it improves system longevity.

Performance First, Not Just Efficiency

While energy efficiency is often part of cooling conversations, performance stability is also a key concern. In AI-heavy environments, the worst outcome isn’t higher power draw; it’s unpredictable performance. The central question is how much performance a system can sustain within its thermal envelope while still running reliably and as required.

By ensuring thermal stability, direct-source cooling allows systems to run at full bore, 24/7, without throttling. For enterprises, this reframes the ROI discussion. Cooling is no longer a facilities cost to be minimized; it’s a performance enabler that protects compute investment.

Fans Are Hitting Their Design Limits

Traditional fan technology is mature, and that’s part of the problem. Incremental gains are getting harder, while fan-based designs face inherent trade-offs:

  • Higher RPM increases noise and power consumption.
  • Mechanical wear limits reliability.
  • Airflow paths struggle to reach dense, obstructed layouts.

Cold plate and liquid cooling approaches address some of these challenges but add complexity, cost, and service requirements. Ionic cooling occupies a different niche: solid-state, targeted, and augmentative.

Ionic cooling technology isn’t a replacement for fans or liquid cooling. Instead, it fills the gap where traditional methods fail. These include hot spots, edge deployments, and compact systems.

Edge and Client Devices: The Steeper Hill

Ironically, qualifying new cooling technology for laptops and edge devices is more difficult than for data centers. Constrained spaces, lack of physical supervision, dust exposure, and high reliability expectations make these environments unforgiving.

A data center moves a far greater volume of air through a controlled space, so it is much less likely to be contaminated by, say, pet dander, fibers, or other kinds of dust than a mobile PC. Edge devices face the same exposure as mobile units.

Ionic cooling technology has proven particularly well-suited here. Edge devices often run unattended, making mechanical reliability critical. Mini-data center form factors, such as compact AI systems, combine high compute density with limited airflow. Client devices are becoming AI-aware, running inference locally and behaving more like servers than PCs.

As edge systems increasingly process AI workloads on-device, rather than in centralized clouds, they inherit data center-class thermal challenges without data center-class infrastructure.

2026: Why the Timing Matters

2026 is when multiple forces will align. Here’s the evidence:

  • OEM “AI-ready” commitments. Major OEMs are locking product release schedules around AI capability. That means more memory, more compute, and higher sustained power.
  • Thermal headroom is gone. Existing designs have little margin left. Incremental fan improvements won’t close the gap.
  • Market realism. Data center managers are no longer asking if AI workloads will strain cooling but how to prevent performance collapse when they do.

CTO Choices: What to Evaluate Now

For IT and infrastructure buyers planning 2026 and beyond, the cooling decision tree is changing. Key questions include the following:

  • Where do performance bottlenecks originate — facility-level airflow or device-level hot spots?
  • Is throttling already occurring under sustained AI load?
  • Do edge or compact systems lack serviceability or supervision?
  • Can targeted airflow extend system life without redesigning the entire rack?

Direct-source ionic cooling technologies such as Ventiva’s don’t replace existing infrastructure, but they can delay costly redesigns, protect performance, and extend hardware ROI.

The Bigger Shift

The transition from fan-centric cooling to hybrid, direct-source approaches mirrors earlier infrastructure shifts. Just as AI forced a rethink of networking, storage, and compute architectures, it is now reshaping thermal design. In that sense, cooling is no longer a background concern. It is becoming a first-class architectural decision – one that will increasingly differentiate AI-ready systems from those that merely claim to be.

2026 is now here, and enterprises that treat cooling as a strategic lever and not an afterthought will be better positioned to extract real value from their AI investments.

# # #

About the Author

Dr. Brian Cumpston is Director of Application Engineering at Ventiva, where he leads the integration of advanced thermal management technologies into consumer electronics and computing platforms. With 25+ years of experience spanning multiple industries, he specializes in the commercialization of disruptive technologies that redefine performance and efficiency standards.

Brian brings a deep background in system architecture and a nuanced understanding of power and performance tradeoffs. He partners with OEMs to solve complex design challenges across acoustics, form factor, and energy efficiency, helping to unlock new possibilities for AI-enabled devices and next-generation platforms.

Brian holds a B.S. in Chemical Engineering from the University of Arizona and a Ph.D. in Chemical Engineering from the Massachusetts Institute of Technology.

The post Why 2026 Will Be a Turning Point for Server Cooling – and What Enterprises Should Know About It appeared first on Data Center POST.

Company Profile: VIRTUS on Redefining Data Centre Growth in Europe

9 February 2026 at 17:30

Data Center POST had the opportunity to connect with Christina Mertens, who joined VIRTUS as VP Business Development EMEA in June of 2022. She brings over ten years’ experience developing strategies for, and expanding, existing and new hyperscale infrastructure geographies across EMEA.

Before joining VIRTUS, she spent a decade at Amazon in EMEA, where she expanded the existing AWS data centre regions in colocation and self-built facilities and launched new region geographies as country manager. In her final role there, as Data Center Divestiture Principal at Amazon Web Services in EMEA, Christina worked alongside large strategic hyperscale cloud customers, advising them on their infrastructure assets and developing new models to facilitate and enhance their cloud migration journey. At VIRTUS, she is Managing Director of Germany and Italy, responsible for overseeing all aspects of the business, including expansions, sales, data centre design, construction and operations.

The information below is summarized to provide our readers a deeper dive into who VIRTUS is, what they do and the problems they are solving in the industry.

What does VIRTUS do?  

VIRTUS is a European data centre provider and the largest in the UK. With over 10 years of experience, VIRTUS tailors solutions to specific customer requirements, whichever sector a business operates in.

What problems does VIRTUS solve in the market?

Businesses have unique workloads, project durations and changing requirements. VIRTUS’ solutions are designed to provide the digital infrastructure which supports these needs. Built to a vast scale, all of our data centres are designed modularly, allowing full flexibility for data centre customers’ requirements. Our facilities operate using 100% renewable energy and are amongst the most efficient facilities in the world.

What are VIRTUS’ core products or services?

We build AI-ready, built-to-suit and colocation data centres.

VIRTUS’ AI Ready Data Centres are designed to support the high performance computing (HPC) demands of artificial intelligence workloads. Our facilities provide the optimum environment for HPC deployments of any size, including the next generation of AI IT infrastructure and Machine Learning (ML) workloads, which require next generation cooling deployment and increased power per rack.

Our built-to-suit data centres are those designed specifically for the customer. We know that organisations of all sizes need real flexibility, which is why we work with our customers to create bespoke solutions. For example, some require cutting-edge AI solutions with space to scale at speed, while others might have a hyperscale cloud deployment that needs custom-built data halls.

Our colocation service is designed to provide maximum flexibility with individual IT power and space requirements. The modular facilities are designed to scale up with customer growth. This combined with truly flexible commercials allows customers to grow in a cost efficient and unrestrictive environment.

What markets do you serve?

VIRTUS’ European data centres are strategically located in key markets; currently this is London (UK), Berlin (Germany) and Milan (Italy). As part of ST Telemedia Global Data Centres’ (STT GDC) global platform, we have a presence in ten geographies, more than 101 data centres and over 2GW of IT load across 20+ major business markets.

Our vast experience comes from working with many industry sectors – from financial institutions which require ultra-low latency, to thriving tech start ups which rely on contiguous space to grow, and providing entire buildings or campuses for the world’s largest hyperscalers.

What challenges does the global digital infrastructure industry face today?

Many current European data centres simply cannot meet the short- and long-term demands for critical digital infrastructure, often due to a shortage of infrastructure that can support demanding HPC workloads. It is a fundamental challenge to find land with access to renewable power to build new facilities, quickly and at scale.

For years, development revolved around a handful of key metropolitan hubs. Frankfurt, London, Amsterdam and Paris (collectively known as the FLAP locations) carried much of the continent’s cloud, enterprise and interconnection load, due to their proximity to financial services, global carriers and concentrated digital ecosystems.

Undoubtedly, whilst those hubs continue to grow, their conditions have changed. Power connections are being delayed because parts of the electricity distribution network cannot carry the required load, suitable land parcels are becoming scarcer and therefore more expensive to secure, and planning regulations are tightening, lengthening timelines to approval, if approvals are granted at all.

Meanwhile, demand for computing power is surging in ways that surpass forecasts made even two years ago. AI training and inference, HPC, analytics and modernised public services all require significant and sustained energy and cooling capacity.

McKinsey suggests that global demand for data centre capacity could more than triple by 2030. It is clear that Europe needs more digital infrastructure, but it needs that infrastructure in places with the headroom and regulatory clarity to support long-term expansion. This is partly why what are sometimes known as second-tier locations are becoming increasingly critical to expanding Europe’s digital infrastructure.

Over the next five years, this is not a marginal shift. Analysts expect Europe’s installed data centre capacity to more than double, from roughly 24 GW in 2025 to around 55 GW by 2030, with secondary markets growing fastest. And, while recent CBRE analysis indicates that in 2025, around 57% of new capacity will still be delivered in the core FLAP-D markets, the remaining 43% will come from secondary locations such as Milan, Madrid and Berlin, many of which are now on track to exceed 100 MW of installed capacity in their own right. This is the context in which tier two locations are moving from “nice to have” to essential if Europe is to keep pace with global demand.

How is VIRTUS adapting to these challenges?

Our strategy is to build new facilities at scale, located close to, but not necessarily in major European metropolitan cities, and supplied with renewable energy.

We are currently building a €3bn 300MW data centre campus development at Wustermark, west of Berlin. Wustermark offers what many central locations cannot – land large enough for a multi-building campus, access to sustainable electricity, proximity to rail and motorway networks, and alignment with Germany’s policy focus on digital capacity. The site is also positioned to benefit from Germany’s wider energy and grid modernisation programmes, including access to renewable energy: the campus is adjacent to some of Germany’s largest onshore wind farms, which, via a substation and direct coupling, are capable of fulfilling the energy requirements of the facility.

This move towards larger campuses is a calculated strategy that acknowledges the non-linear cost relationship inherent in these types of operations; larger megascale campuses of 200-500MW can often afford providers – and therefore customers – greater efficiencies.

We are also constructing another facility in Italy. Located in Cornaredo, within the Milan West data centre cluster, the site will provide ample capacity to support hyperscalers, enterprises and service providers as digital infrastructure demands in Europe continue to grow.

What are VIRTUS’s key differentiators?

What sets VIRTUS apart from our competitors can be found in many aspects of the design, build and operations of our facilities. However, the quality of operations – the Operational Excellence – is where we truly excel. The way we have implemented design innovations makes a difference to the service we provide in terms of efficiency and resilience. It’s how we design, build, test, maintain, change and operate our facilities that differentiates us – ensuring robust and reliable availability is delivered.

What can we expect to see/hear from VIRTUS in the future?  

It’s an exciting time for VIRTUS in Europe, but to meet customer demand we’re still increasing our presence as the leader in the UK market, opening two new London data centres in 2026 (LONDON12 and LONDON14), with a large four-data-centre campus at Saunderton to follow, whilst continuing our European expansion.

What upcoming industry events will you be attending? 

The VIRTUS team is attending the following events: Platform UK where Adam Eaton will be speaking on a keynote panel, Energy Storage Summit where Helen Kinsman will be speaking on a panel, Compute Summit where Ramzi Charif will be speaking on a panel, and finally Datacloud Energy where Helen Kinsman will be speaking on another panel.

Do you have any recent news you would like us to highlight?

Earlier in 2026 we announced VIRTUS’ new CEO, Adam Eaton. Under his leadership, we will continue to expand our portfolio of high-efficiency, sustainable data centres, building on more than a decade of rapid growth across the UK and Europe. VIRTUS remains committed to its vision to deliver world-class, energy-efficient infrastructure that supports the growth of the digital economy.

Where can our readers learn more about VIRTUS?  

You can learn more about us on our website, www.virtusdatacentres.com.

How can our readers contact VIRTUS? 

You can contact us through the form on our website, www.virtusdatacentres.com/contact-us.

# # #

The post Company Profile: VIRTUS on Redefining Data Centre Growth in Europe appeared first on Data Center POST.

Data Center Liquid Cooling Market to Surpass USD 27.1 Billion by 2035

9 February 2026 at 16:00

The global data center liquid cooling market was valued at USD 4.8 billion in 2025 and is estimated to grow at a CAGR of 18.2% to reach USD 27.1 billion by 2035, according to a recent report by Global Market Insights Inc.
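Figures like these follow the standard compound-growth formula, which can be applied directly to the reported base and rate (a sketch; compounding the rounded, published inputs lands slightly below the headline figure, which presumably reflects rounding):

```python
# Project the 2035 market size by compounding the 2025 base at the
# stated CAGR. Base (USD 4.8B) and rate (18.2%) are taken from the
# report cited above; this is only an arithmetic sanity check.

def compound(base, rate, years):
    """Future value after compounding `base` at `rate` for `years`."""
    return base * (1 + rate) ** years

projected = compound(4.8, 0.182, 10)  # 2025 -> 2035, USD billions
print(f"projected 2035 market: USD {projected:.1f} billion")
```

With the rounded inputs this gives roughly USD 25-26 billion, in the same range as the report’s USD 27.1 billion headline.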

Rising energy costs, coupled with stringent sustainability requirements, are accelerating the adoption of liquid cooling technologies across data centers. Liquid cooling systems offer significantly lower Power Usage Effectiveness (PUE) ratios, ranging from 1.05 to 1.15 compared with 1.4-1.8 for traditional air-cooled facilities, which directly lowers electricity consumption and reduces carbon emissions. Regulatory mandates, including the EU Energy Efficiency Directive, Germany’s Energy Efficiency Act targeting PUE 1.3 by 2027, and California’s energy efficiency standards, are pushing operators toward advanced cooling solutions.
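A back-of-the-envelope comparison shows why that PUE gap matters financially. The facility size and electricity price below are illustrative assumptions for the sketch, not figures from the report:

```python
# Annual facility energy cost at a given PUE: IT load * PUE gives total
# facility power. Load and price are assumed example values.

IT_LOAD_MW = 10.0        # assumed IT load
PRICE_PER_MWH = 100.0    # assumed electricity price, USD
HOURS_PER_YEAR = 8760

def annual_cost(pue):
    """Total annual facility energy cost (USD) at a given PUE."""
    return IT_LOAD_MW * pue * HOURS_PER_YEAR * PRICE_PER_MWH

savings = annual_cost(1.5) - annual_cost(1.1)
print(f"annual savings from PUE 1.5 -> 1.1: USD {savings/1e6:.1f} M")
```

Under these assumptions, moving a 10 MW facility from PUE 1.5 to 1.1 is worth roughly USD 3.5 million per year, before counting waste-heat recovery.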

Furthermore, the ability of liquid cooling systems to recover waste heat for district heating or industrial processes transforms data centers into contributors to circular energy economies, supporting corporate net-zero initiatives and enhancing operational sustainability. North America continues to lead the data center liquid cooling market, driven by a dense concentration of hyperscale cloud operators, semiconductor manufacturers, and systems integrators deploying high-density AI and HPC infrastructure.

The solution segment held a 71% share in 2025 and is forecast to grow at a CAGR of 15% from 2026 to 2035. Direct-to-chip cooling is the fastest-growing technology, employing cold plates and micro-channel coolers attached directly to processors, GPUs, and memory to remove 60-80% of heat before it enters the air. These systems circulate coolants such as water with inhibitors or glycol mixtures across chip surfaces, achieving thermal resistances as low as 0.01-0.05°C/W.
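The practical meaning of those thermal-resistance figures follows from dT = P × R_th; the 700 W device power below is an illustrative assumption in the range of current high-end accelerators:

```python
# Temperature rise from coolant to chip surface for a cold plate,
# dT = P * R_th. The 700 W power level is an assumed example; the
# thermal resistances come from the range cited above.

def temp_rise(power_w, r_th_c_per_w):
    """Temperature rise (deg C) across a thermal resistance."""
    return power_w * r_th_c_per_w

for r_th in (0.01, 0.05):
    print(f"700 W device, R_th = {r_th} C/W -> dT = {temp_rise(700, r_th):.0f} C")
```

At the low end of the cited range, a 700 W device sits only a few degrees above its coolant, which is what lets operators run warmer water loops and still hold silicon within tolerance.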

The single-phase liquid cooling systems segment reached USD 3.1 billion in 2025. These systems maintain coolant in liquid form throughout the cycle, transferring heat via conduction and convection without phase change. Coolants circulate through cold plates, immersion tanks, or heat exchangers at 18-50°C, depending on design, while facility chillers, dry coolers, or towers remove heat from the loop.

The U.S. data center liquid cooling market captured USD 1.29 billion in 2025. Federal initiatives, including AI and HPC programs, semiconductor funding under the CHIPS Act, and defense modernization projects incorporating AI, are key drivers of liquid cooling adoption in public sector data centers.

Leading companies in the data center liquid cooling market include Alfa Laval, Asetek, Boyd, CoolIT Systems, Green Revolution Cooling, LiquidStack, Rittal, Schneider Electric (Motivair), Stulz, and Vertiv. Key strategies adopted by companies in the market focus on technological innovation, such as developing high-efficiency immersion and direct-to-chip cooling solutions for next-generation processors and GPUs. Firms are forming strategic partnerships with hyperscale cloud providers, semiconductor manufacturers, and HPC integrators to expand deployment. Investments in R&D for energy-efficient, modular, and scalable systems strengthen product differentiation. Companies are also emphasizing geographic expansion into emerging markets, supporting sustainability initiatives, and integrating IoT-enabled monitoring tools to optimize performance, enhance reliability, and maintain long-term client relationships.

The post Data Center Liquid Cooling Market to Surpass USD 27.1 Billion by 2035 appeared first on Data Center POST.

Duos Technologies Achieves $28 Million Revenue for 2025

9 February 2026 at 15:00

Duos Technologies Group, Inc. (Nasdaq: DUOT), a leader in intelligent technologies and digital infrastructure, has announced that it achieved its stated revenue guidance for the fiscal year ending December 31, 2025. The company recorded revenue of $28 million, an estimated 288% increase over the prior year and almost double its previous best year. Duos also expects to achieve positive adjusted EBITDA in the fourth quarter of FY25, which would be the second consecutive quarter this was accomplished.

Building on strong momentum throughout 2025, Duos has expanded its offerings to include Data Center Infrastructure Solutions, enhancing its core data center vertical while supporting the accelerating deployment cadence of Duos Edge AI’s patented modular Edge Data Centers (EDCs). Duos has rolled out 12 of the EDCs to leased site locations across Texas, with an additional two EDCs shipping in the coming week; the final one, planned for the Illinois location, will be deployed as soon as weather permits.

“I am very pleased that we were able to deliver on our commitment of at least $28 million revenue for 2025 and that we expect to achieve positive adjusted EBITDA in the fourth quarter,” said Doug Recker, President of Duos and Founder of Duos Edge AI. “We continue to roll out our EDCs, now with the patented clean room and can also acknowledge that we are engaged in multiple discussions with industry leaders regarding planned expansion of our EDCs for use in AI applications. I will provide further updates as they are available and in any case, no later than our earnings call in late March.”

Complementing this growth, Duos recently launched its Infrastructure Solutions Group, a dedicated subdivision within Duos Edge AI. In its initial quarter of operation, the Infrastructure Solutions Group signed approximately $7 million in contracts during Q4, demonstrating early traction and validating the strategic value of this expansion.

Final results remain subject to audit. The company expects to report comprehensive fourth quarter and full year 2025 results at the end of March.

To learn more about Duos Technologies Group, Inc., visit www.duostech.com.

The post Duos Technologies Achieves $28 Million Revenue for 2025 appeared first on Data Center POST.

Company Profile: GreenScale on Building Sustainable, Power-Rich Digital Infrastructure

5 February 2026 at 17:30

Data Center POST had the opportunity to connect with Jean-François Berche, the Chief Technology Officer at GreenScale, who is guiding the company’s technological vision towards infrastructure that is scalable, efficient, and above all, sustainable. He focuses on developing data centres capable of supporting the complex needs of AI-driven workloads, while ensuring GreenScale leads in technology integration within the energy ecosystem.

Jean-François previously held senior roles at Microsoft and AWS, where he was instrumental in expanding the cloud infrastructure to meet the growing demands of AI. His extensive work in site selection, colocation, and cloud region expansion at Microsoft and AWS positions him to drive GreenScale’s technological capabilities to the pinnacle of what is possible.

His passion for sustainability in technology is well-aligned with GreenScale’s mission. Outside of work, Jean-François remains committed to exploring how technology can positively impact society through sustainable and innovative practices. The interview information below has been summarized to provide readers with clarity into who GreenScale is, what they do and the problems they are solving in the industry.

What does GreenScale do?  

GreenScale is a sustainable data centre platform redefining the future of digital infrastructure across Europe’s expanding data centre markets.

What problems does GreenScale solve in the market?

As demand for high-performance AI and cloud workloads accelerates, power availability, grid constraints, and environmental impact have become critical bottlenecks. At GreenScale, we are developing a sustainable data centre platform that positively contributes to the grid, local communities, and the wider energy ecosystem. We provide access to long-term power scalability, combined with deep local relationships with grid utilities and local communities, to enable customers to grow compute capacity quickly, efficiently, and responsibly.

What are GreenScale’s core products or services?

Digital infrastructure

What markets do you serve?

We’re developing data centres in Europe, with plans for international expansion.

What challenges does the global digital infrastructure industry face today?

The global digital infrastructure industry faces the challenge of scaling AI and cloud capacity amid constrained power availability, grid limitations, and growing environmental concerns.

How is GreenScale adapting to these challenges?

Sustainability at GreenScale starts with site selection. By focusing on new power-rich regions such as Norway, where hydropower is abundant, and Derry/Londonderry, where strong wind resources support renewable energy generation, we secure clean, scalable energy from the outset. Working closely with local utilities allows us to contribute positively to the grid while accelerating speed to deployment and enabling responsible, long-term growth for digital infrastructure.

What are GreenScale’s key differentiators?

GreenScale’s key differentiators lie in our ability to deliver at speed while maintaining a strong sustainability focus. We prioritise rapid deployment through strategic partnerships, including our recently announced collaboration with Vertiv, and by building in new power-rich markets that support long-term scalability. Our platform is underpinned by a deep commitment to ESG and led by a team with over 100 years of combined industry experience, enabling us to execute reliably in a rapidly evolving market.

What upcoming industry events will you be attending? 

PTC, NVIDIA GTC, DCAC, Data Centre Expo, Data Centre World London, Datacloud Global Congress and many more!

Do you have any recent news you would like us to highlight?

Vertiv and GreenScale Announce Strategic Collaboration to Deploy AI-Ready Data Centre Platforms across Europe.

Where can our readers learn more about GreenScale?  

Readers can learn more on our company website, www.greenscaledc.com.

How can our readers contact GreenScale? 

You can contact us through our website, www.greenscaledc.com/contact.

# # #

About Data Center POST

Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed and online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.

Data Center POST works hard to get the most current information and thought-provoking ideas most apt to add relevance to the success of the data center industry. Stay informed, visit www.datacenterpost.com.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com or submit your article here.

Want more digital infrastructure news? Stay in the know and subscribe to Data Center POST today!

The post Company Profile: GreenScale on Building Sustainable, Power-Rich Digital Infrastructure appeared first on Data Center POST.

Company Profile: STT GDC Philippines on Building the Philippines’ Largest AI-Ready Data Center Campus

4 February 2026 at 17:30

Data Center POST had the opportunity to connect with Carlo Malana, President and CEO of STT GDC Philippines, which is a joint venture among Globe Telecom, Ayala Corporation and ST Telemedia Global Data Centres. The company provides secure, reliable, and sustainable data centers to enable digital transformation for global and local businesses. With more than two decades of diverse leadership experience in the ICT industry, his background includes strategic roles at AT&T and as CIO for Globe. He earned a double degree from the University of California at Berkeley and an MBA from Southern Methodist University.

His career in Information Communications Technology (ICT) spans the United States, Mexico, and the Philippines, leading both technology and business organizations in areas as diverse as strategy, program management, merger integration, retail, finance, customer operations, and sales.

The interview information below has been summarized to provide readers with clarity into who STT GDC Philippines is, what they do and the problems they are solving in the industry.

What does STT GDC Philippines do?  

ST Telemedia Global Data Centres (STT GDC) Philippines empowers business digital transformation through a service model integrating Colocation, Cross Connect, and Support Services. We provide Colocation via scalable, sustainable, and secure infrastructure operated to strict global standards, a commitment recently validated by our flagship 124MW STT Fairview Data Center Campus achieving the IDCA G2 Design Certification and our STT Cavite 1 data center earning the Uptime Institute Tier III Design Certification. Our Interconnect & Connectivity solutions provide a carrier-neutral platform optimized for seamless access to hybrid and multi-cloud environments, while our Support Services act as your extended technical team, managing critical facility operations so you can focus exclusively on your core business performance.

What problems does STT GDC Philippines solve in the market?

STT GDC Philippines addresses the critical shortage of high-quality digital infrastructure in Southeast Asia (SEA) by replacing outdated systems with massive, scalable facilities built for the future. We solve the capacity shortfall by delivering hyperscale-ready infrastructure, such as our 124MW STT Fairview campus, designed to meet the rigorous TIA-942 Rated 3 and Uptime Institute Tier III standards for concurrent maintainability. We specifically address the urgent demand for AI and high-performance computing by building AI-ready facilities equipped with high power density and advanced liquid cooling support. Most importantly, we eliminate downtime concerns by providing SLA-backed availability, ensuring your mission-critical business operations remain secure and stable 24/7 with a sustainable environment. Finally, we remove connectivity restrictions through our carrier-neutral ecosystem, providing a resilient platform that offers customers superior network choice and the flexibility to connect with the partners that best serve their requirements.

What are STT GDC Philippines’s core products or services?

Our core services are colocation, cross connect, and support services.

What markets do you serve?

ST Telemedia Global Data Centres (STT GDC) Philippines is a leading carrier-neutral provider dedicated to supporting the high-density requirements of Hyperscalers, AI companies, and large enterprises in the banking, financial services, and telecommunications sectors.

As a joint venture between Globe Telecom, Ayala Corporation, and STT GDC, we enable digital transformation by offering scalable, sustainable, and secure infrastructure designed for mission-critical applications. Our facilities are specifically optimized for high-performance workloads, leveraging strategic partnerships with industry leaders and partners to deploy advanced solutions such as liquid cooling for AI-driven demands.

Our data centers provide a flexible technology foundation with direct access to major global cloud platforms and a diverse ecosystem of connectivity partners. This carrier-neutral approach ensures optimal connectivity for hybrid and multi-cloud environments, while our strict operational excellence and 24/7 on-site technical expertise deliver industry-leading uptime. By integrating these best-in-class partnerships, we allow your organization to rely completely on our reliable infrastructure while you focus on driving your core business growth.

What challenges does the global digital infrastructure industry face today?

The industry is currently facing a massive energy and power crisis, where securing reliable electricity has become significantly harder than finding physical land. Because AI operations consume vast amounts of energy, they place an immense strain on local power grids, making it difficult for operators to find suitable locations while sticking to green energy goals.

Secondly, the rapid adoption of AI has created a thermal management challenge; the extreme heat generated by modern high-performance chips exceeds the limits of traditional air cooling, forcing a pivot toward advanced liquid cooling methods even as universal standards remain undefined.

Finally, geopolitical instability and supply chain disruptions are acting as a major brake on progress. Rising global tensions are complicating where secure networks can be built, while acute shortages of essential equipment, like high-voltage transformers and backup generators, are delaying construction and preventing the infrastructure from keeping pace with global demand.

How is STT GDC Philippines adapting to these challenges?

STT GDC Philippines is adapting by building flexible, high-capacity infrastructure, such as the 124 MW STT Fairview Data Center Campus, that is fully ready for AI and liquid cooling but remains adaptable to changing technology rather than being limited to a single purpose. We are addressing the energy challenge by committing to 100% renewable energy for our operations. To navigate global instability, we operate as a carrier-neutral platform, ensuring resilience and open choices for all networks.

What are STT GDC Philippines’s key differentiators?

Our key differentiators begin with our adherence to global standards, ensuring that every facility in our portfolio operates with the same rigor and reliability found across our international platform. This foundation allows us to provide the most extensive capacity in the region, highlighted by the 124MW STT Fairview Data Center Campus, the largest, most interconnected carrier-neutral, and sustainable data center in the Philippines. Our commitment to international, sustainability-driven design is evident in our LEED Gold and TIA-942 Rated 3 certifications, as well as our “AI-ready” infrastructure that supports liquid cooling to reduce environmental impact.

Beyond physical assets, we prioritize our talent through the DC Power Up program, a milestone initiative that trains and certifies the next generation of data center professionals to ensure a future-ready workforce. Our operational excellence is the heartbeat of our business, utilizing advanced automation and AI-powered cooling to maintain peak efficiency 24/7. Finally, we leverage deep local expertise through our powerful partnership with Globe and Ayala, combining the country’s leading telecommunications reach and corporate heritage to provide customers with a seamless, trustworthy gateway into the Philippine digital economy.

What can we expect to see/hear from STT GDC Philippines in the future?  

STT GDC Philippines is focused on rapidly scaling its delivery capabilities, a goal already in motion as we begin operating with our first customers at STT Fairview 1. This marks a significant milestone for what will be the largest and most AI-ready data center campus in the Philippines, featuring infrastructure specifically engineered for high-density computing and advanced liquid cooling. Our commitment to innovation is further showcased at our AI Synergy Lab, where we demonstrate the future of thermal management and high-efficiency power solutions. To support this growth, we are accelerating partnerships across the ecosystem by recently onboarding key connectivity partners to ensure our facilities serve as the premier, carrier-neutral gateway for Southeast Asia’s digital future.

What upcoming industry events will you be attending? 

We are excited to represent STT GDC Philippines at two of the most influential technology gatherings in the region and the world this year. This February, our team will be in Jakarta for APRICOT 2026, the Asia Pacific region’s premier internet operations and networking summit. This event is a critical forum for us to collaborate with network engineers and policymakers to strengthen the digital fabric of Southeast Asia. Following this, we will be attending NVIDIA GTC in March in San Jose, California. Often called the “Super Bowl of AI,” GTC is where we engage with the latest breakthroughs in AI infrastructure and high-performance computing, ensuring that our data centers remain at the cutting edge of the global AI revolution.

Do you have any recent news you would like us to highlight?

We are excited to share several major milestones that underscore our rapid growth and commitment to the Philippines’ digital future. Most recently, in October 2025, we announced the onboarding of our first connectivity partners at our flagship STT Fairview Data Center campus. These partnerships are significant for our carrier-neutral ecosystem, providing customers with diverse network choices and the resilience needed for AI-powered growth. Additionally, the 124MW STT Fairview Data Center campus recently achieved the prestigious IDCA G2 Design Certification, recognizing its world-class N+1 design and operational excellence. On the sustainability front, we are proud to have transitioned to 100% renewable energy across all our operational data centers as of early 2025.

Is there anything else you would like our readers to know about STT GDC Philippines and capabilities?

Finally, we want your readers to know that STT GDC Philippines is actively pioneering the future of high-performance computing through our AI Synergy Lab. Launched in collaboration with industry leaders, the lab allows enterprises to run actual AI workloads in a controlled environment, providing a live showroom for high-density computing solutions that are essential for modern digital transformation. By bridging the gap between theoretical AI potential and real-world deployment, the AI Synergy Lab ensures that our partners can optimize their hardware configurations for maximum performance and efficiency. This initiative reinforces our commitment to making the Philippines a premier hub for AI innovation in Southeast Asia, providing the specialized environment required to support the next generation of intelligent computing.

Where can our readers learn more about STT GDC Philippines?  

Readers can learn more on our company website, www.sttelemediagdc.com/ph-en.

How can our readers contact STT GDC Philippines? 

You can contact us through Facebook, Linkedin, or our website.

# # #


The post Company Profile: STT GDC Philippines on Building the Philippines’ Largest AI-Ready Data Center Campus appeared first on Data Center POST.

Company Profile: Enchanted Rock – Focused on the Future of Data Center Power Reliability

3 February 2026 at 17:30

Data Center POST had the opportunity to connect with Allan Schurr, Chief Commercial Officer at Enchanted Rock, where he leads commercial strategy, partnerships, and market expansion across data centers, utilities, industrial facilities, and critical infrastructure. With deep experience at the intersection of energy, infrastructure, and technology, Allan works closely with hyperscalers, developers, and operators to address one of the industry’s most pressing challenges: securing reliable, scalable power amid tightening grid constraints.

Throughout the conversation, Allan shared a pragmatic perspective on the energy transition, focused on solutions that work today while enabling lower-carbon outcomes over time. The information below is summarized to provide our readers a deeper dive into who Enchanted Rock is, what they do and the problems they are solving in the industry.

What does Enchanted Rock do?  

Enchanted Rock delivers resilient, dispatchable onsite power generation that enables data centers and other critical facilities to secure power when and where the grid cannot. Working in partnership with utilities, our turnkey power solutions accelerate deployment, protect operations, and strengthen grid reliability.

What problems does Enchanted Rock solve in the market?

Enchanted Rock addresses the growing gap between fast-growing data center power demand and grid capacity. Our solutions help customers overcome interconnection delays, transmission constraints, and utility upgrade timelines, while ensuring reliable power during early operations, peak demand, and grid outages.

What are Enchanted Rock’s core products or services?

At Enchanted Rock, we focus on dispatchable onsite generation systems, end-to-end project delivery (such as design, engineering, EPC, commissioning), and long-term operations and maintenance. We also prioritize portfolio-level power strategy, market participation, and policy support.

What markets do you serve?

Enchanted Rock serves critical infrastructure customers across North America, with a focus on regions experiencing grid congestion, rapid load growth, and constrained interconnection capacity. This includes hyperscale, enterprise, colocation, and edge data center operators, as well as commercial and industrial sites.

What challenges does the global digital infrastructure industry face today?

The defining issue for the global digital infrastructure industry today is power. As AI and cloud adoption accelerate, electricity demand is increasing faster than traditional grid expansion timelines. This has resulted in longer interconnection queues, higher costs, and greater engagement from communities and regulators. In this environment, power availability, reliability, sustainability, and speed to deployment are no longer separate considerations; they must be addressed together.

How is Enchanted Rock adapting to these challenges?

We enable customers to bring capacity online immediately through onsite natural gas or renewable natural gas generation while remaining flexible as grid conditions evolve. Our portfolio-level approach allows developers and operators to standardize scalable solutions, reduce risk across multiple sites, achieve emissions-reduction goals, and align near-term reliability with long-term energy strategies.

What are Enchanted Rock’s key differentiators?

At Enchanted Rock, we focus on proven, dispatchable onsite power that protects customer uptime while supporting grid stability, paired with flexible interconnection strategies that adapt to evolving utility, market, and regulatory conditions. Our onsite generation enables early operations and capacity ramp while data centers await permanent grid interconnection, and we take end-to-end ownership and operational accountability across the full project lifecycle. On top of that, we offer portfolio-level scalability for multi-site data center deployments and the ability to integrate with renewable energy and long-term decarbonization strategies.

What can we expect to see/hear from Enchanted Rock in the future?  

Enchanted Rock will continue expanding its role as a long-term power partner across data centers, industrial customers, and other critical infrastructure. We are focused on scaling portfolio-level onsite power solutions, developing new models for collaboration with utilities, and advancing systems that enhance grid reliability and community resilience. Looking ahead, we will continue investing in flexible, lower-emission generation that enables faster infrastructure deployment while supporting the evolving needs of the grid and the communities it serves.

What upcoming industry events will you be attending? 

Enchanted Rock attends and participates in industry events throughout the year focused on energy resiliency, power infrastructure, and digital infrastructure development. Later this month, catch us at PowerGen and the Power Resilience Forum; for a full schedule of upcoming events, visit www.enchantedrock.com/events.

Do you have any recent news you would like us to highlight?

Enchanted Rock recently appointed John Carrington as Chief Executive Officer to guide the company’s next phase of strategic growth, leveraging his extensive experience scaling energy and technology businesses nationwide. The company also introduced new onsite power generation platforms, the ERT500™ natural gas generator and RockBlock™ system, engineered to deliver higher power density, lower emissions, and utility-grade resiliency while reducing reliance on traditional diesel generators.

Is there anything else you would like our readers to know about Enchanted Rock and capabilities?

As grid conditions become more volatile and utility constraints intensify, Enchanted Rock is focused on helping customers and grid operators adapt to both near-term reliability risks and long-term structural change. We work alongside utilities, regulators, and customers to deploy flexible onsite power that not only protects facilities from outages, but also supports grid operations, evolving interconnection models, and emerging policy requirements.

Even in a year without U.S. hurricane landfalls, the grid faced significant stress in 2025, from winter storms and record heat to accelerating load growth driven by data centers and electrification. During that period, Enchanted Rock protected 348 sites, avoided 2,022 outages, and prevented more than 4,800 hours of downtime, including one avoided outage lasting 628 hours. That real-world performance underscores our role as a dependable, adaptive power partner, helping facilities stay powered, utilities manage constraints, and communities remain resilient as reliability margins tighten heading into 2026 and beyond.

Where can our readers learn more about Enchanted Rock?  

Visit www.enchantedrock.com or follow us on LinkedIn.

How can our readers contact Enchanted Rock? 

You can reach us on the contact page on our website; www.enchantedrock.com/contact.

# # #


The post Company Profile: Enchanted Rock – Focused on the Future of Data Center Power Reliability appeared first on Data Center POST.

‘Do not overlook it at pre-construction’: Gore Street Capital’s head of asset management on BESS data

26 March 2026 at 13:39
BESS fund manager Gore Street Capital's director of asset management, Daniel Sherlock-Burke, recently discussed the work the company is doing to capture and make use of its huge quantities of operational data.


Optimizing Communication for Mixture-of-Experts Training with Hybrid Expert Parallel

2 February 2026 at 18:43

In LLM training, Expert Parallel (EP) communication for hyperscale mixture-of-experts (MoE) models is challenging. EP communication is essentially all-to-all, but because it is dynamic and sparse (each token is routed to only its top-k experts rather than all of them), it is difficult to implement and optimize. This post details an efficient MoE EP communication solution, Hybrid-EP, and its use in the…
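Why this traffic is "dynamic all-to-all" can be shown with a toy sketch (not code from the post; all dimensions here are made-up illustrative values). Each EP rank holds a shard of experts, top-k gating decides per token which experts it visits, and the resulting send-count matrix between ranks changes with the data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: 4 EP ranks, 8 experts (2 per rank),
# 16 tokens per rank, top-2 routing.
EP_RANKS, EXPERTS, TOKENS, TOPK = 4, 8, 16, 2
experts_per_rank = EXPERTS // EP_RANKS

# Each rank's router scores its local tokens against all experts.
logits = rng.normal(size=(EP_RANKS, TOKENS, EXPERTS))

# Top-k gating: every token is dispatched only to its k best experts,
# which is what makes EP traffic sparse rather than dense all-gather.
topk_experts = np.argsort(logits, axis=-1)[..., -TOPK:]

# Build the all-to-all send matrix: send[i, j] = number of
# (token, expert) pairs rank i must ship to rank j.
send = np.zeros((EP_RANKS, EP_RANKS), dtype=int)
for src in range(EP_RANKS):
    dst = topk_experts[src] // experts_per_rank  # owning rank of each chosen expert
    for d in dst.ravel():
        send[src, d] += 1

# Every rank sends exactly TOKENS * TOPK token copies in total...
assert (send.sum(axis=1) == TOKENS * TOPK).all()
# ...but the split across destination ranks depends on the router's
# decisions for this batch, so buffer sizes are only known at runtime.
print(send)
```

In a real training job this exchange would be an `all_to_all` collective with per-destination counts computed each step, which is exactly the implementation and optimization difficulty the post refers to.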


Oracle’s Financing Primes The OpenAI Pump

3 February 2026 at 02:12

Software giant Oracle has a vast installed base of enterprise customers that it has agglomerated over the decades that gives it the cash flow to do many things.

Oracle’s Financing Primes The OpenAI Pump was written by Timothy Prickett Morgan at The Next Platform.
