Duos Technologies Signs ~$200M LOI and Appoints Doug Recker as CEO

Duos Technologies Group, Inc. (Nasdaq: DUOT), a leader in intelligent technologies and digital infrastructure, has signed a non-binding letter of intent (LOI) with Hydra Host to deploy a high-density NVIDIA GPU cluster for a leading global technology customer. The project supports a GPU-as-a-Service (GPUaaS) partnership expected to generate approximately $176 million in revenue over a 36-month term, with gross margins exceeding 80% and projected annual EBITDA of more than $40 million.
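As a back-of-the-envelope check, the quoted figures are mutually consistent; the short sketch below uses only the numbers stated above, with the annualization step being our own assumption:

```python
# Sanity-check of the deal economics quoted in the press release.
# Revenue, term, margin floor, and EBITDA figure come from the release;
# straight-line annualization over the term is an assumption for illustration.
gpuaas_revenue_total = 176e6   # ~$176M over the contract term
term_years = 36 / 12           # 36-month term

annual_revenue = gpuaas_revenue_total / term_years
gross_margin_floor = 0.80      # "gross margins exceeding 80%" (lower bound)
annual_gross_profit = annual_revenue * gross_margin_floor

print(f"Annualized revenue:  ${annual_revenue / 1e6:.1f}M")   # ~$58.7M
print(f"Gross profit at 80%: ${annual_gross_profit / 1e6:.1f}M")  # ~$46.9M
# ~$46.9M of annual gross profit at the 80% floor sits comfortably above
# the projected annual EBITDA of more than $40M.
```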

"We are thrilled to partner with the Duos team on this opportunity," said Aaron Ginn, CEO and Co-Founder of Hydra Host. "Their ability to deliver immediate access to power combined with an industry-leading deployment speed makes them a standout in the market. We see significant runway ahead as we look to expand our collaboration around colocation and Duos' High-Power EDC model, which we believe is purpose-built to address a market where demand for AI compute capacity is fundamentally outpacing the speed at which traditional data center supply can be delivered."

Complementing this milestone, Duos has appointed Doug Recker as Chief Executive Officer, effective April 1, 2026, as the company accelerates its transformation into a focused Edge AI and digital infrastructure platform. Mr. Recker succeeds Chuck Ferry, who will continue to serve on the board of directors.

"This initial customer marks a pivotal step in accelerating the buildout of Duos Edge AI," said Doug Recker, Chief Executive Officer. "We are now entering an exciting phase of execution, further reinforced by our recently announced LOI with Hydra Host, which underscores growing third-party demand for our distributed AI infrastructure model and validates the scalability of our platform. With secured power, rapid deployment capabilities, and expanding strategic partnerships, we believe Duos is well positioned to pursue high-value infrastructure opportunities. Our focus remains on disciplined expansion, capital-efficient growth, and delivering sustainable long-term value for our shareholders."

Beyond GPUaaS revenue, the collaboration creates a pathway for approximately $25 million in incremental colocation revenue over the same term, validating Duos' High-Power Edge Data Center (EDC) business line. The company has also signed a non-binding LOI for a ground lease in Iowa with access to up to 10MW of utility power, advancing its long-term goal of building up to 75MW of distributed capacity.

To learn more about Duos Technologies Group, Inc., visit www.duostechnologies.com.

The post Duos Technologies Signs ~$200M LOI and Appoints Doug Recker as CEO appeared first on Data Center POST.

  •  

Upscale AI Nabs Cash To Forge "SkyHammer" Scale Up Fabric Switch

The first company that can make a UALink switch with high radix (meaning lots of ports) and high aggregate bandwidth across those ports that can compete toe-to-toe with Nvidia's NVSwitch memory fabric and NVLink ports is going to make a lot of money. …

Upscale AI Nabs Cash To Forge "SkyHammer" Scale Up Fabric Switch was written by Timothy Prickett Morgan at The Next Platform.

  •  

TSMC Has No Choice But To Trust The Sunny AI Forecasts Of Its Customers

If the GenAI expansion runs out of gas, Taiwan Semiconductor Manufacturing Co., the world's most important foundry for advanced chippery, will be the first to know. …

TSMC Has No Choice But To Trust The Sunny AI Forecasts Of Its Customers was written by Timothy Prickett Morgan at The Next Platform.

  •  

By Decade's End, AI Will Drive More Than Half Of All Chip Sales

As the year came to an end, we tore apart IDC's assessments for server spending, including the huge jump in accelerated supercomputers for running GenAI and more traditional machine learning workloads, and as this year got started, we did forensic analysis and modeling based on the company's reckoning of Ethernet switching and routing revenues. …

By Decade's End, AI Will Drive More Than Half Of All Chip Sales was written by Timothy Prickett Morgan at The Next Platform.

  •  

Scaling NVFP4 Inference for FLUX.2 on NVIDIA Blackwell Data Center GPUs

In 2025, NVIDIA partnered with Black Forest Labs (BFL) to optimize the FLUX.1 text-to-image model series, unlocking FP4 image generation performance on NVIDIA Blackwell GeForce RTX 50 Series GPUs. As a natural extension of the latent diffusion model, FLUX.1 Kontext [dev] proved that in-context learning is a feasible technique for visual-generation models, not just large language models (LLMs).

  •  

NVIDIA DLSS 4.5 Delivers Super Resolution Upgrades and New Dynamic Multi Frame Generation

NVIDIA DLSS 4 with Multi Frame Generation has become the fastest-adopted NVIDIA gaming technology ever. Over 250 games and apps use it to make real-time path tracing possible, and upcoming titles for 2026, including PRAGMATA and Resident Evil Requiem, also plan to incorporate the software. At CES 2026, the technology became even more powerful. NVIDIA introduced DLSS 4.5…

  •  

Delivering Flexible Performance for Future-Ready Data Centers with NVIDIA MGX

The AI boom reshaping the computing landscape is poised to scale even faster in 2026. As breakthroughs in model capability and computing power drive rapid growth, enterprise data centers are being pushed beyond the limits of conventional server and rack architectures. This is creating new pressures on power budgets, thermal envelopes, and facility space. NVIDIA MGX modular reference…

  •  

Enhancing Communication Observability of AI Workloads with NCCL Inspector

When using the NVIDIA Collective Communication Library (NCCL) to run a deep learning training or inference workload that uses collective operations (such as AllReduce, AllGather, and ReduceScatter), it can be challenging to determine how NCCL is performing during the actual workload run. This post introduces the NCCL Inspector Profiler Plugin, which addresses this problem. It offers a way for…
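For readers less familiar with the collectives named above, the following pure-Python sketch illustrates only their semantics, i.e. what each operation computes; it is emphatically not how NCCL implements them, since real NCCL runs these reductions across GPUs over NVLink and network links:

```python
# Illustration of what the NCCL collectives compute. Each inner list stands
# in for one rank's local buffer; NCCL performs the equivalent reductions
# across GPUs, not in host Python.

def all_reduce(buffers):
    """Every rank ends up with the element-wise sum of all ranks' buffers."""
    summed = [sum(vals) for vals in zip(*buffers)]
    return [list(summed) for _ in buffers]

def all_gather(buffers):
    """Every rank ends up with the concatenation of all ranks' buffers."""
    gathered = [x for buf in buffers for x in buf]
    return [list(gathered) for _ in buffers]

def reduce_scatter(buffers):
    """Buffers are summed element-wise; each rank keeps one equal shard."""
    n = len(buffers)
    summed = [sum(vals) for vals in zip(*buffers)]
    shard = len(summed) // n
    return [summed[r * shard:(r + 1) * shard] for r in range(n)]

ranks = [[1, 2, 3, 4], [10, 20, 30, 40]]    # two ranks, four elements each
print(all_reduce(ranks))      # both ranks: [11, 22, 33, 44]
print(all_gather(ranks))      # both ranks: [1, 2, 3, 4, 10, 20, 30, 40]
print(reduce_scatter(ranks))  # rank 0: [11, 22], rank 1: [33, 44]
```

Tools like the NCCL Inspector described above exist precisely because these operations run across many devices at once, where per-operation timing and bandwidth are otherwise invisible to the workload.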

  •  

Interconnection and Colocation: The Backbone of AI-Ready Infrastructure

Originally posted on 1547Realty.

AI is changing what infrastructure needs to do. It is no longer enough to provide power, cooling, and a basic network connection. Modern AI and high-performance computing workloads depend on constant access to large data sets and fast communication between systems. That makes interconnection an essential part of the environment that supports them.

Traditional cloud environments were not built for dense GPU clusters or latency-sensitive applications. This has helped drive the rise of neocloud providers, which focus on specialized compute and rely on data centers for the physical setting in which they operate.

Industry reporting from RCR Wireless notes that many neocloud providers choose to colocate in established facilities instead of building new data centers. This gives them faster time to market and direct access to network ecosystems that would take years to recreate on their own. In this context, data centers with strong connectivity play a central role.

1547 operates facilities that combine space and power with the network access needed for AI and neocloud deployments. These environments allow operators to place infrastructure where it can perform as intended.

The Shift from Cloud First to Cloud Right

For many years, the default approach for new applications was simple: put it in the cloud. That cloud-first mindset is now giving way to a cloud-right strategy. The question is no longer only whether something can run in the cloud, but whether it should.

AI and high-performance workloads often need to run close to users, to data sources, or along specific network routes. They require predictable latency and steady throughput. When model training or inference spans many GPUs across different clusters, even small delays can affect performance and cost.

Analysts have observed that organizations are matching each workload to the environment that fits it best. As RTInsights highlights, not every workload performs well in a single centralized cloud. Some applications remain in hyperscale environments. Others move to edge sites, private clouds, or colocation facilities that offer greater control over performance. Neocloud operators support this shift by offering GPU-focused infrastructure from locations chosen for both efficiency and access to network routes.

To do that, they need more than space. They need carriers, cloud on-ramps, internet exchanges, and private connection options. They need a fabric that lets them move data efficiently between customers, partners, and providers. Connectivity within the facility brings these elements together and supports cloud-right placement.

1547 facilities support this shift by giving operators access to diverse networks in key markets. These environments allow AI workloads to sit where they perform best while staying connected to the wider ecosystem.

To continue reading, please click here.

The post Interconnection and Colocation: The Backbone of AI-Ready Infrastructure appeared first on Data Center POST.

  •