AI in data: sorting reality from hallucination

Many people do not use the term artificial intelligence correctly: vendors, investors, and even some operators label everything from basic automation scripts to deep learning controllers as AI. This inflation of the term has commercial and strategic motives. AI branding helps attract funding, creates differentiation in the market, and positions traditional analytics as cutting-edge solutions.

However, this broad usage also breeds confusion and skepticism. Data center operators, uncertain about the level of autonomy or risk they face, often hesitate to implement even safe, deterministic systems.

Many operators remain hesitant to implement AI in their data centers, often citing fears of hallucination — the risk that an AI system might generate false or invented information. Yet not all AI behaves this way, and the term is frequently misapplied. By clarifying the different types of AI, how they vary in capability and reliability, and which pose genuine hallucination risks, operators can better distinguish dependable automation from the marketing-driven “AI-washing” that fuels confusion and obscures real risk.

The spectrum of AI in data centers

AI in data centers spans a broad continuum, from deterministic, data-driven algorithms to advanced systems capable of adaptive or autonomous decision-making. Treating these technologies as a single category obscures important differences in capability, reliability and operational risk. Understanding this spectrum is critical for evaluating what each system can — and cannot — safely automate.

Table 1 compares the different types of AI used in modern data centers.

Table 1 AI types used in data centers


Mislabeling AI: the root of hallucination fears

Across the tech sector, and within data center operations in particular, everything from basic regression models to large transformer networks is labeled as AI. This conflation blurs the operational reality:

  • Predictive and optimization models (ML, neural networks) rely on measurable data and statistical learning. They rarely improvise.
  • Generative and language models (LLMs, GenAI) produce content probabilistically, often without grounding in external data, which creates a risk of fabrication.
  • Agentic AI orchestrates systems and can call on other models (including LLMs) to plan or communicate, but its reliability depends on which components it uses.

This terminological blur feeds operator anxiety. A predictive control loop that tunes chillers based on real-time feedback is not at risk of hallucination, yet many operators equate it with the behavior of chatbots and generative systems. In practice, hallucination is a property of generative AI, not of deterministic automation or data-driven control.

Understanding which AI types can hallucinate, and why, is essential for evaluating their operational reliability. Table 2 below clarifies the differences across major AI categories used in data centers.

Table 2 Hallucination behavior and risks across AI types


Managing and mitigating hallucination risks

Operators can apply a focused set of safeguards that keep AI useful while limiting unsafe or fabricated outputs:

  • Constrain generative models to verified, domain‑specific sources such as maintenance manuals, runbooks, building management system (BMS)/data center infrastructure management (DCIM) logs, incident records and approved knowledge articles.
  • Use retrieval‑augmented generation (RAG) so that models base responses on current operational data rather than general training alone.
  • Adopt hybrid architectures that pair LLM copilots with deterministic rule engines or physics‑based digital twins, which can verify or veto proposed actions before they affect live systems.
  • Require human‑in‑the‑loop validation before AI can change configurations, control physical systems, or execute high‑impact runbook steps.
  • Establish clear governance that makes a distinction between “assistive AI” (documentation, recommendations, analysis) and “operational AI” (any system that can directly change configurations or physical infrastructure).
  • Apply strict scoping and access control so more powerful generative or agentic components start in read‑only or advisory modes and follow least‑privilege principles for credentials and APIs.
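The "assistive AI" versus "operational AI" distinction and the human-in-the-loop requirement described above can be sketched as a simple governance gate. This is a minimal, hypothetical illustration; the action names and policy sets are invented for the example and do not come from any specific product:

```python
# Hypothetical sketch of an AI-action governance gate: "assistive" outputs
# (reports, recommendations) pass through freely, while "operational" actions
# (anything that changes configurations or physical systems) are held for
# explicit human approval. Unknown actions fail closed, per least privilege.

ASSISTIVE = {"summarize_incident", "draft_runbook", "recommend_setpoint"}
OPERATIONAL = {"change_setpoint", "restart_crah_unit", "update_config"}

def gate_action(action: str, human_approved: bool = False) -> str:
    """Return the disposition for a proposed AI action."""
    if action in ASSISTIVE:
        return "allow"                # advisory output only; no system change
    if action in OPERATIONAL:
        return "allow" if human_approved else "hold_for_review"
    return "deny"                     # anything unrecognized is refused

print(gate_action("recommend_setpoint"))              # allow
print(gate_action("change_setpoint"))                 # hold_for_review
print(gate_action("change_setpoint", human_approved=True))  # allow
```

The fail-closed default on unknown actions mirrors the least-privilege scoping recommended in the last bullet.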

The Uptime Intelligence View

Much of the data center industry’s caution around AI appears to come from treating it as a single, generative technology rather than a stack of distinct capabilities. In real deployments, predictive models are typically aligned with control and optimization tasks; emerging agentic approaches support orchestrated, multi-step decision flows; and LLMs or other generative systems are best suited for documentation, reasoning support, and advisory use under governance constraints. When these distinctions are made explicit, AI becomes a potential enabler of resilient, self‑optimizing facilities rather than a direct threat to uptime.

The post AI in data: sorting reality from hallucination appeared first on Uptime Institute Blog.

What’s Going On With Hyundai? Ask Mary?

4 April 2026 at 22:42

At the New York Auto Show on Wednesday, Hyundai announced a strategic shift for its vehicles. CEO José Muñoz said the company is increasing its focus on the US market, which is Hyundai’s largest. The company is investing $26 billion in the US and building a steel plant, while planning for ... [continued]

The post What’s Going On With Hyundai? Ask Mary? appeared first on CleanTechnica.

Hyundai IONIQ 5 Sales Actually Up This Year!

4 April 2026 at 17:00

Last night, I wrote about how Ford and Nissan EV sales collapsed in the US in 2025. Tesla sales also weren’t good, and the overall US EV market is clearly down following the ending of the $7,500 US EV tax credit. So, I was quite surprised to see that the ... [continued]

The post Hyundai IONIQ 5 Sales Actually Up This Year! appeared first on CleanTechnica.

Data Centers Are Transitioning From AC to DC

24 March 2026 at 16:00


Last week’s Nvidia GTC conference highlighted new chip architectures to power AI. But as the chips become faster and more powerful, the remainder of data center infrastructure is playing catch-up. The power-delivery community is responding: Announcements from Delta, Eaton, Schneider Electric, and Vertiv showcased new designs for the AI era. Complex and inefficient AC-to-DC power conversions are gradually being replaced by DC configurations, at least in hyperscale data centers.

“While AC distribution remains deeply entrenched, advances in power electronics and the rising demands of AI infrastructure are accelerating interest in DC architectures,” says Chris Thompson, vice president of advanced technology and global microgrids at Vertiv.

AC-to-DC Conversion Challenges

Today, nearly all data centers are designed around AC utility power. The electrical path includes multiple conversions before power reaches the compute load. Power typically enters the data center as medium-voltage AC (1 to 35 kilovolts), is stepped down to low-voltage AC (480 or 415 volts) using a transformer, converted to DC inside an uninterruptible power supply (UPS) for battery storage, converted back to AC, and converted again to low-voltage DC (typically 54 V DC) at the server, supplying the DC power computing chips actually require.
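The cumulative cost of that multi-stage path can be sketched with round numbers. The per-stage efficiencies below are illustrative assumptions for the calculation, not vendor figures from the article:

```python
# Illustrative only: how losses compound across the AC power path described
# above. Each per-stage efficiency is an assumed round number, not vendor data.
stages = {
    "MV-to-LV transformer":        0.99,
    "UPS rectifier (AC-to-DC)":    0.97,
    "UPS inverter (DC-to-AC)":     0.97,
    "Server PSU (AC-to-54 V DC)":  0.96,
}

overall = 1.0
for name, eff in stages.items():
    overall *= eff            # each stage multiplies in its own loss

print(f"End-to-end efficiency: {overall:.1%}")
```

Even with each stage in the high-90s, the chain as a whole lands near 90 percent, which is why eliminating conversion steps matters at megawatt scale.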

“The double conversion process ensures the output AC is clean, stable, and suitable for data center servers,” says Luiz Fernando Huet de Bacellar, vice president of engineering and technology at Eaton.

That setup worked well enough for the power levels of traditional data centers, where computational racks draw on the order of 10 kW each. AI racks are starting to approach 1 megawatt. At that scale, the energy losses, current levels, and copper requirements of AC-to-DC conversion become increasingly difficult to justify. Every conversion stage incurs some power loss. On top of that, as the amount of power that needs to be delivered grows, the sheer size of the converters, along with the copper busbars needed to carry the current, becomes untenable. According to an Nvidia blog, a 1-MW rack could require as much as 200 kilograms of copper busbar. For a 1-gigawatt data center, that could amount to 200,000 kg of copper.
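The scaling behind those copper figures is straightforward to verify; this sketch simply extrapolates the 200 kg-per-1-MW-rack figure cited from the Nvidia blog to facility scale:

```python
# Back-of-the-envelope check of the busbar figures cited above:
# 200 kg of copper busbar per 1-MW rack, scaled to a 1-GW facility.
copper_per_rack_kg = 200
rack_power_mw = 1
facility_power_gw = 1

racks = facility_power_gw * 1000 // rack_power_mw   # 1 GW = 1,000 MW of racks
total_copper_kg = racks * copper_per_rack_kg
print(total_copper_kg)  # 200000, matching the article's figure
```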

Benefits of High-Voltage DC Power

Converting 13.8-kV AC grid power directly to 800 V DC at the data center perimeter eliminates most intermediate conversion steps. This reduces the number of fans and power-supply units, and leads to higher system reliability, lower heat dissipation, improved energy efficiency, and a smaller equipment footprint.

“Each power conversion between the electric grid or power source and the silicon chips inside the servers causes some energy loss,” says Bacellar.

Switching from 415-V AC to 800-V DC in electrical distribution enables 85 percent more power to be transmitted through the same conductor size. This happens because higher voltage reduces current demand, lowering resistive losses and making power transfer more efficient. Thinner conductors can handle the same load, cutting copper requirements by 45 percent, improving efficiency by 5 percent, and lowering total cost of ownership by 30 percent for gigawatt-scale facilities.
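The current-and-loss scaling behind that claim can be sketched with basic circuit relations. The load and conductor resistance below are arbitrary round numbers chosen for illustration, and this simplified sketch ignores AC-specific effects such as power factor and three-phase distribution:

```python
# Illustrative scaling only: for a fixed delivered power and conductor
# resistance, current falls inversely with voltage, and resistive loss
# (I^2 * R) falls with the square of the voltage ratio.
P = 100_000.0   # delivered power, W (arbitrary)
R = 0.01        # conductor resistance, ohms (arbitrary)

def conductor_loss(voltage: float) -> float:
    current = P / voltage          # I = P / V
    return current ** 2 * R        # loss = I^2 * R

loss_415 = conductor_loss(415.0)
loss_800 = conductor_loss(800.0)
print(f"Loss at 800 V is {loss_800 / loss_415:.0%} of the 415 V loss")
```

Raising the voltage from 415 V to 800 V cuts current by roughly half and resistive loss to about a quarter, which is why the same conductor can carry substantially more power.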

“In a high-voltage DC architecture, power from the grid is converted from medium-voltage AC to roughly 800-V DC and then distributed throughout the facility on a DC bus,” said Vertiv’s Thompson. “At the rack, compact DC-to-DC converters step that voltage down for GPUs and CPUs.”

A report from technology advisory group Omdia claims that higher-voltage DC data centers have already appeared in China. In the Americas, the Mt. Diablo Initiative (a collaboration among Meta, Microsoft, and the Open Compute Project) is a 400-V DC rack power distribution experiment.

Innovations in DC Power Systems

A handful of vendors are trying to get ahead of the game. Vertiv’s 800-V DC ecosystem, which integrates with Nvidia Vera Rubin Ultra Kyber platforms, will be commercially available in the second half of 2026. Eaton, too, is well advanced in its 800-V DC systems, courtesy of a medium-voltage solid-state transformer (SST) that will sit at the heart of its DC power distribution system. Meanwhile, Delta has released 800-V DC in-row 660-kW power racks with a total of 480 kW of embedded battery backup units. And SolarEdge is hard at work on a 99-percent-efficient SST that will be paired with a native DC UPS and a DC power distribution layer.

But much of the industry is far behind. Patrick Hughes, senior vice president of strategy, technical, and industry affairs for the National Electrical Manufacturers Association, says most innovation is happening at the 400-V DC level, though some are preparing 800-V DC. He believes the industry needs a complete, coordinated ecosystem, including power electronics, protection, connectors, sensing, and service‑safe components that scale together rather than in isolation. That, in turn, requires retooling manufacturing capacity for DC‑specific equipment, expanding semiconductor and materials supply, and clear, long‑term demand commitments that justify major capital investment across the value chain.

“Many are taking a cautious approach, offering limited or adapted solutions while waiting for clearer standards, safety frameworks, and customer commitments,” said Hughes. “Building the supply chain will hinge on stabilizing standards and safety frameworks so suppliers can design, certify, manufacture, and install equipment with confidence.”


Hyundai IONIQ 6 N Crowned 2026 World Performance Car

3 April 2026 at 00:55

IONIQ 6 N named 2026 World Performance Car, marking the second time in three years that Hyundai’s N brand has secured the award after IONIQ 5 N in 2024 This marks the fifth consecutive year Hyundai Motor has been honored at the World Car Awards Hyundai Motor Company has achieved ... [continued]

The post Hyundai IONIQ 6 N Crowned 2026 World Performance Car appeared first on CleanTechnica.

CSIRO’s thermal-sensing robots detect solar module faults in landmark Australian trial

26 March 2026 at 00:06

Australia's national science agency, CSIRO, has successfully completed trials of autonomous robots designed to revolutionise maintenance operations at large-scale solar installations.

EPRI launches new large load framework to reduce time to power for data centers

23 March 2026 at 19:56

To address time to power, one of the biggest constraints currently slowing data center deployment, EPRI is launching Flex MOSAIC, a uniform flexibility classification framework for large electric loads, developed through its DCFlex initiative in collaboration with more than 65 utilities, system operators, regulators, hyperscalers and technology providers.

The voluntary framework is meant to establish a “shared, credible way” to define flexibility from large loads (particularly data centers) based on the magnitude, timing, duration and frequency of their response. By enabling a common understanding of what flexibility a load can deliver, EPRI argues the framework could help shorten interconnection timelines, improve grid planning confidence and accelerate access to power without compromising reliability or affordability.

“As demand from AI and data centers grows at unprecedented speed, flexibility is becoming the third leg of the speed-to-power stool, alongside generation and transmission,” said EPRI President and CEO Arshad Mansoor. “This framework allows everyone — utilities, regulators, and large‑load developers — to have common language about flexibility and to trust what that language means. That shared understanding is essential to moving faster while maintaining reliability.”

Data center construction and the advent of artificial intelligence (AI) are driving unprecedented electric load growth across the United States. Massive hyperscalers with deep pockets and bold aspirations need power, and they need it fast.


The framework defines flexibility through practical performance characteristics, including how quickly a load can respond, how long adjustments can last and how much power can be reduced or shifted. These characteristics are organized into a set of uniform flexibility classes that utilities, system operators and data centers can apply consistently across regions.
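The performance characteristics the framework is described as using (magnitude, timing, duration, and frequency of response) can be sketched as a data structure. The field names and example values below are illustrative assumptions for the sketch, not EPRI's actual Flex MOSAIC class definitions:

```python
# Hypothetical sketch of a flexibility-class declaration for a large load.
# Fields mirror the characteristics named in the article: how much power can
# be reduced or shifted, how quickly, for how long, and how often.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlexibilityClass:
    max_reduction_mw: float      # magnitude: power that can be reduced/shifted
    response_time_min: float     # timing: how quickly the load can respond
    max_duration_hr: float       # duration: how long the adjustment can last
    events_per_year: int         # frequency: how often it can be called on

def can_serve(fc: FlexibilityClass, needed_mw: float, within_min: float) -> bool:
    """Check whether a declared class covers a grid operator's request."""
    return fc.max_reduction_mw >= needed_mw and fc.response_time_min <= within_min

# Example: a data center declaring 50 MW of curtailment within 10 minutes.
dc = FlexibilityClass(max_reduction_mw=50, response_time_min=10,
                      max_duration_hr=4, events_per_year=20)
print(can_serve(dc, needed_mw=30, within_min=15))  # True
```

A shared schema like this is what would let utilities and operators compare declared flexibility consistently across regions.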

The framework is meant to provide a technical foundation that jurisdictions and market participants can adapt to their local needs. “As large, flexible loads play a growing role in the power system, having clear, technically grounded definitions of flexibility is critical for reliability,” said North American Electric Reliability Corporation President Jim Robb. “A common framework like this can help system operators and planners speak the same language, essential for maintaining a reliable grid.”

“As demand from data centers accelerates, state regulators are focused on ensuring customers are not burdened by the costs of serving new, large loads, as well as maintaining grid reliability,” said NARUC President Ann Rendahl. “NARUC looks forward to engaging with EPRI and others on how a voluntary, standardized framework like Flex MOSAIC can create a common language and shared understanding of flexibility, and provide benefits to state regulators when evaluating data center integration, without shifting costs to customers or compromising grid reliability.”

Initial framework participants include Alliant Energy, Arizona Public Service, California ISO, El Centro Nacional de Control de Energía (CENACE), Compass Datacenters, Constellation Energy, DTE Energy, Entergy, Exelon, Georgia Transmission Corporation, Google, Honeywell, Independent Electricity System Operator (IESO), ING, Jenbacher, Korea Power Exchange (KPX), KPMG, LG Pado, Lincoln Electric System, Lower Colorado River Authority, Meta, Midcontinent Independent System Operator (MISO), Nebraska Public Power District, NERC, NVIDIA, Portland General Electric, PSEG, Rayburn Electric, Salt River Project, Siemens, Southern Company, Southwest Power Pool and United Power.

Caterpillar engines to support 2 GW of onsite power at West Virginia data center campus tied to Microsoft, NVIDIA

17 March 2026 at 23:17

Caterpillar moved to the center of the AI infrastructure buildout this week as developer Nscale said it would use the company’s natural gas generator sets to power a major new West Virginia data center campus tied to Microsoft and NVIDIA.

Monday’s announcement positions Caterpillar’s G3500 series reciprocating engine platform as core infrastructure for what Nscale said could become one of the country’s largest dedicated AI compute developments.

Under the plan, Caterpillar equipment would support 2 GW of onsite generation by the first half of 2028 at the Monarch Compute Campus in Mason County, West Virginia, giving the project a faster path to power as grid access and transmission upgrades remain a constraint for large data center loads.

Nscale said the campus would host up to 1.35 GW of AI compute capacity for Microsoft under a letter of intent tied to NVIDIA Vera Rubin NVL72 systems and the NVIDIA DSX AI Factory reference design. The company also announced it had acquired American Intelligence & Power Corp., which includes the 2,250-acre Monarch site and what Nscale described as the first state-certified AI microgrid in the U.S., with expansion potential beyond 8 GW.



It’s the latest example of a data center project being structured around large-scale onsite natural gas generation, rather than waiting solely on utility service.

In a recent report, Cleanview identified 46 U.S. data center projects representing a combined 56 GW of planned behind-the-meter power capacity, which it estimated at roughly 30% of planned U.S. data center capacity in its tracker.

The research company also said 90% of the projects it identified were announced in 2025, indicating that “Bring Your Own Power” has shifted from a niche workaround to a more mainstream development path as grid interconnection timelines lengthen.

“A year ago, behind-the-meter data center power was a curiosity, embodied by xAI’s controversial decision to truck mobile generators into Memphis,” said Cleanview. “Now it’s an increasingly common development strategy.”

Cleanview added many of the projects it identified have secured equipment partners and are already under construction.

“Projects like Monarch demonstrate how Caterpillar’s natural gas generation platforms are being deployed as core infrastructure for data centers and other power intensive applications where reliability, speed of deployment, and lifecycle performance are critical,” Melissa Busen, Caterpillar senior vice president of Electric Power, said in a statement.

Nscale said the West Virginia site would operate independently of the local grid, which it argued would avoid adding costs to existing utility customers, while preserving the option of a future grid interconnection that could allow exports. The company also said it is pursuing carbon sequestration and a design intended to reduce water use.


Kevin Clark

Kevin Clark is the editor of Factor This Power Engineering, where he reports on power generation, grid reliability and emerging trends shaping the electric sector. Kevin also leads editorial strategy for the POWERGEN conference. He previously spent a decade as a television news and digital journalist. Have a story idea? Email Kevin at kevin.clark@clarionevents.com.

Three Questions Every AI‑Driven Operator Should Ask Their Infrastructure Partner

1 April 2026 at 16:00

Originally posted on Datalec LTD.

Data centre leaders left ExCeL London earlier this month with one message ringing loud and clear: AI‑driven growth is accelerating, power is tight, and the choice of infrastructure partner is now business‑critical, not optional.

Against a backdrop of rapid hyperscale and colocation expansion, constrained power availability and rising energy scrutiny, the conversations at Data Centre World London 2026 underscored that operators need partners who can help them plan power‑first, deploy at speed, and operate reliably in high‑density environments.

For Datalec Precision Installations (DPI), DCW London was an opportunity to demonstrate exactly that kind of integrated, global capability, from modular data centre solutions through to facilities management, consultancy and lifecycle services. The questions operators brought to the stand were remarkably consistent, whether they were building in the UK, expanding in the Middle East, or planning their next phase of growth in APAC.

Below, we revisit three of the most important questions AI‑driven operators were asking in London and why they will matter even more as the industry converges on Singapore for DCW Asia later this year.

1. How quickly can you take me from secured power to live, AI‑ready capacity?

If there was one common theme at DCW London, it was that power availability has become the primary constraint on new data centre builds, not demand. Once operators have secured land and grid, the urgent requirement is simple: how fast can we safely turn that capacity into revenue‑generating, AI‑ready infrastructure?

This is where modular, pre‑engineered solutions dominated the conversation. Many visitors to the DPI stand wanted to understand how modular white space, plant and service corridors could compress design and construction timelines without sacrificing resilience or compliance. DPI’s next‑generation Modular Data Centre Solutions attracted strong interest because they are designed precisely for this challenge. They help clients move from planning to live halls at speed, whether that’s a new campus in a European hub, a hyperscale expansion in the Middle East, or an edge or colocation site in a fast‑growing APAC market.

To continue reading, please click here.

The post Three Questions Every AI‑Driven Operator Should Ask Their Infrastructure Partner appeared first on Data Center POST.

AI Workloads and the Implications for High-Density Data Centre Design

23 March 2026 at 14:00

AI workloads are pushing data centre infrastructure towards higher rack densities, new cooling strategies and greater power demand. Jamie Darragh, Data Centre Director, Europe, at global data centre engineering design consultancy Black & White Engineering, examines the design implications for the next generation of facilities.

AI and high-performance computing are placing new demands on data centre infrastructure. Rack densities are increasing; facilities are being delivered at larger scale and operators are under pressure to support workloads that consume far greater levels of power and generate far higher heat loads than conventional cloud environments.

Independent forecasts underline the pace of expansion. Gartner estimates global data centre electricity consumption will rise from around 448 TWh in 2025 to roughly 980 TWh by 2030, driven largely by AI-optimised computing infrastructure. Within that growth, AI servers alone are expected to account for close to 44% of data centre power consumption by the end of the decade.

For our engineering teams, these workloads are altering the practical limits of traditional infrastructure design. Rack densities of 100 kW to 200 kW and beyond are now appearing in project specifications, particularly where large AI training clusters are planned. These loads influence every part of the building environment, from electrical distribution and cooling capacity to structural loading and cable management.

Designing for extreme density

Under these conditions, air cooling alone becomes difficult to sustain across entire facilities. Liquid cooling is therefore increasingly included in the baseline design of new data centres rather than introduced later as a specialist solution. This cooling method is becoming increasingly favourable due to its higher specific thermal capacity compared with air, which enables more efficient heat transfer and removal. Direct-to-chip and rack-level systems are being designed alongside air cooling so facilities can accommodate different densities and equipment types across the same site.
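The thermal-capacity advantage of liquid over air can be made concrete with a simple heat-balance calculation. The specific heat values are standard textbook figures; the heat load and temperature rise are assumed round numbers for illustration:

```python
# Illustrative comparison: coolant mass flow needed to remove a 100 kW heat
# load with a 10 K coolant temperature rise, for air versus water.
Q = 100_000.0        # heat load, W (assumed for illustration)
dT = 10.0            # coolant temperature rise, K (assumed)
cp_air = 1005.0      # specific heat of air, J/(kg*K)
cp_water = 4186.0    # specific heat of water, J/(kg*K)

mdot_air = Q / (cp_air * dT)        # mass flow = Q / (cp * dT)
mdot_water = Q / (cp_water * dT)

print(f"Air:   {mdot_air:.2f} kg/s")
print(f"Water: {mdot_water:.2f} kg/s")  # roughly 4x less mass flow than air
```

Water's roughly fourfold higher specific heat (before even counting its far higher density) is why direct-to-chip liquid loops can remove the same heat with much smaller flow paths than air systems.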

The introduction of liquid systems requires careful coordination between disciplines. Facilities must manage environments where air and liquid cooling operate together, supported by monitoring platforms, safety controls and operational procedures capable of supporting both approaches.

Some IT chips require different liquid cooling temperatures than those used in air-cooling systems, creating technical hurdles for the overall heat rejection system and requiring precise control of the cooling circuit temperature. Another engineering challenge lies in integrating these systems with power distribution, control platforms and maintenance strategies rather than selecting one cooling method over another.

Higher density also narrows operational tolerance. Commissioning becomes more demanding and redundancy strategies require more detailed modelling. Infrastructure must be capable of supporting peak compute demand while maintaining efficiency when loads are lower, placing greater emphasis on flexible electrical and mechanical systems.

The scale of development is also increasing. Buildings that once delivered a few megawatts of capacity are now part of campus-scale developments where multiple data halls contribute to facilities delivering hundreds of megawatts. Data centres are increasingly planned and delivered as long-term infrastructure assets rather than individual projects.

This environment encourages repeatable design and industrialised delivery methods. Developers and investors expect predictable construction schedules and consistent performance across multiple sites. As a result, engineering teams are placing greater emphasis on modular infrastructure systems and digital design methods that allow mechanical and electrical systems to be configured and deployed repeatedly.

Power, control and operational intelligence

Power availability is also becoming a determining factor in project planning. In many regions, grid connection capacity is now one of the main constraints on new development. Gartner has warned that by 2027 as many as 40% of AI data centres could face operational limits because of power availability.

Developers are therefore engaging more closely with utilities during early feasibility stages and exploring complementary infrastructure such as on-site generation and energy storage. In some cases, data centres are also being designed to contribute to wider grid stability through demand response and energy management capability.

Artificial intelligence is also beginning to influence how facilities themselves are operated. Machine-learning systems are already being used in some environments to optimise airflow patterns, cooling plant performance and power distribution using live operational data.

The next stage will see more widespread use of integrated control platforms and digital twins capable of modelling facility behaviour in real time. These systems allow operators to simulate infrastructure performance under different load conditions, test operational changes and identify maintenance requirements before faults occur.

Environmental performance remains another constraint as compute density increases. Higher workloads place additional pressure on energy supply while raising questions around water consumption, construction materials and waste heat recovery. Planning authorities and investors are increasingly looking for measurable improvements in efficiency and carbon reporting before approving new developments. Sustainability therefore sits alongside power and cooling as a central engineering consideration rather than a secondary design feature.

Taken together, these conditions create a more complex design environment for data centre infrastructure. Higher compute densities, power constraints and new operational technologies require mechanical, electrical and digital systems to be considered together from the earliest design stages.

Facilities intended to support AI workloads must accommodate far greater performance requirements than earlier generations of data centres while remaining adaptable as infrastructure technologies and operating practices continue to develop.

# # #

About the Author

Jamie Darragh is Data Centre Director, Europe at Black & White Engineering. He leads the delivery of complex, mission-critical projects across the region, with a focus on technical quality, design coordination and strong client relationships. A Chartered Engineer and member of CIBSE and the IET, Jamie has worked across Europe, the Middle East and the UK since 2005. He brings a clear, practical approach to engineering challenges, combining technical expertise with commercial awareness. He is committed to developing teams that work collaboratively and perform at a high level. Jamie has received several industry awards recognising both his technical capability and his impact on the built environment, including ‘Engineer of the Year’ at leading Middle East industry awards.

The post AI Workloads and the Implications for High-Density Data Centre Design appeared first on Data Center POST.

Duos Edge AI Deploys Second Edge Data Center in Amarillo, Texas Market

20 March 2026 at 13:00

Duos Technologies Group Inc. (Nasdaq: DUOT), through Duos Edge AI, Inc., has announced the deployment of its second Edge Data Center in the Amarillo, Texas market. The new carrier-neutral, SOC 2-compliant facility is located on Potter County land adjacent to the largest colocation facility in the Texas Panhandle, further strengthening digital infrastructure for carriers, healthcare organizations, enterprises, and public sector entities across the region.​

Building on the success of its initial Amarillo deployment, this latest installation expands Duos Edge AI’s footprint in the Panhandle and adds high-density, low-latency computing capabilities for real-time AI applications, enhanced bandwidth, and secure data processing.

“We are proud to deepen our commitment to the Amarillo market with this second deployment, building on the foundation established by our initial EDC, which brought high-performance computing directly to the heart of the Panhandle,” said Dave Irek, Chief Operations Officer of Duos Edge AI. “This expansion enhances capacity and capability in the region, and by partnering on Potter County land adjacent to a premier colocation hub, we are creating a robust, carrier-neutral ecosystem designed to support innovation, attract investment, and drive long-term economic growth.”​

The company said the deployment also helps reduce dependence on data centers located in tier one cities while supporting underserved and high-growth markets across Texas. Duos Edge AI’s broader Texas expansion includes recent installations in Lubbock, Waco, Victoria, Abilene, and Corpus Christi.

Potter County Judge Nancy Tanner added, “This collaboration with Duos Edge AI represents a significant investment in our community’s future. Positioning this advanced, carrier-neutral data center on county land next to the Panhandle’s largest colocation facility will attract new businesses, improve connectivity for our residents and schools, and position Potter County as a leader in digital infrastructure.”

The new EDC is expected to be fully operational in the coming months.

To learn more about Duos Edge AI, visit www.duosedge.ai.

The post Duos Edge AI Deploys Second Edge Data Center in Amarillo, Texas Market appeared first on Data Center POST.

Foresight Raises $25M to Tackle Infrastructure Execution Risks in the AI Era

18 March 2026 at 17:00

As global investment in AI infrastructure, power, and advanced manufacturing accelerates, a critical constraint is coming into sharper focus—project execution.

A newly announced $25 million Series A funding round for Foresight underscores a broader industry shift: while capital continues to flow into large-scale infrastructure, delivering these projects on time and on budget remains a persistent challenge.

The current wave of infrastructure investment is unprecedented in both scale and complexity. Hyperscale data centers, energy systems, and advanced industrial facilities are being developed simultaneously across global markets, often with overlapping supply chains and tight delivery timelines.

However, execution has emerged as a systemic issue.

Research indicates that nearly 90% of large-scale infrastructure projects are completed late or exceed budget expectations. In the context of AI infrastructure, delays can have cascading effects—impacting capacity availability, increasing financing costs, and delaying revenue generation.

Industry observers note that as demand for compute continues to surge, particularly for AI workloads, the margin for error in delivery timelines is shrinking.

A Shift Toward Predictive Delivery Models

Foresight, which positions itself as a predictive project delivery platform, is part of a growing cohort of technology providers aiming to address these execution challenges through data and automation.

The company’s platform is designed to move beyond traditional project management approaches—often reliant on static schedules and retrospective reporting—by introducing continuous validation of project progress and early identification of risk factors.

According to the company, its system enables infrastructure owners to establish baseline schedules more quickly, integrate data across stakeholders, and forecast potential delays before they materialize. Early adopters report improvements in forecast accuracy and reductions in cost overruns.

While such claims reflect a broader trend toward digitization in construction and infrastructure delivery, they also point to a deeper industry need: greater predictability in increasingly complex builds.

Why Execution Matters More in the AI Era

For data center developers and operators, execution risk is becoming more consequential.

Unlike previous infrastructure cycles, AI-driven demand is both immediate and rapidly evolving. Delays in bringing capacity online can result in missed opportunities, strained customer relationships, and competitive disadvantages in key markets.

At the same time, projects are becoming more interdependent. Power availability, equipment procurement, and site development must align precisely—leaving little room for disruption.

This dynamic is prompting a reassessment of how infrastructure projects are planned and managed, with greater emphasis on real-time data, cross-functional visibility, and proactive intervention.

Expanding Beyond Data Centers

Although the initial focus is on sectors such as hyperscale data centers, the challenges associated with project execution are not unique to digital infrastructure.

Foresight plans to expand its platform into adjacent industries, including energy, defense, and advanced manufacturing—areas that share similar characteristics: large capital commitments, complex supply chains, and high sensitivity to delays.

The company’s recent funding, led by Macquarie Capital Venture Capital, reflects investor interest in solutions that address these systemic inefficiencies.

An Industry Inflection Point

The emergence of predictive project delivery tools signals a broader transformation in how infrastructure is built.

For years, innovation in the data center sector has centered on compute performance, cooling technologies, and energy efficiency. Increasingly, attention is shifting toward the process of delivery itself.

As infrastructure programs continue to scale, the ability to execute with precision may become a defining factor in project success.

In an environment where demand is high and timelines are compressed, the question facing the industry is evolving—from whether projects can be financed to whether they can be delivered as planned.

The post Foresight Raises $25M to Tackle Infrastructure Execution Risks in the AI Era appeared first on Data Center POST.

Duos Edge AI and Seimitsu Strengthen Georgia’s Digital Infrastructure

18 March 2026 at 15:00

Duos Technologies Group Inc. (Nasdaq: DUOT), through Duos Edge AI, Inc., has formed a strategic partnership with Seimitsu to revolutionize digital infrastructure across Georgia. By combining Duos Edge AI’s modular, high-performance solutions with Seimitsu’s expansive high-speed fiber network, the collaboration delivers low-latency processing and high-bandwidth connectivity for businesses, municipalities, and healthcare providers statewide.

“Our mission is to bring the power of the cloud to the street corner. Partnering with Seimitsu allows us to integrate our Edge AI nodes into a robust, reliable fiber backbone, ensuring that Georgia’s industries – from the port of Savannah to Atlanta’s technology corridors – have the infrastructure they need to compete globally,” said Dave Irek, Chief Operations Officer of Duos Edge AI.

As demand for real-time data processing grows, driven by AI, IoT, and autonomous systems, infrastructure closer to end users has become critical. This partnership positions Georgia at the forefront of the Edge revolution with ultra-low latency processing, Seimitsu’s 25 terabits of low-latency fiber capacity across the Southeast, and rapid deployment of Duos Edge AI nodes in underserved and high-demand areas.

Sam Cook, CEO of Seimitsu, added, “For more than 40 years, Seimitsu has been committed to connecting our communities. This partnership with Duos Edge AI represents the next step in that journey. By integrating edge computing directly into our network, we are moving beyond simple transit services and delivering true digital transformation for our clients.”

The partnership supports Duos Edge AI’s nationwide expansion of distributed AI infrastructure through strategic fiber, power, and site partnerships.

To learn more about Duos Edge AI, visit www.duosedge.ai.

The post Duos Edge AI and Seimitsu Strengthen Georgia’s Digital Infrastructure appeared first on Data Center POST.

Metro Connect USA 2026 Highlights the Future of U.S. Digital Infrastructure

16 March 2026 at 16:00

Metro Connect USA 2026 brought the digital infrastructure community together in Fort Lauderdale, Florida, Feb. 23 to 25, as executives, investors and network operators gathered to discuss the evolving connectivity landscape. Over three days, conversations across keynote sessions, panels and private meetings focused on how the industry is adapting to the rapid growth of artificial intelligence, cloud services and bandwidth demand.

The 2026 event drew more than 3,700 decision-makers representing over 1,200 companies, reflecting the scale of collaboration and investment shaping the next phase of digital infrastructure development in the United States.

Artificial intelligence was a central theme throughout the conference. Industry leaders discussed how AI workloads are driving new requirements for data center capacity, fiber connectivity and power infrastructure. As AI adoption expands beyond hyperscale environments into enterprise applications and edge deployments, operators are facing increasing pressure to scale networks capable of supporting high-volume data movement and compute-intensive workloads.

Fiber infrastructure also remained a key topic. Discussions throughout the event highlighted continued investment in metro fiber expansion, long-haul backbone routes and fiber-to-the-home networks. As cloud platforms, streaming services and AI applications generate greater data traffic, fiber continues to serve as the underlying foundation supporting the digital economy.

Several speakers addressed how infrastructure and investment strategies are evolving alongside these shifts. Marc Ganzi, Chief Executive Officer at DigitalBridge, discussed the continued influx of capital into digital infrastructure and the importance of disciplined investment as the sector scales. Steve Smith, Chief Executive Officer at Zayo Group, highlighted the role of fiber expansion in supporting enterprise connectivity and hyperscale demand. Alex Hernandez, CEO of PowerBridge, participated in discussions focused on the growing power demands associated with AI infrastructure, including how utilities, data center developers and investors are working to expand power capacity and modernize energy delivery to support large-scale computing environments.

From the investment perspective, Santhosh Rao, Managing Director, Head of Digital Infrastructure at MUFG, explored the evolving capital structures supporting infrastructure development, including structured financing and private credit solutions. Anton Moldan, Senior Managing Director at Macquarie Group, shared insights into how institutional investors continue to evaluate digital infrastructure assets as a long-term growth opportunity within global infrastructure portfolios.

Beyond the formal sessions, Metro Connect remains known for its highly productive networking environment. Thousands of meetings took place across the event’s exhibit floor, private meeting rooms and curated networking gatherings, reinforcing the conference’s reputation as a gathering point where partnerships are formed, investment opportunities are explored and transactions begin.

Looking ahead, the industry will reconvene next year as Metro Connect USA 2027 moves to a new venue. The event will take place February 8–10, 2027 at the Diplomat Beach Resort in Hollywood, Florida.

The post Metro Connect USA 2026 Highlights the Future of U.S. Digital Infrastructure appeared first on Data Center POST.

Duos Technologies Signs ~$200M LOI and Appoints Doug Recker as CEO

2 March 2026 at 19:00

Duos Technologies Group, Inc. (Nasdaq: DUOT), a leader in intelligent technologies and digital infrastructure, has signed a non-binding letter of intent (LOI) with Hydra Host to deploy a high-density NVIDIA GPU cluster for a leading global technology customer. The project supports a GPU-as-a-Service (GPUaaS) partnership expected to generate approximately $176 million in revenue over a 36-month term, with gross margins exceeding 80% and projected annual EBITDA of more than $40 million.

“We are thrilled to partner with the Duos team on this opportunity,” said Aaron Ginn, CEO and Co-Founder of Hydra Host. “Their ability to deliver immediate access to power combined with an industry-leading deployment speed makes them a standout in the market. We see significant runway ahead as we look to expand our collaboration around colocation and Duos’ High-Power EDC model, which we believe is purpose-built to address a market where demand for AI compute capacity is fundamentally outpacing the speed at which traditional data center supply can be delivered.”

Complementing this milestone, Duos has appointed Doug Recker as Chief Executive Officer, effective April 1, 2026, as the company accelerates its transformation into a focused Edge AI and digital infrastructure platform. Mr. Recker succeeds Chuck Ferry, who will continue to serve on the board of directors.

“This initial customer marks a pivotal step in accelerating the buildout of Duos Edge AI,” said Doug Recker, Chief Executive Officer. “We are now entering an exciting phase of execution, further reinforced by our recently announced LOI with Hydra Host, which underscores growing third-party demand for our distributed AI infrastructure model and validates the scalability of our platform. With secured power, rapid deployment capabilities, and expanding strategic partnerships, we believe Duos is well positioned to pursue high-value infrastructure opportunities. Our focus remains on disciplined expansion, capital-efficient growth, and delivering sustainable long-term value for our shareholders.”

Beyond GPUaaS revenue, the collaboration creates a pathway for approximately $25 million in incremental colocation revenue over the same term, validating Duos’ High-Power Edge Data Center (EDC) business line. The company has also signed a non-binding LOI for a ground lease in Iowa with access to up to 10MW of utility power, advancing its long-term goal of building up to 75MW of distributed capacity.

To learn more about Duos Technologies Group, Inc., visit www.duostechnologies.com.

The post Duos Technologies Signs ~$200M LOI and Appoints Doug Recker as CEO appeared first on Data Center POST.

Company Profile: Clearfield on Simplifying Fiber for AI-Ready Networks

11 February 2026 at 17:30

Data Center POST had the opportunity to connect with Clearfield’s Chief Commercial Officer, Anis Khemakhem, who is deeply passionate about technology, particularly advancing fiber optics and telecommunications solutions. Throughout his career, he has consistently focused on leveraging cutting-edge technology to improve connectivity and expand digital access across sectors. His executive experience, including leadership positions at Clearfield, Amphenol and Carlisle Interconnect Technologies, reflects his ability to lead complex, multi-stakeholder projects.

The information below is summarized to provide our readers a deeper dive into who Clearfield is, what they do and the problems they are solving in the industry.

What does Clearfield do?  

Clearfield designs and manufactures fiber connectivity solutions that simplify how operators build and scale modern networks. We focus on critical connection points across broadband, data center, edge, and wireless environments.

Since our inception, we’ve helped community broadband providers close the digital divide. Today, we also apply that modular, craft-friendly approach to wireless networks as well as data centers and distributed edge facilities that support AI-driven workloads. Our goal is to help operators deploy high-performance fiber faster, with less complexity and lower long-term operational costs.

What problems does Clearfield solve in the market?

Network operators are facing rising fiber density, limited space and labor constraints – not to mention pressure to scale quickly without disrupting live infrastructure. Clearfield addresses these challenges by simplifying fiber deployment and ongoing management.

Our solutions reduce installation time, streamline maintenance, and enable incremental growth. Whether supporting broadband expansion or high-density data center environments, we help customers reduce operational friction and future-proof their networks as data volumes and performance demands accelerate.

What are Clearfield’s core products or services?

Our core offerings include fiber management, protection, and delivery solutions, such as patch panels, cassettes, passive and edge cabinets, racks, enclosures, and fiber assemblies. A key recent introduction is our NOVA Platform, a modular, high-density fiber ecosystem designed for data centers, modern central offices, and edge environments.

The NOVA Platform features tool-less installation, front-of-rack access, and consistent documentation to simplify scaling. Across our portfolio, we focus on labor lite design and operational consistency to help customers deploy and manage fiber efficiently. NOVA is no exception.

What markets do you serve?

Clearfield serves community broadband providers, regional and national ISPs, incumbent telcos, utilities, municipalities, cooperatives, and enterprise networks. We also support hyperscale and colocation data centers, enterprise campuses, government and military networks, and distributed edge environments.

Increasingly, our solutions are used where fiber connects data centers to AI workloads and local compute resources at the edge. High-bandwidth, low-latency fiber is the only way society will be able to support data-intensive emerging technologies — from autonomous vehicles to precision agriculture. In rural broadband builds and high-density data halls alike, we serve operators that need scalable, reliable fiber infrastructure across diverse environments.

What challenges does the global digital infrastructure industry face today?

The industry is navigating explosive data growth driven by AI, cloud computing, and increasingly distributed architectures. Networks are extending beyond centralized data centers toward edge environments closer to users and applications. As a result, fiber counts, space, and power requirements are growing while skilled labor remains limited.

Operators must scale capacity quickly without sacrificing reliability or affordability. The challenge is not only bandwidth, but also density, manageability, and the ability to evolve without constant redesign.

How is Clearfield adapting to these challenges?

Clearfield is addressing these challenges by designing platforms that reduce complexity at every stage of deployment. The NOVA Platform exemplifies this approach, offering high-density, modular solutions with tool-less installation and all work performed at the front of the rack.

Across our portfolio, we emphasize consistent installation methods, clean documentation, and incremental scalability. This reduces training requirements, limits downtime, and allows operators to grow capacity without disrupting active networks — whether in a rural head end or a data center supporting AI workloads.

What are Clearfield’s key differentiators?

Our primary differentiator is how intentionally we design for the realities of the field. Clearfield solutions are modular, craft-friendly, and built to minimize labor and operational complexity.

Rather than isolated products, we deliver platform-based ecosystems that scale consistently across environments. This helps customers simplify inventory, standardize training, and deploy fiber with confidence. Our roots in community broadband give us a unique perspective that translates well to today’s data center and edge applications, where efficiency and scalability are critical.

What can we expect to see/hear from Clearfield in the future?  

You can expect Clearfield to continue expanding its footprint in data centers and edge computing while remaining committed to community broadband. We’ll introduce additional high-density, modular solutions that support AI-driven architectures and growing fiber demands. But our focus will remain on platforms that bridge environments.

We want to empower operators to apply a consistent, efficient approach as networks become more distributed. Ultimately, we aim to help customers scale faster, manage complexity more easily, and build infrastructure that supports both current and future workloads.

What upcoming industry events will you be attending? 

Clearfield launched the NOVA Platform at BICSI Winter 2026, where attendees were able to see live demonstrations of our high-density patch panels and cassettes and explore the broader ecosystem. That won’t be the last chance to see NOVA. We will participate in many major industry events this year, engaging with network operators, designers, and partners to share best practices and demonstrate how our solutions simplify fiber deployment.

Do you have any recent news you would like us to highlight?

Clearfield recently launched the NOVA Platform, a modular, high-density fiber ecosystem designed for data centers, enterprise networks, and edge environments. NOVA delivers tool-less installation, higher port density, and improved documentation. This innovative solution suite addresses the growing demands of AI-driven and 100G-plus networks. The platform includes patch panels, cassettes, cabinets, racks, and fiber assemblies that scale consistently across environments and are already generating strong interest across multiple markets.

Is there anything else you would like our readers to know about Clearfield and capabilities?

Clearfield sits at the intersection of broadband and data center infrastructure at a time when AI is reshaping network design. Fiber is the common foundation, but operational simplicity is becoming just as important as speed. Our experience helping operators deploy efficient, scalable networks translates directly to today’s high-density and edge environments. Whether connecting communities or powering AI workloads closer to users, Clearfield delivers fiber infrastructure designed to scale cleanly and perform reliably.

Where can our readers learn more about Clearfield?  

Visit us online at www.seeclearfield.com and follow us on social media.

How can our readers contact Clearfield? 

The contact page on our website has multiple ways to get in touch with our team to learn more about the NOVA Platform and our other solutions.

# # #

About Data Center POST

Data Center POST provides a comprehensive view of the digital infrastructure landscape, delivering industry insights into the global data center ecosystem. As the industry’s only peer-contributed online publication, we offer relevant information from developers, managers, providers, investors, and trendsetters worldwide.

Data Center POST works hard to deliver the most current information and thought-provoking ideas relevant to the success of the data center industry. Stay informed, visit www.datacenterpost.com.

If you are interested in contributing to Data Center POST, contact us at contributions@datacenterpost.com or submit your article here.

Want more digital infrastructure news? Stay in the know and subscribe to Data Center POST today!

The post Company Profile: Clearfield on Simplifying Fiber for AI-Ready Networks appeared first on Data Center POST.
