Reading view

AI and cooling: toward more automation

AI is increasingly steering the data center industry toward new operational practices, where automation, analytics and adaptive control are paving the way for “dark” — or lights-out, unstaffed — facilities. Cooling systems, in particular, are leading this shift. Yet despite AI’s positive track record in facility operations, one persistent challenge remains: trust.

In some ways, AI faces a similar challenge to that of commercial aviation several decades ago. Even after airlines had significantly improved reliability and safety performance, making air travel not only faster but also safer than other forms of transportation, it still took time for public perceptions to shift.

That same tension between capability and confidence lies at the heart of the next evolution in data center cooling controls. As AI models, of which there are several types, improve in performance and become better understood, more transparent and explainable, the question is no longer whether AI can manage operations autonomously, but whether the industry is ready to trust it enough to turn off the lights.

AI’s place in cooling controls

Thermal management systems, such as computer room air handlers (CRAHs), computer room air conditioners (CRACs) and airflow management equipment, represent the front line of AI deployment in cooling optimization. Their modular nature enables the incremental adoption of AI controls, providing immediate visibility and measurable efficiency gains in day-to-day operations.

AI can now be applied across four core cooling functions:

  • Dynamic setpoint management. Continuously recalibrates temperature, humidity and fan speeds to match load conditions (a minimal control sketch follows this list).
  • Thermal load forecasting. Predicts shifts in demand and makes adjustments in advance to prevent overcooling or instability.
  • Airflow distribution and containment. Uses machine learning to balance hot and cold aisles and stage CRAH/CRAC operations efficiently.
  • Fault detection, predictive and prescriptive diagnostics. Identifies coil fouling, fan oscillation, or valve hunting before they degrade performance.
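
As a rough illustration of the first of these functions, the sketch below nudges a CRAH's supply-air setpoint and fan speed toward a target rack-inlet temperature while staying inside an SLA-safe envelope. It is a minimal Python sketch: the variable names, gains and limits are assumptions for illustration, not any vendor's actual control logic.

```python
from dataclasses import dataclass

@dataclass
class CoolingLimits:
    supply_air_min_c: float = 18.0   # lowest allowed supply-air setpoint (assumed)
    supply_air_max_c: float = 27.0   # highest allowed before SLA risk (assumed)
    fan_speed_min_pct: float = 30.0
    fan_speed_max_pct: float = 100.0

def clamp(value: float, low: float, high: float) -> float:
    return max(low, min(high, value))

def recalibrate_setpoints(rack_inlet_c, target_inlet_c, supply_air_c,
                          fan_speed_pct, limits: CoolingLimits):
    """Nudge supply-air temperature and fan speed toward the target rack-inlet
    temperature without ever leaving the assumed SLA-safe envelope."""
    error = rack_inlet_c - target_inlet_c            # positive = racks running warm
    new_supply = clamp(supply_air_c - 0.5 * error,   # cool harder when warm
                       limits.supply_air_min_c, limits.supply_air_max_c)
    new_fan = clamp(fan_speed_pct + 5.0 * error,     # push more air when warm
                    limits.fan_speed_min_pct, limits.fan_speed_max_pct)
    return new_supply, new_fan

# Example: inlets 1.5 degC above target -> lower supply air, raise fan speed
print(recalibrate_setpoints(25.5, 24.0, 20.0, 60.0, CoolingLimits()))  # (19.25, 67.5)
```

In practice the adjustment step would be driven by a trained model rather than fixed gains, but the clamping against hard limits is what keeps a control-oriented system inside physically safe bounds.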

A growing ecosystem of vendors is advancing AI-driven cooling optimization across both air- and water-side applications. Companies such as Vigilent, Siemens, Schneider Electric, Phaidra and Etalytics offer machine learning platforms that integrate with existing building management systems (BMS) or data center infrastructure management (DCIM) systems to enhance thermal management and efficiency.

Siemens’ White Space Cooling Optimization (WSCO) platform applies AI to match CRAH operation with IT load and thermal conditions, while Schneider Electric, through its Motivair acquisition, has expanded into liquid cooling and AI-ready thermal systems for high-density environments. In parallel, hyperscale operators, such as Google and Microsoft, have built proprietary AI engines to fine-tune chiller and CRAH performance in real time. These solutions range from supervisory logic to adaptive, closed-loop control. However, all share a common aim: improve efficiency without compromising compliance with service level agreements (SLAs) or operator oversight.

The scope of AI adoption

While IT cooling optimization has become the most visible frontier, conversations with AI control vendors reveal that most mature deployments still begin at the facility water loop rather than in the computer room. Vendors often start with the mechanical plant and facility water system because these areas present fewer variables, such as temperature differentials, flow rates and pressure setpoints, and can be treated as closed, well-bounded systems.

This makes the water loop a safer proving ground for training and validating algorithms before extending them to computer room air cooling systems, where thermal dynamics are more complex and influenced by containment design, workload variability and external conditions.

Predictive versus prescriptive: the maturity divide

AI in cooling is evolving along a maturity spectrum — from predictive insight to prescriptive guidance and, increasingly, to autonomous control. Table 1 summarizes the functional and operational distinctions among these three stages of AI maturity in data center cooling.

Table 1 Predictive, prescriptive, and autonomous AI in data center cooling

Most deployments today stop at the predictive stage, where AI enhances situational awareness but leaves action to the operator. Achieving full prescriptive control will require not only deeper technical sophistication but also a shift in mindset.

Technically, it is more difficult to engineer because the system must not only forecast outcomes but also choose and execute safe corrective actions within operational limits. Operationally, it is harder to trust because it challenges long-held norms about accountability and human oversight.

The divide, therefore, is not only technical but also cultural. The shift from informed supervision to algorithmic control is redefining the boundary between automation and authority.
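
To make the technical half of that divide concrete, the sketch below separates the predictive step (forecast an SLA excursion) from the prescriptive step (choose a corrective setpoint bounded by operational limits), and leaves execution behind an explicit autonomy flag. The forecasting function, thresholds and limits are hypothetical placeholders, not a description of any deployed system.

```python
AUTONOMOUS = False                 # predictive/prescriptive sites keep a human in the loop
SLA_MAX_INLET_C = 27.0             # assumed SLA ceiling for rack-inlet temperature
SAFE_SUPPLY_RANGE_C = (18.0, 24.0) # assumed operational limits for supply air

def forecast_inlet_temp(horizon_min: int) -> float:
    """Stand-in for a trained thermal-load forecasting model (predictive stage)."""
    return 27.8   # pretend the model expects an excursion within `horizon_min` minutes

def choose_action(predicted_c: float, current_supply_c: float):
    """Prescriptive stage: pick a safe corrective setpoint, or None if no action is needed."""
    if predicted_c <= SLA_MAX_INLET_C:
        return None
    proposed = current_supply_c - (predicted_c - SLA_MAX_INLET_C)
    low, high = SAFE_SUPPLY_RANGE_C
    return min(max(proposed, low), high)

prediction = forecast_inlet_temp(horizon_min=15)          # insight only
action = choose_action(prediction, current_supply_c=22.0) # guidance within limits

if action is None:
    print("No excursion forecast; nothing to do.")
elif AUTONOMOUS:
    print(f"Autonomous stage would apply {action:.1f} degC itself.")  # closed-loop control
else:
    print(f"Recommend lowering supply air to {action:.1f} degC (operator approves).")
```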

AI’s value and its risks

No matter how advanced the technology becomes, cooling exists for one reason: maintaining environmental stability and meeting SLAs. AI-enhanced monitoring and control systems support operating staff by:

  • Predicting and preventing temperature excursions before they affect uptime.
  • Detecting system degradation early and enabling timely corrective action.
  • Optimizing energy performance under varying load profiles without violating SLA thresholds.

Yet efficiency gains mean little without confidence in system reliability. It is also important to clarify that AI in data center cooling is not a single technology. Control-oriented machine learning models, such as those used to optimize CRAHs, CRACs and chiller plants, operate within physical limits and rely on deterministic sensor data. These differ fundamentally from language-based AI models such as GPT, where “hallucinations” refer to fabricated or contextually inaccurate responses.

At the Uptime Network Americas Fall Conference 2025, several operators raised concerns about AI hallucinations — instances where optimization models generate inaccurate or confusing recommendations from event logs. In control systems, such errors often arise from model drift, sensor faults, or incomplete training data, not from the reasoning failures seen in language-based AI. When a model’s understanding of system behavior falls out of sync with reality, it can misinterpret anomalies as trends, eroding operator confidence faster than it delivers efficiency gains.
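
One common mitigation is a drift check that compares the model's predictions against measured sensor values over a rolling window and flags sustained divergence before the model's recommendations are acted on. The sketch below is a minimal version of that idea; the window length and error tolerance are assumptions, not values from any vendor's product.

```python
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 60, max_mean_error_c: float = 1.0):
        self.errors = deque(maxlen=window)        # rolling prediction errors
        self.max_mean_error_c = max_mean_error_c  # assumed tolerance before flagging drift

    def update(self, predicted_c: float, measured_c: float) -> bool:
        """Record one sample; return True once sustained drift is detected."""
        self.errors.append(abs(predicted_c - measured_c))
        window_full = len(self.errors) == self.errors.maxlen
        return window_full and sum(self.errors) / len(self.errors) > self.max_mean_error_c

monitor = DriftMonitor(window=60, max_mean_error_c=1.0)
for predicted, measured in [(24.0, 25.4)] * 60:   # model consistently reads 1.4 degC low
    drifting = monitor.update(predicted, measured)
print("Drift detected:", drifting)  # True -> recalibrate before trusting recommendations
```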

The discomfort is not purely technical; it is also human. Many data center operators remain uneasy about letting AI take the controls entirely, even as they acknowledge its potential. In AI’s ascent toward autonomy, trust remains the runway still under construction.

Critically, modern AI control frameworks are being designed with built-in safety, transparency and human oversight. For example, Vigilent, a provider of AI-based optimization controls for data center cooling, reports that its optimizing control switches to “guard mode” whenever it is unable to maintain the data center environment within tolerances. Guard mode brings on additional cooling capacity (at the expense of power consumption) to restore SLA-compliant conditions; typical triggers include rapid temperature drift or hot spots. A manual override option also enables the operator to take control at any time, supported by monitoring and event logs.

This layered logic provides operational resiliency by enabling systems to fail safely: guard mode ensures stability, manual override guarantees operator authority, and explainability, via decision-tree logic, keeps every AI action transparent. Even in dark-mode operation, alarms and reasoning remain accessible to operators.
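
A minimal sketch of that layering, with hypothetical mode names, triggers and thresholds (it is not Vigilent's implementation), might look as follows: each control decision falls through operator override, then guard conditions, then normal optimization, and the reason is logged so operators retain visibility even when running dark.

```python
from enum import Enum

class Mode(Enum):
    MANUAL = "manual"       # operator override always wins
    GUARD = "guard"         # bring on extra cooling to restore SLA conditions
    OPTIMIZE = "optimize"   # AI-driven efficiency control

def select_mode(inlet_c: float, sla_max_c: float, drift_c_per_min: float,
                operator_override: bool, log: list) -> Mode:
    """Pick the active control mode and record the reasoning behind it."""
    if operator_override:
        log.append("MANUAL: operator has taken control")
        return Mode.MANUAL
    if inlet_c > sla_max_c or drift_c_per_min > 0.5:   # assumed guard-mode triggers
        log.append(f"GUARD: inlet {inlet_c:.1f}C, drift {drift_c_per_min:.2f}C/min")
        return Mode.GUARD
    log.append("OPTIMIZE: environment within tolerances")
    return Mode.OPTIMIZE

events = []
print(select_mode(27.8, 27.0, 0.1, operator_override=False, log=events))  # Mode.GUARD
print(events[-1])   # the logged reasoning remains visible to remote operators
```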

These frameworks directly address one of the primary fears among data center operators: losing visibility into what the system is doing.

Outlook

Gradually, the concept of a dark data center, one operated remotely with minimal on-site staff, has shifted from being an interesting theory to a desirable strategy. In recent years, many infrastructure operators have increased their use of automation and remote-management tools to enhance resiliency and operational flexibility, while also compensating for low staffing levels. Cooling systems, particularly those governed by AI-assisted control, are now central to this operational transformation.

Operational autonomy does not mean abandoning human control; it means achieving reliable operation without the need for constant supervision. Ultimately, a dark data center is not about turning off the lights; it is about turning on trust.


The Uptime Intelligence View

AI in thermal management has evolved from an experimental concept into an essential tool, improving efficiency and reliability across data centers. The next step — coordinating facility water, air and IT cooling liquid systems — will define the evolution toward greater operational autonomy. However, the transition to “dark” operation will be as much cultural as it is technical. As explainability, fail-safe modes and manual overrides build operator confidence, AI will gradually shift from copilot to autopilot. The technology is advancing rapidly; the question is how quickly operators will adopt it.

The post AI and cooling: toward more automation appeared first on Uptime Institute Blog.

  •  

DPI and PODTECH Partner to Scale AI Infrastructure Commissioning Across Europe, Asia, and the Middle East

Datalec Precision Installations (DPI) and PODTECH have announced a global technology partnership focused on delivering pre-staged, deployment-ready AI infrastructure solutions as hyperscaler demand drives data center vacancy rates to historic lows. With vacancy rates tightening to 6.5% in Europe and 5.9% in the U.K., the partnership addresses a critical bottleneck in AI data center commissioning, where deployment timelines and technical complexity have become major constraints for enterprises and cloud platforms scaling GPU-intensive workloads.

The AI Infrastructure Commissioning Challenge

As hyperscalers deploy more than $600 billion in AI data center infrastructure this year, representing 75% of total capital expenditure, the focus has shifted from simply securing capacity to ensuring infrastructure is fully validated and production-ready at deployment. AI workloads demand far more than traditional data center services. NVIDIA-based AI racks require specialized expertise in NVLink fabric configuration, GPU testing, compute node initialization, dead-on-arrival (DOA) testing, site and factory acceptance testing (SAT/FAT), and network validation. These technical requirements, combined with increasingly tight deployment windows, have created demand for integrated commissioning providers capable of delivering turnkey solutions.

Integrated Capabilities Across the AI Lifecycle

The DPI-PODTECH partnership brings together complementary capabilities across the full AI infrastructure stack. DPI contributes expertise in infrastructure connectivity and mechanical systems. PODTECH adds software development, commissioning protocols, and systems integration delivered through more than 60 technical specialists across the U.K., Asia, and the Middle East. Together, the companies offer end-to-end services from pre-deployment validation through network bootstrapping, ensuring AI environments are fully operational before customer handoff.

The partnership builds on successful NVIDIA AI rack deployments for international hyperscaler programs, where both companies demonstrated the ability to manage complex, multi-site rollouts. By formalizing their collaboration, DPI and PODTECH are positioning to scale these capabilities across regions where data center capacity is most constrained and AI infrastructure demand is accelerating fastest.

Strategic Focus on High-Growth Markets

The partnership specifically targets Europe, Asia, and the Middle East, markets experiencing acute capacity constraints and surging AI investment. PODTECH’s existing presence across these regions gives the partnership immediate on-the-ground capacity to support hyperscaler and enterprise deployments. The company’s ISO 27001, ISO 9001, and ISO 20000-1 certifications provide the compliance foundation required for clients in regulated industries and public sector engagements.

Industry Perspective

“As organizations accelerate their AI adoption, the reliability and performance of the underlying infrastructure have never been more critical,” said James Bangs, technology and services director at DPI. “Building on our partnership with PODTECH, we have already delivered multiple successful deployments together, and this formal collaboration enables us to scale our capabilities globally.”

Harry Pod, founder at PODTECH, emphasized the operational benefits of the integrated model: “Following our successful collaborations with Datalec on major NVIDIA AI rack deployments, we are very proud to officially combine our capabilities. By working as one integrated delivery team, we can provide clients with packaged, pre-staged, and deployment-ready AI infrastructure solutions grounded in quality, precision, and engineering excellence.”

Looking Ahead

For enterprises and hyperscalers navigating AI infrastructure decisions in 2026, the partnership signals a shift toward specialized commissioning providers capable of managing the entire deployment lifecycle. With hyperscaler capital expenditure forecast to remain elevated through 2027 and vacancy rates showing no signs of easing, demand for integrated commissioning services is likely to intensify across DPI and PODTECH’s target markets.

Organizations evaluating AI infrastructure commissioning strategies can learn more at datalecltd.com.

The post DPI and PODTECH Partner to Scale AI Infrastructure Commissioning Across Europe, Asia, and the Middle East appeared first on Data Center POST.

  •  

Why Data Sovereignty Is Becoming a Strategic Imperative for AI Infrastructure

As artificial intelligence reshapes how organizations generate value from data, a quieter shift is happening beneath the surface. The question is no longer just how data is protected, but where it is processed, who governs it, and how infrastructure decisions intersect with national regulation and digital policy.

Datalec Precision Installations (DPI) is seeing this shift play out across global markets as enterprises and public sector organizations reassess how their data center strategies support both AI performance and regulatory alignment. What was once treated primarily as a compliance issue is increasingly viewed as a foundational design consideration.

Sovereignty moves upstream

Data sovereignty has traditionally been addressed after systems were deployed, often resulting in fragmented architectures or operational workarounds. That approach is becoming less viable as regulations tighten and AI workloads demand closer proximity to sensitive data.

Organizations are now factoring sovereignty into infrastructure planning from the start, ensuring data remains within national borders and is governed by local legal frameworks. For many, this shift reduces regulatory risk while creating clearer operational boundaries for advanced workloads.

AI raises the complexity

AI intensifies data governance challenges by extending them beyond storage into compute and model execution. Training and inference processes frequently involve regulated or sensitive datasets, increasing exposure when data or workloads cross borders.

This has driven growing interest in sovereign AI environments, where data, compute, and models remain within a defined jurisdiction. Beyond compliance, these environments offer greater control over digital capabilities and reduced dependence on external platforms.

Balancing performance and governance 

Supporting sovereign AI requires infrastructure that can deliver high-density compute and low-latency performance without compromising physical security or regulatory alignment. DPI addresses this by delivering AI-ready data center environments designed to support GPU-intensive workloads while meeting regional compliance requirements.

The objective is to enable organizations to deploy advanced AI systems locally without sacrificing scalability or operational efficiency.

Regional execution at global scale

Demand for localized, compliant infrastructure is growing across regions where digital policy and economic strategy intersect. DPI’s expansion across the Middle East, APAC, and other international markets reflects this trend, combining regional delivery with standardized operational practices across 21 global entities.

According to Michael Aldridge, DPI’s Group Information Security Officer, organizations increasingly view localized infrastructure as a way to future-proof their digital strategies rather than constrain them.

Compliance as differentiation

As AI adoption accelerates, infrastructure and governance decisions are becoming inseparable. Organizations that can control where data lives and how AI systems operate are better positioned to manage risk, meet regulatory expectations, and move faster in regulated markets.

DPI’s approach reflects a broader industry shift: compliance is no longer just about meeting requirements, but about enabling innovation in an AI-driven environment.

To read DPI’s full perspective on data sovereignty and AI readiness, visit the company’s website.

The post Why Data Sovereignty Is Becoming a Strategic Imperative for AI Infrastructure appeared first on Data Center POST.

  •  

2025 in Review: Sabey’s Biggest Milestones and What They Mean

Originally posted on Sabey Data Centers.

At Sabey Data Centers, progress is more than a series of headlines. It’s a blueprint for what’s possible when infrastructure, efficiency and stewardship go hand in hand. From our award-winning sustainability initiatives to bold new campus designs and record-setting expansions, each milestone this year has demonstrated our commitment to powering tomorrow’s workloads with conscience and confidence.

As 2026 is already well underway and promising to be a banner year, we wanted to pause and reflect on the path we forged in 2025.

Capacity expansion: built for growth 

In 2025, we announced strategic power expansions across our Pacific Northwest, Ashburn, Austin and Quincy locations. In Seattle and Columbia, 30MW of new power is now anticipated to come online by 2027, enabling tenants to scale quickly while leveraging ultra-efficient energy (carbon-free in Seattle) and a regional average PUE as low as 1.2.

On the East Coast, Ashburn’s third and final building broke ground, set to introduce 54MW of additional capacity with the first tranches set to come online in 2026. This will be the first three-story facility in Sabey’s portfolio, purpose-built for air-cooled, liquid-cooled and hybrid deployments, with rack densities over 100kW and an average PUE of 1.35. In both regions, Sabey’s expansion balances hyperscale demand with customization, modular scale and resilient connectivity.

The launch of construction for Austin Building B was another major milestone, expanding our presence in the dynamic Round Rock tech corridor. This three-story, liquid-cooling-ready facility is designed to deliver 54 megawatts of total power capacity. Building B continues our commitment to scalable, energy-efficient digital infrastructure, tailored for enterprise and hyperscale workloads.

To continue reading, please click here.

The post 2025 in Review: Sabey’s Biggest Milestones and What They Mean appeared first on Data Center POST.

  •  

Hyundai Motor & Kia Unveil ‘Vision Pulse,’ Driver Safety Technology that Detects Beyond Obstacles

  • Vision Pulse leverages ultra-wide band (UWB) signals to detect precise object positions in real time and issue timely alerts, increasing driving safety
  • UWB-based detection enables real-time accuracy of up to 10 centimeters within a 100-meter radius, even in visually obstructed environments
  • Enables development of advanced driver assistance functions that maintain high ... [continued]

The post Hyundai Motor & Kia Unveil ‘Vision Pulse,’ Driver Safety Technology that Detects Beyond Obstacles appeared first on CleanTechnica.

  •  

Tesla, SpaceX, & xAI Merging?

Well, things might be getting wild. When SolarCity was facing financial challenges, Tesla swallowed it up. Elon Musk was the Chairman of the Board at SolarCity, and his cousins were the cofounders, CEO, and CTO. The synergies were supposed to help both, but Tesla’s solar business has declined a great ... [continued]

The post Tesla, SpaceX, & xAI Merging? appeared first on CleanTechnica.

  •  

Forget Sensors, Tesla’s AI Training Costs Are Soaring

One argument Elon Musk and Tesla fans have made for ages — for about a decade — is that the extra costs of sensors like lidar and radar for self-driving vehicles are not worth it, are illogical, and will be the death of a company like Waymo. Just use cameras ... [continued]

The post Forget Sensors, Tesla’s AI Training Costs Are Soaring appeared first on CleanTechnica.

  •  

Here Comes Concentrating Solar Power To Decarbonize Industrial Heat

The US concentrating solar power startup GlassPoint has its sights set on ripe solar markets in the US Southwest as well as southern Europe, the Middle East, and South America. 

The post Here Comes Concentrating Solar Power To Decarbonize Industrial Heat appeared first on CleanTechnica.

  •  

Traton to build more trucks with PlusAI’s autonomous driving software




Autonomous trucking software provider PlusAI will expand its partnership with commercial vehicle manufacturer Traton Group in a bid to accelerate the development and scaled deployment of on-highway autonomous trucking solutions in the U.S. and Europe, the companies said this week.

As part of the deal, Munich, Germany-based Traton will commit up to $25 million in R&D funding to PlusAI to accelerate factory integration of its SuperDrive software into autonomous trucks of Traton’s brands, which include Scania, MAN, and International.

According to California-based PlusAI, the partnership comes as freight fleets across the U.S. and Europe are facing persistent driver shortages, rising operating costs, and increasing demand for safer, more reliable freight capacity. However, meeting that need with broad adoption of autonomous vehicles will depend on confidence in vehicle performance, rigorous safety validation, and a commercialization model led by established OEMs.

The expanded partnership will build on the collaboration first announced in 2024, when PlusAI’s virtual driver system, SuperDrive, was selected as the on-highway autonomous driving platform for Traton’s brands. Since then, the companies say they have reached technical and operational milestones toward delivering Level 4 autonomous trucking capabilities. Notably, International initiated autonomous fleet trials in Texas with an unspecified logistics and transportation operator.

“Autonomous trucking is a strategic pillar of Traton’s long-term technology roadmap,” said Niklas Klingenberg, Member of the Executive Board, responsible for Research & Development in the Traton Group. “Autonomy represents a meaningful opportunity to deliver higher uptime and greater value for our fleet customers while strengthening the long-term competitiveness of our brands. Our expanded partnership will reflect both this confidence and our shared goal of bringing factory-built on-road autonomous trucks to market at scale.”

  •  

Robots: Still making strides in logistics




Makers of humanoid robots are targeting logistics, specifically the warehouse, as they continue a steady march to integrate their human-looking machines into today’s increasingly automated workplaces. That’s because research shows that the labor-intensive warehouse is a promising market for the still-nascent technology, which mimics the human body and can perform a range of material handling and order fulfillment tasks.


U.K.-based research firm IDTechEx projects logistics and warehousing will be the second-largest adopter of humanoid robots over the next 10 years, following just behind the automotive industry (see Exhibit 1). Key benefits in the warehouse include bringing precision and consistency to repetitive tasks and improving speed while minimizing human error, the company said in an October market outlook report.

“Facing acute labor shortages and rising operational complexity, warehouses are turning to humanoids as a promising solution,” according to the report. “The benefits are multifaceted: Humanoid robots help lower labor costs, reduce operational disruptions, and offer unmatched flexibility, capable of adapting to varying tasks throughout the day.”

But the research also tells a deeper story: As of last year, humanoid robot deployment in warehouses remained below 5%, due to both technological and commercial roadblocks. Short operating time and long recharge cycles can create substantial downtime, for instance, while limited field testing and safety concerns have left many end-users cautious. A separate industry study, by U.K. researcher Interact Analysis, predicts humanoid robot growth will be relatively slow in the short term, reaching about 40,000 shipments globally by 2032.

“The humanoid robot market is currently experiencing substantial hype, fueled by a large addressable market and significant investment activity,” Rueben Scriven, research manager at Interact Analysis, wrote in the 2025 report. “However, despite the potential, our outlook remains cautious due to several key barriers that hinder widespread adoption, including high prices and the gap in the dexterity needed to match human productivity levels, both of which are likely to persist into the next decade. However, we maintain that there’s a significant potential in the mid- to long term.”

Challenges aside, the work to develop and deploy humanoids continues, with many companies hitting major milestones in 2025 and early 2026. Here’s a look at some of the most recent accomplishments.

DIGIT GETS BUSY

Humanoid robots resemble the human body—in general, they have a torso, head, and two arms and legs, but they can also replicate just portions of the body. Robotic arms can be considered humanoid, as can bots that feature an upper body on a wheeled base. The bipedal variety—those that can walk on two legs—are gaining momentum.

Agility Robotics announced late last year that its bipedal humanoid robot, called Digit, had moved more than 100,000 totes in a commercial environment—at a GXO Logistics facility in Flowery Branch, Georgia. Just a few weeks later, the company said it would deploy Digit robots in San Antonio, Texas, to handle fulfillment operations for e-commerce fulfillment platform Mercado Libre. The companies said they plan to explore additional uses for Digit across Mercado Libre’s warehouses in Latin America. They did not give a timeframe for the rollout.

Agility’s humanoid robots are also in use at facilities run by Amazon and German motion technology company Schaeffler.

Agility is a business unit of Humanoid Global Holdings, which includes robotic companies Cartwheel Robotics, RideScan Ltd., and Formic Technologies Inc. in its portfolio of businesses.

ALPHA BIPEDAL TAKES OFF

U.K.-based robotics and AI (artificial intelligence) developer Humanoid launched its first bipedal robot this past December, introducing HMND 01 Alpha Bipedal. The robot went from design to working prototype in just five months and was up and walking just 48 hours after final assembly—a feat that typically takes weeks or even months, according to the bot’s developers.

Alpha Bipedal stands five feet, 10 inches tall and can carry loads of 33 pounds in its arms. Still in testing, the bot is designed to tackle industrial, household, and service tasks.

“HMND 01 is designed to address real-world challenges across industrial and home environments,” Artem Sokolov, founder and CEO of Humanoid, said in a December statement announcing the launch. “With manufacturing sectors facing labor shortages of up to 27%, leaving significant gaps in production, and millions of people performing physically demanding or repetitive tasks, robots can provide meaningful support. In domestic environments, they have the potential to assist elderly people or those with physical limitations, helping with object handling, coordination, and daily activities. Every day, over 16 billion hours are spent on unpaid domestic and care work worldwide—work that, if valued economically, would exceed 40% of GDP in some countries. By taking on these responsibilities, humanoid robots can free humans to focus on higher-value and safer work, improving their productivity and quality of life.”

HMND 01 Alpha Bipedal follows the September launch of Humanoid’s wheeled Alpha platform, which has been tested commercially and helped extend the company’s reach from industrial and logistics tasks—including warehouse automation, picking, and palletizing—to domestic support applications.

AGILE ONE TAKES OFF

Robotic automation company Agile Robots launched its first humanoid robot, called Agile One, in November. The robot is designed to work in industrial settings, where company leaders say it can operate safely and efficiently alongside humans and other robotic solutions. The bot’s key tasks include material gathering and transport, pick-and-place operations, machine tending, tool use, and fine manipulation.

Agile One will be manufactured at the company’s facilities in Germany.

“At Agile Robots, we believe the next industrial revolution is Physical AI: intelligent, autonomous, and flexible robots that can perceive, understand, and act in the physical world,” Agile Robots’ CEO and founder, Dr. Zhaopeng Chen, said in a statement announcing the launch. “Agile One embodies this revolution.”

The new humanoid is part of the company’s wider portfolio of AI-driven robotic systems, which includes robotic hands and arms as well as autonomous mobile robots (AMRs) and automated guided vehicles (AGVs). All are driven by the company’s AI software platform, AgileCore, and are designed to work together.

“The real value for our industrial customers isn’t just a stand-alone intelligent humanoid, but an entire intelligent production system,” Chen said in the statement. “We see [Agile One] working seamlessly alongside our other robotic solutions, each part of the system, connected and learning from each other. This approach of applying Physical AI to whole production systems can give our customers a new level of holistic efficiency and quality.”

Full production of Agile One begins this year.

Safety first: Industry updates standards for humanoid robots


As two-legged and four-legged robots begin to find applications in supply chain operations, the sector is refining its safety standards to ensure that humanoid and collaborative robots can be deployed at scale, according to a December report from Interact Analysis.

The work is necessary because the unique mechanics associated with legged robotics introduce new challenges around stability, fall dynamics, and unpredictable motion, according to report author Clara Sipes, a market analyst at Interact Analysis. To be precise, unlike statically stable machines, dynamically stable machines such as humanoids collapse when power is cut, creating residual risk in the event of a fall.

In response, new standards such as the International Organization for Standardization’s ISO 26058-1 and ISO 25785-1 have been developed to address both statically and dynamically stable mobile robotics. In addition, ISO TR (Technical Report) R15.108 examines the challenges associated with bipedal, quadrupedal, and wheeled balancing mobile robots.

According to the Interact Analysis report, one of the most notable shifts is the removal of references to “collaborative modes.” In the most recent revisions, collaborative robots must be evaluated based on the application, not the robot alone, since each application carries its own risks, and the standard now encourages assessing the entire environment within which the robot operates.

Additional changes cover requirements for improved cyber resilience, the report said. European regulatory changes, particularly the Cyber Resilience Act (CRA), AI Act, and Machinery Regulation, are establishing a unified framework for safety, cybersecurity, and risk management. That will shape the future of industrial automation by addressing new vulnerabilities within products that are increasingly connected to a network.

In its report, Interact Analysis advised manufacturers and integrators in the robotic sector to prepare early for the upcoming standards revisions. With multiple regulations taking effect over the next few years, organizations that begin aligning now will avoid costly redesigns and rushed compliance efforts later, the report noted.

—Ben Ames, Senior News Editor

  •  

Report: AI Scale Pushing Enterprise Infrastructure toward Failure

NEW YORK, Jan. 29, 2026 — Cockroach Labs, the company behind the cloud-agnostic distributed SQL database CockroachDB, today announced findings from its second annual survey, “The State of AI Infrastructure 2026: Can Systems Withstand AI Scale?” The report reveals a growing concern that AI use is starting to overwhelm the traditional IT systems meant to support it. As […]

The post Report: AI Scale Pushing Enterprise Infrastructure toward Failure appeared first on Inside HPC & AI News | High-Performance Computing & Artificial Intelligence.

  •  

Microsoft Is More Dependent On OpenAI Than The Converse

Everyone is jumpy about how much capital expense Microsoft has on the books in 2025 and what it expects to spend on datacenters and their hardware in 2026.

Microsoft Is More Dependent On OpenAI Than The Converse was written by Timothy Prickett Morgan at The Next Platform.

  •  

Big Blue Poised To Peddle Lots Of On Premises GenAI

If you want to know the state of the art in GenAI model development, you watch what the Super 8 hyperscalers and cloud builders are doing and you also keep an eye on the major model builders outside of these companies – mainly, OpenAI, Anthropic, and xAI as well as a few players in China like DeepSeek.

Big Blue Poised To Peddle Lots Of On Premises GenAI was written by Timothy Prickett Morgan at The Next Platform.

  •  

Microsoft Takes On Other Clouds With “Braga” Maia 200 AI Compute Engines

Microsoft is not just the world’s biggest consumer of OpenAI models, but also still the largest partner providing compute, networking, and storage to OpenAI as it builds its latest GPT models.

Microsoft Takes On Other Clouds With “Braga” Maia 200 AI Compute Engines was written by Timothy Prickett Morgan at The Next Platform.

  •  

Nvidia’s $2 Billion Investment In CoreWeave Is A Drop In A $250 Billion Bucket

With the hyperscalers and the cloud builders all working on their own CPU and AI XPU designs, it is no wonder that Nvidia has been championing the neoclouds that can’t afford to try to be everything to everyone – this is the very definition of enterprise computing – and that, frankly, are having trouble coming up with the trillions of dollars to cover the 150 gigawatts to more than 200 gigawatts of datacenter capacity that is estimated to be on the books between 2025 and 2030 for AI workloads.

Nvidia’s $2 Billion Investment In CoreWeave Is A Drop In A $250 Billion Bucket was written by Timothy Prickett Morgan at The Next Platform.

  •