
Received today — 2 April 2026

Sion Power’s Licerion cells exceed 500 Wh/kg for defense and aerospace

1 April 2026 at 15:53

Sion Power is expanding its Licerion® lithium-metal battery program to supply cells and battery systems for US defense and aerospace. The cells are engineered to exceed 500 Wh/kg, up to 200 Wh/kg more than current advanced lithium-ion cells, even those enhanced with silicon anodes.

The platform covers both primary (single-discharge) and secondary (rechargeable) configurations. Target applications include long-endurance UAS, tactical and counter-UAS drones, missile and loitering munition platforms, autonomous maritime and ground vehicles, and space systems. Sion Power operates a 110,000 sq ft cell manufacturing facility in Tucson, Arizona, and says it can demonstrate cells and integrated battery systems today, with initial product shipments expected in late 2026.

Lithium-metal anodes store substantially more energy per kilogram than graphite because lithium metal is lighter and more electrochemically active. For weight-constrained platforms, closing the gap from 300-350 Wh/kg for advanced Li-ion to 500+ Wh/kg translates directly into longer endurance and expanded payload capacity. Sion Power’s expansion also responds to US policy momentum—NDAA provisions support domestic battery supply chains and highlight demand for American-manufactured advanced cells.
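As a rough sketch of why the numbers matter: for a fixed pack mass, usable energy, and hence endurance to a first approximation, scales with specific energy. The pack mass below is a hypothetical value, not from Sion Power; only the Wh/kg figures come from the article.

```python
# Illustrative only: endurance scales roughly with usable pack energy
# at a fixed pack mass (hypothetical pack mass, article's Wh/kg figures).

def pack_energy_kwh(pack_mass_kg: float, specific_energy_wh_per_kg: float) -> float:
    """Total pack energy in kWh for a given mass and specific energy."""
    return pack_mass_kg * specific_energy_wh_per_kg / 1000.0

pack_mass = 10.0  # kg, e.g. a small UAS battery (assumed for illustration)
li_ion = pack_energy_kwh(pack_mass, 325)    # mid-range advanced Li-ion
li_metal = pack_energy_kwh(pack_mass, 500)  # lithium-metal target

gain = li_metal / li_ion  # ~1.5x more energy at the same mass
print(f"Li-ion: {li_ion:.2f} kWh, Li-metal: {li_metal:.2f} kWh, gain: {gain:.2f}x")
```

Real endurance also depends on drag, propulsion efficiency, and the usable-energy fraction, so the ratio is an upper-bound sketch, not a flight-time prediction.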

“Our lithium-metal technology provides the step-change in energy density required to support longer-range missions, increased flight duration and higher payload capability while maintaining a U.S.-based manufacturing capability aligned with national security priorities,” said Pamela Fletcher, CEO of Sion Power.

“By combining high-energy lithium-metal chemistry with advanced battery pack engineering, Sion Power enables defense integrators to unlock two to three times increases in mission endurance, significantly extended operational range and dramatically higher payload capacity compared with conventional lithium-ion and lithium-polymer batteries used in today’s unmanned systems,” said Tracy Kelley, chief science officer at Sion Power.

Source: Sion Power

Vishay’s new automotive MOSFET driver delivers 8 mm creepage in compact SMD-4 package

1 April 2026 at 15:50

Vishay Intertechnology has launched the VODA1275, an automotive-grade photovoltaic MOSFET driver that delivers an 8 mm creepage distance and a CTI 600 mold compound in a compact SMD-4 package. The device targets high-voltage automotive applications including pre-charge circuits, wall chargers, and battery management systems for EVs and HEVs.

The VODA1275 delivers 20 V open circuit voltage, 20 μA short circuit current, and 80 μs turn-on time—three times faster than competing devices, according to Vishay. The driver provides reinforced isolation with a working isolation voltage of 1260 Vpeak and isolation test voltage of 5300 VRMS, making it suitable for 800 V+ battery systems. The device is AEC-Q102 qualified and meets automotive reliability standards.

The high open circuit voltage allows designers to use a single MOSFET driver instead of two drivers in series, which was previously required for higher voltage applications. This simplifies circuit design and reduces component count in systems that need to drive MOSFETs and IGBTs reliably at high voltages. The driver can also enable custom solid-state relays to replace electromechanical relays in next-generation vehicles.
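The series-stacking point can be sketched numerically: photovoltaic driver outputs stack in series, so the driver count depends on the required gate-drive voltage versus each driver's open-circuit voltage. The gate-drive requirement and margin below are illustrative assumptions, not Vishay figures.

```python
# Hedged sketch: why a higher open-circuit voltage can replace two
# stacked photovoltaic drivers. Numbers are illustrative assumptions.
import math

def drivers_needed(required_gate_v: float, driver_voc: float,
                   margin_v: float = 2.0) -> int:
    """Photovoltaic driver outputs stack in series; return how many
    are needed to cover the gate-drive voltage plus design margin."""
    return math.ceil((required_gate_v + margin_v) / driver_voc)

# A standard-level power MOSFET may need roughly 10 V of gate drive.
print(drivers_needed(10.0, 20.0))  # a 20 V device covers it alone -> 1
print(drivers_needed(10.0, 7.0))   # a lower-Voc device would need 2
```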

The optically isolated device draws power from an infrared emitter on the low voltage side, eliminating the need for an external power supply on the isolated side. “The VODA1275 features the industry’s fastest turn-on times and the highest open circuit voltage and short circuit current in its class,” the company stated. The driver is RoHS-compliant and halogen-free. Samples and production quantities are available now with eight-week lead times, priced at $1.20 per piece for US delivery.

Source: Vishay Intertechnology

Mercedes-Benz Trucks opens orders for its eArocs 400 electric construction truck

1 April 2026 at 15:24

Mercedes-Benz Trucks will begin sales of its new battery-electric eArocs 400 in April, expanding its electric portfolio to include the construction segment.

Customers in an initial 13 EU markets can now order the eArocs 400, which made its debut at last year’s bauma trade fair in Munich. Beginning in the third quarter of 2026, the base vehicle will be produced at the Mercedes-Benz plant in Wörth am Rhein, followed by integration of the electric drivetrain by Paul Group, headquartered in Vilshofen an der Donau.

The eArocs 400 is equipped with two LFP battery packs, each offering 207 kWh of capacity, housed in a battery tower behind the cab. It’s designed specifically for urban and near-road construction work, and in many use cases, it can complete a full work day without intermediate charging.

The eArocs 400 is initially offered in two versions, with technically permissible gross vehicle weights of 37 and 44 tonnes. It is available in an 8×4/4 axle configuration and four wheelbase options, and is suitable for applications such as dump bodies and concrete mixer bodies.

Key components from the second-generation Mercedes-Benz eActros portfolio have been incorporated into the eArocs 400.

The eArocs 400 features an 800-volt onboard electrical architecture, as well as an integrated 3-speed transmission, providing a continuous output of 380 kW and a peak output of 450 kW. The truck supports charging at up to 400 kW via the standard CCS2 charging interface, available on both sides of the vehicle.
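As a back-of-envelope illustration (not a Mercedes-Benz figure): with 2 × 207 kWh installed and a constant 400 kW, ignoring the taper that real charge curves show, the charge times work out as follows.

```python
# Idealized charge-time estimate for the eArocs 400's two 207 kWh packs.
# Real charging tapers with state of charge; constant power is assumed
# here purely for illustration.

PACK_KWH = 2 * 207     # 414 kWh total installed capacity
MAX_CHARGE_KW = 400    # peak CCS2 charging power

def charge_minutes(soc_from: float, soc_to: float) -> float:
    """Idealized minutes to charge between two states of charge."""
    energy_kwh = PACK_KWH * (soc_to - soc_from)
    return energy_kwh / MAX_CHARGE_KW * 60

print(f"20-80% charge: ~{charge_minutes(0.20, 0.80):.0f} min (idealized)")
```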

“The new battery-electric eArocs 400 combines the robustness required with an efficient electric drive system, covering key use cases in near-road construction,” said Stina Fagerman, Head of Marketing, Sales and Services at Mercedes-Benz Trucks.

Source: Mercedes-Benz Trucks

Bosch Rexroth introduces TS 7plus conveyor for payloads up to 3,000 kg

1 April 2026 at 15:00

Bosch Rexroth has introduced the TS 7plus, a fully electric roller conveyor designed for heavy-payload manufacturing lines. The company says it’s the world’s first freely configurable, fully electric transfer system for loads up to 3,000 kg, targeting automotive, battery and aerospace/defense assembly.

The TS 7plus runs on modular sections using solid or hollow rollers roughly 50% larger than those in the predecessor TS 7 system. The larger rollers reduce moving parts per meter, which Bosch Rexroth says improves availability. Standard workpiece pallets go up to 2,200 × 3,000 mm, minimum transport height is 350 mm for both longitudinal and transverse conveying, and conveyor speed reaches 24 m/min—Bosch Rexroth says that’s significantly faster than AGVs. A redesigned bearing block with two mounting tabs speeds assembly and simplifies maintenance and replacement.

Drive is via lubrication-free king shafts with bevel gears, eliminating the re-tensioning and lubrication demands of chain drives. Motors come in 180 W and 250 W variants with a third-party interface, and can mount inside or outside the conveyor section. Internal mounting clears the working area of interfering contours, and the bevel-gear path keeps lubricants away from workpieces.

The system supports two operating modes: conventional accumulation with stop gates, and a segmented mode where each motor section runs only when required. Segmented operation cuts energy consumption over the full lifecycle and allows smaller motors to be specified, extending service life. Configuration is handled by MTpro planning software—available as a local install or as the browser-based MTpro Online Designer—which auto-generates CAD models and parts lists from the standard-component builds for export to the Rexroth Store or certified partners.

Source: Bosch Rexroth

Magna unveils DHD REX single-motor hybrid drive for range-extended EVs

31 March 2026 at 15:46

Magna, one of the world’s largest automotive suppliers, has introduced DHD REX, a single-motor dedicated hybrid drive for range-extended electric vehicles (REEVs). The ready-to-integrate system is built on a modular architecture designed for OEMs operating across markets with different regulatory requirements, infrastructure conditions and customer expectations.

DHD REX runs in three modes: pure electric driving, a generating mode in which the ICE charges the battery for range extension, and an optional parallel hybrid mode for highway performance. The single-motor design reduces cost and packaging complexity compared to dual-motor configurations. Magna says the system is validated across B through E vehicle segments in AWD layouts including SUVs, and integrates into both ICE-based platforms and BEV-derived architectures.

In a range-extended EV, the combustion engine runs as a generator in most conditions rather than driving the wheels—the electric motor handles propulsion. DHD REX’s optional parallel mode adds the ability for the ICE to contribute mechanical drive at highway speeds, where the efficiency penalty of the generator-motor conversion path is most pronounced.
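The three modes can be sketched as a toy mode-selection function. The thresholds and logic below are invented for illustration and are not Magna’s actual control strategy.

```python
# Simplified sketch of the three DHD REX operating modes described above
# (illustrative thresholds; not Magna's control strategy).

def select_mode(soc: float, speed_kph: float, parallel_available: bool) -> str:
    """Pick an operating mode from battery state of charge and speed."""
    if soc > 0.30:
        return "electric"   # battery alone powers the drive motor
    if parallel_available and speed_kph > 100:
        return "parallel"   # ICE adds mechanical drive at highway speed
    return "series"         # ICE runs as a generator to charge the battery

print(select_mode(0.8, 60, True))    # electric
print(select_mode(0.2, 120, True))   # parallel
print(select_mode(0.2, 50, True))    # series
```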

DHD REX complements Magna’s DHD Duo, a dual e-motor dedicated hybrid already in series production. The single-motor architecture targets OEMs that want range extension capability without the cost and packaging of a two-motor system, and the modular design adapts to both ICE-based platforms being electrified and native BEV architectures adding a range extender.

“DHD REX reflects our commitment to adaptable, customer-focused solutions that support a wide range of performance and market expectations,” said Diba Ilunga, President of Magna Powertrain.

Source: Magna

The certified BMS trap: why it might not actually protect your battery

31 March 2026 at 15:40

Off-the-shelf controllers with safety certifications are giving e-mobility engineers a false sense of security.

An off-the-shelf BMS with a third-party functional safety certification sounds like a solved problem. SIL-rated, ASIL-rated, ready to drop into your e-mobility battery pack. But according to Rich Byczek, Global Chief Engineer for Batteries at Intertek, that certification probably doesn’t cover what you think it covers.

“Certified BMS systems, meaning certified systems that have functional safety certifications from a third party, don’t necessarily address these functions,” Byczek told Charged during a recent webinar (now available to watch on demand). “They just look at the controller as a more generic electrical system.”

The problem: most certifications evaluate the controller hardware against a general integrity standard (IEC 61508, ISO 26262 or ISO 13849). They verify that the electronics are reliable. They don’t verify that the controller monitors individual cell voltages, manages cell-level temperature limits or handles the specific failure modes of lithium-ion chemistry.

Fuses don’t protect at the cell level

The gap is sharpest with passive protection. A pack-level fuse can interrupt a gross overcurrent event, but it’s blind to an individual cell in a series string being driven past its voltage limits. That requires active, per-cell monitoring, and a generic certified controller may not have the inputs and outputs to deliver it.

For e-mobility systems specifically, Byczek stressed that the failure modes and effects analysis (FMEA) must evaluate overvoltage, undervoltage, overcharge, overdischarge, over- and under-temperature, short circuit and excessive current, all at the cell level. “We look at those at the cell level, not only at the macro or battery pack level,” he said.
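A minimal sketch of what per-cell checking means in code, using illustrative limits for a generic lithium-ion cell (assumed values, not figures from any standard or from Intertek):

```python
# Minimal sketch of cell-level fault checks: each fault mode is
# evaluated per cell, not just at pack level. Limits are illustrative
# for a generic lithium-ion cell.

CELL_LIMITS = {
    "v_min": 2.5,    # V, undervoltage / overdischarge
    "v_max": 4.2,    # V, overvoltage / overcharge
    "t_min": -20.0,  # deg C, under-temperature
    "t_max": 60.0,   # deg C, over-temperature
}

def cell_faults(voltages: list[float], temps: list[float]) -> list[tuple[int, str]]:
    """Return (cell_index, fault) pairs; empty means all cells in limits."""
    faults = []
    for i, (v, t) in enumerate(zip(voltages, temps)):
        if v < CELL_LIMITS["v_min"]:
            faults.append((i, "undervoltage"))
        if v > CELL_LIMITS["v_max"]:
            faults.append((i, "overvoltage"))
        if t < CELL_LIMITS["t_min"]:
            faults.append((i, "under-temperature"))
        if t > CELL_LIMITS["t_max"]:
            faults.append((i, "over-temperature"))
    return faults

# A pack-level average of 3.7 V can hide one cell driven past its limit:
print(cell_faults([3.6, 4.3, 3.2], [25.0, 25.0, 25.0]))  # [(1, 'overvoltage')]
```

This is exactly the blind spot of a pack-level fuse: the string above carries no gross overcurrent, yet cell 1 is outside its voltage window.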

This is a different world from portable devices, where legacy standards like IEC 62133 rely on type tests and single-fault evaluations. Those standards were designed for products a user could set down and walk away from.

E-mobility doesn’t work that way. “You’re literally riding on top of that battery, potentially going at a fairly high speed,” said Byczek. “You can’t just get away from it.”

Start with the FMEA, not the certificate

The fix isn’t complicated, but it does require work. Start with an FMEA that covers every safety-critical function your BMS must perform, at the cell level. Then verify that your controller (certified or not) actually has the architecture to deliver each one. A certified controller is a starting point, not a finish line.

The standards themselves can be mixed and matched. SIL, ASIL and Performance Levels don’t map one-to-one, but regulators accept cross-framework approaches as long as your risk assessment demonstrably covers every identified hazard. For BMS systems, you’re typically targeting SIL 2, ASIL B or PLc, but the specific level matters less than proving your system can fail safely when a sensor drifts, a resistor opens or a communication link drops.

For teams pivoting from automotive EV programs into adjacent markets like forklifts, floor scrubbers and personal mobility devices, this is the adjustment that matters most. The batteries may be smaller, but the safety obligations are not.

Watch the full webinar: Rich Byczek’s complete presentation on applying functional safety to e-mobility battery systems is available on demand.

ENNOVI patents adhesive-free lamination for battery cell contacting systems

31 March 2026 at 15:34

ENNOVI has secured a German patent for its adhesive-free lamination technology for battery cell contacting systems (CCS). The laser-based process eliminates the adhesives used in conventional hot and cold lamination, and the company says the technology is already validated—meaning OEMs can adopt it without having to prove out the manufacturing process themselves.

CCS components connect and integrate individual cells within a battery module, typically combining busbars, voltage sense lines and the physical laminate layers that hold them together. Conventional CCS lamination bonds those layers using adhesives in hot or cold press processes. ENNOVI’s laser lamination achieves the same bond without adhesive material. The technology supports cylindrical, prismatic and soft pouch cell architectures. With this patent, ENNOVI now offers three lamination options (hot, cold and adhesive-free) for its CCS designs, giving battery engineers a process choice matched to their cell format.

The patent’s main commercial argument is risk reduction. Developing a new lamination process in-house takes time and carries qualification uncertainty; using a pre-validated, patented technology lets engineering teams skip that work. ENNOVI supports co-development and tailored engineering engagement, which it says allows OEM partners to maintain control over their product roadmaps.

The technology was developed at ENNOVI’s Advanced Solutions Engineering Center in Neckarsulm, which includes prototyping, testing and R&D capabilities. The facility holds ISO 9001:2015 and TISAX certifications—the latter covering automotive supply chain data security requirements.

“Automotive OEMs and battery manufacturers can design in the unique features of adhesive-free lamination, reduce engineering risk by using a technology that is already validated, rather than reinventing it,” said Randy Tan, Product Portfolio Director for Energy Systems at ENNOVI.

Source: ENNOVI

Surya Roshni Lights Up Indhana Bhawan with Advanced Façade Lighting

Surya Roshni Limited, one of India’s most trusted names in lighting, wires & cables, fans, home appliances and water pumps, with the widely recognised ‘Prakash Surya’ brand across water tanks, PVC pipes and steel pipes, has successfully executed the façade lighting for Indhana Bhawan, the headquarters of Karnataka Power Transmission Corporation Limited (KPTCL), reinforcing its […]

Turning Conversation into Action: Nomad Futurist Foundation at DCD>Connect | New York

1 April 2026 at 14:00

Originally posted on Nomad Futurist.

At DCD>Connect | New York, the Nomad Futurist Foundation didn’t just participate in the conversation about building the future workforce — we demonstrated what it looks like to actively create it.

Through two milestone moments, we brought together today’s leaders and tomorrow’s innovators, proving that meaningful change in the digital infrastructure industry happens when ideas are backed by action.

Mana Hui: Aligning Leaders Around a Shared Mission 

After Day 1 of the conference, we gathered some of the industry’s most forward-thinking voices at the rooftop of The Knickerbocker Hotel for our Mana Hui: Leaders Connect Networking Event.

More than a networking reception, Mana Hui created a dedicated space for leaders to come together around a shared purpose: how we can collectively inspire, educate, and open doors for the next generation of digital infrastructure talent.

Conversations focused on tangible solutions, from increasing visibility into career pathways, to strengthening mentorship opportunities, to ensuring students and early-career professionals understand the real-world impact of this industry. The room was filled with decision-makers, innovators, and advocates aligned around one idea: preparing the future workforce is not a side initiative; it is a responsibility.

Mana Hui set the tone by reinforcing the power of collaboration. When leaders unite with intention, momentum builds, and that momentum must translate into action.

Powering the Next Generation: From Conversation to Impact 

On Day 2, that momentum became measurable impact through our Powering the Next Generation Student Workshop.

Students and emerging professionals joined us for an experience designed not just to inform, but to connect. Industry leaders shared authentic stories about their career journeys, including challenges, pivots, and lessons learned, providing students with transparent insight into opportunities across the digital infrastructure landscape.

Rather than a traditional panel format, the workshop fostered dynamic dialogue. Students actively engaged, asked thoughtful questions, and contributed their own perspectives, creating an environment rooted in collaboration and curiosity.

A defining highlight came when a group of students from New York University presented a live demonstration of one of their own projects, offering a powerful reminder that the next generation is not waiting for opportunity. They are already building the future.

We were proud to welcome students representing an exceptional range of institutions, including Harvard Law School, Columbia University, Cornell University, Dartmouth College, University of Notre Dame, Stevens Institute of Technology, and more. Many of these students are preparing to enter the workforce within months and are eager to contribute meaningfully to the industry.

Following the workshop, members of the Nomad leadership team continued the experience with a visit to the iconic 60 Hudson Street building for a tour of the NYI and Hudson Interxchange facilities, led by Ambassador Arthur Valhuerdi. For even some of our own members, it was their first time inside a live data center environment, making it a meaningful extension of the day’s learning and a powerful reminder of the infrastructure behind the digital world.

To continue reading, please click here.

The post Turning Conversation into Action: Nomad Futurist Foundation at DCD>Connect | New York appeared first on Data Center POST.

CDM Acquires Majority Stake in R&D Specialties to Power AI Modular Data Center Surge

1 April 2026 at 13:00

In a strategic move underscoring the shift toward modular infrastructure, Compu Dynamics Modular (CDM), a Chantilly, Virginia-based specialist in prefabricated data center solutions, has acquired a majority stake in R&D Specialties, an Odessa, Texas, manufacturer of UL-certified control panels and modular electrical systems. Announced today, the deal expands CDM’s manufacturing footprint to 120,000 square feet, with room for growth on a 15-acre campus, positioning the company to meet skyrocketing demand for AI-ready, high-density deployments from hyperscalers, colocation providers, and enterprises.

This acquisition arrives at a pivotal moment. AI and high-performance computing (HPC) workloads demand unprecedented speed, density, and scalability – challenges traditional builds struggle to match. Modular solutions, once niche, are now the default for rapid, repeatable deployments.

“Modular infrastructure is where efficiency meets innovation,” said Ron Mann, vice president of CDM. “For decades, we’ve delivered solutions that solve real engineering challenges in high-stakes environments. Joining forces with R&D Specialties allows us to bring that expertise to the next generation of AI data centers at scale.”

Steve Altizer, president and CEO of Compu Dynamics, emphasized the market imperative: “This investment is about building the capabilities and capacity the market is demanding right now. AI infrastructure requires a different approach; one that delivers faster, scales smarter, and performs better. R&D Specialties brings the engineering depth and manufacturing precision that align perfectly with where this industry is headed.”

R&D Specialties, founded in 1983, excels in custom-engineered systems for mission-critical settings, complementing CDM’s vendor-neutral, end-to-end services – from design and liquid-cooled IT platforms to commissioning and maintenance. Brad Howell, president of R&D Specialties, noted the synergy: “Through joining forces with CDM, our growth opportunities for the combined teams have expanded even further. Being part of the AI infrastructure revolution and building what’s next is exciting.”

For data center operators, this signals broader ecosystem maturation. CDM’s turnkey modules accelerate time to market while integrating high-density power, low-latency networking, and sustainability features. With an extensive North American partner network, the combined entity can deploy campus-scale solutions anywhere, anytime – critical as AI power needs strain grids and supply chains.

This deal exemplifies how strategic M&A is fueling modular dominance, helping the industry navigate AI’s compute explosion with agility and reliability. Learn more at cd-modular.com.

The post CDM Acquires Majority Stake in R&D Specialties to Power AI Modular Data Center Surge appeared first on Data Center POST.

Data Governance and Clinical Innovation

31 March 2026 at 13:00

Artificial intelligence is a tool designed to power innovation, but it’s important to understand its primary fuel: data. Data is required not only to produce the outputs of AI algorithms but also to train and operate them. In sectors where innovation is driven by technologies like artificial intelligence, data has effectively become the fuel of innovation, and ensuring its safety and quality is essential to keeping that innovation going.

Understandably, many critics have expressed concern over the use of artificial intelligence in healthcare settings, considering the private, sensitive nature of the data used in the field. Patient personal information is not only highly sensitive but also protected by law, meaning there are strict regulations and guidelines dictating how entities in healthcare can use artificial intelligence with regard to patient data.

Why strong data governance is essential for AI in healthcare

However, that doesn’t mean artificial intelligence shouldn’t be used in healthcare whatsoever. Instead, it means there is a need for strong data governance, as this is an essential step in enabling safe and ethical AI use in any industry, particularly ones such as healthcare where the stakes are high. In addition to ensuring compliance with any applicable regulations, strong data governance helps create greater transparency and trust that inspires patient confidence.

It’s important to remember the reason why the healthcare sector wants to deploy artificial intelligence technology in the first place: AI can accelerate innovation and lead to improved patient outcomes. For example, innovators in the healthcare industry have used AI to accelerate drug discovery, conduct more accurate diagnostics, and streamline operations in a way that significantly improves efficiency. But to achieve these outcomes, systems must have access to accurate, well-managed data.

The key to this is creating compliance frameworks that reduce and mitigate the risks of artificial intelligence while still supporting scalable healthcare solutions. Of course, the core of any compliance framework in healthcare is data security and privacy, but these guidelines can also help control other risks, such as algorithmic bias and “black box” risks, ensuring that all decisions and recommendations made by an artificial intelligence are fair and explainable.

Enabling the responsible deployment of AI in healthcare

Ultimately, data governance isn’t about gatekeeping but about collaboration and enabling the responsible and ethical deployment of artificial intelligence. The mindset with which we approach AI shouldn’t be about limiting how we can use the technology, but instead how we can facilitate its use in a way that does not compromise data integrity or patient privacy.

Right now, the key goal of healthcare practitioners who hope to implement artificial intelligence should be to build trust and reliability in these systems. The steps required to achieve this include ensuring data quality and diversity, maintaining transparent communication, and continuous monitoring and validation.

The best way to look at AI systems in healthcare is as an analog to human employees. In healthcare, not even human employees have unfettered access to patient data. There are access controls based on the level of access an individual needs, with checks and balances and supervisory control.

The same philosophy should apply to autonomous systems. Just as approvals and access controls are required of human employees, so too should AI systems require approvals from human overseers.

Indeed, there is a world in which artificial intelligence can revolutionize the healthcare industry for the better, alleviating some of the burden on healthcare workers and contributing to improved patient outcomes. However, for this to happen, the adoption of AI must be done in a way that is responsible and ethical. With this mindset, prioritizing strong data governance, AI can become a reliable partner in patient care.

# # #

About the Author

Chris Hutchins is the founder and CEO of Hutchins Data Strategy Consulting. Healthcare institutions benefit from his expertise in developing scalable, ethical data and artificial intelligence strategies that maximize their data potential. His areas of expertise include enterprise data governance, responsible AI adoption, and self-service analytics, helping organizations achieve substantial results through technology implementation. Through team empowerment, Chris assists healthcare leaders in enhancing care delivery while reducing administrative work and transforming data into meaningful outcomes.

The post Data Governance and Clinical Innovation appeared first on Data Center POST.

Community Resistance Is Often Overwhelm – Not Opposition

30 March 2026 at 13:00

In my last article, I wrote about the need for calm, evidence-based leadership in an increasingly polarized infrastructure environment. One of the realities that continues to surface in communities across the country is that what we often interpret as resistance to development is something more nuanced. In many cases, communities are not pushing back out of ideology, they are responding to complexity, uncertainty, and the absence of trusted frameworks to guide long-term decisions.

Across the United States, digital infrastructure projects, namely data center developments, are encountering growing community resistance.

Too often, this pushback is quickly labeled as anti-growth sentiment, environmental activism, or resistance to technology. But in many cases, that interpretation misses the deeper reality.

What is often labeled as opposition is actually overwhelm.

Communities are being asked to make decisions about infrastructure that will shape their economic future for decades, without the tools, context, or trusted guidance to evaluate those decisions confidently.

Digital infrastructure, particularly large-scale or hyperscale data centers and supporting connectivity systems, represents a new class of development. These projects intersect simultaneously with power infrastructure, water resources, land use planning, tax policy, and even national competitiveness. That level of complexity is unprecedented for many local decision-makers.

As a former two-term elected official in Westchester County, New York, I know for a fact that most elected officials did not run for office to evaluate hyperscale infrastructure proposals. They ran to address zoning disputes, improve roads, manage school budgets, and respond to everyday civic concerns. When faced with proposals involving megawatt-scale energy demand, unfamiliar technical terminology, global technology narratives, and uncertain long-term impacts, decision paralysis is a natural outcome.

In that environment, saying “no” can feel like the safest and most responsible choice. And for me, this is the crux of the matter. If elected officials don’t know what they are saying no to, it could have dire consequences on the future of their communities – and country.

Further fueling this sentiment are the political dynamics across our country. Local leaders operate within short election cycles and under highly visible public scrutiny. Approving a controversial project can feel like a personal political gamble, particularly when the information landscape is polarized and the benefits are difficult to quantify in the near term. And, let’s be honest, you have to live with your neighbors and their emotional reactions to things they, too, don’t understand.

Trust gaps also play a role. Communities observe large incentive packages (community benefit plans), opaque project branding (project names rather than company brands), and rapid land acquisitions that may span hundreds of acres or more. This can create perceptions of imbalance: imbalance of information, imbalance of power, and imbalance of benefit. Even when development intentions are positive, the process can feel accelerated and asymmetric from the community’s perspective.

There is also a fear of irreversibility. Digital infrastructure is often perceived as permanent, transformative, and difficult to unwind once built. And fears from past industrial builds like aluminum smelters and energy production sites have not laid an easy path for large-scale developments in our country’s future. That perception alone can drive precautionary decisions, calls for moratoria, and emotional public hearings.

From the industry side, resistance is sometimes misread as anti-technology bias or organized opposition. But frequently the underlying issue is not ideology, it is cognitive and institutional readiness. Communities are not rejecting opportunity; they are struggling to evaluate it.

This is where structured engagement models become essential.

At my company, iMiller Public Relations, we approach these efforts through a model I call The Groundswell™ approach. The Groundswell approach reframes community engagement from persuasion to empowerment. It begins with understanding local decision dynamics: who influences outcomes, what matters most to residents, and how technical issues translate into civic implications. It emphasizes early education before formal approvals, surfaces community benefit opportunities, and builds coalition narratives that reduce fear rather than inflame it.

Informed communities make more confident decisions. They are better positioned to align development with their long-term economic vision rather than reacting project by project.

When overwhelm occurs simultaneously across multiple regions, the implications extend beyond any single development. Infrastructure deployment becomes fragmented. Investor confidence can weaken. Regional competitiveness begins to diverge. National digital readiness ultimately suffers.

Community overwhelm, therefore, is not just a local planning challenge; it is a strategic issue.

Resistance is often the first signal that institutions need new tools, governance frameworks require modernization, and engagement models must evolve. Calm, structured dialogue is not simply good community relations. It is foundational to building the next generation of digital infrastructure in a way that is both sustainable and broadly supported.

The work I am leading at the OIX Association and its Digital Infrastructure Framework Committee (DIFC) aims to create practical guidance that helps communities evaluate digital infrastructure within their broader economic vision, not project by project, crisis by crisis.

Understanding this distinction may be one of the most important steps we can take right now.

To learn more about what we are doing at iMiller Public Relations to bridge the gap between industry and community for the digital infrastructure sector, go to www.imillerpr.com.

For information about the OIX DIFC, visit www.oix.org/standards-and-certifications/oix-dif-standard.

The post Community Resistance Is Often Overwhelm – Not Opposition appeared first on Data Center POST.

Purpose-Built for AI: The Shift Toward Modular Data Center Infrastructure

26 March 2026 at 14:00

Originally posted on Compu Dynamics.

Discover how AI is transforming mission-critical infrastructure: from modular data center design and liquid cooling to extreme power density and purpose-built AI facilities, Steve Altizer, President and CEO of Compu Dynamics, covers these topics in this recent conversation.

At PTC 2026 in Hawaii, Isabel Paradis of HOT TELECOM sat down with Altizer to discuss how AI is reshaping the way modular data centers are designed now and in the future.

AI Is Rewriting the Rules of Data Center Design

AI is transforming data centers. While many are still trying to shoehorn AI workloads into traditional designs, that approach is only going to last a few more years. Hyperscalers are leading the way into an AI‑centric future, where liquid cooling – once a specialty – is now becoming standard across the industry.

Retrofitting conventional colo or cloud facilities for AI is not ideal. It’s not as cost effective as doing something that’s purpose built, yet building AI‑only facilities also carries risk, because repurposing that heavy investment later is difficult. The industry is therefore moving toward modular infrastructure, which allows for hybrid, purpose‑built AI facilities that remain flexible enough to serve a range of customers.

To continue reading, please click here.

The post Purpose-Built for AI: The Shift Toward Modular Data Center Infrastructure appeared first on Data Center POST.

AI Workloads and the Implications for High-Density Data Centre Design

23 March 2026 at 14:00

AI workloads are pushing data centre infrastructure towards higher rack densities, new cooling strategies and greater power demand. Jamie Darragh, Data Centre Director, Europe, at global data centre engineering design consultancy Black & White Engineering, examines the design implications for the next generation of facilities.

AI and high-performance computing are placing new demands on data centre infrastructure. Rack densities are increasing; facilities are being delivered at larger scale and operators are under pressure to support workloads that consume far greater levels of power and generate far higher heat loads than conventional cloud environments.

Independent forecasts underline the pace of expansion. Gartner estimates global data centre electricity consumption will rise from around 448 TWh in 2025 to roughly 980 TWh by 2030, driven largely by AI-optimised computing infrastructure. Within that growth, AI servers alone are expected to account for close to 44% of data centre power consumption by the end of the decade.
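As a rough sanity check, those two endpoints imply a compound annual growth rate of roughly 17% per year (a back-of-envelope calculation, assuming the 2025 and 2030 figures are exact endpoints):

```python
# Implied CAGR from the forecast endpoints
# (assumes 448 TWh in 2025 and 980 TWh in 2030 as exact values)
start_twh, end_twh, years = 448.0, 980.0, 5

cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 17% per year
```

That growth rate, sustained over five years, more than doubles total consumption, which is what makes power availability the planning constraint discussed later in the piece.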

For our engineering teams, these workloads are altering the practical limits of traditional infrastructure design. Rack densities exceeding 100–200kW are now appearing in project specifications, particularly where large AI training clusters are planned. These loads influence every part of the building environment, from electrical distribution and cooling capacity to structural loading and cable management.

Designing for extreme density

Under these conditions, air cooling alone becomes difficult to sustain across entire facilities. Liquid cooling is therefore increasingly included in the baseline design of new data centres rather than introduced later as a specialist solution. This cooling method is becoming increasingly favourable due to its higher specific thermal capacity compared with air, which enables more efficient heat transfer and removal. Direct-to-chip and rack-level systems are being designed alongside air cooling so facilities can accommodate different densities and equipment types across the same site.
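The thermal-capacity argument above can be made concrete with textbook property values for air and water at roughly room temperature (the exact coolant in a direct-to-chip loop differs, but water dominates most formulations):

```python
# Why liquid cooling scales better than air: compare the heat a given
# volumetric flow can carry away per kelvin of temperature rise.
# Property values are standard room-temperature approximations.
rho_air, cp_air = 1.2, 1005        # density kg/m^3, specific heat J/(kg*K)
rho_water, cp_water = 997, 4186    # density kg/m^3, specific heat J/(kg*K)

vol_heat_air = rho_air * cp_air        # ~1.2 kJ per m^3 per K
vol_heat_water = rho_water * cp_water  # ~4.2 MJ per m^3 per K

ratio = vol_heat_water / vol_heat_air
print(f"Water carries ~{ratio:,.0f}x more heat per unit flow per kelvin")
```

A ratio in the thousands is why a modest liquid loop can absorb a 100 kW+ rack that would require impractical volumes of airflow to cool directly.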

The introduction of liquid systems requires careful coordination between disciplines. Facilities must manage environments where air and liquid cooling operate together, supported by monitoring platforms, safety controls and operational procedures capable of supporting both approaches.

Some IT chips require different liquid cooling temperatures than those used in air-cooling systems, creating technical hurdles for the overall heat rejection system and requiring precise control of the cooling circuit temperature. Another engineering challenge lies in integrating these systems with power distribution, control platforms and maintenance strategies rather than selecting one cooling method over another.

Higher density also narrows operational tolerance. Commissioning becomes more demanding and redundancy strategies require more detailed modelling. Infrastructure must be capable of supporting peak compute demand while maintaining efficiency when loads are lower, placing greater emphasis on flexible electrical and mechanical systems.

The scale of development is also increasing. Buildings that once delivered a few megawatts of capacity are now part of campus-scale developments where multiple data halls contribute to facilities delivering hundreds of megawatts. Data centres are increasingly planned and delivered as long-term infrastructure assets rather than individual projects.

This environment encourages repeatable design and industrialised delivery methods. Developers and investors expect predictable construction schedules and consistent performance across multiple sites. As a result, engineering teams are placing greater emphasis on modular infrastructure systems and digital design methods that allow mechanical and electrical systems to be configured and deployed repeatedly.

Power, control and operational intelligence

Power availability is also becoming a determining factor in project planning. In many regions, grid connection capacity is now one of the main constraints on new development. Gartner has warned that by 2027 as many as 40% of AI data centres could face operational limits because of power availability.

Developers are therefore engaging more closely with utilities during early feasibility stages and exploring complementary infrastructure such as on-site generation and energy storage. In some cases, data centres are also being designed to contribute to wider grid stability through demand response and energy management capability.

Artificial intelligence is also beginning to influence how facilities themselves are operated. Machine-learning systems are already being used in some environments to optimise airflow patterns, cooling plant performance and power distribution using live operational data.

The next stage will see more widespread use of integrated control platforms and digital twins capable of modelling facility behaviour in real time. These systems allow operators to simulate infrastructure performance under different load conditions, test operational changes and identify maintenance requirements before faults occur.

Environmental performance remains another constraint as compute density increases. Higher workloads place additional pressure on energy supply while raising questions around water consumption, construction materials and waste heat recovery. Planning authorities and investors are increasingly looking for measurable improvements in efficiency and carbon reporting before approving new developments. Sustainability therefore sits alongside power and cooling as a central engineering consideration rather than a secondary design feature.

Taken together, these conditions create a more complex design environment for data centre infrastructure. Higher compute densities, power constraints and new operational technologies require mechanical, electrical and digital systems to be considered together from the earliest design stages.

Facilities intended to support AI workloads must accommodate far greater performance requirements than earlier generations of data centres while remaining adaptable as infrastructure technologies and operating practices continue to develop.

# # #

About the Author

Jamie Darragh is Data Centre Director, Europe at Black & White Engineering. He leads the delivery of complex, mission-critical projects across the region, with a focus on technical quality, design coordination and strong client relationships. A Chartered Engineer and member of CIBSE and the IET, Jamie has worked across Europe, the Middle East and the UK since 2005. He brings a clear, practical approach to engineering challenges, combining technical expertise with commercial awareness. He is committed to developing teams that work collaboratively and perform at a high level. Jamie has received several industry awards recognising both his technical capability and his impact on the built environment, including ‘Engineer of the Year’ at leading Middle East industry awards.

The post AI Workloads and the Implications for High-Density Data Centre Design appeared first on Data Center POST.

Calm Leadership in a Polarized Infrastructure Debate

23 March 2026 at 13:00

Over the coming weeks, I will be sharing a series of reflections on the realities shaping digital infrastructure development in the United States. These perspectives come from ongoing conversations with communities, policymakers, developers, investors, and industry leaders navigating one of the most consequential infrastructure build cycles in modern history. As artificial intelligence accelerates demand for computing capacity, the decisions being made today, often at the local level, will influence economic competitiveness, regional growth, and public trust for decades to come. This series is intended to create space for more calm, evidence-based dialogue about how we plan, communicate, and lead through this moment of rapid transformation.

We are living through one of the most consequential infrastructure build cycles in modern history, not dissimilar to the first industrial revolution, and yet many of the decisions shaping our digital future are being made in environments defined by urgency, fear, and ideological polarization.

Digital infrastructure, from AI-ready data centers (AI Factories) to edge computing nodes in your local strip mall, is now central to economic competitiveness, national security, innovation, and quality of life. And still, conversations about development often become binary: pro-growth or anti-growth, pro-environment or pro-industry, local control or national interest.

Reality is far more complex. We are living out a paradoxical dilemma in real-time.

What we are seeing across the United States is not simply opposition to projects. It is a collision of competing priorities: environmental stewardship versus economic opportunity, investor timelines versus civic process, national competitiveness versus local autonomy. These tensions are real. They deserve thoughtful navigation, not reactive decision-making. And when the decisions are polarizing, the complexities are at their greatest.

One of the structural challenges is governance itself. As a former two-term elected official in Westchester County, New York, I can say it is clear as day that Federal policy direction does not automatically translate into local action. As I often say: “Federal mandates don’t mean much when governors and local jurisdictions can simply say no.”

This is not a criticism; it is a recognition of how our democratically designed system works. Infrastructure decisions are ultimately shaped at the state, county, and municipal levels. And many of the leaders tasked with evaluating these developments are doing so without the benefit of neutral frameworks, long-term planning guidance, or consistent industry education.

At the same time, the public narrative around digital infrastructure has become increasingly emotional. Headlines focus on water usage, energy demand, or tax incentives, often without equal discussion of the broader economic and societal value these projects create.

Because a data center is not just a building. It is a catalyst.

Data centers are not just buildings. They are economic drivers across a wide variety of professional services, hospitality, supply chains, and innovation.

Economic activity begins long before construction starts and extends far beyond permanent on-site employment. Yet many impact assessments still rely on narrow metrics that fail to capture this ecosystem effect.

When you look at impact studies narrowly, such as counting only permanent jobs, you miss the enormous economic ecosystem that infrastructure development activates.

This disconnect contributes to mistrust and polarization. Communities feel pressured. Investors feel blocked. Policymakers feel caught in the middle.

What is needed now is calm, evidence-based leadership.

Leadership that can hold multiple truths at once:

  • Infrastructure development must be sustainable.
  • Communities deserve transparency and engagement.
  • Economic competitiveness cannot be taken for granted.

Long-term planning must transcend election cycles.

The work I am leading at the OIX Association and its Digital Infrastructure Framework Committee (DIFC) aims to create practical guidance that helps communities evaluate digital infrastructure within their broader economic vision, not project by project, crisis by crisis.

The goal is not to advocate for development at any cost.

The goal is to enable informed decision-making.

Because when stakeholders are equipped with context, data, and structured engagement models, conversations shift. Fear gives way to dialogue. Polarization gives way to planning. Urgency gives way to intentional action.

In a moment defined by technological acceleration, community leadership may simply need to meet ambition with reality. This will ensure that we, as a society, can move forward, together, with clarity.

To learn more about what we are doing at iMiller Public Relations to bridge the gap between industry and community for the digital infrastructure sector, go to www.imillerpr.com.

For information about the OIX DIFC, visit www.oix.org/standards-and-certifications/oix-dif-standard.

The post Calm Leadership in a Polarized Infrastructure Debate appeared first on Data Center POST.

Duos Edge AI Deploys Second Edge Data Center in Amarillo, Texas Market

20 March 2026 at 13:00

Duos Technologies Group Inc. (Nasdaq: DUOT), through Duos Edge AI, Inc., has announced the deployment of its second Edge Data Center in the Amarillo, Texas market. The new carrier-neutral, SOC 2-compliant facility is located on Potter County land adjacent to the largest colocation facility in the Texas Panhandle, further strengthening digital infrastructure for carriers, healthcare organizations, enterprises, and public sector entities across the region.​

Building on the success of its initial Amarillo deployment, this latest installation expands Duos Edge AI’s footprint in the Panhandle and adds high-density, low-latency computing capabilities for real-time AI applications, enhanced bandwidth, and secure data processing.

“We are proud to deepen our commitment to the Amarillo market with this second deployment, building on the foundation established by our initial EDC, which brought high-performance computing directly to the heart of the Panhandle,” said Dave Irek, Chief Operations Officer of Duos Edge AI. “This expansion enhances capacity and capability in the region, and by partnering on Potter County land adjacent to a premier colocation hub, we are creating a robust, carrier-neutral ecosystem designed to support innovation, attract investment, and drive long-term economic growth.”​

The company said the deployment also helps reduce dependence on data centers located in tier one cities while supporting underserved and high-growth markets across Texas. Duos Edge AI’s broader Texas expansion includes recent installations in Lubbock, Waco, Victoria, Abilene, and Corpus Christi.​

Potter County Judge Nancy Tanner added, “This collaboration with Duos Edge AI represents a significant investment in our community’s future. Positioning this advanced, carrier-neutral data center on county land next to the Panhandle’s largest colocation facility will attract new businesses, improve connectivity for our residents and schools, and position Potter County as a leader in digital infrastructure.”​

The new EDC is expected to be fully operational in the coming months.

To learn more about Duos Edge AI, visit www.duosedge.ai.

The post Duos Edge AI Deploys Second Edge Data Center in Amarillo, Texas Market appeared first on Data Center POST.

Data Center HVAC Market to Surpass USD 36 Billion by 2035

19 March 2026 at 13:00

The global data center HVAC market was valued at USD 13.7 billion in 2025 and is estimated to grow at a CAGR of 9.8% to reach USD 36 billion by 2035, according to a recent report by Global Market Insights Inc.
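A quick compounding check shows the headline figures are roughly self-consistent (assuming the 2025 base compounds at the stated CAGR over the full 2025–2035 decade; the small shortfall against the USD 36 billion headline likely reflects rounding of the growth rate):

```python
# Self-consistency check of the report's headline figures
# (assumes a USD 13.7B base in 2025 compounding at 9.8% for 10 years)
base_usd_bn, cagr, years = 13.7, 0.098, 10

projected = base_usd_bn * (1 + cagr) ** years
print(f"Projected 2035 market: USD {projected:.1f}B")  # close to the USD 36B headline
```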

Growth in the global data center HVAC industry is being fueled by rising computing intensity, expanding AI-driven workloads, and the continued development of hyperscale and enterprise facilities. As server densities increase and high-performance computing environments generate greater thermal loads, advanced cooling infrastructure has become essential to maintain operational stability and uptime. Research and development efforts across the HVAC industry are increasingly focused on liquid cooling technologies and next-generation thermal management systems capable of handling elevated power densities.

At the same time, stricter regulatory oversight related to energy consumption and environmental performance is encouraging operators to enhance system efficiency and reduce carbon output. ESG-focused initiatives and net-zero commitments are prompting facility upgrades aimed at optimizing Power Usage Effectiveness and lowering operating expenses. Improvements in airflow engineering, adoption of sustainable refrigerants, and integration of energy-efficient cooling architectures are reshaping infrastructure strategies. As regulatory expectations and energy costs continue to rise, demand for intelligent, high-efficiency HVAC solutions in data centers is expected to accelerate significantly.

Rising load capacities, sustainability targets, and regulatory compliance requirements are creating pressure for compact, scalable, and adaptable HVAC systems. Industry participants are responding by designing modular cooling platforms that can operate effectively across diverse geographies while maximizing space utilization and energy performance.

The solutions segment of the data center HVAC market accounted for a 76% share in 2025 and is forecast to grow at a CAGR of 8.9% from 2026 to 2035. Advanced monitoring tools equipped with artificial intelligence enable predictive maintenance, improve airflow management, and reduce unnecessary power consumption. Increased adoption of liquid-based cooling technologies is supporting high-density server environments while enhancing reliability and extending equipment lifespan through energy-conscious design.

The air-based cooling technologies segment held a 50% share in 2025 and is projected to grow at a CAGR of 8.8% during 2026-2035. Enhanced airflow optimization systems, variable-speed fan configurations, and intelligent environmental controls are improving thermal consistency and minimizing energy waste. Economizer-enabled designs are facilitating greater use of ambient air, while modular cooling units support scalability across both hyperscale and edge environments. Growing server power density is also accelerating interest in direct cooling and immersion-based methods supported by advanced coolant formulations that enhance heat transfer efficiency.

The United States data center HVAC market reached USD 4.7 billion in 2025. Increasing cloud integration and AI-intensive applications are driving demand for more efficient cooling architectures. Investments are being supported by electrification incentives and decarbonization initiatives, encouraging broader adoption of intelligent HVAC controls and energy-optimized systems. Integration with smart building platforms and grid-responsive technologies is enabling facilities to manage peak loads, reduce demand charges, and incorporate renewable energy sources.

Key companies operating in the global data center HVAC market include Vertiv, Schneider Electric, Carrier Global, Daikin Industries, Trane Technologies, Johnson Controls, STULZ, Alfa Laval, Danfoss, and Modine Manufacturing. Companies in the global market are strengthening their competitive position through continuous innovation, strategic partnerships, and geographic expansion. Leading players are investing heavily in research and development to enhance liquid cooling efficiency, improve airflow intelligence, and integrate AI-driven monitoring systems. Collaborations with cloud service providers and data center developers are enabling customized cooling deployments for high-density environments. Firms are also expanding manufacturing capacity and regional service networks to support rapid infrastructure growth. Sustainability-focused product development, including low-global-warming-potential refrigerants and energy-efficient system architectures, is becoming a central competitive differentiator.

The post Data Center HVAC Market to Surpass USD 36 Billion by 2035 appeared first on Data Center POST.

Foresight Raises $25M to Tackle Infrastructure Execution Risks in the AI Era

18 March 2026 at 17:00

As global investment in AI infrastructure, power, and advanced manufacturing accelerates, a critical constraint is coming into sharper focus—project execution.

A newly announced $25 million Series A funding round for Foresight underscores a broader industry shift: while capital continues to flow into large-scale infrastructure, delivering these projects on time and on budget remains a persistent challenge.

The current wave of infrastructure investment is unprecedented in both scale and complexity. Hyperscale data centers, energy systems, and advanced industrial facilities are being developed simultaneously across global markets, often with overlapping supply chains and tight delivery timelines.

However, execution has emerged as a systemic issue.

Research indicates that nearly 90% of large-scale infrastructure projects are completed late or exceed budget expectations. In the context of AI infrastructure, delays can have cascading effects—impacting capacity availability, increasing financing costs, and delaying revenue generation.

Industry observers note that as demand for compute continues to surge, particularly for AI workloads, the margin for error in delivery timelines is shrinking.

A Shift Toward Predictive Delivery Models

Foresight, which positions itself as a predictive project delivery platform, is part of a growing cohort of technology providers aiming to address these execution challenges through data and automation.

The company’s platform is designed to move beyond traditional project management approaches—often reliant on static schedules and retrospective reporting—by introducing continuous validation of project progress and early identification of risk factors.

According to the company, its system enables infrastructure owners to establish baseline schedules more quickly, integrate data across stakeholders, and forecast potential delays before they materialize. Early adopters report improvements in forecast accuracy and reductions in cost overruns.

While such claims reflect a broader trend toward digitization in construction and infrastructure delivery, they also point to a deeper industry need: greater predictability in increasingly complex builds.

Why Execution Matters More in the AI Era

For data center developers and operators, execution risk is becoming more consequential.

Unlike previous infrastructure cycles, AI-driven demand is both immediate and rapidly evolving. Delays in bringing capacity online can result in missed opportunities, strained customer relationships, and competitive disadvantages in key markets.

At the same time, projects are becoming more interdependent. Power availability, equipment procurement, and site development must align precisely—leaving little room for disruption.

This dynamic is prompting a reassessment of how infrastructure projects are planned and managed, with greater emphasis on real-time data, cross-functional visibility, and proactive intervention.

Expanding Beyond Data Centers

Although the initial focus is on sectors such as hyperscale data centers, the challenges associated with project execution are not unique to digital infrastructure.

Foresight plans to expand its platform into adjacent industries, including energy, defense, and advanced manufacturing—areas that share similar characteristics: large capital commitments, complex supply chains, and high sensitivity to delays.

The company’s recent funding, led by Macquarie Capital Venture Capital, reflects investor interest in solutions that address these systemic inefficiencies.

An Industry Inflection Point

The emergence of predictive project delivery tools signals a broader transformation in how infrastructure is built.

For years, innovation in the data center sector has centered on compute performance, cooling technologies, and energy efficiency. Increasingly, attention is shifting toward the process of delivery itself.

As infrastructure programs continue to scale, the ability to execute with precision may become a defining factor in project success.

In an environment where demand is high and timelines are compressed, the question facing the industry is evolving—from whether projects can be financed to whether they can be delivered as planned.

The post Foresight Raises $25M to Tackle Infrastructure Execution Risks in the AI Era appeared first on Data Center POST.

Duos Edge AI and Seimitsu Strengthen Georgia’s Digital Infrastructure

18 March 2026 at 15:00

Duos Technologies Group Inc. (Nasdaq: DUOT), through Duos Edge AI, Inc., has formed a strategic partnership with Seimitsu to revolutionize digital infrastructure across Georgia. By combining Duos Edge AI’s modular, high-performance solutions with Seimitsu’s expansive high-speed fiber network, the collaboration delivers low-latency processing and high-bandwidth connectivity for businesses, municipalities, and healthcare providers statewide.

“Our mission is to bring the power of the cloud to the street corner. Partnering with Seimitsu allows us to integrate our Edge AI nodes into a robust, reliable fiber backbone, ensuring that Georgia’s industries – from the port of Savannah to Atlanta’s technology corridors – have the infrastructure they need to compete globally,” said Dave Irek, Chief Operations Officer of Duos Edge AI.

As demand for real-time data processing grows, driven by AI, IoT, and autonomous systems, infrastructure closer to end users has become critical. This partnership positions Georgia at the forefront of the Edge revolution with ultra-low latency processing, Seimitsu’s 25 terabits of low-latency fiber capacity across the Southeast, and rapid deployment of Duos Edge AI nodes in underserved and high-demand areas.

Sam Cook, CEO of Seimitsu, added, “For more than 40 years, Seimitsu has been committed to connecting our communities. This partnership with Duos Edge AI represents the next step in that journey. By integrating edge computing directly into our network, we are moving beyond simple transit services and delivering true digital transformation for our clients.”

The partnership supports Duos Edge AI’s nationwide expansion of distributed AI infrastructure through strategic fiber, power, and site partnerships.

To learn more about Duos Edge AI, visit www.duosedge.ai.

The post Duos Edge AI and Seimitsu Strengthen Georgia’s Digital Infrastructure appeared first on Data Center POST.
